Traceoid: A path toward AGI

Traceoid has identified a viable path toward AGI. We are developing an algorithm and a programming language to make AGI possible.

Modern machine learning relies on two pillars: GPUs and automatic differentiation. Conventional wisdom holds that scaling GPUs alone will lead to AGI.

Traceoid takes a contrarian stance: automatic differentiation has become a bottleneck. We are developing a far-reaching generalization of it.


Automatic differentiation

Automatic differentiation, or autodiff, is the algorithmic foundation of modern machine learning. Autodiff became the de facto approach to model training around 2015. It is not a coincidence that the ongoing machine learning revolution appears to have begun around this time.

Despite the astounding progress we have witnessed in machine learning over the last decade, autodiff has stagnated: it has remained conceptually unchanged since its invention in 1970.
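
To make the term concrete, here is a minimal, purely illustrative sketch of forward-mode autodiff via dual numbers, the textbook formulation of that era; modern frameworks build on the same idea (usually in reverse mode), and this is not Traceoid code.

    # Illustrative only: forward-mode autodiff via dual numbers.
    # Each value carries its derivative; arithmetic propagates both exactly.
    from dataclasses import dataclass
    import math

    @dataclass
    class Dual:
        val: float  # value of the expression
        der: float  # derivative w.r.t. the chosen input

        def __add__(self, other):
            return Dual(self.val + other.val, self.der + other.der)

        def __mul__(self, other):
            # product rule: (uv)' = u'v + uv'
            return Dual(self.val * other.val,
                        self.der * other.val + self.val * other.der)

    def sin(x: Dual) -> Dual:
        # chain rule: (sin u)' = cos(u) * u'
        return Dual(math.sin(x.val), math.cos(x.val) * x.der)

    # f(x) = x * sin(x) + x, differentiated at x = 2 by seeding der = 1.
    x = Dual(2.0, 1.0)
    f = x * sin(x) + x
    print(f.val, f.der)  # exact derivative, no finite differences, no symbolic algebra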

We have a working theory, supported by early experiments, that the core problems of contemporary machine learning can be traced back to autodiff. These include slow and expensive training, limited autonomous operation, lack of model interpretability, and lack of model composability.

Product

To that end, Traceoid is developing a generalized approach to machine learning: an approach based on an algorithm, autotrace, and a programming language, traceoid, built on autotrace.

The autotrace algorithm is an exponentially more powerful generalization of autodiff. Not only does it address the shortcomings of autodiff, it also provides a unified approach to dealing with equilibrium-seeking systems, that is, systems defined by their equilibria. Besides machine learning, this includes (at least partially) problems in various subfields of optimization, inverse theory, dynamical systems, statistical inference, variational inference, control theory, and more.
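
To illustrate what "equilibrium-seeking" means here, consider a generic sketch (under standard assumptions, and not the autotrace algorithm itself): a system defined by a fixed point z* = f(z*, x). The equilibrium can be found by iteration and then differentiated through with the implicit function theorem instead of unrolling the solver.

    # Illustrative only (not autotrace): an equilibrium-seeking system z* = f(z*, x).
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.3, size=(4, 4))  # small weights so the iteration converges
    x = rng.normal(size=4)

    def f(z, x):
        return np.tanh(W @ z + x)

    # Find the equilibrium by fixed-point iteration.
    z = np.zeros(4)
    for _ in range(200):
        z_new = f(z, x)
        if np.linalg.norm(z_new - z) < 1e-10:
            break
        z = z_new

    # Differentiate the equilibrium w.r.t. x via the implicit function theorem:
    # dz*/dx = (I - df/dz)^(-1) df/dx, evaluated at the equilibrium.
    s = 1.0 - np.tanh(W @ z + x) ** 2  # derivative of tanh at the equilibrium
    df_dz = s[:, None] * W
    df_dx = np.diag(s)
    dz_dx = np.linalg.solve(np.eye(4) - df_dz, df_dx)

    print("equilibrium:", z)
    print("sensitivity dz*/dx:", dz_dx)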

The traceoid programming language then acts as the interface, or intermediary, to autotrace.

traceoid introduces higher-order programming constructs to provide first-class support for describing equilibrium-seeking systems. Modeling and solving these systems is framed as a dialog between the developer and the autotrace algorithm, a dialog mediated through the programming language.

System descriptions that emerge from this dialog operate with minimal rather than perfect information: parts of the description are intentionally left unspecified and are filled in dynamically by autotrace at run-time. The gap between what the developer specifies and the fully resolved system is desirable; it is the very essence of autonomous behavior, a balance between hard constraints and the conduits through which the system can autonomously evolve.

We are inspired by the experience of crafting proofs with the Lean theorem prover, where proofs arise from a dialog between the language and the developer. Lean does not require the developer to specify every step; it fills some gaps on its own. However, the developer is not precluded from taking a more granular approach when needed.
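
As a purely illustrative Lean 4 snippet of that dialog (not Traceoid code), the same statement can be proved either by letting automation fill the gap or by supplying the exact step by hand:

    -- Automation fills in the routine step.
    example (n : Nat) : n + 0 = n := by simp

    -- The developer can instead spell out the exact lemma when finer control is wanted.
    example (n : Nat) : n + 0 = n := Nat.add_zero n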

  • The explicitly specified parts of a description also act as safeguards: guarantees about the system's run-time behavior.

Advantages

  • ROI

Humans are notoriously ill-equipped for reasoning about systems that involve nonlinearity, second-order effects, non-determinism, uncertainty and other mental tar pits.

traceoid will make reasoning about such systems easy.

This in itself has second-order effects. Better reasoning about uncertainty has obvious implications for describing machine learning models. It also has less obvious implications for reasoning about distributed systems, which are themselves defined by failures.

The dialog allows the developer to provide only a minimal description of the system, with parts deliberately left unspecified. This stands in contrast with approaches that demand complete descriptions.

Traceoid makes explicit the uncertainty that is implicit in specifying models.


Traceoid will enable large-scale energy-based models (EBMs), which are considered the holy grail of machine learning (a brief, generic sketch of an EBM follows the list below).

  • In addition, EBMs offer an essentially infinite context window.

  • EBMs have a looser structure, which is what we need.
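
For readers unfamiliar with the term, here is a generic, illustrative sketch of an energy-based model with a toy hand-written energy function; this is not Traceoid's approach, and a real EBM would learn the energy with a neural network.

    # Illustrative only: an EBM assigns a scalar energy E(x) to each configuration x;
    # low energy means plausible. Samples are drawn here with Langevin dynamics.
    import numpy as np

    rng = np.random.default_rng(0)
    target = np.array([1.0, -1.0])  # toy energy minimum; a real EBM learns E(x)

    def grad_energy(x):
        # gradient of E(x) = 0.5 * ||x - target||^2
        return x - target

    # Langevin dynamics: x <- x - (eps/2) * grad E(x) + sqrt(eps) * noise
    x = rng.normal(size=2)
    eps = 0.01
    for _ in range(5000):
        x = x - 0.5 * eps * grad_energy(x) + np.sqrt(eps) * rng.normal(size=2)

    print("sample near the energy minimum:", x)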

AGI

Our ultimate goal is AGI; autotrace and traceoid are the path we are taking to get there.

Investments

We are not actively raising funds; however, reach out if you are interested in investing in a future round.
