The Industrial Revolution transformed physical work by pumping and transforming energy.
We believe the Intelligence Revolution lies in AI that can 'pump' and transform information – taking raw observations and compressing them into powerful, executable models.
Machine learning currently has one pump: stochastic gradient descent. We want to expand this to include the many inductive 'pumps' that humans have invented, including, and especially, the scientific process.
LLMs
We are building self-referential AI that mirrors human cognition by integrating heuristic and symbolic reasoning within a machine learning framework.
Unlike typical machine learning, this will generate results that are human-interpretable and falsifiable, as is necessary for pushing the frontiers of science and math and for creating new knowledge.

Mechanistic Cognition
Industry-standard LLMs are great at intuitive and approximate reasoning, essentially performing System 1 thinking. However, LLMs are limited when it comes to the accurate, deliberate, and logical reasoning required for System 2 thinking.
At Springtail, we aim to bridge these two modes of thinking.
We are working on novel architectures that enable "mechanistic cognition" through System 1 perception & prediction, alongside deliberate System 2 exploration & evaluation.
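To make this split concrete, here is a minimal, self-contained sketch (a toy example of our own, not Springtail's actual architecture or code): a cheap heuristic scorer plays the System 1 role of proposing promising next steps, while a deliberate best-first search plays the System 2 role of expanding those proposals and verifying them exactly, on a small numbers puzzle.

```python
# Toy illustration only: the puzzle, names, and heuristic are stand-ins,
# not Springtail's architecture. Goal: reach a target by combining numbers.
import heapq
from itertools import combinations

def propose(nums, target):
    """'System 1': cheaply score candidate next steps with a rough heuristic
    (distance from the target), without proving anything."""
    proposals = []
    for (i, a), (j, b) in combinations(enumerate(nums), 2):
        for op, val in (("+", a + b), ("-", a - b), ("*", a * b)):
            rest = [n for k, n in enumerate(nums) if k not in (i, j)]
            proposals.append((abs(target - val), rest + [val], f"{a}{op}{b}={val}"))
    return sorted(proposals)[:5]          # keep only the most promising few

def solve(nums, target):
    """'System 2': deliberate best-first search that expands System 1's
    proposals and checks them exactly, so the answer is verifiable."""
    if target in nums:
        return []
    frontier = [(0, nums, ())]
    while frontier:
        _, state, path = heapq.heappop(frontier)
        for score, nxt, step in propose(state, target):
            if target in nxt:
                return list(path) + [step]        # exact check, not a guess
            if len(nxt) > 1:
                heapq.heappush(frontier, (score, nxt, path + (step,)))
    return None

print(solve([3, 7, 25, 50], 85))   # -> ['25+50=75', '7+75=82', '3+82=85']
```

Neither half suffices alone: the heuristic proposer is fast but unreliable, and exhaustive search without it is intractable; together they find a verifiable answer.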

Strange Loop
Current AI systems rely on humans to "close the final loop" of action, observation, learning, and adaptation.
During our research on small mechanistic reasoning models, we were always the ones closing the final informational loop as task complexity increased. Frustratingly, the reasoning methods we used to close this final loop were identical to the methods we were teaching the ML models!
This problem can be solved with a self-referential loop, a "strange loop" as described by Douglas Hofstadter.
We want to enable AI to close this loop itself.
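As a flavour of what closing that loop can look like, here is a deliberately tiny sketch (our own toy, with an invented hidden rule and hypothesis space, not our research code): the agent itself chooses each experiment, observes the outcome, and updates its surviving hypotheses, repeating until a single model remains.

```python
# Toy illustration only: an agent that closes its own
# act -> observe -> learn -> adapt loop, with no human picking the
# next experiment. Hidden rule and hypothesis space are invented here.

def hidden_rule(x):                       # the "world" the agent probes
    return (3 * x + 2) % 7

def predict(h, x):                        # a hypothesis is a pair (a, b)
    a, b = h
    return (a * x + b) % 7

# Hypothesis space: every rule of the form (a*x + b) % 7.
hypotheses = [(a, b) for a in range(7) for b in range(7)]

step = 0
while len(hypotheses) > 1:
    step += 1
    # Act: choose the query that best discriminates surviving hypotheses
    # (the agent, not a human, decides what to observe next).
    x = max(range(20), key=lambda q: len({predict(h, q) for h in hypotheses}))
    # Observe: run the experiment against the world.
    y = hidden_rule(x)
    # Learn / adapt: keep only hypotheses consistent with the observation.
    hypotheses = [h for h in hypotheses if predict(h, x) == y]
    print(f"step {step}: queried x={x}, observed y={y}, "
          f"{len(hypotheses)} hypotheses remain")

print("inferred rule (a, b):", hypotheses[0])     # -> (3, 2)
```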

Defying Neural Scaling Laws
The current ML paradigm is to train LLMs with boil-the-ocean levels of compute & data to compress knowledge that already exists in the world.
Scientists and children, on the other hand, use much smaller samples of observations paired with iteration, induction and deduction to infer novel truths about the universe.
Even after being fed enormous amounts of data, LLMs have not proved capable of this kind of open-ended induction.
At Springtail, we are building a highly efficient, small reasoning kernel that does exactly this. We believe that our approach will surpass neural scaling laws.
As open-source tools, our small reasoning kernels will enable humanity to broadly advance scientific and mathematical knowledge.
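As a flavour of induction from a small sample (a deliberately tiny sketch written for this page, not the kernel itself; the observations and candidate space are invented), five observations plus a search over simple symbolic laws are enough to recover an exact, falsifiable rule, with no gradient descent and no large corpus.

```python
# Tiny illustration only: symbolic induction from five observations.
# The law, the observations, and the candidate space are invented here.
from itertools import product

# Five observations of an unknown law (secretly y = x**2 + 3*x).
observations = [(x, x**2 + 3 * x) for x in (1, 2, 3, 5, 8)]

def explains(coeffs, data):
    """Does y = a*x**2 + b*x + c reproduce every observation exactly?"""
    a, b, c = coeffs
    return all(a * x**2 + b * x + c == y for x, y in data)

# Induction: search a small space of symbolic laws and keep every one
# consistent with *all* observations -- each survivor is a human-readable,
# falsifiable hypothesis, not a weight matrix.
laws = [coeffs for coeffs in product(range(-3, 4), repeat=3)
        if explains(coeffs, observations)]
print("surviving laws (a, b, c):", laws)   # -> [(1, 3, 0)]
```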