Wednesday, March 18, 2026

Overcoming the Twin Traps of AI



For all of the capabilities enabled by advances in generative AI technology in the past few years, issues in the underlying architecture are holding it back in several ways.

Counterintuitive AI is a company looking to reinvent the AI reasoning stack to address these issues, and it believes that current LLM technology suffers from what the company calls the Twin Traps problem.

Gerard Rego, founder of Counterintuitive AI, has spent a career spanning industry and academia, holding tech leadership positions at companies like Nokia, GM India, and MSC Software, as well as being a fellow at Stanford University, The Wharton School of Business at the University of Pennsylvania, and Cambridge University.

He believes that the first of these Twin Traps relates to the fact that modern LLMs run on floating point arithmetic, which is designed for performance rather than reproducibility. With this mathematical foundation, every operation introduces rounding drift and order variance, because fractions are rounded to the nearest number that can be represented in binary, leading to the same computation producing different answers across different runs or machines.

“Imagine you have 2 to the power of 16 digits,” said Rego. “Every time you run the machine, you’re going to pick up one of the possibilities in that number. So let’s say this time it picks up the 14th digit and answers you. You’ll say ‘this is a little different from the previous answer.’ Yeah, because it’s probabilistic math, so the number might be similar but it’s not reproducible.”
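The rounding drift described above is easy to demonstrate: floating point addition is not associative, so summing the same numbers in a different order can yield a different result. This sketch uses Python floats (IEEE 754 doubles) to show the effect:

```python
# Each intermediate sum is rounded to the nearest representable
# binary value, so grouping the same additions differently can
# change the final result.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # 0.6000000000000001
right = a + (b + c)  # 0.6

print(left == right)  # False
```

Parallel hardware like a GPU sums values in whatever order its threads finish, which is why the same model run can produce slightly different numbers from one execution to the next.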

The second issue is that current AI models are memoryless, as they build on something the company calls Markovian Mimicry, which essentially means reaching a conclusion based on current state rather than past history (i.e. predicting the next word in a sentence based solely on the word that came before it). In other words, they predict the next token without retaining the reasoning that led to that output.
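The Markov property can be illustrated with a toy bigram model (a simplified stand-in for the idea, not the company's terminology): the next word is chosen from whatever followed the current word in training data, with no memory of anything earlier in the sequence.

```python
import random
from collections import defaultdict

# Record, for each word, the words that ever followed it.
corpus = "the cat sat on the mat the cat ran".split()
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def next_word(current):
    # Prediction depends only on `current` -- all earlier context,
    # and any reasoning about it, is discarded.
    return random.choice(transitions[current])

print(sorted(set(transitions["the"])))  # ['cat', 'mat']
```

Whether "the" is followed by "cat" or "mat" here is independent of everything that came before "the", which is the memorylessness Rego is describing.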

Both of these issues contribute to AI, and the GPUs powering it, using a lot of energy, with negative implications for the environment.

These Twin Traps also result in several bottlenecks:

  • Physics ceiling: At some point, making chips smaller doesn’t stabilize unstable math
  • Compute ceiling: Adding more chips multiplies inconsistency instead of improving performance
  • Energy and capital ceiling: Power and money are wasted on correcting computational noise

“I’m a visiting fellow at Cambridge, and in 2019, 2020, I was sitting there talking to a bunch of folks and saying ‘hey, this AI thing is going to collapse on its head in about five to six years,’ and that’s because they’re going to hit a floating point wall and an energy wall,” Rego said.

He explained that today’s AI technology was built on concepts developed between the 70s and 90s, and there hasn’t really been anything terribly groundbreaking in the last 30 years, which is what’s driving Counterintuitive AI to go back to the drawing board and build something entirely different from the ground up that may address the current limitations. He believes that the next big leap in AI will come from reimagining how machines think, rather than continuing to scale compute and wasting a lot of energy and money in the process.

This new approach follows four main principles:

  • A reasoning-first architecture where the AI can justify its decisions
  • Systems that measure the energy cost of every decision
  • Auditable logic for every reasoning step
  • Human-in-the-loop design where humans are augmented by AI instead of replaced

The company plans to measure progress not through benchmarks, but by how consistently its systems reproduce reasoning, how safely they act when uncertain, and how energy efficient they are.

“We said, let’s build a non-floating point approach, what we call deterministic arithmetic. Let’s write software which isn’t memoryless, so it’s actually inheriting the lineage of your thought process. Every time you interact, it understands the cause and effect, not just the fundamental question of grammar,” Rego said.
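Counterintuitive AI hasn’t published the details of its deterministic arithmetic, but the general idea can be sketched with exact rational numbers, where the same computation gives a bit-identical answer regardless of evaluation order or machine:

```python
from fractions import Fraction

# Exact rational arithmetic: no rounding, so addition is associative
# and results are reproducible across runs and machines. This is only
# an illustration of the concept, not the company's actual method.
values = [Fraction(1, 10), Fraction(2, 10), Fraction(3, 10)]

left = (values[0] + values[1]) + values[2]
right = values[0] + (values[1] + values[2])

print(left == right)           # True: order of operations doesn't matter
print(left == Fraction(3, 5))  # True: 1/10 + 2/10 + 3/10 is exactly 3/5
```

The trade-off is speed: exact representations grow with each operation, which is part of why floating point won out for performance, and why the company frames this as a hardware problem rather than a software patch.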

The company recently announced that it is working on a new kind of reasoning chip called an artificial reasoning unit (ARU) that executes causal logic, memory lineage, and verifiable deduction. It referred to the ARU as initiating the “post-floating point GPU era of computing.”

The company also plans to develop a full reasoning stack to complement the ARU, which it believes will enable anyone to build systems that “can reason with traceable logic, remember decisions and reproduce truth at scale, all with margins of safety.”

With this new stack, the reasoning behind an answer would be more publicly accessible, in contrast to today, when much of the knowledge of how generative AI systems actually work is restricted to a few companies and labs.

“Scientific progress accelerates when ideas are transparent and tools are accessible. We will create interfaces for experimentation and build a community around deterministic reasoning, spanning hardware, logic, and theory. Our work stands on the shoulders of scientific tradition: when intelligence becomes reproducible, knowledge compounds faster,” the company believes.
