Feb 20, 2025

CTGT Raises $7M To Help Enterprises Break Through The Limits Of AI Compute

It's no secret that we are reaching the limits of AI compute, a challenge highlighted by some of the brightest minds in the field, including Ilya Sutskever.

Training runs for large models can cost tens of millions of dollars due to the cost of chips. The costs have become so high that Anthropic has estimated that updating Claude could cost as much as developing it in the first place. Companies like Amazon are spending billions to erect new AI data centers in an effort to keep up with compute demands. And while the recent release of DeepSeek's R1 model sparked a conversation about whether scaling is really all you need, R1 is still subject to the limitations inherent to traditional AI.

But maybe all of this isn't necessary. With a better foundational understanding of how AI works, we can approach AI model training and deployment in new ways that require a fraction of the energy and compute. 

As the Endowed Chair's Fellow at the University of California San Diego, I have focused singularly on solving this problem since my undergraduate years. In my first year of grad school, I was invited to give a presentation at ICLR that examined the benefits of a new way of evaluating and training AI models, one that was an order of magnitude faster and achieved three nines (99.9%) of accuracy, a huge leap over existing methods.

We replaced deep neural networks and backpropagation, and this methodology became the basis for CTGT.

I teamed up with Trevor Tuttle, an expert in hyperscalable ML systems, to build an entirely new AI stack. Instead of scaling by brute force, we broke down how deep learning systems actually learn and developed a way to achieve the same (or better) insights up to 500x faster while using a fraction of the compute. The same month we started, we won the inaugural PyTorch Conference Startup Showcase Award. It was remarkable to be recognized by the community behind the most widely used deep learning training framework while actively working to redefine its paradigm.

The key development underpinning this was understanding how neural networks actually learn. Deep learning models feature artificial neurons loosely analogous to biological ones: data is filtered forward through them, and error signals are then backpropagated to learn features. That last step, backpropagation, is entirely artificial and not how biological learning works. By eschewing the inefficiencies and less theoretically justified parts of deep learning, we create a path forward to the next generation of truly intelligent AI.
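For readers who haven't worked with this machinery directly, here is a minimal sketch of the conventional forward/backward training step in PyTorch, the paradigm described above. The toy model, random batch, and hyperparameters are illustrative placeholders; this shows the standard approach we are moving beyond, not CTGT's method.

```python
import torch
import torch.nn as nn

# A toy network: the "artificial neurons" data is filtered through.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# An illustrative random batch standing in for real training data.
inputs = torch.randn(8, 16)
targets = torch.randint(0, 2, (8,))

# Forward pass: filter the data through the network.
logits = model(inputs)
loss = loss_fn(logits, targets)

# Backward pass: backpropagate error gradients to every weight.
# This is the step that is entirely artificial relative to biology.
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Every mainstream deep learning system, however large, repeats this loop billions of times; the compute bill scales with it.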

In the short time we've been building CTGT, we've seen growing evidence that this approach breaks through the wall traditional deep learning has hit. By embracing the bias towards simplicity described in "The Bitter Lesson," we can scale models more intelligently for practical applications. Deep learning has produced incredible advancements over the past decade, but we now need to build the next evolution of AI beyond it.

CTGT is doing just that. You can read about our new round of funding here.

We're grateful to Gradient, General Catalyst, Y Combinator, Liquid 2, and all the amazing angels who share our vision for the next generation of AI, including Francois Chollet (creator of Keras), Paul Graham (co-founder of Y Combinator), and other luminaries.

If you're interested in working at the forefront of intelligence, join us.

Cyril Gorlla
