InnovationOpenLab

Goodfire Raises $7M to Break Open the Black Box of Generative AI Models

Business Wire

Seed funding led by Lightspeed Venture Partners will accelerate the development of groundbreaking tools to understand, edit, and debug AI models

SAN FRANCISCO: Today, Goodfire announced a $7M seed round to advance its mission of demystifying generative AI models. The startup develops tools that enable developers to debug AI systems by providing deep insights into their internal workings. Lightspeed Venture Partners led the round, with participation from Menlo Ventures, South Park Commons, Work-Bench, Juniper Ventures, Mythos Ventures, Bluebirds Capital, and several notable angels. The funding will be used to scale up the engineering and research team, as well as to enhance Goodfire’s core technology.

Generative models, such as large language models (LLMs), are becoming increasingly complex, making them difficult to understand and debug. The black-box nature of these models poses significant challenges for safe and reliable deployment: a 2024 McKinsey survey found that 44% of business leaders have experienced at least one negative consequence due to unintended model behavior. To address this problem, researchers and developers are turning to a new approach called mechanistic interpretability: the study of how AI models reason and make decisions, which aims to understand their internal workings at a detailed level.

Goodfire's product is the first to apply interpretability research to the practical understanding and editing of AI model behavior. It will give developers deeper insight into their models' internal processes, along with precise controls to steer model output (analogous to performing "brain surgery" on the model). Interpretability-based approaches can also reduce the need for expensive retraining or trial-and-error prompt engineering.
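The release does not disclose how Goodfire's product works internally. As a purely illustrative sketch of the general idea behind interpretability-based model editing, one published technique ("activation steering") adds a concept-aligned vector to a model's hidden activations at inference time, changing behavior without retraining. The toy two-layer network, the `forward` function, and the `concept` vector below are all hypothetical stand-ins, not Goodfire's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network standing in for one layer of a generative model.
W1 = rng.normal(size=(8, 4))   # input -> hidden
W2 = rng.normal(size=(4, 2))   # hidden -> output

def forward(x, steer=None):
    """Run the toy model, optionally adding a steering vector to the
    hidden activations before they reach the next layer."""
    h = np.tanh(x @ W1)        # internal activations
    if steer is not None:
        h = h + steer          # nudge internal state; weights stay untouched
    return h @ W2              # output (e.g. next-token logits)

x = rng.normal(size=(8,))
# Hypothetical "concept direction" that interpretability analysis might surface.
concept = np.array([1.0, 0.0, 0.0, 0.0])

base = forward(x)
steered = forward(x, steer=0.5 * concept)
print(base, steered)  # outputs differ: behavior changed with no retraining
```

The point of the sketch is the contrast the article draws: editing internal activations is a targeted intervention, whereas retraining or prompt engineering can only influence the model from the outside.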

"Interpretability is emerging as a crucial building block in AI," said Nnamdi Iregbulem, Partner at Lightspeed Venture Partners. "Goodfire's tools will serve as a fundamental primitive in AI development, opening up the ability for developers to interact with models in entirely new ways. We're backing Goodfire to lead this critical layer of the AI stack.”

The Goodfire team brings together experts in AI interpretability and startup scaling. "We were brought together by our mission, which is to fundamentally advance humanity's understanding of advanced AI systems," said Eric Ho, CEO and co-founder of Goodfire. "By making AI models more interpretable and editable, we're paving the way for safer, more reliable, and more beneficial AI technologies.”

  • Eric Ho, CEO, previously founded RippleMatch, a Series B AI recruiting startup backed by Goldman Sachs.
  • Tom McGrath, Chief Scientist, previously senior research scientist at DeepMind, where he founded DeepMind's mechanistic interpretability team.
  • Dan Balsam, CTO, was the founding engineer at RippleMatch, where he led the core platform and machine learning teams to scale the product to millions of active users.

Nick Cammarata, a leading interpretability researcher formerly at OpenAI, underscores the importance of Goodfire's work: "There is a critical gap right now between frontier research and practical usage of interpretability methods. The Goodfire team is the best team to bridge that gap.”

Goodfire is looking for agentic, mission-driven, kind, and thoughtful people to help build the future of interpretability. Ready to decode AI and secure its future? Apply to join.

About Goodfire

Goodfire is an SF-based public benefit corporation dedicated to advancing humanity's understanding of advanced AI systems. We build cutting-edge tools that enable developers to understand, edit, and debug AI models. By enhancing transparency and control in AI development, we aim to mitigate catastrophic risks while fostering the creation of safer, more beneficial AI systems.

Source: Business Wire

