Building Applications with AI - Lessons from LangChain, Hearth, & Context.ai by @ttunguz

Yesterday at TechCrunch Disrupt, Harrison Chase, founder of LangChain; Ashe Magalhaes, founder of Hearth; Henry Scott-Green, founder of Context.ai; & I discussed the future of building LLM-enabled applications.

We assembled the panel as a three-layer cake: Hearth, the application; LangChain, the infrastructure; Context.ai, the product analytics. Here are my takeaways from the conversation.

First, it’s very early in LLM application development in every sense of the word. Few applications are managing significant user volumes. Many remain in testing & are working to develop quality scores for LLM performance before launch. The state of the art is “vibes”: how much better did the model feel?
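
To make the “quality score vs. vibes” point concrete, here is a minimal sketch of what a pre-launch quality score could look like. Everything in it is an assumption for illustration: the reference prompts, the must-include phrases, & the call_model stub stand in for whatever evaluation set & model endpoint a team actually uses.

```python
# Hypothetical sketch: a tiny quality-score harness as an alternative to vibes.
# call_model is a stand-in for a real LLM call; the reference set is illustrative.

REFERENCE_SET = [
    {"prompt": "Summarize our refund policy in one sentence.",
     "must_include": ["refund", "30 days"]},
    {"prompt": "Which plan includes SSO?",
     "must_include": ["Enterprise"]},
]

def call_model(prompt: str) -> str:
    # Stand-in for a real model endpoint; returns a canned answer here.
    return "Refunds are available within 30 days on the Enterprise plan."

def quality_score(reference_set) -> float:
    """Fraction of reference prompts whose answer contains every required phrase."""
    passed = 0
    for case in reference_set:
        answer = call_model(case["prompt"]).lower()
        if all(phrase.lower() in answer for phrase in case["must_include"]):
            passed += 1
    return passed / len(reference_set)

if __name__ == "__main__":
    print(f"quality score: {quality_score(REFERENCE_SET):.0%}")
```

Even a crude pass-rate like this gives a team a number to track between model versions, which is the step up from “how much better did the model feel?”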

Fine-tuning may dominate the conversation on social media, but few teams have launched fine-tuned models into production.

Second, the stack is maturing in parallel. The core components include technologies that provide context to a model (retrieving relevant information), the models themselves, & the monitoring (LangChain’s LangSmith) & analytics (Context.ai) infrastructure to investigate performance.

It’s possible that the world will move to “constellations of models”: a user issues a query, the system adds the right context, & a router sends it to the best model for the task (either a large language model or a bespoke small model). But today, most traffic goes to the most accurate LLMs. There’s not much specialization in models just yet.
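
Here is a minimal sketch of that routing idea, under stated assumptions: the retrieval stub, the two model functions, & the keyword heuristic are all hypothetical, not anyone’s production design.

```python
# Hypothetical sketch of a "constellation of models" router:
# add context to the query, then route it to a large general model
# or a bespoke small model depending on the task.

def retrieve_context(query: str) -> str:
    # Stand-in for retrieval (e.g. a vector-store lookup); returns a fixed snippet here.
    return "Relevant docs: pricing page, API reference."

def call_large_model(prompt: str) -> str:
    return f"[large general model] answering: {prompt[-60:]}"

def call_small_model(prompt: str) -> str:
    return f"[bespoke small model] answering: {prompt[-60:]}"

def route_query(query: str) -> str:
    """Add context, then send the prompt to the model best suited to the task."""
    prompt = f"{retrieve_context(query)}\n\nUser: {query}"
    # Toy heuristic: narrow, well-defined tasks go to the small specialized model;
    # everything else goes to the most accurate general LLM, where most traffic goes today.
    if any(kw in query.lower() for kw in ("classify", "extract", "tag")):
        return call_small_model(prompt)
    return call_large_model(prompt)

if __name__ == "__main__":
    print(route_query("Classify this support ticket by urgency."))
    print(route_query("Draft an email explaining our new pricing."))
```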

Third, user intent data — following a user to understand their goal in a product — is essential to building great LLM applications. Part of this data set will be derived from product use; some of it will be inferred through analysis; & human labeling may play a role.

Understanding when a model acts unexpectedly or inappropriately is critical but hard to do & even harder to catch in real time. Still, it will be an important part of the stack.

Last, agentic applications, those which act on behalf of users, have the potential to transform work. There are new product design challenges to solve, including how to fashion these products to work for an individual & a team. Most of the LLM products today are single-player.

Thanks to Ashe, Harrison, & Henry for joining me on the panel.
