During sleep, the human brain sorts through different memories, consolidating important ones while discarding those that don’t matter. What if AI could do the same?

Bilt, a company that offers local shopping and restaurant deals to renters, recently deployed several million agents in the hope of doing just that.

Bilt uses technology from a startup called Letta that allows agents to learn from previous conversations and share memories with one another. Using a process called “sleep-time compute,” the agents decide what information to store in their long-term memory vault and what might be needed for faster recall.

“We can make a single update to a [memory] block and have the behavior of hundreds of thousands of agents change,” says Andrew Fitz, an AI engineer at Bilt. “This is useful in any scenario where you want fine-grained control over agents’ context,” he adds, referring to the text prompt fed to the model at inference time.
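The shared-block idea Fitz describes can be illustrated with a minimal sketch. This is a hypothetical toy, not Letta’s actual API: the class names, fields, and the example “policy” text are all invented for illustration. The key point it demonstrates is that agents hold references to a block rather than copies of its text, so one update to the block changes the context every agent assembles at inference time.

```python
# Hypothetical sketch (not Letta's actual API) of shared memory blocks:
# many agents reference the same block, so a single update changes the
# context every agent feeds to the model.

class MemoryBlock:
    """A named chunk of text that can be shared across agents."""
    def __init__(self, label: str, value: str):
        self.label = label
        self.value = value

class Agent:
    """Each agent holds references to blocks, not copies of their text."""
    def __init__(self, name: str, blocks: list):
        self.name = name
        self.blocks = blocks

    def build_context(self, user_message: str) -> str:
        # The prompt is assembled from the live blocks at request time,
        # so it always reflects their current contents.
        memory = "\n".join(f"[{b.label}] {b.value}" for b in self.blocks)
        return f"{memory}\nUser: {user_message}"

# One shared block referenced by many agents (invented example data).
policy = MemoryBlock("policy", "Recommend restaurants within 2 miles.")
agents = [Agent(f"agent-{i}", [policy]) for i in range(100_000)]

# A single update to the shared block changes every agent's behavior.
policy.value = "Recommend restaurants within 5 miles."
```

After the update, `agents[42].build_context("Dinner ideas?")` contains the new five-mile policy, without touching any individual agent. This mirrors the “fine-grained control over agents’ context” Fitz describes, in drastically simplified form.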

Large language models can typically “recall” information only if it is included in the context window. If you want a chatbot to remember your most recent conversation, you need to paste it into the chat.

Most AI systems can only handle a limited amount of information in the context window before their ability to use the data falters and they hallucinate or become confused. The human brain, by contrast, is able to file away useful information and recall it later.

“Your brain is continuously improving, adding more information like a sponge,” says Charles Packer, Letta’s CEO. “With language models, it’s like the exact opposite. You run these language models in a loop for long enough and the context becomes poisoned; they get derailed and you just want to reset.”

Packer and his cofounder Sarah Wooders previously developed MemGPT, an open-source project that aimed to help LLMs decide what information should be stored in short-term vs. long-term memory. With Letta, the duo has expanded their approach to let agents learn in the background.

Bilt’s collaboration with Letta is part of a broader push to give AI the ability to store and recall useful information, which could make chatbots smarter and agents less error-prone. Memory remains underdeveloped in modern AI, which undermines the intelligence and reliability of AI tools, according to experts I spoke to.

Harrison Chase, cofounder and CEO of LangChain, another company that has developed a method for improving memory in AI agents, says he sees memory as a vital part of context engineering—wherein a user or engineer decides what information to feed into the context window. LangChain offers companies several different kinds of memory storage for agents, from long-term facts about users to memories of recent experiences. “Memory, I would argue, is a form of context,” Chase says. “A big portion of an AI engineer’s job is basically getting the model the right context [information].”

Consumer AI tools are gradually becoming less forgetful, too. This February, OpenAI announced that ChatGPT will store relevant information in order to provide a more personalized experience for users—although the company did not disclose how this works.

Letta and LangChain make the process of recall more transparent to engineers building AI systems.

“I think it’s super important not only for the models to be open but also for the memory systems to be open,” says Clem Delangue, CEO of the AI hosting platform Hugging Face and an investor in Letta.

Intriguingly, Letta’s CEO Packer hints that it might also be important for AI models to learn what to forget. “If a user says, ‘that one project we were working on, wipe it out from your memory,’ then the agent should be able to go back and retroactively rewrite every single memory.”

The notion of artificial memories and dreams makes me think of Do Androids Dream of Electric Sheep? by Philip K. Dick, a mind-bending novel that inspired the stylishly dystopian movie Blade Runner. Large language models aren’t yet as impressive as the rebellious replicants of the story, but their memories, it seems, can be just as fragile.


This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.
