RLM

This repository is an experiment in building AI workflows.

The code in this repo is inspired by rlms.

Programmatic Tool Calling (PTC) and related techniques allow LLMs to emit executable code and run it in a REPL environment.
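A minimal sketch of the PTC loop, assuming a hypothetical `llm(prompt)` function that returns a string of Python code (stubbed out here). The model's code runs in a persistent namespace, and whatever it prints is captured so it can be fed back into the next prompt:

```python
import contextlib
import io

def llm(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM API here.
    return "result = sum(range(10))\nprint(result)"

def ptc_step(prompt: str, namespace: dict) -> str:
    code = llm(prompt)                     # model emits executable code
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):  # capture the REPL's stdout
        exec(code, namespace)              # run it; bindings persist
    return buf.getvalue()                  # output flows back to the model

env: dict = {}
print(ptc_step("Add the numbers 0..9", env), end="")  # the stub prints 45
```

Because `namespace` persists across steps, later model turns can build on variables defined in earlier ones.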

Recursive Language Models (RLMs) go one step further: the model can recursively call itself to handle long contexts.
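A minimal sketch of the recursive call, again assuming a hypothetical `llm(prompt)` with a limited window (the stub here just keeps the first sentence). A context that does not fit is split, each half is answered recursively, and one final call merges the partial answers:

```python
MAX_CHARS = 200  # assumed context window for the stub model

def llm(prompt: str) -> str:
    # Placeholder: "answer" by keeping the first sentence of the prompt.
    return prompt.split(".")[0] + "."

def rlm(query: str, context: str) -> str:
    if len(context) <= MAX_CHARS:
        return llm(f"{query}\n---\n{context}")   # base case: fits in window
    mid = len(context) // 2
    left = rlm(query, context[:mid])             # recurse on each half
    right = rlm(query, context[mid:])
    return llm(f"{query}\n---\n{left}\n{right}") # merge partial answers
```

The recursion depth grows only logarithmically with context length, while each individual call stays within the window.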

Reflection

When handling long contexts, the LLM behaves like a parser: semantics can flow both up and down a tree structure. From this perspective, the key question is how the semantics flow in each direction. In RLMs, semantics flow down as queries paired with sub-contexts, and flow up as the results that subagents return.

One problem with this view: how can an RLM handle multiple long input contexts?

For the context or system prompt, is it possible (or useful) to write not a description of the situation, but only the core code? I think this would resemble Lisp, whose core implementation can be written on a single page.

For RLMs, variables act as signifiers, while the context supplies the signified meanings they refer to.
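A sketch of this signifier/signified split, assuming the RLM's REPL holds context chunks as ordinary variables. The model never sees the full text; it only manipulates short names (the signifiers), while the values they denote (the signified) live in the environment:

```python
# Long context chunks stored under short variable names.
env = {
    "chap_1": "Call me Ishmael. " * 1000,
    "chap_2": "It was a dark night. " * 1000,
}

# The model emits code that refers to the chunks by name only;
# this string stands in for a real model reply.
model_code = "lengths = {name: len(text) for name, text in env.items()}"

local_ns: dict = {}
exec(model_code, {"env": env}, local_ns)  # resolve signifiers to values
print(local_ns["lengths"])
```

The prompt shown to the model can then stay short: it lists the names and brief descriptions, not the megabytes of text behind them.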

Consider the situation where LLMs only output code: what they are really outputting is an abstract syntax tree (AST).
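If every reply is code, each reply can be parsed directly into an AST with the standard library, making the output machine-checkable before it is ever executed. A small illustration with a made-up model reply:

```python
import ast

# Stand-in for a code-only model reply.
model_output = "total = sum(x * x for x in range(5))"

tree = ast.parse(model_output)

# The reply is one assignment whose value is a call expression.
stmt = tree.body[0]
print(type(stmt).__name__)        # Assign
print(type(stmt.value).__name__)  # Call
```

A runner could reject or repair replies whose trees fail such structural checks, instead of discovering the problem at execution time.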

Is it possible to translate a prompt into code? I think that with RLMs and reflection it is, and the resulting code can be iterated on.
