Show HN: Evolving Agents Framework
Hey HN,
I've been working on an open-source framework for creating AI agents that evolve, communicate, and collaborate to solve complex tasks. The Evolving Agents Framework allows agents to:
- Reuse, evolve, or create new agents dynamically based on semantic similarity
- Communicate and delegate tasks to other specialized agents
- Continuously improve by learning from past executions
- Define workflows in YAML, making it easy to orchestrate agent interactions
- Search for relevant tools and agents using OpenAI embeddings
- Support multiple AI frameworks (BeeAI, etc.)

Current Status & Roadmap

This is still a draft and a proof of concept (POC). Right now, I'm focused on validating it in real-world scenarios to refine and improve it.
Next week, I'm adding a new feature to make it useful for distributed multi-agent systems. This will allow agents to work across different environments, improving scalability and coordination.
Why? Most agent-based AI frameworks today require manual orchestration. This project takes a different approach by allowing agents to decide and adapt based on the task at hand. Instead of always creating new agents, it determines if existing ones can be reused or evolved.
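To make the reuse-or-evolve decision concrete, here is a minimal sketch of a semantic lookup over agent descriptions. A toy bag-of-words embedding stands in for the OpenAI embeddings the framework actually uses, and all names here are illustrative, not the framework's API:

```python
import math

# Toy stand-in for an embedding model (the framework reportedly uses
# OpenAI embeddings); we embed text as bag-of-words counts so the
# sketch runs without network access.
def embed(text: str) -> dict:
    vec = {}
    for token in text.lower().split():
        vec[token] = vec.get(token, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A tiny "library" of existing agents, keyed by description.
library = {
    "document_analyzer": "analyze documents and extract structured fields",
    "chat_greeter": "greet users and route small talk",
}

def find_closest(request: str):
    scored = {name: cosine(embed(request), embed(desc))
              for name, desc in library.items()}
    return max(scored, key=scored.get), scored

best, scores = find_closest("analyze invoices and extract the total amount")
print(best)  # the document analyzer is the closest existing agent
```

An invoice-analysis request scores highest against the document analyzer, so the framework would reuse or evolve that agent instead of creating one from scratch.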
Example Use Case: Let's say you need an invoice analysis agent. Instead of manually configuring one, our framework:

- Checks if a similar agent exists (e.g., a document analyzer)
- Decides whether to reuse, evolve, or create a new agent
- Runs the best agent and returns the extracted information
Here's a simple example in Python:
import asyncio

from evolving_agents.smart_library.smart_library import SmartLibrary
from evolving_agents.core.llm_service import LLMService
from evolving_agents.core.system_agent import SystemAgent

async def main():
    library = SmartLibrary("agent_library.json")
    llm = LLMService(provider="openai", model="gpt-4o")
    system = SystemAgent(library, llm)

    result = await system.decide_and_act(
        request="I need an agent that can analyze invoices and extract the total amount",
        domain="document_processing",
        record_type="AGENT"
    )

    print(f"Decision: {result['action']}")  # 'reuse', 'evolve', or 'create'
    print(f"Agent: {result['record']['name']}")

if __name__ == "__main__":
    asyncio.run(main())

Next Steps

- Validating in real-world use cases and improving agent evolution strategies
- Adding distributed multi-agent support for better scalability
- Full integration with BeeAI Agent Communication Protocol (ACP)
- Better visualization tools for debugging

Would love feedback from the HN community! What features would you like to see?
Your framework name suggests that you have an effective method of taking an existing agent that is "close" to meeting requirements by some similarity metric and evolving a new agent that will be better suited than the base agent is to meet the requirements.
If this is true, your post, your repo README file, and your BeeAI Community call presentation (which starts here: https://www.youtube.com/watch?v=5-xqQBv-ccY&t=1294s) ought to be proclaiming such a notable success. Yet, I've seen little to nothing about it in any of those places. Am I missing it?
How exactly does your agent evolution process work?
It appears to be a "please improve yourself" style prompt that is run through the LLM.
https://github.com/matiasmolinas/evolving-agents?tab=readme-...
Here is the March 2025 BeeAI Community Call ( https://www.youtube.com/watch?v=5-xqQBv-ccY ), where I presented the draft of the framework and shared some thoughts on why I think it makes sense to provide this kind of framework and tooling for AI agents.
Cool project! Thanks for sharing.
Intrigued by the storage choice: why not use a vector DB?
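For context on the question: the README's example persists the agent library to a JSON file, while the semantic lookup runs separately over embeddings. A vector DB would put storage and similarity search in one place. A rough in-memory sketch of the add/query interface such a store exposes (names are illustrative, not the framework's or any particular vector DB's API):

```python
# In-memory mock of a vector store: records are (id, vector, metadata),
# and query() returns the records whose vectors score highest against
# the query vector.
class VectorStore:
    def __init__(self):
        self.records = []  # list of (id, vector, metadata)

    def add(self, record_id, vector, metadata):
        self.records.append((record_id, vector, metadata))

    def query(self, vector, top_k=1):
        def score(rec):
            # dot product as a stand-in for cosine similarity
            return sum(a * b for a, b in zip(rec[1], vector))
        return sorted(self.records, key=score, reverse=True)[:top_k]

store = VectorStore()
store.add("doc_analyzer", [0.9, 0.1], {"domain": "document_processing"})
store.add("greeter", [0.1, 0.9], {"domain": "chat"})
hit = store.query([0.8, 0.2], top_k=1)[0]
print(hit[0])  # the closest record is doc_analyzer
```

Swapping the JSON file for something with this shape would also avoid re-embedding the whole library on every lookup.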
> The second part demonstrates how agents communicate with each other through workflows defined in YAML
oh no
It's not continuously evolving outside the developer's control, right? Companies need to checkpoint dependencies for resilience. Can you define tests to ensure compliance of new versions?
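One way to address both concerns at once is to make every evolved agent an immutable, content-addressed version and gate promotion behind a fixed regression suite. A hedged sketch of that pattern (names are illustrative, not the framework's API; the candidate agent is a stub standing in for an LLM-backed one):

```python
import hashlib

def checkpoint(prompt: str) -> str:
    # Content-addressed version id, so a deployed agent version is
    # pinnable and reproducible.
    return hashlib.sha256(prompt.encode()).hexdigest()[:12]

def passes_compliance(run_agent, cases) -> bool:
    # A candidate is only promoted if every regression case passes.
    return all(expected in run_agent(inp) for inp, expected in cases)

# Stub agent: a real one would invoke an LLM with the evolved prompt.
def candidate_agent(invoice_text: str) -> str:
    total = invoice_text.split("Total:")[1].strip()
    return f"total={total}"

cases = [("Item A 10.00\nTotal: 10.00", "total=10.00")]
version = checkpoint("extract invoice totals v2")
promoted = passes_compliance(candidate_agent, cases)
```

Nothing evolves in production under this scheme; evolution only produces candidates, and operators decide which checkpointed version to pin.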
As long as we don't ask it to make paperclips it should be fine.
Feels like the JavaScript framework wars era all over again. Wondering what will be the prevailing paradigm.
No matter what paradigm will come out on top:
Someone will notice there's something to be gained by building abstractions around caching/memoization and that running workflows in distributed environments might be augmented by some form of precompilation and/or hydration.
Then others will boast that they can do everything with whatever flavour of vanilla they are familiar with.
Then someone will bring up Greenspun's tenth rule.
Anyways: We'll have pages and pages of HN threads to scroll through.
There should be an image of Agent Smith somewhere in the readme (j/k) :)
Sadly we'll have many Agent Smiths, and as in the movies, without some rudimentary Matrix (a good, democratic one is possible to build) we and Neo will have no chance against them.
Agents can change our world and us, but we have little or no ability to change the online world or their often private multimodal "brains". Representing them as familiar 3D environments would level the playing field.
Agents are like strict librarians who spit out quotes but never let us into the library. That library holds too many stolen things, and companies don't want us to see that, so they won't build 3D game-like representations of it.
And here I was, thinking I was clever for coming up with the agent smith image for an agent framework.
https://codeberg.org/jfkimmes/TinyAgentSmith
You’re not alone: https://synw.github.io/agent-smith/
Sure would beat the cringe ugly AI image currently in the readme.
Is this distributed and peer-to-peer? Because I don't want to pay the cost of 50 agents myself. And we truly need something that can't be centralized.
I love the approach
Would you like to join forces? Reach out to me: https://engageusers.ai/ecosystem.pdf