If you are looking for LLM agents that go off and do a bunch of work on their own, you will be supremely underwhelmed. Anyone who went straight to building agents without a human somewhere in the loop found they were trying to make the LLM do things it is extremely bad at.
The right approach to building toward agents is to start with something that gives pretty good responses to prompts, then build up an agentic mode that lets it do more and more in response to each prompt. Think of it as extending how much you get per prompt, by chaining together components you've already made good at their jobs.
Cursor (the LLM-powered VS Code fork) has an agentic mode, and they are doing this the right way. The normal chat window is good at producing changes to your code, applying them, checking lints, suggesting terminal commands, and doing directory listings or RAG on your codebase. Agentic mode ties those together to do more of the work you want with fewer prompts from you.
As a side note: while I know of several language-model-based systems that have been deployed at companies, some of those companies don't want to talk about it:
1. It's still perceived as a matter of competitive advantage.
2. There is a serious concern about backlash. The public's response to finding out that companies have used AI has often not been good (or even reasonable) -- particularly if worker replacement was involved.
It's a bit more complicated with "agents" as there are 4 or 5 competing definitions for what that actually means. No one is really sure what an 'agentic' system is right now.
With all the automation agencies and the YouTube demos of n8n and Make.com, agents should be everywhere.
I look at my workplace and I see places where they might fit in, but if the reliability isn't 99.5% they won't be trusted, and I think that's a problem.
I made a toy in n8n that collects transactions from YNAB via the API and matches them to Amazon orders in Gmail. It then uses GPT-4o with vision to categorize the product pictures according to my budget's categories, but I have to add the order link to the transaction memo and flag it for human review because it's only right about 80% of the time. It has sped up the workflow for sure, but it's nowhere near good enough to set and forget.
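The vision step is the only interesting part. A minimal sketch of it, assuming hypothetical helpers wrap the YNAB and Gmail sides (the real thing lives in n8n nodes):

    from openai import OpenAI

    client = OpenAI()
    CATEGORIES = ["Groceries", "Electronics", "Household", "Fun Money"]  # my budget's categories

    def categorize(product_image_url: str) -> str:
        # One GPT-4o vision call: pick a single budget category for the product photo.
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Pick exactly one of these categories for the pictured product, "
                             "and reply with the category name only: " + ", ".join(CATEGORIES)},
                    {"type": "image_url", "image_url": {"url": product_image_url}},
                ],
            }],
        )
        return resp.choices[0].message.content.strip()

    # The surrounding flow (hypothetical names): for each YNAB transaction, find
    # the Gmail Amazon order with the matching amount, categorize its product
    # image, then flag the transaction for human review since it's only ~80% right.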
If we're going to have a conversation about agents or agentic systems, it is really important that we agree on which definition of those terms we are using for the purpose of this conversation.
If you ask two different people in the AI space to define "agent" you almost always get two slightly (or significantly) different definitions!
Here are just some of the definitions I've seen over time: https://news.ycombinator.com/item?id=42216217#42228364
For the purpose of this thread the most cynical definition, "LLMs that do something useful", might actually be the best fit!
We've been using them to find novel vulnerabilities in open source web apps. The past 4 posts here have details:
- Auth bypass/arbitrary file read in Scoold: https://xbow.com/blog/xbow-scoold-vuln/
- SSRF in 2FAuth: https://xbow.com/blog/xbow-2fauth-ssrf/
- Stored XSS in 2FAuth: https://xbow.com/blog/xbow-2fauth-xss/
- Path traversal in Labs.AI EDDI: https://xbow.com/blog/xbow-eddi-path/
Each of those has an associated agent trace so you can go read exactly what the agent did to find and exploit the vulnerability.
I know of many, many LLM systems in production, since that's what I've been helping companies build since the start of the year. Mostly it's pretty rote automation work, but the cost savings are incredible.
Agentic workflows are a much higher bar and are just barely starting to work. I can't speak to their efficacy, but I've started seeing some companies adopt a few starter-level agents.
The way I look at agentic systems is that there are tools an LLM can call out to and do work with.
Last Wednesday I participated in Anthropic's Model Context Protocol hackathon and, with my teammate Zia, built a system that automatically searches for restaurants matching your dietary preferences and group size.
It also automatically downloads the restaurant's social media to get a vibe for the place.
There's a video of it in action here: https://www.youtube.com/watch?v=c6vGrfHFyu8
And a Github repo here: https://github.com/zia-r/gotta-eat
Not sure if this fits, but I am the founder of ArdentAI, an agentic data engineer.
You can use Ardent to sync and transform data, auto-fix Airflow pipelines, modify schemas, and more, directly on your own existing stack.
It's designed to run completely independently: you just tell it to do a task and let it do the work for you.
Check it out: https://ardentai.io
I asked a similar question a few months ago: https://news.ycombinator.com/item?id=39886178
It seems the community has gotten more negative about agentic approaches since then, and it wasn’t pretty then.
We use LLM agents to proofread and edit transcripts after they've been edited by people. They are good at applying our customers' specific requirements (e.g. capitalization, formatting, etc.) without our folks having to worry about any of that. We use https://transcriberai.com or https://otter.ai/ (there are a bunch) to create the first transcript for our transcriptionists.
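A minimal sketch of what the style-rule pass can look like (the rules, model, and prompt here are invented examples, not our actual setup):

    from openai import OpenAI

    client = OpenAI()

    # Loaded per customer in the real pipeline; these rules are invented examples.
    STYLE_RULES = """\
    - Capitalize product names exactly: iPhone, YouTube
    - Spell out numbers under ten
    - Speaker labels in the form "NAME:"
    """

    def edit(transcript: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system",
                 "content": "You are a transcript editor. Apply these rules and "
                            "change nothing else:\n" + STYLE_RULES},
                {"role": "user", "content": transcript},
            ],
        )
        return resp.choices[0].message.content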
We built several LLM-powered applications that collectively served thousands of users. The biggest challenge we faced was ensuring reliability: making sure the workflows were robust enough to handle edge cases and deliver consistent results.
In practice, achieving this reliability meant repeatedly:
1. Breaking down complex goals into simpler steps: composing prompts, tool calls, parsing steps, and branching logic.
2. Debugging failures: identifying which part of the workflow broke and why.
3. Measuring performance: assessing changes against real metrics to confirm actual improvement.
We tried several existing observability tools and agent frameworks, and each fell short on at least one of these three dimensions. So we built our own: https://github.com/PySpur-Dev/PySpur
1. Graph-based interface: we can lay out an LLM workflow as a node graph. A node can be an LLM call, a function call, a parsing step, or any logic component. The visual structure provides an instant overview, making complex workflows more intuitive.
2. Integrated debugging: when something fails, we can pinpoint the problematic node, tweak it, and re-run it on test cases right in the UI.
3. Node-level evaluation: we can assess how node changes affect performance downstream.
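For readers who haven't seen the pattern, a node-graph workflow is structurally just this (a plain-Python illustration, not PySpur's actual API):

    from typing import Callable

    # Each node is a named callable; edges say where a node's output flows next.
    nodes: dict[str, Callable[[str], str]] = {
        "extract": lambda text: text.split(":", 1)[-1].strip(),   # parsing node
        "classify": lambda text: "refund" if "refund" in text.lower() else "other",  # stand-in for an LLM call
        "route": lambda label: f"queue/{label}",                  # branching/logic node
    }
    edges = {"extract": "classify", "classify": "route", "route": None}

    def run(start: str, payload: str) -> str:
        node = start
        while node is not None:
            payload = nodes[node](payload)   # any single node can be re-run to debug it
            node = edges[node]
        return payload

    print(run("extract", "Subject: please refund my order"))  # -> queue/refund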
We hope it's useful for other LLM developers out there.
You'd probably have to define agents first. What large/mega caps call agents is LLM + RAG + API calls to read data and trigger jobs. And there are plenty of those online.
The term "agent" is quite broad. In my definition, an LLM becomes an agent when it utilizes the tool usage option.
ChatGPT is a good example: you ask for an image, and you receive one; you ask for a web search, and the chatbot provides an answer based on that search.
In both cases, the chatbot rewrites your query for the tool, and it can even call tools multiple times based on previous results.
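That loop is just the standard function-calling API. A minimal sketch with a made-up web_search tool (OpenAI's Python client shown; the same pattern exists elsewhere):

    from openai import OpenAI
    import json

    client = OpenAI()

    # A made-up web_search tool; the model never executes it, it only asks for it.
    tools = [{
        "type": "function",
        "function": {
            "name": "web_search",
            "description": "Search the web for current information.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }]

    messages = [{"role": "user", "content": "Who won yesterday's F1 race?"}]
    resp = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)

    msg = resp.choices[0].message
    if msg.tool_calls:
        # The model rewrote the question into a search query on its own.
        call = msg.tool_calls[0]
        print(call.function.name, json.loads(call.function.arguments))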
Windsurf IDE from Codeium. Still some rough edges, but they've beaten the Claude UI and Cursor for coding. Their code search is also next-level. Crazy efficiency gains for me on small-to-medium sized projects. Apparently, they have a ton of enterprise customers and are doing fast iteration loops relative to user signals (e.g. accepting diffs).
Yesterday I recorded an example of an O'Reilly Auto Parts customer service agent to show how users can invoke them using RAG (last part of this video): https://youtu.be/Qk_pVHtgcyA
There are plenty of RAG-capable LLMs in production, but still few products/UX oriented toward agentic work.
An AI product that can make purchases and API requests to external services like delivery drivers, calendars, etc. is still needed to truly enable these "agents" - which right now are basically read-only domain-specific LLMs.
We have a couple of systems at work that incorporate LLMs. There are a bunch of RAG chatbots for large documentation collections and a bunch of extract-info-from-email bots. I would call none of these an agent. The one thing that comes close is a bot that can query a few different SQL and API data sources. Given a user's text query, it decides on its own which tool(s) to use. It can also retry, or reformulate its task. The agentic parts are mainly done in LangGraph.
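Stripped of the LangGraph specifics, the agentic part is roughly a routing decision plus a retry loop (the llm_* helpers and data-source wrappers below are hypothetical stand-ins, not our actual code):

    # Pick a tool, try it, reformulate the task on failure.
    TOOLS = {
        "sales_db": lambda q: run_sql(q),    # SQL data source (hypothetical wrapper)
        "crm_api": lambda q: call_crm(q),    # REST data source (hypothetical wrapper)
    }

    def answer(question: str, max_tries: int = 3) -> str:
        task = question
        for _ in range(max_tries):
            tool = llm_pick_tool(task, list(TOOLS))       # LLM decides which source fits
            try:
                return TOOLS[tool](llm_write_query(task, tool))  # LLM writes the query
            except Exception:
                task = llm_reformulate(task)              # re-phrase the task and retry
        return "Sorry, I couldn't answer that."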
They're ALL bullshit and there's a technical reason why.
Your Rube Goldberg contraption that you put together for your borderline-fraudulent pitch deck is NOT an assembly line, nor is it a product anyone's gonna buy. Why?
Because cosine similarity search mathematically sucks a*, and large context windows, while better, are nowhere close to being fast and practical (maybe with the small exception of the generic-sounding 1M-context summaries you now get from Gemini Flash 2.0 exp). You probably don't have any kind of CI/CD setup, no testing at all, zero, no benchmarking of accuracy; you probably can't even get lm_eval installed in the first place, so no troubleshooting methodology, no formal iteration pipeline, and you're not putting out a new model every 2 weeks and iterating on it. And YOU at this point probably can't find your own way to your own fkin toilet seat without Cursor's GPS showing you where it is and then writing a whole factory just to open the toilet seat.
You look at the YouTube demos and it's just more investor slop to be sold to other sloppy investors. I even asked on uncle Elon's Twitter whether anyone had a demo of agents doing anything in real life, and after a quarter-million views the only things that worked AT ALL were spambots and Pliny's agent making a sh*tcoin. https://x.com/nisten/status/1808522547169763448
People cook something at home and immediately get delusional, thinking they now have an assembly line that's just going to print money... have you ever actually looked at an industrial pasta-making machine? Do YOU have the skills to build that? I'm sorry, but no amount of shrooms and microdosed-meth pills is gonna get you there.
Agents do not exist yet, they will sooner or later, but right now they're a concept more along the lines of scammy ledger-backed dbs.
You can always prove me wrong with a real-life demonstration of an automated tool doing a complex series of steps that you'd normally expect an average-ish worker to do for you, at a RELIABLE rate. I.e. doing your taxes like your accountant (or, hopefully, your 10-year-old) does.
Setting aside buzzwords, how are people currently dealing with the problem of LLM errors propagating/accumulating through a pipeline? All of these model calls feeding into model calls feeding into model calls result in a pretty low probability that the overall task stays on a happy path. And adding even more calls to guardrail the steps adds compounding latency.
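For intuition on how fast this compounds (illustrative numbers, not a measurement):

    # Back-of-the-envelope: per-step happy-path probability compounds fast.
    p_step = 0.95
    for n in (3, 5, 10, 20):
        print(f"{n} chained calls -> {p_step ** n:.0%} end-to-end")
    # 3 -> 86%, 5 -> 77%, 10 -> 60%, 20 -> 36%

Even a quite reliable 95%-per-step pipeline is a coin flip by twenty calls, which is why the guardrail-vs-latency trade-off bites.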
IMO "Agents" are a marketing term, they are simply software that use LLMs somewhere in the backend. Often daisy chained into a series of operations that may involve additional LLM calls or calls to other internal/external services.
One we've been using for meeting notes + action items works quite well https://fireflies.ai
We're running an agentic LLM system in production that generates marketing strategy. As of now, we're up to about 60 agents and as we add functionality we'll add more. And yes, it's not easy to get them to stay on track and cooperate. https://www.goguma.io
Devin is closest for me. I’ve had it implement additional language locales and add dark mode to our UI.
Support bots. Scrapers. Personal assistants. Search startups like perplexity. Scammer bots. Bots that spread political agenda. "AI" memecoins.
Agentic workflows are great only for demos without real business cases. Each agent can hallucinate and pass that hallucination on to the next agent; in the end, you have just garbage. But... it's better to be silent, we still need to inflate this bubble.
LLM agents: not so much.
Actual real Intelligent Autonomous Agents? Go to Mars and kick a rover... there's one. Or go try front-running the markets; you'll meet about 6000 other ones trying to outrun you.
An anecdote that might help:
I do contracting work; we're building a text-to-SQL automated business analyst. It's quite well-rounded: it tries to recover from errors, allows automatic creation of appropriate visualisations, and has a generic "FAQ" component to help the user understand how to use the tool. The tool is available to some 10,000 B2B users.
It's just a bunch of prompts conditionally slapped together in a call graph.
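Concretely, the call graph is not much more than this (a minimal sketch; llm() wraps a single chat completion, and run_query / format_table / render_chart / QueryError are made-up names, the real graph is larger):

    def handle(question: str) -> str:
        intent = llm("Classify this as sql, viz, or faq: " + question)
        if intent == "sql":
            sql = llm("Write SQL answering: " + question)
            try:
                return format_table(run_query(sql))
            except QueryError as e:
                fixed = llm(f"This SQL failed with {e}, fix it: {sql}")  # error recovery
                return format_table(run_query(fixed))
        if intent == "viz":
            return render_chart(llm("Pick a chart spec for: " + question))
        return llm("Answer from the product FAQ: " + question)  # the generic faq part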
The client needed AGENTIC AI, without specifying exactly what that meant. I spent two weeks pushing back, stating that if you replace the hardcoded call graph with something that has """free will""", accuracy and interpretability go down while runtimes go up... but no, we must have agents.
So I did nothing and called the current setup "constrained agentic AI". The result: high fives all around, everyone is happy.
Make of that what you will... AI agents are at least 90% hype.