Ask HN: How much of OpenAI code is written by AI?

growbell_social | 66 points

I think this is the wrong question.

The right question is how much human code can a human push now vs prior to AI.

Everything we've done in coding has been assisted.

Prior to this current generation of web applications, we had the advent of concepts like Object-Oriented Programming, and before that, even C was a massive step up from Assembly and punch cards.

AI has written a lot of code, but very little high-velocity production code entirely on its own (i.e. for people with no coding background).

In Ruby on Rails, the concept of fast code generation has been around for over 20 years; see Scaffolding: https://www.rubyguides.com/2020/03/rails-scaffolding/
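To make the comparison concrete: a single `rails generate scaffold Post title:string body:text` spits out a model, migration, controller, views, and RESTful routes. A toy sketch of the CRUD plumbing it wires up (plain Ruby, no Rails required; `PostStore` is a made-up stand-in, not anything Rails actually generates):

```ruby
# Stand-in for the CRUD actions a Rails scaffold generates for one resource.
class PostStore
  def initialize
    @posts = {}
    @next_id = 1
  end

  def create(attrs)        # POST /posts
    id = @next_id
    @next_id += 1
    @posts[id] = attrs.merge(id: id)
  end

  def show(id)             # GET /posts/:id
    @posts[id]
  end

  def update(id, attrs)    # PATCH /posts/:id
    @posts[id] = @posts[id].merge(attrs)
  end

  def destroy(id)          # DELETE /posts/:id
    @posts.delete(id)
  end

  def index                # GET /posts
    @posts.values
  end
end
```

The point stands either way: generating boilerplate from a short spec predates AI by decades.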

So to answer your question,

1. AI has pushed a lot of code.

2. AI has pushed almost no code without the oversight of human software engineers.

3. Software engineers are pushing an order of magnitude more code, producing more functional utility, and fixing more bugs than ever before.

I don't know what the future holds, but using software to help humans build faster is not a new trend, and I don't think software can fully replace humans (yet).

charlesju | 2 days ago

Not OpenAI, but Anthropic CPO Mike Krieger said in response to a question of how much of Claude Code is written by Claude Code: "At this point, I would be shocked if it wasn't 95% plus. I'd have to ask Boris and the other tech leads on there."

[0] https://www.lennysnewsletter.com/p/anthropics-cpo-heres-what...

notfried | 2 days ago

> If AI is a threat to software engineering, I wouldn't expect many software engineers to actively accelerate that trend.

This is a naive take. Throughout history, things have been automated with the help of the very professions being automated away.

crop_rotation | 2 days ago

> If AI is a threat to software engineering, I wouldn't expect many software engineers to actively accelerate that trend. I personally don't view it as a threat, but some people (non engineers?) obviously do.

Software engineers have been automating away other people's jobs for nearly a century. It would be quite rich if the profession suddenly felt qualms about the process! (To be clear, I think automation is great and should always be pursued. Of course there are real human concerns when change happens quickly, but I am skeptical that smashing the looms is the best response.)

appreciatorBus | 2 days ago

> If AI is a threat to software engineering, I wouldn't expect many software engineers to actively accelerate that trend.

There are two strong forces at play. Employees generally want to put in the least effort possible and go home at 5; employers want to save money and pay for fewer employees. AI creates a strong symbiosis here, and both sides are focused on a short-term win.

add-sub-mul-div | 2 days ago

This is like asking me "how much of your software is built by the compiler?" -> the answer is 100%.

Ask "how much did you build then?" -> also 100%.

The compiler and I operate on different layers.

crazylogger | 2 days ago

I don't work for OpenAI and I doubt some random employee is going to come here and share what is likely a secret. I'm in the industry though so I have some idea of what's going on these days, both where I work and more broadly.

AI is getting better at writing code. However, writing code is just a fraction of the work of many software engineers. AI doesn't work independently: it needs to be guided, and its work needs to be reviewed, tested, etc. There are some domains where it does better and some where it doesn't. There's a range of "AI" work, from auto-complete-style assistance, to helping understand a code base, to writing code from a spec or doing other kinds of work.

All in all, I would say it's a decent productivity improvement in many situations. It's really hard to say how much, and it's also not a zero-sum game: as productivity improves, there's more work.

Something to keep in mind is that if you look at a modern software project likely most of the code executing is not code written by the developers of that project. There's a huge stack of open source bits executing for almost any new project.

Specifically at OpenAI, you also need to consider what kind of software they are likely writing. Some of it may be more or less "vanilla" code, while other parts are likely very specialized or performance-critical. Vanilla code like API wrappers or simple front-end pieces is more amenable to being written by AI, whereas the cutting-edge algorithmic/scheduling/optimization work is almost certainly not done by AI. At least not yet.

As software organizations become larger, there's a lot of overhead and waste. It is possible that AI can enable smaller teams, and that has a multiplicative effect because it lets you reduce that waste/overhead. There are likely also software engineers who will adapt to new workflows and some who will not.

It's really hard to say where things are going, but overall my sense is that this, like many other innovations, will lead to more software and more jobs, not the other way around. There are many moving pieces here, not just AI itself but geopolitics, macroeconomics, etc.: where new jobs will be created, what new types of software and technology will emerge, and so on. History seems to show us that we'll adapt, evolve, and grow.

YZF | 2 days ago

I absolutely believe that a large proportion of new code written is at least in part AI-generated, but that doesn't mean a large proportion of new code is 100% soup-to-nuts, pull-request-to-merge the result of decisions made by an agent and not a human. I doubt that very much.

I think the difference between situations where AI-driven development works and doesn't is going to be largely down to the quality of the engineers who are supervising and prompting to generate that code, and the degree to which they manually evaluate it before moving it forward. I think you'll find that good engineers who understand what they're telling an agent to do are still extremely valuable, and are unlikely to go anywhere in the short to mid term. AI tools are not yet at the point where they are reliable on their own, even for systems they helped build, and it's unclear whether they will be any time soon purely through model scaling (though it's possible).

I think you can see the realities of AI tooling in the fact that the major AI companies are hiring lots and lots of engineers, not just for AI-related positions, but for all sorts of general engineering positions. For example, here's a post for a backend engineer at OpenAI: https://openai.com/careers/backend-software-engineer-leverag... - and one from Anthropic: https://job-boards.greenhouse.io/anthropic/jobs/4561280008.

Note that neither of these require direct experience with using AI coding agents, just an interest in the topic! Contrast that with many companies who now demand engineers explain how they are using AI-driven workflows. When they are being serious about getting people to do the work that will make them money, rather than engaging in marketing hype, AI companies are honest: AI agents are tools, just like IDEs, version control systems, etc. It's up to the wise engineer to use them in a valuable way.

Is it possible they're just hiring these folks to try and make their models better to later replace those people? It's possible. But I'm not sure when in time, if ever, they'll reach the point where that was viable.

ivraatiems | 2 days ago

I don't work at OpenAI, but I use Codex, as I imagine most people there do too.

I actually use it from the web app, not the CLI. So far I've run over 100 Codex sessions, a large percentage of which I turned into pull requests.

I kick off Codex for one or more tasks and then review the code later, so they run in the background while I do other things. Occasionally I need to re-prompt if I don't like the results.

If I like the code, I create a PR and test it locally. I would say 90% of my PRs are AI-generated (with a human in the loop).

Since using Codex, I very rarely create handwritten PRs.

ianpurton | 2 days ago

IME I have had to review a lot of code written by AI; at times, almost all of it. And sometimes I write the code myself because LLMs just don't get it. AI has written 95% of my code, but never without review.

theusus | 2 days ago

Estimates will always be off compared to a plugin like WakaTime that tracks the actual amount of AI-generated vs. human-written code.

welder | 2 days ago