If a company the size of MS isn't able to handle the DoS caused by the LLM slurpers, then it really is game over for the open internet. We are going to need government-approved, ID-based logins just to read the adverts at this rate.
But this feels like a further attempt to build a walled garden around 'our' source code. I say 'our', but the first push towards KYC, asking for phone numbers, was enough for me to delete everything and close my account. Being on the outside, it feels like those walls get taller every month. I often see an interesting project mentioned on HN and go to clone the repo, but more and more often that fails. Browsing online is now limited too, and they recently disabled search without an account.
For such a critical piece of worldwide technology infrastructure, maybe it would be better run by a not-for-profit independent foundation. I guess, since it is just git, anyone could start this, and migration would be easy.
I don’t think the publication date (May 8, as I type this) on the GitHub blog article is the same date this change became effective.
From a long-term, clean network I have been consistently seeing these “whoa there!” secondary rate limit errors for over a month when browsing more than 2-3 files in a repo.
My experience has been that once they’ve throttled your IP under this policy, you cannot even reach a login page to authenticate. The docs direct you to file a ticket (if you’re a paying customer, which I am) if you consistently get that error.
I was never able to file a ticket when this happened because their rate limiter also applies to one of the required backend services that the ticketing system calls from the browser. Clearly they don’t test that experience end to end.
60 requests/hour for unauthenticated users
5,000 requests/hour for authenticated personal accounts
15,000 requests/hour for authenticated enterprise orgs
According to https://docs.github.com/en/rest/using-the-rest-api/rate-limi...
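If you want to check where you stand, the REST API has a rate_limit endpoint that reports your current limits. A minimal Python sketch, assuming the `requests` library and an optional personal access token in a GITHUB_TOKEN environment variable (that variable name is just a common convention, not required):

    # Query GitHub's rate_limit endpoint, anonymously or with a token.
    import os
    import requests

    headers = {"Accept": "application/vnd.github+json"}
    token = os.environ.get("GITHUB_TOKEN")  # optional; raises your limits if set
    if token:
        headers["Authorization"] = f"Bearer {token}"

    resp = requests.get("https://api.github.com/rate_limit", headers=headers)
    core = resp.json()["resources"]["core"]
    print(f"limit={core['limit']} remaining={core['remaining']} resets_at={core['reset']}")

The unauthenticated call should report the 60/hour bucket; with a token you should see the higher limit.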
I bump into this just browsing a repo's code (unauthenticated). It seems like one of the side effects of the AI rush.
Several people in the comments seem to be blaming GitHub for taking this step for no apparent reason.
Those of us who self-host git repos know that this is not true. Over at ardour.org, we've passed 1M unique IPs banned due to AI trawlers sucking down our repository one commit at a time. It was killing our server before we put fail2ban to work.
I'm not arguing that the specific steps GitHub has taken are the right ones. They might be, they might not, but they do help to address the problem. Our choice for now is based on noticing that the trawlers are always fetching commits, so we tweaked things so that the HTTP-facing git repo still works, but commit-based URLs are blocked. If you want those, you need to use our GitHub mirror :)
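For anyone curious about the mechanics, here's a rough, fail2ban-style sketch of the idea in Python rather than our actual setup; the log path, URL pattern, and ban threshold are all placeholders:

    # Scan an nginx/Apache "combined" access log for IPs hammering
    # commit-based URLs and print ban commands for review.
    import re
    from collections import Counter

    COMMIT_URL = re.compile(r'"GET [^"]*/commit/[0-9a-f]{7,40}')
    hits = Counter()

    with open("/var/log/nginx/access.log") as log:  # placeholder path
        for line in log:
            ip = line.split(" ", 1)[0]
            if COMMIT_URL.search(line):
                hits[ip] += 1

    for ip, count in hits.items():
        if count > 100:  # arbitrary threshold
            print(f"iptables -A INPUT -s {ip} -j DROP  # {count} commit fetches")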
> These changes will apply to operations like cloning repositories over HTTPS, anonymously interacting with our REST APIs, and downloading files from raw.githubusercontent.com.
Or randomly when clicking through a repository file tree. The first time I hit a rate limit was while skimming through a repository on my phone: around the fifth file I clicked, I was denied and locked out. Not just for a few seconds, either; it lasted long enough that I gave up on waiting and refreshing every ~10 seconds.
Are the scraper sites using a large number of IP addresses, like a distributed denial-of-service attack? If not, rather than explicit blocking, consider using fair queuing. Serve all the requests from IP addresses that have zero requests pending, then those from IP addresses with one request pending, and so forth. Each IP address contends with itself, so making massive numbers of requests from one address won't cause a problem.
I put this on a web site once, and didn't notice for a month that someone was making queries at a frantic rate. It had zero impact on other traffic.
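For concreteness, a toy sketch of the scheme described above, purely illustrative (a real server would wire this into its accept/dispatch loop rather than use module-level globals):

    # Per-IP fair queuing: each client IP gets its own FIFO, and the
    # dispatcher always serves the IP with the fewest requests waiting,
    # so a flooding client mostly just delays itself.
    from collections import deque, defaultdict

    queues = defaultdict(deque)  # ip -> queued requests

    def enqueue(ip, request):
        queues[ip].append(request)

    def next_request():
        pending = [(len(q), ip) for ip, q in queues.items() if q]
        if not pending:
            return None
        _, ip = min(pending)  # lightest-loaded IP is served first
        return ip, queues[ip].popleft()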
Does it seem to anyone else like the entire internet will eventually be login-only?
At this point, knowledge is being gathered and replicated to great effect, and sites that want to either monetize their content or stop bot traffic from wasting resources seem to have only one easy option.
What does “secondary” mean here in the error message?
> You have exceeded a secondary rate limit.
Edit and self-answer:
> In addition to primary rate limits, GitHub enforces secondary rate limits
(…)
> These secondary rate limits are subject to change without notice. You may also encounter a secondary rate limit for undisclosed reasons.
https://docs.github.com/en/rest/using-the-rest-api/rate-limi...
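The same docs say to stop and wait when you hit one, honouring the retry-after header if it's present. A hedged Python sketch of that (the helper name and retry count are my own):

    # Retry a GET when a secondary rate limit (403/429 with retry-after) hits.
    import time
    import requests

    def get_with_backoff(url, headers=None, max_tries=5):
        for _ in range(max_tries):
            resp = requests.get(url, headers=headers)
            if resp.status_code in (403, 429) and "retry-after" in resp.headers:
                time.sleep(int(resp.headers["retry-after"]))  # wait as instructed
                continue
            return resp
        return resp  # give up after max_tries, returning the last response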
The blog post is tagged with "improvement" - ironic for more restrictive rate limits.
Also, neither the new nor the old rate limits are mentioned.
The truth is this won't actually stop AI crawlers; they'll just move to a large residential proxy pool to work around it. I'm not sure what the solution is, honestly.
I assume they're trying to keep AI bots from strip-mining the whole place.
Or maybe your IP/browser is questionable.
Wow, I'm realizing this applies even to browsing files in the web UI without being logged in, and the limits are quite low?
This rather significantly changes the place of github hosted code in the ecosystem.
I understand it is probably a response to the ill-behaved, decentralized botnets doing mass scraping with cloaked user agents (everyone assumes it's AI-related, but I think that's just speculation and it's quite mysterious), which is affecting most of us.
The mystery botnet(s) are kind of destroying the open web through the countermeasures they provoke.
Most of these unauthenticated requests are read-only.
All of public GitHub is only 21 TB. Can't they just host that on a dumb cache and let the bots crawl to their hearts' content?
This was announced at https://github.blog/changelog/2025-05-08-updated-rate-limits...
Did I miss where it says what the new rate limits are? Or are they secret?
Also see https://github.com/orgs/community/discussions/157887 ("Persistent HTTP 429 Rate Limiting on *.githubusercontent.com Triggered by Accept-Language: zh-CN Header"), though the comments there show examples with no language headers.
I encountered this once too, but thought it was a glitch. Worrying if they can't sort it out.
I remember getting this error a few months ago, so this does not seem like a temporary glitch. They don't want LLM makers to slurp all the data.
Probably to throttle scraping by AI competitors, and to make them pay for the privilege, as many other services have been doing.
Even with authenticated requests, viewing a pull request and adding `.diff` to the end of the URL is currently rate-limited at 1 request per minute. Incredibly low, IMO.
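If it helps, the REST API can return the same diff through the pulls endpoint with the `application/vnd.github.diff` media type instead of the web `.diff` URL (still subject to the normal API limits, as far as I know). A rough sketch with placeholder owner/repo/PR number and a token assumed in GITHUB_TOKEN:

    # Fetch a pull request as a unified diff via the REST API.
    import os
    import requests

    resp = requests.get(
        "https://api.github.com/repos/OWNER/REPO/pulls/123",  # placeholders
        headers={
            "Accept": "application/vnd.github.diff",
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        },
    )
    print(resp.text)  # the raw diff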
You mean you want to track users better.
It's good that tools like Homebrew, which rely heavily on GitHub, usually support environment variables like GITHUB_TOKEN.
Once again people post in the "community", but nobody official replies; these discussion pages are just users shouting into the void.
Time for Mozilla (and other open-source projects) to move repositories to sourcehut/Codeberg or self-hosted Gitlab/Forgejo?
How would this affect Go dependencies?
It sucks that we've collectively surrendered the URLs to our content to centralized services that can change their terms at any time, without any control on our part. Content can always be moved, but moving the entire audience associated with a URL is much harder.
https://github.com/orgs/community/discussions/157887 This has been going on for weeks and is clearly not a simple mistake.
Just tried it in Chrome incognito on iOS and I do hit this 429 rate limit :S That sucks; it was already bad enough when GitHub started enforcing login just to do a simple search.
GitHub answered here: https://github.com/orgs/community/discussions/159123#discuss...