URLs are state containers
I agree, and this reminds me: I really wish there were better URL (and DNS) literacy amongst the mainstream 'digitally literate'. It would help reduce the risk of phishing attacks, let people observe and control state meaningful to their experience (e.g. knowing what the '?t=_' does in a YouTube URL), encourage trimming of personal info like tracking params (e.g. utm_) before sharing, and build understanding that https/the padlock doesn't mean trusted. Etc. Generally, even the most internet-savvy age group is vastly ill-equipped.
If the URL is your state container, it also becomes a leakage mechanism for internals that, at the very least, turns into a versioning requirement (so an old bookmark won't break things). It also bakes in implicit assumptions about browsers and passing URLs between them. At some point, things might not hold up (authentication workflows, for example).
That said, I agree with the point and expose as much as possible in the URL, in the same way that I expose as much as possible as command line arguments in command line utilities.
But there are costs and trade-offs with that sort of accommodation. I understand that folks can make different design decisions intentionally, rather than out of ignorance/inexperience.
Recommendation:
https://github.com/Nanonid/rison
Super old but still a very functional library for saving state as JSON in the URL, but without all the usual JSON clutter. I first saw it used in Elastic's Kibana. I used it on a fancy internal React dashboard project around 2016, and it worked like a charm.
Sample: http://example.com/service?query=q:'*',start:10,count:10
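For comparison, here's a minimal sketch of round-tripping state through a query parameter with plain JSON and `URLSearchParams` (helper names are mine); rison's appeal is that its encoding avoids most of the percent-escaping this approach produces:

```javascript
// Round-trip app state through a single query parameter.
// Plain JSON + encodeURIComponent for illustration; rison encodes the
// same structure with far less clutter.
function stateToQuery(state) {
  return 'q=' + encodeURIComponent(JSON.stringify(state));
}

function queryToState(query) {
  const params = new URLSearchParams(query);
  return JSON.parse(params.get('q'));
}

const state = { query: '*', start: 10, count: 10 };
const qs = stateToQuery(state);     // goes after the '?' in the URL
const back = queryToState(qs);      // recovered on page load
```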
> Browsers and servers impose practical limits on URL length (usually between 2,000 and 8,000 characters) but the reality is more nuanced. As this detailed Stack Overflow answer explains, limits come from a mix of browser behavior, server configurations, CDNs, and even search engine constraints. If you’re bumping against them, it’s a sign you need to rethink your approach.
So what is the reality? The linked StackOverflow answer claims that, as of 2023, it is "under 2000 characters". How much state can you fit into under 2000 characters without resorting to tricks for reducing the number of characters for different parameters? And what would a rethought approach look like?
Unfortunately, too many websites use tracking parameters in URLs, so when a URL is too long I tend to assume it's tracking and just remove all the extra parameters from it when saving or sending it to anyone.
Though I guess this won't happen if it's obvious at first glance what the parameters do and that they're all just plaintext, not b64 or whatever.
When the system evolves, you need to change things. State structure evolves too: you will refactor and rework it, rename things, move fields around.
A URL is considered a permanent string. You can break it, but that's a bad thing.
So keeping state in the URL will constrain you from evolving your system. That's a bad thing.
I think it's more appropriate to treat the URL like a protocol. You can encode some state parameters into it and decode the URL into state on page load. You could probably even version it, if necessary.
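The "URL as a versioned protocol" idea above can be sketched like this (the `v` parameter and per-version decoders are an assumed scheme, not from the article):

```javascript
// Per-version decoders let old bookmarks keep working after a refactor.
const decoders = {
  1: (p) => ({ theme: p.get('dark') === '1' ? 'dark' : 'light' }), // legacy format
  2: (p) => ({ theme: p.get('theme') || 'light' }),                // current format
};

function decodeState(search) {
  const params = new URLSearchParams(search);
  const version = Number(params.get('v') || 1); // unversioned URLs = v1
  const decode = decoders[version];
  if (!decode) throw new Error(`unknown URL state version ${version}`);
  return decode(params);
}

// An old bookmark and a current link decode to the same state shape.
const oldState = decodeState('?v=1&dark=1');
const newState = decodeState('?v=2&theme=dark');
```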
For very simple pages, storing entire state in the URL might work.
> ?theme=dark&lang=en - UI preferences
Although the article is completely on point, I disagree that theme should be stored in URL.
Imagine you’re browsing a site, and at some point you switch from light to dark theme. After some time, you press “Back”. Do you really expect to switch back to light theme, and not to go to the previous page?
I've seen a lot of "well-behaved" sites that store their state in the URL, but I've never seen one that stores the current theme.
It’s interesting that the theme is part of the state too, yet you don’t want to store it in the URL. So, this means not every part of the state should be stored in the URL? Then what's the criteria for choosing what to store, and what not to?
HATEOAS never gets the love it deserves until you call it something else.
Probably because it sounds like the most poorly named breakfast cereal ever.
One barrier to adoption is that big URLs are just ugly. Things are smooshed together without spaces, URL encoding, human-readable words mixed with random characters, etc. I think even devs who understand what they're looking at find it a little unsatisfying.
Maybe a solution is some kind of browser widget that displays query params in a user-friendly way that hides the ugliness, sort of like an object explorer interface.
I believe draw.io achieves complete state persistence solely through the URL. This allows you to effortlessly share your diagrams with others by simply providing a link that contains an embedded Base64-encoded string representing the diagram’s data. However, I’m uncertain whether this approach would qualify as a “state container” according to the definition presented in the article.
The latest version of Microsoft Teams is absolutely terrible at this... just one URL for everything. No way to bookmark even a particular team.
I like to keep state in the URL. It's nice when you can bookmark any section in an app and it brings you back to the exact same place, all the menus exactly the same. Also it's amazing for debugging. Any bug, I tell the user to send me the URL. I reproduce the issue instantly, fixed in 5 minutes. I wrote some very complex frontends without any tests thanks to this approach... Also it's great during development; when I make a change anywhere in the app, I just refresh the page... I never have to click through menus to get back to the part of the code I want to test. Really brings down my iteration time... Also I use vanilla JavaScript Web Components so I don't have to wait for a transpiler or bundler. Then I use Claude Code. It's crazy how fast I can code these days when it's my own project.
You are still thinking of the web as a hyperlinked collection of information serving the betterment of human knowledge, rather than a set of SPAs where you, through trial and error, try to get whatever AI-enabled product you are now forced to use to do what you ask.
One might even say that hyperlinks are the engine of application state.
Mmm.
You're doing two things:
1) You're moving state into an arbitrary, untrusted, easy-to-modify location.
2) You're allowing users to “deep link” into a page that is deep inside some funnel that may or may not be valid, or even exist at some future point in time, never mind skipping the messages/whatever further up.
You probably don't want to do either of those two things.
>Scott Hanselman famously said “URLs are UI”
I actually implemented a comment system where users just pick any arbitrary URL on the domain, ie, http://exampledomain.com/, and append /@say/ to the URL along with their comment so the URL is the UI. An example comment would be typed in the URL bar like,
http://exampledomain.com/somefolder/somepage.html/@say/Hey! Cool somepage. - Me
And then my perl script tailing the webserver log file sees the line and adds the comment "Hey! Cool somepage. - Me" to the .html file on disk for comments.
I wish there was a way to have undo/redo like when using pushState, but without polluting history. There is no separate "serializable state" API that is not tied to a URL. I could use LocalStorage, but I want to have multiple states in different tabs, persistent across reloads. Maybe storing "tab IDs" in URLs and state in LocalStorage is a good idea.
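That "tab ID in the URL, state in LocalStorage" idea can be sketched like this (the `tab` parameter and key scheme are assumptions; `storage` stands in for `window.localStorage` so the logic is testable):

```javascript
// Keep a small tab ID in the URL; keep the bulky serializable state in
// storage under a key derived from that ID. Survives reloads, stays
// separate per tab, and doesn't pollute history with every state change.
function getOrCreateTabId(url) {
  const u = new URL(url);
  let tab = u.searchParams.get('tab');
  if (!tab) {
    tab = Math.random().toString(36).slice(2, 8); // fresh ID for a new tab
    u.searchParams.set('tab', tab);               // history.replaceState(...) in a browser
  }
  return { tabId: tab, url: u.toString() };
}

function saveTabState(storage, tabId, state) {
  storage.set('tabstate:' + tabId, JSON.stringify(state));
}

function loadTabState(storage, tabId) {
  const raw = storage.get('tabstate:' + tabId);
  return raw ? JSON.parse(raw) : null;
}

const storage = new Map(); // stand-in for localStorage
const { tabId } = getOrCreateTabId('https://example.com/app?tab=abc123');
saveTabState(storage, tabId, { step: 3, draft: 'hello' });
const restored = loadTabState(storage, tabId);
```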
The amount of state that early video games stored in like 256 bytes of ram was actually quite impressive. I bet with some creativity one could do similarly for a web app. Just don’t use gzipped b64-encoded json as your in-url state store!
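In that spirit, here is a toy sketch of bit-packing a few fields into one small integer rendered in base 36 rather than gzipped base64 JSON (the field layout is made up for illustration):

```javascript
// Pack theme (1 bit), language index (5 bits), and page number (10 bits)
// into a single integer, then render it compactly in base 36.
function packState({ dark, langIndex, page }) {
  const bits = (dark ? 1 : 0) | (langIndex << 1) | (page << 6);
  return bits.toString(36); // e.g. "?s=<packed>" in the URL
}

function unpackState(s) {
  const bits = parseInt(s, 36);
  return {
    dark: (bits & 1) === 1,
    langIndex: (bits >> 1) & 0b11111,
    page: (bits >> 6) & 0b1111111111,
  };
}

const packed = packState({ dark: true, langIndex: 7, page: 42 });
const unpacked = unpackState(packed); // round-trips losslessly
```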
This is one of the things that bothered me the most from existing React libraries, if you wanted to update a single query parameter now you needed to do a lot of extra work. It bothered me so much I ended up making a library around this [1], where you can do just:
// /some/path?name=Francisco
const [name, setName] = useQuery("name");
console.log(name); // Francisco
setName('whatever');
Here's a slightly more complex example with a CodeSandbox[2]:

export default function SearchForm() {
const [place, setPlace] = useQuery("place");
const [max, setMax] = useQuery("max");
return (
<form>
<header>
<h1>Search Trips</h1>
<p>Start planning your holidays on a budget</p>
</header>
<TextInput
label="Location:"
name="place"
placeholder="Paris"
onChange={setPlace}
value={place}
/>
<NumberInput
label="Max Price ($):"
name="max"
placeholder="0"
onChange={setMax}
value={max}
/>
</form>
);
}
[1] https://crossroad.page/

The new web standard initiative BRAID is trying to make the web more human- and machine-friendly with a synchronous web of state [1],[2],[3].
"Braid’s goal is to extend HTTP from a state transfer protocol to a state sync protocol, in order to do away with custom sync protocols and make state across the web more interoperable.
Braid puts the power of operational transforms and CRDTs on the web, improving network performance and enabling natively p2p, collaboratively-editable, local-first web applications." [4]
[1] A Synchronous Web of State:
[2] Braid: Synchronization for HTTP (88 comments):
https://news.ycombinator.com/item?id=40480016
[3] Most RESTful APIs aren't really RESTful (564 comments):
https://news.ycombinator.com/item?id=44507076
[4] Braid HTTP:
I'm not certain that I agree with this because a URL makes no claims about idempotency or side-effects or many other behaviors that we take for granted when building systems. While it is possible to construct such a system, URLs do not guarantee this.
I think the fundamental issue here is that semantics matter and URLs in isolation don't make strong enough guarantees about them.
I'm all for elegant URL design but they're just one part of the puzzle.
> #/dashboard - Single-page app routing (though it’s rarely used these days)
I actually use that for my self-hosted app, because hash routing doesn't require .htaccess or other URL rewriting functionality server-side. So yes, it's not ideal, but when you don't fully control the deployment environment, it's better to reduce the requirements as much as you can.
Yes! This is a very under-utilized concept, especially with client-side execution (WASM etc!)
A few years back, I built a proof-of-concept of a PDF data extraction utility with the following characteristic: the "recipe" for extracting data from forms (think HIPAA etc) can be developed independently of confidential PDFs, signed by the server, and embedded in the URL on the client side.
The client can work entirely offline (save the HTML to disk, airgap if you want!) off the "recipe" contained in the URL itself, process the data in WASM, all client-side. It can be trivially audited that the server does not receive any confidential information, but the software is still "web-based", "browser-based" and plays nice with the online IDE - on dummy data.
Found a working demo link - nothing gets sent to the server.
https://pdfrobots.com/robot/beta/#qNkfQYfYQOTZXShZ5J0Rw5IBgB...
I'm going to provide a dissenting opinion here. I think the URL is for location, not state. I believe that using the URL as a state container leads to unexpected and unwanted behaviour.
First, I think it's a fact that the average user does not consider a URL to be a state container. The fact that developers in this thread lament the "new school" React developers who don't use the URL as a state container is proof of this. A React developer, no matter how inexperienced, is at least as knowledgeable about URLs as the average person, if not more; if they don't even consider the URL a valid container for state, then neither does the average person.
Putting state in the URL breaks a fundamental expectation of the user that refreshing a page resets its state. If I put a page into an unwanted state, or god forbid there is a bug that places it in an impossible state, I expect a refresh of the page to reset the state back. Putting state in the URL violates this principle.
Secondly, putting state in a URL breaks the user's expectations around sharing locations. When I receive YouTube links from friends, half of the time the "t" parameter is set to somewhere in the video and I don't know if my friend explicitly wanted to provide a timestamp. The general user has no idea what ?t=294833289 means in a URL. It would be better to store that state somewhere else and have the user explicitly create a link with a timestamp parameter if the desired outcome was to link to an explicit point in the video. As it stands now, when I send YouTube links to friends I have to remember to clear the ?t= parameter before sharing. This is not good UX.
There are other reasons why I think it's a bad idea, but I don't want this comment to be too long.
That doesn't mean not to use search parameters though. Consider a page for a t-shirt, with options for color and size. This is a valid use case for putting the color and size in the URL because it's a location property - the resource for a blue XL shirt is different from a red SM shirt, and that should be reflected in the URL.
That's not to say that state should never be put in the URL - in some cases it makes sense. But that's a judgement call that the developer should make by considering what behaviour the user expects, and how the link will most likely be used. For a trivial example, it's unlikely that a user wants to share their scroll position or if a dropdown is open when sharing a page. But they probably want to share the location they've navigated to on a map, as it's unlikely they're sharing a link to `maps.google.com` with others (although debatably that's not state, but rather a location property).
I use URLs for pixel art: https://www.mathsuniverse.com/pixel-art?p=GgpUODLkg-N0JchwOF...
I'm finishing building a framework at the moment. I'd rather say that they are state descriptors... They don't contain all the state, but they are a kind of hash key that allows retrieving application state. "Hypertext as the engine of application state."
URLs are user supplied. You can't trust user data in 95% of cases. Storing stuff belongs in a database or a cookie.
The wild thing about this is that for the longest time, URLs were the mechanism for maintaining state on a page. It is only with the complete takeover of JavaScript-based web pages that we even got away from this being "just the way it is". Browsers and server-rendered pages have a number of features that folks try their best to recreate with javascript, and often recreate it rather poorly.
I think the set of rules around when to put things in the URL and when not to is incredibly complex and requires serious thought. I don't want the whole history polluted with loads of entries either, so when to replace the current history item and when to push a new one also requires a lot of discussion.
It's fast becoming a lost art (alongside ensuring the text can be read by the 10% of the male population that is colour blind). It's one thing to coach a junior dev on implementing it properly into a Nextjs app (or whatever is trendy at the time), but quite another to have to explain this stuff to a Product Manager. If you're going to spend copious amounts of time with a designer to make sure the site is pixel perfect visually you should also have time to get your URLs right.
That's the reason I stay away and keep my customers away from SPAs. Good ole HTML forms do the trick for 99.95% of use cases.
Hot module replacement masks a lot of annoyances for end users. Yes, it's more instantaneous than reloading a page and relying on URLs for all of the state, and I am not advocating hard for abolishing HMR anymore, but it would be nice if we still used way more URL state than is currently the case. Browsers will also hibernate tabs to varying degrees, server sessions expire all the time, things are not shareable. The only thing that works as users expect is URL state. One thing I absolutely hate about iOS apps is how every bit of state is lost if I just have the app in the background for a few seconds; this even applies to major apps like YouTube, Google Maps, many email clients etc. Why do we live in this stupid world where things are not getting better, just because someone made things more convenient for developers?
PS: and I curse the day the social-media-brainwashed marketing freak coined the term "deep link" to mean just a normal link, as it's supposed to work.
I really like this approach, and think it should be used more!
In a previous experiment, I created a simple webpage which renders media stored in the URL. This way, it's able to store and render images, audio, and even simple webpages and games. URLs can get quite long, so can store quite a bit of data.
nuqs[0] is a great (React) library for managing state inside of the URL.
Modern browsers have an "open clean link" feature that strips all the query parameters (everything after the '?' character in the URL).
This is because many sites cram the URL full of tracking IDs, and people like to browse without that.
So if you are embedding state in your URL, you probably want to be sure that your application does something sane if the browser strips all of that out.
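One hedged sketch of "doing something sane": merge URL params over a set of defaults, so a stripped URL still yields a usable state (the parameter names here are illustrative):

```javascript
// Defaults win whenever the browser (or a cautious user) strips the query.
const DEFAULTS = { theme: 'light', lang: 'en', page: 1 };

function stateFromUrl(search) {
  const params = new URLSearchParams(search);
  const state = { ...DEFAULTS };
  if (params.has('theme')) state.theme = params.get('theme');
  if (params.has('lang')) state.lang = params.get('lang');
  if (params.has('page')) state.page = Number(params.get('page')) || DEFAULTS.page;
  return state;
}

const full = stateFromUrl('?theme=dark&lang=de&page=3');
const stripped = stateFromUrl(''); // "open clean link" removed everything
```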
Remember when URLs became unstable wacky identifiers 10 years ago? Thankfully that trend died.
This and the lack of proper a hrefs is my biggest pet peeve with a lot of SPAs.
Hanselman famously said “URLs are UI” and he’s absolutely right
A challenge for this is that the URL is the most visible part of an HTTP request but there are many other submerged parts that are not available as UI yet are significant to the http response composition.
Additionally, aside from very basic protocol, domain, and path, the URL is a very not human friendly UI for composing the state.
>If you need to base64-encode a massive JSON object, the URL probably isn’t the right place for that state.
Why?
I get it if we're talking about a size that flirts with browser limitations. But other than that I see absolutely no problem with this. In fact it makes me think the author is actually underrating the use-case of URL's as state containers.
I use this for my rss reader!
https://rssrdr.com/?rss=raw.githubusercontent.com/Roald87/Ha...
Also to consider: bot traffic and SEO.
Depending on which mechanism you use to construct your state URLs they will see them as different pages, so you may end up with a lot of extra traffic and/or odd SEO side effects. For SEO at least there are clear directives you can set that help.
Not saying you shouldn't do this - just things to consider.
Letterboxd does this really well - each view is its own page! It's so pretty compared to other sites
I disagree in the public URL, as either GPG --quick-generate in coining a counterpoint as a feature of anti-DDOS protocols.
Key is to generate capitol, which is being either a URL or playing hand in ball.
I use the concept for https://libmap.org to save the state of the map. You can share the libmap link via mastodon social or bluesky to make it permanent.
This is a small hobby project, I am not in IT.
Deeplinking is awesome! The Azure portal is my favorite example. You could be many layers deep in some configuration "blade" and the URL will retain the exact location you are in the UI.
To fully describe client side state you also need to look at DOM and cookies. The server can effectively see this stuff too (e.g., during form post).
I design my SSR apps so that as much state as possible lives in the server. I find the session cookie to be far more critical than the URL. I could build most of my apps to be URL agnostic if I really wanted to. The current state of the client (as the server sees it) can determine its logical location in the space of resources. The URL can be more of an optional thing for when we do need to pin down a specific resource for future reference.
Another advantage of not urlizing everything is that you can implement very complex features without a torturous taxonomy. "/workflow/18" is about as detailed as I'd like to get in the URL scheme of a complex back office banking product.
This is a risky idea, actually — at least in its fully expanded form.
Sure, in the prismjs.com case, I have one of those comments in my code too. But I expect it to break one day.
If a site is a content generator and essentially idempotent for a given set of parameters, and you think the developer has a long-term commitment to the URL parameters, then it's a reasonable strategy (and they should probably formalise it).
Perhaps you implement an explicit "save to URL" in that case.
But generally speaking, we eliminated complex variable state from URLs for good reasons to do with state leakage: logged-in or identifying state ending up in search results and forwarded emails, leaking out in referrer logs and all that stuff.
It would be wiser to assume that the complete list of possible ways that user- or session-identifying state in a URL could leak has not yet been written, and to use volatile non-URL-based state until you are sure you're talking about something non-volatile.
Search keywords: obviously. Search result filters? Yeah. Sort direction: probably. Tags? Ehh, as soon as you see [] in a URL it's probably bad code: think carefully about how you represent tags. Presentation customisation? No. A backlink? No.
It's also wiser to assume people want to hack on URLs and cut bits out, to reduce them to the bit they actually want to share.
So you should keep truly persistent, identifying aspects in the path, and at least try not to merge trivial/ephemeral state into the path when it can be left in the query string.
i see the complaints around URL length limits and i raise you..
storing the entire state in the hash component of the URL
since this is entirely client-side, you can pretty much bypass all of the limits.
one place i've seen this used is the azure portal.. (payload | gzip | b64) make of that what you will.
This should be used more often. I wish websites like Google would respect the language given in the URL. It always tries to guess my language based on IP and fails.
Sounds like ASP.Net Web Forms! Except it would fall apart anyway when you would reload!
Reminds me of xlink:href with an #xpointer(xpath) — with it you could xinclude an inner XML node out of a remote file
This is something you learn to appreciate when you do web scraping. I do overlook it for frontend webdev though
As an application developer I think this is very good advice, and I wish I would've been more strict about it earlier.
One of my previous side projects used this idea in the extreme: It's a two-player online word game (scrabble with some twists) but all the state is stored in the URL so it doesn't need a backend.
https://scrobburl.com/ https://github.com/Jcparkyn/scrobburl
React kid discovers the web
Sure, and file names are state & attribute containers too. A URL is a uniform resource locator. You can hack it, of course, but this is no less kludgy than overloading the filename. It never ceases to amaze me to see the recycling of good and bad ideas in this field.
You are either changing the meaning of "state", or are probably unaware of what it means. To start with, state of what? The app (HTTP server) or the HTTP client?
Not quite. As the L in URL says, it is the locator or address of the state. The S in REST implies the same, indicating states as the content, not path to it.
More good content with a bunch of GPT noise added, obvious from patterns like
No database. No cookies. No localStorage
Themes chosen. Languages selected. Plugins enabled.
Which have the pattern of rhetoric but no substance. Clearly the author put significant effort in, so why get an LLM to add noise?
Any blob of bytes is a state container
You can save so much data in the URL. I like how pocketcal.com stores the calendar information.
Yes, but keep it less than 1024 chars in length.
Duh :)
Is this not a basic REST principle? URLs and req/res bodies are the only ways to transfer anything, so they must be the way to transfer state.
It’s kind of nuts this even has to be explained. I had a coworker I’ve been trying to teach good application design and React state is the first “crap bucket” he always reaches for. I had to explain to him, “when we put values in the Url we don’t need to use state, because everything is already stored right?” “Uhhh sure fine go ahead and change it.”
But what bugs me about it is that this isn’t even that novel or intelligent of a realization. If you’ve used a web browser you’ve seen the url change. Connecting that with putting values in the url shouldn’t be such a huge leap. This was for a simple search page.
How do I stop this sort of brain dead unrealized thinking?
[dead]
[flagged]
When I get my way reviewing a codebase, I make sure that as much state as possible is saved in a URL, sometimes (though rarely) down to the scroll position.
I genuinely don't understand why people don't get more upset over hitting refresh on a webpage and ending up in a significantly different place. It's mind-boggling and actually insulting as a user. Or grabbing a URL and sending to another person, only to find out it doesn't make sense.
Developing like this on small teams also tends, in my experience, to lead to better UX, because it makes you much more aware of how much state you're cramming into a view. I'll admit it makes development slower, but I'll take the hit most days.
I've seen some people in this thread comment on how having state in a URL is risky because it then becomes a sort of public API that limits you. While I agree this might be a problem in some scenarios, I think there are many others where it is not, as copied URLs tend to be short-lived (bookmarks and browser history are an exception), mostly used for refreshing a page (which will later be closed) or for sharing. In the remaining cases, you can always plug in some code to migrate from the old URL to the new URL on load, which will actually solve the issue if you got there via browser history (it won't fix bookmarks, though).