> That said, is it a beneficial assignment to ask a student to read a long book? I haven't a clue. Is it beneficial to ask a student to write an essay? More on that in a moment.
As far as I can tell, he dropped this thread, and there wasn't "more on that in a moment."
But I'm kind of surprised that he "hasn't a clue," since the answer to that question is a clear "yes." The whole point of school is to teach kids to use their brains, and reading something complicated, understanding it, and being able to synthesize something about it is a pretty important brain-use.
> Here we can think of the long email with meaningless fluffy padding as being the business speak protocol that office employees communicate with today. And we can think of the bullet points as how we actually think.
You've got to be careful not to overgeneralize this kind of observation, a mistake that software engineers are especially prone to. Sometimes the "fluff" is meaningless, and "bullet points" are all that's really there. Other times the "fluff" is important stuff that the person making the judgment just doesn't understand or has trouble comprehending.
Software engineer types are often pretty ignorant without really realizing it, and often assume their superiority because of "intelligence" or some dumb shit like that.
Yes, LLM expansion of bullet points is fluff that wastes everyone's time, but that doesn't mean all or even most true thought can be compressed into concise bullet points. Furthermore, even when it can, there are often good reasons why the most concise and compact form is not the most desirable presentation.
The author ironically uses a lot of words to say very little, though I agree with the conclusion. It's already annoying to have someone use a lot of words to say very little (especially in a business context). Now it's free and easily accessible to anyone, whereas before it at least took some social stamina.
So people will do it, people will be annoyed by it, and people will gravitate toward more efficient communicators.
Personally I want to double down on this approach I wrote about a couple of weeks ago: https://www.sealambda.com/blog/this-post-passed-unit-tests/
Which is, to keep using LLMs as reviewers, rather than as writers.
A bullet-point list of ways you screwed up communicates something entirely different than a long form email filled with flowery "fluff" (as the author puts it).
In fact, if the author feels confident in this theory, I suggest they replace the blog post with this AI-generated bullet-point summary I just made...
> One day we'll just send bullet points as emails. We'll reach business speak protocol version 2.0. That which was verbose becomes terse. No more time wasted translating thoughts and injecting platitudes.
I'll celebrate the day this happens and becomes widespread. Conversing with Americans is painful compared to Germans, because Americans insist on being coddled all the time, and the very second you don't, they'll complain behind your back to your boss.
Fun fact: that cultural difference was also a huge part of why Wal-Mart failed to gain traction here in Germany. German consumers really didn't like staff welcoming them with a forced smile; that, plus bad press over labor-law violations, was their downfall.
I agree that LLMs turn short prompts into long code blocks, but I don't agree that it's fluff in the same way that email pleasantries are fluff.
The short prompt leaves a lot of room for interpretation. The code itself leaves zero room for interpretation (assuming the behavior of the coding language is well understood). I don't agree that AI will allow us to start relying on code that isn't fully defined just because it might allow our emails to remove fluff that didn't contribute to the meaning at all.
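That ambiguity gap can be made concrete with a toy example (the spec, names, and data here are all hypothetical). A bullet point like "sort users by signup date" says nothing about ordering direction, missing values, or ties; the code has to decide all three, and once it does, there is nothing left to interpret:

```python
# Hypothetical sketch: one vague bullet point, three decisions the
# code is forced to make explicitly.
from datetime import date

def sort_users(users: list[dict]) -> list[dict]:
    # Decisions the bullet point "sort users by signup date" never made:
    # - ascending order (oldest signup first)
    # - users with no signup date sort last
    # - ties broken by name, for a deterministic result
    return sorted(
        users,
        key=lambda u: (
            u["signup"] is None,      # missing dates go last
            u["signup"] or date.min,  # ascending by date
            u["name"],                # stable tie-break
        ),
    )

users = [
    {"name": "bea", "signup": date(2023, 5, 1)},
    {"name": "al", "signup": None},
    {"name": "cy", "signup": date(2021, 1, 9)},
]
print([u["name"] for u in sort_users(users)])  # → ['cy', 'bea', 'al']
```

Each of those three key components is an answer to a question the prompt left open, which is the sense in which the code, unlike the prompt, leaves zero room for interpretation.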
> How are professionals expected to communicate with each other? Usually with empty platitudes to kick things off like "Hey! How's it going? How's the family? How are you doing?" Messages are expected to be written across several coherent English sentences, neatly organized into paragraphs, finally with some sort of signature. In the programming world we refer to this as boilerplate. What is it that we are really trying to communicate? Usually a few short ideas that could be represented as bullet points, but that we need to fluff up with meaningless words so that we don't sound rude. Of course, this changes by culture and by language and is not applicable to many parts of the world, but it is definitely a thing in American English.
Do the rest of you really do this? I can’t recall receiving any slack or emails with this sort of thing over the 25+ years I’ve been working in a business environment, even though all of them have been in American English. I certainly don’t use that sort of “boilerplate”. I just jump straight into what the topic is.
It is like https://marketoonist.com/2023/03/ai-written-ai-read.html being written down at length.
I do agree that this is an annoying phenomenon. It took me a while to understand that there are people who write this style of email without using an LLM at all; those people are the source of the training data for LLMs.
The solution is "simple": move away from such people and stick to genuine communicators.
I have a new rule: if someone sends me an "AI" message, I will probably denylist them.
Not only hasn't the person thought about it or invested in it, but now they're jamming my ability to read into exactly what that one person said, and how they said it.
Many people have made the observation/joke that meaning->LLM->LLM->meaning is silly, but I don't recall anyone pointing out that information that a skilled reader/listener can discern from direct expression is lost.
> One day we'll just send bullet points as emails. We'll reach business speak protocol version 2.0. That which was verbose becomes terse. No more time wasted translating thoughts and injecting platitudes.
I'm not sure about you all, but as a non-manager at [big tech company] I can count on one hand the number of actual emails I've sent in the past year.
Everything is IM chats. It's actually pretty nice.
I don't think the final point about programming languages makes much sense.
In the overall software development process, lots of people contribute different things to create the product.
The job of the software developer is to bring the amount of ambiguity in the specification to zero, because computers can only run a program with zero ambiguity.
There have been lots of high level programming languages that abstract away certain things or give the programmer control over those things. The real thing that you want to do is pick a programming language that allows you control over the things you care about. Do you care about when memory is allocated and deallocated? Do you care about how hardware is used (especially GPUs and ML accelerators) or do you want the hardware completely abstracted away? Do you care more about runtime or dev iteration time? Does your program need to exist in a certain tech context?
There's no single programming language that lets everyone who cares about different things each choose to deal with them or not.
Let's discuss this spot on the wall while the building we are in is being excavated from beneath.
Yes, AI is going to change the world, but not in the way anyone seems to be discussing. We've created interactive logical assistance for everything and anything intellectual anyone does. The ramification is that expectations for intellectually difficult work are going to skyrocket, fueled by the magical thinking that is already rampant in all aspects of society. The net result will be significantly higher demands on anyone with a knowledge-based career; after all, "you have AI assistance now, why are you not 10X?"
We will all be forced to become adept at using AI, and not just casually. We will be required to operate at our intellectual edge, or we will find ourselves unemployable.
I was expecting some analysis of the economic impact of losing a whole class of jobs, and of decimating a load of others, which drives wages down.
That's the way it'll change the world; think manufacturing job losses in Europe/America in the late '90s/'00s.
I actually got a very similar automatic response to a take-home (that I spent 12 hours of my time on) during an interview process. Some feedback was good, but other feedback was not (one example: the feedback mentioned not enforcing a Node version, while such enforcement was in the package.json file). That, combined with the formatting, made me realize it was a 100% copy-pasted output based on some prompts, maybe with two or three words tuned, without any fact-checking.
A very distressing experience that prompted me to change how I approach take home assignments.
This was a very common meme two years ago when ChatGPT was released.
It's fine to see Thomas catching up with the times, but two pages of writing seems a bit overkill, imo.
Edit: Found it: https://marketoonist.com/2023/03/ai-written-ai-read.html ... and yeah, two years ago to the date. Always right :)
I mostly agree with what the author is saying with regard to "the current state of things." I do not feel, however, that it is a particularly large concern, especially long-term, given the second step (summarizing) is probably already integrated into everyone's e-mail client, and if not -- it will be at some point. The "smoothing out of difficult communications", however, may end up being worth the whole "having to read a summarized response."
The reason it won't matter "long term" is that e-mail clients are solving/will solve[0] the "give me 'the point' of this e-mail" problem. If my couple of decades of experience across multiple employers is any indication[1], the vast majority of people in software development fall into one of two camps: (a) They don't have the basics of written communication down. It's not a matter of misspellings or a misplaced semicolon or em dash. It's all lower case[2] with no punctuation, or with "..." (no spaces between, either) in place of every other form of punctuation. Or (b) they are generally grumpy people who write in a manner that fits their personality.
Conveying tone, correctly, via written text is hard unless the tone you're trying to convey is "frustration/anger/impatience". And, of course, the same folks who can't figure out punctuation tend to respond tersely. Between co-workers who work closely together, that's preferred. When my boss has to tell me something minor about my performance and sends it in a five-word e-mail, it comes off like I need to start looking for new work. Prior to AI, "good managers who were writing-challenged" would find templates online and replace words. It never sounded genuine. AI brings us a lot closer to that, while not requiring an enormous amount of effort on the part of the writer. It'll be a matter of time before a lot of that process happens within the client (if it doesn't, already). I know tone detection is a common feature on communications tools I use[3].
[0] Not entirely sure; I use e-mail so infrequently, but thinking about the chat app we use at work, it provides AI summaries of the day's chats in each channel.
[1] Anecdata, I know, but it's all I've got
[2] Including the first letter of every meeting invite and subject; if I have OCD, that triggers it.
[3] Divorce communications ...
Would it also be true to say:
AI will change the world, but not in the way the OP (Thomas Hunter) thinks.
--
The first statement, AI will change the world, is low surprise and clearly true already.
The second statement, not in the way X thinks, is also low surprise, because most technologies have very unpredictable impacts, especially if the technology is close to the singularity, or is the singularity.
There was a moment when google introduced autocomplete and it was a game changer.
LLMs are still waiting for their autocomplete moment: when they become an extension of the keyboard and complete our thoughts so fast that I could write this article in 2 minutes. That will feel magical.
The speed is currently missing.
I was expecting a "written from these bullet points by an LLM" note at the end.
Stop assuming how I think!
I hate titles that attempt to address the reader directly and personally when they can't possibly do so.
You can express the same idea without using this tactic. Just say "... but not in the way most might think"
By definition, if I knew how AI would change the world, I would invest in or build things to that end. The fact that we still don't have a great AI product outside of ChatGPT shows that no one knows what will happen.
The author claims to be able to tell AI content. I am wondering: is there any test to help me train to distinguish AI content? Like: this paragraph was written by a human, this one by an AI, and see how well we all do?
"Why waste time say lot word when few word do trick?" - Malone, Kevin
> Messages are expected to be written across several coherent English sentences, neatly organized into paragraphs, finally with some sort of signature. In the programming world we refer to this as boilerplate.
"This HTTP call has no message body and therefore no content, and can thus be ignored" he said confidently, not noticing the status code. The verbiage is where you find out useful information, like whether the speaker is on your side and whether they understand the problem the same way as you and whether they're dumb. It's not as if the reason we didn't adopt bullet-points-only ten years ago is that it required better AI.
More generally, I submit that when faced with a long email thread, skim reading is superior to LLM summaries in all cases (except maybe the one where the reader is too inexperienced to do it well). It's faster, captures more detail, and (probably most important) avoids the problem of the people on the thread coming away with subtly different understandings of the conversation.
The primary commercial application for AI seems to be enshittification. I think that will continue.
Laughable, the author is really bullet-point-brained.
I'm taking wagers that he only knows English.
TL;DR: AI summaries and bullet points for everything will change human communication to that format.
The problem with this post is that, despite mentioning programming languages in the title, the examples are all about writing emails. The author never addresses programming at all, which is very much a use case and will remain one, since processors run compiled machine code, not bullet points.
In my view the author is putting the cart before the horse here. His primary argument seems to be that people already think in bullet points, so the fluff around them is unnecessary and can be excised without destroying the original message. But that fluff is there for many reasons. It adds context, it allows us to commingle our meaningful and valid emotions alongside our facts, and ultimately, it lets us tell a human story.
The way in which we create and consume information has a direct effect on our experience of the world, and I think there is a deeper point to be made here about the way we use communication technology. The endless firehose of information is drowning our brains to the point that we are compelled to find a way to cope. But I would argue that the way to do that is to rate-limit the receipt of messages so that only the quality stuff gets through, rather than letting everything through and destroying every human aspect of it in the process. It's Twitter's 140-character-limit argument from last decade all over again; the medium becomes the message, so we must be careful which mediums we use.