A quick and sobering guide to cloning yourself
What I haven't seen mentioned yet, and what greatly interests me, is the creation of semi-sentient messenger constructs. I don't have a better term for it, but although unsexy that covers it pretty well.
GPT-4 sure shows signs of sentience. Once fine-tuning to a specific task becomes commonplace, you could conceivably fine-tune an LLM to your own personality as well. Loaded up with specific knowledge and personality, it could then let you send people interactive messages.
A pseudo-intelligent construct that conveys your message, which the receiver can interrogate. And not just text: as this article shows, it could well be a multi-modal talking head, to tickle the social centers of your brain and give things more (perceived) personality.
No more spouting your requirements to the team over a boring video call; everyone gets your opinion as an interactive avatar, to query at will. I hope someone is working on this!
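To make the fine-tuning idea above concrete, here's a minimal sketch of what personality fine-tuning data might look like, in the chat-style JSONL layout commonly accepted by hosted fine-tuning services. The persona, field contents, and examples are entirely hypothetical:

```python
import json

# Hypothetical Q&A pairs written in "your" voice. One JSON object per line
# (JSONL) is the usual upload format for hosted fine-tuning services; the
# record shape here is illustrative, not tied to any particular vendor.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a digital stand-in for Alex. Answer as Alex would."},
        {"role": "user", "content": "What do you think of the Q3 roadmap?"},
        {"role": "assistant", "content": "Honestly? Too ambitious. I'd cut the mobile rewrite first."},
    ]},
    {"messages": [
        {"role": "system", "content": "You are a digital stand-in for Alex. Answer as Alex would."},
        {"role": "user", "content": "Tabs or spaces?"},
        {"role": "assistant", "content": "Spaces. Two of them."},
    ]},
]

# Serialize to JSONL: one training record per line.
jsonl = "\n".join(json.dumps(ex) for ex in examples)
print(jsonl.count("\n") + 1)  # → 2 training records
```

In practice you'd want hundreds or thousands of such records drawn from your actual writing, but the shape stays this simple.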
It's only more terrifying when you cross it with the idea that around 50% of the population already can't separate real from fake on the internet.
This is very impressive. Don't miss clicking the first link
This video feels like a tortured digital twin. Very unnerving action with the mouth. So creepy. But... if you did 5 seconds of full screen and then minimized down into a small circle and had your main content, I bet I wouldn't have noticed. Voice is passable.
And thus, it now becomes imperative for everyone who cares, to establish a good record of themselves in the public space so that the LLM corpus is representative of them or at least their “best foot forward”?
> Hopefully, the positive uses will outweigh the negative, but our world is changing rapidly, and the consequences are likely to be huge.
The word hopefully is holding an Atlas class amount of weight in that sentence.
People are not capable of scaling themselves to the amount of spam, fraud and manipulation AI enables. I don't think a human-like personal tutor is going to counter the tsunami of malicious AI generated content designed to optimally divide and break down the populace.
Just tried it. ElevenLabs doesn't really work with non-English languages. Five bucks out the window :)
It's time to stop answering unsolicited phone calls
Pretty cheap to do! It’ll only get better as well.
Ezra Klein at the New York Times has been running a series of excellent podcast episodes and columns on the topics of both generative AI and the attention industry. He and his guests raise significant points and offer some of the most sane, sober, and insightful commentary I've heard. It's well worth reflection and consideration.
There's the very poorly titled column "This Changes Everything" ("this" is "generative AI"): <https://www.nytimes.com/2023/03/12/opinion/chatbots-artifici...>
And two podcast episodes in particular:
- "The Imminent Danger of A.I. Is One We're Not Talking About": <https://www.nytimes.com/2023/02/26/opinion/microsoft-bing-sy...> TL;DR: "Who will these machines serve?"
- "A.I. Is About to Get Much Weirder. Here’s What to Watch For." <https://www.nytimes.com/2023/03/21/opinion/ezra-klein-podcas...>
- And on a different but closely-related theme: "How the $500 Billion Attention Industry Really Works" <https://www.nytimes.com/2023/02/14/opinion/ezra-klein-podcas...>
The links include both the audio and transcripts (the latter following a few days after air date) for the podcasts.
I expect Klein to cover both aspects throughout the next year.
What I especially like about Klein is that he's not only reacting to developments and rehashing demonstrated capabilities, but asking questions and anticipating what's to come, without the hagiographic / techno-optimistic lenses of some (e.g., Bill Gates's recently published note "The Age of AI has Begun" with its depressingly uninsightful "I’ve been thinking a lot about how—in addition to helping people be more productive—AI can reduce some of the world’s worst inequities").
What we've learnt about technology is that what it does depends greatly on who it serves. And amongst Klein's more interesting observations is that no one, not even those who create it, can know with certainty what aims AI is serving. Klein repeatedly notes that many of those directly engaged in creating the technology itself have little idea where it is headed or what it will be able to do:
> Since moving to the Bay Area in 2018, I have tried to spend time regularly with the people working on A.I. I don’t know that I can convey just how weird that culture is. And I don’t mean that dismissively; I mean it descriptively. It is a community that is living with an altered sense of time and consequence. They are creating a power that they do not understand at a pace they often cannot believe.
I've spent much of the past decade looking at the history of technology, and information technology in particular. There's a pair of books that stand out to me; they share a title, though they are separated by a decade and written by different authors: The Control Revolution, respectively by James Beniger (1986) and Andrew L. Shapiro (1999).
Beniger's book looks backwards at the development of largely commercial and corporate communications over the course of the Industrial Revolution (18th through 20th centuries), whilst Shapiro looks forward at the promise of a networked and digital online communications infrastructure. Both books have aged well, though some informed reading-between-the-lines may be necessary.
In particular, Beniger looks at business as an information-processing system, and not in the all-too-familiar (and rather facile) Hayekian market sense, but in terms of information flows within and between companies. As information technologies developed, how communications occurred was transformed immensely. Many of those changes seem to me to revolve around issues of trust.
The ornate and florid language of 18th- and early-19th-century correspondence spends much time and space asserting bonds of trust and faithfulness between correspondents. (How accurate or useful those assertions were is its own question, but the point remains: they're a major component of the writing.) Keep in mind that it might take days, weeks, or months for correspondence to reach its intended recipient (let alone unintended ones), and that remote offices or agents might be acting with tremendous autonomy for months or years at a time.
With the development of the telegraph, two things occurred:
- Communications became instantaneous, with multiple round-trip messages within the course of a day or an hour possible.
- Words got expensive.
Language became telegraphic.
American author Mark Twain exemplified much of this; his style of writing was as distinct for its directness as for the topics it covered. The influences of a newspaper pressman and editor working from telegraphed wire stories, with a sense of the physicality of a block of cast type, are clear to me.
The rise of complex corporations also played a huge role: railroads, manufacturing concerns (particularly General Electric), chemical companies where deviation from procedure could have explosive consequences (Du Pont, Dow), communications companies (Western Union and AT&T; remember that the second 'T' is for "Telegraph"), and the like. It's possible to trace RFC 822 (and successor) email headers directly to the memo fields of business correspondence, used to standardise correspondence from the late 19th century onward.
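That memo-field lineage is easy to see in any modern message. A minimal sketch with Python's stdlib `email` package; the addresses and content are hypothetical:

```python
from email.message import EmailMessage

# The familiar memo fields -- From, To, Subject, Date -- survive essentially
# unchanged as RFC 822/5322 message headers. Addresses are hypothetical.
msg = EmailMessage()
msg["From"] = "agent@branch-office.example"
msg["To"] = "director@head-office.example"
msg["Subject"] = "Weekly inventory report"
msg["Date"] = "Fri, 24 Mar 2023 09:00:00 -0000"
msg.set_content("Stock levels nominal; details to follow by post.")

# The wire format leads with those same standardised header lines.
print(msg.as_string().splitlines()[0])  # → From: agent@branch-office.example
```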
While the 19th century generally saw a decrease in in-band attestations of trustworthiness as message capacity increased, I strongly suspect the 21st century may see an increase in such attestations. One possibility is cryptographic mechanisms, the favourite of technologists (myself included), though adoption of such methods has to date been pathetically and disappointingly weak.
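A minimal sketch of what such in-band cryptographic attestation could look like, using a shared-secret MAC from Python's stdlib `hmac`. A real deployment would more likely use public-key signatures; the secret and message here are hypothetical:

```python
import hashlib
import hmac

SECRET = b"hypothetical-shared-secret"  # in practice: provisioned out of band

def attach_tag(message: bytes) -> bytes:
    """Append an authentication tag so the recipient can check origin and integrity."""
    tag = hmac.new(SECRET, message, hashlib.sha256).hexdigest().encode()
    return message + b"|" + tag

def verify(tagged: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    message, _, tag = tagged.rpartition(b"|")
    expected = hmac.new(SECRET, message, hashlib.sha256).hexdigest().encode()
    return hmac.compare_digest(tag, expected)

signed = attach_tag(b"Ship 40 crates to the Lyon office.")
tampered = signed.replace(b"40 crates", b"90 crates")

print(verify(signed))    # → True
print(verify(tampered))  # → False: the altered message no longer matches the tag
```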
Another is that multiple independent verifications of information will be required and increasingly common. This is already used in fields such as journalism and human-rights investigations. A problem emerges when it cannot be readily determined that two sources are in fact independent.
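The independence problem can be illustrated with a toy corroboration check; the source names and origin labels are hypothetical:

```python
# Toy sketch: accept a claim only when it is confirmed by at least k sources
# that do not share an upstream origin. The hard part in practice is knowing
# the true origin; here it's simply given as a label.
def corroborated(reports, k=2):
    """reports: iterable of (source_name, upstream_origin) tuples."""
    independent_origins = {origin for _, origin in reports}
    return len(independent_origins) >= k

# Three outlets report the claim, but two merely syndicate the same wire story:
reports = [
    ("OutletA", "wire-service-1"),
    ("OutletB", "wire-service-1"),
    ("OutletC", "field-reporter"),
]
print(corroborated(reports))  # → True: two genuinely independent origins
```

If all three outlets had syndicated the same wire story, the claim would look thrice-confirmed while resting on a single source.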
For businesses increasingly dependent on remote interactions, the risks of impersonation and fraud (is that really the CEO calling on a scratchy phone line, or an AI bot?) are a huge and growing problem, along with invoice and billing frauds and the like.
How we're going to address this, and how the notions of "something you are, something you know, and something you have" as multiple forms of remote attestation will evolve ... is going to be an interesting set of questions.
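As one concrete instance of the "something you have" factor: a minimal RFC 6238 time-based one-time password (TOTP) sketch using only the Python standard library. The base32 secret below is the RFC's published test key, not a real credential:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, step=30, digits=6):
    """RFC 6238 TOTP: a proof-of-possession code derived from a shared secret."""
    key = base64.b32decode(secret_b32)
    # Counter = number of `step`-second intervals since the Unix epoch.
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # RFC 4226 dynamic truncation: pick 4 bytes at an offset given by the last nibble.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test secret "12345678901234567890" (base32-encoded), T=59 seconds:
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59))  # → 287082
```

The code proves possession of the secret at a given moment, which is exactly the kind of lightweight in-band attestation that a "is this really the CEO?" check could build on.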
1. "Generative AI" seems to be the general term describing collectively ChatGPT, Sydney, Bard, and other current-generation large language model (LLM) AI chatbots.
2. Note: <https://www.gatesnotes.com/The-Age-of-AI-Has-Begun> HN discussion: <https://news.ycombinator.com/item?id=35250564>
5. See for example: <https://www.humanrightscareers.com/skills/beginners-guide-ho...>
Title should read “digitally cloning yourself”, as I thought this was actual human cloning.
NPR has picked up this month-old Substack post with commentary that both obscures some of the technical details (specific audio and video tools used) whilst adding commentary on political, propaganda, and fraud prospects of the technique:
"It takes a few dollars and 8 minutes to create a deepfake. And that's only the start"
HN discussion: <https://news.ycombinator.com/item?id=35275104>
Both items take sufficiently distinct angles that I feel separate posts are at least arguably warranted.