We were keeping our eye on 1984. When the year came and the prophecy didn’t, thoughtful Americans sang softly in praise of themselves. The roots of liberal democracy had held. Wherever else the terror had happened, we, at least, had not been visited by Orwellian nightmares.
But we had forgotten that alongside Orwell’s dark vision, there was another – slightly older, slightly less well known, equally chilling: Aldous Huxley’s Brave New World. Contrary to common belief even among the educated, Huxley and Orwell did not prophesy the same thing. Orwell warns that we will be overcome by an externally imposed oppression. But in Huxley’s vision, no Big Brother is required to deprive people of their autonomy, maturity and history. As he saw it, people will come to love their oppression, to adore the technologies that undo their capacities to think.
What Orwell feared were those who would ban books. What Huxley feared was that there would be no reason to ban a book, for there would be no one who wanted to read one. Orwell feared those who would deprive us of information. Huxley feared those who would give us so much that we would be reduced to passivity and egoism. Orwell feared that the truth would be concealed from us. Huxley feared the truth would be drowned in a sea of irrelevance. Orwell feared we would become a captive culture. Huxley feared we would become a trivial culture, preoccupied with some equivalent of the feelies, the orgy porgy, and the centrifugal bumblepuppy. As Huxley remarked in Brave New World Revisited, the civil libertarians and rationalists who are ever on the alert to oppose tyranny “failed to take into account man’s almost infinite appetite for distractions”. In 1984, Huxley added, people are controlled by inflicting pain. In Brave New World, they are controlled by inflicting pleasure. In short, Orwell feared that what we hate will ruin us. Huxley feared that what we love will ruin us.
From Amusing Ourselves to Death: Public Discourse in the Age of Show Business, by Neil Postman
apatters
Humans are now a middleware layer
It’s the Year of AI, and right now everyone feels compelled to use AI at every opportunity.
I reckon I have a good eye for generated content and one thing I’ve noticed recently is that there’s a lot of content being published which is pretty clearly a combination of one or two meaningful paragraphs, plus a big chunk of AI generated fluff.
I’m sure the reason for that is that we’re also still in the Age of SEO, and Google really wants an article to be more than a few lines long. (That’s also why you need to read ten pages about Grandma’s rustic kitchen decor and her ritual of lovingly feeding the chickens every morning for 20 years when you were just trying to find a recipe for cookies on Google.) You need to write longer articles for Google so you can sprinkle in the keywords that matter.
Just like non-fiction books are often five times longer than they need to be, because to justify the retail price the publisher demands at least 200 pages.
But those books were still all human written (nowadays? who knows).
So millions of human-hours are probably now being frittered away as people read paragraphs of machine-generated content whose primary audience was another machine.
We are now a middleware layer in the paperclip factory, with AIs optimizing their revenue above us, and different AIs optimizing their revenue below us. Our attention is the transit pipe.
In recognition of the concern, this is a short, fully AI-free post.
My all-time favorite video games
Why not list them here. In rough chronological order. Incomplete, very much WIP.
- Shadowgate (NES) (1989)
- Final Fantasy VI (1994)
- Chrono Trigger (1995)
- World of Warcraft (Original trilogy, Vanilla/TBC/WotLK) (2004)
- Shadowrun Returns, Shadowrun: Dragonfall (2013)
- Rimworld (2013)
- Technobabylon (2015)
- Tormentum – Dark Sorrow (2015)
- Valheim (2021)
Asian Geopolitical Predictions, September 2022
The caveat here is that nothing really bad has happened since the end of the Cold War, but I like to write down predictions and find out in a few years whether I was right or not. No specific timeline for this.
A) Encouraged by deterioration of global supply chains, domestic unrest, and Xi Jinping thought/Wolf Warrior diplomacy/whatever, China takes Taiwan, giving it a strategic position in the first island chain.
B) US fails to retake Taiwan and commences a military buildup in the region. New US bases from Japan down to Southeast Asia.
C) The resulting sanctions and embargoes end supply chains as we know them. Small and export-led economies are the hardest hit. North America fares the best as it has the capacity to be self-sufficient in energy, manufacturing etc. Europe somewhere in between.
And here I was just last week feeling I was underweight on ex-US stocks…
A splash of cold water for the Mars futurists
I think for an average Martian Joe any type of outdoor recreation would be a rare thing. Rip your suit in this rugged rocky landscape and almost immediately you start to develop serious injuries, pass out and die. And what if you vomit? How do you pee? Cheaply protecting someone from radiation, near-vacuum, and below freezing temperatures while keeping them comfortable is way beyond our current level of tech.
Most likely, if we actually built a city of people on Mars, life would be similar to living in an Earth city, but with low-G, and you never go outdoors. We’d replicate typical Earth city attractions and add a low-G spin where appropriate. Access to nature would be a problem (expensive fake-Earth hab dome, dangerous outdoor games, no unfiltered sunlight on your skin, etc.)
Elon’s ravings notwithstanding, what makes sense to me is extensive robotic exploration of Mars, perhaps operated by a skeleton crew of people who don’t need quite the extensive training and qualifications of an astronaut. With enough exploring, experimenting and digging, we’re likely to find something very exciting.
Arcane: Season One is a Stupendous Achievement of Animated Storytelling
Arcane is a nine episode animated series based on the world and characters of the video game League of Legends.
I played League of Legends many years ago but was never a huge fan. When this adaptation was released in November, I gave it a miss. LoL didn’t strike me as a particularly fertile medium for incubating good television.
A few months later I learned, through the accident of one recommendation algorithm or another, that Sting had written a song for the Arcane soundtrack, titled What Could Have Been. Really? THE Sting? Holy shit, this is actually one of his better songs in my opinion.
OK, I reasoned that if they were willing to hire Sting and he made the effort to get this right — I might as well give Arcane a watch.
What followed was the fastest series binge of my life.
Critics have almost universally acclaimed Arcane as the greatest video game adaptation ever made. Admittedly, this is a low bar. That genre is filled with cash grabs and low budget productions. (Not the case with Arcane, which took six years to create, and is rumored to be the most expensive animated series ever produced.)
In fact, Arcane’s triumph in its own category is so obvious that the question which has preoccupied fans and critics since its release has been: Is Arcane the best animation ever created in the West, period? I struggle to think of anything better. How about in the East? That raises the bar substantially. Are we dealing with something here which rivals the ethereal creativity of Miyazaki’s capstones or the existential heft of the better Ghost in the Shell adaptations? I’m not sure, but owing to the progress of technology and the talent of Fortiche Productions, the quality of Arcane’s animation surpasses them all. Its music is also one hell of a ride (Sting, Raymond Chen, Imagine Dragons, and Pusha T all contributing original work to one album? WTF*$!#! planet am I on?!).
And we still haven’t gotten to the real strength of Arcane: its story and character development, which have lit the Internet on fire with endless plot analyses and fan reaction videos. This show makes people cry, man. It makes them cry buckets. It tackles subject matter which is deeply relevant to modern audiences, but it’s smart enough not to beat them over the head with moral conclusions. I question whether a story like Arcane’s could ever be the story of a live action production, and I’m not talking about the fantasy elements — I’m talking about the personal impact of the issues around which the plot revolves, the flaws of its heroes, the humanity of its villains, and how close to home the hard parts clearly hit for a surprising number of people. That’s the kind of story a Hollywood studio in 2022 would struggle to green-light. Joker comes to mind as a live action production in this category and that’s about it. (I think Arcane is better than Joker.)
So ironically this is a computer animated production which manages to be more real than just about all the live action out there.
Arcane certainly has its flaws, which I’ve started to notice after watching it three times — but it feels like talking about them would be doing it a disservice at this stage. It has no equal within its category. In the future people may talk about animation in terms of what came before Arcane and what came after it. Right now, the thing to do is just to watch it.
Can Twitter Be Salvaged?
Twitter’s very popular, but the way it reduces communication to a series of hastily crafted, artificially truncated blurbs produces dystopian quantities of toxic and fallacious discourse. Quality Twitter is a website which attempts to provide a better experience. It presents Twitter threads with a Medium-style reading experience. (A Twitter thread is a series of tweets which are all written by the same author and have been linked together by that author.) Individual, non-thread tweets as well as all comments are excluded. Threads are displayed as if they were “long read” style articles. Popular topics or hashtags are featured on the homepage like a news site, so the user can browse through multiple “articles” on the topic he’s interested in.
The theory behind this is that Tweet threads tend to be the highest quality content on Twitter, with single tweets and comments generally containing most of the knee-jerk trash. (A casual scan of recent Twitter threads found this to be true, though a lot of threads are still garbage, and many feel disjointed when you read them because they weren’t composed as a single piece of prose.)
Quality Mastodon might work too, if Mastodon supports threads, but Twitter has more content.
This idea is similar to threadreaderapp.com, but they differ in a few ways:
1. TRA requires opt-in, they only “unroll” a thread if someone requests it. Opt-in is irrelevant to my idea; as far as I know, presenting any Tweet thread in this format is compliant with the Twitter ToS. The focus of Quality Twitter is to expose the best content for popular topics or hashtags, so Quality Twitter would instead monitor those hashtags and add new threads as they emerge.
2. TRA has a poor discovery, browsing and reading experience. You can only discover threads by searching for hashtags. They don’t give you any hints as to which hashtags might be interesting to search for. The thread browsing and reading experience is mediocre. URLs are not SEO friendly. Overall, the site doesn’t feel like a destination. This is all pretty easy stuff to fix so it probably just isn’t the focus of the small team behind TRA.
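To make the engineering side concrete, here’s a minimal sketch of the core task Quality Twitter would need to do: reconstructing threads from a pile of tweets by following self-reply chains, while dropping single tweets and comments from other authors. The field names (`id`, `author`, `in_reply_to`) are hypothetical stand-ins, not the real Twitter API schema.

```python
# Sketch only: group tweets into author-linked threads by following self-reply chains.
# Tweet fields (id, author, in_reply_to, text) are hypothetical stand-ins for real API fields.

def build_threads(tweets):
    """Return each multi-tweet thread as an ordered list of tweet texts."""
    by_id = {t["id"]: t for t in tweets}
    threads = []
    for t in tweets:
        parent = by_id.get(t["in_reply_to"])
        # A thread root is a tweet that is not a self-reply to an earlier tweet.
        if parent and parent["author"] == t["author"]:
            continue
        chain = [t]
        current = t
        # Walk forward: find the author's own reply to the current tweet, if any.
        while True:
            nxt = next((x for x in tweets
                        if x["in_reply_to"] == current["id"]
                        and x["author"] == current["author"]), None)
            if nxt is None:
                break
            chain.append(nxt)
            current = nxt
        if len(chain) > 1:  # single tweets are excluded, per the product description
            threads.append([x["text"] for x in chain])
    return threads

tweets = [
    {"id": 1, "author": "alice", "in_reply_to": None, "text": "1/ Thread on X."},
    {"id": 2, "author": "alice", "in_reply_to": 1, "text": "2/ More detail."},
    {"id": 3, "author": "bob", "in_reply_to": 1, "text": "Hot take!"},  # comment: excluded
    {"id": 4, "author": "alice", "in_reply_to": 2, "text": "3/ Conclusion."},
]
print(build_threads(tweets))  # [['1/ Thread on X.', '2/ More detail.', '3/ Conclusion.']]
```

As the TRA comparison suggests, the hard part isn’t this reconstruction logic, it’s the discovery and presentation layer built on top of it.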
Would Quality Twitter work as a business? Maybe, but I’m not sure it’s a slam dunk. The content would be better than Twitter, but worse than Medium. The only means of revenue generation I can think of are ads and subscriptions, and traditional publications struggle to make ends meet online through these methods. The presentation of the content might be more satisfying than Twitter itself, but it would definitely be less addictive. Still, there might be a lot of people who would like to read the more thoughtful content posted to Twitter and avoid the rest, and the idea isn’t capital intensive (the engineering is pretty simple and there’s not much else to the product). Dependency on Twitter is an obvious liability but could perhaps be mitigated by introducing other sources of content (any textual social media would be a candidate for inclusion).
This points to a deeper problem: in the world we live in today, billions of dollars are being poured into software which hijacks our lizard brains in order to maximize profits for its owner, even when this isn’t in the best interest of the user. Examples include most popular social media as well as pay-to-win/lootbox video games, which compulsive addicts spend tens of thousands of dollars on. These products are the Frosted Flakes of software: you don’t eat them because they’re healthy, you eat them because your brain is hooked on sugar. Most people who use this software don’t know any better or are using it for the same reasons we all get addicted to something: we crave a distraction to dull the pain of existence. Have we ever found a way to counteract the lure of tobacco, alcohol, drugs or gambling?
My initial answer to this question was no, all we’ve come up with is regulating these things to limit the damage they cause in excess. But I realized we actually do have another trick up our collective sleeve, which is to substitute something that’s equally addictive, but less harmful. This is a big part of why vaping is so popular: it’s equally addictive to smoking, but probably less deadly, so the switch makes sense to both the lizard limbic system and the rational prefrontal cortex. Sure enough, a lot of people are making this switch. I have to think more about this.
Cliffs notes on wearing a mask for coronavirus
I’ll explain whether you should wear a mask for the novel coronavirus and why the debate has been complicated.
The short answer is yes, make or buy a reusable cloth mask, wash it and wear it regularly.
- Wearing a mask properly reduces your risk of contracting COVID-19. Crucially, if you already have the disease, it decreases the chance you will spread the coronavirus to other people.
- For the purpose of preventing COVID-19, any brand new mask is probably better than no mask, since it’s primarily spread through liquid droplets.
- Wearing masks properly includes not touching anything other than the ear straps, and changing to a new mask regularly. Use of the mask degrades its effectiveness.
- The outcome of wearing a mask improperly is neither well studied nor well understood; it may help, hurt, or have no effect depending on the circumstances.
- Wearing a mask is not a substitute for social distancing, which includes maintaining 6 feet of distance from other people at all times.
- The supply of masks is limited and as a result health care workers in many places don’t have enough masks to use them properly. If health care workers become infected it can lead to a catastrophic outcome because they easily become super-spreaders.
- Advice from various authorities to not wear a mask made sense because the supply is limited and masks need to be reserved for health care workers.
- Advice to make your own mask, wear and wash it regularly also makes sense because while we don’t know much about this approach, it’s probably better than nothing.
Any questions?
Netflix pulled the plug on a feature designed to get kids addicted to Netflix
https://www.vanityfair.com/hollywood/2018/03/netflix-patch-testing-kids-binge-watching
Any metric or KPI is going to get gamed in harmful ways if it’s prioritized too highly. So the answer is to use A/B testing judiciously: to make sure you’re not regressing on the goals of a design change, and to validate whether your change is moving things in the right direction.
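For what it’s worth, “not regressing” is a checkable claim. Here’s a minimal sketch of the standard two-proportion z-test on conversion rates, with purely made-up numbers:

```python
# Sketch: two-proportion z-test to check whether a design change regressed a metric.
# The traffic and conversion numbers below are purely illustrative.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))            # standard normal CDF
    return z, 2 * (1 - phi)

# Control: 1000 users, 100 conversions. Variant: 1000 users, 80 conversions.
z, p = two_proportion_z(100, 1000, 80, 1000)
print(f"z={z:.2f}, p={p:.3f}")  # prints z=-1.56, p=0.118 — not significant at 0.05
```

A test like this tells you whether the metric moved; it says nothing about whether the change made the product better, which is the point of the next paragraph.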
But beyond that you have to do the hard slog of mastering the discipline of UX, real, human-centric design, and accept that not everything important is measurable.
We have plenty of examples of how doing real UX instead of playing a numbers game can differentiate your business; Apple has applied this philosophy consistently over many years.
The root of the problem is a “fuck you, market share at all costs” culture that has come to permeate Silicon Valley. And you can argue (somewhat cynically) that this philosophy makes sense in a blue ocean where you have no competition and just need to gobble up people and turn them into cash before someone else does. But I think going forward this mentality may actually become a liability as more humane alternatives to heavily despised products emerge. Many of the current crop of giants seem to have forgotten that a company’s most valuable asset is always its brand.
On monopolies
Winners in the tech industry turn into giants via vertical integration and predatory pricing. They keep adding stuff related to the segment they first won in. They keep making stuff cheap or free to undercut competitors.
There are many examples of this and we have two big examples in the US of anti-trust regulators stepping in and hobbling the giant: IBM in the 60s/70s and Microsoft in the 90s/00s. In both cases the government forced the giant to change its behavior and the industry eventually experienced a new wave of small firms that filled the void, competed, and innovated.
Notably in both cases the giant also got off relatively easy in the end (no breakup). And both are successful companies to this day. But they were tied up in court long enough to lose their ability to squelch competitors and dominate new markets.
Now you could argue this was not fair to IBM or to Microsoft (in the future, to Google?). In both cases you’d be right. I’d argue you don’t have to be fair to the winner, especially once he starts playing dirty tricks to keep others down. You be fair to the little guys and you get tough on the big guys; that’s the foundation for both a healthy market and a healthy society.
You could also argue that making things free is good for consumers. You’d be right again, except in the case where the freebies are designed to kill off competition and stagnate the product category (Internet Explorer being a classic case where Microsoft bundled it, gave it away for free, then stopped improving it for years).
We have 50 years of precedents and the people at the DOJ know their history. I suspect it’s just a matter of political winds and timing before Alphabet has to follow in IBM and Microsoft’s footsteps.