Gonna say right up front that I use AI. I use it as a tool in writing, to catch spelling or grammatical errors. I use it in coding to do the boring bits, like file parsing, so that I can focus on the creative parts. I generate images using AI. And at work, I am required to use AI. Really. I get marked down if I don’t use it enough.
In fact, all our developer innovation days seem to be centered on how AI can be entwined more deeply into every corner of the business. And it’s not just us; the urgency is because we may, somehow, be left behind in the industry if we don’t whole-heartedly embrace AI.
We know AI is prone to errors. It will tell you that, if you ask. But if you have to check an AI response for veracity every time you use it, you’re not really saving any time at all, and so in the interest of expediency, people will just begin to accept what AI tells them without question.
Elon Musk is showing us the future: his AI chatbot is specifically programmed to reflect his warped world view. It is his aim to ensure the responses his chatbot generates reflect a world that exists only within his twisted imagination; by manifesting it, he hopes to make it real.
OpenAI’s Sam Altman doesn’t really know what people are for, once AI is doing everything that needs to be done. Bill Gates says that in ten years “AI will replace most doctors and teachers — humans won’t be needed for most things”. Peter Thiel wants to use AI to track our every movement. Larry Ellison loves it, saying total AI surveillance will ensure “citizens will be on their best behavior.”
We use AI to render images, we use AI to write letters (or have the letters written for us), we use AI to write music for us, soon AI will drive us around, cook for us, teach us, take care of us… Billionaires foresee a world where humanity is docile and responsive to billionaire needs because AI has used our centuries of innovation and creativity against us.
I’ve read that kids, shown famous works of art, assume they were generated by AI. Why learn to paint, or play an instrument, or write, when AI can do it for you?
The idea was that AI would do the boring jobs; the housework, the carrying stuff and so on. But those things are hard. It turns out that stealing our creativity was the easy part.
Am I going to stop using AI? Probably not. One, I am required to as part of my job. And two, I don’t believe AI generally does an acceptable job. I’ve tried having AI write stuff, but it’s boring, leaden, and dull. The music it generates is sometimes exciting, but most of the time forgettable. The images are full of artifacts no human artist would add.
So I’m undecided, really. But if billionaires want it so bad, it can only be bad for humanity in the long run.
The header image was partially generated by AI.




They keep talking about the AI bubble and how it is going to pop. I think if it does, THIS is the bubble that will pop. And we’ll be left using AI to write boilerplate code and check our grammar and things.
And in robots. Not meaning Rosie from the Jetsons but dedicated assembly line robots and warehouse robots and such. Though I dunno, we might get Rosie-style robots for old folk who live alone and need companionship and someone to help. Almost like a mechanical service dog.
But yeah the billionaires are going to do their best to use it as another way to keep regular people oppressed in whatever ways they can. Eat the rich!
And once we’ve integrated AI into everything, they will suddenly increase the price so that they can have ALL the money.
The plain fact of it seems to be that it’s going to have to work one whole hell of a lot better than it does now before more than a few very niche applications are going to be handed off to AI without full human oversight. As you can very easily test for yourself, it just doesn’t do most things well enough for the end result to be trusted.
Of course, that won’t stop people trusting it anyway. But trusting it won’t make it work any better, either. What happens when what the AI doctors advise doesn’t make people better? Or when the AI teachers don’t hold the attention of their students? And as for AI driving cars, that’s something only someone living in a country with lots of straight roads and cities laid out on grids could possibly believe. In most of the world it’s not going to happen until we have the mythical self-aware AI, which we are nowhere remotely near and which, if we ever do get to it, will by definition have its own ideas about how it wants to spend its time, which probably won’t include acting as an unpaid chauffeur to meatbags.
I have a couple of articles bookmarked to turn into a blog post some day soon, each of which makes a pretty strong case that the companies that are going to do best out of the current AI boom are those who refuse to engage with it at all and who take the opportunity to hire the pick of the humans who get let go by the companies that wrongly believe AI will be able to do their jobs. That seems like a much more likely outcome to me.
Plus, the really big question, what happens if the tech bros are actually right and AI does successfully manage to replace all these human jobs? Who is going to buy the products and services the companies using the AIs make? How will those unemployed and unemployable humans pay for them? Who is going to buy and sit in all those AI-driven cars and where will they be going and to do what?
As far as I can see, if the hype is true the whole thing falls apart and if the hype isn’t true the whole thing falls apart. Does anyone have a theoretical model in which it works sustainably?
I don’t really understand why a billionaire needs people. I guess it takes a lot of people sitting around trying to figure out how to make their billionaire happy. There’s already lots of places regular people have never heard of, where only billionaires or millionaires with aspirations can go.
Frickin’ Musk wants a whole planet to himself. Nothing like aiming high. I hope he goes soon.
I saw a post the other day somewhere… can’t recall now… that said that AI is great for any assignment where being mostly right is sufficient, but useless for anything that will be scrutinized, because accuracy counts. (Also, I am pretty sure we had spell check and even grammar checking before OpenAI and whatnot, but I have also seen a lot of claims that things that have existed for decades count as AI, once again bringing into question what AI even is.)
So AI for legal documents: bad.
AI for customized comment spam bots: good… I guess. I am certainly getting a lot more custom-tailored spam.
I was looking for a way to better summarize posts for the Daily Blogroll. I knew there was an algorithm from some time back for summarizing articles: pick out the important words, then score each sentence by how many of those words it contains, whether those words also appear in the title or headline, and where the sentence sits in the document. Take as many of the top-scoring sentences as you need for your target word or character count, re-order them back into document order, and you have your summary, no AI required.
I was looking up the algorithm but I could find very few mentions of it. Almost everything was, “this service feeds your article into AI and then gives you a summary!” And, you know, I already know how to do that. I’m looking for something non-AI.
Almost impossible to find. I did eventually find the algorithm and might still implement it, but the search underscored how pervasive AI has become: even programmers have decided to just let AI do it, without any understanding of what is actually happening. AI is making programmers worse at their jobs.
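For what it’s worth, the algorithm described above is simple enough to fit in a page of Python. Here’s a minimal sketch, with the scoring weights (title bonus, position decay) being my own guesses rather than anything from a canonical version:

```python
import re
from collections import Counter

# A tiny stopword list; a real implementation would use a fuller one.
STOPWORDS = {"a", "an", "the", "and", "or", "of", "to", "in", "is",
             "are", "was", "it", "for", "on", "with", "that", "this",
             "both", "make"}

def summarize(title, text, max_sentences=3):
    """Extractive summary: score sentences by word frequency,
    title overlap, and position, then return the top few in
    original document order."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())

    # Frequency of every non-stopword in the whole document.
    words = re.findall(r'[a-z]+', text.lower())
    freq = Counter(w for w in words if w not in STOPWORDS)

    # Words from the title get a bonus when they recur in a sentence.
    title_words = set(re.findall(r'[a-z]+', title.lower())) - STOPWORDS

    scored = []
    for pos, sent in enumerate(sentences):
        sent_words = re.findall(r'[a-z]+', sent.lower())
        score = sum(freq[w] for w in sent_words if w not in STOPWORDS)
        score += 2 * sum(1 for w in sent_words if w in title_words)
        score *= 1.0 / (1 + pos * 0.1)  # earlier sentences weigh more
        scored.append((score, pos, sent))

    # Keep the best sentences, then restore document order.
    top = sorted(scored, reverse=True)[:max_sentences]
    return ' '.join(s for _, _, s in sorted(top, key=lambda t: t[1]))
```

No model, no API calls, and you can see exactly why each sentence was picked, which is rather the point.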