LLMs and other recent neural network based AIs are not intelligent. They aren’t part way to intelligence. They’re a different kind of thing. They don’t do conjectures and refutations. They don’t think even a little bit like humans do.
They’re problematic because they can be confidently wrong and make stuff up while writing like an educated person who knows the subject. It’s often hard to tell whether they’re “hallucinating.” In many cases, they don’t give their sources. They’re often right about facts, but sometimes they aren’t.
There are also issues with AIs (and in some cases their human users too) as plagiarists and copyright violators. I don’t think Meta should have torrented millions of ebooks without paying for them, just ignoring copyright, and I do think they deserve to get in serious legal trouble for that. (Not that paying for one copy of each book is necessarily good enough to have the right to train AIs on those books. They didn’t even do that though. I don’t know exactly what the right answer for this is but I don’t think any of the big AI companies are doing the right thing.)
AIs also use a ton of computing power and electricity. They’re expensive. But venture capitalists and big tech companies are currently paying the bills and providing lots of services for free or cheap.
AIs are a neat tool that sometimes seems kind of like magic. They can be useful. Sometimes, answers are kind of like using Wikipedia. Other times, answers are like you might find with Google except without the content farms, blog spam, ads, SEO, and other crap that has made web search worse and worse over the last 20 years. Sometimes, AIs give answers that are the kind of thing you would have easily found on Stack Overflow 10 years ago, but which are harder to find now.
There are tasks AIs are pretty good at and others they’re bad at. Some answers are obviously bad, which isn’t such a big deal – it’s quick to ask the AI a question and if the answer sucks you can just try something else. Other answers are subtly bad – they can look like good answers but be wrong, which is more dangerous.
AIs can summarize files that you upload, make podcasts that conversationally explain uploaded documents to you, generate images, transcribe speech to text, and help with coding. There is value here along with the issues. For example, given an audio file of me reading a script, I like using AI to automatically figure out the timing of each word and create a subtitles file (Descript and YouTube can both do that). Because the correct text is already known, this gets better results than creating subtitles by automated transcription.
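To make the subtitles workflow concrete, here's a minimal sketch of the last step: turning word-level timings into an SRT subtitles file. It assumes an alignment tool (like Descript's export) has already produced (word, start, end) tuples; the sample data below is made up for illustration, and the grouping rule (seven words per cue) is an arbitrary choice, not what any particular tool does.

```python
def srt_timestamp(seconds: float) -> str:
    """Format a time in seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def words_to_srt(words, max_words_per_cue=7):
    """Group (word, start_sec, end_sec) tuples into numbered SRT cues."""
    cues = []
    for i in range(0, len(words), max_words_per_cue):
        chunk = words[i:i + max_words_per_cue]
        start = chunk[0][1]   # first word's start time
        end = chunk[-1][2]    # last word's end time
        text = " ".join(w for w, _, _ in chunk)
        cues.append(f"{len(cues) + 1}\n"
                    f"{srt_timestamp(start)} --> {srt_timestamp(end)}\n"
                    f"{text}\n")
    return "\n".join(cues)

# Hypothetical word timings, as an aligner might produce them:
words = [("Hello", 0.0, 0.4), ("world,", 0.5, 0.9), ("this", 1.0, 1.2),
         ("is", 1.25, 1.4), ("a", 1.45, 1.5), ("subtitles", 1.55, 2.1),
         ("demo.", 2.2, 2.6)]
print(words_to_srt(words))
```

The point of the sketch is just that once the AI has done the hard part (aligning each word to the audio), producing the actual subtitles file is simple, deterministic formatting.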
Writing code with AIs has lots of issues and downsides but it’s neat too and does have major potential upsides. AIs can help with small parts, or with making something instead of nothing, but they can also make a mess in an existing codebase and design code in bad ways.
I don’t think replacing customer service reps, artists, copywriters and programmers with AIs is currently a good idea, in the big picture. Big companies shouldn’t just fire all their staff in these categories thinking AI can do the job instead. And as time goes on, the goal should be more about AI tools aiding humans so they can be more productive, not about replacing humans. (I think humans have important capabilities that AIs don’t. I’m not trying to keep obsolete jobs around just to avoid unemployment.)
Lots of things suck, so AIs being flawed doesn’t necessarily make them worse than alternatives.
AIs are overhyped but that doesn’t make them undeserving of any hype. The amount of hype has been really extreme, and most of it has come from people who don’t have much understanding of how AIs work mathematically/programmatically and from people who think AIs will gain general intelligence with some additional refinements.
I’m not advising using AIs (beyond briefly trying them out, which I do think is worthwhile, since they’re a popular technology that gets mentioned a lot in our culture today and you don’t want to get really out of touch), but I’m not advising against using them either. I don’t want people to take some of my criticisms of AIs as meaning they shouldn’t be used for anything. They’re a tool with pluses and minuses that can be used well or poorly. How to use them well is hard to explain, and it’s hard to self-evaluate whether you’re using them well or not.
Also, a reminder: AI output posted on this forum must be clearly labeled. No undisclosed AI use here, please.