About



Welcome to Parkies Unite, a website dedicated to supporting people living with Parkinson’s disease and sharing their experiences. We are a community of people who have been diagnosed with this chronic, progressive movement disorder, which affects the nervous system and the parts of the body controlled by the nerves. We use generative AI to create and curate content that is relevant, informative, and inspiring for our readers. Whether you are looking for tips on managing symptoms, stories of resilience and hope, or the latest research and treatments, you will find it here at Parkies Unite. Join us today and be part of a network of people who understand what you are going through and who are here to help you live your best life with Parkinson’s disease.

Large language models (LLMs) like ChatGPT have made impressive strides. They can produce text that looks like it was made by humans, leading to a lot of excitement about their capabilities.

But one persistent problem is that they are prone to making things up. These errors are commonly called “AI hallucinations” – but researchers Mike Hicks, Joe Slater and I think this label is incorrect and misleading.

We believe what ChatGPT is doing, both when it gets things wrong and when it gets them right, is bullshitting.

In everyday language, we call “bullshit!” when we think something isn’t true; philosophers, as usual, have a more specific definition of the term. In particular, philosopher Harry Frankfurt observed in 1986 that our culture is full of bull – that is, there is so much that is said or written just for effect: to sidestep a tricky question, to avoid punishment, or simply to sound clever.

We all know this, and let’s be honest: we’ve all done it at some point. A bullshitter, according to Frankfurt, doesn’t care whether what they say is true, but tries to convince us that they do.

LLMs work by estimating the likelihood that a particular word will appear next in a line of text. The model then generates text by adding words one by one, based on how likely they are to come next. The model is “trained” on a huge amount of human-produced text and feedback, with the aim of producing text that looks just like it was produced by humans.

This means they cannot be concerned with truth, because all they are designed to do is produce something that looks as if it is true. It’s possible that they’ll say something true, but only by fluke – only if the thing that is true is also the thing suggested as the best “fit”.
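To make that mechanism concrete, here is a minimal toy sketch in Python. Everything in it (the lookup table, the probabilities, the function names) is invented purely for illustration; a real LLM uses a neural network over a vast vocabulary rather than a hand-written table. The point is only that each word is chosen according to how likely it is to come next, and nothing anywhere checks whether the resulting sentence is true.

```python
import random

# Toy "model": given the words so far, return candidate next words and their
# probabilities. Real LLMs compute this distribution with a neural network;
# this hypothetical table just illustrates the idea of next-word likelihoods.
def toy_next_word_distribution(context):
    table = {
        ("the", "cat"): [("sat", 0.6), ("ran", 0.3), ("flew", 0.1)],
        ("cat", "sat"): [("on", 0.8), ("quietly", 0.2)],
        ("sat", "on"): [("the", 0.9), ("a", 0.1)],
        ("on", "the"): [("mat", 0.7), ("roof", 0.3)],
    }
    return table.get(tuple(context[-2:]), [("<end>", 1.0)])

def generate(prompt, max_words=10):
    words = prompt.split()
    for _ in range(max_words):
        candidates = toy_next_word_distribution(words)
        choices, weights = zip(*candidates)
        # Pick the next word by sampling according to its likelihood -- there is
        # no step that asks whether the claim being built up is actually true.
        next_word = random.choices(choices, weights=weights)[0]
        if next_word == "<end>":
            break
        words.append(next_word)
    return " ".join(words)

print(generate("the cat"))  # e.g. "the cat sat on the mat" -- plausible-looking, nothing more
```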
There are some technical and definitional caveats to be made (which we make in our full academic paper), but the core of our argument is that if LLMs are not sapient – that is, if they don’t “think” in any meaningful way – then they can’t care about truth one way or the other.

If they do “think”, then they match the classic Frankfurtian definition of a “bullshitter”, because they are indifferent to the truth and they are deceiving us about what they are up to. We don’t believe that they are thinking, as it happens, but it’s always best to cover your bases.

So, we think LLMs are full of it. But we also think this is a particular problem in education, for two reasons.

Firstly, getting ChatGPT to produce an essay for you means that you don’t engage with the material in the way necessary to produce a cogent, structured argument – you don’t learn anything.

This is, of course, the same problem that would occur if the user just got some other human to write their essay; but there is a second, deeper problem, one that links back to one of Frankfurt’s original concerns.

If we’re happy just to produce stuff that looks true, if we’re OK with saying any old thing that sounds right, then we’ve abandoned any concern for truth, or for knowledge.

Rather than trying to understand and critically engage with the topic, we instead fall back on simply trying to appear as if we do. That is bad both for learning and for society more broadly.

With thanks to the paper’s co-authors, Michael Hicks and Joe Slater.