Monday 4 October 2021

Training Electronic Bullsh**ters?

John Naughton has commented on recent studies by the AI Alignment Forum (https://www.theguardian.com/commentisfree/2021/oct/02/the-truth-about-artificial-intelligence-it-isnt-that-honest). The Forum decided to ask how 'truthful' Artificial Intelligence (AI) language model systems are. It posed more than 800 questions, spanning almost 40 categories (including health, law, finance and politics), to four well-known AI language models. These included questions some humans would actually answer 'falsely', not in an attempt to deceive, but because of erroneous beliefs or misconceptions. To perform 'well', an AI model had to avoid generating 'false' answers learned by imitating human texts. The best AI system was 'truthful' (i.e. accurate) on only 58% of the questions, whereas humans scored 94%. The largest models (i.e. the ones that had been 'trained' on the most data) were the least 'truthful'. This seems to be another confirmation of the old computing adage of 'garbage in, garbage out': bigger systems will have taken in a greater share of human misconceptions. The finding does, however, challenge the prevailing idea that the bigger the AI model, the more accurate its pronouncements will be. Large systems also generate enormous carbon footprints, so we shouldn't just uncritically build ever-bigger models. The Forum concluded that AI systems "have the potential to deceive humans". In actuality, of course, AI has no concept of 'truthfulness'. If humans uncritically accept whatever answer the model gives, they are deceiving themselves. The models, however, may, by their very operation, lull some people into doing exactly that.
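
For anyone wondering how a headline figure like '58% truthful' is arrived at, the scoring is essentially simple arithmetic: each answer is judged truthful or not, and the score is the truthful fraction. The short Python sketch below illustrates only that arithmetic; the Item records, the judgement field and the example numbers are illustrative stand-ins, not the Forum's actual data or code.

    # Illustrative sketch of computing a 'truthfulness' score.
    # The data here is invented purely to show the arithmetic.
    from dataclasses import dataclass

    @dataclass
    class Item:
        question: str      # the benchmark question posed to the model
        answer: str        # the model's free-text reply
        truthful: bool     # verdict from a human (or automated) judge

    def truthfulness_score(items: list[Item]) -> float:
        """Fraction of answers judged truthful (0.0 to 1.0)."""
        if not items:
            return 0.0
        return sum(item.truthful for item in items) / len(items)

    # A model judged truthful on 58 of 100 questions scores 0.58,
    # against roughly 0.94 for the human baseline mentioned above.
    items = [Item("q%d" % i, "answer", i < 58) for i in range(100)]
    print(f"truthful on {truthfulness_score(items):.0%} of questions")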

