Artificial Intelligence

How to test AI, and the hidden victims of pig-butchering scams


In the past few years, multiple researchers have claimed to show that large language models can pass cognitive tests designed for humans, from working through problems step by step to guessing what other people are thinking.

These kinds of results are feeding a hype machine that predicts these models will soon come for white-collar jobs; that they could replace teachers, doctors, journalists, and lawyers. Geoffrey Hinton has called out GPT-4’s apparent ability to string together thoughts as one reason he is now scared of the technology he helped create.

But there’s a problem: there’s little agreement on what those results really mean. Some people are dazzled by what they see as glimmers of human-like intelligence; others aren’t convinced one bit. And the urge to anthropomorphize these models is muddying the picture of what they can and cannot do. Read the full story.

—William Douglas Heaven

The involuntary criminals behind pig-butchering scams

Pig-butchering scams are everywhere. The term refers to the lengthy, trust-building process of fattening a pig for slaughter, and these scams have swindled victims out of millions, if not billions, of dollars.

But in recent weeks, growing attention has been paid to the scammers behind these crimes, who are often victims themselves. A new book in English, a movie in Chinese, and a slew of media reports are shining a light on a horrifying trend in human trafficking, in which victims leave their homes in the hope of finding stable employment but end up held captive and unable to leave. Read the full story.