

I think you’re misunderstanding the argument. I haven’t seen people here claiming that the study is incorrect as far as it goes, or that AI is equal to human intelligence. But it does seem to have a kind of intelligence. “Glorified autocomplete” doesn’t seem sufficient, because it has a completely different quality from any past tool. Suppose that, on a technical level, the software pieces together output probabilistically based on its training. Can we say with any precision how the human mind stores information, or how it gives rise to intelligence? Maybe we’re stumbling down the right path and just need further innovations.
Pretty concerning that a “western democracy” is doing this, because it gives cover to the next one, and the next one.
It’s easy to say “oh, I’ll just stop using such-and-such a service,” but what happens when there are no legal services left to switch to?