Research: Why great AI produces lazy humans

#1
C C Offline
https://bigthink.com/the-present/why-gre...zy-humans/

EXCERPTS: Researchers ran an experiment in which one group of consultants worked with the assistance of AI and another group did work the standard way.

[...] The group working with the AI did significantly better than the consultants who worked without it. We measured the results every way we could, looking at the skill of the consultants and using AI to grade the results as opposed to human graders, but the effect persisted through 118 different analyses. The AI-powered consultants were faster, and their work was considered more creative, better written, and more analytical than that of their peers.

But a more careful look at the data revealed something both more impressive and somewhat worrying. Though the consultants were expected to use AI to help them with their tasks, the AI seemed to be doing much of the work. Most experiment participants were simply pasting in the questions they were asked, and getting very good answers.

The same thing happened in the writing experiment done by economists Shakked Noy and Whitney Zhang from MIT: most participants didn't even bother editing the AI's output once it was created for them. It is a problem I see repeatedly when people first use AI: they just paste in the exact question they are asked and let the AI answer it. There is danger in working with AIs. Danger that we make ourselves redundant, of course, but also danger that we trust AIs with our work too much.

And we saw the danger for ourselves because BCG designed one more task, this one carefully selected to ensure that the AI couldn’t come to a correct answer — one that would be outside the “Jagged Frontier.” ..... (MORE - missing details)
#2
Zinjanthropos Online
Have a feeling there are a lot of professionals out there with university degrees that they didn’t merit. Is AI the new plagiarism?

On this topic, have you watched TED Talks recently? I think it's gone WOKE, because it's just supposedly educated people ranting about how bad we all are, especially regarding minority groups, special interests, and such. It's unwatchable. I wonder if it, too, is AI-generated BS. Too bad; it used to be very informative and dealt with facts, legitimate studies, and discovery. Can't imagine paying to listen to this stuff live.
#3
C C Offline
(Mar 28, 2024 03:26 PM)Zinjanthropos Wrote: Have a feeling there are a lot of professionals out there with university degrees that they didn’t merit. Is AI the new plagiarism?

On this topic, have you watched TED Talks recently? I think it's gone WOKE, because it's just supposedly educated people ranting about how bad we all are, especially regarding minority groups, special interests, and such. It's unwatchable. I wonder if it, too, is AI-generated BS. Too bad; it used to be very informative and dealt with facts, legitimate studies, and discovery. Can't imagine paying to listen to this stuff live.

Might very well be. We know that LLMs (like Google's, below) are being trained on that ideology so that they don't turn into today's version of potty mouths (which happens when they learn from the raw "real world interactions" data of the internet).

That hidden DEI agenda package has to be suppressed enough to keep AIs from producing ethnic/gender-balanced things like Black Nazis, female Asian knights, and women popes. But the monomaniacal insanity of interpreting and diagnosing everything in the world in terms of systemic social oppression could still pop out during a digital spasm. (Gore Vidal: "I got the cuckoo to pop out of his forehead.")

Google’s Woke AI wasn’t a mistake. We know. We were there.
https://www.scivillage.com/thread-15612-...l#pid62930
#4
confused2 Offline
AI should be restricted to science and engineering type work; anything else is just asking for trouble. In reality, we're just being softened up for more advertising, product placement, data collection, etc.