https://bigthink.com/the-present/why-gre...zy-humans/
EXCERPTS: Researchers ran an experiment in which one group of consultants worked with the assistance of AI and another group did work the standard way.
[...] The group working with the AI did significantly better than the consultants who were not. We measured the results every way we could — looking at the skill of the consultants or using AI to grade the results as opposed to human graders — but the effect persisted through 118 different analyses. The AI-powered consultants were faster, and their work was considered more creative, better written, and more analytical than that of their peers.
But a more careful look at the data revealed something both more impressive and somewhat worrying. Though the consultants were expected to use AI to help them with their tasks, the AI seemed to be doing much of the work. Most experiment participants were simply pasting in the questions they were asked, and getting very good answers.
The same thing happened in the writing experiment done by economists Shakked Noy and Whitney Zhang from MIT — most participants didn’t even bother editing the AI’s output once it was created for them. It is a problem I see repeatedly when people first use AI: They just paste in the exact question they are asked and let the AI answer it. There is danger in working with AIs — danger that we make ourselves redundant, of course, but also danger that we trust AIs for work too much.
And we saw the danger for ourselves because BCG designed one more task, this one carefully selected to ensure that the AI couldn’t come to a correct answer — one that would be outside the “Jagged Frontier.” [...] (more at source)