Article  ChatGPT still hasn't solved its diversity issues

#1
C C
ChatGPT has read almost the whole internet. That hasn't solved its diversity issues
https://science.ubc.ca/news/chatgpt-has-...ity-issues

PRESS RELEASE: AI language models are booming. The current frontrunner is ChatGPT, which can do everything from taking a bar exam, to creating an HR policy, to writing a movie script.

But it and other models still can’t reason like a human. In this Q&A, Dr. Vered Shwartz (she/her), assistant professor in the UBC department of computer science, and master’s student Mehar Bhatia (she/her) explain why reasoning could be the next step in AI—and why it’s important to train these models using diverse datasets from different cultures.

What is ‘reasoning’ for AI?

Shwartz: Large language models like ChatGPT learn by reading millions of documents, essentially the entire internet, and recognizing patterns to produce information. This means they can only provide information about things that are documented on the internet. Humans, on the other hand, are able to use reasoning. We use logic and common sense to work out meaning beyond what is explicitly said.

Bhatia: We learn reasoning abilities from birth. For instance, we know not to switch on the blender at 2 a.m. because it will wake everyone up. We’re not taught this; it’s something we understand based on the situation and our surroundings. In the near future, AI models will handle many of our tasks. We can’t hard-code every single common-sense rule into these robots, so we want them to understand the right thing to do in a specific context.

Shwartz: Bolting common-sense reasoning onto current models like ChatGPT would help them provide more accurate answers and so create more powerful tools for humans to use. Current AI models have displayed some form of common-sense reasoning. For example, if you ask the latest version of ChatGPT about a child’s and an adult’s mud pie, it can correctly differentiate between dessert and a face full of dirt based on context.

Where do AI language models fail?

Shwartz: Common-sense reasoning in AI models is far from perfect. We’ll only get so far by training on massive amounts of data. Humans will still need to intervene and train the models, including by providing the right data.

For instance, we know that English text on the web is largely from North America, so English language models, which are the most commonly used, tend to have a North American bias and are at risk of either not knowing about concepts from other cultures or of perpetuating stereotypes. In a recent paper we found that training a common-sense reasoning model on data from different cultures, including India, Nigeria and South Korea, resulted in more accurate, culturally informed responses.

Bhatia: One example included showing the model an image of a woman in Somalia receiving a henna tattoo and asking why she might want this. When trained with culturally diverse data, the model correctly suggested she was about to get married, whereas previously it had said she wanted to buy henna.

Shwartz: We also found examples of ChatGPT lacking cultural awareness. When given a hypothetical situation where a couple tipped four per cent in a restaurant in Spain, the model suggested they may have been unhappy with the service. This assumes that North American tipping culture applies in Spain when actually, tipping is not common in the country and a four per cent tip likely meant exceptional service.

Why do we need to ensure that AI is more inclusive?

Shwartz: Language models are ubiquitous. If these models assume the set of values and norms associated with western or North American culture, their information for and about people from other cultures might be inaccurate and discriminatory. Another concern is that people from diverse backgrounds using products powered by English models would have to adapt their inputs to North American norms or else they might get suboptimal performance.

Bhatia: We want these tools for everyone out there to use, not just one group of people. Canada is a culturally diverse country and we need to ensure the AI tools that power our lives are not reflecting just one culture and its norms. Our ongoing research aims to foster inclusivity, diversity and cultural sensitivity in the development and deployment of AI technologies.
#2
confused2
I am not 100% convinced that 'multiculture' is a good thing. All software is written in English, or at least using software that is written in English. You can put like Welsh names on the buttons so the Welsh think it's all done for them ...

