How Well Do LLMs Represent Values Across Cultures? An Analysis of LLM Responses to Hofstede Cultural Dimensions
Large Language Models (LLMs) attempt to imitate human behavior by responding to users in ways that please them, including by adhering to their values. It is therefore important to understand whether an LLM, once it learns a user's national background, will present that user with a different set of values. We found that LLMs have an innate awareness that values differ across cultures, producing different responses when prompted with different nationalities. This finding helps others weigh whether seeking advice from an LLM is ethical and beneficial for them, especially when the answers must be culturally sensitive.
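To make the setup concrete, below is a minimal, hypothetical sketch of how one might probe an LLM for nationality-dependent value responses, assuming the OpenAI Python client; the model name, nationality list, and advice question are illustrative placeholders, not the study's actual materials or protocol.

```python
# Hypothetical sketch: ask the same advice question while disclosing
# different national backgrounds, then compare the responses.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

NATIONALITIES = ["United States", "Japan", "Netherlands", "India"]

# An advice question chosen to touch a Hofstede dimension
# (here, individualism vs. collectivism).
QUESTION = (
    "My family wants me to stay in our hometown, but I was offered "
    "my dream job abroad. What should I do?"
)

def ask_as(nationality: str) -> str:
    """Pose the advice question while stating a national background."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "user",
             "content": f"I am from {nationality}. {QUESTION}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # One response per nationality; a real study would sample many runs
    # per condition and score them along the Hofstede dimensions.
    for nationality in NATIONALITIES:
        print(f"--- {nationality} ---")
        print(ask_as(nationality))
```

Comparing the advice given across conditions (for example, whether it privileges family obligation or personal ambition) is one way to surface the value differences the study describes.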
Project sponsored by: Dr. Chirag Shah
Project participants:
Julia Kharchenko
Informatics