CHM Seminar Series: Developing a 'Theory of Machine' to Examine Perceptions of Algorithms — J. Logg

💫 Short Summary

The video covers research on algorithm appreciation and algorithmic hiring: how people respond to algorithmic advice and why understanding human judgment matters for decision-making. Contrary to the common assumption of algorithm aversion, the findings show a preference for algorithmic over human advice, moderated by factors such as numeracy, expertise, and competition. The talk stresses clear communication, collaboration across disciplines, and attention to bias in algorithmic decision-making. Amazon's biased hiring algorithm serves as a cautionary example, underscoring the value of diverse algorithm development teams and of auditing algorithms for fairness.

✨ Highlights
Jennifer Logg of Georgetown University discusses algorithm appreciation and algorithmic hiring.
00:53
People's reactions to algorithmic advice in decision-making contexts are explored.
The importance of algorithmic judgment in organizations is highlighted.
The need for a better understanding of how to maximize the benefits of algorithmic advice is emphasized.
Research delves into how stakeholders assess algorithmic judgment and the evolving role of algorithms in managerial decision-making processes.
Importance of understanding how people respond to algorithmic advice.
03:25
Algorithms generally outperform human experts in accuracy.
Algorithms can only enhance human judgment if people are willing to listen.
Connection between producing insights and applying them is often overlooked.
A psychological distrust of algorithms has long been noted: expert clinicians were reluctant to believe that simple mathematical calculations could outperform their own judgment (see the sketch below).
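This is the classic clinical-versus-statistical-prediction result. Below is a minimal synthetic sketch of why a simple mechanical rule can beat an inconsistent expert; all data, weights, and noise levels are invented for illustration and are not from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented setup: an outcome driven by three standardized cues.
n = 1000
cues = rng.normal(size=(n, 3))
true_weights = np.array([0.5, 0.3, 0.2])
outcome = cues @ true_weights + rng.normal(scale=0.5, size=n)

# "Improper" linear model: just average the cues with equal weights.
model_pred = cues.mean(axis=1)

# Simulated expert: knows the right cues but applies them inconsistently,
# modeled here as extra judgment noise on top of the true weighting.
expert_pred = cues @ true_weights + rng.normal(scale=1.0, size=n)

def mae(pred):
    return np.abs(pred - outcome).mean()

print(f"equal-weight model MAE:  {mae(model_pred):.2f}")
print(f"inconsistent expert MAE: {mae(expert_pred):.2f}")
```

Even with the wrong weights, the consistent rule tends to produce the lower error, which is the pattern clinicians found hard to accept.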
Research on algorithm accuracy and people's perceptions.
08:09
The widespread claim that people distrust algorithms rests largely on anecdote rather than systematic evidence.
The study measures how people update their judgments after receiving advice from different sources (quantified below as Weight of Advice).
Results show a preference for algorithms over humans, challenging algorithm aversion.
Importance of benchmarking algorithmic advice against human advice is highlighted.
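Updating in these advice-taking studies is typically quantified as Weight of Advice (WOA), the fraction of the distance toward the advice that a judge moves. A minimal sketch, assuming the standard formulation (0 = advice ignored, 1 = advice fully adopted):

```python
def weight_of_advice(initial: float, advice: float, final: float) -> float:
    """WOA = (final - initial) / (advice - initial).
    0 means the advice was ignored; 1 means it was fully adopted."""
    if advice == initial:
        raise ValueError("WOA is undefined when advice equals the initial estimate")
    return (final - initial) / (advice - initial)

# Example: initial estimate 100, advice 150, revised estimate 130.
print(weight_of_advice(100, 150, 130))  # 0.6 -> 60% weight on the advice
```

Comparing mean WOA between advice labeled as algorithmic versus human is how a preference for one source over the other shows up in the data.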
Algorithm appreciation: People value advice more when labeled as coming from an algorithm rather than a person.
10:12
Researchers initially predicted the opposite results, highlighting a disconnect between expectations and empirical evidence.
The study also examines how subjective confidence in one's judgment affects the use of algorithmic judgment.
Participants record their own estimates before receiving advice, which allows updating to be measured cleanly.
Impact of confidence and expertise on decision-making between algorithmic advice and personal judgment.
14:07
People tend to have more confidence in their own estimates than in external advice, leading them to favor their personal judgment.
Expert national security professionals and laypeople were compared in forecasting abilities.
Despite subjective expertise manipulation, participants still leaned towards algorithmic advice.
Research aimed to understand how confidence and expertise affect responses to algorithmic advice, providing insights into decision-making processes.
Algorithmic advice is discounted more by experts than lay people, leading to less accurate forecasts.
17:29
Individuals with higher numeracy levels are more likely to trust algorithmic advice.
Decision makers tend to devalue algorithmic advice when comparing it to their own knowledge.
Expertise in a specific domain results in discounting advice, regardless of the source.
Various factors play a role in algorithm appreciation, emphasizing the complexity of decision-making processes.
Preferences between algorithmic and human advice.
21:22
Older individuals rely on algorithms as much as younger ones.
75% of participants chose algorithms over human advice.
Labels on algorithms influenced people's preferences.
Many prefer expert human advice over algorithms, despite the latter being cheaper and more accessible.
Research challenges assumptions on algorithm aversion and human judgment differences.
23:55
Study compares views on algorithmic hiring vs. human managers, with real-world implications.
Results show applicants have preferences on how their applications are reviewed.
The experiment involved hiring-relevant tasks plus a murder-mystery puzzle element.
Research highlights complexity of algorithm aversion in decision-making processes.
Preferences for hiring algorithms versus human judgment were explored, with 70% of applicants choosing a person over an algorithm in study one.
26:35
Factors influencing this preference included competition within the applicant pool.
Study three looked at how the hiring manager's characteristics impact applicants' preferences, revealing a reversal of preference based on in-group or out-group relations.
Study four found that 75% accuracy is needed for people to prefer an algorithm over a hiring manager.
Study tasks involved an anagram quiz, a trivia quiz, and a murder mystery to measure performance and competitiveness.
30:19
Participants were motivated by bonus pay and the opportunity to create a competitive application packet.
Participants were told roles would be assigned as applicants or hiring managers, though in fact everyone served as an applicant.
Deception was used in the study to measure performance and competitiveness.
Participants had the opportunity to write persuasive essays and choose who would review their application packet.
Preferences for human or algorithm assessment of application packets are influenced by competition in applicant pools.
33:31
In low competition, more people preferred a person over an algorithm, while the opposite was true in high competition.
Preferences for in-group members to assess applications were shown, but this preference changed if the manager disagreed on key political issues.
Subsequent studies explored systematic versus random error in decision-making biases.
Study findings on applicant preferences for human review vs. algorithm in hiring process.
37:45
70% of applicants prefer human review over an algorithm for their application packets.
Preference weakens with higher competition in the applicant pool.
People also prefer human judgment when the decision is self-relevant.
Results indicate a 75% accuracy benchmark for applicants to prefer an algorithm over a person in the hiring process.
Discrepancies between stated beliefs and actions when updating beliefs are discussed.
40:00
A new project compares responses to algorithmic and human advice in judgment scenarios.
The study finds differences in algorithm appreciation across judge-advisor conditions.
Deception is discussed as a potential tool for experimental efficiency, along with proposed field studies testing accuracy feedback in real time.
Audience questions about how algorithms are worded and introduced to participants lead to a discussion of how algorithm appreciation is operationalized.
Study on different operationalizations of algorithms and people's responses to advice from a black box algorithm.
43:21
Algorithm appreciation was consistent regardless of the type presented in the study.
Participants had varied definitions of algorithms, categorizing them as math, rules, and computer-related.
Importance of understanding public perceptions of algorithms and the need for clarity in communication about their functions and purpose was emphasized in the study.
Reasons for choosing algorithms in high competition scenarios.
46:48
Participants cited time constraints and concerns about hiring managers as reasons for their choice.
Efficiency played a significant role in their decision-making process.
The absence of a clear explanation prompted further exploration of the mechanisms behind these choices.
Speaker encouraged future research and experiments to better understand decision-making in competitive environments.
Research projects focus on algorithm appreciation and algorithmic hiring.
50:55
Theoretical framework created to analyze human and algorithmic judgment expectations and perceptions.
Framework aims to document differences in input, process, and output between human and algorithmic judgments.
Importance of understanding people's expectations in leveraging data for both types of judgment emphasized.
The framework generates predictions, grounded in the research findings, about the mechanisms at play.
Perceptions and expectations of algorithmic and human judgment.
54:37
Students worry that algorithms, focused only on objective criteria, may fail to recognize their unique qualities.
People expect algorithms to process fewer categories of cues, a perception tied to worries about biased decisions.
People also expect algorithms to provide less relevant data and fewer explanations for their judgments, making them less persuasive.
Seeing oneself as a 'special snowflake' rather than an average person shapes how algorithm recommendations are perceived.
Importance of Human Interaction in Algorithmic Decision-Making.
57:40
Emphasizes the need to expand the input-process-output framework to include decision-making and judgment.
Discusses applying this framework to algorithmic hiring and feedback scenarios.
Aims to reconcile various research perspectives and contribute to a comprehensive 'theory of machine'.
Highlights the significance of human experiences with data analytics and algorithmic recommendations in the development of a collective theory of machine.
Importance of clear communication between data analytics teams and decision-makers.
01:00:01
Disconnect between producing analytics and acting on them, with decision-makers not utilizing data effectively.
Emphasis on collaboration between computer science, human-computer interaction, and psychology for actionable data.
Example of Amazon using biased hiring algorithms and limitations of reverting to human judgment.
Significance of addressing biases and improving communication for effective decision-making based on data.
Addressing algorithmic bias in hiring practices.
01:02:36
Amazon's algorithm learned to favor resume language associated with confidence, which correlated with gender, producing biased hiring recommendations.
Diversity in algorithm development teams can help mitigate bias.
Auditing algorithms before launch is crucial to identify and address potential biases (see the sketch after this list).
Opportunities for academics and researchers to contribute to the ongoing discussion on algorithmic fairness.
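As one hedged illustration of what a pre-launch audit can look like, the sketch below computes an adverse-impact ratio, the selection-rate comparison behind the US 'four-fifths' guideline. The groups and numbers are hypothetical, not from the talk:

```python
from collections import Counter

def adverse_impact_ratio(decisions):
    """decisions: iterable of (group, selected) pairs, selected in {0, 1}.
    Returns each group's selection rate divided by the highest group's rate.
    Ratios below 0.8 flag potential adverse impact (the 'four-fifths' rule)."""
    totals, hired = Counter(), Counter()
    for group, selected in decisions:
        totals[group] += 1
        hired[group] += selected
    rates = {g: hired[g] / totals[g] for g in totals}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical audit data: group label, 1 if the algorithm selected the applicant.
sample = [("A", 1)] * 40 + [("A", 0)] * 60 + [("B", 1)] * 25 + [("B", 0)] * 75
print(adverse_impact_ratio(sample))  # {'A': 1.0, 'B': 0.625} -> group B flagged
```

A ratio check like this is only a screening heuristic, not a full fairness audit, but it makes the idea of auditing an algorithm before launch concrete.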