Mission: Accepted! U.S. College Admissions Insights for International Students
October 6, 2024
A new study has uncovered alarming evidence of how artificial intelligence could reinforce social biases in college admissions and beyond. The research explored the implications of large language models (LLMs) by analyzing thousands of college application essays. Researcher A.J. Alvero (Cornell University) and colleagues found that the language of AI-generated essays closely resembles that of essays authored by socio-economically privileged male applicants.
These findings raise serious questions about social equity and diversity in college admissions.
Key Study Findings: AI's Hidden Biases
Privileged Alignment
AI-generated essays closely resemble those written by students from privileged backgrounds, particularly male students with college-educated parents and high social capital. This alignment risks reinforcing existing social hierarchies and disadvantaging applicants from diverse backgrounds.
Gender Bias
The study found a significant gender bias in AI-generated writing, with essays more closely matching the style of male applicants, potentially reinforcing gender disparities in higher education.
Linguistic Hegemony
AI writing shows less linguistic variation, potentially narrowing the definition of "good" writing. This standardization could marginalize unique voices and cultural expressions in college applications.
Perpetuating Social Biases
The study suggests that AI might unintentionally favor language styles associated with dominant social groups, quietly penalizing applicants who write differently.
The study's main takeaway is that AI can perpetuate existing social and educational disparities.
Since the quality of input determines the output, LLMs trained on biased texts will inevitably carry those biases both in their generative writing and essay assessments.
When college admissions offices use AI tools to evaluate application essays, they may unknowingly amplify existing biases. This can result in a less diverse student body: there is a real risk that valuable life experiences and diverse viewpoints are filtered out.
In addition, awareness of AI bias in college admissions might discourage some applicants from underrepresented groups from applying in the first place, further reducing their opportunities.
Impact on Diverse Applicant Groups
The study's findings have far-reaching implications for multiple groups of applicants. Because the writing style of privileged male applicants closely aligns with the implicit standards and expectations of AI tools, these applicants gain an advantage in the admissions process.
The built-in biases of LLMs can further perpetuate historical injustices. AI may misinterpret or undervalue essays written by minority groups and underprivileged applicants, potentially overlooking their unique voices and cultural expressions. Using biased LLM tools in college admissions may result in many authentic voices being seen as 'less polished' by AI standards.
In addition, female applicants risk being disadvantaged by AI's observed bias towards male writing styles. For non-binary and gender non-conforming applicants, the situation is even more precarious: if LLM systems are primarily trained on binary gender data, there is a significant risk that their unique perspectives are overlooked.
Since many applicants fall into more than one of the categories above, they potentially face compounded disadvantages. For instance, a low-income, first-generation female student might face several overlapping layers of AI bias in the admissions process.
Impact on International Applicants
International students may be particularly disadvantaged by AI-influenced admissions processes. Language barriers arise because AI models, primarily trained on native English writing, may struggle to accurately assess essays written by non-native speakers. Additionally, these systems can undervalue or misinterpret unique cultural perspectives and writing styles. Differences in educational backgrounds also play a role, as AI might favor writing styles typical in Western education systems, putting students from other backgrounds at a disadvantage.
The Path Forward
As colleges begin integrating AI tools into their admissions processes, it is crucial to adopt strategies that promote equity for all social groups. One key approach is to educate admissions staff about AI biases, including gender and cultural biases, and their potential impact on diversity. Maintaining a human element in the application review process is critical, as human application reviewers can capture nuances that AI might overlook. Most importantly, it is essential to regularly train AI algorithms on diverse datasets that represent various writing styles, socio-economic backgrounds, and international perspectives.
Conclusion
The integration of AI in college admissions presents a double-edged sword. While it offers potential for efficiency in the college admissions process, it also risks perpetuating social, gender, and cultural biases. The goal must be to harness AI's potential while actively working to counteract its biases. Training AI tools with diverse datasets can help ensure the college admissions process remains a pathway to opportunity for all.
Interested in Learning More?
Read the original research paper 'Large Language Models, Social Demography, and Hegemony: Comparing Authorship in Human and Synthetic Text' here.
Learn more about the study's findings by listening to our AI-generated 'Deep Dive' podcast! Fittingly, we created the podcast with Google's AI tool, NotebookLM.
#AIinCollegeAdmissions #AIthreateningdiversity #builtinbiasinAI #LLMbias #LLMSocialDemographyandHegemony #podcast