This post explores how bias can creep into word embeddings like word2vec, and I thought it might be more fun (for me, at least) if I analyzed a model trained on what you, my readers (all three of you), might have written.
Often when we talk about bias in word embeddings, we mean bias related to race or sex. But I'm going to treat bias a bit more generally, exploring the attitudes toward any number of topics that are manifest in the words we use.
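As a quick sketch of what probing an embedding for bias can look like, here is a minimal example using made-up toy vectors in place of real word2vec embeddings (the words and vector values are hypothetical, chosen only to illustrate the cosine-similarity comparison):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-dimensional "embeddings" (hypothetical values, for illustration only;
# real word2vec vectors typically have 100+ dimensions).
vecs = {
    "doctor": np.array([0.9, 0.1, 0.3]),
    "nurse":  np.array([0.8, 0.3, 0.2]),
    "he":     np.array([0.7, 0.0, 0.4]),
    "she":    np.array([0.6, 0.5, 0.1]),
}

# A simple bias probe: is "doctor" closer to "he" than to "she" in the
# embedding space? A positive difference suggests an association.
bias = cosine(vecs["doctor"], vecs["he"]) - cosine(vecs["doctor"], vecs["she"])
print(round(bias, 3))
```

The same comparison works for attitudes toward any topic, not just demographic categories: compare a topic word's similarity to words with positive versus negative connotations and see which way the embedding leans.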