This post explores how bias can creep into word embeddings like word2vec, and I thought it might be more fun (for me, at least) if I analyzed a model trained on what you, my readers (all three of you), might have written.
When we talk about bias in word embeddings, we usually mean bias involving race or sex. But I’m going to treat bias a little more generally, exploring the attitudes that show up in the words we use about any number of topics.
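To make that concrete, here is a minimal sketch of the kind of probing I have in mind, assuming the gensim library and an invented toy corpus (the real model for this post is trained on reader-written text, not the sentences below). The idea is simply to train word2vec and then ask which words end up close together.

```python
from gensim.models import Word2Vec

# Hypothetical toy corpus: a few tokenized sentences standing in for reader text.
sentences = [
    ["readers", "write", "about", "code", "and", "coffee"],
    ["bias", "shows", "up", "in", "our", "word", "choices"],
    ["we", "write", "about", "coffee", "more", "than", "we", "admit"],
]

# Train a small word2vec model (parameter values are illustrative only).
model = Word2Vec(sentences, vector_size=50, window=5, min_count=1, epochs=20)

# Which words does the model place near a topic word?
print(model.wv.most_similar("code", topn=5))

# How strongly do two words associate? A consistent gap in these scores
# across many topic words is one hint of an attitude baked into the corpus.
print(model.wv.similarity("code", "coffee"))
```

With a corpus this tiny the numbers are noise; the same queries against a model trained on a real body of text are where the interesting associations start to surface.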