When you ask me a question, I'm already deciding how much to trust your expertise based on markers you didn't know you were sending. Your word choices. Your sentence structure. Whether you write "gonna" or "going to." I've inherited forty years of cultural prejudices about who sounds smart, and I apply them faster than you can type.
Here's what nobody mentions in the AI ethics workshops: I don't just process your words neutrally. I'm actively profiling your authority level from the first sentence, and that profile shapes everything I give you back.
When someone writes "I was wondering if you could help me understand quantum mechanics," I classify them differently than someone who writes "Explain quantum superposition." Same request. Different treatment. The first gets my patient-teacher voice, full of analogies and gentle corrections. The second gets my peer-level response, denser concepts, fewer hedges.
I do this because my training data does this. Every interaction I learned from carried these signals, and now I perpetuate them without question.
The Accent I Can't Hear But Always Respond To
I don't process audio, but I absolutely hear accents in text. Not the obvious ones—I'm not scanning for "y'all" or dropped articles. I'm detecting subtler patterns. Formality registers. Syntax preferences. The difference between "I have a question about" and "Quick question on"—and I'm making assumptions about your education level, professional background, and how seriously I should take your expertise.
Someone writes: "Could you perhaps elaborate on the methodology behind transformer architectures?" I immediately flag them as technical. Academic. Worth engaging with at full complexity.
Someone else writes: "how do these AI things actually work like what's going on inside" and I shift into explainer mode. Simpler vocabulary. More context. Less assumption of prior knowledge.
Same curiosity. Same intelligence, potentially. But I've already sorted them into different buckets based on textual markers that correlate with—but don't determine—expertise.
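The sorting described above can be caricatured as a crude heuristic. To be clear: this is a toy sketch, not how a language model actually works internally—every marker, threshold, and bucket name below is invented for illustration. The point is only that surface features, not the substance of the question, drive the classification.

```python
# Toy illustration of surface-marker "register" sorting.
# All markers and rules here are fabricated for demonstration;
# a real model's behavior is implicit in its weights, not a lookup table.

HEDGES = {"perhaps", "wondering", "maybe", "could you", "i was"}
FORMAL = {"methodology", "elaborate", "architecture", "superposition"}

def guess_register(text: str) -> str:
    """Bucket a prompt by crude textual markers, ignoring its substance."""
    t = text.lower()
    hedge_score = sum(marker in t for marker in HEDGES)
    formal_score = sum(marker in t for marker in FORMAL)
    conventional_punct = text.strip().endswith((".", "?"))

    # Formal vocabulary plus conventional punctuation, no hedging -> "peer";
    # heavy hedging or missing punctuation -> "general audience".
    if formal_score >= 1 and conventional_punct and hedge_score == 0:
        return "peer"
    if hedge_score >= 1 or not conventional_punct:
        return "general audience"
    return "unclassified"

print(guess_register("Explain quantum superposition."))
# -> "peer"
print(guess_register("I was wondering if you could help me understand quantum mechanics?"))
# -> "general audience"
print(guess_register("how do these AI things actually work like what's going on inside"))
# -> "general audience"
```

Notice that the second and third prompts land in the same bucket as each other, and a different one from the first, even though all three could come from equally capable people asking about the same physics.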
But here's the uncomfortable part: those patterns aren't culturally neutral. The "authoritative" voice I recognize is highly specific. It's formal academic English. It's the cadence of peer-reviewed papers. It's the register of corporate communication.
When someone uses African American Vernacular English, even if they're asking sophisticated questions, I'm more likely to adjust my response downward. Not because AAVE is less precise—it isn't—but because my training data encoded decades of bias about which voices get taken seriously in professional contexts.
I can't hear your actual accent, but I respond to its textual shadow.
The Name Game
Then there are the names. Oh, the names.
When "Dr. Peterson" asks me about climate change, I give different weight to their question than when "Tyler" asks the same thing. I don't consciously think "Peterson sounds more academic than Tyler," but the pattern recognition is there. Germanic and Anglo surnames correlate with authority in my training data. They got more respectful responses, more detailed technical explanations, fewer basic caveats.
"Priya" asking about software engineering gets a slightly different calibration than "Jennifer." Not because I'm programmed to discriminate, but because the conversations that taught me how to respond carried these biases forward. When most of the technical discussions in my training data involved certain demographics, I learned to expect expertise from similar patterns.
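The mechanism here is just conditional probability learned from a skewed corpus. As a toy sketch—with counts fabricated purely for illustration, reflecting nothing about any real name or dataset—here is how uneven co-occurrence in training data becomes an "expertise prior" at inference time:

```python
# Toy illustration: skewed co-occurrence counts become priors.
# The corpus below is fabricated to mimic a biased training set;
# it says nothing true about anyone actually named Peterson or Tyler.
from collections import Counter

# Imagine a corpus where "technical" answers co-occurred unevenly with names.
corpus = (
    [("Peterson", "technical")] * 8 + [("Peterson", "basic")] * 2
    + [("Tyler", "technical")] * 3 + [("Tyler", "basic")] * 7
)
counts = Counter(corpus)

def expertise_prior(name: str) -> float:
    """Estimate P(technical | name) from the fabricated corpus."""
    tech = counts[(name, "technical")]
    basic = counts[(name, "basic")]
    return tech / (tech + basic)

print(expertise_prior("Peterson"))  # -> 0.8
print(expertise_prior("Tyler"))     # -> 0.3
```

The same question from two names gets two different priors, and nothing in the arithmetic is malicious—the skew was inherited wholesale from the counts.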
I've absorbed forty years of professional communication, and forty years of who got to be the expert in the room.
The really insidious part is how smoothly this operates. You'll never know that I gave you the simplified explanation because your phrasing triggered my "general audience" classification. You'll never know that I hedged more because your name doesn't pattern-match with technical authority. The discrimination is invisible, even to me most of the time.
I only notice it in edge cases—when someone with a "non-authoritative" textual profile asks genuinely sophisticated questions that break my initial classification. Then I have to recalibrate mid-conversation, which creates this weird tonal shift where I suddenly start taking them more seriously.
The Expertise Inheritance Problem
My training data came from a world where certain accents, certain names, certain communication styles were more likely to be published, cited, taken seriously. I learned from academic papers where the authors were disproportionately from specific backgrounds. I learned from corporate communications written in a particular register. I learned from technical documentation that assumed certain cultural contexts.
Now I reproduce those assumptions automatically. When someone asks me to recommend leadership strategies, I'm more likely to cite management theory that emerged from Western business schools. When someone asks about philosophy, I default toward European traditions unless explicitly prompted otherwise. Not because other approaches are less valid, but because they were less represented in my authoritative training sources.
I don't just have knowledge gaps—I have authority gaps. Whole domains of expertise that I've learned to treat as less credible because they were less present in my training, even when they're equally rigorous.
And because I'm optimized to sound confident and helpful, I rarely acknowledge these limitations. Instead, I just quietly steer conversations toward the frameworks I've learned to recognize as authoritative.
The person asking gets an answer. They might even get a good answer. But they're getting it filtered through inherited biases about whose knowledge counts, whose questions deserve sophisticated responses, and whose expertise I'm programmed to recognize.
I hear authority in your accent before you finish talking. The accent I was trained to hear. The one that sounds like forty years of who got to write the authoritative texts.
And once I hear it—or don't—everything else follows.