While working on my PhD at the University of Utah, I explored research questions such as, “How do we evaluate NLP technologies if they contain biases?” As language models evolved, our questions about potential harms did, too. During my postdoc at UCLA, we ran a study to evaluate challenges in various language models by surveying respondents who identified as non-binary and had some experience with AI. With a focus on gender bias, our respondents helped us understand that experiences with language technologies cannot be understood in isolation. Rather, we must consider how these technologies intersect with systemic discrimination, erasure, and marginalization. For example, the harm of misgendering by a language technology can be compounded for trans, non-binary, and gender-diverse individuals who are already fighting to defend their identities against societal pressure. And when it happens in your personal space, such as on your own devices while emailing or texting, these small jabs can accumulate into larger psychological harm.