
How AI could silence users. Prioritizing AI may marginalize… | by Aarshin Karande | Nov, 2023
“Audiences [are] the most spoken for and presumed about constituency in today’s mediated ecologies of power.” – Professor Sonia Livingstone (2010)
One year has passed since OpenAI released ChatGPT and the mass consumption of machine learning tools became a new normal. In many parts of the world, “the great resignation” preceded this release and “stagflation” succeeded it. Social scientists suggest these developments indicate a fundamental restructuring of human labor and the future of work. Not all jobs are created equal, and the hype around AI means fewer jobs for more people. This includes the work of user research.
Having worked in digital strategy around user research and customer experience over the last decade, I cannot help but wonder what place customer whispering has in a future defined by customer predicting.
I recently lamented that research around users, the people who engage with digital products and services, will be restricted by the increasing adoption of AI tools developed to surveil, predict, and nudge consumers. Beyond software, AI represents a tradition of research methods divergent from those central to user research and customer advocacy.
Philosophically, AI is founded on quantitative methods: ways of measuring and analyzing that reduce the world to numbers. Philosophically, user research is founded on qualitative methods: ways of documenting real complexities and capturing them in sortable concepts.
User research often employs quantitative methods too, like surveys. Commonly, user researchers combine qualitative and quantitative methods into mixed-methods studies intended to yield the richness of the former with the granularity of the latter.
End users cannot be fully captured through enumeration alone, the language of quantitative methods. The contexts, contradictions, and contours of their experiences require qualitative methods capable of catching the irreducible complexities of their lives. But when the market divests from user researchers and invests in ML researchers, users will be seen more as numbers than as humans.
Jobs practicing methods that align with the quantitative methods of machine learning are “in,” and the complementary ones are on their way “out.” This means we are witnessing a disintermediation of research tools that humanize in service of tools that numerize. For example, scholars have examined how the entrenchment of ML research methods in fields like art history jeopardizes those fields’ own interpretive methods.
This trend will persist because today’s frontier AI models have yet to work intelligently, productively, and meaningfully with qualitative data. Many language-based AI tools (like ChatGPT) are built on statistical linguistics rather than symbolic linguistics. In statistical linguistics, probable meanings are derived; in symbolic linguistics, possible meanings are constellated.
For example, ChatGPT employs statistics about particular words to calculate the most probable response to a prompt. ChatGPT does not register whether its response is morally, linguistically, or factually correct, merely whether it is statistically likely given its dataset. Humans, by contrast, employ symbolic meaning about particular things to articulate shared experiences. Someone can be described as “hot” and “cool” without being seen as ironic. Humans can make content and be content, too. AIs have yet to meaningfully deal with the world of meaning.
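A toy sketch can make that statistical stance concrete. The bigram model below is an illustrative simplification of my own, not how ChatGPT actually works (real systems use neural networks trained on vastly larger corpora), but it shows the core move: picking the next word purely by frequency, with no access to what the words mean.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; real systems train on vastly larger datasets.
corpus = "the band was hot . the band was hot . the band was cool".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_probable_next(word):
    """Return the statistically most frequent continuation of `word`.

    The choice reflects only frequency in the corpus, not whether the
    result is morally, linguistically, or factually correct.
    """
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(most_probable_next("was"))  # "hot": twice as frequent as "cool" here
```

Here “hot” wins simply because it follows “was” more often in the corpus. The model derives the probable continuation; it never weighs the possible meanings.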
Until that project is pursued substantially and with efficacy around the things that matter to users’ everyday lives, users won’t mean much to AI, literally and figuratively. Academics are pursuing important work to integrate historical social research methods in ways that accommodate the novelties and benefits of AI while accounting for its risks. But industry has yet to do the same in tangible, sustainable ways.
The work of qualitative methods, employing informed, scientific interpretive tools, requires human intelligence capable of understanding meanings that may be simultaneous, contradictory, new, improvised, historical, niche, polysemic, false, and more. Interpretation plays a critical role in really seeing, let alone understanding, humans.
There is a difference between sorting users as data versus really seeing them and hearing them out for the people they really are. User research makes up that difference and is indispensable in the age of AI.