Beyond the code: the missing voice of Tech Humanists in the AI revolution
  • 03 Apr 2024
  • 11 Minutes to read


Article Summary

Thank you to Kem-Laurin Lubin, PhD-C, for sharing her knowledge and insight with us.


7 reasons why prioritizing people over pixels is the ultimate upgrade in AI development

“Technology is nothing. What’s important is that you have a faith in people, that they’re basically good and smart, and if you give them tools, they’ll do wonderful things with them.” — Steve Jobs.

Alright, I know: here I go again with another AI post, on a topic that has become ubiquitous in my work, life, and academic circles. AI has infiltrated every aspect of my surroundings, becoming inescapable. It is the main topic of conversation for many, though not for all; I am looking at those in the Humanities who observe AI from a distance, theorizing but not engaging further. Meanwhile, enduring yet another AI pitch, whether on my social media feed or elsewhere, now feels as stimulating as trying to extract profound insights from my toaster. AI is frequently touted as the solution to a myriad of problems, capturing the limelight and sparking a gold rush. Entities ranging from Big Tech to small startups are desperately trying to leverage its alleged capabilities, often prioritizing speedy achievements over deep, meaningful progress that centres human beings.

The prevailing hype insinuates that AI, with its data-driven prowess, can accomplish virtually anything.

Yet this relentless promotion of AI, the source of the fatigue I am experiencing, often glosses over the necessity of a balanced, thoughtful approach to AI development and integration, one that considers human impact stories. The race, led in particular by Big Tech, to exploit AI's charm neglects the critical aspects of sustainable growth and ethical practice, leading to a scenario where the hype may outpace the actual, practical value of AI in our lives.

What about the humans?

Amidst the tech-talk cacophony, there is a notable silence about the human aspect, highlighted humorously by American satirist and comedian Jon Stewart. He mockingly suggests that AI's ultimate aim is to "come for our jobs," a sentiment echoed by many I know. While he jests that Big Tech promotes AI as merely an assistant, his cynicism is evident, and he seems to strike a chord with broader public sentiment.

The real conundrum lies in finding those who can translate this high-tech enchantment into plain speech and help us all understand the magic behind the AI mirror. In short: what is AI doing with our data?

My more pressing questions, however, include: Who is championing the human aspect amidst this digital tumult? Who is monitoring the human implications, the authentic human consequences?

We mustn’t overlook the individuals — the H.U.M.A.N.S — at the core of this story. It’s crucial that we refine our discussions about technology, especially in the Humanities. We should highlight the human narratives of AI’s impact, affording them the prominence they warrant in our collective conversation.

Stuart Hall, the seminal Jamaican-born British cultural theorist, profoundly shaped cultural studies and the humanities. His critical examination in "The Emergence of Cultural Studies and the Crisis of the Humanities" highlights the field's interdisciplinary nature, emerging in response to a crisis within the traditional humanities. In today's AI-powered, frenzied era, Hall's narrative resonates with the experiences of Tech Humanists like myself. As our technological posture shifts at this rapid pace, the echoes of Hall's critique ring true, underscoring the critical need to anchor technology within a humanistic framework.

“Oh the Humanities, where are you?” Hall might have quipped, emphasizing the enduring importance of humanistic inquiry amidst technological progress.

What is Tech Humanism?

Tech humanism is a contemporary philosophy that seeks to humanize technology by integrating ethical considerations, user-centered design principles, and social responsibility into the development and deployment of technological solutions. At its core, tech humanism recognizes the transformative power of technology in shaping our daily lives and society as a whole. However, it also acknowledges the inherent complexities and potential pitfalls that come with technological advancements.

Central to tech humanism is the belief that technology should serve humanity rather than dictate its course. This philosophy emphasizes the importance of designing technology that is accessible, inclusive, and transparent, ensuring that all individuals, regardless of background or circumstance, can benefit from its use. Moreover, tech humanism promotes the empowerment of users, granting them agency and control over their interactions with technology.

What does a Tech humanist do?

In practical terms, tech humanism guides the development of AI-powered systems, digital platforms, and other technological innovations by prioritizing ethical considerations such as privacy, fairness, and accountability. It encourages interdisciplinary collaboration, fostering dialogue between technologists, ethicists, policymakers, and end-users to address complex societal challenges and ensure that technological advancements align with human values and aspirations.

Ultimately, tech humanism offers a holistic approach to technology that goes beyond mere functionality or efficiency. It advocates for a future where technology serves as a force for positive social change, enhancing human well-being, fostering inclusivity, and promoting a more equitable and sustainable world.

While “tech humanist” may not be a widely recognized or self-identified term, there are several thought leaders in the fields of technology, ethics, and human-centered design who advocate for similar principles. Here are a few notable figures who have written extensively on topics related to tech humanism:

Cathy O’Neil is a mathematician, data scientist, and author known for her work on algorithmic accountability and fairness. Her book Weapons of Math Destruction explores the ethical implications of algorithms and their impact on society.

Joy Buolamwini is a researcher and founder of the Algorithmic Justice League, focusing on biases in AI, particularly in facial recognition technology. Her work reveals how these systems often misidentify women and people of color, and she advocates for more ethical and inclusive AI development. Her new book Unmasking AI is a must-read for anyone in the AI space.

Safiya Umoja Noble is a professor of Information Studies and African American Studies at UCLA. Her book Algorithms of Oppression examines how algorithms perpetuate bias and inequality, particularly against marginalized communities.

Timnit Gebru is a computer scientist known for her research on ethics in AI. She co-authored influential papers on algorithmic bias and fairness and has been a vocal advocate for diversity and inclusion in the tech industry.

Tristan Harris is a former Google design ethicist and the co-founder of the Center for Humane Technology. He has spoken extensively about the need to design technology that prioritizes human well-being and has raised awareness about the addictive nature of social media platforms.

While these figures may not explicitly identify as “Tech Humanists,” their work aligns closely with the principles of tech humanism, advocating for the ethical and responsible development of technology that serves humanity’s best interests.

Many of these advocates, and not only those from Science, Technology, Engineering, and Mathematics (STEM) backgrounds, have aimed to engage in discussions about AI's technological practices, pushing for consideration of the human element. However, this is not sufficient. The tension is illustrated by Big Tech itself: consider Google's controversial dismissal of ethicist Timnit Gebru after she co-authored research highlighting potential biases and harms in large language models. These are exactly the types of discussions we need. I assert that it is time to sound the alarm and call for a collective effort to recalibrate our focus away from solely technology- and profit-driven perspectives. This pivotal moment in human history demands a reevaluation in which humanists and technologists collaborate, striving for the equilibrium we need.

We’re not just missing the boat; we’re standing on the dock arguing about the colour of the life jackets. It is an apt analogy for what I am witnessing up close.

This is not the time to sit back and theorize. It is a poignant moment in history when we need no longer look to technology alone to lead, even though there is no noticeable place for humanists on the Careers menu of many AI company websites. Take, for example, a recent look at OpenAI's job board: given the company's postings, its careers page sends one confusing central message.

If my constant AI musings have felt like a broken record, it's not just you; it feels the same to me. And while many of my peers in the humanities seem to treat AI like a mysterious artifact in an Indiana Jones movie, the conversation is two-fold. They do not see themselves reflected in job postings, and so they do not create the necessary spaces to be in dialogue with the tech world, which has left for the races, leaving the tales of human experience eating its digital dust.

It is a two-way conversation, and we need, with criticality and caution, to begin having the hard conversations: yes, AI is here, but how are we centring the human stories?

Designing for AI introduces unique challenges for Tech Humanism that diverge significantly from traditional software design paradigms, and yet there seems to be a complete void of this role in this new transformational space. These challenges stem from the nature of AI as a dynamic, data-driven, and often opaque, black-boxed entity that, wrongly applied, can have varying levels of human impact.

I’ve authored one of the few academic papers on the broad heuristic principles guiding AI, a topic integral to my focus in computational rhetoric and bolstered by two decades as a Design Researcher. We Design Researchers need to be included in the dialogue around AI-powered design systems; we also need to be the voice of community interests, beyond building for "trust & safety," as depicted in nebulous job titles.

Gone are the days when we could afford the luxury of sidestepping the responsibility to earnestly engage in human-tech dialogue. I strongly believe that adopting a community-centered approach is imperative for comprehensively assessing the immediate and long-term ramifications of integrating these transformative technologies. Designing them with this awareness is paramount, and these considerations cannot be sacrificed in the pursuit of speed.

Understanding these challenges is essential for creating AI-driven products that are usable, useful, and trustworthy within the broader societal context. Here are some of the primary challenges we face when designing AI that transcends the current boundaries of societal norms and practices. I have harped on these themes elsewhere in my writings, and they are also central topics of my scholarly work, but I expect repetition is in order. We must understand our specific contributions in their myriad forms and manifestations, from ethical considerations to social impact and awareness:

1. Ethical foundations

Tech humanism provides a robust ethical framework, ensuring that AI-powered design systems prioritize principles of fairness, accountability, and respect for human values.

2. User-centric approach

With a focus on human needs and experiences, tech humanism ensures that AI-powered design systems are designed to serve users effectively, enhancing usability and satisfaction.

3. Inclusive design practices

Embracing diversity and inclusivity, tech humanism advocates for AI-powered design systems that are accessible to all users, regardless of their abilities, backgrounds, or circumstances.

4. Transparent decision & sense-making

Tech humanism emphasizes the importance of clarity in the workings of AI algorithms and the decision-making processes. It advocates for systems that not only reveal the "how" and "why" behind their recommendations but also enable users to make sense of these processes. This transparency is crucial for nurturing trust and ensuring accountability in the technology we use.

5. User empowerment through control

By empowering users with control over their interactions with technology, tech humanism ensures that AI-powered design systems prioritize user autonomy, customization, and informed decision-making.

6. Iterative learning & improvement

With a commitment to continuous learning and adaptation, tech humanism advocates for AI-powered design systems that evolve over time through user feedback, research insights, and ongoing refinement, staying responsive to changing needs and ethical considerations.

7. Social impact awareness

Tech humanism encourages a critical examination of the social implications of AI-powered design systems, promoting responsible innovation that considers potential risks and benefits for individuals and society as a whole.

Incorporating tech humanism into AI development is the ultimate upgrade because it ensures that technology serves humanity’s broadest goals, enhancing well-being and fostering ethical progress. By prioritizing human values and ethical considerations, tech humanism guides AI towards beneficial outcomes, preventing the exacerbation of societal inequities and avoiding unintended negative consequences. This approach not only fosters trust and acceptance among users but also ensures sustainable innovation that aligns with our moral and ethical frameworks. Ultimately, integrating tech humanism into AI is essential for creating technology that not only advances our capabilities but also upholds and enriches the human experience.

References

  1. Bender, E., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623). ACM. https://doi.org/10.1145/3442188.3445922

  2. Buolamwini, J. (2023). Unmasking AI: My Mission to Protect What Is Human in a World of Machines. Random House.

  3. Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 77–91.

  4. O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown.

  5. Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.

About me: Hello, my name is Kem-Laurin, and I am one half of the co-founding team of Human Tech Futures. At Human Tech Futures, we’re passionate about helping our clients navigate the future with confidence! Innovation and transformation are at the core of what we do, and we believe in taking a human-focused approach every step of the way.

We understand that the future can be uncertain and challenging, which is why we offer a range of engagement packages tailored to meet the unique needs of both individuals and organizations. Whether you’re an individual looking to embrace change, a business seeking to stay ahead of the curve, or an organization eager to shape a better future, we’ve got you covered.

Connect with us at https://www.humantechfutures.ca/contact

