Hey, what’s in that AI stew? decoding algorithmic outputs
  • 31 Mar 2024

Article summary

Thank you to Kem-Laurin Lubin, PhD-C, for sharing her insight, knowledge, and expertise with us.

You can read this article on Medium as well.

How AI-powered solutions make decisions that impact humans — my journey from UX Design Leader to Computational Rhetoric

“A magician is an actor playing the part of a magician.” — Jean-Eugène Robert-Houdin

In 2018, I pitched my PhD proposal at the University of Waterloo, specifically within the Department of Philosophy, and it was rejected. My research interest was in exploring human agency in the context of user interface design, AI, and emergent technology, in order to empower users who are often restricted by the machine’s terms. I wanted to underpin my work with philosophy, or what has since transformed into what we all loosely call Tech Ethics and morality practices, but through the lens of human agency in human-computer interaction (HCI). I will never forget sitting across from an elderly female professor with whom I had booked some time; there was no recognition, in her eyes, of what I was trying to explain. I believe the rejection of my idea stemmed from a lack of understanding among the older philosophy faculty of the significant role they would have to play, as a branch of the Humanities, at this critical point in technology history.

Shortly thereafter, I applied to the Digital Arts Communication (DAC) program at the University of Waterloo, since rebranded as the Experimental Digital Media (XDM) program. This was the very crucible where I had forged my Master’s degree nearly two decades prior, a program that wove together my love for technology and language. The XDM program promised an intellectual marriage of rhetoric, philosophy, computation, and user interface design, all orbiting the nucleus of human agency. To be honest, I had strayed from this option for fear of being obligated to read literature in the vein of the Brontë sisters, books that had never appealed to me, even back then. Nonetheless, I had returned to my academic sanctuary, where much of the reading met my expectations.

The delicious irony of this tale unfurled when I encountered a familiar face on a panel: the same Philosophy professor from yesteryear, whose alleged grasp of AI was about to be unmasked. As the discussion unfolded, it became evident that her understanding of AI was as nebulous as ever, a stark contrast to the clarity she had once claimed as a basis for suggesting I was not a fit for the Philosophy department, a space of clone-like sameness, if you ask me; I clearly and visibly do not fit the mold. It seems that some minds, unlike fine wine, do not necessarily improve with age. But then, I digress.

Many years earlier, after completing my Master’s degree with my major project, “Screen-based Controls for GUI,” it felt almost predestined that I would embark on a career in design. At one point, fueled by my passion for storytelling, I even aspired to go into game writing. Though it didn’t align with my path at the time, I proudly contributed to the evolution of Autodesk’s Maya, the 3D animation software, a milestone that facilitated such creative endeavors. This achievement marked a significant chapter in my career, yet it also underscored my hunger for knowledge.

I would also take time away to have and raise two boys. Reflecting on this journey, I’m gratified by the decision to resume what had been a paused chapter in my life. Fortified by a blend of professional experience and personal growth, I found my footing once again.

During my stint as the Design leader for Maya, I stumbled upon the fascinating world of Artificial Intelligence (AI) and Machine Learning (ML), particularly in the context of procedural design generation, what is now coined Generative AI. This specialization focused on the intelligent selection of materials within Maya. My team and I taught Maya to distinguish between a spectrum of materials, from asphalt to emeralds, enabling the software to generate designs based on specific prompts and some clever coding.

It is humorously reminiscent of that scene in the TV show “Silicon Valley” (Season 4, Episode 4), where the character Jian-Yang creates an app that can identify whether something is a hot dog or not. The concept caught the attention of viewers and was notable for how it played with the themes of startup culture and technology innovation depicted in the show. Interestingly, the app was actually developed and released in real life, allowing users to experiment with its functionality and distinguish between hot dogs and other objects.

Credit — Silicon Valley

Our design challenges were no different.

This breakthrough allowed us to conceptualize a dynamic library of materials that could be automatically populated and randomized within our designs. Imagine the efficiency of having a bookshelf auto-fill with books of various appearances and sizes, all without the painstaking process of modeling each one from scratch. This was not just a leap in technology; it was a giant stride in creative design, transforming the tedious into the effortless.
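The auto-filling bookshelf idea can be sketched as a simple procedural routine: pick assets and materials at random from a tagged library to fill a set of slots. This is a toy illustration in plain Python, not Maya's actual scripting API; the asset names and data structure are invented for the example.

```python
import random

# Hypothetical asset library: each entry pairs a mesh with material tags.
ASSET_LIBRARY = [
    {"mesh": "book_tall", "materials": ["leather", "cloth", "paper"]},
    {"mesh": "book_short", "materials": ["cloth", "paper"]},
    {"mesh": "book_wide", "materials": ["leather", "paper"]},
]

def populate_shelf(slots, seed=None):
    """Fill each shelf slot with a randomly chosen asset and material."""
    rng = random.Random(seed)
    shelf = []
    for _ in range(slots):
        asset = rng.choice(ASSET_LIBRARY)
        shelf.append({
            "mesh": asset["mesh"],
            "material": rng.choice(asset["materials"]),
        })
    return shelf

# Twelve procedurally varied books, no hand-modeling required.
shelf = populate_shelf(slots=12, seed=42)
```

Seeding the random generator keeps the variation reproducible, which matters when a scene has to be regenerated identically.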

My team and I also delivered the first cloud instance of Maya in 2016.

Leaving Design to pursue Computational Rhetoric

This exploration revealed the transformative power of these technologies in streamlining workflows. While the advancements elicited admiration from many, I perceived an impending shift in the role of traditional design. This insight propelled me to return to academia, eager to explore my other passions and understand the broader implications of these technological shifts. I have written about this a few times here on Medium over the years.

During this period of my career, I also noticed user interface design shifting toward reusable UI components, and AI becoming central to user experience through proceduralization. For instance, we used user data to predict and streamline tasks, surfacing the “next best action.”
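One minimal way to sketch a “next best action” feature is to count which action most often follows each action in logged user sessions, then recommend the most frequent follow-up. The action names below are invented for illustration; real systems use far richer models, but the principle is the same.

```python
from collections import Counter, defaultdict

def build_transition_model(sessions):
    """Count which action most often follows each action in logged sessions."""
    transitions = defaultdict(Counter)
    for session in sessions:
        for current, nxt in zip(session, session[1:]):
            transitions[current][nxt] += 1
    return transitions

def next_best_action(transitions, current_action):
    """Suggest the historically most likely follow-up action, if any."""
    followers = transitions.get(current_action)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Hypothetical logged sessions from a design tool.
logs = [
    ["open_file", "edit", "save"],
    ["open_file", "edit", "save"],
    ["open_file", "render"],
]
model = build_transition_model(logs)
suggestion = next_best_action(model, "open_file")  # "edit"
```

Even this crude frequency model shows why such systems mirror their users: the suggestion is nothing more than a summary of past behavior.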

Death of UI Design as I knew it

In a world where user interface elements have become modular components, it’s tempting to question the necessity of designers. The argument goes that if software can automatically generate these components, then human input might seem redundant. However, this perspective overlooks the essence of design as a human-centered process. Design is not just about assembling parts; it’s about crafting experiences, understanding user needs, and infusing creativity into interactions.

My contemplation of leaving the field was compounded by the draining commute from my quaint town to the bustling city of Toronto, a journey that not only tested my patience but also sparked reflections on work-life balance and career satisfaction. This theme resonated with many, becoming a focal point in my most-read post on Medium, where I shared insights and personal anecdotes about navigating these professional crossroads.

How are machines making decisions about humans?

My fascination with design extends beyond the aesthetic; it’s rooted in a desire to understand the intricacies of how automated systems influence our lives. I am intrigued by the process of automation — what inputs lead to specific outcomes, and more importantly, how these processes are computed and their impact on humanity. This curiosity drives me to explore the nuanced relationship between technology and user experience, delving into the ‘why’ and ‘how’ behind the interactions that shape our digital environment.

As a designer, my role transcends the creation of visually appealing products; it involves a deep understanding of the user’s behavior and the socio-technical systems that guide these behaviors. I am constantly questioning how automated decisions shape our experiences and societal norms. My goal is to ensure that technology not only serves functional purposes but also aligns with human values and needs, creating a balance between efficiency and empathy in the digital space. This holistic approach to design and technology is what motivates me to continually explore and understand the complex web of interactions that define our digital experiences.

Eventually, as noted earlier, I started my PhD, engaging in what is now known as computational rhetoric. This unique program at the University of Waterloo, dubbed the MIT of the North, allowed me to merge my technical abilities with my knowledge of user interface design and rhetorical history. Here, I sought to critically examine AI systems, posing the fundamental question, “What is in the AI stew?”, from both a rhetorical and technical perspective.

Technology these days seems to be pulling many rabbits out of hats and attributing them to AI, but how is it computing things, weighing things, and for what reasons? This topic fascinates me profoundly. I ask questions like: what does the model know, when does it know it, and why?

AI and rhetoric — decision making as persuasion and the challenges

Rhetorically speaking, the advent of AI has ushered in a multitude of capabilities that were once confined to the world of science fiction. However, with great power comes an equally substantial responsibility to address the implicit ramifications that AI has on society. Embedded Bias Propagation is a critical concern as AI systems, primarily learned from historical data, can inadvertently perpetuate existing societal biases. Gender bias, in particular, is a tenacious form of prejudice that AI can reinforce, further entrenching inequality rather than serving as a tool for its dismantlement. These biases, once coded into the digital web of AI algorithms, become part of a self-fulfilling prophecy, giving an undeserved permanence to archaic and unjust societal norms.

Moreover, Digital Profiling Risks brought about by AI create a blueprint for discrimination, with women and marginalized communities often at the receiving end of its adverse effects. Such profiling has the dangerous potential to be exploited for targeted bias, amplifying the impact of discrimination in digital spaces where personal data can be both currency and weapon. Concurrently, the Power Dynamics Preservation inherent in AI systems can inadvertently cement entrenched power structures.

Far from being a neutral party, AI can act as a gatekeeper, maintaining the status quo and presenting formidable barriers to equitable change. These dynamics extend to the Legal and Ethical Implications of AI in judicial matters, where the use of AI in evidence collection and legal decision-making demands unwavering vigilance to ensure that the scales of justice remain balanced. Without such oversight, Transparency and Accountability become casualties, leaving AI’s operations shrouded in mystery and its users helplessly uninformed about the algorithms that significantly shape their lives.

In the context of Social Impact and Responsibility, AI is not merely a technological innovation but a catalyst for cultural and social evolution. Its deployment carries an inherent responsibility to manage its influence on societal norms and human interactions thoughtfully. To foster an era where technology harmonizes with ethical standards, Inclusive Design and Development must be at the forefront of AI creation. Such oversight not only fosters innovation that is equitable but also ensures that the multiplicity of human experiences and viewpoints are respected and reflected.

It is not enough for AI to be advanced; it must also be aligned with the intrinsic values of fairness, transparency, and inclusivity that form the cornerstone of a just society.

AI, Technically speaking — the technical challenges exposed

Now let’s examine this from the technical angle. The burgeoning reliance on algorithms to chart the course of daily human affairs comes with the hidden specter of bias amplification, a phenomenon where algorithmic characterization deepens the grooves of societal disparities, with gender biases being a particularly troubling example. Historical data sets, steeped in antiquated biases and preconceptions, often serve as the bedrock upon which these algorithms are built. Consequently, they risk not only perpetuating but magnifying the very prejudices they are fed. For instance, a hiring algorithm marinated in data from an era of male predominance could inadvertently continue the cycle, sidelining female applicants and thus cementing gender disparities in the workplace.
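The hiring example can be made concrete with a toy sketch using invented numbers: a naive model that simply “learns” historical hire rates per group will reproduce whatever skew its training data carries. Nothing here refers to any real system; it is a minimal demonstration of bias propagation from data to model.

```python
from collections import Counter

# Invented historical hiring records: (gender, hired).
# The data is skewed: past hiring favored men, regardless of merit.
history = (
    [("M", True)] * 80 + [("M", False)] * 20 +
    [("F", True)] * 30 + [("F", False)] * 70
)

def learn_hire_rate(records):
    """'Train' a naive model by measuring the historical hire rate per group."""
    hired, total = Counter(), Counter()
    for gender, was_hired in records:
        total[gender] += 1
        if was_hired:
            hired[gender] += 1
    return {g: hired[g] / total[g] for g in total}

rates = learn_hire_rate(history)
# A model that scores candidates by these rates simply re-encodes the
# skew of its training data: 0.8 for "M" versus 0.3 for "F".
```

The model isn’t malicious; it is faithful, and that faithfulness to a biased past is precisely the problem.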

The tentacles of algorithms extend into the very opportunities that shape lives — jobs, loans, education — and without a deliberate injection of gender equity into their coding DNA, they may systematically disenfranchise women and other marginalized genders. A man might find doors opening effortlessly, while a woman encounters hidden, algorithmic roadblocks. Compounding the dilemma, a veil of secrecy often shrouds these algorithmic verdicts, clouding the prospects of identifying and rooting out ingrained biases.

In such a tech landscape, transparency and accountability don’t merely improve processes; they are lifelines that could pull countless individuals from the undercurrents of bias. Moreover, the chorus of voices crafting these digital destinies is startlingly homogenous. The scarcity of female representation in the echelons of algorithmic development begets tools and services blind to the needs of half the world’s population, thereby exacerbating the gender digital divide.

This interplay of algorithms and gender equity does not only rattle the moral compass — it teeters on the brink of legal and ethical transgression. As laws and international mandates strive to shelter rights and foster equality, it becomes imperative that the digital tendrils shaping the future do not stray from these foundational principles of justice. Upholding gender equity in AI development is not an optional enhancement but a categorical imperative to prevent digital advancements from regressing into digital oppression.

Why we need to care what AI is cooking up

In the grand scheme of digital evolution, caring about the “recipe” of AI is not just about understanding the ingredients of innovation; it’s about safeguarding our collective future. As AI becomes the chef in the kitchen of human progress, we must remain vigilant tasters, ensuring that the dishes it serves enhance, not harm, our society. This culinary metaphor extends beyond a mere fascination with technology; it’s a clarion call to actively engage with and responsibly steer the AI gastronomy. By nurturing a technological landscape that values equity, transparency, and accountability, we can feast on the fruits of AI’s labor without the aftertaste of regret. Hence, as AI stirs the pot of possibility, let us ensure that the meal it’s cooking up is one that can be savored by all, nourishing the very fabric of our diverse and vibrant global community.


References

Robert-Houdin, J. E., & Hoffmann, L. (2011). Secrets of Conjuring and Magic: Or How to Become a Wizard (p. 43). Cambridge University Press.

About me: Hello, my name is Kem-Laurin, and I am one half of the co-founding team of Human Tech Futures. At Human Tech Futures, we’re passionate about helping our clients navigate the future with confidence! Innovation and transformation are at the core of what we do, and we believe in taking a human-focused approach every step of the way.

We understand that the future can be uncertain and challenging, which is why we offer a range of engagement packages tailored to meet the unique needs of both individuals and organizations. Whether you’re an individual looking to embrace change, a business seeking to stay ahead of the curve, or an organization eager to shape a better future, we’ve got you covered.

Connect with us at https://www.humantechfutures.ca/contact

