Design thinking in AI policymaking: crafting human-centric frameworks
  • 15 Mar 2024
  • 28 Minutes to read


Article Summary

Thank you to Kem-Laurin Lubin for sharing her insights and stories in our knowledge base.

Originally published on Medium.

Integrating creative solutions for ethical and innovative artificial intelligence governance

“Innovation is the ability to see change as an opportunity — not a threat.” — Steve Jobs

Let me first self-identify before I proceed — I am a Computational Rhetorician, exploring the link between AI and identity characterization. In short, I explore how AI uses human data to create digital versions of us for the purpose of persuasion. I coined the term algorithmic ethopoeia, or in more lay terms algorithmic characterization, which I define as follows:

“the mathematizing of human data for the purpose of digital representation and human characterization, through the processes of sorting and targeting, with subsequent subjugation to algorithmic procedures and decision-making protocols.”

My earlier, notably influential publication, User Experience in the Age of Sustainability, emerged during my tenure leading the Design team at Research in Motion. It was a time when I recognized the imperative to embed sustainability at the heart of design philosophies. Over the years, my stance has only solidified, transforming my scholarly pursuits into a more pronounced advocacy for artificial intelligence as an integral component of sustainable human endeavors.

Now, twelve years subsequent to the publication of that pivotal book, my research trajectory has broadened to embrace a diverse spectrum of interdisciplinary studies. This includes an insightful contribution to Human Computer Interaction International (HCII) 2022 — Late Breaking Papers, particularly within the Interacting with eXtended Reality and Artificial Intelligence series: my first presentation, “Conversations Towards Practiced AI — HCI Heuristics.” This was in response to my peers’ need for heuristic guidance for designing around AI.

More recently, I explored the interplay between technological advancements and societal shifts, as evidenced by my co-written, peer-reviewed article soon to be published in Rhetoric Society Quarterly, “Sex after Technology: The Rhetoric of Health Monitoring Apps and the Reversal of Roe v. Wade,” where I explore how AI-powered tech is in alliance with dated jurisprudence.

I continue to explore the ethical dimensions of technology, anticipated to be elaborated upon in my forthcoming piece, “Beyond Security: The Problem with Surveillance Tech Rhetoric.” Complementing this, my dissertation, “Design Heuristics Framework for the Protection of Women,” marks a significant extension of my academic endeavours, which I will be defending in short order.

Outside this academic space and in my capacity as a Design Strategist and User Research Leader, these scholarly contributions underscore my dedication to advancing AI-Human-Centered Design (AI-HCD). My work champions a methodical approach to ensuring human privacy and security within digital frameworks, advocating for ethically responsible AI systems that enhance humanity’s best attributes by dissecting AI rhetoric and its influence on the creation of digital identities.

This unique perspective affords me a nuanced understanding of AI developments and the evolving discourse on AI regulation. From this vantage point, I’ve discerned a concerning trend:

Policymakers and regulators seem to be wading through complex waters, alarmingly detached from a profound comprehension of the technology they aim to govern.

So what’s happening — The challenge

AI is the new hot chick in town. And if you are a close follower of AI news, you will know that there have been many fights over her.

The rapid progression of AI presents a complex array of challenges and considerations. Over time, AI has evolved dramatically, achieving levels of proficiency that are remarkably similar to human abilities. This evolution has reshaped many aspects of our daily lives, including the way we work and interact with each other. In short, you can say that AI has greatly impacted and transformed the way we live, even if quietly so.

However, AI advancement brings with it certain drawbacks. One significant issue is the presence of bias within AI systems. These biases are more than just minor flaws; they are reflections of larger, systemic issues in our society. When AI systems inherit these biases, they can reinforce and even exacerbate existing societal inequalities. For instance, if an AI system is trained on data that contains historical biases, it may make decisions or predictions that unfairly disadvantage certain groups of people.
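To make the inheritance of bias concrete, here is a minimal, hypothetical sketch: a naive scoring rule "trained" on historically biased approval decisions simply reproduces that bias. All data, group labels, and function names are illustrative assumptions, not drawn from any real system.

```python
# Each record: (group, qualified, historically_approved).
# Illustrative only: group "B" was historically denied regardless of merit.
history = [
    ("A", True, True), ("A", True, True), ("A", False, True),
    ("B", True, False), ("B", True, False), ("B", False, False),
]

def train(records):
    """'Training': compute each group's historical approval rate,
    ignoring actual qualification entirely."""
    counts = {}
    for group, _qualified, approved in records:
        total, yes = counts.get(group, (0, 0))
        counts[group] = (total + 1, yes + int(approved))
    return {g: yes / total for g, (total, yes) in counts.items()}

model = train(history)

# Two equally qualified applicants receive different scores purely
# because of how their group was treated in the past.
print(model["A"])  # 1.0
print(model["B"])  # 0.0
```

The point of the sketch is that nothing in the code is malicious; the skew lives entirely in the historical labels, which is why data choices and calibration matter so much.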

Situating AI equity within its associated outcomes is therefore mission critical. Let me touch on a few more examples for deeper contextual application. Keep in mind, this is not to colour AI good or bad; it is just what is, and we need to grasp the greater societal impacts where they reside. I will elucidate three areas with which I am somewhat familiar: healthcare, credit systems, and hiring practices.

1. Healthcare Allocation — a case of racial bias

Research revealed that AI used in allocating healthcare funds discriminated against Black patients by considering previous healthcare expenditures instead of actual needs. This approach ignored systemic barriers preventing these communities from accessing early medical care, thereby exacerbating health inequities. Such algorithms, without proper calibration, can worsen disparities by allocating fewer resources to those historically underserved.

2. Creditworthiness Assessment — socioeconomic bias

In the credit sector, AI-driven assessments have reinforced gender biases, with women frequently receiving lower credit limits. This discrepancy arises from AI systems trained on data reflecting longstanding societal inequalities, such as wage gaps and traditional roles impacting credit history. Earlier systems compounded the issue by incorporating gender and marital status, directly disadvantaging women and overlooking their financial independence and capabilities.

3. Job Advertisement and Hiring — gender bias

Gender bias in AI-driven recruitment processes has led to skewed job ad distribution, favoring male candidates in male-dominated fields due to past hiring data. Furthermore, AI hiring tools, learning from male-biased resume databases, have perpetuated this cycle by ranking female candidates lower, ignoring the diverse skill sets and perspectives they bring to the workplace. This not only limits women’s job opportunities but also hinders diversity and innovation within industries.

Other forms of bias can manifest when viewing the same situation through different lenses. For example, hiring practices may exhibit biases related to race, geographical location, and other factors.

When machines are taught bias

It is therefore crucial to understand that these biases we see in AI are not inherent to the technology itself, but rather stem from the data it is trained on and the design choices made by its creators. As such, addressing these biases requires careful consideration of how AI systems are developed and the data they are trained on.

This challenge highlights the need for more inclusive and diverse data sets, as well as the involvement of a diverse range of people in the development of AI systems, to ensure that they serve all sections of society fairly. See the works of Joy Buolamwini (Unmasking AI) and Safiya Umoja Noble, Ph.D. (Algorithms of Oppression) as great references to these challenges.

AI is a disruptive force with the potential to bring about significant societal changes, particularly in the realm of employment. This disruptive nature of AI is not just a topic of academic debate; it has real-world implications that affect people’s lives and livelihoods. Considering the broad potential for impact, it becomes imperative to establish comprehensive oversight measures. Therefore, it is the responsibility of those in regulatory positions to diligently undertake the task of thoroughly grasping the full scope and implications of what is at stake.

Where are we with regulations

Terms like “curb,” “regulate,” and “control” are often used in this context. These aren’t just buzzwords; they underscore the growing recognition that while AI holds tremendous promise, its progression must be guided with a careful, responsible hand to ensure that its benefits are realized without causing undue societal disruption. If, however, you are looking from a perspective such as mine, and that of other AI researchers, it’s evident that disappointment pervades many of us when reflecting on the repeated false starts by policymakers and regulators in comprehending the transformative power of artificial intelligence (AI).

Over my two decades in the technology sector, especially as a User Researcher, I’ve learned the critical importance of understanding the context and impact on users. Unfortunately, this comprehensive human element seems to be consistently overlooked in the crafting of optimal policies and regulations for AI, essential for guiding its path with minimal negative human impact.

The EU lead

The European Union (EU) has established itself as a global leader in AI regulation with the implementation of comprehensive AI rules. These regulations cover various aspects, including transparency and the use of AI in public domains, especially focusing on high-risk systems to ensure they meet strict requirements related to model evaluation, risk mitigation, and incident reporting. This past week, a significant step was taken as the European Parliament approved comprehensive AI regulations, a process that began in 2021, reflecting a shift towards safer and more human-centric AI development, influenced in part by the emergence of transformative technologies like OpenAI’s ChatGPT. This move signifies a growing acknowledgment of the need to address risks while fostering innovation.

And meanwhile in Canada

Here, in Canada, the state of AI regulations is evolving, with significant steps being made toward establishing a comprehensive legal framework. One of the main legislative efforts is the Artificial Intelligence and Data Act (AIDA), which represents Canada’s first foray into AI-specific legislation. This act is part of Bill C-27, aiming to address the complexities of AI in the context of international and interprovincial trade and commerce. Notably, AIDA introduces requirements for entities responsible for AI systems, including conducting impact assessments to identify “high-impact” AI systems and developing risk mitigation plans.

The act also emphasizes transparency and accountability, mandating public disclosures regarding the use of high-impact systems and establishing measures to manage anonymized data.

Granted, Canada’s approach to AI regulation is designed to be dynamic, allowing for adjustments as technology progresses. The framework aims to fill regulatory gaps highlighted by the rapid advancement of AI technologies, ensuring that Canadians are protected from potential harms while supporting responsible innovation. This involves integrating AIDA with existing legal frameworks, such as consumer protection and human rights laws, and aligning with international norms, including the EU AI Act and the OECD AI Principles.

The Canadian government acknowledges the existing legal frameworks, like the Personal Information Protection and Electronic Documents Act, and is also proposing the Consumer Privacy Protection Act to update privacy laws in the digital economy. These efforts reflect a broader initiative to keep pace with technological advancements and ensure consistent protections across various use contexts.

While AIDA is a significant step forward, it is part of a larger, ongoing effort to address AI’s impacts comprehensively. The Canadian government’s approach emphasizes collaboration with stakeholders and regular evaluation of regulations and guidelines, aiming for an agile regulatory process that can adapt to the changing AI landscape.

And meanwhile in the US — a case of decentralization

The state of AI regulation in the USA, on the other hand, is multifaceted and evolving. While there is no unified federal approach, there are indications of increased regulatory activity at both the federal and state levels that, to me, seems uncoordinated. The Federal Trade Commission (FTC) has been proactive, issuing guidance emphasizing that existing laws apply to AI, particularly regarding unfair or deceptive claims about AI-powered products and services.

The FTC also stresses the importance of understanding the limitations and risks of AI products, including potential inherent biases.

At the state level, there has been a notable increase in AI-related legislation, addressing a variety of issues from predictive policing technologies to employment and healthcare. Jurisdictions like Illinois, New York City, Vermont, and Washington have enacted legislation specifically addressing aspects of AI, such as the use of AI in video interviews and automated decision tools in hiring and promotions.

Despite these developments, there is a sense of incremental progress rather than comprehensive regulation.

The National Institute of Standards and Technology (NIST) has also released the AI Risk Management Framework Version 1.0, offering guidelines for trustworthy AI systems, although its adoption is voluntary. This reflects a broader trend where regulations are slowly evolving to catch up with the rapid advancements in AI technology.

However, the legal landscape is fragmented and in flux, with significant differences across states and ongoing debates about how to effectively regulate the burgeoning AI sector. The lack of consensus at the federal level, coupled with diverse state laws, as well as the political climate, poses challenges for uniformity and consistency in AI governance.

The regulatory environment for AI in the USA is, therefore, characterized by a patchwork of state laws and federal guidelines, with growing recognition of the need for greater oversight as AI continues to permeate various sectors of society.

A design thinking approach to regulations

As someone familiar with both User Research and facilitation, I’m struck by the apparent disconnect between the technology and its societal implications, and even more so by the disconnect among regulators and policymakers alike. Regrettably, policymakers and regulators seem ill-equipped to fully grasp the human outcomes and impact of AI, undermining the effectiveness of future policies and regulations. To address this gap, I advocate for an interdisciplinary approach borrowing from frameworks like Design Thinking, rooted in genuine human empathy.

Such an approach aligns with the EU’s human-centric approach to AI development and could serve as a model for crafting policies that prioritize human well-being while fostering technological advancement.

What do I mean by this?

Design thinking is a problem-solving approach traditionally used in product design and user experience. However, its principles extend far beyond these areas. At its heart, design thinking involves:

1. Empathy: Understanding the needs, challenges, and aspirations of those impacted by AI technologies.

2. Defining Problems: Clearly articulating AI-related challenges from a human-centric perspective.

3. Ideation: Generating a wide array of solutions, encouraging diverse and out-of-the-box thinking.

4. Prototyping & Visualizing: Developing tangible solutions and policies in iterative cycles, allowing for feedback and refinement.

5. Testing: Evaluating the effectiveness and impact of policies on real-world scenarios and iterating as AI evolves and impacts are identified.

Empathy and Human-Centricity

The first step in applying design thinking to AI policy is to ensure that these policies are grounded in a deep understanding of human needs and contexts. This involves engaging with a wide range of stakeholders, including AI researchers, communities, and those indirectly affected by AI technologies. Through empathetic engagement, policymakers can identify the real-world implications of AI and craft policies that address these concerns.

Next, clearly defining the problem is crucial in AI policy. This involves translating complex technical challenges into human-centric issues. By focusing on the human impacts of AI, such as privacy concerns, job displacement, or ethical dilemmas, policies can be more targeted and effective.

Following this is the thoughtful process of solution generation. Design thinking encourages the exploration of diverse solutions through stakeholder ideation. In the context of AI policy, this means considering a range of regulatory, educational, and technological interventions. By fostering an environment of creativity and innovation, policymakers can discover novel solutions that balance progress with protection.

Next, it must be said that the process is iterative. AI policies should not be static, a stance echoed in the framing of the Canadian and European approaches. Instead, they should evolve through prototyping (visualization of the solutions) and iteration, adapting to new developments and feedback. This approach allows for the continuous refinement of policies, ensuring they remain relevant and effective in the face of rapid technological change.

Lastly, we need a way to gauge a solution’s impact, and so real-world testing is critical. Policies need to be tested in real-world environments to understand their impact. This can involve pilot programs, stakeholder consultations, and impact assessments. Through testing, policymakers can gauge the effectiveness of their approaches and make data-driven decisions.
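The five steps just described can be sketched as an iterative loop. This is a hypothetical illustration only: the function names, the sample stakeholder concerns, and the shortcut of assuming a pilot resolves a problem are all my own assumptions, not a real policy process.

```python
def empathize(stakeholders):
    # Empathy: gather every concern raised across stakeholder groups
    return sorted({c for concerns in stakeholders.values() for c in concerns})

def define(concerns):
    # Defining problems: restate each concern as a human-centric question
    return [f"How might policy address {c}?" for c in concerns]

def ideate(problems):
    # Ideation: list candidate intervention types for each problem
    return {p: ("regulatory", "educational", "technological") for p in problems}

def prototype_and_test(options, resolved):
    # Prototyping & testing: draft a measure for each still-open problem,
    # then (as a stand-in for pilots and consultations) mark it resolved
    draft = {p: opts[0] for p, opts in options.items() if p not in resolved}
    return draft, set(draft)

def policy_cycle(stakeholders, rounds=3):
    resolved, draft = set(), {}
    for _ in range(rounds):  # iterate: policies are never static
        problems = define(empathize(stakeholders))
        draft, done = prototype_and_test(ideate(problems), resolved)
        resolved |= done
        if not draft:  # nothing left unresolved this round
            break
    return resolved

# Illustrative run with two hypothetical stakeholder groups
result = policy_cycle({"residents": ["privacy"], "workers": ["job displacement"]})
print(len(result))  # 2
```

The loop structure is the point: each pass feeds what was learned in testing back into the next round of empathy and definition, which is exactly what a static, one-shot regulation cannot do.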

While the proposal of using design thinking in policymaking is perhaps not novel, it is a growing conversation that requires deeper discussion beyond the bounds of a blog post. Integrating design thinking into AI policy development offers a pathway to more human-centric, innovative, and adaptable regulations. By prioritizing empathy, encouraging creative problem-solving, and embracing iterative development, policymakers can craft AI guidelines that not only foster technological advancement but also protect and enhance human welfare. As AI continues to evolve, so too should our approaches to governing it, with design thinking leading the way in this new era of policy-making.

After all, technology will change - the goal is to create policies that not only anticipate the future of AI but also ensure that this future is aligned with human values and societal well-being.


References

1. Foley & Lardner LLP. (2023, December 7). What to Expect in Evolving U.S. Regulation of Artificial Intelligence in 2024.

2. Hadfield, G., Arai, M., & Gazendam, I. (2023, May 24). AI regulation in Canada is moving forward. Here’s what needs to come next. Schwartz Reisman Institute.

3. Holistic AI. (2024). The State of AI Regulations in 2024.

4. Innovation, Science and Economic Development Canada. (2023). The Artificial Intelligence and Data Act (AIDA) — Companion document.

5. Morgan Lewis. (2023, May 22). The United States’ Approach to AI Regulation: Key Considerations for Companies.

6. Norton Rose Fulbright. (2023). Bill C-27: Canada’s first artificial intelligence legislation has arrived.

7. World Economic Forum. (2023). Europe’s landmark AI regulation deal.


About me: Hello, my name is Kem-Laurin, and I am one half of the co-founding team of Human Tech Futures. At Human Tech Futures, we’re passionate about helping our clients navigate the future with confidence! Innovation and transformation are at the core of what we do, and we believe in taking a human-focused approach every step of the way.

We understand that the future can be uncertain and challenging, which is why we offer a range of engagement packages tailored to meet the unique needs of both individuals and organizations. Whether you’re an individual looking to embrace change, a business seeking to stay ahead of the curve, or an organization eager to shape a better future, we’ve got you covered.

Connect with us at

