The fallacy of the “need for speed” in AI development
  • 08 Mar 2024


Article Summary

Thank you to Kem-Laurin Lubin for sharing her insights and stories in our knowledge base.

The original article is also available on Medium.


What’s the rush? Asking questions for the rest of us

“With great power comes great responsibility” — Uncle Ben (Spider-Man)

It appears that we, as a collective humanity, haven’t taken lessons from our past. During the dot-com boom, numerous tech companies were motivated by the creed “move fast and break things.” This infamous approach, popularized by Mark Zuckerberg, became a symbol for pushing boundaries. Yet for some, that philosophy has led to a fragmentation within humanity, a reflection of the “things” in Zuckerberg’s corporate credo. Now, as we edge closer to an era where artificial intelligence (AI) reshapes every aspect of our existence, it’s imperative to stop and consider:

Are we repeating the same mistakes by hastening the deployment of technology without fully grasping its implications?

Today, as the race to pioneer the future of AI intensifies, the industry mantra seems to echo a relentless pursuit of speed. “The quick and the dead” appears to have morphed into the guiding philosophy for many in the AI sector, where being first to market is often prized above all else. Yet this fervent rush raises a critical question:

What’s the real cost of moving too fast in AI development?

“Don’t believe the hype” — AI rat race & the frenzied pace for dominance

The pursuit of financial supremacy in the tech sector significantly fuels the rapid advancement of AI. My contention is that this rush is less about altruism or noble objectives for the greater good and more about a competition — one that extends beyond mere innovation to include the capture of market share and power. Yet, this race frequently overlooks vital factors, including the ethical ramifications, privacy issues, and the broader societal effects of AI technologies.

Consequently, this relentless push for technological dominance risks creating a future where AI systems are deployed without adequate safeguards, leading to unintended consequences that could affect millions. As history has shown, once technology is integrated into society, rectifying its adverse impacts becomes significantly more challenging. Therefore, it is essential for developers, policymakers, and society as a whole to prioritize a more measured approach. This involves incorporating comprehensive impact assessments, transparent ethical standards, and robust privacy protections before these technologies become irreversible parts of our daily lives. It also means understanding the myriad human impacts of this consequential technology and what, if anything, further will be “broken” in this race.

So I ask again: to what end are we accelerating AI development? Is it merely to win a race of capital dominance, or is there a more profound purpose we aim to achieve? The true potential of AI lies not in dominating markets but in enhancing human capabilities and addressing some of the world’s most pressing challenges. Achieving this requires a balanced approach that values not just speed but also the quality, safety, and ethics of AI systems.

I have discussed this topic in academic circles and have also proposed AI design heuristics as guidance for those developing such systems. The primary aim is to help reduce the harmful impacts of AI.

Sacrificing our future on the altar of speed

As we collectively experience the unfolding complexities of AI development, it’s crucial to embrace a slower, more deliberate pace. This approach ensures that our technological progress is not just groundbreaking but also ethical and enduring. We must avoid jeopardizing our future for the sake of haste. Instead, we should proceed with the insight that in the domain of AI, taking our time is not merely commendable but essential.

In my quest to understand public apprehension towards the swift advancement of Artificial Intelligence, beyond my specific field of Computational Rhetoric, I turned to broader sources. By reviewing findings from institutions like the Pew Research Center and McKinsey, I aimed to capture a wide range of common concerns among the general population about the quickening pace of AI innovation.

Here, now, I outline the top five concerns, each underpinned by pertinent examples, to reflect the spectrum of scenarios that resonate most with everyday individuals when considering the impact of AI.

These themes not only intersect with my academic inquiries; I also passionately argue for the dissemination of research that directly addresses the real-world implications of AI on daily life. This approach ensures that scholarly findings are not merely theoretical but practical and relevant to the broader public discourse on AI’s impact.

1. How does AI influence job diversity and fairness in hiring?

Just for a moment, imagine a company decides to revamp its hiring methods by bringing in an AI system. At first glance, everything appears progressive: the AI objectively reviews resumes, promising an equitable shortlisting of candidates. But as we move forward, a surprising pattern unfolds. Despite the AI’s supposed impartiality, diversity among the recruits starts to decline. What’s happening here? It turns out that underlying human prejudices, encoded in the historical hiring data the system learned from, are quietly shaping the final choices.

This isn’t a hypothetical situation; this actually happened within Amazon’s recruitment efforts. This real-life example highlights the complex interplay between AI mechanisms and human decision-making, showing us that even with cutting-edge technology, our human inclinations can guide outcomes in unexpected ways. It’s a vivid reminder that technology mirrors our imperfections and underscores the need for vigilant oversight in the integration of AI systems into our lives.

2. What are the ethical implications of AI in controlling human behaviour?

Envision a scenario where AI systems transition from being mere tools to becoming enforcers of public health policies. Imagine an AI assigned to oversee pandemic management, theoretically equipped to administer an electrical shock to those failing to comply with precautions such as wearing masks or maintaining physical distance. This scenario, while hypothetical, prompts significant reflection on the limits of AI’s influence over human actions, spotlighting a pivotal moment in the discourse on AI’s ethical confines in managing societal conduct.

A real-world example that brings this debate to the forefront is the use of surveillance systems in various countries to monitor compliance with COVID-19 safety guidelines. Although these systems did not administer physical deterrents, the principle of using AI for public health enforcement remains a tangible illustration of these ethical quandaries, underscoring the need for a balanced approach to technological governance.

3. How aware are people of AI’s impact on their daily lives?

Imagine a day in the lives of various citizens, each unknowingly interacting with AI in different forms. One starts their day with a fitness tracker, logging every step and heartbeat, while another communicates with a customer service chatbot for assistance. AI subtly integrates into various aspects of their lives. Yet, there’s a notable disparity in how much individuals understand or even recognize these AI interactions. While tech-savvy individuals might identify and appreciate the AI elements woven into their routines, others, perhaps influenced by their educational background, socioeconomic status, or digital access, remain unaware of AI’s pervasive role.

This disparity underscores a crucial issue: the existing digital divide extends into AI literacy. In my research, I emphasize the importance of AI literacy, advocating for educational initiatives that aim to bridge this gap. By fostering a broader understanding of AI and its implications, we can empower all segments of society to navigate the complexities of a technology-infused world more effectively and with conscience.

4. What are the challenges and opportunities of AI integration across different sectors?

Now, picture a world where AI is embedded into the fabric of our daily lives, streamlining city traffic, tailoring healthcare to individual needs — this isn’t a distant dream, it’s our current reality. This scenario paints a future filled with immense potential, promising unprecedented levels of efficiency and customization. Yet, this bright future is not without its shadows.

Questions arise: Who controls the vast pools of data AI relies upon? How do we prevent these advanced algorithms from mirroring and magnifying our own prejudices? And how do we maintain transparency in AI-driven decisions?

These are not just theoretical concerns; they are pressing challenges that we face as AI becomes an ever-more integral part of our world. In some contexts, they’ve already become a stark reality. Take, for instance, the case of the COMPAS recidivism algorithm, used by courts in the United States to predict the likelihood of a defendant reoffending. This tool has been criticized for perpetuating racial biases, as it was found to unfairly flag black defendants as higher risk than white defendants. This real-world example underscores the critical need for vigilance, fairness, and transparency as AI technologies are developed and deployed across different sectors of society. The stakes are high, and the impacts are real, underscoring the importance of addressing these ethical dilemmas head-on.

This duality of AI — as a tool for both incredible advancement and significant ethical quandary — illustrates the critical balance we must achieve as we navigate this new technological era.

5. How does the AI development gap affect organizations and talent acquisition?

Imagine two different organizations on the AI maturity spectrum. On one end, there’s a company that has mastered AI, integrating it seamlessly into their operations and sustainability initiatives. This proficiency not only mitigates risks but also makes them a magnet for top AI talent, who are eager to work at the forefront of technology. On the other end, there’s an organization still grappling with the basics of AI. Their struggles with risk management and the integration of AI into sustainable practices hinder their appeal to skilled professionals. This divide illustrates how expertise in AI can significantly influence a company’s risk management capabilities, sustainability efforts, and attractiveness to potential AI talent (McKinsey).

The wider implications of AI are often overshadowed by the allure of innovation and the rush to harness new technologies. However, as we continue to explore these uncharted territories, it is imperative that we proceed with caution and mindfulness. The race for AI dominance should not lead us to compromise on ethical standards, privacy, or societal well-being. We must foster an environment where technology serves humanity’s best interests, ensuring inclusivity, fairness, and accountability. By taking a measured approach, we can leverage AI’s potential while safeguarding against its risks, ensuring that our advancements in AI contribute positively to society and do not exacerbate existing disparities or create new forms of inequity.

May we embrace a more measured approach: first reflecting on AI’s potential impact, and then acting with good intentions.

References

  1. Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (n.d.). Machine Bias. ProPublica. Retrieved from ProPublica.

  2. Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (n.d.). How We Analyzed the COMPAS Recidivism Algorithm. ProPublica. Retrieved from ProPublica.

  3. Brookings. (n.d.). How artificial intelligence is transforming the world. Retrieved from Brookings.

  4. EL PAÍS English. (n.d.). Very human questions about artificial intelligence. Retrieved from EL PAÍS English.

  5. McKinsey. (n.d.). The state of AI in 2022 — and a half decade in review. Retrieved from McKinsey.

  6. Mesa, N. (n.d.). Can the criminal justice system’s artificial intelligence ever be truly fair? Massive Science. Retrieved from Massive Science.

  7. Pew Research Center. (n.d.). Worries about developments in AI. Retrieved from Pew Research Center.

  8. Pew Research Center. (n.d.). What Americans Know About Everyday Uses of Artificial Intelligence. Retrieved from Pew Research Center.


About me

Hello, my name is Kem-Laurin, and I am one half of the co-founding team of Human Tech Futures. At Human Tech Futures, we’re passionate about helping our clients navigate the future with confidence! Innovation and transformation are at the core of what we do, and we believe in taking a human-focused approach every step of the way.

We understand that the future can be uncertain and challenging, which is why we offer a range of engagement packages tailored to meet the unique needs of both individuals and organizations. Whether you’re an individual looking to embrace change, a business seeking to stay ahead of the curve, or an organization eager to shape a better future, we’ve got you covered.

Connect with us at https://www.humantechfutures.ca/contact


