Why Tech Determinism & Solutionism Are False Starts to Discussing AI
  • 12 Apr 2024


Article Summary

Thank you to Kem-Laurin Lubin, Ph.D. - C, for sharing her excellent writing in our knowledge base.

Click here to read on Medium.

The language we use to talk about AI matters

“Not going to beat centralized AI with more centralized AI.”— Emad Mostaque, former CEO, Stability AI.

When we in academia discuss "technological determinism," we are weaving together numerous threads of a single theme, one that has rapidly gained traction as a research interest in academic circles and among professionals.

While the term is well-established in academic discourse, it boasts a rich lineage of thought, which I will succinctly delineate. Following this, I will discuss some more appropriate and emerging dichotomies to frame contemporary debates on artificial intelligence (AI) and its impact on society.

The term tech determinism is often associated with the American sociologist and economist Thorstein Veblen, who used it to describe the belief, prevailing even in his time, the late 1800s, that technology was profoundly shaping the development of social structures and cultural values.

Veblen’s ideas, particularly those expressed in his book The Theory of the Leisure Class (1899), laid the groundwork for discussions on how technology influences society. And while the precise origin of the phrase is difficult to pinpoint, as it most likely evolved over time through the contributions of various scholars and thinkers, Veblen was not the only thinker to evoke technology as an informer of culture.

Jacques Ellul

Perhaps one of the most incisive works on this topic is the 1954 publication of Jacques Ellul’s The Technological Society. In this seminal book, Ellul presents a comprehensive analysis of our technological civilization, illustrating how technology, initially developed to serve humanity, has paradoxically transformed our existential condition.

He explores the notion that technology, once a benign tool, has evolved into a pervasive force, reshaping every facet of human life and altering our fundamental understanding of the world. Ellul’s exploration goes beyond the surface, probing the deep-seated changes in social structures, cultural norms, and individual behaviors that have ensued from technological advancement.

Marshall McLuhan

Closer to home is the work of Marshall McLuhan, a visionary Canadian philosopher and media theorist, who is best known for his profound impact on our understanding of technology’s role in society. His groundbreaking work, most notably Understanding Media: The Extensions of Man, published in 1964, introduced the famous phrase “the medium is the message.” McLuhan argued that media and communication technologies fundamentally shape and influence the structure of society.

He suggests that it is the characteristics of each medium or technology, rather than the content it carries, that affect how individuals think, communicate, and perceive the world. This idea further laid the foundation for the concept of technological determinism, suggesting that media technologies determine our social arrangements and cultural practices. His insights into the pervasive influence of technology on human experience and social organization have made him a pivotal figure in the study of media theory, as well as the broader dialogue on how technology shapes our lives.

Friedrich Kittler

Following McLuhan, thinkers like Friedrich Kittler, a prominent German philosopher, also made significant contributions to the discourse on technology and culture. His seminal work, Gramophone, Film, Typewriter, published in 1986, explores how these technologies have transformed communication and cultural expressions.

Kittler’s analysis is pivotal in understanding technological determinism — the idea that technology drives societal and cultural change. Unlike the more traditional view that sees technology as merely a tool shaped by human use, he suggests that technology itself has agency and influences how we think, perceive, and interact with the world.

Through his detailed examination of media technologies, Kittler showed that they are not just passive conduits of information but actively shape the content and form of cultural discourse. His work, to date, remains influential in media theory and cultural studies, offering profound insights into the intertwined relationship between technology, culture, and society.

All these thinkers share an awareness of technology’s profound influence on the human condition. Their work all forces us to question whether our deep entanglement with technology has led to a kind of malaise, where technology is not just a part of our cultural and societal practices but has taken the helm, steering us towards a future where it exerts considerable control over our lives. This perspective invites reflection on how technology, now deeply embedded in our daily existence, shapes our choices, paths, and ultimately, our destiny.

In this light, technology, much like an addictive substance, becomes an inseparable part of our journey towards the future, suggesting that it’s not just a tool but a fundamental aspect of our existence. It seems we’ve reached a point where thinking outside the paradigm of technological solutionism is nearly impossible; if a plan doesn’t involve technology, it’s often dismissed as impractical or unfeasible.

This shift reflects a deep-rooted belief in technology as the primary, if not only, pathway to solving our challenges, underscoring its pervasive influence on our perception of progress and problem-solving.

Tech solutionism and the mirage of a silver bullet

Today’s version of tech determinism hinges on the belief that society will inevitably embrace technology, shaping human (customer) behaviours and expectations as if preordained. It’s as if the mere existence of smartphones necessitates a mobile app for every aspect of life.

However, this mindset reveals a critical oversight: the human dimension. When businesses blindly follow this deterministic view, they risk prioritizing technology, like chatbots, to the point where services become impersonal or even frustrating. It’s like being handed a map in a foreign language: technically, it’s a guide, but how useful is it, really?

In our current techno-culture, where AI is often seen as a universal remedy, we must pause and critically evaluate our relationship with AI-driven technology, ensuring we don’t lose sight of the human context behind the digital experience.

The offspring of technological determinism is this very belief that digital innovation can solve every problem, a notion both hopeful and susceptible to over-complication. In my experience, numerous instances illustrate this trend. Consider a straightforward customer question that could be quickly addressed over the phone, yet is now funnelled through a convoluted online system that stops just short of asking, “Do you want fries with that?”

The human experience: caught in the crossfires of tech determinism & solutionism

It strikes me as overkill, akin to using a sledgehammer to crack a nut. Maybe it’s my small-town farm upbringing speaking, but have we lost the ability to connect on a human level?

In many cases, customers don’t need complex tech solutions; often, they just want a straightforward answer. This pattern is prevalent in my professional life, underscoring a vital truth: the human touch in providing solutions is sometimes not just preferred but necessary, and it’s an aspect of service that remains fundamentally irreplaceable. It is in moments like this, sitting in team meetings, that I feel gaslit and my internal screaming voice tapers off into defeat.

In my past journey through User Experience (UX), I’ve found myself often overshadowed by the relentless pursuit of innovation, often devoid of genuine human empathy. To me, the crux of the matter lies in how these ideologies of technological determinism and tech solutionism shape the customer experience and the conversation about the role of the human in the face of the continued onslaught of technology.

Technological determinism may usher in a one-size-fits-all approach, where customers are funnelled through digital avenues with scant regard for personal preference or exigency. Conversely, tech solutionism may birth cumbersome systems that bewilder rather than elucidate, leaving customers adrift amidst a labyrinth of choices. To me these are false starts to any conversations about the growing power of AI.

We need to take the time to truly explore the implications of AI-powered systems, challenging the notion that AI and technology represent an inevitable trajectory of human “progress.” Failing to acknowledge the limitations of this assumption at the outset of AI conversations is where the problems we create in AI development begin. It is crucial to expand our comprehension of how AI-driven service delivery should be conceptualized within the nexus of our human-tech futures and the language that we use to talk about AI.

New frames for discussing AI: beyond determinism & solutionism

Taking a cue from the European Union’s (EU) recent direction, we must elevate the conversation, starting with new dichotomies of thought: conversations about AI-powered systems and how they fare with respect to policies and principles.

Folks, this is next-level governance. We need to ask questions about AI-powered systems such as: are its underlying data sources centralized or decentralized? Is its application generalized or specialized? And how is the human factored into the application of the AI system, that is, what are the degrees of autonomy and control?

Let me unpack these dichotomous pairings with some examples, where clarity may be needed.

Centralization and decentralization

In discussions about AI, the centralization-decentralization dichotomy often intersects with debates about power dynamics, ethical considerations, and the balance between efficiency and resilience in AI systems. Recognizing the trade-offs and complexities involved in both centralization and decentralization is essential for designing AI systems and policies that promote innovation, equity, and societal well-being.

Centralization refers to the concentration of control, decision-making authority, or resources within a single entity or a small group of entities. In the context of AI, centralization often manifests in the form of large tech companies or governments controlling the development, deployment, and regulation of AI technologies. Centralization can lead to more efficient coordination, standardized practices, and economies of scale. However, it also raises concerns about monopolistic power, lack of diversity in perspectives, and potential for misuse or abuse of AI systems.

Decentralization, on the other hand, involves the distribution of control, decision-making authority, or resources across multiple entities or nodes. In the context of AI, decentralization can take various forms, such as decentralized AI systems where computation and decision-making are distributed across networked nodes, or decentralized governance structures where decision-making power is shared among multiple stakeholders. Decentralization offers benefits such as increased resilience, transparency, and democratization of access to AI technologies. However, it also presents challenges related to coordination, interoperability, and maintaining alignment of goals among diverse stakeholders.
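As a toy illustration of the decentralized end of this spectrum, consider the federated-averaging pattern, in which models are trained locally on separate nodes and only the learned parameters, never the raw data, are pooled. The sketch below is a minimal, hypothetical illustration (the node data, the one-parameter "model," and the weighting scheme are invented for the example), not a production federated-learning implementation:

```python
# Minimal sketch of federated averaging: each node fits a local
# one-parameter model (here, a simple mean estimator) on its own
# data, and only the learned parameters are shared and combined
# into a global model. The raw data never leaves the node.

def local_train(data):
    """Each node's 'training': estimate the mean of its private data."""
    return sum(data) / len(data)

def federated_average(local_params, weights):
    """Combine node parameters, weighted by each node's data size."""
    total = sum(weights)
    return sum(p * w for p, w in zip(local_params, weights)) / total

# Three nodes, each holding data that stays local.
node_data = [[1.0, 2.0, 3.0], [4.0, 6.0], [10.0]]
params = [local_train(d) for d in node_data]
weights = [len(d) for d in node_data]
global_param = federated_average(params, weights)
# With size-weighted averaging, global_param equals the mean the
# nodes would have computed had all data been pooled centrally.
```

The design point is the trade-off discussed above: the nodes gain resilience and data privacy, but coordination (agreeing on weights, model form, and update schedule) becomes the hard part.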

Generalization & specialization

In discussions about AI, the generalization-specialization dichotomy underscores the tension between breadth and depth of AI capabilities. Balancing generalization and specialization is crucial for designing AI systems that can both leverage domain-specific expertise and generalize across diverse contexts. Moreover, understanding the trade-offs between generalization and specialization is essential for guiding the development and deployment of AI technologies in various real-world applications, from personalized recommendations to autonomous driving systems.

Generalization refers to the ability of an AI system to apply knowledge and skills learned from one domain to perform adequately in new, unseen situations or tasks. In machine learning, generalization is often measured by how well a model performs on unseen data after being trained on a specific dataset. Generalization enables AI systems to exhibit flexibility, adaptability, and robustness, allowing them to tackle a wide range of tasks and environments. Achieving strong generalization requires algorithms and models that can abstract relevant patterns and principles from training data and apply them effectively to new scenarios.
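The measurement described above, performance on data the model never saw during training, can be sketched in a few lines. This is a hypothetical toy (the one-parameter model, the data, and the split are invented for the example), not a statement about any particular ML library:

```python
# Sketch: generalization is measured by scoring a model on a
# held-out test split it never saw during training. Here a
# one-parameter linear model (y ~= w * x) is fit on the training
# split by least squares, then evaluated on the test split.

def fit_slope(xs, ys):
    """Closed-form least-squares slope for y = w * x (no intercept)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def mse(xs, ys, w):
    """Mean squared error of predictions w * x against targets y."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Both splits follow the same underlying rule, y = 3x.
train_x, train_y = [1, 2, 3, 4], [3, 6, 9, 12]
test_x, test_y = [5, 6], [15, 18]

w = fit_slope(train_x, train_y)          # learned parameter
train_error = mse(train_x, train_y, w)   # error on seen data
test_error = mse(test_x, test_y, w)      # error on unseen data:
                                         # the generalization measure
```

In practice the gap between `train_error` and `test_error` is the diagnostic: a model that scores well on training data but poorly on the test split has memorized rather than generalized.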

Specialization, on the other hand, involves the focus of an AI system on performing specific tasks or functions with a high degree of precision and efficiency. Specialized AI systems are designed to excel in particular domains or applications, leveraging domain-specific knowledge, features, or optimizations to achieve superior performance. Specialization often entails fine-tuning algorithms, architectures, or training procedures to optimize for specific objectives or constraints. While specialization can lead to highly effective and efficient AI solutions within narrow domains, it may also limit the flexibility and adaptability of these systems when faced with novel or diverse tasks.

A common application of AI that demonstrates both generalization and specialization is in the field of personalized medicine. In this context, AI systems generalize by learning from vast datasets of patient records, medical images, genetic information, and clinical studies. They identify patterns and correlations that might not be evident to human practitioners. This generalization capability allows AI to predict diseases, suggest treatments, and optimize patient outcomes on a broad scale.

On the specialization side, AI can tailor its predictions and recommendations to individual patients. By analyzing a specific patient’s medical history, genetics, lifestyle, and real-time health data, AI systems can devise personalized treatment plans. This level of specialization ensures that each patient receives care that is optimally effective for their unique health profile, thereby increasing the efficacy of treatments and improving overall healthcare outcomes.

Understanding whether an AI system generalizes or specializes is crucial because it affects its applicability, efficacy, accuracy, and reliability across various contexts. Generalization allows AI to operate effectively in diverse scenarios, enhancing its versatility, while specialization ensures high performance in specific tasks but may limit its scope. Additionally, this distinction is vital for assessing risks like bias and overfitting, guiding resource allocation in AI development, and addressing ethical and societal implications. Ultimately, discerning the generalization and specialization capabilities of AI is key to leveraging its benefits and mitigating potential drawbacks in real-world applications.

Autonomy & control

Lastly, the autonomy-control dichotomy reflects broader debates surrounding the ethical, legal, and societal implications of autonomous technologies. Questions arise regarding the extent to which AI systems should be granted autonomy, the responsibilities of developers and operators in ensuring accountability and transparency, and the mechanisms for establishing human oversight and intervention. By understanding the complex interplay between autonomy and control, we can foster the responsible development and deployment of AI technologies that align with human values, preferences, and aspirations.

In application, autonomy refers to the degree of independence or self-governance exhibited by an AI system in making decisions, taking actions, or interacting with its environment without direct human intervention. Autonomy is often considered a desirable attribute for AI systems, particularly in applications such as autonomous vehicles, robotic systems, and intelligent agents. Increased autonomy enables AI systems to operate more efficiently, adapt to dynamic environments, and perform tasks without constant human oversight. Achieving high levels of autonomy typically involves equipping AI systems with sophisticated decision-making algorithms, perception capabilities, and learning mechanisms.

Control, on the other hand, pertains to the ability of humans to influence, direct, or constrain the behavior of AI systems according to desired objectives, values, or safety considerations. Control mechanisms can take various forms, including predefined rules, real-time supervision, or feedback loops that enable human operators to intervene when necessary.

Control is essential for ensuring the safety, reliability, and ethical behavior of AI systems, especially in high-stakes applications where errors or malfunctions can have serious consequences. Striking the right balance between autonomy and control involves designing AI systems that can operate autonomously while remaining responsive to human guidance and oversight.
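One common way to strike that balance is a human-in-the-loop wrapper: the system acts autonomously when its confidence clears a threshold and escalates to a human operator otherwise. The sketch below is a hypothetical illustration (the confidence score, threshold, and decision format are invented for the example):

```python
# Sketch of a human-in-the-loop control mechanism: the AI's
# proposal is accepted autonomously only when its confidence
# clears a threshold; otherwise the decision is escalated to
# a human operator for review.

def controlled_decision(proposal, confidence, threshold, human_review):
    """Return (decision, path): autonomous if confident, else escalated."""
    if confidence >= threshold:
        return proposal, "autonomous"
    return human_review(proposal), "escalated"

def operator(proposal):
    """Stand-in for a human operator; here the human overrides."""
    return "deny"

# High-confidence case: the AI's decision stands.
d1, path1 = controlled_decision("approve", 0.95, 0.8, operator)
# Low-confidence case: control passes to the human.
d2, path2 = controlled_decision("approve", 0.40, 0.8, operator)
```

The threshold is itself a policy lever: raising it shifts the system toward human control, lowering it toward machine autonomy, which is exactly the dial the autonomy-control debate is about.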

A resonant example is the use of AI in managing personal healthcare data, particularly regarding its portability between different healthcare providers. AI systems can autonomously manage and transfer patient records, ensuring that data is available wherever and whenever needed, thus improving the efficiency and quality of care. However, this autonomy raises concerns about control: who governs the AI’s decisions, how data is shared, and the privacy and security of sensitive information.

Understanding the balance between AI autonomy and control, especially in managing personal healthcare data, is essential for several reasons. Firstly, privacy and security are paramount; AI’s autonomous handling of healthcare data necessitates stringent control over access and sharing to protect privacy and prevent unauthorized breaches.

Ethical considerations are also critical, as decisions regarding data portability and use must adhere to ethical standards, ensuring AI actions are in line with patient interests and consent. Additionally, regulatory compliance is mandatory, with AI systems needing to adhere to healthcare regulations like HIPAA in the U.S., which set standards for the confidentiality and portability of medical information.

Finally, trust and reliability are fundamental; both patients and healthcare providers must have confidence in AI’s ability to manage data responsibly and accurately, ensuring that care quality and patient autonomy are not compromised. Thus, a thorough understanding of the interplay between AI’s autonomy and control mechanisms is vital to effectively manage the complexities of personal healthcare data, guaranteeing that technological advancements benefit patients while safeguarding their rights and safety.

Collectively, although various pairings could be employed to contextualize the business practices of AI, the three that I have detailed address numerous central challenges within the AI sector. These issues are recurrent themes in my discussions and form the core of my doctoral research, which investigates AI’s tangible effects on the human condition. The rapid pace of AI development often dominates discourse, yet it is imperative to seek better frameworks for these conversations, aiming to tackle the well-identified challenges.

Given this, it is vital for all stakeholders, including technologists, policymakers, legal experts, and consumer advocacy groups, to converge and grasp the significant implications of AI’s evolving dynamics. Advocating for regulations that promote transparency, decentralization, autonomy, and equity will pave the way for a more ethical trajectory in AI development. By doing so, we can foster digital integrity and accountability, safeguarding the rights and freedoms of individuals in the digital era.

About me: Hello, my name is Kem-Laurin, and I am one half of the co-founding team of Human Tech Futures. At Human Tech Futures, we’re passionate about helping our clients navigate the future with confidence! Innovation and transformation are at the core of what we do, and we believe in taking a human-focused approach every step of the way.

We understand that the future can be uncertain and challenging, which is why we offer a range of engagement packages tailored to meet the unique needs of both individuals and organizations. Whether you’re an individual looking to embrace change, a business seeking to stay ahead of the curve, or an organization eager to shape a better future, we’ve got you covered.

Connect with us at https://www.humantechfutures.ca/contact

