AI Ideology and Politics: Scaling Tech Design Dichotomies
- Updated on 11 Sep 2024
- 8 Minutes to read
Thank you to Kem-Laurin Lubin, PhD-C, for sharing her knowledge and expertise with us.
Exploring the Trade-offs in AI Development
A few years ago, while preparing for my doctoral primary exam, I came across the book, Virtualpolitik: An Electronic History of Government Media-Making in a Time of War, Scandal, Disaster, Miscommunication, and Mistakes, by Elizabeth Losh. An excerpted summary that stood out to me read:
“Today government agencies not only have official Web sites but also sponsor moderated chats, blogs, digital video clips, online tutorials, videogames, and virtual tours of national landmarks. Sophisticated online marketing campaigns target citizens with messages from the government — even as officials make news with digital gaffes involving embarrassing e-mails, instant messages, and videos. In Virtualpolitik, Elizabeth Losh closely examines the government’s digital rhetoric in such cases and its dual role as mediamaker and regulator. Looking beyond the usual focus on interfaces, operations, and procedures, Losh analyzes the ideologies revealed in government’s digital discourse, its anxieties about new online practices, and what happens when officially sanctioned material is parodied, remixed, or recontextualized by users.”
Not unlike Losh's book, S. Scott Graham's Where is the Rhetoric: Imagining a Unified Field was another book I chin-wagged my way through; it provided some of the scaffolding for my own work in Computational Rhetoric and AI, situating the changing posture and location of rhetorical praxis even in the virtual spaces of society.
Together, these books helped shape my own thinking about technology as something imbued with ideology, dare I say it, politics. Furthermore, in line with my own work in Computational Rhetoric and Artificial Intelligence (AI), these layers of ideological and political machination are a central part of the datafication process. For me, Losh's book provided a critical look at the intersection of technology, government, and communication, revealing the complexities and challenges of digital governance in the modern era. Graham's book, for its part, advocates a more integrated approach to rhetorical studies, encouraging scholars to embrace a broader, more cohesive understanding of the field's role in academia and society.
So, it is between these two works that I have situated much of my own work: understanding the need to interrogate the technology used in citizenry management, a need compounded by the onslaught of tech, primarily AI, which has in many instances been imbued with bias and a lack of transparency, subjecting humans to being computed and datafied, often with unimagined material outcomes.
I argue that AI, despite its technological sophistication, is profoundly influenced by the ideologies and politics of its creators, rendering it inherently ideological. Far from being just a technological advancement, AI is a transformative force reshaping all aspects of society, often driven by concealed agendas. As we move through this AI-driven era, it is essential to scrutinize the underlying ideologies and political influences that shape these technologies.
This requires us to examine four key dichotomies that influence AI’s development and deployment: Centralization vs. Decentralization, Autonomy vs. Control, Application Specialization vs. Generalization, and Localization vs. Globalization.
These dichotomies manifest in the products and services we encounter, shaping their design, functionality, and impact. I often discuss these issues with other scholars to better understand the levers of control that drive AI development. I have also explored these themes in my scholarly work and on Medium, examining how companies build AI-powered software and the governance models and motives behind them.
Now, I want to illustrate these dichotomies with some relevant examples to clarify their implications.
1. Centralization vs. Decentralization
Centralization in AI often leads to products and services that harness significant resources and extensive datasets, resulting in highly advanced and scalable solutions. Examples include:
Centralized Cloud AI Platforms: Services like Amazon Web Services (AWS) and Google Cloud AI offer powerful capabilities such as machine learning and data analytics to businesses globally. These platforms leverage centralized infrastructure to provide robust and reliable services at scale.
Smart Home Ecosystems: Centralized control through devices like Google Nest and Amazon Alexa facilitates seamless integration and automation within homes, delivering convenience and efficiency through interconnected smart devices.
Conversely, decentralization promotes products and services that distribute control and innovation across various entities, fostering diversity and reducing reliance on a few power centers. Examples include:
Blockchain-based Health Data Systems: Platforms like Medicalchain allow patients to securely share their health records with multiple healthcare providers without a central authority, enhancing privacy and portability.
(I really believe this may be the future of electronic patient record systems: one that gives power back to the citizen/patient.)
Decentralized Finance (DeFi) Platforms: DeFi applications such as Uniswap and Aave enable peer-to-peer financial transactions and services without traditional centralized banks, democratizing access to financial tools.
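To make the dichotomy concrete in design terms, here is a minimal Python sketch contrasting a centrally operated record store with a patient-held node, in the spirit of the health-data example above. Every class, method, and access policy here is hypothetical and purely illustrative; it does not model Medicalchain or any other real platform.

```python
# A minimal, purely illustrative sketch of centralization vs. decentralization.
# All names here are hypothetical; they do not model any real platform.

class CentralizedRecordStore:
    """One authority holds every record and mediates all access."""

    def __init__(self):
        self._records = {}  # the single source of truth, held by the operator

    def store(self, patient_id, record):
        self._records[patient_id] = record  # the platform operator controls the data

    def fetch(self, patient_id, requester):
        # Access policy is decided centrally, by the platform operator.
        print(f"Central operator grants {requester} access to {patient_id}")
        return self._records.get(patient_id)


class PatientNode:
    """Each patient keeps their own record and decides who may read it."""

    def __init__(self, patient_id, record):
        self.patient_id = patient_id
        self._record = record
        self._authorized = set()  # consent lives with the patient, not a platform

    def grant_access(self, provider):
        self._authorized.add(provider)

    def share_with(self, provider):
        if provider in self._authorized:
            return self._record  # shared peer-to-peer, no central gatekeeper
        raise PermissionError(f"{provider} was not authorized by the patient")


if __name__ == "__main__":
    # Centralized flow: the platform mediates everything.
    cloud = CentralizedRecordStore()
    cloud.store("patient-42", {"allergies": ["penicillin"]})
    print(cloud.fetch("patient-42", requester="clinic-a"))

    # Decentralized flow: the patient is the gatekeeper.
    node = PatientNode("patient-42", {"allergies": ["penicillin"]})
    node.grant_access("clinic-a")
    print(node.share_with("clinic-a"))
```

The design question the sketch surfaces is simply who holds the data and who sets the access policy: the platform operator, or the person the data describes.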
2. Autonomy vs. Control
Autonomous AI systems function with minimal human intervention, capable of making decisions and performing tasks independently. Examples include:
Autonomous Vehicles: Self-driving cars from companies like Tesla and Waymo utilize advanced AI to make real-time decisions, reducing the need for human drivers and potentially enhancing road safety.
Automated Customer Service: AI chatbots and virtual assistants, used by banking and e-commerce companies, autonomously handle customer inquiries, providing efficient and continuous support.
In contrast, control emphasizes human oversight and regulation to ensure that AI systems adhere to ethical standards and societal values. Some examples of controlled AI-powered use cases include:
Clinical Decision Support Systems: Tools like IBM Watson Health assist in medical treatment recommendations but leave final decisions to human doctors, ensuring that care remains ethical and informed.
AI in Criminal Justice: Predictive policing tools, such as COMPAS, require stringent control measures to avoid biases and ensure fairness, necessitating oversight to manage risks associated with autonomous decision-making in law enforcement.
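At bottom, the autonomy-versus-control choice is about where a human sits in the decision loop. The short Python sketch below illustrates that choice with a hypothetical risk threshold and escalation rule; it is a toy illustration under assumed values, not how any named system actually works.

```python
# A minimal sketch of autonomy vs. control as a human-in-the-loop gate.
# The confidence and risk scores, thresholds, and names are hypothetical.

from dataclasses import dataclass


@dataclass
class Recommendation:
    action: str
    confidence: float  # model's own confidence, 0.0 to 1.0
    risk: float        # estimated impact of acting wrongly, 0.0 to 1.0


def autonomous_execute(rec: Recommendation) -> str:
    # Full autonomy: the system acts on its own output, no human in the loop.
    return f"EXECUTED: {rec.action}"


def controlled_execute(rec: Recommendation, risk_threshold: float = 0.3) -> str:
    # Control: high-risk or low-confidence decisions are deferred to a person,
    # mirroring clinical decision support where the doctor makes the final call.
    if rec.risk > risk_threshold or rec.confidence < 0.8:
        return f"ESCALATED to human reviewer: {rec.action}"
    return f"EXECUTED (within approved bounds): {rec.action}"


if __name__ == "__main__":
    rec = Recommendation(action="deny parole application", confidence=0.91, risk=0.9)
    print(autonomous_execute(rec))  # autonomy: acts regardless of the stakes
    print(controlled_execute(rec))  # control: high-stakes call routed to a human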
3. Application Specialization vs. Generalization
Specialization in AI results in products and services tailored to perform specific tasks exceptionally well. Examples of such use cases include:
Facial Recognition Systems: Technologies like Clearview AI specialize in identifying individuals based on facial features, used in security and law enforcement applications.
For the record, I have written about Clearview AI, specifically addressing its surveillance rhetoric. I will be presenting this work at the 8th Future of Information and Communication Conference, 28–29 April 2025, in Berlin.
Financial Fraud Detection: Specialized AI models in banks like JPMorgan Chase detect and prevent fraudulent transactions with high accuracy, protecting customers and financial institutions.
In contrast, Generalization aims to create AI systems with broad applicability, capable of performing a wide range of tasks. Examples include:
Language Models: Models like OpenAI’s GPT-4, which can generate human-like text across diverse topics, are used in applications ranging from content creation to customer service.
General AI Assistants: Assistants like Apple’s Siri or Google Assistant are designed to handle a wide array of tasks, from setting reminders to answering questions across multiple domains.
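Seen as a design choice, the trade-off is between a narrow system that does one thing with purpose-built logic and a broad system that funnels many loosely related tasks through a single interface. The toy Python sketch below makes that contrast visible; the fraud rules and the "assistant" branches are invented stand-ins, not real models or products.

```python
# A minimal sketch of specialization vs. generalization.
# The fraud rules and the general "assistant" are toy stand-ins; the
# thresholds and prompt handling are hypothetical.

def specialized_fraud_flag(amount: float, country: str, home_country: str) -> bool:
    """A narrow system: one task, hand-tuned features, nothing else."""
    # Purpose-built logic can perform well on its single task,
    # but it cannot be repurposed for anything outside that task.
    return amount > 10_000 or country != home_country


def general_assistant(prompt: str) -> str:
    """A broad system: one interface, many loosely related tasks."""
    # A general model routes all of these through the same text interface;
    # here we only mimic that breadth with canned branches.
    if "remind" in prompt.lower():
        return "Okay, reminder set."
    if "translate" in prompt.lower():
        return "Here is a translation..."
    return "Here is a drafted answer to your question..."


if __name__ == "__main__":
    print(specialized_fraud_flag(amount=14_500, country="FR", home_country="CA"))  # True
    print(general_assistant("Remind me to review the policy draft tomorrow"))
    print(general_assistant("Translate 'data sovereignty' into French"))
```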
4. Localization vs. Globalization
Notably, I introduced this dichotomy as my scholarly work evolved, particularly while considering the place, context, and challenges of the Global South. This topic will be a focal point at the AI Global South Summit in Saint Lucia this October, where we will explore how to shape policy and governance for AI within the Global South context.
Let me explain further.
Localization focuses on adapting AI to meet local needs, cultural contexts, and regulatory environments. Examples include:
Localized E-commerce Platforms: Companies like Alibaba tailor their services to the unique shopping behaviors and preferences of consumers in different regions, providing localized recommendations and payment options.
Agricultural AI Solutions: AI applications in agriculture, such as those developed by Agrix Tech, are customized to address specific climate and soil conditions of regions in Sub-Saharan Africa, enhancing productivity and sustainability.
In contrast, Globalization advocates for developing AI with a universal framework, applicable across different regions and cultures. Examples include:
Global Health Monitoring Systems: AI platforms like BlueDot analyze global health data to track and predict the spread of infectious diseases, providing insights and alerts to international health organizations.
Global AI Ethics Initiatives: Collaborative projects like the Partnership on AI work to establish common standards and best practices for ethical AI development, applicable across different countries and industries.
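In engineering terms, this dichotomy often surfaces as a configuration decision: ship one universal policy everywhere, or adapt language, data residency, consent, and payment defaults to each region. The Python sketch below illustrates that choice with hypothetical regions and rules, not any real deployment.

```python
# A minimal sketch of localization vs. globalization as a configuration choice.
# Region names, rules, and fields are hypothetical.

GLOBAL_DEFAULTS = {
    "language": "en",
    "data_residency": "any",
    "consent_model": "opt-out",  # one universal policy applied everywhere
}

LOCAL_OVERRIDES = {
    "eu": {"language": "local", "data_residency": "in-region", "consent_model": "opt-in"},
    "west-africa": {"language": "local", "data_residency": "in-region", "payment": "mobile-money"},
}


def configure(region: str, localize: bool) -> dict:
    """Return the settings an AI service would ship with in a given region."""
    config = dict(GLOBAL_DEFAULTS)
    if localize:
        # Localization: adapt to the region's language, law, and infrastructure.
        config.update(LOCAL_OVERRIDES.get(region, {}))
    return config


if __name__ == "__main__":
    print(configure("west-africa", localize=False))  # globalized: same everywhere
    print(configure("west-africa", localize=True))   # localized: adapted to context
```

The point of the sketch is that neither branch is neutral: the globalized path encodes one jurisdiction's assumptions as the default, while the localized path makes regional context an explicit, first-class part of the design.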
Understanding AI ideology and politics is crucial for grasping the stance and impact of AI-powered solutions. AI does not operate in a vacuum; it is shaped by the values, biases, and intentions of its creators, which in turn influence how it is designed and deployed. By examining the underlying ideologies and political dimensions of AI, we gain insight into how these technologies reflect and perpetuate specific power dynamics, ethical standards, and societal values.
This understanding helps us assess the motivations behind AI solutions and their potential implications for various stakeholders. It reveals how AI can reinforce or challenge existing power structures, affect equity and fairness, and align with broader societal goals. Moreover, recognizing these factors is essential for crafting informed policies and governance frameworks that ensure AI serves the public good, addresses biases, and promotes transparency and accountability. Ultimately, a full comprehension of AI’s ideological and political context enables us to better manage its complexities and drive more equitable and effective technological outcomes.
Lastly, understanding AI through the four dichotomies I have presented (Centralization vs. Decentralization, Autonomy vs. Control, Application Specialization vs. Generalization, and Localization vs. Globalization) offers a nuanced perspective on AI’s future trajectory. Each dichotomy presents distinct challenges and opportunities that require careful consideration from policymakers, technologists, and society as a whole. By navigating these dynamics thoughtfully, we can harness AI’s transformative potential to foster a more equitable, ethical, and sustainable world.
https://www.humantechfutures.ca/
About Me: Hello, I’m Kem-Laurin, co-founder of Human Tech Futures. At Human Tech Futures, we are dedicated to empowering our clients to navigate future planning with confidence. We offer tailored courses and training designed to address today’s challenges and opportunities.
Innovation and transformation are at the heart of everything we do, and we take a human-centered approach at every step. We recognize the uncertainty that the future may bring, which is why we provide a range of customized engagement packages for individuals and organizations alike. Whether you’re an individual ready to embrace change, a business aiming to stay ahead, or an organization working to shape a better future, we’re here to support your journey.
Connect with us at https://www.humantechfutures.ca/contact