Interrogating AI: What did the AI know and when did it know it?
  • 22 Mar 2024
  • 12 Minutes to read



Thank you to Kem-Laurin Lubin for sharing her insights and stories in our knowledge base.

Click here to read this article on Medium.

“The life of the law has not been logic; it has been experience.” — Oliver Wendell Holmes Jr.


In 2021, I published a blog post on Medium entitled “Interrogating Big Tech Algorithms: What did the AI know, and when, how, and why?” This exploration served as the foundation for my academic investigations, which, intriguingly, were situated not within the realm of Law but nestled in the interdisciplinary space of Computational Rhetoric. My research and writings are framed within the concept of Tech Humanism, a perspective that emphasizes the integration of technology into society in a way that enriches and upholds human values. Through this lens, I try to unravel the complex interactions between technology, language, and human experience, contributing to a more ethical and transparent digital world.

I am fascinated by the fundamental components of AI models — the ‘ingredients,’ to use a culinary metaphor.

My aim is to unravel the underlying principles steering the decision-making and creative facets of AI. More specifically, I am intrigued by the subtle rhetorical features embedded within AI systems, which are capable of generating a wide range of effects and, in turn, profoundly influencing both individuals and communities.

For example, I explore the impacts of AI on human aspects of employment — such as the processes of hiring and firing — and in the mechanisms of financial decision-making, like the algorithms banks employ for risk assessment.

In this academic space, I confront pressing questions:

Is a person’s name merely a label, or does the AI ascribe certain characteristics to names, and if so, how can we counteract this bias? Does one’s postal code influence an AI’s employment or university admissions decisions? These questions guide my inquiry, prompting a deeper examination of AI’s societal roles and responsibilities.

Inquiring minds want to know: “What did the AI know, and when, how, and why did it know it?”
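One concrete way to probe such questions is a counterfactual audit: hold every other field of an application constant, vary only the name, and watch whether the model’s output moves. The sketch below is purely illustrative; `score_applicant` is a hypothetical stand-in for whatever black-box system is under scrutiny, not any real vendor’s API.

```python
# Counterfactual name-swap audit: a minimal sketch, not a production fairness tool.

def score_applicant(profile: dict) -> float:
    # Hypothetical stand-in for the model under investigation; a real audit
    # would query the actual system through its own interface.
    return 0.5 + 0.05 * profile["years_experience"]

def name_swap_audit(profile: dict, names: list[str]) -> dict[str, float]:
    """Score the same profile under different names; any spread is suspect."""
    return {name: score_applicant({**profile, "name": name}) for name in names}

base = {"name": "", "years_experience": 7, "postal_code": "N2L 3G1"}
results = name_swap_audit(base, ["Emily Walsh", "Lakisha Washington", "Wei Zhang"])
spread = max(results.values()) - min(results.values())
print(results)
print(f"spread = {spread:.3f}")  # a nonzero spread flags sensitivity to names
```

The same swap can be run over postal codes to test whether a model treats neighbourhood as a proxy for protected characteristics.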

Understanding these mechanisms is crucial for ensuring transparency in technologies that significantly affect our lives, both individually and collectively. By unraveling the underlying rhetoric of AI, we can pave the way for a future where technology decisions are transparent, comprehensible, and aligned with societal values. So it came as no surprise to me when this series of events took place: the European Union passed the Artificial Intelligence Act, and the U.S. Securities and Exchange Commission (SEC) fined two investment advisers, Toronto-based Delphia Inc. and San Francisco-based Global Predictions Inc., for making false and misleading statements about their use of artificial intelligence. In effect, regulators have begun asking about the nature of the training model.

Let me brief you on both events, in case you have not been following the recent news:

Event 1: EU passes the Artificial Intelligence Act

The Artificial Intelligence Act (AI Act) is a European Union (EU) regulation on AI. Proposed by the European Commission on 21 April 2021 and passed by the European Parliament on 13 March 2024, it seeks to establish a common regulatory and legal framework for AI.

For EU citizens, this legislation means an enhanced level of protection against potential abuses and risks associated with AI technologies. It aims to safeguard fundamental rights, democracy, the rule of law, and environmental sustainability. It bans, for example, untargeted scraping of facial images and the unrestricted use of biometric categorization systems.

High-risk AI applications, like those used in critical infrastructure, law enforcement, and employment, will be subject to stringent requirements to mitigate risks, ensure transparency, and maintain human oversight.

Moreover, EU citizens will gain the right to lodge complaints about AI systems and demand explanations for decisions made by high-risk AI that affect them. This represents a significant shift towards greater transparency and accountability in the deployment of AI technologies within the EU. The legislation is expected to set a global benchmark for AI regulation, impacting not just EU citizens but potentially setting standards worldwide.

Overall, the AI Act is designed to promote innovation while ensuring that AI systems deployed in the EU are safe and respectful of human rights and democratic values. This legislation could increase trust in AI technologies among EU citizens, fostering an environment where AI can be used beneficially and responsibly. Other nations are undoubtedly watching this landmark legislation for a sense of what may happen next in conversations around AI globally.

Event 2: Securities and Exchange Commission (SEC) fines two investment advisers

And then in the same week, the U.S. Securities and Exchange Commission (SEC) fined two investment advisers, Toronto-based Delphia Inc. and San Francisco-based Global Predictions Inc., for making false and misleading statements about their use of AI. They agreed to pay a combined $400,000 in fines without admitting or denying the charges. This action is part of the SEC’s increased scrutiny of claims companies make about AI technologies.

For end users of services from Delphia Inc. and Global Predictions Inc., the SEC’s action could indicate inaccuracies in the advertised AI capabilities they relied on for investment decisions, which may erode trust in the reliability of these firms’ services. Users may need to reevaluate their investment strategies and seek additional information or assurances regarding the technology these advisers use. It underscores the importance of due diligence and skepticism when evaluating investment advice based on advanced technologies like AI.

Not surprisingly, the situation is gaining momentum, and we must remain vigilant for a potential influx of cases that will inevitably seek to scrutinize the AI itself.

As I continue to evolve within this academic world, overwhelmed by a deluge of stories and anecdotes, I often find myself daunted by the vast, unregulated expanse where Big Tech operates unchecked. Yet it is within this very wilderness that I see the critical need for probing questions about AI models and their potential impact on us. These questions, while numerous and varied, include, but are certainly not limited to, inquiries such as:

1. What specific datasets were used to train the AI model, and how do you ensure they are free from bias and representative of diverse populations?

2. Can you provide a detailed explanation of the AI model’s decision-making process, including any algorithms or logic used to reach conclusions?

3. How does your organization address and rectify errors or inaccuracies identified within the AI model’s outputs?

4. What measures are in place to ensure the privacy and security of data processed by the AI model, especially sensitive personal information?

5. How do you ensure compliance with applicable laws and regulations regarding the use of AI in your operations, including data protection and non-discrimination statutes?

These are the questions I think about in my research, and they are deeply significant within legal tech regulatory circles for several reasons, reasons that inform my call for heuristic guidance in the design of AI-powered systems. You can read my peer-reviewed paper calling for heuristics in the HCII 2022 Proceedings.

1. Data transparency and bias mitigation

The question regarding the specific datasets used to train the AI model and ensuring they are free from bias and representative of diverse populations is fundamental in legal tech regulation for ensuring fairness and equity. In the legal context, biased AI can lead to unjust outcomes, such as discriminatory legal decisions or enforcement. Legal tech regulators are concerned with how data is sourced, used, and whether it incorporates a wide range of demographics to prevent systemic biases from being encoded into AI systems.
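To make this concrete, one widely cited screening heuristic from US employment law is the four-fifths (80%) rule: each group’s selection rate is compared to that of the most-favoured group, and a ratio below 0.8 is a conventional red flag. The sketch below uses invented toy records; it is a screening heuristic only, not a legal determination.

```python
# Disparate-impact check via the four-fifths rule: a minimal sketch with toy data.
from collections import defaultdict

decisions = [  # (group, hired) toy records; a real audit would use model outputs
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
for group, hired in decisions:
    counts[group][0] += hired
    counts[group][1] += 1

rates = {g: hired / total for g, (hired, total) in counts.items()}
best = max(rates.values())
for g, rate in rates.items():
    ratio = rate / best
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"group {g}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {flag}")
```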

2. Explainability and transparency of decision-making

Asking for a detailed explanation of the AI model’s decision-making process is vital in legal settings where decisions can significantly affect people’s lives and liberties. Transparency is crucial for accountability, especially in legal tech where decisions may need to be explained in court. Regulators are interested in whether AI models can provide understandable explanations for their outputs to ensure they align with legal reasoning and can be scrutinized for fairness and accuracy.
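For simple linear scoring models, a decision-level explanation can be as direct as listing each feature’s signed contribution to the score. The sketch below uses invented weights and feature names; non-linear models need model-appropriate methods such as SHAP or LIME, which this toy example does not cover.

```python
# Decision-level explanation for a linear scoring model: a minimal sketch.
# Weights, bias, and features are invented for illustration only.

weights = {"years_experience": 0.06, "employment_gap": -0.15, "certifications": 0.10}
bias = 0.2

def explain(applicant: dict) -> None:
    """Print the score and each feature's signed contribution, largest first."""
    contributions = {f: weights[f] * applicant[f] for f in weights}
    score = bias + sum(contributions.values())
    print(f"score = {score:.2f}")
    for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {feature}: {c:+.2f}")

explain({"years_experience": 4, "employment_gap": 1, "certifications": 2})
```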

3. Error management and rectification

The question about addressing and rectifying errors or inaccuracies is critical because it concerns the reliability and trustworthiness of AI systems in legal contexts. Legal tech regulators are concerned with how organizations detect, report, and correct errors in AI outputs, especially since these errors can lead to wrongful legal outcomes. This also ties into the broader issue of accountability and continuous improvement within AI systems used in legal applications.

4. Privacy and security measures

The query about privacy and security measures touches on the significant concerns regarding data protection within the legal tech space. Given the sensitivity of legal data, regulators are keenly interested in how organizations protect this information, particularly when it involves personal or confidential data. Legal tech companies must adhere to strict data protection laws and regulations, such as the General Data Protection Regulation (GDPR) in Europe or various state laws in the U.S., and must ensure that their AI systems comply with these requirements.
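As one small illustration of data minimization, identifiers can be pseudonymized with a keyed hash before they ever reach a training or inference pipeline. This is a minimal sketch, assuming the key lives in a proper secrets manager; it is one measure among many, not a GDPR compliance strategy on its own.

```python
# Pseudonymizing identifiers before they enter a model pipeline: a minimal sketch.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"  # assumption: fetched from a vault

def pseudonymize(identifier: str) -> str:
    """Deterministic keyed hash so records can be joined without raw IDs."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"client_id": pseudonymize("jane.doe@example.com"), "matter": "contract review"}
print(record)  # the raw email never reaches the training or inference data
```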

5. Regulatory compliance and legal obligations

Finally, the question of compliance with applicable laws and regulations is fundamental because it concerns the legal and ethical use of AI in legal settings. This includes adherence to data protection laws, non-discrimination statutes, and other relevant regulations. Legal tech regulators are interested in how organizations ensure their AI systems are not only effective but also lawful and ethical. This involves constant monitoring, auditing, and updating of AI systems to align with evolving legal standards and practices.

Overall, these questions reflect the broad regulatory concerns surrounding the deployment of AI including ethical implications, accountability, transparency, and legal compliance. They are relevant because they address the critical balance between leveraging AI’s benefits in legal practice and ensuring that these technologies operate fairly, transparently, and within the bounds of the law.

This is just the tip of the iceberg of chilling questions that many Big Tech firms will face in the near future in the halls of government. While the wild west of AI unravels in its many forms and postures, I, for one, will be watching this space.

In a recent discussion, my friends and I drew a parallel with traditional businesses expanding beyond their original licensed scope, something we see a lot with digital companies. Originally, these businesses might have been licensed to sell specific items like shoes, but then they started offering unrelated products like alcohol and automobiles without proper authorization, at least from our vantage point as lay observers. Similarly, we observed how a company that began as a platform for sharing photos and life events has expanded its operations in concerning ways.

Let’s take Facebook (now Meta) as an example. Originally licensed as a social media platform, it has been collecting vast amounts of user data to refine its algorithms. This data collection and usage might breach the initial terms under which the company operated. Furthermore, without adequate government oversight, Facebook has allegedly employed AI in ways that, to some, represent a form of misinformation and disinformation. This has caused significant harm, not only to directly affected individuals but also to the broader community dealing with the consequences of misinformation.

As engaged citizens of the digital era, we urgently appeal to policymakers, regulators, and other key stakeholders to swiftly address the urgent need for digital literacy and oversight. This call to action comes in response to the profound disruptions and challenges posed by major technology firms, which have significantly impacted the lived experiences of countless individuals. These disruptions have been exacerbated by a conspicuous lack of oversight from those entities traditionally responsible for safeguarding our collective digital well-being.

The critical moment has arrived for us to collectively confront and deliberate on the establishment of effective guardrails for our society, particularly in the context of the rapidly evolving landscape of AI. It is imperative that we develop a comprehensive understanding of AI’s current dynamics and potential future directions. By doing so, we can ensure that our approach to digital governance is both informed and adaptive, capable of protecting the public interest while fostering innovation and respecting individual rights. This endeavor is not just about mitigating risks but also about empowering citizens and enabling them to thrive and flourish in this new digital age.

Afterword

As I wrap up this essay, so much is unfolding. Currently, Facebook is under scrutiny because it now demands a per-account fee of €9.99/month on the web or €12.99/month on mobile from European users who want to avoid its tracking. No other choice is offered, which forces users either to accept the fee or to surrender their privacy entirely; these are problems created, in part, by Facebook itself.

Mind-boggling!

In response to this, and other Big Tech shenanigans, a group of Members of the European Parliament (MEPs) wrote to the European Commission regarding the regulation of online content.

Their letter highlights concerns about the impact of online platforms’ algorithms and calls for regulations that ensure fair compensation for content creators, particularly in the music and journalism industries. MEPs are urging the Commission to prioritize legislation that addresses these issues, emphasizing the importance of protecting creators’ rights and ensuring a level playing field in the digital marketplace.

So much is unravelling and I hope to continue this conversation soon.

References

1. “European Parliament Passes Landmark AI Act, World’s First Comprehensive Law Regulating Artificial Intelligence.” Law.com International, 13 Mar. 2024, www.law.com/international-edition/2024/03/13/european-parliament-passes-landmark-ai-act-worlds-first-comprehensive-law-regulating-artificial-intelligence/.

2. “EU Parliament Approves AI Act with Large Majority.” Business & Human Rights Resource Centre, European Parliament, 13 Mar. 2024, www.business-humanrights.org/en/latest-news/eu-parliament-approves-ai-act-with-large-majority/.

3. “Russia Uses Facial Recognition to Detain Navalny Funeral Attendees.” Yahoo News, news.yahoo.com/russia-uses-facial-recognition-detain-202142978.html. Accessed 21 Mar. 2024.


About me: Hello, my name is Kem-Laurin, and I am one half of the co-founding team of Human Tech Futures. At Human Tech Futures, we’re passionate about helping our clients navigate the future with confidence! Innovation and transformation are at the core of what we do, and we believe in taking a human-focused approach every step of the way.

We understand that the future can be uncertain and challenging, which is why we offer a range of engagement packages tailored to meet the unique needs of both individuals and organizations. Whether you’re an individual looking to embrace change, a business seeking to stay ahead of the curve, or an organization eager to shape a better future, we’ve got you covered.

Connect with us at https://www.humantechfutures.ca/contact

