Wake up, humans! Our data crisis is really a humanity crisis
  • 16 Apr 2024
  • 15 Minutes to read

Article Summary

Thank you to Kem-Laurin Lubin, PH.D - C for sharing her knowledge and insight with us.

This article is also available on Medium.

From bias to greed: the many data exploits

“What I may see or hear in the course of treatment or even outside of the treatment in regard to the life of men, which on no account one must spread abroad, I will keep to myself, holding such things shameful to be spoken about.” — Hippocratic Oath.

Are we in the middle of a data crisis, or have we, as a society, merely unveiled a more profound issue with our human values and behaviours? I think both. This article argues that the root of our “data crisis” lies not in the numbers, bytes, or systems, but deep in our human interactions, decisions, and ethics, informed by our seeming obsession with data. And now, data is in control.

Siri, are you listening?

In 2017, as I sat in our living room with my family, planning a summer holiday to Europe, we reached a turning point in our lives — one we would call back to, though reduced to the joke that “Siri is listening; shhh.” We laugh it off, but it is not laughable for two Gen-Xers, one of whom has Luddite tendencies and only one image of himself on the internet — a fact he reminds me of from time to time, whenever I decide to post any images of me online.

That day the air in our living room was filled with excitement and possibilities, though we hadn’t settled on a specific destination and were just having the discussion, with many travel books strewn on the side table. Our options included Spain, Portugal, and Italy — places I had visited during my university years and wanted to see again, this time with my family. Eventually, we chose Italy, but strangely, it seemed as if AI had anticipated our choice even before we did any online search.

Just moments after our family conversation, my digital feeds were inundated with accommodations and travel tips for Italy — things to do and places to eat. I assure you I had not yet even run an online search, which added to the shock of the invasion.

While many of us have become numb to the invasive nature of technology, my partner and I, both Gen-Xers who grew up valuing privacy, found the invasion of our personal space jarring. To us, it was akin to being interrupted by an intrusive stranger while having a private conversation at a restaurant. Shocked and irate, we were compelled by that incident to convene a family meeting, tighten the privacy settings on all our devices, and scrub our location data. While some might welcome such interactions, for my partner and me, it was a clear signal to curtail our reliance on smartphones.

This unsettling experience was not a one-off. Since then, we, the adults in the household, have intentionally limited our conversations to areas away from mobile phones and similar eavesdropping devices — even after adjusting our privacy settings. We’ve also established a firm rule against keeping mobile phones in private spaces within our home.

Some may deem this behaviour overly cautious, but I see it as a necessary measure of self-defense against the ongoing encroachment on our privacy — we never agreed to become mere data points for the caprices of big tech companies.

The death of serendipity

There’s a certain charm in serendipity and the accidental that seems to be diminishing in today’s world, and to me, much of it has to do with the constant barrage of recommendation engines — next-level marketing that finds you and suggests purchases based on your known behavioural history. There is little wandering left in much of our existence, and many are now recognizing the challenges of living with ubiquitous machines that seem to know what we want to do before we even think of it.

This death also reflects a broader cultural shift driven by the digital age’s algorithms and data analytics. The magic of stumbling upon a new favourite book in a store, or meeting someone who changes your life by chance at a café, is being replaced by curated experiences designed to optimize efficiency and predictability. These recommendation engines and targeted marketing strategies are so finely tuned to our previous behaviours that they often circumscribe our choices, funneling us into repetitive patterns and creating echo chambers that reinforce the familiar rather than introduce the novel.
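To make that funneling concrete, here is a minimal sketch of a behaviour-based recommender. The catalogue, genres, and scoring below are hypothetical simplifications — real engines are far more elaborate — but the feedback loop is the same: items resembling past clicks always outrank the genuinely novel.

```python
from collections import Counter

def recommend(history: list[str], catalogue: dict[str, str], k: int = 3) -> list[str]:
    """Rank unseen items by how often their genre appears in the click history."""
    genre_counts = Counter(catalogue[item] for item in history)
    unseen = [item for item in catalogue if item not in history]
    # Items from already-favoured genres always outrank novel ones,
    # so genres you have never clicked are structurally starved of exposure.
    return sorted(unseen, key=lambda item: genre_counts[catalogue[item]], reverse=True)[:k]

catalogue = {
    "thriller_1": "thriller", "thriller_2": "thriller", "thriller_3": "thriller",
    "poetry_1": "poetry", "travelogue_1": "travel",
}
print(recommend(["thriller_1", "thriller_2"], catalogue))
# -> ['thriller_3', 'poetry_1', 'travelogue_1']: more of the same, ranked first
```

Nothing here is sinister on its face; the echo chamber is simply what optimizing for predicted engagement looks like at scale.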

This technological determinism can make our lives seem pre-scripted. As we grow more reliant on smart technologies — from smartphones to smart homes — we risk losing the ability to act without prior data, to make choices that are not just unexpected but genuinely creative. The pervasiveness of such technology can lead to a flattening of human experience, where spontaneity and the unpredictable joys and challenges of life are dampened.

Many now recognize the need to step back and examine this trajectory. There’s a growing movement to reclaim the spaces where serendipity can thrive, suggesting perhaps a digital detox or the deliberate use of technology that does not exploit our data but instead enriches our human experiences without preconceptions. This might mean designing technologies that encourage exploration and surprise, or simply fostering environments — both physical and digital — that allow for the unknown and the unplanned to flourish.

Data — weapons of math destruction

A few years ago, I read Cathy O’Neil’s Weapons of Math Destruction. In it, O’Neil recounts the anecdote in which the major retailer Target used predictive analytics to analyze purchasing patterns and predict customer behaviours, including pregnancy. The story became widely discussed following an article in The New York Times in 2012.

In this instance, Target, an American retailer, developed a predictive model that assigned each customer a “pregnancy prediction” score based on their purchase history. The model identified certain products that, when purchased together, indicated a likelihood of pregnancy. For example, women on Target’s list suddenly started buying large quantities of unscented lotion around the beginning of their second trimester. Other predictive products included supplements like calcium, magnesium, and zinc.
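As a rough illustration of how such a score might work — the product weights and threshold below are hypothetical, since Target never published its model — consider a simple weighted sum over purchase indicators:

```python
# Hypothetical product weights; Target's actual model was never disclosed.
PRODUCT_WEIGHTS = {
    "unscented_lotion": 0.9,
    "calcium_supplement": 0.6,
    "magnesium_supplement": 0.5,
    "zinc_supplement": 0.5,
}

def pregnancy_score(purchases: set[str]) -> float:
    """Sum the weights of predictive products present in a purchase history."""
    return sum(weight for product, weight in PRODUCT_WEIGHTS.items() if product in purchases)

# A customer buying several of these items together crosses a
# (hypothetical) threshold and gets flagged for baby-item coupons.
history = {"unscented_lotion", "calcium_supplement", "magnesium_supplement"}
if pregnancy_score(history) > 1.5:
    print("flag customer for baby-item marketing")
```

The unsettling part is how little it takes: a handful of ordinary purchases, scored together, can reveal something a customer has told no one.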

The model was so accurate that Target began sending coupons for baby items to customers it predicted were likely pregnant. The controversy came to light when a father complained to a Target manager that his teenage daughter was receiving coupons for baby clothes and cribs. It turned out that the model had correctly identified her pregnancy based on her recent purchases, and she had not yet informed her family.

This anecdote is often cited as an example of the potential for predictive analytics to invade privacy and raise ethical concerns, especially when individuals are unaware of how their data is being used or the extent to which they are being profiled. O’Neil uses this now-famous example to highlight how algorithms, while powerful, can also perpetuate biases, invade privacy, and have unintended consequences.

The societal and human impact of our data trends

To explore the human aspect at the core of our data crises, it’s also crucial to examine the psychological and sociological factors shaping our interactions with technology, specifically this constant creation of more and more data about every aspect of our lives. Lest we forget: data can be biased, even though we tend to think of bias only in the context of textual narratives. As O’Neil notes, data is flawed, and therefore our deep reliance on data as the lifeblood of our current existence is already problematic.

She further argues that many of the algorithms and data models used today are flawed because they are built on historical data that reflects past prejudices and inequalities. O’Neil emphasizes that these biases are not just minor glitches, but can actually scale up and reinforce social inequalities through decisions made by these algorithms in areas such as hiring, policing, lending, and education.
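A toy example makes the mechanism plain. Assuming fabricated hiring records in which one group was historically rejected regardless of merit, even the simplest model trained on those labels reproduces the prejudice:

```python
from collections import Counter

# Fabricated historical hiring records: (group, hired). Group "B" was
# rarely hired regardless of merit -- a past prejudice baked into the labels.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 10 + [("B", False)] * 90

def majority_rule(records):
    """A naive 'model': predict the majority historical outcome per group."""
    outcomes = {}
    for group in {g for g, _ in records}:
        counts = Counter(hired for g, hired in records if g == group)
        outcomes[group] = counts.most_common(1)[0][0]
    return outcomes

print(majority_rule(history))  # {'A': True, 'B': False}
```

Nothing in the code is malicious; the bias rides in on the historical labels. That is exactly O’Neil’s point about scale — automate the learned rule, and yesterday’s inequality becomes tomorrow’s policy.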

She warns that these biased algorithms, which she calls the actual “weapons of math destruction,” can be opaque, unregulated, and uncontestable, even while they have significant impacts on people’s lives.

To me, this alone should raise red flags.

These “weapons” she suggests often operate in environments lacking transparency and accountability, making it difficult to challenge or even understand their outcomes. Her work encourages a more ethical approach to data science, urging developers to consider the social impact of their algorithms and to ensure they do not perpetuate or exacerbate existing disparities.

The sociological impact

Sociologically, our behaviours around data and technology are heavily influenced by societal norms and cultural expectations, which can drive a collective overreliance on technology, often without sufficient scrutiny or skepticism. By understanding these underpinnings, we can begin to see that our data issues are not merely technical faults but are reshaping — and derailing — the natural trajectory of human behaviour and societal constructs.

We are on a path to something unhuman — even cyborgish, an homage to Donna Haraway’s A Cyborg Manifesto, in which our merging with machines is both a liberation from traditional boundaries and a potential entry into new forms of domination. This intersection of technology and humanity beckons us to reconsider what we deem natural or essential, pushing us into territories that are as exciting as they are unsettling.

This insight is crucial for tackling the underlying issues of data mismanagement and promoting a more responsible and sustainable engagement with technology. Frequently, this mismanagement is ignored, overshadowed by the belief that such trends are an inevitable aspect of human evolution — falling prey to the twin ideologies of technological determinism and solutionism.

The fallacy of tech determinism & solutionism

Let me very briefly explain why I say tech determinism and solutionism are the ideological roots of our data crisis — one that can only get worse if not controlled. The topic is central to my doctoral research and on the tongues of many of my colleagues and peers: specifically, how companies assume we are all going to embrace and use technology, as if it were predetermined. Technological determinism is the belief that technology shapes society — that our digital tools and platforms inevitably steer customer behaviours and expectations. It’s like saying, “Because we have smartphones, we must have mobile apps for everything.” But here’s the catch: when businesses adopt a deterministic viewpoint, they sometimes forget the human element.

They might assume everyone loves using chatbots, leading to automated services that feel impersonal or, worse, frustrating. It’s like being handed a map in a language you don’t understand — sure, it’s a guide, but is it helpful?

Tech solutionism — the silver bullet fallacy

On the flip side, we have tech solutionism — the belief that every problem has a digital solution. It’s an optimistic view, but it can lead to over-engineering. Imagine a simple customer query that could be resolved in a two-minute phone call, now navigated through a labyrinthine online portal. It’s akin to using a sledgehammer to crack a nut. Customers don’t always want flashy gadgets; sometimes, they just want a straightforward answer. And I can go on; this has been my world.

The new Luddites: untangling our lives from the web of tech surveillance

In the posture of our failing techno-culture, our autonomy seems increasingly compromised, driven by the stealthy encroachments of recommendation algorithms that infringe upon our privacy. These algorithms, much like the exploitative contracts record companies impose on artists, operate under the guise of convoluted terms and agreements. As awareness of these practices grows, more individuals are questioning the use of their personal data as fodder for the engines of technological advancement — from the curation of training data for large language models (LLMs) to the analysis of our every click and hover.

This growing disillusionment has even permeated my own social circles, prompting a shift toward more analog forms of communication. We’ve rediscovered the value of offline interactions, a nostalgic throwback to a time unmediated by technology. This move away from digital oversight is not merely about protecting our privacy; it’s about reclaiming a sense of personal agency and fostering genuine connections that technology once promised but seldom delivered.

Amid this relentless digital onslaught, with scant regulatory oversight, many, including myself, are gravitating towards a semi-Luddite lifestyle — an increasingly challenging endeavor in our interconnected world. A poignant illustration of this shift is a book club I recently initiated, which convenes exclusively in person, in the serene setting of a local park. This deliberate choice to disconnect from technology and focus on local community engagement marks a step back from the pervasive influence of digital recommendation engines. It’s a celebration of discovery and community, standing in stark contrast to the often intrusive and prescriptive digital interactions that have derailed our daily lives. This return to simpler, more authentic modes of interaction isn’t just a preference — it’s becoming a necessary refuge for those seeking to preserve a sense of self in the digital age.

What we can learn from the Hippocratic Oath in retaining data dignity

During my tenure as the leader of the User Research Team at Research in Motion (RIM, now Blackberry), my responsibilities were heavily centred on securing Non-disclosure Agreements (NDAs) with users engaged in robust and never-ending product testing. This involved close coordination with the risk, legal, and compliance departments to establish rigorous controls over how we used and protected testers’ videos, data, and other sensitive materials. A critical aspect of our protocol was setting definitive retention periods for this data, typically ranging from three to seven years, after which the information was either securely destroyed or archived.
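In code terms, the retention rule we enforced could be as simple as the sketch below. The three-to-seven-year windows come from the protocol described above; the record types and everything else are hypothetical illustrations, not RIM’s actual tooling.

```python
from datetime import date, timedelta

# Retention windows in years; the 3-7 year range mirrors the article,
# while the record types themselves are hypothetical.
RETENTION_YEARS = {"session_video": 3, "interview_notes": 5, "nda_records": 7}

def disposition(record_type: str, collected_on: date, today: date | None = None) -> str:
    """Return what should happen to a record under its retention window."""
    today = today or date.today()
    limit = timedelta(days=365 * RETENTION_YEARS[record_type])
    return "retain" if today - collected_on < limit else "destroy_or_archive"

print(disposition("session_video", date(2019, 6, 1)))  # -> "destroy_or_archive"
```

The point is not the code but the discipline it encodes: data had an expiry date, and someone was accountable for honouring it.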

With all that effort, you would think we were medical professionals observing the Hippocratic Oath. For those not familiar with this Oath, a brief summary follows.

The Hippocratic Oath emphasizes a commitment to treating the ill to the best of one’s ability, avoiding harm or injustice, respecting patients’ privacy, and generally maintaining high standards of professional conduct in medicine. And by all accounts, it is something the field of medicine takes seriously. My father-in-law is a retired surgeon who taught, wrote about, and practiced surgery.

A few years back, a very famous and recognizable American came to Canada because my father-in-law had been the first to describe a specific surgery, which was then performed successfully on that noted famous person. He would not even hint at the specifics, even after a few Dark and Stormy mixes, his drink of choice. We only know that the family of that famed patient found a way to ship him some very expensive wines as a thank-you — something that surprised him and, frankly, something I think he was not too comfortable with. Those wines remain unopened, so many years later.

I say all this to say: doctors are bound by deep obligations of privacy, and yet here we are today, our privacy relegated to the bargain basement, where the indefinite retention and use of decades-old data by numerous companies continues unabated, fueling profits at the expense of individual privacy.

This ongoing practice leaves many wondering how to begin reclaiming their data dignity. The stark difference in data management underscores a significant shift in how personal information is perceived and utilized in the tech industry today. The challenge now is not just about managing data but about restoring respect and control over personal information in an era where data has become a perpetual commodity.

As someone who consciously avoids the use of analytics and biometric data, I find comfort in the unknown. Like countless generations before us, living without detailed personal analytics isn’t just possible; it’s often preferable. The incessant tracking and analysis of our data not only infringe upon our privacy but also reflect a troubling legacy of corporate misbehaviour and an overwhelming obsession with metrics in every facet of life. Indeed, the serendipity of life — unplanned and untracked — holds a special charm that I, among others, deeply cherish. This choice to disconnect underscores a broader critique of how personal data is used and the pervasive impact it has on society. The need for governments and institutions to step in has never been more urgent. They must help foster a comprehensive understanding of data literacy that allows individuals to grasp the ubiquity of data and analytics — how it’s collected, analyzed, and utilized — and its potential for generating mistrust.

Furthermore, we must not overlook the fact that reliance on data and algorithms can inadvertently perpetuate biases. If people begin to suspect that data-driven decisions are biased or unfair, their willingness to trust or endorse these decisions will likely diminish over time. It is crucial that we become more aware of how data can carry biases — sometimes unintentionally, but at times intentionally as well. Moreover, the relentless pursuit of optimization and data-driven decision-making threatens to erode the human touch — a critical element of our interactions and decisions. There is a growing concern that an over-dependence on data removes the nuanced and emotional aspects of human decision-making, raising fears that machines may never fully comprehend or address the complex needs and emotions of individuals.

This dialogue is not merely about resisting technology but advocating for a balanced approach where technology serves humanity without overriding the essence of human experience. It is a call to ensure that as we advance technologically, we do not lose sight of the values and experiences that define us as human beings.

For companies that rely on data to fuel their business practices — including employee tracking, which has become commonplace — there must be regulatory interventions that challenge this focus, which often treats the human as a mere cog in a machine, akin to a factory line. The use of data in such endeavors must also be compliant with broader conversations about data usage. Companies must engage more openly in discussions about transparent and ethical data practices, improving data literacy, ensuring data security and privacy, and maintaining a balance between automated decisions and human decision-making practices.


About me: Hello, my name is Kem-Laurin, and I am one half of the co-founding team of Human Tech Futures. At Human Tech Futures, we’re passionate about helping our clients navigate the future with confidence! Innovation and transformation are at the core of what we do, and we believe in taking a human-focused approach every step of the way.

We understand that the future can be uncertain and challenging, which is why we offer a range of engagement packages tailored to meet the unique needs of both individuals and organizations. Whether you’re an individual looking to embrace change, a business seeking to stay ahead of the curve, or an organization eager to shape a better future, we’ve got you covered.

Connect with us at https://www.humantechfutures.ca/contact

