Predicting Your Behaviour: The Algorithms That Know Us (Maybe Too Well)

You’re not just being paranoid—your phone has an intimate understanding of your preferences and behaviour, thanks to algorithms.

Do you ever get the eerie feeling that your smartphone is peering into your soul? It somehow knows exactly what interests you, who you are, and what you’ll do next. You’re not just being paranoid—your phone has an intimate understanding of your preferences and behaviour, thanks to algorithms.

Algorithms are everywhere, operating behind the scenes to curate our digital experiences. That Google search you just performed? Algorithms analyzed your query and sifted through millions of possibilities to serve up what they calculated would be the most relevant results for you. The Facebook and Instagram feeds showing you a personalized selection of posts? Algorithms, again, built to appeal to your tastes. The suggested videos on YouTube, the “Discover” playlist on Spotify, and the products being marketed to you on Amazon—all assembled just for you by algorithms.

These bits of code have become adept at predicting our clicks and purchases by crunching massive amounts of our data. And most of the time, it feels magical how spot-on their suggestions can be. Algorithms can help us discover new yet specific interests like Balearic punk or hand-lettering tutorials. They dish out content from the vast sea of the Internet that they know we’ll enjoy. And they push our purchasing habits along, nudging us to buy that extra pair of shoes or treat ourselves to the expensive olive oil.
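
If you’re curious what “crunching our data” actually looks like, here is a deliberately tiny sketch in Python, using made-up history for a single user. It estimates a click-through rate per content category and ranks new items by it. Real platforms use machine-learned models over thousands of signals; the categories and numbers below are invented for illustration.

```python
# A toy click-predictor (hypothetical data, not any real platform's code):
# estimate per-category click-through rates from one user's history,
# then rank candidate items by those rates.
from collections import defaultdict

history = [  # (category, clicked?) events logged for one user
    ("shoes", 1), ("shoes", 1), ("shoes", 0),
    ("olive oil", 1), ("gardening", 0), ("gardening", 0),
]

shown, clicks = defaultdict(int), defaultdict(int)
for category, clicked in history:
    shown[category] += 1
    clicks[category] += clicked

ctr = {c: clicks[c] / shown[c] for c in shown}  # click-through rate
candidates = ["gardening", "shoes", "olive oil"]
ranked = sorted(candidates, key=lambda c: ctr.get(c, 0.0), reverse=True)
print(ranked)  # ['olive oil', 'shoes', 'gardening']: what you'll likely click
```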

Mostly, we’ve welcomed algorithms into our lives for the convenience they offer. But are we sacrificing too much privacy and autonomy in the process? How much power should we give these all-knowing lines of code? Let’s explore some implications of our algorithm-driven world.

The Perks and Pitfalls of Personalization

A while back, I came across a movie on Netflix that made me laugh harder than I had in months. Among the suggestions that followed was another wacky comedy that Netflix predicted I’d love based on my previous watches. I clicked it, cracked up the whole time, and dove into the next suggested film. This went on for hours, as if Netflix were peering into my humour sensibilities and handpicking each one just for me.

Having a virtual assistant who “got” my taste in movies so well felt great. But in the blur of all that binge-watching, I forgot that the algorithm wasn’t an all-knowing Netflix guru—it was cold, calculated code that had sussed out my preferences based on thousands of minor viewing decisions I’d made over the years. 
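
For the curious, the core trick behind “because you watched…” can be sketched in a few lines. What follows is a bare-bones, hypothetical version of user-based collaborative filtering with invented users and ratings; Netflix’s real system is far more elaborate, but the principle is the same: find viewers whose histories resemble yours and borrow their favourites.

```python
# A bare-bones collaborative-filtering sketch with invented ratings.
# Find the user whose tastes most resemble mine, then suggest the
# titles they rated that I haven't seen.
import math

ratings = {  # user -> {title: rating out of 5}
    "me":    {"Wacky Comedy 1": 5, "Wacky Comedy 2": 5, "Grim Drama": 1},
    "alice": {"Wacky Comedy 1": 5, "Wacky Comedy 3": 4, "Grim Drama": 1},
    "bob":   {"Grim Drama": 5, "War Epic": 4, "Wacky Comedy 1": 1},
}

def similarity(a, b):
    """Cosine similarity over the titles two users have both rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[t] * b[t] for t in shared)
    norm_a = math.sqrt(sum(a[t] ** 2 for t in shared))
    norm_b = math.sqrt(sum(b[t] ** 2 for t in shared))
    return dot / (norm_a * norm_b)

me = ratings["me"]
soulmate = max((u for u in ratings if u != "me"),
               key=lambda u: similarity(me, ratings[u]))
picks = [t for t in ratings[soulmate] if t not in me]
print(soulmate, picks)  # alice ['Wacky Comedy 3']
```

Those thousands of minor viewing decisions are just more rows in that ratings table.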

And here lies an inherent tension—algorithms provide a hyper-personalized experience that we find convenient. But how much should we rely on them, especially when their insights come from sweeping up our data?

On the positive side, algorithms can enhance our lives:

  • They help us discover new interests and hidden talents—say, by suggesting a baking tutorial after you buy a few cookbooks online. 
  • They make searching efficient—Google returns results catered to you in a fraction of a second.
  • They save us time—algorithm-generated playlists free you from hunting down new music yourself. 

However, there are also some drawbacks:

  • They can manipulate us and impact our choices—playlists based on your listening history subconsciously shape your music taste.
  • They filter out diversity—news curated just for you creates “echo chambers” and “filter bubbles.”
  • They can reinforce biases—an algorithm trained on biased data absorbs and amplifies those same biases.

So, should we welcome our new algorithmic overlords or rage against the machine? As with most technologies, the truth lies in the grey area—algorithms can be beneficial and unsettling. The key is staying mindful of how they work so you can better evaluate their influence.

Take Control of Your Data Destiny 

Here’s a much-discussed experiment: Tell Alexa or Google Assistant to add pasta, marinara sauce, and garlic bread to your shopping list. Then load up Facebook and talk aloud about making spaghetti tonight, but don’t type anything. Many people swear that ads for pasta-centric recipes or Italian restaurants appear on their feed within minutes.

The popular explanation is that those devices’ microphones are always listening. Researchers who have tested this claim have found no evidence that apps record ambient audio to target ads; the likelier mechanism is more mundane and arguably creepier. The shopping list you just dictated is data you volunteered, and combined with cross-device tracking and the detailed profile advertisers already hold on you, it’s more than enough to predict a craving for Italian food. Creepy, no?

But it’s not just our voices. Our many apps and accounts build comprehensive profiles about our habits, locations, relationships, interests, ages, genders, and more. This enables algorithms to predict our needs and desires with frightening accuracy—a level of personalized persuasion unprecedented in human history.

So, how can you gain more control over your data? Here are some tips:

  • Review and adjust privacy settings in your apps and social media. Opt-out of sharing data.
  • Delete apps you don’t need or use.
  • Use ad blockers while browsing to dodge targeted ads. 
  • Try alternative search engines like DuckDuckGo that don’t store your info. 
  • Use a VPN to encrypt your web traffic and mask your IP address.
  • Clear cookies and cache from your browser to wipe your slate clean.

Granted, reclaiming your data requires more effort and sacrifices some convenience. But it’s worth considering as algorithms become more astute. Do you want Alexa announcing that you’re out of laxatives in front of your family? Yeah, didn’t think so.

The Bias Behind the Code 

Algorithms don’t actually “know” you or make human-like judgments—they find patterns in large datasets and make statistical guesses based on probabilities. Their predictions are only as good as the data used to train them. Unfortunately, that data often reflects existing societal biases and blind spots.

For example, facial recognition software has struggled to accurately identify people of colour, in part because the algorithms were designed using datasets dominated by white faces. Predictive policing systems disproportionately target low-income neighbourhoods based on historically biased arrest data. And Facebook’s algorithms have been found to show hiring ads to predominantly male audiences, illustrating how discrimination can be embedded in code.
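
To see how this happens without anyone writing an explicitly biased rule, consider a hypothetical toy model that scores job candidates purely by the historical hiring rate of “people like them.” The data below is invented, and no real system is quite this crude, but the mechanism is the one at work: skewed history in, skewed predictions out.

```python
# Hypothetical skewed training data: (group, was_hired) records.
history = ([("A", True)] * 80 + [("A", False)] * 20 +
           [("B", True)] * 20 + [("B", False)] * 80)

def score(group):
    """Naive 'model': predicted hire probability = historical hire rate."""
    outcomes = [hired for g, hired in history if g == group]
    return sum(outcomes) / len(outcomes)

# Two equally qualified candidates get very different scores.
print(score("A"), score("B"))  # 0.8 0.2: the bias in the data, echoed back
```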

These biases have serious real-world consequences—accidental but still harmful. However, the opacity of algorithms makes it challenging to inspect their actions. Their machinations occur in server farms accessible to just a handful of engineers. Users see the output, not the process. 

It’s unsettling to think secretive formulas churning away in Silicon Valley could shape our opportunities in life. But there are glimmers of hope, as algorithmic auditing processes emerge:

  • Researchers are developing tools to analyze datasets and algorithms for signs of bias; a stripped-down example follows this list. This brings more transparency.
  • Governments are proposing new regulatory frameworks around verifying algorithmic fairness. 
  • Activist groups pressure tech companies to address algorithmic discrimination. Employees on the inside are also speaking out.
  • Class action lawsuits are holding tech companies accountable for algorithmic harm. 
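
What might one of those bias-analysis tools look like at its simplest? Here is a minimal demographic-parity check, a standard first-pass audit metric. It assumes you can log each decision alongside a group label (itself a contested design choice), and the records below are invented.

```python
# A minimal demographic-parity audit over logged decisions.
# Each record is (group, favourable_outcome); a large gap between
# groups' favourable-outcome rates is a red flag worth investigating.
def parity_gap(decisions):
    rates = {}
    for group in {g for g, _ in decisions}:
        outcomes = [o for g, o in decisions if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values()), rates

gap, rates = parity_gap([("A", 1), ("A", 1), ("A", 0),
                         ("B", 1), ("B", 0), ("B", 0)])
print(rates, round(gap, 2))  # A ~0.67 vs B ~0.33, gap 0.33
```

A parity gap alone doesn’t prove discrimination, and real audits also examine error rates and calibration, but even this crude measure makes the invisible a little more visible.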

Society is awakening to the unintended side effects of algorithms. But much work is left to expose their inner workings, test their integrity, and diversify the teams behind them. The goal should be to empower users, not just to optimize for engagement and profit.

The Delicate Balance of Power

So, where do we go from here? Is it time to dismantle Big Tech, delete all our accounts, and live off the grid? Probably not—algorithms bring real utility to our lives. Google Maps has helped me get lost far less often, and I’ve discovered delightful new artists on Spotify I’d never encounter on the radio. 

But it is vital to remain vigilant, think critically, and push for transparency. We cannot take algorithms’ objectivity or infallibility for granted. Behind the complex code are still very human motives and incentives. There is immense power in controlling the information people see—it can shift political winds and sway personal choices. And power, as always, risks being abused without oversight.

We need to move forward carefully. Advancements in algorithms and AI will unlock incredible new potential. But the creators moulding this technology must respect the people subjected to their opaque machinations. Users should feel empowered, not patronized or manipulated. The goal should be to enhance humanity, not to control it.

The algorithms may know us, but we can still shape their impact. Through vigilance and thoughtful regulation, we can create a future powered by ethical algorithms that uplift society. But it will require proactive effort—technology’s march toward convenience rarely pauses for deeper questions of morality. It is up to us as engaged citizens to join the discussion and ensure that algorithms reflect not just what is profitable but also what is just.

So yes, be a little uneasy knowing that algorithms may better understand some part of your psyche than you do. But also recognize their promise if adequately directed. There is a delicate middle path between privacy and personalization, between oversight and innovation. The algorithms will keep learning more about us, but we can also learn more about them. Where this relationship takes us is yet to be seen, but it is within our power to shape it.

As we navigate this delicate balance of power, it is essential to cultivate a deep understanding of algorithms and their implications. Education is key. We need to demystify the workings of these algorithms and make them accessible to the public. By doing so, we can empower individuals to make informed decisions about their data and digital experiences.

Fostering diversity and inclusivity within the teams developing these algorithms is crucial. By bringing in a wide range of perspectives, we can mitigate biases and ensure that algorithms are fair and equitable. Actively seeking and amplifying marginalized voices can help challenge the status quo and prevent the perpetuation of existing societal inequalities.

Collaboration between different stakeholders is vital. Governments, tech companies, researchers, and activists must work together to establish regulatory frameworks that promote algorithmic fairness and accountability. This collaboration can lead to the creation of standardized auditing processes, increased transparency, and the development of best practices.

The responsibility falls on all of us as individuals to consciously navigate this algorithm-driven world. We must be mindful of our choices, the information we consume, and the data we share. By demanding transparency and engaging in discussions about the impact of algorithms on our lives, we can ensure that these powerful tools serve us rather than control us.

So, let us not be passive recipients of algorithmic predictions. Instead, let us reclaim our agency and actively take part in shaping the algorithms that know us. By doing so, we can harness the potential of these technologies while safeguarding our privacy and dignity. The future of algorithms is in our hands, and it is up to us to shape it for the betterment of society.

Bill Beatty

International Man of Leisure, Harpo Marxist, sandwich connoisseur https://4bb.ca / https://billbeatty.net
