A year or so ago, I wrote a piece of fiction called Reccr. Reccr was supposed to be an AI-driven recommendation engine that filtered life around you and used any and all data about you to refine that filter. We already see the rise of this behavior: Facebook’s “top posts” and “recommended” articles, Twitter’s “while you were away”, and pretty much every other service that lets you consume content has a “recommended” section that is partially, if not fully, driven by your own behavior.

There’s no need to read the story, so let me summarize it: it’s about two very different people with very different views, and how they used this engine to augment their lives. One of them became complacent, believing the world to be an ultimately good place with no conflicts, and lost his life because of it. The other experienced the opposite, came to believe the world was an ultimately evil place, and ended up taking the lives of innocent people.

Extreme examples, yes, but they’re meant to serve as an amplified version of the echo chamber. So what did my story ultimately discuss? The good, the bad, and the ugly.

Fictional Features

For the sake of simplicity, let’s look at the fictional features of this AI (a rough sketch follows the list):

  1. the ability to gather data from all of your online profiles (with your permission)
  2. the ability to create “recommendations” and “suggestions” in-app across your online profiles, based on your data
  3. the ability to plug into any number of outside services to learn more about you and give you better suggestions
  4. mass central data collection and on-the-fly learning across all of its users
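Here is that sketch: a guess at what an API surface for these four features might look like. Since Reccr is fiction, every class, method, and parameter name below is invented purely for illustration.

```python
# Hypothetical sketch of the fictional Reccr API; nothing here is a real
# service, and every name is invented for illustration.

class ReccrClient:
    def __init__(self, api_key: str):
        self.api_key = api_key

    def link_profile(self, user: str, network: str, oauth_token: str) -> None:
        """Feature 1: connect one of the user's online profiles (with permission)."""

    def push_events(self, network: str, events: list[dict]) -> None:
        """Features 3 and 4: an outside service feeds behavioral data into
        the central, continuously learning store."""

    def recommend(self, user: str, network: str,
                  candidates: list[dict], limit: int = 10) -> list[dict]:
        """Feature 2: rank a service's own content for a given user."""
        return candidates[:limit]  # a real engine would rank these by profile fit
```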


The Good

Let’s start off with the “good”. The recommendation AI revealed a few really cool ideas.

AAAS

One of them was an AI-as-a-service model (an AAAS?) where a recommendation AI uses its vast wealth of data to serve as an API for outside services; those services can feed it even more data, and it can use the datasets they provide to produce recommendations in return.

Imagine starting your own social network that plugs into Reccr. You’d be able to provide it with your user data as a starting point, then feed it the data your users generate, and get back “recommendations” or “top posts” refined to each user. If you think about it, ad networks already work this way, but this is more consumer-centric.

If that user owns a “Reccr” account, their recommendations would also utilize data they provide from other services.
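Continuing the hypothetical sketch from above, a brand-new social network might plug in like this (all data and identifiers made up):

```python
# Hypothetical usage of the invented ReccrClient; all data here is made up.
reccr = ReccrClient(api_key="dummy-key")

# Seed the engine with the behavior your users generate.
reccr.push_events("mynetwork", [
    {"user": "alice", "action": "upvote", "item": "post-123"},
    {"user": "alice", "action": "comment", "item": "post-456"},
])

# Ask for a ranked feed. If alice also owns a Reccr account, the data she
# linked from other services would refine this result too.
top_posts = reccr.recommend("alice", "mynetwork", candidates=[
    {"id": "post-789", "tags": ["python", "ai"]},
    {"id": "post-790", "tags": ["cooking"]},
], limit=5)
```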

This could also work for sites like Hacker News and Reddit, where the “top posts” try to serve the whole user base rather than a single user. Utilizing the data people provide (the posts they comment on, the posts that get upvoted, etc.), HN and Reddit could predict which posts should come out on top.
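As a rough illustration, a per-user “top posts” score might blend a post’s global popularity with the user’s own topic history. The weighting below is entirely made up:

```python
from collections import Counter

def personalized_rank(posts, user_topics):
    """Toy per-user ranking. posts is a list of dicts with "upvotes",
    "comments", and "topic"; user_topics lists topics the user engaged with."""
    affinity = Counter(user_topics)
    total = sum(affinity.values()) or 1

    def score(post):
        global_part = post["upvotes"] + 2 * post["comments"]  # site-wide signal
        personal_part = affinity[post["topic"]] / total       # this user's lean
        return global_part * (1 + personal_part)

    return sorted(posts, key=score, reverse=True)
```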

Better recommendations with a centralized service

The other thing to note here is that if a service plugged into a lot of other networks, sites, etc., it could provide much better results. Google does this to an extent: it builds a profile on you and tailors search results to who you are based on the sites you click. AdSense does this as well.

Utilizing data from various sources is nothing new, but feeding that learned data back to other services kind of is. Twitter knows who I follow and which tweets I like, so it provides me with “While you were away” or “Best of” recommendations based on my behavior on that particular site.

Imagine if it had the power to give better recommendations based on what I search for on Google or which Tumblr blogs I follow, without actually being able to see that data.
The more you know, the better you can recommend.
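One hedged reading of “without actually being able to see that data”: each service shares derived interest weights rather than raw history, and the central engine merges them. A minimal sketch with an invented weighting scheme:

```python
def merge_interest_profiles(profiles):
    """profiles maps a source (e.g. "twitter", "tumblr") to topic weights the
    source computed itself; raw browsing data never leaves that source."""
    merged = {}
    for source, topics in profiles.items():
        for topic, weight in topics.items():
            merged[topic] = merged.get(topic, 0.0) + weight
    total = sum(merged.values()) or 1.0
    return {topic: weight / total for topic, weight in merged.items()}

# Blends both sources without either one exposing raw activity.
profile = merge_interest_profiles({
    "twitter": {"devops": 0.6, "politics": 0.4},
    "tumblr": {"art": 0.8, "devops": 0.2},
})
```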

“Create it and they will come” will come true

On the flip side, let’s consider how Google figures out which sites to rank highest. A huge part of that equation is looking at other highly-ranked sites and seeing how much they link to (and thereby trust) a new site. This is far from perfect, but it works reasonably well. They, of course, use a plethora of other metrics, but this one is a big deal.

With an AI that can analyze writing, podcasts, and even videos, and then use “high-power” users for “A/B testing” new content, we could see the rise of true “create it and they will come” behavior.

Imagine that you write a blog post and the AI analyzes and categorizes it. From the beginning, it knows what kind of users might like it. It serves the post into randomized results for “high-power” users (users that often get it “right” as far as what’s going to be good). The article gets read by those users and skyrockets to the top of everyone’s recommendation lists.
Later on, the AI could probably analyze the text itself and figure out whether it’s good without the input of another user.
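A toy version of that pipeline, with the thresholds and helper functions invented for illustration:

```python
import random

def trial_new_post(post_text, categorize, high_power_users, serve, sample_size=20):
    """Categorize new content, trial it on a random sample of "high-power"
    users, and promote it only if the trial goes well."""
    topics = categorize(post_text)  # hypothetical analyzer, e.g. {"ai", "recommenders"}
    testers = random.sample(high_power_users, min(sample_size, len(high_power_users)))
    if not testers:
        return False  # nobody to trial on
    reactions = [serve(user, post_text, topics) for user in testers]  # 1 = liked it
    approval = sum(reactions) / len(reactions)
    return approval > 0.7  # promote into everyone's recommendation lists
```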

This is the ultimate dream: create something good, and get recognized without having to have a huge following and without having to “strike gold” on HN or Reddit.

Connecting with like minds (…and dating)

The other upside is that we don’t just connect with content, we connect with individuals. When browsing through Facebook, I often get a “suggested friends” section that shows me people connected to me through other friends, or through some other flavor of Facebook’s privacy invasion.

Whichever it is, I often find people I’ve been meaning to reconnect with. The problem with Facebook is that it doesn’t connect you with anyone outside of your existing network.

Twitter has no issue suggesting complete strangers: it looks at who you follow and who other people like you follow, and builds suggestions from that. These are a tad more helpful.
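That mechanism is simple enough to sketch: suggest the accounts that many of your follows also follow. Purely illustrative:

```python
from collections import Counter

def suggest_follows(me, follows, limit=5):
    """follows maps each user to the set of accounts they follow."""
    mine = follows.get(me, set())
    counts = Counter()
    for friend in mine:
        for candidate in follows.get(friend, set()):
            if candidate != me and candidate not in mine:
                counts[candidate] += 1  # one of my follows also follows them
    return [user for user, _ in counts.most_common(limit)]
```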

However, making true connections with other people is a little more like dating. Finding the right person to connect with on Facebook, Instagram, or even Twitter can be difficult. Sometimes you just “click”, but often you don’t.

A service that collects this kind of wealth of data could potentially connect people and bring on an intellectual renaissance by getting the right minds in the “same room”.


The Bad

Non-anonymized data collection

Google Analytics is bad enough. Google AdSense is horrifying. Facebook still creeps me out. Combine the two, add access to personal information behind a login gate, and you’re looking at an entity that knows everything about you, even more than you know about yourself.

If properly anonymized (as anonymized as you can make intimate data), it’s great! But even after erasing names, it’s surprisingly easy to narrow down a person in a huge list by their metadata alone.
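To see why, consider how few metadata fields it takes to single someone out of that list. The records below are made up:

```python
records = [
    {"zip": "60601", "birth_year": 1985, "gender": "F", "device": "iPhone"},
    {"zip": "60601", "birth_year": 1985, "gender": "M", "device": "Android"},
    {"zip": "60614", "birth_year": 1990, "gender": "F", "device": "iPhone"},
    # ...imagine millions more rows, names already stripped
]

def narrow_down(records, **known):
    """Filter "anonymized" records by whatever metadata an attacker knows."""
    return [r for r in records if all(r.get(k) == v for k, v in known.items())]

# A zip code, birth year, and gender can already isolate a single record.
matches = narrow_down(records, zip="60601", birth_year=1985, gender="F")
```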

Enormous data collection is dangerous regardless, because that data can be leaked or hacked, and you never really know who has access to it. It can easily be used for malicious purposes, whether on the “lighter end” (ad targeting) or the “darker end” (blackmail, identity theft, etc.).

Information selling for ad targeting

One thing that AdSense does well is figure out who you are and serve you ads you’ll respond to. When that happens, personal data is being used against a person. I just had a daughter, and I have a son on the way. Mentioning this in an email, or visiting a parenting website, results in AdSense serving me ads for parenting-related products everywhere.

In the previous section, I described this as a side effect; however, it can also be purposeful. Gathering data on a user can be done for a variety of reasons and Facebook, for instance, does this so that you stay engaged longer on the site, view more ads, and interact with more ads.

These ads can be there for a variety of reasons and, based on who you are, will try to serve you different products. For instance, if you’re fairly rich and interested in cars, you might see a lot of Audi commercials. If you’re from the US South and “liked” multiple “Trucks are awesome!” pages, you might see a bunch of truck commercials.

What happens when these ads are targeting your opinions? What if you had a family member with cancer and Susan G. Komen ads started popping up? What if your local politician placed an ad on exactly that criterion because they’re trying to capitalize on a campaign promise related to cancer or hospital funding?

The Ugly


Echo…echo…echo

The echo chamber is a concept wherein someone surrounds themselves with voices that affirm their existing biases. Let’s say you’re a developer. You surround yourself with other developers, probably developers that use the same technology as you. The echo chamber effect takes hold as the people you follow basically confirm your own beliefs without challenging them.

If you’re a PHP user (a much-disliked language that, I believe, has its merits) and you surround yourself with Laravel/Symfony devs (PHP frameworks), you might come to believe that your language of choice is the best while the rest of the world harps on you. You’d be oblivious to potentially better options for the problems you face. And ultimately, you’d end up believing in myths and legends without outside input, because if you were to call on your followers to challenge an idea like “Is PHP good for microservices?”, you’d be left with confirmation that it is, without the true benefit of diverse opinions.
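A toy simulation of that feedback loop (all numbers invented): because the filter only ever surfaces your strongest interest, each round of engagement reinforces it further.

```python
def run_feedback_loop(weights, rounds=5):
    """weights maps a topic to interest strength, e.g. {"php": 0.6, "rust": 0.4}."""
    weights = dict(weights)  # work on a copy
    for _ in range(rounds):
        top = max(weights, key=weights.get)  # the filter shows the dominant topic
        weights[top] += 0.1                  # engagement reinforces it
        total = sum(weights.values())
        weights = {t: w / total for t, w in weights.items()}  # renormalize
    return weights  # the leading topic slowly crowds everything else out

print(run_feedback_loop({"php": 0.6, "rust": 0.4}))
```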

The past US election showed this firsthand, at least to me. My friends were on both sides of the fence, but most of them were surrounded by peers that agreed with them. There was a popular meme that said “Check Hillary’s page and how many friends liked it. Now check Trump’s. Post the results.”

The effect was undeniable. My Trump-supporting Facebook friends had numbers like 2/45 (favoring Trump) and my Clinton-supporting friends had numbers like 34/1 (favoring Clinton).
Having an AI entity that sorts and filters the world for you would make this effect even worse.

Potential for massive and individualized manipulation

The fake news scandal around the election had an interesting result as well. It made us realize that Facebook’s recommendation algorithm and our own friends served us news that could influence how we felt about a candidate, and potentially lead us into a grave mistake in who we voted for.
There were definitely shots fired on both sides. I didn’t cover this potential “ugliness” of a central AAAS in my story, but I think it’s one of the biggest deals here.
We’ve covered a few ways of manipulating someone’s opinion based on the data an AAAS holds on them:

  1. Advertising – advertising is an “external” factor that tries to influence your opinion. Ads are visible and “visibly” targeted (you can recognize an ad).
  2. Echo chamber – opinion reinforcement from people who agree with you; a filter that screens out things you don’t agree with.
  3. Directed manipulation

The last one involves using your data against you internally. Fake news is a good example of this: it’s brought to you as real news in hopes of changing your opinion. It isn’t an ad (meaning it isn’t marked as one), and it isn’t part of an echo chamber; it’s part of a fraud or scam that maliciously tries to dissuade you from an opinion you hold, or to reinforce an opinion on a false premise.

There are various ways of manipulating maliciously:

  1. Serving news that reinforces the opinion of the manipulator, whether or not that news is true or matches your profile. A good example of this is propaganda.
  2. Creating an artificial echo chamber based on the manipulator’s wishes rather than on one’s actual profile. Good examples would be fascism and peer pressure.
  3. Serving fake data in lieu of truthful data, such as fake news or fake users and personas, to separate you from reality (even worse: on-the-fly creation of fake data based on your personal data).

These days, the term for slowly but deliberately changing someone’s view of reality is “gaslighting” (or “being gaslit”). And, in my opinion, it is the worst possible side effect here.