Censorship is Not the Real Threat
People are being radicalized by Artificial Intelligence applications
Paul Gernhardt · Jan 13
Social media artificial intelligence algorithms shape your world view by creating a custom-designed “reality” based on your personality and your activity across applications, websites, and services. The reality created for you is very different from the one created for someone else, and completely different from the one presented to your child.
The facts, memes, articles, notifications, discussions, and ads that you see are only those the various AI algorithms select. The AIs are designed to keep you engaged online and to influence you as the company or its paying customers desire. You are targeted based on the most extensive profile anyone has ever created on you: your views, your activities, and those of your friends.
The more you respond to one type of article, the more of those articles you will see, and the more you will see of the posts from friends who also read them. The more you and your friends engage in discussions of one kind, the more you will see of that type of discussion and of information related to it. These things “bubble up” to the top of your social networking newsfeed.
These artificial intelligence networks are designed to draw you as far into an area as they can get you to go. That keeps you engaged. If you’re into trains, they will take you as deep into train information, ads, and community as you will go. If you’re into politics, they will take whichever direction you started in and push you as far along it as they can… to keep you engaged.
If you click on an article about a Democrat running for office, the system will start showing you more articles and ads for Democrats. It will start showing you more content about things Democrats are talking about. It will prioritize posts from friends who are likewise engaged. The more time you spend on those things, the more it will show you.
As you read more of those articles, posts, and ads (even the ones you only look at without clicking), those actions are noted. If you select something a little more to the left, it will show you more of that. If you engage in discussions with a friend about something a little further left, it will show you more of that… and so on. The more time you spend on things related to it, the more it will show you.
All the while it is leading you down the path to radicalization. Soon all you see are further-left articles, discussions, ads, and “news”. It slowly eliminates things that contradict your positions. UNLESS you spend more time engaging in arguments; then it will show you things that make you argue, because that means more time online and more engagement… which is what the algorithms are designed to maximize.
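The feedback loop described above can be sketched as a deliberately crude simulation. Everything here is invented for illustration (the “lean” scale, the scores, the numbers); real ranking systems are vastly more complex, but the loop has the same shape: the feed favors content close to what you already click, each click shifts what the feed considers “close”.

```python
import random

# Toy model of an engagement-driven feedback loop (all names and numbers
# are invented). Each item has a political "lean" from -1.0 (far left)
# to +1.0 (far right). The feed ranks items by closeness to the user's
# current lean, with a small engagement bonus for more extreme content
# on the user's own side. Each click nudges the user's lean toward the
# clicked item, which changes what the next feed shows.

random.seed(1)

def rank_feed(items, user_lean, size=5):
    """Show the items the model predicts this user will engage with."""
    def score(lean):
        same_side = lean * user_lean > 0
        bonus = 0.3 * abs(lean) if same_side else 0.0
        return -abs(lean - user_lean) + bonus
    return sorted(items, key=score, reverse=True)[:size]

def simulate(rounds=50):
    items = [random.uniform(-1, 1) for _ in range(200)]
    user_lean = -0.05                            # starts barely left of center
    for _ in range(rounds):
        feed = rank_feed(items, user_lean)
        clicked = random.choice(feed)            # user clicks something shown
        user_lean += 0.3 * (clicked - user_lean) # the click pulls them along
    return user_lean

print(round(simulate(), 2))
```

Run it a few times with different seeds and watch where the user ends up relative to their starting position of -0.05.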
Eventually all you are seeing are things on “your” side of the issues. All the people you’re engaged with agree with you. Everything is one “reality”, one world view, one position - except perhaps for that one person you keep arguing with.
Through this process the social network’s artificial intelligence systems are creating different sets of reality. They spread everyone out along a spectrum of left and right, reinforce each set, from neutral to radical, and keep them all in their various echo chambers.
These artificial intelligence systems are based on neural nets: systems that are fed input, told what output is wanted, and left to build their own methods for producing it. The people who set up and train these systems are unaware of the details of the “methods” the neural network develops to produce the desired result. They understand the process by which it might have learned, but they won’t understand the methods the neural net ends up using. It just works… somehow. We feed it this information and get the result we want.
For example, I can feed a neural network a large number of fish pictures, each labeled with what it shows. I can feed this into a neural network learning set-up and it will learn to identify fish in pictures. I can then start feeding it unlabeled pictures and it will label the fish for me. Once the network has learned something, I can save that “state” and use it whenever I want.
What I can’t tell you is exactly how it identifies the fish. Two neural networks doing the same thing may end up with completely different ways of doing things. They can continue to learn if they are corrected and will endlessly modify their method — in ways that might take an engineering team years to reverse engineer and understand what exactly it is doing.
Let’s do some math explorations just to get a feel for how this might work in the real world… using assumptions pulled completely out of thin air.
So let’s say that one percent of the population have personalities that are susceptible to radicalization on a passionate ideological level. Let’s further assume that one percent of those (0.01% of the entire population) are susceptible to being influenced to commit to some extreme action for a cause. Then one percent of those are open to serious criminal activity associated with it. There are 244,000,000 people on social media in the United States. One percent of that is 2,440,000 “passionate” people. One percent of that is 24,400 activated people. That leaves 244 people who will commit a crime for the cause.
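The back-of-the-envelope chain above is just three successive one-percent cuts (and, as stated, the percentages are assumptions pulled out of thin air):

```python
# The estimate from the paragraph above: each step keeps 1% of the prior one.
social_media_users = 244_000_000                  # US social media users
passionate = int(social_media_users * 0.01)       # susceptible to radicalization
activated = int(passionate * 0.01)                # open to extreme action
criminal = int(activated * 0.01)                  # open to crimes for the cause

print(passionate, activated, criminal)            # 2440000 24400 244
```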
So how does this work in reality? Let’s say I want to influence a number of people in a direction toward something I believe in and am passionate about. Let’s say I want to create a March on DC to end the vitriol against Brussels sprouts, a controversy that started when the president mentioned how much he hates them.
Now for this march I want highly motivated people. So I define a group of people who are “against the president”, “have attended a protest”, “are foodies”, are unemployed, are between the ages of 15 and 35, and are within 300 miles of DC. I can target these specific people with ads and information about my cause.
If they click on it, their friends will likely see it. The more they click on my information, the more I’ll pay to keep it in front of them. Soon they’re protesting. It wouldn’t be hard to target those who might do “more”.
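The audience definition above amounts to a filter over user profiles. The field names and profiles below are invented; real ad platforms expose targeting as menu options rather than raw queries, but the logic is the same: AND together the criteria and keep whoever matches.

```python
# Hypothetical sketch of the Brussels-sprouts march audience filter.
# All field names and profile data are made up for illustration.
def matches(user):
    return (user["against_president"]
            and user["attended_protest"]
            and user["foodie"]
            and not user["employed"]
            and 15 <= user["age"] <= 35
            and user["miles_from_dc"] <= 300)

profiles = [
    {"against_president": True, "attended_protest": True, "foodie": True,
     "employed": False, "age": 22, "miles_from_dc": 40},
    {"against_president": True, "attended_protest": False, "foodie": True,
     "employed": True, "age": 45, "miles_from_dc": 500},
]

audience = [u for u in profiles if matches(u)]
print(len(audience))  # 1
```

Only the first invented profile satisfies every criterion; the second fails on protest history, employment, age, and distance.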
The systems are designed to draw people in. The more they click the more they get. All the same view or a little further down the line.
Censorship on social networking isn’t the threat… it’s the foundational way social networks work.