Given all the brilliant things that are happening today with machine learning and artificial intelligence, I just don't understand why "fake news" is still an issue. I think the solution is right in front of us; that is, if social media networks are really serious about addressing this problem. Facebook is one of the biggest culprits in tolerating fake news, and that probably has a lot to do with the "economics of social engagement."

An article titled "Future of Social Media" summarizes the challenge nicely:

> "While it's great that everyone and her brother has access to create content online, offering a more diverse and thriving online market, this also generates stronger competition for your content to break through the clutter and be seen. In fact, there will be a time in which the amount of content internet users can consume will be outweighed by the amount of content produced. Schaefer calls this 'Content Shock' which, unfortunately, is uneconomical."

Figure 1 shows the area of "Content Shock": the point at which the ability to create content outstrips the ability of humans to consume it.

Figure 1: Economics of Content and "Content Shock"

The article recommends that you "create content that will stand out" in order to draw attention and create engagement. Well, nothing draws attention and creates engagement like "fake news." For example, here are some examples of fake news articles and the number of Facebook engagements each of these articles drove[1]:
That's an awful lot of Facebook engagements with news that isn't true, but the "news" certainly does "stand out" in the crowded content space, and it certainly does drive engagement.

## Solving the Fake News Problem

So assuming that the social media networks truly are motivated to solve the "fake news" problem, here is how I would do it.
Email providers have been doing something similar for years: Yahoo Mail, for example, flags potential spam for the user rather than silently deleting it (see Figure 2).

Figure 2: Flagging Potential Email Spam in Yahoo Mail
Amazon already supports the flagging of potential "trolls"[2] and "fake reviews" in their customer reviews (see Figure 3).

Figure 3: Flagging Fake Reviews
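The spam-style flagging described above can be sketched as a toy text classifier. The following is a minimal illustrative sketch, not any network's actual system: a from-scratch Naive Bayes model trained on a handful of made-up headlines, which flags (rather than deletes) articles whose wording looks more "fake" than "real."

```python
import math
from collections import Counter

# Toy labeled headlines -- purely made-up data for illustration.
DOCS = [
    ("pope endorses candidate in shock move", "fake"),
    ("miracle cure doctors hate shock claim", "fake"),
    ("senate passes annual budget bill", "real"),
    ("city council approves transit budget", "real"),
]

def train(docs):
    """Count words per class: the bookkeeping behind Naive Bayes."""
    counts = {"fake": Counter(), "real": Counter()}
    totals = {"fake": 0, "real": 0}
    for text, label in docs:
        for word in text.lower().split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def log_score(text, counts, totals, label):
    """Log-likelihood of the text under one class, with add-one smoothing."""
    vocab = len(set(counts["fake"]) | set(counts["real"]))
    return sum(
        math.log((counts[label][w] + 1) / (totals[label] + vocab))
        for w in text.lower().split()
    )

def flag_as_suspect(text, counts, totals):
    """Flag -- don't delete -- an article scoring higher under 'fake'."""
    fake = log_score(text, counts, totals, "fake")
    real = log_score(text, counts, totals, "real")
    return fake > real

COUNTS, TOTALS = train(DOCS)
```

A real system would use far richer features (source reputation, share patterns, user flags) and vastly more data, but the flag-and-surface workflow is the same one email spam filters have used for years.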
## Freedom of Speech and Type I/Type II Errors

Machine learning could certainly help to mitigate and flag fake news, but it probably cannot, and should not even try to, eliminate it entirely. Why? It's the First Amendment of the Constitution, and it's called freedom of speech. One important consideration as social media organizations look to squelch fake news is to not violate freedom of speech. So instead of outright deletion of questionable publications (other than for pornographic, libelous, or hate-crime reasons), it might be better for the social media sites to use some sort of "Degrees of Truth" indicator that could accompany each publication or article. These indicators might look like something in Figure 4.

Figure 4: Degrees of Truthfulness Indicators

If we treat "fake" as the positive class, then blocking a valid story is a false positive and letting a fake story through is a false negative. The cost to society of blocking potentially valid news (false positives) greatly outweighs the cost of letting a few fake news articles get published (false negatives). So one will need to err on the side of allowing some level of fake news to ensure that one is not blocking real (though maybe controversial) news. See my blog "Understanding Type I and Type II Errors" to learn more about the potential costs and liabilities associated with Type I and Type II errors.

## Machine Learning to End Fake News

Ending fake news seems like the perfect application of machine learning. Organizations like Yahoo, Google, and Microsoft have been using machine learning for years to catch spam (see the article "Google Says Its AI Catches 99.9 Percent Of Gmail Spam"), and companies like McAfee and Symantec employ machine learning to catch viruses (see the article "Malware Detection with Machine Learning Methods"). Fake news looks a lot like spam and viruses to me. It should be an easy problem to solve, if one really wants to solve it.
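The "Degrees of Truth" indicator and the bias toward false negatives can be sketched together. In this illustrative sketch, the probability bands and the blocking threshold are my own assumptions, not values from the article or from any real platform:

```python
def truth_indicator(p_fake):
    """Map a model's estimated probability that an article is fake
    to a 'Degrees of Truth' label. The bands are illustrative
    assumptions, not values from any real platform."""
    if p_fake < 0.25:
        return "Likely True"
    if p_fake < 0.50:
        return "Unverified"
    if p_fake < 0.80:
        return "Disputed"
    return "Likely False"

def should_block(p_fake, block_threshold=0.99):
    """Block outright only at an extreme score. A very high threshold
    keeps false positives (blocking real news) rare, at the price of
    more false negatives (fake stories that are labeled but still
    appear) -- which is exactly the trade-off argued for above."""
    return p_fake >= block_threshold
```

Setting `block_threshold` near 1.0 encodes the policy choice: almost everything stays visible with a truthfulness label attached, and only near-certain fakes are removed.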
[1] http://www.cnbc.com/2016/12/30/read-all-about-it-the-biggest-fake-news-stories-of-2016.html

[2] A troll is a person who sows discord on the Internet by starting arguments or upsetting people, by posting inflammatory, extraneous, or off-topic messages with the intent of provoking readers into an emotional response or of otherwise disrupting normal, on-topic discussion. https://en.wikipedia.org/wiki/Internet_troll

The post *Using Machine Learning to Stop Fake News* appeared first on InFocus Blog | Dell EMC Services.
