I Will Never Trust Google Again After What I Heard on TV
Lies, propaganda and fake news: A grand challenge of our age
(Image credit: Getty Images)
With news sources splintering and falsehoods spreading widely online, can anything be done? Richard Gray takes an in-depth look at how we got here – and hears from the researchers and innovators seeking to save the truth.
Grand Challenges
A guide to the issues that define our age
We may have things better than ever – but we've also never faced such world-changing challenges. That's why Future Now asked 50 experts – scientists, technologists, business leaders and entrepreneurs – to name what they saw as the key challenges in their area.
The range of different responses demonstrates the richness and complexity of the modern world. Inspired by these responses, over the next month we will be publishing a series of feature articles and videos that take an in-depth look at the biggest challenges we face today.
Who was the first black president of America? It's a fairly simple question with a straightforward answer. Or so you would think. But plug the query into a search engine and the facts get a little fuzzy.
When I checked Google, the first result – given special prominence in a box at the top of the page – informed me that the first black president was a man called John Hanson in 1781. Apparently, the US has had seven black presidents, including Thomas Jefferson and Dwight Eisenhower. Other search engines do little better. The top results on Yahoo and Bing pointed me to articles about Hanson as well.
Welcome to the world of "alternative facts". It is a bewildering maze of claim and counterclaim, where hoaxes spread with frightening speed on social media and spark angry backlashes from people who take what they read at face value. Controversial, fringe views about US presidents can be thrown centre stage by the power of search engines. It is an environment where the mainstream media is accused of peddling "fake news" by the most powerful man in the world. Voters are seemingly misled by the very politicians they elected, and even scientific research - long considered a reliable basis for decisions - is dismissed as having little value.
For a special series launching this week, BBC Future Now asked a panel of experts about the grand challenges we face in the 21st Century – and many named the breakdown of trusted sources of information as one of the most pressing issues today. In some ways, it's a challenge that trumps all others. Without a common starting point – a set of facts that people with otherwise dissimilar viewpoints can agree on – it will be hard to address any of the problems that the world now faces.
The example at the start of this article may seem a small, frothy controversy, but there is something greater at stake here. Leading researchers, tech companies and fact-checkers we contacted say the threat posed by the spread of misinformation should not be underestimated.
Take another example. In the run-up to the US presidential elections last year, a made-up story spread on social media claimed a paedophile ring involving high-profile members of the Democratic Party was operating out of the basement of a pizza restaurant in Washington DC. In early December a man walked into the restaurant - which does not have a basement - and fired an assault rifle. Remarkably, no one was hurt.
After a malicious rumour spread online about a pizza restaurant in Washington DC, a man walked into the restaurant and fired an assault rifle (Credit: Alamy)
Some warn that "fake news" threatens the democratic process itself. "On page one of any political science textbook it will say that democracy relies on people being informed about the issues so they can have a debate and make a decision," says Stephan Lewandowsky, a cognitive scientist at the University of Bristol in the UK, who studies the persistence and spread of misinformation. "Having a large number of people in a society who are misinformed and have their own set of facts is absolutely devastating and extremely hard to cope with."
A survey conducted by the Pew Research Center towards the end of last year found that 64% of American adults said made-up news stories were causing confusion about the basic facts of current issues and events.
Alternative histories
Working out who to trust and who not to believe has been a facet of human life since our ancestors began living in complex societies. Politics has always bred those who will mislead to get ahead.
But the difference today is how we get our information. "The internet has made it possible for many voices to be heard that could not get through the bottleneck that controlled what would be distributed before," says Paul Resnick, professor of information at the University of Michigan. "Initially, when they saw the prospect of this, many people were excited about this opening up to multiple voices. Now we are seeing some of those voices are saying things we don't like and there is great concern about how we control the broadcasting of things that seem to be untrue."
We need a new way to decide what is trustworthy. "I think it is going to be not figuring out what to believe but who to believe," says Resnick. "It is going to come down to the reputations of the sources of the information. They don't have to be the ones we had in the past."
We're seeing that shift already. The UK's Daily Mail newspaper has been a trusted source of news for many people for decades. But last month editors of Wikipedia voted to stop using the Daily Mail as a source for information on the grounds that it was "generally unreliable".
Yet Wikipedia itself - which can be edited by anyone but uses teams of volunteer editors to weed out inaccuracies - is far from perfect. Inaccurate information is a regular feature on the website and requires careful checking by anyone wanting to use it.
For example, the Wikipedia page for the comedian Ronnie Corbett once stated that during his long career he played a Teletubby in the children's TV series. This is false, but when he died the statement cropped up in some of his obituaries when writers resorted to Wikipedia for help.
Several obituaries for the comedian Ronnie Corbett falsely claimed he had once played a Teletubby because this statement appeared in his Wikipedia entry (Credit: Getty Images)
Other than causing offence or embarrassment – and ultimately eroding a news organisation's standing – these sorts of errors do little long-term damage. There are some who care little for reputation, however. They are simply in it for the money. Last year, links to websites masquerading as reputable sources started appearing on social media sites like Facebook. Stories about the Pope endorsing Donald Trump's candidacy and Hillary Clinton being indicted for crimes related to her email scandal were shared widely despite being completely made up.
"The major new challenge in reporting news is the new shape of truth," says Kevin Kelly, a technology author and co-founder of Wired magazine. "Truth is no longer dictated by authorities, but is networked by peers. For every fact there is a counterfact. All those counterfacts and facts look identical online, which is confusing to most people."
For those behind the made-up stories, the ability to share them widely on social media means a slice of the ad revenue that comes from clicks as people follow the links to their webpages. It was found that many of the stories were coming from a small town in Macedonia, where young people were using it as a get-rich scheme, paying Facebook to promote their posts and reaping the rewards of the huge number of visits to their websites.
"The difference that social media has made is the scale and the ability to find others who share your world view," says Will Moy, director of Full Fact, an independent fact-checking organisation based in the UK. "In the past it was harder for people with relatively fringe opinions to get their views reinforced. If we were chatting around the kitchen table or in the pub, often there would be a debate."
But such debates are happening less and less. Information spreads around the world in seconds, with the potential to reach billions of people. But it can also be dismissed with a flick of the finger. What we choose to engage with is self-reinforcing, and we get shown more of the same. It results in an exaggerated "echo chamber" effect.
"What is noticeable about the two recent referendums in the UK - Scottish independence and EU membership - is that people seem to be clubbing together with people they agreed with and all making one another angrier," says Moy. "The debate becomes more partisan, more angry, and people are quicker to assume they are being lied to but less quick to assume people they agree with are lying. That is a dangerous trend."
The challenge here is how to burst these bubbles. One approach that has been tried is to challenge facts and claims when they appear on social media. Organisations like Full Fact, for instance, look at persistent claims made by politicians or in the media, and try to correct them. (The BBC also has its own fact-checking unit, called Reality Check.)
Research by Resnick suggests this approach may not be working on social media, however. He has been building software that can automatically track rumours on Twitter, dividing people into those who spread misinformation and those who correct it. "For the rumours we looked at, the number of followers of people who tweeted the rumour was much larger than the number of followers of those who corrected it," he says. "The audiences were also largely disjoint. Even when a correction reached a lot of people and a rumour reached a lot of people, they were usually not the same people. The problem is, corrections do not spread very well."
One example Resnick and his team found was an error that appeared in a leaked draft of a World Health Organization report, which stated that many people in Greece who had HIV had infected themselves in an attempt to get welfare benefits. The WHO put out a correction, but even so, the initial mistake reached far more people than the correction did. Another rumour suggested the rapper Jay Z had died; it reached 900,000 people on Twitter. Around half that number were exposed to the correction. But only a tiny proportion were exposed to both the rumour and the correction.
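The audience-overlap measurement Resnick describes can be sketched in a few lines. This is a toy illustration, not his actual system: the follower sets and user IDs below are invented, and a real analysis would pull follower lists from Twitter's API.

```python
# Toy sketch of measuring rumour/correction audience overlap.
# Each inner set stands for one account's followers (user IDs).

def audience_overlap(rumour_followers, correction_followers):
    """Return (rumour_reach, correction_reach, fraction of the
    rumour's audience that also saw the correction)."""
    rumour_audience = set().union(*rumour_followers) if rumour_followers else set()
    correction_audience = set().union(*correction_followers) if correction_followers else set()
    overlap = rumour_audience & correction_audience
    frac = len(overlap) / len(rumour_audience) if rumour_audience else 0.0
    return len(rumour_audience), len(correction_audience), frac

# Two accounts spread the rumour; one account corrected it.
rumour = [{1, 2, 3, 4, 5, 6}, {7, 8, 9}]
correction = [{5, 6, 10}]

reach_r, reach_c, frac = audience_overlap(rumour, correction)
print(reach_r, reach_c, round(frac, 2))  # 9 3 0.22
```

Even in this tiny example the pattern Resnick reports is visible: the correction's audience is a fraction of the rumour's, and most of the rumour's audience never sees it.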
This lack of overlap is a specific challenge when it comes to political issues. Moy fears the traditional watchdogs and safeguards put in place to ensure those in power are honest are being circumvented by social media.
"On Facebook political parties can put something out, pay for advertising, put it in front of millions of people, yet it is difficult for those not being targeted to know they have done that," says Moy. "They can target people based on how old they are, where they live, what skin colour they have, what gender they are. We shouldn't think of social media as just peer-to-peer communication - it is also the most powerful advertising platform there has ever been."
But it may count for little. "We have never had a time when it has been so easy to advertise to millions of people and not have the other millions of us notice," he says.
Twitter and Facebook both insist they have strict rules on what can be advertised, particularly on political advertising. Regardless, the use of social media adverts in politics can have a major impact. During the run-up to the EU referendum, the Vote Leave campaign paid for nearly a billion targeted digital adverts, mostly on Facebook, according to one of its campaign managers. One of those was the claim that the UK pays £350m a week to the EU - a figure Sir Andrew Dilnot, the chair of the UK Statistics Authority, described as misleading. In fact the UK pays around £276m a week to the EU because of a rebate.
"We need some transparency about who is using social media advertising when they are in election campaigns and referendum campaigns," says Moy. "We need to be better equipped to deal with this - we need watchdogs that will go around and say, 'Hang on, this doesn't stack up,' and ask for the record to be corrected."
Many people are worried that fundamental disagreement over basic facts is damaging the democratic process (Credit: Getty Images)
Social media sites themselves are already taking steps. Mark Zuckerberg, founder of Facebook, recently spelled out his concerns about the spread of hoaxes, misinformation and polarisation on social media in a 6,000-word letter he posted online. In it he said Facebook would work to reduce sensationalism in its news feed by looking at whether people have read content before sharing it. It has also updated its ad policies to reduce spam sites that profit off fake stories, and added tools to let users flag fake articles.
Other tech giants also claim to be taking the problem seriously. Apple's Tim Cook recently raised concerns about fake news, and Google says it is working on ways to improve its algorithms so they take accuracy into account when displaying search results. "Judging which pages on the web best answer a query is a challenging problem, and we don't always get it right," says Peter Barron, vice president of communications for Europe, the Middle East and Africa at Google.
"When non-authoritative information ranks too high in our search results, we develop scalable, automated approaches to fix the issues, rather than manually removing them one by one. We recently made improvements to our algorithm that will help surface more high-quality, credible content on the web. We'll continue to change our algorithms over time in order to tackle these challenges."
For Rohit Chandra, vice president of engineering at Yahoo, more humans in the loop would help. "I see a need in the market to develop standards," he says. "We can't fact-check every story, but there must be enough eyes on the content that we know the quality bar stays high."
Google is also helping fact-checking organisations like Full Fact, which is developing new technologies that can identify and even correct false claims. Full Fact is creating an automated fact-checker that will monitor claims made on TV, in newspapers, in parliament or on the internet.
Initially it will target claims that have already been fact-checked by humans and send out corrections automatically, in an attempt to shut down rumours before they get started. As artificial intelligence gets smarter, the system will also do some fact-checking of its own.
"A claim like 'crime is rising' is relatively easy for a computer to check," says Moy. "We know where to get the crime figures and we can write an algorithm that can make a judgement about whether crime is rising. We did a demonstration project last summer to prove we can automate the checking of claims like that. The challenge is going to be writing tools that can check specific types of claims, but over time it will become more powerful."
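The kind of check Moy describes can be illustrated with a short sketch. This is not Full Fact's system, and the yearly figures below are invented; the point is only that a trend claim reduces to fetching official numbers and testing the direction of their slope.

```python
# Toy check of a claim like "crime is rising": fit a simple
# least-squares slope to yearly figures and test its sign.

def check_rising(series):
    """Return True if the linear trend over the series is upward,
    False if not, or None if there is too little data to judge."""
    if len(series) < 2:
        return None
    n = len(series)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return slope > 0

# Invented figures for recorded offences per year:
figures = [620_000, 600_000, 590_000, 585_000]
verdict = "rising" if check_rising(figures) else "not rising"
print(f"Claim 'crime is rising': {verdict}")  # Claim 'crime is rising': not rising
```

The hard part, as Moy notes, is not the arithmetic but recognising in free text which claims are of a checkable type and which official series they refer to.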
What would Watson do?
It is an approach being attempted by a number of different groups around the world. Researchers at the University of Mississippi and Indiana University are both working on automated fact-checking systems. One of the world's most advanced AIs has also had a crack at tackling this problem. IBM has spent several years working on ways that its Watson AI could help internet users distinguish fact from fiction. They built a fact-checker app that could sit in a browser and use Watson's language skills to scan the page and give a percentage likelihood of whether it was true. But according to Ben Fletcher, senior software engineer at IBM Watson Research who built the system, it was unsuccessful in tests - but not because it couldn't spot a lie.
"We got a lot of feedback that people did not want to be told what was true or not," he says. "At the heart of what they wanted was actually the ability to see all sides and make the decision for themselves. A major issue most people face without knowing it is the bubble they live in. If they were shown views outside that bubble they would be much more open to talking about them."
This idea of helping break through the isolated information bubbles that many of us now live in comes up again and again. By presenting people with accurate facts it should be possible to at least get a debate going. But telling people what is true and what is not does not seem to work. For this reason, IBM shelved its plans for a fact-checker.
"There is a large proportion of the population in the US living in what we would regard as an alternative reality," says Lewandowsky. "They share things with each other that are completely fake. Any attempt to break through these bubbles is fraught with difficulty, as you are dismissed as being part of a conspiracy just for trying to correct what people believe. It is why you have Republicans and Democrats disagreeing over something as fundamental as how many people appear in a photo."
One approach Lewandowsky suggests is to build search engines that offer up information that may subtly conflict with a user's world view. Similarly, firms like Amazon could offer up films and books that provide an alternative viewpoint to the products a person normally buys.
"By suggesting things to people that are outside their comfort zone, but not so far outside that they would never look at them, you can keep people from self-radicalising in these bubbles," says Lewandowsky. "That sort of technological solution is one good way forward. I think we have to work on that."
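The diversified recommendation Lewandowsky describes can be sketched as follows. This is a toy illustration under invented data, not any real recommender: instead of ranking items purely by similarity to a user's tastes, it reserves a slot for an item of moderate similarity - outside the comfort zone, but not so far that it would never be clicked.

```python
# Toy recommender: mostly similar items, plus one "stretch" pick
# drawn from the middle of the similarity ranking.

def recommend(user_vec, items, n_close=2, n_stretch=1):
    """items: list of (name, vector). Rank by dot product with the
    user's taste vector; return top matches plus mid-ranked items."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    ranked = sorted(items, key=lambda it: dot(user_vec, it[1]), reverse=True)
    close = ranked[:n_close]
    middle = ranked[n_close:len(ranked) - 1]  # skip the least similar item
    stretch = middle[:n_stretch]
    return [name for name, _ in close + stretch]

user = [1.0, 0.2, 0.0]  # invented weights over topics (politics, science, sport)
catalog = [
    ("A", [0.9, 0.1, 0.0]),
    ("B", [0.8, 0.3, 0.1]),
    ("C", [0.4, 0.6, 0.2]),
    ("D", [0.1, 0.9, 0.3]),
    ("E", [0.0, 0.1, 0.9]),
]
print(recommend(user, catalog))  # ['A', 'B', 'C']
```

Here "C" is the stretch pick: close enough to the user's interests to be plausible, but different enough to nudge them out of the bubble.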
Google is already doing this to some degree. It operates a little-known grant scheme that allows certain NGOs to place high-ranking adverts in response to certain searches. It is used by groups like the Samaritans so their pages rank highly in a search by someone looking for information about suicide, for instance. But Google says anti-radicalisation charities could also seek to promote their message on searches about so-called Islamic State, for example.
But there are understandable fears about powerful internet companies filtering what people see - even within these organisations themselves. For those leading the push to fact-check information, better tagging of accurate information online would be a better approach, allowing people to make up their own minds about the data.
"Search algorithms are as flawed as the people who develop them," says Alexios Mantzarlis, director of the International Fact-Checking Network. "We should think about adding layers of credibility to sources. We need to tag and structure quality content in effective ways."
Mantzarlis believes part of the solution will be providing people with the resources to fact-check information for themselves. He is planning to develop a database of sources that professional fact-checkers use, and intends to make it freely available.
But what if people don't agree with official sources of information at all? This is a problem that governments around the world are facing as the public views what they tell them with increasing scepticism.
Nesta, a UK-based charity that supports innovation, has been looking at some of the challenges that face democracy in the digital era and how the internet can be harnessed to get people more engaged. Eddie Copeland, director of government innovation at Nesta, points to an example in Taiwan where members of the public can suggest ideas and help formulate them into legislation. "The first stage in that is crowdsourcing facts," he says. "So before you have a debate, you come up with the commonly accepted facts that people can debate from."
But that means facing up to our own bad habits. "There is an unwillingness to bend one's mind around facts that don't agree with one's own viewpoint," says Victoria Rubin, director of the language and information technology research lab at Western University in Ontario, Canada. She and her team have been working to identify fake news on the internet since 2015. Will Moy agrees. He argues that by slipping into lazy pessimism about what we are being told, we let those who lie to us get away with it. Instead, he thinks we should be interrogating what they say and holding them to account.
Ultimately, however, there's an uncomfortable truth we all need to address. "When people say they are worried about people being misled, what they are actually worried about is other people being misled," says Resnick. "Very rarely do they worry that fundamental things they believe themselves may be wrong." Technology may help to solve this grand challenge of our age, but it is time for a little more self-awareness too.
Source: https://www.bbc.com/future/article/20170301-lies-propaganda-and-fake-news-a-grand-challenge-of-our-age