Standards also help, because we are fighting to ensure that the costs of sharing do not outweigh its advantages.
A cartoon published long ago in The New Yorker summed it up: "On the Internet, nobody knows you're a dog." If that cartoon were drawn today, the caption might read, "On the internet, nobody knows you're a scam."
Scammers, snake-oil sellers, sock puppets, bot armies and bullies – every time we look up, we seem to discover another form of dishonesty grown to global scale through the potent but terrifying combination of internet and smartphone.
None of this should surprise us. People are both wonderful and terrible. The network we have built for ourselves serves the honest and the liar alike, yet we have no infrastructure for managing a planet of thieves.
Navigating all of this goes far beyond simple buyer-beware caution and into the dark arts of spear phishing and social engineering, which, for the simplest of reasons, prey on our better natures. It is no longer an African prince offering you a hundred million dollars for your help. It is a customer who has carefully recorded all of her transactions and order numbers in a Word document, attached to a very helpful email.
Security concerns have been pushed to the extreme. If things continue as they are, the costs of connectivity could come to outweigh its benefits, and at that point the already frayed post-web culture of sharing and knowledge would unravel completely, as people and businesses retreat behind defensible borders and call it a day.
All of this ran as a subtext through the 26th International Conference on the World Wide Web – never spoken aloud, but always in mind. In a broader sense, it is all the web's fault – the shadow side of its culture of sharing. So could this be a problem the web can fix?
This question preoccupied the hundreds of doctoral students who presented papers and posters at the conference. Insofar as the submissions of the web's core research community are a reliable indicator of where the web is headed, its future will centre on learning how to recognise lies.
Detecting fake reviews, bullies and bots – all of this can be machine-learned. It can even be applied to a politician's tweets, to determine whether and when they are telling the truth about where they were.
This flood of research returns to one of the oldest problems in computer science: the Turing test. Can you tell whether the party at the other end of a text-based connection is a person or a computer? What questions do you ask? How do you analyse the answers? Take the same ideas and apply them to a seller on Alibaba or an account on Twitter – ask the questions, analyse the answers – and then decide: truth or lie.
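The ask-questions-then-decide approach can be caricatured in a few lines of code. This is a purely illustrative sketch, not an algorithm from the conference papers: the features, thresholds and function names are my own assumptions, standing in for what a real system would learn from data.

```python
# Toy bot-detection heuristic (illustrative only; features and
# thresholds are assumptions, not drawn from any published paper).
from statistics import pstdev

def bot_score(messages, post_intervals_sec):
    """Return a score in [0, 1]; higher suggests automated behaviour."""
    # Feature 1: repetition - bots often post near-identical content.
    repetition = 1.0 - len(set(messages)) / len(messages)
    # Feature 2: timing regularity - bots post on eerily even schedules.
    regularity = 1.0 if pstdev(post_intervals_sec) < 2.0 else 0.0
    # A naive average of the two features stands in for a trained model.
    return (repetition + regularity) / 2

human = bot_score(["great talk!", "see you there", "lol no"], [40, 3600, 90])
bot = bot_score(["BUY NOW!!!", "BUY NOW!!!", "BUY NOW!!!"], [60, 60, 60])
```

In a real system, each hand-picked feature would be replaced by thousands of learned ones, but the shape of the decision – observe behaviour, extract signals, score truthfulness – is the same.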
When Sir Tim Berners-Lee won the ACM A.M. Turing Award last week, the timing for this next evolution of his web could not have been more fitting. The web must grow a meta-layer of error checking and truth-finding. That will likely slow things down a bit, but in return we can feel more confident that the counterfeit can be suppressed.
It will never be as reliable as we would like. Once a lie-detecting system becomes widespread, the smartest of the dishonest will work to undermine its algorithmic determinations of truth, find its weaknesses, and exploit them. It was ever thus; in the long run, the search for truth has always been an act of persistence and dedication.
Machines can help us in this fight, but machines serve both sides, deceiving as well as detecting deception. Still, there is hope: there is too much money on the table to let the forces of darkness prevail. Chaos is bad for business.
Any alignment of commerce with the common good is a rare and powerful combination, which means the resources for this fight will be available for the foreseeable future. These students, with their fraud- and bot-detection algorithms, will be snapped up by the giant companies whose profits depend on a web trustworthy enough for trade. What is good for Google and Facebook is good for the rest of us.