Computer science meets journalism
A revolution is about to disrupt how news is created and the very nature of journalism. These changes are driven by automation, analytics and crowdsourcing, all of which touch a broad sector of business.
Throughout history, journalism has constantly changed and adapted. It has integrated technological progress in printing presses and copying, and faced the rise of radio and television. On the societal front, it has adapted to readers' demands both for accurate information and for gossip. It has delivered information from the battlefront, published politically partisan articles and conducted investigations.
Today it faces a whole new level of upheaval with the advances in computer science: the Internet, natural language generation, machine learning, and interconnected personal devices and social platforms that permit the rapid dissemination of news.
These advances have begun to disrupt how news is created and the nature of journalism. The changes are based on four main trends:
- Print to Online
- Machine-generated news
- Data-driven journalism
- Citizen journalism
Print to Online
This is the oldest of the recent revolutions, and some may even consider it passé. Yet aside from some notable exceptions, media outlets are still struggling to adapt their revenue strategy to the fact that most readers now consume news online instead of buying a print newspaper. Moreover, those readers expect free online access. Solutions for providers have emerged in the shape of paid subscriptions, per-article payments, online advertising and other strategies.
Machine-generated news
Recent advances in artificial intelligence, and the diffusion of tools that leverage them, are accelerating the automation of certain white-collar jobs traditionally considered exclusive to humans. Journalism is often cited as one of the first professions at risk, spurred by advances in natural language generation and the fast processing of structured data.
Some start-ups have already begun to create news summaries from feeds of structured data. For now the application of these systems is restricted to producing summaries with very limited variability in narrow topics such as sport or finance. When Kristian Hammond, a co-founder of one of these start-ups, claims that in 15 years 90 per cent of news articles will be written purely by software, he is saying more about the growth of this activity than about any reduction in articles written by professional journalists. Automation will certainly take over some routine, low-investigation journalistic tasks, but it will create a new market rather than disrupt an existing one.
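To make the idea concrete, here is a minimal sketch of how such systems typically turn structured data into prose: fields from a data feed are slotted into templates, with simple rules choosing the wording. The data fields, team names and templates below are invented for illustration and do not reflect any particular company's system.

```python
# Illustrative sketch of template-based news generation from structured
# data. All field names and wording rules are invented for this example.

def generate_summary(game):
    """Turn one structured game record into a one-sentence report."""
    margin = abs(game["home_score"] - game["away_score"])
    winner, loser = (
        (game["home"], game["away"])
        if game["home_score"] > game["away_score"]
        else (game["away"], game["home"])
    )
    # A simple rule picks a verb that matches the score margin.
    verb = "crushed" if margin >= 3 else "edged" if margin == 1 else "beat"
    high = max(game["home_score"], game["away_score"])
    low = min(game["home_score"], game["away_score"])
    return f"{winner} {verb} {loser} {high}-{low}."

game = {"home": "Arsenal", "away": "Chelsea", "home_score": 3, "away_score": 0}
print(generate_summary(game))  # Arsenal crushed Chelsea 3-0.
```

The "limited variability" mentioned above is visible here: the output reads naturally only because the topic is narrow enough that a handful of templates and word-choice rules cover most cases.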
Data-driven journalism
The term "data journalism" may sound like a pleonasm, because journalism is arguably all about transforming data into a story. The "Data Journalism Handbook" defines it as the combination of traditional journalism (notably the nose for news and the ability to tell a story) with "the sheer scale and range of digital information now available". It is the growing availability of digital (and structured) data, together with the scalability of data-crunching algorithms and advances in data visualization, that defines this new trend.
One reason for the current popularity of the term may be readers' higher expectations of news outlets. They have become used to Internet companies and portals providing them with easy-to-navigate interfaces and visualization tools with which they can dive into the data. An increasing number of governments now even mandate that certain public information and data (belonging to or held by the government) be made publicly accessible. Organizations like WikiLeaks also provide a steady flow of raw data. Combining access to such a vast realm of data with reader expectations has led several news providers to apply techniques that extract information from, summarize and visualize the data.
The Guardian, The New York Times and Der Spiegel are some of the established outlets spearheading this trend, and on whose websites the reader is witness to real cases of data-driven journalism.
The algorithms that mine news and raw databases, converting them into mathematical objects, are being combined with other algorithms that read those objects and output news stories and summaries. From this combination a new business niche is arising: personalization. Its goal is to deliver real-time summaries of very specific information as soon as it appears. One day such services may be used by all news consumers, but for now most of the target users are specific decision-makers, for example those in financial markets.
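The "mathematical objects" in question are often simply word-count vectors. As a hedged illustration (the stories and interest profile below are invented, and real personalization systems are far more sophisticated), a story can be matched to a reader's interests by measuring the cosine similarity between its vector and a profile vector:

```python
# Illustrative sketch: texts become bag-of-words count vectors, and
# personalization ranks stories by similarity to a reader's profile.
# Stories and profile are invented for this example.
import math
from collections import Counter

def vectorize(text):
    """Map a text to a sparse word-count vector."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in u)
    norm = (math.sqrt(sum(c * c for c in u.values()))
            * math.sqrt(sum(c * c for c in v.values())))
    return dot / norm if norm else 0.0

profile = vectorize("interest rates bond markets inflation")
stories = [
    "central bank raises interest rates as inflation climbs",
    "local team wins cup final after dramatic penalty shootout",
]
# Rank stories by relevance to the reader's interest profile.
ranked = sorted(stories, key=lambda s: cosine(profile, vectorize(s)),
                reverse=True)
print(ranked[0])  # the central-bank story, which shares three profile words
```

In practice the vectors would be weighted (e.g. by term rarity) and updated in real time, but the core mechanism of comparing documents as vectors is the same.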
Citizen journalism
This last trend may well be the most revolutionary. Still in its early stages, its real impact is also the most difficult to predict.
The idea behind citizen journalism is that anyone can comment on and report news, and even be the first to break it. A journalist does not necessarily have to be at the right place at the right time. It may be enough to tap into the flow of reports from many people, aggregating, filtering and adding background and analysis to build a story.
The second ingredient needed for citizen journalism to work is a platform for sharing citizen reports. Micro-blogging platforms (most notably Twitter) are fulfilling this role, and much has been claimed about the importance of Twitter in the so-called Arab Spring of 2011. In unexpected events such as the Woolwich attack in the UK or the Boston marathon bombing, where very few facts were known, journalists responded to their readers' thirst for information by swimming through the sea of tweets and reporting their findings.
The real-time nature of this approach, and the public demand that drives it, exacerbate the historic tension between broadcasting breaking news and verifying it first. The resulting lack of trust is reflected in a recent survey, which showed that, irrespective of the source of a story, 60 per cent of people turned to an established outlet for confirmation of what was being reported.
The need to discover, filter and deliver information is a wonderful playing field for algorithms that organically incorporate uncertainty through machine learning. The Irish start-up storyful.com adds to this the importance of having trustworthy people who can confirm or deny a given piece of information. Its slogan "news from noise" encapsulates the problem and the challenge of this new kind of journalism.
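One way such algorithms "organically incorporate uncertainty" is to output probabilities rather than hard yes/no decisions. The toy filter below, a Naive Bayes classifier trained on a handful of invented example reports (not Storyful's actual method), scores each incoming report with a probability of being newsworthy, which downstream editors or verifiers can act on:

```python
# Illustrative "news from noise" sketch: a tiny Naive Bayes filter that
# returns a probability that a report is newsworthy. Training examples
# are invented; a real system would learn from large labelled corpora
# and combine this with human verification.
import math
from collections import Counter

training = [
    ("explosion reported near station police confirm", "news"),
    ("eyewitness photo smoke rising over the square", "news"),
    ("lol this traffic is so bad today", "noise"),
    ("check out my new blog post", "noise"),
]

counts = {"news": Counter(), "noise": Counter()}
for text, label in training:
    counts[label].update(text.split())

def p_news(text, alpha=1.0):
    """Posterior probability that a report is news (Laplace smoothing,
    equal class priors)."""
    vocab = set(counts["news"]) | set(counts["noise"])
    logp = {}
    for label in counts:
        total = sum(counts[label].values())
        logp[label] = sum(
            math.log((counts[label][w] + alpha) / (total + alpha * len(vocab)))
            for w in text.split()
        )
    m = max(logp.values())  # shift for numerical stability
    num = math.exp(logp["news"] - m)
    return num / (num + math.exp(logp["noise"] - m))

print(round(p_news("police confirm explosion near square"), 2))  # 0.97
```

Because the output is a probability, borderline reports can be routed to the trustworthy human checkers the article mentions instead of being silently published or discarded.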
Reaching beyond journalism
Machine-generated content, data-guided decision making and harnessing the wisdom of the crowd are radically changing how news is created and published. But these trends are not confined to news: their impact can be witnessed across a broad range of fields under the general terms of automation, analytics and crowdsourcing. Customer care is but one example of an industry where automation is used, although at the moment it is mainly restricted to IVR (interactive voice response) systems and instant messaging. Analytics is becoming widely adopted in this field to detect trending issues by mining customer comments. Much less mature are emerging applications of crowdsourcing to harness and share the knowledge published in specialized Internet forums.
As the face of journalism changes and embraces new technological opportunities, many other document-intensive business processes have a lot to learn from this transformation.
About the author:
Matthias Gallé is a research scientist in machine learning techniques applied to document access tasks at the Xerox Research Centre Europe. His main research interests include machine learning, grammatical inference and text algorithms.