Blog: The impact of AI on journalism and democracy.
In recent years, the threats posed by AI to democracies through hacking, manipulation, and disinformation campaigns have garnered public attention. However, technologies of the future could also enhance both the craft and the business of journalism, and create new opportunities for civic engagement. Krishna Bharat, the creator of Google News, shares his thoughts on AI’s potential contributions to newsrooms and democratic processes as well as ways to democratise AI itself.
GEN: AI has seen major developments in the past years, from automated fact-checking to news aggregation. While some see the potential applications of AI as an asset, others deem it problematic. Is there a middle ground? And what are, in your opinion, the main concerns newsrooms should or shouldn’t have?
Krishna Bharat: AI innovations tend to be aimed at communication and commerce broadly, but they end up impacting news business models and the information ecosystem — with both positive and negative consequences. User modelling and content targeting can be used to boost relevance, benefiting users and advertisers. They can also be used to manipulate vulnerable users with fake news, displacing legitimate content. The problem lies with social media business models: they make virality and watch time top-line metrics, and the AI optimises for them, boosting misinformation and spin.
Looking to the future, I see a lot of upside for newsrooms. AI can help them cut costs without sacrificing reach or quality. Journalists will find that AI gives them superpowers in discovery, analysis, reporting, and editing. With improved speech synthesis and recognition, and the ability to create custom multimedia content on the fly, news will become more engaging, interactive, and personalised.
The main concern for newsrooms, especially small ones, will be getting access to data scientists, machine resources, and AI models. We should find ways to pool resources and democratise access and know-how across the industry, keeping most of it open.
Are concerns about the lack of control over AI development warranted, and if so — where should we focus our efforts to regulate it?
The deep learning systems being developed today are extremely general-purpose and will be applied to every domain, often without full disclosure. Our best hope is to require a high level of transparency in both research and practice. Knowing which models are being employed and how they were trained would be useful to preempt concerns of bias and abuses of privacy. Of course, most companies will resist, calling it a trade secret, and governments will have to negotiate what is appropriate. There is also a risk of anti-competitive behaviour. If a company’s AI drives its market share and the model derives strength from the size of its user base, then the market leader will dominate. It will be hard for a new player to take them on.
How realistic is the democratisation of AI, given the monopolistic tendencies of the market?
Unfortunately, market forces favour winner-takes-all models. A large company with deep pockets can not only afford the best AI tech, but can also build superior AI models based on its large user/client base. This leads to a feedback loop that strengthens its position until it becomes untouchable. Also, there is a potential for cliques to develop, with companies sharing models trained on each other’s data. This could also happen at the national level, with the government encouraging an AI coalition to compete internationally.
Democratisation is not likely unless it is enforced to some extent. For example, governments could mandate the disclosure of model training details and require licensing of business-critical models for a reasonable fee. Many industries could self-organise into AI co-ops whose members pledge to make models and data sets open and free, or subject to modest licensing terms. This could be disclosed to the public and garner trust, as we have seen with Fair Trade labels. At present, Silicon Valley seems open to collaboration, preferring the open source model for tech and data. While they continue to push the limits on tech, there is hope. Once the best technology assets and technologists go underground, we have a problem.
Current applications of AI in journalism revolve around automated writing, gathering, distribution or fact-checking of news. What other potential applications do you anticipate in the future?
A key technology that I look forward to is super accurate, domain-sensitive speech recognition and speech synthesis. This will allow AI systems to interact with humans just as journalists do. Intelligent agents can be co-present in conversations and interviews, providing assistance and insight in real time. Tasks such as interviewing at scale can be delegated to a cyber reporter. This will also move news to a much more interactive model, via voice and synthesized video. This will feel more natural and engaging and displace many current formats.
Another area of growth will be knowledge representation within an organisation. Newsrooms will have the ability to pool the domain knowledge of thousands of experts and journalists, both within and outside the organisation, by ingesting prior and ongoing work. As reporters find new information it will be organised efficiently and associated with relevant knowledge assets, boosting research capabilities. This will lead to smarter, domain-specific AI services that understand the semantics of the information under consideration, benefiting both journalists and consumers.
With AI finding its way into newsrooms, what role will human journalists play in the future and how will their skill set differ from today’s journalists?
I see AI giving journalists superpowers to scale themselves in time and space. Since there will always be more to cover than journalists can possibly handle, I am not worried they will be out of work. I think scaling will allow for better journalism, on more topics, and in greater depth. The skill set needed by journalists will expand to require more interaction with autonomous services and AI systems. At the same time, some laborious production tasks (e.g., layout, proofreading, video editing) can be handed off to robotic agents, with only high-level direction required from the human. Correspondingly, some skills may no longer be required.
Beyond journalism, how can AI contribute to democratic processes?
AI can do a number of things in this regard. Firstly, AI can boost our ability to fact check claims, detect misinformation waves in social media, and verify the accuracy of journalism inputs. This can be used to defend accurate reporting and combat misinformation.
AI can also boost the role of data journalism, both by helping derive insights from data and by enabling scalable surveys and data collection. This moves the industry away from shallow, anecdotal reporting to more trustworthy, data-driven reporting. Journalism’s aim is to portray the state of the world accurately to citizens, so that they can make informed decisions, vote correctly, and hold the powerful accountable. AI systems can bring rigour and scale to further this end. They are also immune to human bias, physical threats, and corruption. In certain scenarios, an AI aggregator of information could provide a more unbiased and credible report of what’s happening than a conventional source — provided there is a high level of transparency on how it operates.
Major developments in AI applications are happening in the U.S. and China under very different frameworks. Where are the differences from an industry insider’s perspective and what are the potential consequences of this disparity for the industry as a whole?
The US still dominates in AI R&D today and in new AI ventures overall, but China’s share of AI startup investment is growing rapidly. This is especially true in facial recognition and AI chips, where the Chinese government has a strategic interest and is both a major consumer and exporter of surveillance tech. Success in AI depends on having the best hardware for training and sourcing large amounts of labelled data. China is well positioned to execute on both fronts. But so are other countries with large domestic markets. We can hope that AI development continues to proceed in the open, with sharing of best practices and technology, but we cannot count on it. In particular, models are valuable assets that can be hoarded and shared selectively. It could very well go the way of nuclear technology and become the next arms race, with hoarding and export restrictions, which could cause disparities both within and across nations.
With the impending changes to the online copyright law in the EU, what are your opinions on Article 11 and its influence on news aggregation platforms in general?
As others have observed, Article 11 seems like misguided legislation that will hurt the news industry. It is based on a fundamental misunderstanding of how the link economy on the open web works. It is now widely accepted by web authors that linkage from websites and search engines, with a title and snippet, is the best way to acquire monetizable traffic. Tests have shown that anything short of that much disclosure will reduce traffic. The multibillion-dollar online ad market is proof that links in this format benefit the target more than the source. Despite all this evidence, the EU wants anyone linking to news content with more than a few words to pay for the privilege of driving traffic to the publisher. Nobody expects platforms and search engines to pay. If they do it for online news (some of which can be of poor quality), they will have to do it for every other category (e.g., scientific, legal, educational content). The logical outcome of reduced linkage is a huge traffic drop that disproportionately punishes small and medium publishers and hurts information access in EU countries. You will no longer be able to search for news and find it. As publishers fold, the diversity of voices will shrink. It will damage citizen awareness, and also make it harder for journalists to research their stories or monetize their news. The top publishers who lobbied for Article 11 may be hoping it will crush smaller publishers and reduce competition, but if it does, it will come at a tremendous cost to their country and democracy.
Krishna Bharat is a keynote speaker at this year’s GEN Summit in Athens, Greece, from 13 to 15 June, sharing his insights on the latest tech and business trends for newsrooms.
Previously, Krishna worked as a distinguished research scientist at Google for over 15 years, where he led the team developing Google’s news product. He is best known as the creator of Google News, a service that automatically indexes over 25,000 news websites in more than 35 languages to provide a summary of news resources. Google News launched in beta in September 2002 and was released officially in January 2006.