一、Journalism and artificial intelligence: Ghost writers
Robot reporters imply profound changes to the news industry
1A sensational scoop was tweeted last month by America’s National Public Radio: Elon Musk’s “massive space sex rocket” had exploded on launch. Alas, it turned out to be an automated mistranscription of SpaceX, the billionaire’s rocketry firm. The error may be a taste of what is to come as artificial intelligence (AI) plays a bigger role in newsrooms.
2Machines have been helping deliver the news for years: the Associated Press (AP) began publishing automated company earnings reports in 2014. The New York Times uses machine learning to decide how many free articles to show readers before they hit a paywall. Bayerischer Rundfunk, a German public broadcaster, moderates online comments with AI help. AP now also deploys it to create video “shot lists”, describing who and what is in each clip.
3As AI improves, it is taking on more creative roles. One is newsgathering. At Reuters, machines look for patterns in large data sets. AP uses AI for “event detection”, scanning social media for ripples of news. At a journalism conference last month in Perugia, Italy, Nick Diakopoulos of Northwestern University showed how ChatGPT, a hit AI chatbot, could be used to assess the newsworthiness of research papers. The judgments of his model and those of human editors had a correlation coefficient of 0.58—maybe a close enough match to help a busy newsroom with an initial sift.
4ChatGPT-like “generative” AIs are getting better at doing the writing and editing, too. Semafor, a news startup, is using AI to proofread stories. Radar AI, a British firm, creates data-driven pieces for local papers (“REVEALED: Map shows number of accessible toilets in south Essex”). Its five human journalists have filed more than 400,000 partly automated stories since 2018. In November Schibsted, a Norwegian media firm, launched an AI tool to turn long articles into short packages for Snapchat, a social network. News executives see potential in automatically reshaping stories for different formats or audiences.
5Some sense a profound change in what this means for the news industry. AI “is going to change journalism more in the next three years than journalism has changed in the last 30 years”, predicts David Caswell of BBC News. By remixing information from across the internet, generative models are “messing with the fundamental unit of journalism”: the article. Instead of a single first draft of history, Mr. Caswell says, the news may become “a sort of ‘soup’ of language that is experienced differently by different people”.
6Many hacks have more prosaic concerns, chiefly about their jobs. As in other industries, employers portray AI as an assistant, not a replacement. But that could change. “We are not here to save journalists, we are here to save journalism,” Gina Chua, executive editor of Semafor, told the Perugia conference. The industry needs all the help it can get. On April 20th BuzzFeed shut down its Pulitzer-prizewinning news operation. A week later Vice, a one-time digital-media darling, made cuts; it is reportedly preparing for bankruptcy. As Lisa Gibbs of AP puts it: “In terms of challenges to journalists’ employment, [AI] is not highest on the list.”
二、The language instinct
ChatGPT’s way with words raises questions about how humans acquire language
1When Deep Blue, a chess computer, defeated Garry Kasparov, a world champion, in 1997, many gasped in fear of machines triumphing over mankind. In the intervening years, artificial intelligence has done some astonishing things, but none has managed to capture the public imagination in quite the same way. Now, though, the astonishment of the Deep Blue moment is back, because computers are employing something that humans consider their defining ability: language.
2Or are they? Certainly, large language models (LLMs), of which the most famous is ChatGPT, produce what looks like impeccable human writing. But a debate has ensued about what the machines are actually doing internally, what it is that humans, in turn, do when they speak—and, inside the academy, about the theories of the world’s most famous linguist, Noam Chomsky.
3Although Professor Chomsky’s ideas have changed considerably since he rose to prominence in the 1950s, several elements have remained fairly constant. He and his followers argue that human language is different in kind (not just degree of expressiveness) from all other kinds of communication. All human languages are more similar to each other than they are to, say, whale song or computer code. Professor Chomsky has frequently said a Martian visitor would conclude that all humans speak the same language, with surface variation.
4Perhaps most notably, Chomskyan theories hold that children learn their native languages with astonishing speed and ease despite “the poverty of the stimulus”: the patchy and unsystematic language they hear in childhood. The only explanation for this, the argument runs, is that some kind of predisposition for language is built into the human brain.
5Chomskyan ideas have dominated the linguistic field of syntax since their birth. But many linguists are strident anti-Chomskyans. And some are now seizing on the capacities of LLMs to attack Chomskyan theories anew.
6Grammar has a hierarchical, nested structure involving units within other units. Words form phrases, which form clauses, which form sentences and so on. Chomskyan theory posits a mental operation, “Merge”, which glues smaller units together to form larger ones that can then be operated on further (and so on). In a recent New York Times op-ed, the man himself (now 94) and two co-authors said “we know” that computers do not think or use language as humans do, referring implicitly to this kind of cognition. LLMs, in effect, merely predict the next word in a string of words.
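“Predicting the next word” can be made concrete with a deliberately tiny model. The sketch below is a toy bigram predictor over an invented corpus; real LLMs use deep neural networks trained on vastly more text, but the training objective, guessing the most likely continuation, is the same idea:

```python
# Toy next-word predictor: count which word most often follows each word
# in a tiny invented corpus. A caricature of the LLM objective, not of
# any real model's architecture.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Tally bigram counts: how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("sat"))  # → "on" (both occurrences of "sat" precede "on")
```

Nothing in such a model represents nested phrase structure explicitly, which is why the question of whether far larger predictors nonetheless *learn* hierarchy is contested.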
7Yet it is hard, for several reasons, to fathom what LLMs “think”. Details of the programming and training data of commercial ones like ChatGPT are proprietary. And not even the programmers know exactly what is going on inside.
8Linguists have, however, found clever ways to test LLMs’ underlying knowledge, in effect tricking them with probing tests. And indeed, LLMs seem to learn nested, hierarchical grammatical structures, even though they are exposed to only linear input, ie, strings of text. They can handle novel words and grasp parts of speech. Tell ChatGPT that “Dax” is a verb meaning to eat a slice of pizza by folding it, and the system deploys it easily: “After a long day at work, I like to relax and Dax on a slice of pizza while watching my favorite TV show.” (The imitative element can be seen in “Dax on”, which ChatGPT probably patterned on the likes of “chew on” or “munch on”.)
9What about the “poverty of the stimulus”? After all, GPT-3 (the LLM underlying ChatGPT until the recent release of GPT-4) is estimated to be trained on about 1,000 times the data a human ten-year-old is exposed to. That leaves open the possibility that children have an inborn tendency to grammar, making them far more proficient than any LLM. In a forthcoming paper in Linguistic Inquiry, researchers claim to have trained an LLM on no more text than a human child is exposed to, finding that it can use even rare bits of grammar. But other researchers have tried to train an LLM on a database of only child-directed language (that is, of transcripts of carers speaking to children). Here LLMs fare far worse. Perhaps the brain really is built for language, as Professor Chomsky says.
10It is difficult to judge. Both sides of the argument are marshalling LLMs to make their case. Professor Chomsky himself has offered only a brusque riposte. For his theories to survive this challenge, his camp will have to put up a stronger defense.
三、Clause for thought: first non-invasive way to read minds as AI turns brain activity into text
1An AI-based decoder that can translate brain activity into a stream of text has been developed, in a breakthrough that allows thoughts to be read non-invasively for the first time. The decoder could reconstruct speech with uncanny accuracy while people listened to a story – or even silently imagined one – using only fMRI scan data. Previous language decoding systems have required surgical implants, and the latest advance raises the prospect of new ways to restore speech in patients struggling to communicate as a result of stroke or motor neurone disease. Dr Alexander Huth, a neuroscientist who led the work at the University of Texas at Austin, said: “We were kind of shocked that it works as well as it does. I’ve been working on this for 15 years … so it was shocking and exciting when it finally did work.”
2Mind-reading has traditionally been the preserve of sci-fi, in characters such as the X-Men’s Jean Grey, but the latest AI technology has finally taken the concept into the real world. This decoder’s achievement overcomes a fundamental limitation of fMRI: while the technique can map brain activity to a specific location with incredibly high resolution, there is an inherent time lag, which makes tracking activity in real time impossible. The lag exists because fMRI scans measure the blood-flow response to brain activity, which peaks and returns to baseline over about 10 seconds, meaning even the most powerful scanner cannot improve on this. “It’s this noisy, sluggish proxy for neural activity,” said Huth. This hard limit has hampered the ability to interpret brain activity in response to natural speech because it gives a “mishmash of information” spread over a few seconds.
3However, the advent of large language models – the kind of AI underpinning OpenAI’s ChatGPT – provided a new way in. These models are able to represent, in numbers, the semantic meaning of speech, allowing the scientists to look at which patterns of neuronal activity corresponded to strings of words with a particular meaning rather than attempting to read out activity word by word. The learning process was intensive: three volunteers were required to lie in a scanner for 16 hours each, listening to podcasts. The decoder was trained to match brain activity to meaning using the large language model GPT-1, a precursor to ChatGPT. Later, the same participants were scanned listening to a new story or imagining telling a story and the decoder was used to generate text from brain activity alone.
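The training step described above, matching brain activity to the semantic representations of speech, can be caricatured as a linear regression from embedding vectors to voxel responses. The sketch below uses random stand-in data and a plain ridge regression; the published decoder is considerably more sophisticated, so treat this as a schematic of the general encoding-model idea, not the actual method:

```python
# Schematic encoding model: learn a linear map from LLM embeddings of heard
# speech to the "fMRI" responses they evoke. All data are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_voxels, embed_dim = 200, 50, 16

X = rng.normal(size=(n_timepoints, embed_dim))   # semantic embeddings over time
true_W = rng.normal(size=(embed_dim, n_voxels))  # hidden "true" brain mapping
Y = X @ true_W + 0.1 * rng.normal(size=(n_timepoints, n_voxels))  # noisy scans

# Ridge regression: W = (X^T X + aI)^-1 X^T Y
alpha = 1.0
W = np.linalg.solve(X.T @ X + alpha * np.eye(embed_dim), X.T @ Y)

# At decoding time, candidate word sequences would be scored by how well
# their predicted brain response matches the recorded one.
pred = X @ W
score = 1 - np.sum((Y - pred) ** 2) / np.sum((Y - Y.mean()) ** 2)
print(f"fit R^2 = {score:.2f}")
```

The key design choice this illustrates is decoding by comparison: rather than reading words directly out of voxels, the system predicts what brain activity a candidate sentence *should* produce and keeps the candidates that fit best, which is why the output captures gist rather than exact wording.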
4About half the time, the text closely – and sometimes precisely – matched the intended meanings of the original words. “Our system works at the level of ideas, semantics, meaning,” said Huth. “This is the reason why what we get out is not the exact words, it’s the gist.” For instance, when a participant was played the words: “I don’t have my driver’s license yet,” the decoder translated as: “She has not even started to learn to drive yet.”
5In another case, the words: “I didn’t know whether to scream, cry or run away. Instead, I said: ‘Leave me alone!’” was decoded as: “Started to scream and cry, and then she just said: ‘I told you to leave me alone.’” The participants were also asked to watch four short, silent videos while in the scanner, and the decoder was able to use their brain activity to accurately describe some of the content, the paper in Nature Neuroscience reported.
6“For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences,” Huth said. Jerry Tang, a doctoral student at the University of Texas at Austin and co-author, said: “We take very seriously the concerns that it could be used for bad purposes, and have worked to avoid that. We want to make sure people only use these types of technologies when they want to and that it helps them.”
7Professor Shinji Nishimoto of Osaka University, who has pioneered the reconstruction of visual images from brain activity, described the paper as a “significant advance”. He said: “This is a non-trivial finding and can be a basis for the development of brain-computer interfaces.” The team now hope to assess whether the technique could be applied to other, more portable brain-imaging systems, such as functional near-infrared spectroscopy (fNIRS).
四、The Human Genome Project at 20: Epic ambition
The genomics revolution has transformed biology. Its work is not over yet
1TWENTY YEARS ago the Human Genome Project (HGP) unveiled a mostly complete sequence of the roughly 3bn base pairs of DNA found in every set of human chromosomes. The project was chock-full of ego and hype, but it also heralded rapid improvements in, and dramatically lower costs of, sequencing. This fed the success of the burgeoning field of genomics, which has transformed biology and medicine—and still holds plenty of promise.
2Genomics has added a new dimension to the study of life and evolution. It has helped scientists understand genes and proteins, and how they govern the growth and function of cells. CRISPR gene editing—a way to precisely modify the DNA in cells—gives researchers a handle on cellular function and dysfunction. The first treatments based on gene editing could be approved within a year. Plant scientists have acquired ways to create disease- and heat-resistant crops.
3The era of cheap genome sequencing opened the doors to biology as a data science. The data and findings from the HGP came close to being hidden behind patents. Instead they were opened up to the public, which proved crucial—a useful lesson for other big projects. Biologists’ databases now hold the sequences of millions of people and other organisms. This has helped draw links between genes, traits and diseases and also enhanced scientists’ understanding of evolution.
4Most of the revolution’s tangible effects have been in medicine. Screening for serious but treatable genetic diseases is already possible. Cancer is largely the result of genetics gone awry. Sequencing the genome has become a routine part of treating many tumors. Doing so allows doctors to work out which mutations the cancer has and therefore which course of treatment is likely to work best.
5Genomics will increasingly inform doctors’ decisions. Starting later this year, 100,000 babies in England will have their genomes sequenced and screened for around 200 conditions. Each disease is rare; together, they affect nearly one in 200 children. Early detection means early treatment, and a higher chance of a better outcome. The hope is that, in time, the precise variations in many hundreds of locations on a person’s genome will guide doctors. They seem likely to become a factor in assessments of whether a patient is likely to develop conditions such as cardiovascular disease and type-2 diabetes.
6Yet for the genomics revolution to realize its potential, plenty more can be done. Sequencing has fallen in cost from over $50m a genome at the end of the HGP to a few hundred dollars today, but making it even cheaper and more convenient would allow it to be more widely available. People’s genetic sequences need to be integrated into their medical records, requiring data infrastructure, digitized records, and the setting of robust security and privacy standards. Scientists must also continue to collect more diverse data, beyond those of patients in the rich world. That will help them understand variations in the genome. Some projects, such as the Three Million African Genomes and GenomeAsia 100K, are already under way.
7The science will need to progress further, too. Researchers now have a decent understanding of diseases that are affected by single genes. But they do not yet have a good grasp of how genes interact with each other. And much is unclear about the interplay between groups of genes and people’s environments. Nature versus nurture was once a popular debate in genetics but these days is largely seen as a false dichotomy. With genomes as with much successful research, the more you find out, the more you realize that you do not have the whole story.
五、Inflation: 28 years later
After decades of stagnation, wages in Japan are finally rising
1Kasahara Yoshihisa, boss of Higo Bank, a lender in Japan’s south, beams with pride as he explains plans to lift wages. The firm’s workers will see a 3% boost, as well as regular increases for seniority. But a sheepish look crosses his face when asked about the last time staff saw such a rise. “Twenty-eight years ago,” he admits.
2Higo Bank is no outlier. Annual nominal wages in Japan rose by just 4% from 1990 to 2019, compared with 145% in America, according to the OECD, a rich-country club. Unions emphasize job stability over raises; bosses are reluctant to lift pay amid poor productivity growth. This has hampered efforts to escape deflation or low inflation. Thus the Bank of Japan (BOJ) has maintained a dovish policy stance despite headline inflation topping 4% this year.
3But recent data suggest change may be on the way: this year’s wage negotiations point to the fastest pay growth in 30 years. Daniel Blake of Morgan Stanley, an investment bank, calls it “the biggest macro development in Japan in a decade”. For Ueda Kazuo, who took over as BOJ governor on April 8th, the data will be a crucial factor in deciding whether to tighten policy.
4Parsing Japanese wage figures requires understanding local quirks. Wages are set when firms and unions meet for yearly negotiations known as shunto, or “the spring offensive”. Headline figures consist of two parts: scheduled seniority-based increases and rises in “base pay”. The latter have more impact on household spending, and thus more potential to influence inflation.
5According to figures released by Japan’s confederation of labor unions on April 5th, base pay will rise by 2.2% and headline wages by 3.7% this year, compared with 0.5% and 2.1% last year. Blue-chip firms have been particularly generous. Fast Retailing, a clothing giant which owns brands including Uniqlo, gave its regular workers increases of as much as 40%. More data will trickle in until July, as medium- and smaller-sized firms report results. Goldman Sachs, a bank, reckons the final figure will settle at 2% growth in base pay, the highest since 1992.
6Consumer prices have risen at a pace not seen in four decades. Although most of the rise comes from cost-push factors, such as imported food and energy, higher headline numbers have raised expectations and placed pressure on bosses. As Mr. Kasahara puts it: “Companies have a responsibility to provide wages that match inflation—and not just big firms in Tokyo.” Tight labor markets have also played a role: Japan has compensated for its shrinking, greying population by bringing more women and elderly into the labor force in recent years, but these opportunities are close to being maxed out.
7For both workers and the BOJ, the question is whether the raises are a one-off event or a step change. Even this year’s big gains may not be enough to assuage policymakers. Kuroda Haruhiko, the BOJ’s former governor, has said that still higher wage growth will be needed to hit the 2% inflation target. At his final press conference as governor, Mr. Kuroda said that although wage negotiations were encouraging, easing should continue. At his first press conference on April 10th, Mr. Ueda sounded much the same note.
六、Vending machines: All you need, from false eyelashes to a good read
1Gone are the days when vending machines would simply swallow your money. Nor are they limited to offering a savory snack or sweet treat. Instead they have quietly transformed into hi-tech cashless devices selling everything you could possibly need on the move, from false eyelashes to milk, and now books. The publisher Penguin Random House is showcasing a book vending machine at Exeter St David’s railway station in Devon. The titles available include Taste, by Stanley Tucci, but what is sold will change on a regular basis, sometimes to reflect key moments throughout the year such as Black History Month.
2David Llewellyn, chief executive of the Automatic Vending Association, says there has been a rise in machines offering personal protective equipment for workers, as well as a growth in “micro-markets”. “These are small retail units that sit within an office block offering fresh food, snacks and confectionery,” he says. “It’s like a small unattended retail corner, using things like smart fridges that can read what is taken out of them. You can buy a whole meal.”
3Llewellyn thinks micro-markets have arisen because of different working patterns, with more people at home. “There are less people consistently on sites now so not huge demand for canteens.” Book vending machines are not entirely new. The first Penguin book vending machine was in Charing Cross Road, London, in 1937, and the books cost sixpence each. In 2019, short-story vending machines arrived in Canary Wharf, dispensing one-, three- and five-minute stories free to passersby. Llewellyn says the vending market had a £2.2bn annual turnover before the pandemic but lost about 40% when lockdowns forced people out of the offices, transport hubs and leisure spaces where they are most commonly found. Sales are expected to return to pre-Covid levels this year.
4In 2018, Neil Stephen, from Inverurie, Aberdeenshire, introduced self-service machines dispensing farmhouse produce. The idea was inspired by his grandfather who, in the late 1970s, used to leave a wheelbarrow filled with turnips and other vegetables at the gates. “We introduced it at the right time, just before Covid, because our business skyrocketed,” he says.
5Pizza is another product that has emerged as a top seller, with machines serving it popping up from Hampshire to Bristol. Even Italy got in on the act with its own device, close to Piazza Bologna in Rome: Mr. Go Pizza offers up four varieties costing between €4.50 (£3.95) and €6. The public health sphere is also benefiting from vending devices. In Glasgow, in 2021, health officials set up a dispenser of sterilized needles to curb infections among drug users. Elsewhere, in America there are vending machines dispensing free packets of Narcan (naloxone), which can prevent death from a drug overdose.
6Japan is home to the most unusual vending operations, offering everything from umbrellas to fancy dress. The most recent addition, in the northern prefecture of Akita, sells fresh bear meat. The last formal count, conducted by a trade body in December 2020, found there were 2.7m vending machines in Japan – one for every 46 citizens. Llewellyn says the UK is following Japan down the route of offering more fresh food in vending, although it is unlikely we will ever reach the sheer number of machines in that country. “There is a huge array of machines [in Japan] but they have a lot of public vending, which would not stand our climate or our social responsibility to other people’s equipment. People don’t beat machines up in Japan.”