All posts by media-man

Politico Pro wants subscribers doing “deep research” on its site, not on ChatGPT

Politico Pro is a high-priced product ($12,000 or more annually!) targeted at a demanding audience of lobbyists, agency staffers, corporate execs, industry think-tankers, and oligarchs, real or aspiring. It aims to deliver high-leverage intel about the world of D.C. policy to people in a position to act on it.

So of course they’re adding AI.

But! Before you click away in disgust, it’s a potentially smart implementation that I think other publishers should be looking at. Here’s Mark Stenberg at Adweek:

When you think about an AI answering questions, your mind may go straight to: “Oh, god, the next USDA nutrition guidelines will recommend eating one small rock a day and getting your RDA of pizza glue.” But this AI is trained exclusively on the corpus of Politico’s journalism (both Pro and Proletariat), so it should not be confusing some years-old Reddit thread with the state of play in pharma regulations. It reminds me of BloombergGPT, the LLM trained by Bloomberg back in 2023 on its journalism as well as a variety of bespoke financial datasets. (Bloomberg also charges its terminal users a pretty penny and thus has an incentive to test out differentiating tech.)

Politico’s product is a partnership with Capital AI, a fresh-out-of-Y-Combinator startup that describes itself as “a custom Perplexity for every website in the world” — Perplexity being the AI “search engine” that returns LLM-generated paragraphs instead of 10 blue links. (Google just announced a very Perplexity-like “AI Mode” for search.)

But what’s most interesting to me is not any solution custom to Capital AI — it’s the enormous growth over the past three months in “deep research” AI models. These are modes from the major chatbot companies (OpenAI, Google, Perplexity, DeepSeek, Qwen) that force the LLM to work slowly, search for and ingest background materials, rethink its reasoning path, and produce something closer to a coherent single report than a back-and-forth chat.

These models are all grindingly slow compared to standard-issue ChatGPT et al.; results can take anywhere from 2-3 minutes to 20-30 minutes. But their final output is…surprisingly good. I’ve been using them on questions I know I could answer on my own with some heavy googling and PDF reading. (One recent one: What impacts did Grover Cleveland’s policies have on race relations in the South? How did they differ from Benjamin Harrison’s? Give me 10 minutes and a search engine and I’ll get a minimally competent answer that wouldn’t embarrass me in a tweet. Give me an afternoon and I could upgrade that to a reasonably informed one. But doing that work for me, in a few minutes? And producing something that’s probably closer to the all-afternoon version than the 10-minute one? That’s an actual service.)

While Politico Pro is lucky to have that high-dollar, high-interest audience, lots of outlets sit atop archives that could fuel this sort of deep research. The New York Times archives would do quite a good job of answering 20th-century history questions. ESPN.com’s corpus could answer lots of sports queries. Industry-specific sites could tackle plenty of high-leverage questions. A good metro newspaper or a business journal has a tremendous set of information. And all of these could get more powerful through training on bespoke datasets. (For instance, what if Politico Pro also trained its model on the entire Congressional Record? The text of all past and present congressional bills, CRS reports, and vetted think-tank research?)
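For what it’s worth, the core mechanic here isn’t exotic. Below is a minimal sketch of the idea in Python, with everything in it (the toy archive, the keyword scoring, the prompt format) as hypothetical stand-ins rather than anything Capital AI or Politico actually ships: pull the archive stories most relevant to a reader’s question, then hand those excerpts to an LLM as the only source material it’s allowed to cite.

```python
# Hypothetical sketch of archive-grounded question answering.
# The mini-archive, the scoring, and the prompt are illustrative placeholders,
# not any publisher's or vendor's real pipeline.
from collections import Counter
import re

ARCHIVE = [
    {"headline": "Senate panel advances drug pricing bill",
     "body": "The committee voted to send the measure to the full chamber."},
    {"headline": "USDA proposes new school nutrition guidance",
     "body": "The draft rule would phase in lower sodium limits over several years."},
]

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z]+", text.lower())

def score(query: str, article: dict) -> int:
    """Crude keyword overlap; a production system would use a search index or embeddings."""
    query_terms = set(tokenize(query))
    article_terms = Counter(tokenize(article["headline"] + " " + article["body"]))
    return sum(article_terms[t] for t in query_terms)

def retrieve(query: str, k: int = 3) -> list[dict]:
    """Pull the k archive stories that best match the question."""
    return sorted(ARCHIVE, key=lambda a: score(query, a), reverse=True)[:k]

def build_prompt(query: str, articles: list[dict]) -> str:
    """Ground the model in the publisher's own reporting rather than the open web."""
    context = "\n\n".join(f"{a['headline']}\n{a['body']}" for a in articles)
    return (
        "Answer the question using only the excerpts below, and cite headlines.\n\n"
        f"Excerpts:\n{context}\n\nQuestion: {query}"
    )

if __name__ == "__main__":
    question = "Where does the drug pricing bill stand?"
    prompt = build_prompt(question, retrieve(question))
    print(prompt)  # In a real product this prompt would go to an LLM, not stdout.
```

The specific scoring doesn’t matter; what matters is that the answer is constrained to the outlet’s own corpus, which is what makes the archive, rather than the open web, the product.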

Anytime someone searches your archives, they’re asking a question and counting on your outlet’s body of work to provide an answer. If you’ve got a particular area of expertise (and a digital subscription to add value to), wouldn’t you rather they ask you instead of ChatGPT?

Photo of a 2024 Politico event via the British Embassy Washington used under a Creative Commons license.

News unions are grappling with generative AI. Our new study shows what they’re most concerned about

Generative AI hype has launched newsroom experiments around the world. Even though many of these early applications have become cautionary tales, the hype has endured for over two years since OpenAI publicly launched ChatGPT.

In many ways, this is familiar territory for journalism. In a long line of digital technologies (smartphones, social media, the infamous “pivot to video”), generative AI is yet another sociotechnical force that journalists did not create or ask for, but that they must navigate and (try to) reshape.

Previous hype cycles of innovation, disruption, and adaptation provoked existential questions about the value of journalism — and coincided with waves of unionization among journalists in the United States. Stories of technological change, industry mission, and labor power are always intertwined.

As a professor of communication and journalism at the University of Southern California and a scholar-practitioner of technology law at New York University, respectively, we have seen how challenging it has been for journalists to make sense of generative AI. In a new article for Digital Journalism, we show how news media unions that represent and advocate for a growing number of journalists are trying to manage and stabilize generative AI.

Our work is based on a close study of nearly 50 union sources over a two-year period from 2022 to 2024. These sources fall into three general categories:

  • Public statements and testimonies, from national umbrella organizations like The NewsGuild-CWA to locals representing journalists at publishers like the Atlantic, CNET, Dow Jones, Insider, Los Angeles Times, and Sports Illustrated;
  • Collective bargaining agreements struck with 14 different media companies, including the Associated Press, Arizona Republic, Financial Times, Philadelphia Inquirer, and Politico;
  • Trade press articles, published in Nieman Lab, Columbia Journalism Review, Poynter, and elsewhere.

Together, these sources tell a story about how news unions are engaging generative AI. We find six areas where news media unions are focusing their generative AI attention and concern — and, notably, two areas where they’re not.

Where news unions are focusing their concern…

1. Unions acknowledge that publishers are the ones with the power to initiate generative AI experiments, control its use, and accelerate its adoption. Unions are countering this power by vocally advocating — sometimes successfully — for an active role in starting, slowing, and stopping journalistic generative AI. Several unions have secured collaborative arrangements in which management and labor discuss generative AI implementations together in advance.

2. Unions say publishers’ widespread lack of transparency is a key reason that workers fundamentally do not trust publishers’ generative AI plans. Unions are trying to resolve this trust deficit by demanding more transparency in everything from procurement to licensing deals.

3. Unions stress that the humanity of workers is essential to quality news work. They want publishers to trust workers’ judgments about whether and how to use generative AI.

Journalists stress that the technology cannot replicate the indispensable creativity and ingenuity that make journalism a public service and a key part of healthy societies. Unions argue that any use of generative AI must center humans, and that journalists must be free to opt out of generative AI altogether whenever it conflicts with their judgment — “they can’t be made to use it.”

4. Unions are understandably preoccupied with generative AI’s threat to automate news work, but this preoccupation doesn’t revolve solely around concerns about job loss and livelihoods. It also stems from a belief that generative AI is inherently unaccountable and unreliable in ways that are antithetical to the purpose and values of journalism. Generative AI may help with the “logistical, busywork, back-end side of reporting,” but unions see the “destructive, careless, and borderline fraudulent” use of generative AI to publish AI-generated stories as a threat to journalism’s accuracy, accountability, and core professional values.

5. Unions are agitating for greater control over news products. This creates some tension with publishers, but also aligns workers and management as they both struggle to counter the power of tech companies. While unions demand publisher concessions over journalistic autonomy and creative identities — including the “assurance that AI won’t be used to modify content after employees leave,” “protection from byline misuse,” and control over their “image or likenesses” — journalists generally support publishers’ efforts to enforce copyright claims against the technology companies that have trained GenAI models on journalists’ content and data.

6. Finally, unions see contractual guardrails as central to stabilizing generative AI, but they are concerned that direct worker action alone cannot force publishers to change their uses of generative AI. Unions are working outside of and around their publications, engaging audiences and policymakers for help defining generative AI as a problem, articulating journalism’s value against it, and envisioning solutions to generative AI’s challenges.

Across these themes, journalists are workers trying to understand generative AI’s hype, tame its power, and articulate yet again to managers and audiences why strong, human-made journalism matters. Union responses to generative AI aren’t simply about organized labor defending traditions and protecting jobs — guarding against hallucinations and automation — but also about media workers trying to stabilize a new, opaque, and rapidly changing technology. As they reflect, bargain, and advocate around generative AI, they show what they think their work is, why it has value, and what they need to be successful journalists serving public interests.

…and where they’re not

We also find two notable ways that unions are generally not reflecting, bargaining, and advocating about GenAI.

First, they largely fall silent when it comes to talking about generative AI’s broader social and infrastructural impacts — namely, the natural and human resources required to build and sustain datasets, train models, and power interfaces. They are somewhat concerned about the provenance and construction of datasets as biased, extractive, or copyright-infringing, but there is scant mention of generative AI’s ecological impacts — its dramatic water and energy needs — or the invisible, seemingly unrelated labor that makes journalistic generative AI possible — the often-ignored “ghost workers” who make AI seem automated and intelligent.

Second, we don’t hear unions talking much about how working with generative AI might impact journalists’ wellbeing — their job satisfaction, sense of professional accomplishment, or workplace stress. Unions focus on news work and working conditions, including generative AI’s power to speed up the pace of journalism, but leave largely unexplored generative AI’s emotional tolls on journalists’ work, identities, and hopes for the profession’s future.

These absences may simply be areas that fall outside of unions’ expertise or interest, or they may be deliberate and strategic choices to focus generative AI conversations around tractable and actionable concerns.

In any event, they suggest opportunities for news media unions to expand their thinking about who qualifies as a “media” worker, to see journalistic GenAI within larger infrastructures and ecosystems, and perhaps to use this moment for advocacy that further foregrounds the humans and humanity that power journalism.

Though the patterns that we found describe a snapshot in time, they serve as touchstones that scholars and practitioners alike might use to convene journalists, publishers, technologies, infrastructures, and audiences in ways that lead to better media systems.

Mike Ananny is an associate professor of communication and journalism at the University of Southern California Annenberg School. Jake Karr is the acting director of New York University’s Technology Law and Policy Clinic.

Photo by Matthew Rodier/Sipa USA via AP Images

Butterfly population in US shrinking by 22% over last 20 years, study shows

Drop in line with rate of overall insect loss as scientists point to habitat loss, pesticide use and the climate crisis

Butterflies may be among the most beloved of all creatures, routinely deified in art and verse, but they are in alarming decline in the United States with populations plummeting by a fifth in just the past two decades, according to the most comprehensive study yet of their fortunes.

The abundance of butterflies in the US slumped 22% between 2000 and 2020, the new analysis of more than 76,000 mostly regional surveys, published in Science, found. For every five butterflies fluttering daintily around at the start of the century, just four remain today.

Scientists Are Rising Up to Resist Trump Policies

March 7 demonstrations across the U.S. and Europe will protest cuts to research, staffing and funding, and push for a continued federal focus on diversity, equity and inclusion.

Thousands of scientists from scores of countries are joining together in solidarity to oppose attempts by the Trump administration to enact what they see as anti-scientific measures that threaten public health and the environment around the world.

This food researcher is on a mission to make fake meat taste better. Will she succeed?

Caroline Cotto’s research group taste-tests meat alternatives so plant-based companies can attract new customers – and help the climate

I am sitting in a Manhattan restaurant on a frigid Thursday in January, eating six mini servings of steak and mashed potatoes, one after another. The first steak I am served has a nice texture but is sort of unnaturally reddish. The second has a great crispy sear on the outside, but leaves behind a lingering chemical aftertaste. The next is fine on its own, but I imagine would be quite delicious shredded, drenched in barbecue sauce and served on a bun with vinegary pickles and a side of slaw.

If you peeked into this restaurant, you’d see nothing out of the ordinary – just a diverse range of New Yorkers huddled over plates of food. But everyone present is here for more than just a hot meal. We’re participating in a blind taste test of plant- (or sometimes mushroom-) based steaks, organized by a group of people who hope that better-tasting meat alternatives just might be a key to fighting the climate crisis.

Researchers reveal nitrogen’s dominant role in global organic aerosol absorption

A collaborative research team has introduced a nitrogen-centric framework that explains the light-absorbing effects of atmospheric organic aerosols. The study reveals that nitrogen-containing compounds play a dominant role in the absorption of sunlight by atmospheric organic aerosols worldwide, a finding that marks a major step toward improving climate models and developing more targeted strategies to mitigate the climate impact of airborne particles.

BP cuts boss’s pay by 30% after company misses profit targets

Murray Auchincloss paid £5.4m in 2024 as oil company ditched green investment strategy

BP cut the pay of its chief executive after a chastening year in which the British oil company missed profit targets and ditched its green investment strategy as it came under pressure from a US-based activist investor.

Murray Auchincloss’s pay decreased by 30% to £5.4m for 2024, according to the company’s annual report published on Thursday.

The Media Needs to Show How the Climate Crisis is Fueling the LA Wildfires

This piece originally appeared in the Guardian.

Last week, as the Sunset fire was bearing down on her Los Angeles home, Allison Agsten approached a group of television news crews gathering in her neighborhood. Did any of them plan to mention the role of the climate crisis in their reporting?

The question was professional as well as personal for Agsten, who runs a climate journalism center at the University of Southern California and has trained reporters on how to connect the climate crisis to what’s happening in the world. She has lived in her home along Runyon Canyon, near Hollywood, for a decade.

At a moment when making the climate connection to these deadly fires seemed urgent to Agsten, she told us, she was disappointed by the response she received. One reporter said she “just was not sure how my news director feels about covering climate on the air”. Another was more interested in what Agsten might know about looting in the area and asked if she had any security camera footage of the fire or of looting that could be used on air.

“It was disheartening because it’s my personal story, and it’s disheartening because it’s what I do for a living,” said Agsten, director of the USC Annenberg Center for Climate Journalism and Communication.

The Los Angeles fires represent a seminal moment for the climate crisis – and for journalism. These are not the wildfires of seasons past. They are mega-fires that have now burned an area larger than the entire city of San Francisco. They are likely to be the costliest disaster in US history, California’s governor, Gavin Newsom, has predicted. At last count, a staggering 6 million people remained under a critical fire threat.

Alas, these mega-fires have called forth a mega-failure by much of the news media. A review of coverage to date shows that most journalism is still not accurately representing how the climate crisis is upending our civilization by driving increasingly frequent and severe extreme weather.

Too much of the coverage has simply ignored the climate crisis altogether, an inexcusable failure when the scientific link between such mega-fires and a hotter, drier planet is unequivocal. Too many stories have framed the fires as a political spat between President-elect Donald Trump and California elected officials instead of a horrifying preview of what lies ahead if humans don’t rapidly phase out fossil fuels. Too often, bad-faith disinformation has been repeated instead of debunked. And rarely have stories named the ultimate authors of this disaster: ExxonMobil, Chevron and other fossil fuel companies that have made gargantuan amounts of money even as they knowingly lied about their products dangerously overheating the planet.

As the co-founders of the global journalism collaboration Covering Climate Now, we have a unique view into what’s been missing from the first chapter of the LA fires coverage. But we can also see what has worked and could help our fellow journalists do better on future chapters of the story.

More extreme weather is inevitable. In a welcome exception to most coverage, Time published a powerfully illustrated cover story headlined “The LA fires show the reality of living in a world with 1.5C of warming”. In an implicit rebuke of Trump and other climate deniers, the veteran climate reporter Jeffrey Kluger wrote that “fixing the problem first requires understanding – and, even more fundamentally, accepting – the science”. That science, he added, says that additional warming is unavoidable until humans stop burning fossil fuels.

When our planetary house is literally on fire, better news coverage is an essential climate solution. Climate activists have been sounding the alarm for decades. But until the general public – such as the countless working parents who are more focused on raising their kids than on following every twist and turn of current affairs – understands that our planetary house is on fire, why it’s on fire and that humanity possesses all the tools needed to extinguish the fire (except for enough politicians who will deploy those tools), there simply will not be enough public pressure to get governments to change course.

To be fair, there have been episodic examples of climate-savvy coverage of the fires. Individual stories by the Los Angeles Times, Variety, Axios, the Guardian, CBS News, ABC News, CNN and others are worth noting, if only because they illustrate how easy it is to do better.

Sometimes, a single sentence is enough to tie these fires to the larger climate reality. And all the better if that sentence is the story’s lede. Sammy Roth, one of the very best climate journalists in the US, did exactly that in his 14 January Boiling Point column for the Los Angeles Times: “Los Angeles is burning. Fossil fuel companies laid the kindling.”

One canard sometimes heard in newsrooms is that talking about the climate crisis is misplaced, even disrespectful, during the initial stages of a disaster when people are fearing for their lives. Certainly journalists should always remember our public service responsibility to provide timely, accurate information about evacuation routes and the like. But we can do that while also informing the public about why such disasters are happening in the first place.

Space is especially at a premium in TV and radio journalism, where reporters sometimes squabble with their producers for an extra 10 seconds of air time. But that didn’t stop CBS News from making the climate connection when it used the LA fires as a news peg for a segment on former president Jimmy Carter’s efforts against the climate crisis. It took CBS’s anchor Lindsey Reiser just six seconds to open her piece with the words “The wildfires in California are the latest in a string of natural disasters made worse by climate change.”

Of course, TV needs pictures to tell the story, and brave photographers have provided stunning images of the LA fires. But it’s an editorial choice what narration is paired with such images and, again, it doesn’t take long to make the climate connection. A 30-second weather segment on ABC’s Good Morning America showed harrowing video of the Palisades fire while the meteorologist Somara Theodore told viewers, “As the climate is changing, we are seeing that these wildfires are becoming more extreme.” The rise in global temperatures, she added, has meant that “41% more land [has burned] as a result”.

One of the most distressing failures has been the way too many news outlets took statements by Trump, Elon Musk and other known peddlers of disinformation at face value and even began framing their coverage of the fires accordingly. After Trump and Musk blasted out false claims last week about how California environmental and DEI policies were impeding a rapid response to the fires, reports by leading news organizations began echoing questions about whether local and state officials were properly prepared for this disaster.

Again, exceptions to the rule show how easy it would have been for those outlets to identify and debunk such disinformation instead. Some of the best work here was done by California outlets, including the public radio stations KQED and LAist. An added bonus of such debunking? It earns the public’s trust in the news outlet, which in turn can build the audience and revenues that independent journalism desperately needs.

Journalists have understandably wanted answers about what caused these fires. Was it arson, or downed power lines, or something else? Of course it’s important to understand the proximate cause of these terrible blazes. But zeroing in only on the spark misses the bigger truth. We would not be living this particular catastrophe without an overheated climate.

Let’s listen to what our USC colleague Allison Agsten is telling us. For now, her home is spared. But strong winds are picking up again and emergency officials are warning of more fires to come. When a house is on fire, by all means let journalism show us the flames. But tell us why the house is burning, too.

The Tesla Ethicist: Should I Sell My Tesla EV To Protest Musk’s Government Interference?

In this edition of the Tesla Ethicist, we weigh in on the degree to which our consumer spending reflects our moral compasses. A Tesla owner wonders whether that company’s anti-democratic values are reason enough to sell their proven, premium electric vehicle. Dear Tesla Ethicist, I see a few celebrities ... [continued]
