
Recording During the Coronavirus Pandemic

We are facing a global crisis caused by an infectious disease, and, understandably, reporters and producers alike are worried about how to protect themselves and their interviewees. It's important to stress that here at Transom we are not doctors or infectious disease specialists, so nothing we suggest below should […]

The post Recording During the Coronavirus Pandemic appeared first on Transom.

A Word from AIR and COVID-19 Resources for freelancers

From our CEO, Ken Ikeda:

The effects of the COVID-19 outbreak can be felt everywhere and could last for many more months. We at AIR want to communicate our support for the freelancers and small businesses among our membership, peer institutions and friends. There are few resources available for small businesses, and for many of us there are few funds to cover living costs as gigs are delayed or cancelled. (Details on those funds that do exist are available from the freelance artists resource guide.)

Please don’t hesitate to communicate with us. Our advocacy work is grounded in fair wages and working conditions for freelancers. COVID-19 illuminates just how challenging and fragile our economy, work portfolios, and professional security can be. At the same time, the value of the incredible reporting and storytelling that each of you contributes is needed now more than ever. We’ve adopted the saying internally that AIR members are “Always Independent. Never Alone.” In the best of times this captures the spirit of everyone we meet. In the most challenging times, it is our promise that you are not alone. As a community, if we can connect you with others, listen, and assist in any way, please let us know.

AIR staff have gathered information from across our network in an effort to amplify the work others have already put in. We are thankful for how quickly they have mobilized to provide information and resources and hope that you find it useful as well. 

Your primary source of information and guidance about the spread of the virus and best practices for prevention should absolutely be the CDC and your own local news outlets, which are best positioned to provide fact-based assessments of local conditions.

We’ve identified a few additional resources that may help AIRsters navigate the next few months: 

Squadcast is a web-based app that records conversations with remarkable fidelity. AIR members receive a free one-month trial and up to 5 hours of free recording time. Use promo code AIRMEDIA at checkout.

Women in Sound shared an excellent crowdsourced roundup of resources for finding work and financial support, as well as handling the logistics of social distancing.

For folks in newsrooms, Hearken shared some fantastic resources on Handling Audience Questions in a Crisis.

Wherebyus’s Rebekah Monson shared some practical tips for managing newly remote teams on Medium. It’s a great read if you’re on a team that isn’t used to working remotely. 

The Marketplace Tech team has shared its remote plan, including a basic protocol for using Zoom.

Current has a great post from KUOW News Director Jill Jackson on how the station is handling coverage of the outbreak in Seattle.

If there are resources that you have found helpful, please do share them!

The post A Word from AIR and COVID-19 Resources for freelancers appeared first on AIR.

Pym.js Embeds version 1.3.2.3: Now with AMP ⚡️ support!

It's hard to make truly custom interactives work within WordPress. INN Labs' Pym.js Embeds plugin is built to make it easier for your newsroom to embed your latest data project, with help from NPR's Pym.js library.

Other solutions often involve pasting JavaScript into the post editor, disabling or bypassing parts of WordPress' security filters, or using interactive-builder plugins that limit your creative freedom.

All that Pym.js Embeds requires is a place to host your interactive as a standalone HTML page, and that you use NPR's Pym.js library in your interactive to make it resizable. Our plugin provides the seamless WordPress integration. Take the URL of your interactive and embed it in posts using the Block Editor or shortcodes. We handle the rest!
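
For a concrete (if hypothetical) picture of what that looks like, here is a minimal sketch of a Pym-ready standalone page. The file name and the Pym.js script path shown are illustrative assumptions; the `new pym.Child()` call is what makes the page report its height so the embedding iframe can resize.

```html
<!-- my-interactive.html: a hypothetical standalone page you host anywhere. -->
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>My data interactive</title>
  </head>
  <body>
    <div id="chart">
      <!-- Your chart, map, or other interactive renders here. -->
    </div>

    <!-- Load NPR's Pym.js (path shown is the commonly documented one; pin or self-host as you prefer). -->
    <script src="https://pym.nprapps.org/pym.v1.min.js"></script>
    <script>
      // pym.Child() measures this page and reports its height to the parent page,
      // so the embedding iframe resizes to fit the content.
      var pymChild = new pym.Child();

      // If the page's height changes later (say, after data loads),
      // ask the parent to re-measure:
      // pymChild.sendHeight();
    </script>
  </body>
</html>
```

On the WordPress side, you would then point the plugin's block or shortcode at that page's URL, along the lines of [pym src="https://example.com/my-interactive.html"]. The exact shortcode attributes here are an assumption; check the plugin's documentation for the syntax your version supports.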

For the first time, thanks to the efforts of Claudiu Lodromanean and Weston Ruter, the Pym.js Embeds plugin supports the official WordPress AMP ⚡️ plugin. With both plugins installed, your Pym.js-based iframes will now be displayed as <amp-iframe> tags when your site is viewed through AMP.

Since amp-iframe now supports Pym.js's messages as a recognized protocol, your embedded content is more likely to work on AMP pages than it was before. As Google drives more traffic to your AMP pages, your readers will continue to have the same first-class experience they'd have on your full site.
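
For illustration only, an AMP-rendered Pym embed ends up as markup roughly like the sketch below. The URL, dimensions, and overflow text are made up, and the exact attributes the plugin emits may differ; the shape follows amp-iframe's documented requirements (a sandbox that allows scripts, a responsive layout, and an overflow element when the frame is resizable).

```html
<!-- Rough sketch of a Pym.js embed rendered as an <amp-iframe> on an AMP page. -->
<amp-iframe
    src="https://example.com/my-interactive.html"
    width="600" height="400"
    layout="responsive"
    resizable
    sandbox="allow-scripts allow-same-origin"
    frameborder="0">
  <!-- amp-iframe requires an overflow element when the frame is resizable. -->
  <div overflow tabindex="0" role="button" aria-label="Show more">Show more</div>
</amp-iframe>
```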

This release also fixes some minor documentation issues, and we've improved this plugin's contribution guidelines on GitHub for external contributors.

Connect with the INN Labs Team

If you're using this plugin, let us know how you're using it! Send us links to cool things you've done with it; we'd love to include them in our weekly newsletter.

If you'd like to learn more about INN Labs' open-source WordPress plugins and tools for publishers or how we can work together on your next project, send us an email or join us at one of our weekly Office Hours.


Introducing AIR’s new Engagement Strategist

Erin McGregor joined AIR as its new Engagement Strategist on January 21st. AIR’s CEO, Ken Ikeda, explains her new role as “a combination of member engagement, partnership development and oversight of our online platform, which is relaunching in early May. We had so many compelling applicants for this role, but Erin’s aspirations for AIR, her project management experience and work in founding Gaydio, align so well with our priorities. We are assembling a team of active learners, practitioners and organizers. Erin is all three. We are so thrilled she has joined AIR!”

A proud Canadian, Erin is based out of Philadelphia. She brings with her 10 years of project management and coordination experience in non-profit, corporate, and education sectors and 6 years of experience as an audio producer. “I’ve been a member of AIR for three years. The organization supported me through my early career in audio production and I am thrilled to bring those experiences and my passion for people who tell great stories to the Engagement Strategist position. I look forward to connecting with our members.”

Erin can be reached directly at erin@airmedia.org.

 

The post Introducing AIR’s new Engagement Strategist appeared first on AIR.

Rate Guide: Editing and Content Strategy

Everyone needs an editor. Yes, even you. Good editors bring an experienced ear to the whole story, and that can make a world of difference.

And even if you know precisely what you’re doing, you also need someone with an experienced eye on the big picture. On some shows that’s the executive producer, and it’s a full-time role. Other shows bring in a consultant to lay solid groundwork and then step back after the first few episodes are out the door. Often, newcomers to the space turn to experienced editors in search of guidance that looks a lot more like content strategy or operations. We’ve covered this group of experts under “consulting” below.

In this last installment of our 2019 rate guide series, we wanted to capture those two roles, editing and consulting, and the current market rates we found in our research.

Though these two roles are very different in practice, we’ve included them in a single guide because many of the consultants and other indies we interviewed in the course of our research on rates told AIR that they also do some story editing. Some of them mix a part time staff editing role with freelance reporting. Some freelance full time and only edit stories. We also talked to experienced professionals who tackle more strategic work helping podcasts or radio shows think through their plans, develop realistic editorial budgets, refine their voice, and maximize their reach.

Editors

In this context “editing” refers to the process of working with a producer, reporter or host to refine and finalize a story, episode or segment. Sometimes we’ll use “story editing” to distinguish this kind of editing from “audio editing,” or actually cutting the tape, which we cover in our guide to engineering and music rates.

Most editors are involved from the pre-reporting stage to help frame the goals of the story; they check in along the way to help troubleshoot and prioritize; and they help shape the final piece, including the script itself.

It depends on the format of a show and on the rest of the staffing structure, but in general an editor consults with a reporter or producer about the content, tone and structure of a story (or an episode or series). Often that starts at the conceptual stage, talking with a reporter, producer or host about the goals and big questions for the project. In most cases an editor expects to check in along the way to help assess the project’s evolution by asking questions like: What do we have? What do we need? What is turning out to be hard? Almost every editor will want to work directly with the script. Some will also want to listen to a lot of raw tape, while others look to the reporter or producer for that. Every editor will want to listen to a live read with tape.

Obviously some projects proceed much faster than others: the conceptual stage might be a quick conversation for a news feature, while a more deeply reported segment or series could involve substantially more conversations.

Editors tend to bring a good deal of experience in both reporting (ensuring that a piece is complete and accurate) and narrative (ensuring that a piece tells a compelling story).

Rates

For experienced editors, we saw rates from $85 to $150 per hour, with most falling in the $100 to $125 range. We did talk to a handful of either specialized or uniquely desirable editors whose rates ranged significantly higher than $150/hour.

Newer editors who are still establishing a professional reputation cited rates closer to $75 per hour, in some cases ranging down to $50 or $65.

Editors at every level of experience cited a range and noted that rates at the lower end of their range reflect consistent, reliable gigs that they can count on and have been doing for a few years.

Finding Editors

Need a great editor? You can search the Talent Directory for AIRsters who are available for Story Editing. And if you’re an AIR member with editing experience, update your profile!

Consultants: Shaping the Big Picture

Most productions can benefit from the insights of an experienced leader who can help establish the show’s voice, identify opportunities to reach a larger audience, assemble a staff and realistic budget, and connect with financing. While some editors can also provide this kind of support, this is usually where consultants come in on a small production.

While titles like “Executive Producer” and “Content Strategy Consultant” are by no means interchangeable, we found that most shows need someone in one or both roles, at least during the conception and launch phase. Once a show is up and running the folks who provided initial scaffolding aren’t as necessary — but unless there’s a very experienced editor in place to keep things moving in the right direction, some level of strategic oversight is important at any stage.

As we were interviewing consultants with an eye for the big picture, we also talked to a few who specialize in operations. If running an organization is outside your expertise, operations consultants can help you get the contract templates you need, connect with attorneys when you need legal support, and make sure that your budget includes the logistical details of keeping a production running smoothly, details that are often left out of editorial budgets. An operations consultant can help a show scale, or simply start up, in a way that makes sustainability possible.

Some consultants who do this work describe themselves simply as experts; others use the title “showrunner.” More than one described their work as “creative-meets-editorial-meets-business.”

Our sample size for big-picture consultants was both small and diverse, so these rates represent snapshots rather than a complete picture. We’re still including them here because we regularly get questions about budgeting for strategic planning.

Rates

Rates for expert consulting among our respondents ranged from $1,000 to $1,500 per day; many consultants at this level told us that they only work by the day. Where folks take hourly rates, those range from $150 to $250 per hour, sometimes up to $500 per hour. The amount of time that any one show will need with an outside executive producer or content strategist will vary, but most consultants will be able to accurately estimate the level of engagement they foresee.

Methodology

We interviewed experienced shows and production companies about what they expect to pay for editors and strategic consulting. We also interviewed experienced editors and consultants, to understand how they set their rates and what they are actually paid on individual projects.

AIR’s work on rates

AIR is actively developing a series of guides designed to help independent producers, editors, and engineers set fair and reasonable rates, and to help everyone create accurate and realistic budgets. We want to hear from you. Contact amanda@airmedia.org if you have feedback on our rate recommendations.

This guide was completed in December 2019 and has not been updated. Our hope as an organization is that AIR can keep these rate guides up to date, but if you’re reading this and it is more than a year old, you should adjust the recommended rates to reflect changes in the cost of work and living.

The post Rate Guide: Editing and Content Strategy appeared first on AIR.

I Spent Three Years Running a Collaboration Across Newsrooms. Here’s What I Learned.

ProPublica’s Documenting Hate collaboration comes to a close next month after nearly three years. It brought together hundreds of newsrooms around the country to cover hate crimes and bias incidents.

The project started because we wanted to gather as much data as we could, to find untold stories and to fill in gaps in woefully inadequate federal data collection on hate crimes. Our approach included asking people to tell us their stories of experiencing or witnessing hate crimes and bias incidents.

As a relatively small newsroom, we knew we couldn’t do it alone. We’d have to work with partners, lots of them, to reach the biggest possible audience. So we published a tip form in English and Spanish and recruited newsrooms around the country to share it with their readers.


We ended up working with more than 180 partners to report stories based on the leads we collected and the data we gathered. Partnering with national, local and ethnic media, we were able to investigate individual hate incidents and patterns in how hate manifested itself on a national scale. (While the collaboration between newsrooms is coming to an end, ProPublica will continue covering hate crimes and hate groups.)

Our partners reported on kids getting harassed in school, middle schoolers forming a human swastika, hate crime convictions, Ivy League vandalism, hate incidents at Walmarts and the phrase “go back to your country,” to name just a few. Since the project began in 2017, we received more than 6,000 submissions, gathered hundreds of public records on hate crimes and published more than 230 stories.

Projects like Documenting Hate are part of the growing phenomenon of collaborative data journalism, which involves many newsrooms working together around a single, shared data source.

If you’re working on such a collaboration or considering starting one, I’ve written a detailed guidebook to collaborative data projects, which is also available in Spanish and Portuguese. But as the project winds down, I wanted to share some broader lessons we’ve learned about managing large-scale collaborations:

Overshare information. Find as many opportunities as possible to explain how the project works, the resources available and how to access them. Journalists are busy and are constantly deluged with information, so using any excuse to remind them of what they need to know benefits everyone involved. I used introductory calls, onboarding materials, training documents and webinars as a way to do this.

Prepare for turnover. More than 500 journalists joined Documenting Hate over its nearly three-year run. But more than 170 left the newsrooms with which they were associated at the beginning of their participation in the project, either because they got a new job, were laid off, left journalism or their company went under. Sometimes journalists would warn me they were leaving, but most of the time I had to figure it out from email bounces. Sadly, it was rare that reporters changing jobs would rejoin.

Be understanding about the news cycle. Intense news cycles, whether it’s hurricanes or political crises, mean that reporters are not only going to get pulled away from the project but from their daily work, too. Days with breaking news may mean trainings or calls need to be rescheduled and publication dates bumped back. It’s important to be flexible on scheduling and timelines.

Adapt to the realities of the beat. It’s not uncommon for crime victims, especially hate-crime victims, to be reluctant to go on the record or even speak to journalists. Their cases are difficult to report out and verify. So like in a lot of beats, a promising lead doesn’t guarantee an achievable story. Crowdsourced data made the odds even longer in many cases, since we didn’t receive tips for every partner. That’s why it’s important to set expectations and offer context and guidance about the beat from the outset.

Expand your offerings. Given the aforementioned challenges, it’s a good idea to diversify potential story sources. We made a log of hate-crime-related public records requests at ProPublica for our reporting, and we made those records available to partners. We also offered a weekly newsletter with news clips and new reports/data from external sources, monthly webinars and guidance on investigating hate crimes.

Be flexible on communication strategies. Even though Slack can be useful for quick communication, especially among large groups, not everyone likes to use it or knows how. Email is what I’ve used most consistently, but reporters’ inboxes tend to pile up, and sometimes calling is easiest. Some journalists are heavy WhatsApp users, and I get through to them fastest there. Holding webinars and trainings is helpful to get some virtual face time, and sending event invites is another way you can get someone’s attention amid a crowded inbox. It’s useful to get a sense of the methods to which people are most responsive.

Celebrate success stories. There is a huge amount of work that doesn’t end up seeing the light of day, so I make an effort to signal-boost work that gets produced. I’ve highlighted big stories that ProPublica and our partners have done to show other partners how they can do similar work or localize national stories. Amplifying these stories by sharing on social and in newsletters, as well as featuring them in webinars, can help inspire more great work.

Be diligent about tracking progress. Our database software has a built-in tracking system for submissions, but I separately track stories produced from the project, news clips and interviews that mention the project, as well as impact from reporting. I keep on top of stories partners are working on, and I also use Google Alerts, internal PR emails and daily clip searches.

Evaluate your work. I’m surveying current and past Documenting Hate participants to get feedback and gauge how participants felt about working with us. I’m also going to write a post-mortem on the project to leave behind a record of the lessons we learned.

read more...

AIR x WHYY: Print to Radio Bootcamp

Are you a Philly-based print reporter looking to venture into audio storytelling? AIR and WHYY are partnering to offer a weekend-long audio training bootcamp, January 11–12, that will provide print journalists with the skills they need to expand their careers into radio news reporting. WHYY, an NPR member station, is seeking to expand its pipeline of contributors available to pitch story ideas on a freelance basis.

The training is free. Nine selected individuals will work with award-winning radio journalists, editors and audio producers. Afterward, they will have access to guidance on story ideas from WHYY’s news directors and editors. All participants will have the opportunity to pitch a story directly to WHYY editors, with a focus on diverse, local voices.

You must be able to commit to the full weekend and complete readings/listening homework assigned prior to the training session. On the first day of training, be prepared to briefly describe a freelance story you would like to pitch to WHYY. 

Who is eligible:

  • Print and online journalists creating independently or for media outlets and working in the Philadelphia metro area (including southeastern Pennsylvania, South and Central New Jersey, and Delaware).
  • Recent graduates of college or university journalism programs (this includes associate’s degrees).
  • Individuals working with community-based news organizations.
  • Individuals with a laptop computer with at least 1GB RAM and the ability to download Hindenburg audio editing software from the Web (click here for more specs).

You will receive a complimentary one-year membership or one-year renewal to AIR. Apply by Tuesday, December 17 at 8pm ET.

The post AIR x WHYY: Print to Radio Bootcamp appeared first on AIR.

WordPress vs. Drupal: Which CMS is Best for You?

WordPress and Drupal are both powerful content management systems (CMSs) and two of the most popular in use today. There are several key differences between the two, and selecting the best fit has more to do with the specific goals and aims of a project than with one being necessarily better than the other.

Similarities

Both WordPress and Drupal are open-source content management systems built on the LAMP stack. Both offer:

  • Extensive shared functionality: plugins, themes, and similar ecosystems.
  • Regular events, support communities, trainings, and information/guides.
  • A robust long-term future; neither is going anywhere anytime soon.

Key Differences

Ease of use & complexity
  • WordPress: Easy to install; requires little to no coding knowledge to get a site up and running.
  • Drupal: Easy to install, but requires more customization and configuration to set up.

Learning curve
  • WordPress: Small learning curve.
  • Drupal: Large learning curve; coding experience often needed.

Content structures
  • WordPress: Focused on blog and article content displayed on web pages, with pre-configured content types and functionality for quick publishing and easy static pages and articles.
  • Drupal: Focused on interconnected content that can display in multiple places and push to other places on the web, with the ability to create complex, customizable content types for advanced page functionality.

Plugins/modules
  • WordPress: Solution-oriented plugins that provide specific out-of-the-box functions.
  • Drupal: Functionality-oriented modules meant to be combined with others to create the functions you need.

User access/workflows
  • WordPress: Easy to start publishing content, with simple roles and workflows out of the box; complex workflows are implemented via plugins like Edit Flow.
  • Drupal: Focused on more complex workflows with highly customized permissions and roles.

Security
  • WordPress: Third-party plugins can be prone to security vulnerabilities. For more information, see our blog post on WordPress Security.
  • Drupal: Also has third-party vulnerabilities.

Summary

WordPress is a powerful and easy-to-use tool for creating content-oriented websites of all sizes. Drupal is similar to WordPress in many ways; however, it has a steeper learning curve and requires more customization.

Questions?

If you would like to learn more about WordPress or are a publisher considering a site migration from Drupal to WordPress, get in touch!

Introducing: The Winter 2019 Full Spectrum Cohort

We’re proud to welcome the Winter 2019 class to AIR’s Full Spectrum Storytelling Intensive. This group brings together makers from throughout the United States, and spans a wide range of interests including audio ephemera, communal listening experiences, comedy, video games, ice cream cakes, and then some. Their storytelling expertise runs the gamut, from radio art to digital journalism to film and television production. This December, the Winter 2019 Full Spectrum cohort will gather at UnionDocs in Williamsburg, Brooklyn, for a week of learning and exploring with the guidance of co-leaders Chiquita Paschal and Cher Vincent. Meet the cohort below!

Myra Al-Rahim (she/her) is an audio producer currently living and working in Brooklyn, New York. Outside of her day job editing audio, she considers herself a practitioner of hauntological radio art. Many of her performances and compositions are preoccupied with the physicality of analog technologies and emphasizing the material qualities of playback systems that enable us to access recorded sounds. She is an avid synthesist and the use of electronic music tech is prevalent in most of her work. She is also a restless collector of audio ephemera from the past. In an ever privatizing world, Myra believes that airwaves remain one of the last domains truly reserved for the public. In all her compositions, she is dedicated to channeling the public spirit.

Heloiza Barbosa (she/her) is creator/producer of Faxina Podcast, a Portuguese-language podcast of stories that get swept under the rug. She is also a Brazilian academic researcher and a writer whose work has appeared in Alpaca literary magazine and other international academic and literary publications. Heloiza believes that the future of podcasting sounds like women, people of color, queer individuals, and immigrants.

Ellen Berkovitch (she/her) is an award-winning documentary radio producer and podcaster, as well as a digital journalism entrepreneur and writer. She has been news director of Santa Fe Public Radio. In 2018 she contributed a long-form investigative environmental radio documentary about uranium pollution on the Navajo Nation to KRCB/NPR One. Ellen is now at work on two new podcasts as well as on a participatory journalism initiative, the Re-Voice Project, to amplify voices of people experiencing homelessness as experts in the accelerating social crisis. She moved back to her hometown of New York City from New Mexico in September 2018. In 2019 she participated in #50WomenCan (Take the Lead in Journalism), a women’s journalism leadership training program. She’ll begin teaching journalism at Pratt Institute in January 2020.

Molly Born (she/her) is a journalist and producer living in West Virginia. She has spent 2019 working on film projects with documentary filmmaker Elaine McMillion Sheldon. In 2018, Born was a fellow with Report for America, an organization that places journalists in under-covered areas. As a fellow, she covered issues in southern West Virginia for West Virginia Public Broadcasting, the state’s NPR member station, and was a finalist for the 2019 Livingston Award. Before that, Born worked for six years at the Pittsburgh Post-Gazette.

Elizabeth Friend (she/her) is a freelance journalist and audio producer based in North Carolina. She’s also the co-creator of Audio Under the Stars, the largest outdoor audio documentary festival in the Southeast. Each summer Audio Under the Stars shares a selection of original stories and thoughtfully curated favorites designed to bring the world of audio storytelling out of your earbuds and into the outdoors, creating a one-of-a-kind communal listening experience.

Robin Gelfenbien (she/her) is the Ambassador of Fun aka an NYC-based storyteller, writer, comedian and host. She’s a three-time Moth StorySLAM winner who has performed on PBS, RISK!, Mortified and countless shows. She’s also shared the stage with luminaries like Hannah Gadsby, Trevor Noah and more. She’s the Creator and Host of the storytelling series and podcast, Yum’s the Word, which features her homemade ice cream cakes. The show has been named a New York Times and Time Out New York Critic’s Pick.

Andrea Gutierrez (she/her) produces interviews and features for The Frame, a daily arts and culture radio show by NPR station KPCC. She is drawn to stories about the intersections of gender, race, class, and ability in arts and culture. Her work has appeared in print, digital, and audio in various outlets, including BBC World Service, The Current (CBC), LAist, The California Sunday Magazine, Marfa Public Radio, and Bitch. In 2019 she was an AIR New Voices Scholar and a finalist in KCRW’s 24-Hour Radio Race. Andrea received an MFA in creative writing at the University of California, Riverside, and B.A. in German studies at Scripps College. Social: @AndreaGtrrz

Alisha Hall (she/her) is the creator and producer of the weekly podcast Tell It. Each episode is less than five minutes long and offers listeners little nuggets of wisdom. She’s worked as a board operator and freelancer for WFYI in Indianapolis and as a staff writer with FAF Collective, and she was a 2016 New Voices Scholar. Alisha is from University Park, Illinois. She is a mother, college graduate, crybaby, and an overall people-lover. She wants to tell whole stories that open minds and hearts and inspire people to live fuller lives!

Ari(el) Mejia (she/her) is a Chicago native who believes multiple truths exist at once. She came to audio art and radio production by way of feminist praxis, community organizing and education. As well as working on independent audio projects, Ari is the Assistant Director of Features at CHIRP Radio, a Chicago community radio station, where she curates and produces interviews with Chicago-based artists. She is also the Youth Radio instructor for Chicago’s youth programming non-profit After School Matters. Ari was a 2019 AIR New Voices Scholar, and is a proud member and co-founder of the Radio (R)ejects, a queer + POC audio collective challenging what it means to make experimental work, based out of Los Angeles, Chicago, and New Orleans.

Ron Lyons (he/his) is an independent journalist and audio storyteller. He writes about culture and technology, with work appearing on 101.9 WDET, StoryCorps and Slate Magazine. When not working on stories he unwinds by playing video games, eating Korean food and traveling. Journalism, like any other industry, can have both its good and bad days, but he wouldn’t leave it for anything else in the world.

Cat Modlin-Jackson (she/her) is a freelance journalist and co-host/producer of WRIR’s Race Capitol, a weekly show that interrogates racial narratives in Richmond, Virginia, home to the former capitol grounds of the Confederacy. Her educational background is in Middle Eastern politics and the Arabic language, which she studied intensively in Oman. She came to journalism by way of national politics after writing about the Women’s March in January 2017. She’s done stints in New York, Tampa Bay, and Houston, where she covered gender, race, food, agriculture, and the political threads that tie it all together.

Emerald O’Brien (she/her) is a multimedia storyteller with a predilection for audio. Along with the arts of communication and narrative crafting, she’s interested in the ways people create and consume stories as a way to engage with their world. Since receiving a degree in journalism from the University of Missouri, she has worked in various production and reporting roles at KBIA, NPR’s Morning Edition and APM Reports. She is currently a digital producer for American Public Media’s Live from Here with Chris Thile.

Erika Romero (she/her) is a producer based in NYC. She’s traveled across the country in the iconic StoryCorps Airstream trailer as part of the 2015 Mobile Tour and produced shows for Maeve in America and Mashup Americans. She is currently the associate producer for The Splendid Table.

 

Kip Reinsmith (he/his) is a writer, director, producer, and trans human who is passionate about unearthing stories and truths rarely told. His past podcasts include the time-traveling historical series Subframe about the 1915 World’s Fair in San Francisco, produced in partnership with the California Historical Society, and Marc Maron Presents: Classic Showbiz, about the untold history of comedy in the U.S. His audio work has been featured on WNYC’s Nancy and KALW. His directorial debut, a music video for Mal Blum’s Reality TV, was featured on NPR’s First Watch. He is a graduate of NYU’s Tisch School of the Arts where he studied film and television production with a focus on documentary studies.

Laurel Morales (she/her) has been a public radio reporter since 1998, but she has spent the majority of her career in northern Arizona covering Indian Country. During that time it’s been her mission to find innovative ways to break down complicated issues for local, regional and national public radio audiences. She has won several awards for her reporting and writing, including a national Edward R. Murrow Award for her continuing coverage of the Yarnell Hill Fire, the Arizona wildfire that killed 19 hotshot firefighters. Her greatest accomplishments have been sparking dialogues, informing decision-makers and moving people to action.

The post Introducing: The Winter 2019 Full Spectrum Cohort appeared first on AIR.

We’re going to be at Third Coast this week! Find our table for swag and conversation.

The entire AIR staff will be at Third Coast this week and we’re thrilled to meet (or reconnect) with AIRsters and beyond in attendance at the conference. Visit our table to meet our team and get to know our New Voices and AIR ambassadors who will also be working the table. Here’s your guide to the AIR experience at Third Coast:

Don’t snooze on our Bitchin’ Pitchin’ Panel.
Happening Friday at 2pm and Saturday at 11am, the pitch panel will bring brave storytellers into an arena of editors to pitch their stories—LIVE! Make sure to swing by to catch one of the sessions—or both.

It’s always a good time for a mingle.
Join us Friday evening, November 1, 5pm–7pm at Motor Row Brewing—just twelve minutes away from the conference hotel. Enjoy some light bites while you catch up or connect with audio pals. RSVP here!

We’ll have a raffle going on along with discounts to membership.
Thanks to our lovely industry partners, we’ll have a slew of prizes including tools from iZotope, Hindenburg, and Epidemic Sound. Swing by the table to submit your name and grab a discount code for 20% off AIR membership.

Contribute to the membership fund for some snazzy AIR swag.
New this year, we’re asking AIRsters to pay it forward through small donations that will go toward membership costs for those who would benefit from an AIR membership but can’t afford it. In exchange, we’ll have (freshly designed!) t-shirts, tote bags, metal straws, and more.

Follow along online.
Last but not least, stay connected to our Third Coast goings-on by following us on social media:
Twitter: @AIRmedia | Instagram: @AIRcurator | Facebook: Association of Independents in Radio.

Looking forward to seeing you all there!

The post We’re going to be at Third Coast this week! Find our table for swag and conversation. appeared first on AIR.

Introducing: AIR’s 2019 Claiming the Rad in Radio Fellows

October 28, 2019—In an effort to bring more sustainable support to women-identifying producers of color in media, AIR has paired five mentors with mentees through our inaugural Claiming the Rad in Radio Fellowship. We’re excited to announce the pairs today. Over the next five months, fellows will have time with their mentors to navigate career transitions, assertiveness on the job, dealing with race and gender bias, salary/pay negotiations and finding/creating community for self care. The program will also be accompanied by a series of exclusive webinars specifically designed for the fellowship cohort and their mentorship discussion topics. Meet our fellows below!

Amarachi Anakaraonye and Kim Bui

Amarachi Anakaraonye is a freelance multimedia producer with a keen eye and ear for calling out inequities, and calling in objective subjectivity. She has spent the past decade sojourning the United States, Europe, and Sub-Saharan Africa utilizing her innate empathic nature and cultural sensitivity to explore how Black women experience and transcend trauma within and beyond such institutions as health, education, and media. In 2017, she created her podcast, The Fragmented Whole, a bi-monthly podcast series that simultaneously dismantles and reframes narratives of trauma, displacement, and healing among women of color, particularly Black women. She has guest hosted and produced such podcasts as The Measure of Everyday Life (WNCU 90.7 FM/RTI International) and The Nonprofit Experience (NC State University’s Philanthropy Journal). Amarachi is a proud member of Alpha Kappa Alpha Sorority, Inc. and when she’s not producing new and existing projects, she enjoys cycling, yoga, and dancing. To learn more about Amarachi’s present and future pursuits, visit www.amarachia.com.

Fellowship statement: “My present and future aim is to establish my own multimedia production and marketing company. I have a strong desire to work with organizations prioritizing health equity and media literacy through technological innovation. The ‘Claiming the Rad in Radio’ Fellowship will provide me with critical insight on industry standards to reference when: 1. Educating clients about the significance of multimedia arts in marketing their services in an educational and/or entertaining manner, and 2. Partnering with clients that already realize the significance of using diverse multimedia outlets in marketing, but need guidance in planning, executing, and/or evaluating such assets.”

Gabrielle Berbey and Dmae Roberts

Gabrielle Berbey is a documentary filmmaker and multi-media journalist from San Francisco, based in New York. Interested in exploring how stories can be adapted throughout different forms to have the most impact, Gabrielle’s experience spans across audio and broadcast documentaries. Upon graduating from Bard College with a B.A. in Film/TV and Human Rights, she worked on the producing team for the FRONTLINE PBS award-winning investigative podcast, The FRONTLINE Dispatch. She currently works on the edit team for a PBS broadcast biographical series about Muhammad Ali, directed by Ken Burns. 

Fellowship statement: “Gabrielle’s independent documentary work ranges from radio pieces to short documentaries. With a specific focus on illuminating stories in the Filipino diaspora, she produced a documentary about the last remaining Filipino grocery store in upstate New York, and her senior thesis documentary tells the story of a Filipino-American community grappling with political unrest in the homeland. Throughout this fellowship, she is excited to develop and produce a historical audio documentary about the 1941 attack on Manila Bay, and how the Philippines’ legacy as a U.S. territory has been largely overlooked in American history.”

Elizabeth Estrada and Juleyka Lantigua Williams

Elizabeth Estrada is a Cuban American multimedia producer dedicated to using audio to explore and amplify stories about Latinx communities living in America. She is currently the engagement editor at WHYY’s PlanPhilly where she connects audiences with local news. Elizabeth has worked at the Greater Philadelphia Cultural Alliance, Firelight Media, New York Women in Film & Television and interned at The Pulse. She is a producer and board member at PhillyCAM, Philadelphia’s public access media station, where she was the recipient of the Innovation Award for Radio in 2018.

Elizabeth is a graduate of Ithaca College and The New School University. Originally from Queens, New York, she now lives in South Philadelphia with her fiancé and dog. You can follow her on Twitter @theElizabethEst.

Fellowship statement: “Through this rad fellowship, I hope to cultivate a long-lasting community with other women of color audio producers. I also intend to conquer my fear and finally produce a pilot of a podcast that I’ve been wanting to create for a long time that centers diverse Latinx changemakers.”

Janae Pierre and Alyce Myatt

Janae Pierre is a general assignment reporter and local host of NPR’s All Things Considered at WBHM in Birmingham, Alabama. A native of New Orleans, Pierre has worked and volunteered with several different media organizations, notably NPR affiliate WWNO, the New Orleans Tribune and WBOK 1230AM. In her spare time, Pierre enjoys listening to old vinyl records (she loves that scratchy sound). Some of her favorite artists include Al Green, Gil Scott Heron and Dinah Washington. In early 2019, Pierre was recognized as the “Best Large Market Radio Reporter” by the Alabama Broadcasters Association. She was also listed on Radio Ink’s 2017 “Future African American Leaders in Radio.” 

Fellowship statement: “I’m thrilled to be a part of this mentorship program and I hope to literally ‘claim the rad in radio’ with the help of my mentor Alyce Myatt. With her advisement, I look forward to learning different ways to advance my career and ideas to merge my love for reporting and philanthropy in Birmingham and my hometown, New Orleans.”

Rhana Natour and Carmel Delshad

Rhana Natour is a national reporter and producer with PBS NewsHour in Washington D.C. where she covers a wide array of subjects including stories on technology, race, gender and gun violence. 

Before coming to the NewsHour, Rhana was an associate producer at ABC News in New York. In 2014, she earned an Emmy nomination for her work on the Nightline special “Crisis in Syria.”

Rhana is also passionate about quality long-form storytelling. She was part of the producing team for the feature-length documentary Speed Sisters that follows the first female race car driving team in the Middle East. Rhana is a graduate of the University of Michigan and was a Fulbright scholar to the United Arab Emirates.

The post Introducing: AIR’s 2019 Claiming the Rad in Radio Fellows appeared first on AIR.

Meet this Member! | Elliott Peltzman

Editor’s note: “Meet this Member!” is an interview series that spotlights AIRsters doing outstanding work in the audio and media industry. This week we feature composer and audio producer Elliott Peltzman.

1) Where are you based and what do you do?
I am a San Francisco-based composer and audio producer with a particular interest in scoring and sound designing podcasts – the weirder the better!

2) A piece of yours (in any medium!) that you would like to share? (And why?)
I’m really proud of this one piece of mine called “Couldn’t Find A Way”. It’s part true-crime, part smoky tiki lounge, part cumbia?? Not really sure what I’d call it. I wrote it outside of any project – just for myself – and to me it really showcases my ability to play with tension and release within just a few notes, only two chords. I’m always most proud of one of my pieces when it incites that amazing feeling where you can’t decide if it’s more major or minor and leaves you wanting to dive further in to find out. Also I absolutely love the tone of the guitar I went with 🙂

3) What draws you to storytelling?
The Fiction. There was a certain point in life when I started realizing that the best storytellers are not the most accurate storytellers. And I want it to stay that way. The re-coloring, the erasing and augmenting of storylines; it’s not just juicier and more interesting – it’s the stuff that art is made of! What I love most about composing is creating pieces that incite complex emotions that have the power to guide and influence the narrative when played alongside the spoken word.

4) What’s playing on your radio/audio streaming service right now?
This group Khruangbin have really stolen my heart. They’re really moving mountains for (mostly) instrumental music right now.

5) What’s the most underrated tool (technical or not!) you use in your creative practice?
The standard Marimba sound in Logic is where almost all of my compositions begin and where a lot of them stay! I like anything that helps me move past tech and spend most of my time in the compositional headspace.

6) What is something you want to see more of in the industry?
This is such a self-serving answer, but I’d honestly love to see more podcasts feature the same composer(s) for the entirety of the show, or at least the entirety of each season. I believe most shows would really benefit from the cohesion, and composers like myself would create much finer work knowing they can build repeated motifs and unique instrumentation per episode or per character. Delivering one-off themes and/or ambiguous pieces is fun, but I wish there were more positions that would allow me to go as deep as I’d like to!

7) Who’s your radio/audio idol and why?
On the same note as the previous question – my favorite composer to tune in to each week is Sean Real who composes original music for every episode of 99% Invisible. Sean has my dream job and does a very, very good job at it. Always subtle, but never innocuous – quirky as all get out. And most importantly, always very well-suited to the narrative.

8) What’s the best method for people to connect with you (including social media)?
I share new works to my website pretty often and prefer to be reached out to through email (epeltzman@gmail.com). Please don’t hesitate to reach out – that’s why we’re members of AIR!

The post Meet this Member! | Elliott Peltzman appeared first on AIR.

Building a Database From Scratch: Behind the Scenes With Documenting Hate Partners

For nearly three years, ProPublica’s Documenting Hate project has given newsrooms around the country access to a database of personal reports sent to us by readers about hate crimes and bias incidents. We’ve brought aboard more than 180 newsrooms, and some have followed up on these reports — verifying them, uncovering patterns and telling the stories of victims and witnesses. Some partners have done significant data journalism projects of their own to augment what they found in the shared dataset.

The latest such project comes from News12 in Westchester County, New York. Reporter Tara Rosenblum joined the Documenting Hate project after a spate of hate incidents in her coverage area. Last month, News12 aired her five-part series about hate crimes in the Hudson Valley and a half-hour special covering hate in the tri-state area. The station also published a public database of hate incidents going back a decade. It was the result of two years of work.

Rosenblum and her team built the database by requesting records from every police department in their coverage area, following up on tips from Documenting Hate and collecting clips about hate incidents the news network was already reporting on. Getting records was a laborious process, particularly from small agencies, some of which accept requests only by fax. “It was definitely torturous, but a labor of love project,” Rosenblum said.


She also expanded the scope of the project beyond her local newsroom and brought in News12 reporters from the network’s bureaus in Connecticut, New Jersey, Long Island, the Bronx and Brooklyn. The local newsrooms used Rosenblum’s investigation as their model, examining hate incidents since 2016. In all, six News12 reporters in three states documented around 2,300 hate incidents.

“We knew that this was one of those cases, the more the merrier,” Rosenblum said of collaborating with other newsrooms. “Why not flex our investigative muscle and get everyone working on this at the same time so we can really get a regional look?”

After the series aired, Rosenblum heard from a number of lawmakers — some who said they’d experienced discrimination — as well as students and schools. The special also aired on national and international networks, garnering responses from other states and countries. “A lot of what I heard is people being really grateful that we were shining the light on this,” she said.

Catherine Rentz, a reporter at The Baltimore Sun, wanted to investigate hate incidents in her area after learning how the Maryland State Police tracks hate crimes. (Since this writing, Rentz left the Sun to pursue freelance projects.) Maryland has been collecting hate crime data since the 1980s, so there was much to explore, Rentz said. Her reporting was also sparked by the May 2017 homicide of Richard Collins III, a second lieutenant in the Army who was days away from his college graduation. He was stabbed to death at the University of Maryland in what may have been a racially motivated attack; the suspect will be tried in December.

Rentz began her hate crimes investigation the summer after the killing, and she worked on it on and off for a year, she said. She sent public records requests to the Maryland State Police, city police departments and the state judiciary, and she built a public database of hate crimes and bias incidents reported to police in Maryland from 2016-17, including narratives of the incidents. She also worked with Documenting Hate to look into Maryland-based reports in our database.

To collect the data, she set up a spreadsheet and entered each case by hand, since the state police records were in PDF files and she wasn’t able to easily extract data from them. She had a number of other challenges. For instance, many agencies redacted victims’ names, so it was a challenge to use the data to find potential sources to interview. And when she did find names, some victims didn’t want to talk about what happened to them.

“I completely understood that, and I didn’t want to do any more damage than had already been done,” she said.

In the course of her investigation, Rentz discovered that there were agencies that did collect reports of potential bias crimes but weren’t reporting them to the state police, so the data wasn’t being counted. She also looked at prosecutions; in 2017, there were nearly 400 bias crimes reported to police, but only three hate crime convictions.

Following the Sun’s hate crime reporting, the state police held several trainings with local police and reminded agencies that they’re required by law to turn in their bias crime reports on specific deadlines. In April, the governor signed three new bills into law on hate crimes.

Last year, Reveal investigated hate incidents that involved the invocation of President Donald Trump’s name or policies. They published a longform story and produced a radio show. Reporter Will Carless built a database using reports from the Documenting Hate project and news clips. He worked his way through a color-coded spreadsheet of hundreds of entries to verify reports and find sources to highlight in the story. After the investigation published, Carless says he received emails from readers who said similar incidents had happened to them; others thanked him for connecting the dots and gathering data on previously disparate stories. He also said a few academics told him they were going to include the story in their courses that involve hate speech.

And this year, HuffPost created a database for a forthcoming story examining hate incidents in which the perpetrator used the phrase “go back to your country” or “go back to” a specific country. Their database combined tips submitted to the Documenting Hate project with news clips culled from the LexisNexis database, social media reports, and police reports gathered by ProPublica. The investigation is slated to publish this fall.

“The thing I want to stress for this project is that this type of hatred or bigotry or white nationalism is kind of ubiquitous and foundational to American society,” said HuffPost reporter Christopher Mathias. “It’s very much a common thing that people who aren’t white in this country experience on a regular basis. There’s no better way to show that than to create a database of many, many incidents like this across the country.”

Like News12, HuffPost opened the project up to its newsroom colleagues, bringing in reporters from HuffPost bureaus in the United Kingdom and Canada. After HuffPost published a questionnaire to collect more stories from readers, its U.K. and Canadian colleagues set up their own crowdsourcing forms to collect stories. (Documenting Hate is a U.S.-based project, and our form is limited to the U.S.) Their plan is to publish stories using the tips they collect when HuffPost’s U.S. newsroom publishes its investigation.

Want to create your own database of hate crimes? Here are some tips about how to get started.

1. Get hate crimes data from your local law enforcement agency.

We have a searchable app where you can see the data we received directly from police departments, as well as the numbers that agencies sent the FBI. (The federal data is deeply flawed, as we’ve found in our reporting.) We have partial 2017 data for some agencies.

If we don’t have data from your police department, you can replicate our records request.

Also, some states, like Maryland and California, release statewide hate crime data reports, so find out if top-line data is publicly available.

Some things to keep in mind:

More than half of hate crime victims don’t report to the police at all. And the police don’t always do a good job handling these crimes.

That’s because police officers don’t always receive adequate training about how to investigate or track hate crimes. Still, training isn’t a guarantee to ensure these crimes are handled properly. Some police mismark hate crimes or don’t know how to fill out forms to properly track these crimes. Some victims believe officers don’t take them seriously; in some cases, victims say police even refuse to fill out a report.

2. Put together a list of known incidents using media reports and crimes tracked by nonprofit organizations.

It’s a good idea to search for clips of suspected hate crimes during the time period in question to compare them to police data. You can use tools like Google News, LexisNexis, Newspapers.com and others.

You can also consult organizations that track incidents and add them to your list of known crimes. They can give you a sense of how police respond to hate crimes against these groups. Here are some examples.

  • CAIR (Muslim community)
  • ADL (Jewish community)
  • SAALT (South Asian community)
  • AAI (Arab community)
  • AVP (LGBTQ community)
  • MALDEF (Latino community)
  • NAACP (black community)
  • NCTE (trans community)
  • HRC (LGBTQ community)
3. Review the police records carefully, and request incident reports to get the full picture.

Once you receive data from the police department, compare it with your list of known hate crimes from media and nonprofit reports. That will be especially useful if the police claim to have no hate crimes in the time period. Ask about any discrepancies.

You can also check to see if the department’s data matches what it sent the FBI. If the department’s numbers don’t match what they sent the feds, ask why.
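
If you have both sets of numbers in spreadsheets, even a small script can surface the discrepancies worth asking about. The sketch below is not part of ProPublica's project; the file names and the assumed "agency,year,count" CSV layout are placeholders to adapt to whatever the agencies and the FBI actually provide.

```javascript
// compare-counts.js: flag agencies whose local hate crime counts don't match
// what they reported to the FBI. Assumes two CSVs with an "agency,year,count"
// header row; adjust the parsing to your real data.
const fs = require("fs");

function loadCounts(path) {
  const rows = fs.readFileSync(path, "utf8").trim().split("\n").slice(1);
  const counts = new Map();
  for (const row of rows) {
    const [agency, year, count] = row.split(",").map((s) => s.trim());
    counts.set(`${agency}|${year}`, Number(count));
  }
  return counts;
}

const local = loadCounts("local_police_counts.csv"); // from your records requests
const fbi = loadCounts("fbi_reported_counts.csv");   // from FBI hate crime data

for (const [key, localCount] of local) {
  const fbiCount = fbi.has(key) ? fbi.get(key) : 0; // missing = apparently never reported
  if (localCount !== fbiCount) {
    const [agency, year] = key.split("|");
    console.log(`${agency} (${year}): local=${localCount}, FBI=${fbiCount} -- ask why`);
  }
}
```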

The best way to get a deeper look at the data is to get narratives. Ask for a police report or talk to the public information officer to get the narrative from the incident report.

Then review the data and incident reports for potential mismarked crimes. Take a look at the types of bias listed for each crime. We found that reports of anti-heterosexual bias crimes were almost always mismarked, either as different types of bias crimes or crimes that weren’t hate crimes at all.

Also check the quantity of each bias type. Is there a large number of a specific bias crime that may not fit with the area’s demographics? We’ve encountered cases in which police marked incidents as having anti-Native American bias in their forms or computer systems because they thought they were selecting “none” or “not applicable.”

Next, check the crime types. We’ve also seen that certain crime types are unlikely to involve a bias motivation but are sometimes erroneously marked as hate crimes; examples include drug charges, suicide, drug overdose and hospice death. Request incident reports, and follow up with police to ask about cases that don’t appear to be bias crimes. Police have often told us that mismarking happens as a result of human error, and that officials will sometimes rectify the errors found.

read more...

Rate Guide: Engineering and Composition

Though radio transmitters have been broadcasting for more than a century, the emerging  podcast industry is disrupting traditional models of audio production. Experienced audio engineers, recordists, sound designers, and composers all bring vital skills that can make a big difference in the sound and quality of any show, however the final audio is distributed.

In addition, experienced professionals who bring skills honed on other productions can provide an unbiased editorial ear, and are often able to improve a project long before production gets underway.

A Brief Glossary

In many cases the roles described here overlap, and any one show’s needs are going to vary. Most independent producers do their own recording, and they often expect to do their own initial dialogue edits. Some sound designers compose original scores. Some do all the mixing and scoring for a show. Some mix engineers are asked to make editorial decisions about how to cut tape.

No glossary or guide can replace a clear and direct conversation about expectations. Whether you’re hiring a freelancer or taking on a new gig, make sure everyone is on the same page about what you need.

audio engineer is a broad term for someone working in any of a variety of engineering-related roles.

audio mixer, mix engineer, mastering engineer are all titles for someone who mixes a show or segment.  NPR Training defines mixing as “the process of creating balance, consistency and clarity with differing audio sources.”  An audio mixer or mix engineer brings a clear understanding of audio concepts like phase and gain structure and core tools including equalization, compression, loudness, and restoration software to the mixing process. Mixing is typically the final step of producing an audio story and results in a publishable audio file. “Mastering” actually describes the final step in creating a music album, which follows the mixing stage. The term is not technically applicable to audio storytelling, but it is sometimes used to describe mixing work.

Note: some shops use “mix” to refer to the process of cutting and arranging audio — a clear conversation about expectations will help avoid any misunderstandings.

composer describes a musician who creates original music. A show might commission a composer to create original music designed especially for that show, or they might commission existing music from a composer.

dialogue editor is a term borrowed from the film world for someone who cuts and cleans dialogue. In audio storytelling, this responsibility more often falls to a producer who is charged with cutting the story.

scoring describes the work of creating or identifying, selecting and licensing existing music from a music library or other source to suit the needs of a segment or story. In our research we found composers who were firm that scoring a segment always means composing original music, and other folks who were just as firm that in radio and podcasting work, scoring always means finding music from an existing source. As ever, no glossary is a substitute for a clear conversation about expectations.

sound designer, sound design is another title borrowed from film. Sound design traditionally refers to the practice of cutting and layering sound effects and ambient audio using natural or synthesized sounds.  In audio storytelling it might refer to someone who provides music and pacing decisions, or to someone who customizes a palette of music and sonic materials that form the defining sound of a show. Transom’s series on sound design is a great introduction to the craft. A sound designer might compose original scores themselves, find composers to create original sounds and music, or use sound libraries to identify and license existing music and effects. The bounds of a sound designer’s responsibility can vary a lot: some sound designers do all the mixing, engineering and sound design for a single show.

sound recordist, field recordist, production sound recordist are all terms that describe an audio engineer who records “in the field” outside of a studio. Someone using these titles should be competent with remote recording equipment and able to set up equipment that will optimize recording quality given the constraints of the particular scene.

studio engineer describes an engineer who operates a live broadcast or recording studio.

Engineering, Recording, and Mixing Rates

In our research the rates for mixing, recording and engineering roles varied with experience and sometimes by the complexity of the job but rarely varied by the role. We focused our research on independent contractors, though it is not uncommon for a specialist to be on payroll for a short term appointment. In general, independent contractors should expect to charge at least 30% more than peers doing similar work on payroll.

Rates: Most independent engineers we interviewed cited hourly rates in the $75-125 range, though some experienced professionals charge $150 to $200 or more. Everyone we interviewed quoted day rates commensurate with that range.

Comparable rates for someone doing the same work on payroll, with an employer covering payroll taxes, workers’ compensation, and unemployment insurance, would range from $58-96/hour.
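That payroll range follows directly from the 30% rule of thumb above. A minimal sketch of the arithmetic in Python, with an illustrative function name:

```python
# Convert a freelance hourly rate to a rough payroll equivalent, using the
# "independent contractors charge at least 30% more" rule of thumb above.
def payroll_equivalent(freelance_rate, markup=0.30):
    return freelance_rate / (1 + markup)

for rate in (75, 125):
    print(rate, "->", round(payroll_equivalent(rate)))  # 75 -> 58, 125 -> 96
```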

Some engineers reported including a fixed number of revisions in their contracts, even for work that will ultimately be billed by the hour, and charging a higher rate for additional revisions to offset the inefficiency of piecemeal requests.

Tape Syncs are a special case — the work involved in preparation, set up, and follow up on a tape sync is relatively consistent and described in our tape sync rate guide.

Additional fees, consistent with time-and-a-half overtime standards, are typical for unusually long days or condensed schedules.

We also found many folks at all levels working in short-term staff positions. Staff rates, which include significant additional benefits (among them workers’ compensation, unemployment insurance, and paid sick leave), start at $30-35/hour for staffers working under the close supervision of a more experienced engineer. Staff engineers, whatever their hourly rate, are entitled by law to overtime pay after working 40 hours in a single week.

A few notes on best practices:

Many people we spoke with noted that newcomers to the field often wait until the last minute to bring a mix engineer onto the team. Though many of the roles described here are technically “post-production,” we’ve avoided that term intentionally. A good mixer, engineer or sound designer can make a big difference in the final quality of a show. Bringing a “post production” team in early can head off problems with recording quality or file organization that will be labor intensive to fix later.

Experienced engineers working across these fields reported including a fixed number of revisions in their contract, even for work that will ultimately be billed by the hour. Revisions, tweaks, corrections and adjustments are part of the work, but when those trickle in piecemeal, the freelancer is stuck managing a lot of inefficient communication. Charging a higher rate for revisions after the first two passes can help encourage efficiency and ensure that everyone is able to do their best work.

Composition Rates

Establishing “standard” rates for original music composition is particularly challenging because some composers can command substantially more for their work than others. Session musicians and studio time cost money, but some music can be produced “in-the-box” using only software. Most composers also take the expected usage of the work into account when setting their rates. With those criteria taken into account, a composer will generally propose a flat rate that includes a fixed number of revisions (two or three is typical).

Note that our sample size for composers was both small and diverse so these rates for composition represent snapshots rather than a complete picture of the industry. We’re still including them here because we regularly get questions about budgeting for music.

Composing theme music, or an intro and outro, for a weekly public radio show with a national audience might run to $15,000. A show with a smaller budget or a smaller market might expect to pay $2,000-$5,000 for an original score.

Some shows and sound designers also turn to composers for help scoring a single episode. Though we didn’t find consensus on what constitutes a small or large audience, licensing might look like this, where figures reflect a small, medium or large audience:

Licensing a pre-existing track for use on a single episode: $50 |  $100 | $200

Non-exclusive use of a custom track: $300 | $500 | $750

Exclusive use of a custom produced track or score will vary more widely.

In almost all cases, a composer retains the copyright to the work, and use beyond the originally intended medium may need to be renegotiated. Many experienced composers and sound designers will push back on a standard contract that asks for exclusive worldwide use in any medium, or will charge more for that level of licensing.

Methodology

We interviewed experienced radio shows and podcast production houses about what they expect to pay. We interviewed experienced sound designers, composers, and engineers about what they charge. We also talked to professionals working in film and music to get a sense of where rates overlap, and we reviewed existing research, including Blue Collar Post Collective’s survey of post-production rates in film and television.

For this guide, we relied heavily on interviews to establish the roles and categories. Rob Byers, Michael Raphael, and Jeremy Bloom helped refine, define and clarify the terms we’ve used here and were absolutely indispensable to the creation of this guide.

AIR’s work on rates

AIR is actively developing a series of guides designed to help independent producers, editors, and engineers set fair and reasonable rates, and to help everyone create accurate and realistic budgets. We want to hear from you.

This guide was posted in October 2019 and has not been updated. Our hope as an organization is that AIR can keep these rate guides up to date but if you’re reading this and it is more than a year old, you should adjust the recommended rate to reflect changes in the cost of work and living.

The post Rate Guide: Engineering and Composition appeared first on AIR.

Do’s and Don’ts of WordPress Security


With a great WordPress site comes great responsibility. WordPress offers journalists a distinguished platform to publish and distribute their content, but keeping your site safe and secure can seem like an overwhelming and daunting task. Luckily, keeping your WordPress site in tip-top shape isn’t as difficult as it seems. We’ve put together a list of a few basic do’s and don’ts to follow in order to keep your site running smoothly and securely, along with the basics of WordPress vulnerabilities and how to understand why some WordPress websites end up getting exploited.

Common WordPress Vulnerabilities

Before we discuss what you should and should not do with your WordPress site, it will be helpful for you to understand the two main ways that WordPress sites can end up becoming vulnerable to attackers. 

  1. Outdated Plugins
    The most common way for attackers to exploit WordPress sites is through outdated plugins, which account for nearly 60% of all WordPress breaches. Outdated plugins can leave unintended doors open for unwelcome visitors through insecure code, improperly sanitized text fields, or a myriad of other bad practices. Keep your plugins updated.
  2. User Accounts
    Another common way WordPress sites are exploited is through user accounts. Keeping track of who has access to user accounts on your website, and what permission levels each account has, is a great way to prevent unwanted users from coming in and making unwelcome changes to your site. 

Basic Do’s and Don’ts of WordPress Security

Now that we’ve gone over what some of the most commonly exploited WordPress vulnerabilities look like, we can explore a basic list of some do’s and don’ts when it comes to keeping up with your WordPress site. 

Do:

You can find available plugin and WordPress updates by logging into your WordPress admin panel and navigating to plugins -> installed plugins -> updates available.
  • Keep WordPress, plugins and themes up to date
    • Keeping your plugins and themes up to date will not only allow you to use the newest features and tools added, but it will also ensure that any bugs and vulnerabilities in the previous versions won’t be running on your WordPress site.
  • Remove unused users and plugins
    • Removing unused user accounts and plugins from your site will not only help keep your website running smoothly, but it will also limit the number of things that need to be maintained on your WordPress site and prevent more ways for unauthorized users and vulnerabilities to gain access to your site.
  • Set up a backup solution
    • If the unthinkable happens and your site is the unfortunate target of a successful attack, having a backup solution in place will save you a lot of time and headaches. Having a backup solution in place can usually enable you to have your site back up and functioning with the click of a couple of buttons and in a matter of minutes. Taking a few hours to get a solid backup solution in place is a lot better than losing your entire site and having to rebuild it from the ground up if it is compromised.
  • Install an SSL certificate
    • Installing an SSL certificate on your website is a pretty painless process, and it can usually be done for free. Adding an SSL certificate adds an extra layer of security between your WordPress site and its visitors by securing the connection between the two. Adding an SSL certificate to your website is also a great way to instill trust in readers and let them know that you run a legitimate and safe website. Along with the added trust factor, your site will also see a boost in search engine ranking since Google’s algorithms prefer HTTPS-enabled websites.
  • Find a stable host who specializes in WordPress
    • Finding a stable and trustworthy web host that specializes in hosting WordPress sites, such as Flywheel or WPEngine, is one of the most important steps you can take toward ensuring the security of your WordPress site. A good web host will work with you to help maintain your WordPress site and even help improve your site speed and performance.

Don’t:

Changing the default WordPress admin username to something more complex is an easy and simple way to deter some would-be attackers.
  • Don't reuse the same password for multiple accounts
    • This is a basic internet security rule rather than anything WordPress-specific: never use the same password for multiple internet accounts. Instead, use an easy-to-use password manager to keep your passwords safe and secure. Make sure your WordPress password is a secure mix of capital letters, symbols, and numbers; a strong, unique password is a simple preventative step against a compromised account. (One way to generate such a password is sketched after this list.)
  • Don't use the default `admin` username
    • Unwanted visitors who try and gain access to WordPress accounts almost always try using the default admin username on the first try. Consider changing the admin username to something different as a simple preventative step.
  • Don't install questionable themes or plugins
    • The beauty of WordPress is that it gives you the freedom to install thousands of free themes and plugins, most of them legitimate. However, it’s easy to get caught up in the endless selection of free plugins and themes, and some of them are made with malevolent intent. Always read reviews and download plugins and themes from reliable sources, like the WordPress plugin and theme directories.
  • Don't give away admin access
    • Only give out admin access to users you fully trust. Admin accounts come with lots of responsibility. Instead of granting full admin privileges to users, try giving them specific privileges to only certain tools and areas they need access to. When the user no longer needs that access, revoke their permissions.

Security and Speed Go Hand-In-Hand

An additional benefit of following these steps is that most of them will also help speed up your WordPress site. For example, reducing the number of plugins you have will help control what we call “plugin bloat”: having too many plugins may result in slow page load times because all of their assets and functions have to load on the page at once.

Another area to keep an eye on if you’re looking to increase your site speed is your theme. Lots of themes are built with a lot of unnecessary tools and functions which may be useful sometimes, but most of the time just end up increasing page load times. Verify that the theme you’re installing has been thoroughly tested to see the effects it’ll have on your page speed.

What to Do if Your Site is Compromised

If your site is the unfortunate victim of a successful attack, knowing what to do will save you from a lot of headaches. First off, don’t panic! Panicking will only make the situation worse, and you will need a level head to successfully recover your website. The first step you’ll want to take is finding out what exactly happened and locating the vulnerability that was exploited. Ask yourself these questions:

  • Are you able to log in to your admin panel? 
  • Is your website being redirected to another website? 
  • Is your website not responding at all?

Once you figure out what exactly happened, you can continue to recover your website. At this point, you should contact your hosting provider. Your host has dealt with this before and will know how to help with these next steps:

Having an automated backup solution in place can really come in handy in the unfortunate event of a successful attack. This image shows Flywheel's backups panel and several nightly backups.
  1. Restore a backup of your site
    • Hopefully, you backed up your site before this attack happened (you should be backing up your site every day!). If you have, restore your website from the most recent backup. Unfortunately, you will lose any content updates you’ve made between the time of that backup and now, but that is a small price to pay to get your site back up and running.
  2.  Fix the vulnerability to prevent future attacks
    • After you and/or your host have restored your site from a previous backup, it’s important to remember that it’s still vulnerable to attack. Now is the time to fix whatever vulnerability was exploited, whether it’s an outdated plugin or a compromised user account, so that this can’t happen again.
  3. Change your passwords
    • Once you have your site restored from a previous backup, make sure to change all of the passwords relating to your WordPress site, including your WordPress admin account, MySQL database, SFTP users, and all others that allow access to your website. WordPress.org has also put together a useful FAQ guide on what to do if your site has been hacked and how to get it back up and running.

In Conclusion

WordPress is a great tool for publishers when used properly and maintained often. However, if you ignore maintaining your WordPress themes and plugins, you could potentially welcome unwanted threats to your site. Keeping your WordPress site secure seems daunting at first, but it’s not that big of a hurdle to overcome. Now that we’ve explored the basics of how the majority of WordPress sites are exploited, you can keep an eye out and know what to look for and what best practices to use on your website.

Questions? Get in touch.

Have a question for our team or need help with WordPress design and/or development? Check out INN Labs' full services here, join us for one of our weekly Office Hours, or get in touch!

Making Collaborative Data Projects Easier: Our New Tool, Collaborate, Is Here

On Wednesday, we’re launching a beta test of a new software tool. It’s called Collaborate, and it makes it possible for multiple newsrooms to work together on data projects.

Collaborations are a major part of ProPublica’s approach to journalism, and in the past few years we’ve run several large-scale collaborative projects, including Electionland and Documenting Hate. Along the way, we’ve created software to manage and share the large pools of data used by our hundreds of newsroom partners. As part of a Google News Initiative grant this year, we’ve beefed up that software and made it open source so that anybody can use it.

Collaborate allows newsrooms to work together around any large shared dataset, especially crowdsourced data. In addition to CSV files and spreadsheets, Collaborate supports live connections to Google Sheets and Forms as well as Screendoor, meaning that updates made to your project in those external data sources will be reflected in Collaborate, too. For example, if you’re collecting tips through Google Forms, any new incoming tips will appear in Collaborate as they come in through your form.
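For the technically curious, a live connection like that boils down to re-fetching the external source on a schedule. This is not Collaborate’s own code, just a minimal Python sketch of reading a Google Sheet that has been shared for viewing, by exporting it as CSV; the spreadsheet ID is a placeholder.

```python
# Minimal sketch (not Collaborate's implementation): fetch a shared Google
# Sheet as CSV so new form responses appear on the next fetch.
import pandas as pd

SHEET_ID = "your-spreadsheet-id-here"  # placeholder
url = f"https://docs.google.com/spreadsheets/d/{SHEET_ID}/export?format=csv"

tips = pd.read_csv(url)   # one row per tip or form response
print(tips.tail())        # rows submitted since the last fetch show up here
```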

Once you’ve added the data to Collaborate, users can:

  • Create users and restrict access to specific projects;
  • Assign “leads” to other reporters or newsrooms;
  • Track progress and keep notes on each data point;
  • Create a contact log with tipsters;
  • Assign labels to individual data points;
  • Redact names;
  • Sort, filter and export the data.

Collaborate is free and open source. We’ve designed it to be easy to set up for most people, even those without a tech background. That said, the project is in beta, and we’re continuing to resolve bugs.

If you are tech savvy, you can find the code for Collaborate on GitHub, and you’re welcome to fork the code to make your own changes. (We also invite users to submit bugs on GitHub.)

This new software is part of our efforts to make it easier for newsrooms to work together; last month, we published a guide to data collaborations, which shares our experiences and best practices we’ve learned through working on some of the largest collaborations in news.

Starting this month, we’ll provide virtual trainings about how to use Collaborate and how to plan and launch crowd-powered projects around shared datasets. We hope newsrooms will find the tool useful, and we welcome your feedback.

Get started here.

read more...

NewsMatch Pop Up Best Practices

There have been some changes since our last blog post around Pop Up best practices for NewsMatch and other special campaigns, so we're releasing an updated guide.

Here are some general recommendations and best practices for using popups as part of NewsMatch, year-round campaigns, or special campaigns on your site. 

Installing the plugin

We recommend using the Popup Maker plugin for setting up donation and newsletter signup popups on your site.

Instructions for installing the plugin and creating a popup.

Recommended Pop Up Settings

Your popup should:

  • Be size “Large” or smaller from Popup Maker’s settings
  • Appear at the center of the bottom of the reader’s screen
  • Appear by sliding up from the bottom of the screen, over 350 milliseconds
  • Have an obvious “Close” button
  • Allow readers to interact with the rest of the page (do not use a full-page overlay)
  • Automatically open after 25 seconds (or more) on the page, because immediate popup appearances can be jarring. It can also be set to open after scrolling down a percentage of the page.
  • Be configured to not appear again for 2 weeks or more once dismissed by a reader
  • Be configured to not show on your donation page

You'll need to configure which pages the popup appears on, using the built-in conditionals feature. For disabling the popup on certain pages or in certain cases, read on in this blog post, or check out Popup Maker's paid extensions.

You'll also probably want to review the Popup Maker themes available and modify them to suit your own site's appearances. Once you've modified or created a theme, edit your popup to make it use your theme.

In addition to using Popup Maker themes, you can style popups using your site's WordPress theme's CSS, Jetpack’s Custom CSS Editor, or any other tool that allows you to define custom styles on your site.

What goes in a popup?

NewsMatch will provide calls to action, images, and gifs to be used leading up to and during the campaign. 

Here are some examples: https://www.newsmatch.org/info/downloads

Non-NewsMatch popups should have an engaging, short call to action along with an eye-catching button.

Need help?

There is a ton of additional information on the WP Popup Maker support pages: https://wppopupmaker.com/support/

If you have questions, sign up for one of INN Labs’ NewsMatch technical support sessions or email the INN Labs team at support@inn.org.

Introducing our newest Largo redesign: Workday Minnesota

Workday Minnesota began publishing in 2000 with support from Minnesota’s labor community and was the first online labor news publication in the United States. Since then, Workday has won many awards and has grown to be a trusted source for news about workers, the economy, worker organizations, and Minnesota communities. It is a project of the Labor Education Service at the University of Minnesota.

This summer, INN Labs teamed up with Workday Minnesota’s editor, Filiberto Nolasco Gomez, and webmaster John See to migrate their outdated Drupal site to the Largo WordPress framework and redesign their brand.

Our goals for this project were to:

  • give Workday Minnesota a streamlined and modern look and feel
  • improve site performance for readers and usability on the back-end for editors
  • enhance the design and improve engagement for Workday’s long-form investigative pieces
  • empower the Workday team to easily manage and update their WordPress site after launch

Some of our design inspiration came from INN Members with bold, modern designs (such as The Marshall Project, The Intercept and Reveal News) and some from outside of our industry, like nowness.com. We wanted clean, bold headlines, a thoughtful type hierarchy, and a way for photos to take center stage. 

Here's what Filiberto had to say:

“We focused on what it would take to rebuild Workday to be responsive to our readers and enhance investigative reporting. The new website will allow us to display long-form and investigative journalism in a more attractive and readable interface. This version of Workday will also allow us to effectively use multimedia segments to make what can sometimes be dense material more approachable.”

The INN Labs team is excited for this new phase of Workday Minnesota and thankful for the opportunity to help bring it to life.

Out with the old, in with the new

Before and after the workdayminnesota.org redesign.

We created a custom homepage layout that showcases Workday’s latest content with a clean and modern look.

Benefits of this custom homepage are big visuals for larger screens and ease of navigation on smaller screens. Workday editors have room for both curated news from around the web (using the Link Roundups plugin) and their most recently published articles.

A sleek and modern article layout

A typical Workday Minnesota article, before and after.

Article pages continue the sleek, clean design approach. We left out ads and ineffective sidebars in order to prioritize long reads with custom-designed pull quotes and large, responsively embedded photos and videos. Behind the scenes, our Largo framework works with the new WordPress Gutenberg editor to add essential editing tools for media organizations.

Workday Minnesota's redesign is responsive to all devices.

But wait – there’s more!

We couldn’t stop with just a website redesign without also giving attention to the heart of the brand – the logo. The redesigned logo builds off of the modern, new typefaces for the website and its bold use of the Minnesota state outline (Filiberto’s idea!) is great for lasting brand recognition. In the process of creating the logo, we also incorporated a new tagline that succinctly expresses the mission of Workday Minnesota: “Holding the powerful accountable through the perspective of workers.” The new logo is now being used on the website and across Workday’s social media channels.

Workday Minnesota's new logos.

Questions? Get in touch.

Have a question for our team or need help with WordPress design and/or development? Check out INN Labs full services here, join us for one of our weekly Office Hours, or get in touch!

Working Together Better: Our Guide to Collaborative Data Journalism

Today we’re launching a guidebook on how newsrooms can collaborate around large datasets.

Since our founding 11 years ago, ProPublica has made collaboration one of the central aspects of its journalism. We partner with local and national outlets across the country in many different ways, including working together to report stories, sharing data and republishing our work. That’s because we understand that in working together, we can do more powerful journalism, reach wider audiences and have more impact.

Never miss the most important reporting from ProPublica’s newsroom. Subscribe to the Big Story newsletter.

In the last several years, we’ve taken on enormous collaborations, working with hundreds of journalists at a time. It started in 2016 with Electionland, a project to monitor voting problems in real time during the presidential election. That project brought together more than 1,000 journalists and students across the country. Then we launched Documenting Hate in 2017, a collaborative investigation that included more than 170 newsrooms reporting on hate crimes and bias incidents. We did Electionland again in 2018, which involved around 120 newsrooms.

In order to make each of these projects work, we developed software that allows hundreds of people to access and work with a shared pool of data. That information included datasets acquired via reporting as well as story tips sent to us by thousands of readers across the country. We’ve also developed hard-won expertise in how to manage these types of large-scale projects.

Thanks to a grant from the Google News Initiative, we’ve created the Collaborative Data Journalism Guide to collaborative data reporting, which we’re launching today. We’re also developing an open-source version of our software, which will be ready this fall (sign up here for updates).

Our guidebook covers:

  • Types of newsroom collaborations and how to start them
  • How a collaboration around crowdsourced data works
  • Questions to consider before starting a crowdsourced collaboration
  • Ways to collaborate around a shared dataset
  • How to set up and manage workflows in data collaborations

The guidebook represents the lessons we’ve learned over the years, but we know it isn’t the only way to do things, so we made the guidebook itself collaborative: We’ve made it easy for others to send us input and additions. Anybody with a GitHub account can send us ideas for changes or even add their own findings and experiences (and if you don’t have a GitHub account, you can do the same by contacting me via email).

We hope our guide will inspire journalists to try out collaborations, even if it’s just one or two partners.

Access the guidebook here.

read more...

Making Sense of Messy Data

I used to work as a sound mixer on film sets, noticing any hums and beeps that would make an actor’s performance useless after a long day’s work. I could take care of the noisiness in the moment, before it became an issue for postproduction editors.

Now as a data analyst, I only get to notice the distracting hums and beeps in the data afterward. I usually get no say in what questions are asked to generate the datasets I work with; answers to surveys or administrative forms are already complete.

To add to that challenge, when building a national dataset across several states, chances are there will be dissonance in how the data is collected from state to state, making it even more complicated to draw meaning from compiled datasets.

Get info about new and updated data from ProPublica.

The Associated Press recently added a comprehensive dataset on medical marijuana registry programs across the U.S. to the ProPublica Data Store. Since a national dataset did not exist, we collected the data from each state through records requests, program reports and department documents.

One question we sought to answer with that data: why people wanted a medical marijuana card in the first place.

The answers came in many different formats, in some cases with a single response question, in others with a multiple response question. It’s the difference between “check one” and “check all.”

When someone answers a single response question, they are choosing what they think is the most important and relevant answer. This may be an accurate assessment of the situation — or an oversimplified take on the question.

When someone is given the chance to choose one or more responses, they are choosing all they think is relevant and important, and in no particular order. If you have four response choices, you may have to split the data into up to 16 separate groups to cover each combination. Or you may be given a summary table with the results for each option without any information on how they combine.
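To make the arithmetic concrete, here is a minimal Python sketch; the condition names are just illustrative examples of qualifying conditions.

```python
# Minimal sketch: four "check all that apply" options produce 2**4 = 16
# possible answer combinations, counting the empty selection.
from itertools import combinations

options = ["cancer", "epilepsy", "nausea", "PTSD"]
groups = [combo for r in range(len(options) + 1)
          for combo in combinations(options, r)]
print(len(groups))  # 16
```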

In the medical marijuana data, some states have 10 or more qualifying conditions — from cancer and epilepsy to nausea and post-traumatic stress disorder. Of the 16 states where data on qualifying condition is available, 13 allow for multiple responses. And of those, three states even shifted from collecting single to multiple responses over the years.

This makes it nearly impossible to compare across states when given only summary tables.

So, what can we do?

One tip is to compare states that have similar types of questionnaires — single response with single response, multiple with multiple. We used this approach for clarification when looking into the numbers for patients reporting PTSD as a qualifying condition. We found that half of all patients in New Mexico use medical marijuana to treat PTSD, and the numbers do not seem to be inflated by the method of data collection. New Mexico asks for a single qualifying condition, yet the proportion of people reporting PTSD as their main ailment is two to three times higher than in states where patients could report multiple conditions.

Using data from the 13 states that allow multiple responses, we found that when states expand their medical markets to include PTSD, registry numbers ramp up and the proportion of patients reporting PTSD increases quickly. The data didn’t enable us to get one single clean statistic, but it still made it possible for us to better understand how people used medical marijuana.

Get the data (with a description of the caveats you’ll need to keep in mind when working with it) for your own analysis here.

read more...

Announcing Largo 0.6.4

This week's release of updates to the Largo WordPress theme is all about improvements for images, pull quotes, and media. It also brings improved compatibility and editorial functions for the WordPress Block Editor.

This release includes:

An example of the new pull quote block styles.
  • Improved pull quote display. The Pull Quote block gains full styling, so that block quotes and pull quotes no longer appear the same.
  • The ability to insert media credits from the Media Gallery in the block editor.
  • More thumbnail display options for the Series Post widget.
  • Compatibility with WP 5.2's wp_body_open hook, which will be increasingly important for plugin compatibility.

This release also contains a number of fixes and minor updates. Particular thanks go to outside contributor @megabulk.

What's new in 0.6.4?

For the full details on what we've updated in version 0.6.4, including improvements for developers, please see the Largo 0.6.4 official release notes.

You may also want to read the release notes for version 0.6.3 and 0.6.2.

Upgrading to the Latest Version of Largo

Want to update to the latest version of Largo? Follow these instructions, or send us an email!

Before updating, please see the upgrade notices outlined here.

What's next?

When Largo was first released, it contained functionality for things that did not yet exist in WordPress core, like term metadata. Our next release will continue the work already underway to streamline the theme and seamlessly switch to using WordPress' now-built-in functionality.

This is in addition to an overall focus to improve Largo's frontend for mobile-first speed and easy editorial customizations.

Plugins

Another part of the work we’ve done recently with Largo has been to spin out important functionality for publishers into standalone plugins. This makes these features widely available for any WordPress site to use while further streamlining the Largo theme and improving overall performance. We published the Disclaimers plugin last year. The 0.7 release of Largo will complete the transition of the Disclaimers Widget to a standalone plugin by removing the widget from Largo. We're doing the same with our Staff plugin.

New INN Labs publishing tools:

  1. We recently launched the Republication Tracker Tool plugin which allows publishers to easily share their stories with other websites and then track engagement of those shared stories in Google Analytics.
  2. Link Roundups received important updates in the version 1.0 release. This WordPress plugin helps editors aggregate links from around the web and save them as “Saved Links”. You can publish these curated links in widgets and posts in your site, or push Link Roundups directly to subscribers via MailChimp.

Send us Your Feedback

We want to know what you think will make Largo better. Send us an email with your ideas!

“It was hard to take Nazi memes all that seriously when they were sandwiched between sassy cats”

Syracuse’s Whitney Phillips — scholar of the darker corners of Internet culture, author of “The Oxygen of Amplification,” last seen here offering this dire observation/prediction last winter — has a new paper out in Social Media + Society that might be a bracing experience for some Nieman Lab readers.

When we think of the nightmarish edge of online culture — the trolling, the disinformation, the rage, the profound information pollution — it’s easy to think of the worst offenders. 4chan denizens, for-the-lulz trolls, actual Nazis — you know the type. But, she writes, maybe the origins of those phenomena aren’t only in those dark corners of Internet culture — maybe they’re also in the kind of good Internet culture, the kind that people sometimes get nostalgic about.

I used to believe that the internet used to be fun. Obviously the internet isn’t fun now. Now, keywords in internet studies—certainly, keywords in my own internet studies—include far-right extremism, media manipulation, information pollution, deep state conspiracy theorizing, and a range of vexations stemming from the ethics of amplification.

Until fairly recently, I would sigh and say, remember when memes were funny? When the stakes weren’t so high? I wish it was like that still. I was not alone in these lamentations; when I would find myself musing such things, it was often in the company of other internet researchers, or reporters covering the technology and digital culture beat. Boy oh boy oh boy, we would say. What we wouldn’t do to go back then. It was a simpler time.

…internet/meme culture was a discursive category, one that aligned with and reproduced the norms of whiteness, maleness, middle-classness, and the various tech/geek interests stereotypically associated with middle-class white dudes. In other words: this wasn’t internet culture in the infrastructural sense, that is, anything created on or circulated through the networks of networks that constitute the thing we call The Internet. Nor was it meme culture in the broad contemporary sense, which, as articulated by An Xiao Mina , refers to processes of collaborative creation, regardless of the specific objects that are created. This was a particular culture of a particular demographic, who universalized their experiences on the internet as the internet, and their memes as what memes were.

Now, there is much to say about the degree to which “mainstream” internet culture—at least, what was described as internet culture by its mostly white participants—overlapped with trolling subculture on and around 4chan’s /b/ board, where the subcultural sense of the term “trolling” first emerged in 2003…the intertwine between 4chan and “internet culture” is so deep that you cannot, and you should not, talk about one without talking about the other. However, while trolling has—rightly—been resoundingly condemned for the better part of a decade, the discursive category known as internet culture has, for just as long, been fawned over by advertisers and other entertainment media. The more jagged, trollish edges of “internet culture” may have been sanded off for family-friendly consumption, but the overall category and its distinctive esthetic—one that hinges on irony, remix, and absurd juxtaposition—has in many ways fused with mainstream popular culture.

Specifically, it was the breadth of types within this sort of earlier-web content that opened the door for what we’ve since seen:

The fact that so many identity-based antagonisms, so many normative race and gender assumptions, and generally so much ugliness was nestled alongside all those harmless and fun and funny images drills right to the root of the problem with internet culture nostalgia. A lot of “internet culture” was harmless and fun and funny. But it came with a very high price of entry. To enjoy the fun and funny memes, you had to be willing—you had to be able—to deal with all the ugly ones. When faced with this bargain, many people simply laughed at both. It was hard to take Nazi memes all that seriously when they were sandwiched between sassy cats and golf course enforcement bears, and so, fun and ugly, ugly and fun, all were flattened into morally equivalent images in a flipbook. Others selectively ignored the most upsetting images, or at least found ways to cordon them off as being “just” a joke, or more frequently, “just” trolling, on “just” the internet.

Of course, only certain kinds of people, with certain kinds of experiences, would be able and willing to affect such indiscriminate mirth. Similarly, only certain kinds of people, with certain kinds of experiences, would be able and willing to say, “ok, yes, I know that image is hateful and dehumanizing, so I will blink and not engage with it, or you know, maybe chuckle a little to myself, but I won’t save it, and I won’t post anything in response, and instead will wait patiently until something that’s ok for me to laugh at shows up.”

Phillips calls that response the “ability to disconnect from consequence, from specificity, from anything but one’s own desire to remain amused forever.” And — apologies for all the blockquoting, but it’s good! — she ties that back to some of the journalists who covered this space when its public impact turned more serious down the road.

Very quickly, I realized that many of the young reporters who initially helped amplify the white nationalist “alt right” by pointing and laughing at them, had all come up in and around internet culture-type circles. They may not have been trolls themselves, but their familiarity with trolling subculture, and experience with precisely the kind of discordant swirl featured in the aforementioned early-2000s image dump, perfectly prepped them for pro-Trump shitposting. They knew what this was. This was just trolls being trolls. This was just 4chan being 4chan. This was just the internet. Those Swastikas didn’t mean anything. They recognized the clothes the wolf was wearing, I argued, and so they didn’t recognize the wolf.

This was how the wolf operated: by exploiting the fact that so many (white) people have been trained not to take the things that happen on the internet very seriously. They operated by laundering hate into the mainstream through “ironically” racist memes, then using all that laughter as a radicalization and recruitment tool. They operated by drawing from the media manipulation strategies of the subcultural trolls who came before, back when these behaviors were, to some anyway, still pretty funny.

Go read the whole thing, but here’s the lesson to take from it:

Most foundationally, shaking your head disapprovingly at the obvious villains—the obvious manipulators, the obvious abusers, the obvious fascists—isn’t enough. Abusers, manipulators, and fascists on the internet (or anywhere) certainly warrant disapproving head shakes, and worse. But so does a whole lot else. Pressingly, the things that were—and that for some people, still are—fun and funny and apparently harmless need more careful unpacking. Fun and funny and apparently harmless things have a way of obscuring weapons that privileged people cannot see, because they do not have to see them.

SRCCON 2019 – A first-timer’s recap

Miranda with Jonathan Kealing (INN’s Chief Network Officer) and INN Members Candice Forman from Outlier Media and Asraa Mustufa from Chicago Reporter. We had a blast meeting and chatting in person!

I wasn’t entirely sure what to expect going into my first ever SRCCON, a two-day conference from the folks at OpenNews. The conference is designed to connect news technology and data teams in a hands-on, interactive series of workshops and conversations to address the practical challenges faced by newsrooms today. Leading up to the event, I had heard SRCCON described as “inclusive," “welcoming," and “supportive," which turned out to be an understatement!

As someone relatively new to the world of journalism conferences, and even more new to SRCCON, I was blown away at how many comfortable, friendly, and productive conversations were had before, during, and after the sessions each day. At every table at every meal, and at each session, nearby people took the time to introduce themselves and constantly made me feel welcome and included. 

I loved the opportunity to meet people in person from many INN Member organizations and formed so many new connections with newsrooms far and wide. There is still so much to process from my two days there, but here’s a recap of some of my favorite sessions at SRCCON 2019:

Ghosts in the Machine - How technology is shaping our lives and how we can build a better way

I kicked off the conference by attending this session by facilitators Kaeti Hinck and Stacy-Marie Ishmael that focused on people-centered metrics and outcomes for newsrooms. We discussed issues with commonly-used metrics and brainstormed ways to make these metrics humane and collected in a way that respects people and humanity, rather than just the numbers.

My table discussed at length measuring retweets, shares, and other social engagement statistics and brainstormed ways we can improve these measurements by increasing education around what the statistics mean and considering sentiment behind shares when collecting data. Other tables discussed topics such as measuring changes in policy, comprehension of article content, truly engaging with readers using surveys and rewards for participation, and many other complex topics.

While finding solutions for these issues is challenging, these continued conversations around human-centered metrics and ethics around data collection are incredibly important as technology plays an increasingly important role in how we collect, define, and distribute news. I’m certain that this session wasn’t the end of these conversations, and I can’t wait to see where they go next.

Engineering Beyond Blame

Joe Hart and Vinessa Wan from the New York Times led this session introducing a collaborative method for discussing incidents and outages within today’s complex systems via blameless PostMortems called “Learning Reviews." They made the point that the complex systems we work with today make it necessary to prioritize learning opportunities over blame and punishment. The traditional idea of a single point of failure often doesn’t exist in complex systems, where many factors can combine to lead to an incident or outage.

The goal of these “Learning Reviews” is to create a psychologically safe space where an honest and thorough investigation can determine where the system or current team process failed, rather than assigning individual blame. They outlined how to create a defined process for these reviews, and then walked us through several small group exercises to demonstrate how complexity necessitates this approach. Here’s an article with more information about The Times Open Team’s approach and how they used it for midterm election coverage.

What Happens to Journalism When AI Eats the World?

This was a fascinating and thought-provoking session led by Sarah Schmalbach and Ajay Chainani from the Lenfest Local Lab that examined the ethics behind the emerging field of AI, machine learning, and deep learning, and the effect these questions can have on the world around us.

We started with a group conversation about some of the AI horror stories we’ve heard about in the news or in our own lives, but then also discussed some of the groundbreaking AI work advancing journalism and helping make positive impacts on our world. 

They then led us through a series of small group discussions where we came up with our own AI product and then evaluated it using common ethics standards from companies such as Microsoft and Google. The main takeaway: everyone in the room left with an ethical framework for evaluating AI news projects and the confidence to continue these discussions moving forward.

Thanks to Sarah and Ajay for leading such a deep and thought-provoking session!

Other highlights from the conference:

  • Brainstorming ways to explain complex topics in a very unique setting:
Jennifer LaFleur, Aaron Kessler, and Eric Sagara led an awesome session about creative ways to teach complex issues in the Heritage Gallery at McNamara Alumni Center.
  • New to SRCCON for 2019 was the Science Fair, a chance to informally check out journalism tools and resources with interactive demos. Here’s INN’s Jonathan Kealing trying out a VR news story from the Los Angeles Times:
 Jonathan Kealing checks out the size of a studio apartment via a VR headset, a project from the Los Angeles Times as part of their immersive storytelling demo.
  • I chose to end the conference by witnessing a bit of friendly competition at “CMS Demos: Approaches to Helping Our Newsrooms Do Their Best Work." The demos featured a walkthrough of writing, editing, and publishing a news story from 5 different custom CMS platforms, along with some light-hearted competition and a lot of laughs. Included in the demos were:
    • Scoop, from the New York Times
    • Arc, from Washington Post
    • Chorus, from Vox
    • Copilot, from Condé Nast
    • Divesite, from Industry Dive

Overall, SRCCON 2019 exceeded my expectations as a first-time attendee, and was such an incredible opportunity to network, address important issues through interactive sessions, and have a ton of meaningful conversations with newsrooms from all over. Events like this remind us why we do the work we do with nonprofit newsrooms and inspire us to continue addressing the challenges faced by newsrooms today. Thanks so much to OpenNews and all the other sponsors, volunteers, and fellow attendees that made SRCCON 2019 possible!  

Sold! Randa Duncan Williams buys Texas Monthly, the latest legacy brand to enjoy billionaire ownership

There’s a party going on deep in the heart of Texas. Texas Monthly is the latest in a series of newsrooms to be scooped up and bolstered by the deep and patient pockets of legacy media-loving billionaires. Randa Duncan Williams, heiress of an oil and gas fortune and a native Texan, chairs the holding company […]

The post Sold! Randa Duncan Williams buys Texas Monthly, the latest legacy brand to enjoy billionaire ownership appeared first on Poynter.

Largo site wins Cleveland Press Club magazine website award

Ben Keith and Lucia Walinchus with Eye on Ohio's first-place award for magazine website.

The award plaque.

We're happy to relay the news that INN Member Eye on Ohio won First Place Magazine Website in the Press Club of Cleveland's annual All-Ohio Excellence in Journalism Awards. We thank the Club and the judges for their consideration and congratulate Eye on Ohio for their success.

Magazine Website
First place: EyeonOhio.com
Lucia Walinchus, Ben Keith, Eye on Ohio

Eye on Ohio is built using INN Labs' Largo WordPress theme, which is the fruit of many years' work by contributors at INN, at NPR, and from the greater WordPress community. Eye on Ohio's executive director listed Labs' lead developer, Ben Keith, as the second contact on the award for his contributions as an INN Labs employee in Ohio.

Beyond Ben, contributors to Largo include past and present INN staff, folks at NPR's former Project Argo, and community contributors from across the web. We've got a full list in Largo's README over on GitHub.

“Your Default Position Should Be Skepticism” and Other Advice for Data Journalists From Hadley Wickham

So you want to explore the world through data. But how do you actually *do* it?

Hadley Wickham is a leading developer of open source tools for data science and works as the chief scientist at RStudio. We talked with him about interrogating data, what stories might be hiding in the gaps and how bears can really mess things up. What follows is a transcript of our talk, edited for clarity and length.

ProPublica: You’ve talked about the way data visualization can help the process of exploratory data analysis. How would you say this applies to data journalism?

Wickham: I’m not sure whether I should have the answers or you should have the answers! I think the question is: How much of data journalism is reporting the data that you have versus finding the data that you don’t have ... but you should have ... or want to have ... that would tell the really interesting story.

I help teach a data science class at Stanford, and I was just looking through this dataset on emergency room visits in the United States. There is a sample of every emergency visit from like 2013 to 2017 ... and then there’s this really short narrative, a one-sentence description of what caused the accident.

I think that’s a fascinating dataset because there are so many stories in it. I look at the dataset every year, and each time I try and pull out a little different story. This year, I decided to look at knife-related injuries, and there are massive spikes on Memorial Day, Fourth of July, Thanksgiving, Christmas Day and New Year’s.
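As a rough illustration of that kind of exploratory pass (this is not Wickham’s code), here is a minimal Python sketch; the file name and the “treatment_date” and “narrative” columns are hypothetical stand-ins for however the emergency room dataset is actually structured.

```python
# Minimal sketch: count knife-related ER visits by calendar day and look for
# spikes. File and column names are hypothetical placeholders.
import pandas as pd

visits = pd.read_csv("er_visits.csv", parse_dates=["treatment_date"])

# Keep visits whose one-sentence narrative mentions a knife
knives = visits[visits["narrative"].str.contains("knife", case=False, na=False)]

# Count injuries per calendar day, pooled across years
by_day = (knives
          .groupby(knives["treatment_date"].dt.strftime("%m-%d"))
          .size()
          .sort_values(ascending=False))
print(by_day.head(10))  # fixed-date holidays like July 4 surface directly;
                        # movable ones like Thanksgiving take one more step
```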

As a generalist you want to turn that into a story, and there are so many questions you can ask. That kind of exploration is really a warmup. If you’re more of an investigative data journalist, you’re also looking for the data that isn’t there. You’ve got to force yourself to think, well, what should I be seeing that I’m not?

ProPublica: What’s a tip for someone who thinks that they have found something that isn’t there. What’s the next step that you take when you have that intuition?

Wickham: This is one of the things I learned from going to NICAR, which is completely unnatural to me, and that’s picking up a phone and talking to someone. Which I would never do. There is no situation in my life in which I would ever do that unless it’s a life-threatening emergency.

But, I think that’s when you need to just start talking to people. I remember one little anecdote. I was helping a biology student analyze their field work data, and I was looking at where they collected data over time.

And one year they had no data for a given field. And so I go talk to them. And I was like: “Well, why is that? This is really weird.”

And they’re like, well, there was a bear in the field that year. And so we couldn’t collect any data.

But kind of an interesting story, right?

ProPublica: What advice would you have for editors who are managing or collaborating with highly technical people in a journalism environment but who may not share the same skill set? How can they be effective?

Wickham: Learn a little bit of R and basic data analysis skills. You don’t have to be an expert; you don’t have to work with particularly large datasets. It’s a matter of finding something in your own life that’s interesting that you want to dig into.

One [recent example]: I noticed on the account from my yoga class, there was a page that has every single yoga class that I had ever taken.

And so I thought it would be kind of fun to take a look at that. See how things change over time. Everyone has little things like that. You’ve got a Google Sheet of information about your neighbors, or your baby, or your cat, or whatever. Just find something in life where you have data that you’re interested in. Just so you’ve got that little bit of visceral experience of working with data.

The other challenge is: When you’re really good at something, you make it look easy. And then people who don’t know so much are like: “Wow, that looks really easy. It must have taken you 30 minutes to scrape those 15,000 Excel spreadsheets of varying different formats.”

It sounds a little weird, but it’s like juggling. If you’re really, really, really good at juggling, you just make it look easy, and people are like: “Oh well. That’s easy. I can juggle eight balls at a time.” And so jugglers deliberately build mistakes into their acts. I’m not saying that’s a good idea for data science, but you’ve taken this very hard problem, broken it down into several pieces, made the whole thing look easy. How do you also convey that this is something you had to spend a huge amount of time on? It looks easy now, because I’ve spent so much time on it, not because it was a simple problem.

Data cleaning is hard because it always takes longer than you expect. And it’s really, really difficult to predict in advance where the problems are going to lie. At the same time, that’s where you get the value and can do stuff that no one has done before. The easy, clean dataset has already been analyzed to death. If you want something that’s unique and really interesting, you’ve got to dig for it.

ProPublica: During that data cleaning process, is that where the journalist comes out? When you’re cleaning up the data but you’re also getting to know it better and you’re figuring out the questions and the gaps?

Wickham: Yeah, absolutely. That’s one of the things that really irritates me. I think it’s easy to go from “data cleaning” to “Well, you’ve got a data cleaning problem, you should hire a data janitor to take care of it.” And it’s not this “janitorial” thing. Actually cleaning your data is when you’re getting to know it intimately. That’s not something you can hand off to someone else. It’s an absolutely critical part of the data science process.

ProPublica: The perennial question. What makes R an effective environment for data analysis and visualization? What does it offer over other tool sets and platforms?

Wickham: I think you have basically four options. You’ve got R and Python. You’ve got JavaScript, or you’ve got something point and click, which obviously encompasses a very, very large number of tools.

The first question you should ask yourself is: Do I want to use something point and clicky, or do I want to use a programming language? It basically comes down to how much time do you spend? Like, if you’re doing data analysis every day, the time it takes to learn a programming language pays off pretty quickly because you can automate more and more of what you do.

And so then, if you decided you wanted to use a programming language, you’ve got the choice of doing R or Python or JavaScript. If you want to create really amazing visualizations, I think JavaScript is a place to do it, but I can’t imagine doing data cleaning in JavaScript.

So, I think the main competitors are R and Python for all data science work. Obviously, I am tremendously biased because I really love R. Python is awesome, too. But I think the reason that you can start with R is because in R you can learn how to do data science and then you can learn how to program, whereas in Python you’ve got to learn programming and data science simultaneously.

R is kind of a bit of a weird creature as a programming language, but one of the advantages is that you can get some basic templates that you copy and paste. You don’t have to learn what a function is, exactly. You don’t have to learn any programming language jargon. You can just kind of dive in. Whereas with Python you’re gonna learn a little bit more that’s just programming.

ProPublica: It’s true. I’ve tried to make some plots in Python and it was not pretty.

Wickham: On every team I talked to, there are people using R and people using Python, and it’s really important to help those people work together. It’s not a war or a competition. People use different tools for different purposes. I think that is very important, and one project to that end is this thing called Apache Arrow, which Wes [McKinney] has been working on through a new organization called Ursa.

Basically, the idea of Apache Arrow is to just sit down and really think, “What is the best way to store data-science-type data in memory?” Let’s figure that out. And then once we’ve figured it out, let’s build a bunch of shared infrastructure. So Python can store the data in the same way. R can store the data in the same way. Java can store the data in the same way. And then you can see, and mostly use, the same data in any programming language, so you’re not copying it back and forth all the time.

ProPublica: Do you think journalists risk making erroneous assumptions about the accuracy of data or drawing inappropriate conclusions, such as mistaking correlation for causation?

Wickham: One of the challenges of data is that if you can quantify something precisely, people interpret it as being more “truthy.” If you’ve got five decimal places of accuracy, people are more likely to just kind of “believe it” instead of questioning it. A lot of people forget that pretty much every dataset is collected by a person, or there are many people involved. And if you ignore that, your conclusions are going to possibly be fantastically wrong.

I was judging a data science poster competition, and one of the posters was about food safety and food inspection reports. And I … and this probably says something profound about me ... but I immediately think: “Are there inspectors who are taking bribes, and if there were, how would you spot that from the data?”

You shouldn’t trust the data until you’ve proven that it is trustworthy. Until you’ve got another independent way of backing it up, or you’ve asked the same question three different ways and you get the same answer three different times. Then you should feel like the data is trustworthy. But until you’ve understood the process by which the data has been collected and gathered ... I think you should be very skeptical. Your default position should be skepticism.

ProPublica: That’s a good fit for us.

read more...

New: You Can Now Search the Full Text of 3 Million Nonprofit Tax Records for Free

On Thursday, we launched a new feature for our Nonprofit Explorer database: The ability to search the full text of nearly 3 million electronically filed nonprofit tax filings sent to the IRS since 2011.

Nonprofit Explorer already lets researchers, reporters and the general public search for tax information from more than 1.8 million nonprofit organizations in the United States, as well as allowing users to search for the names of key employees and directors of organizations.

Now, users of our free database can dig deep and search for text that appears anywhere in a nonprofit’s tax records, as long as those records were filed digitally — which according to the IRS covers about two-thirds of nonprofit tax filings in recent years.

How can this be useful to you? For one, this feature lets you find organizations that gave grants to other nonprofits. Any nonprofit that gives grants to another must list those grants on its tax forms — meaning that you can research a nonprofit’s funding by using our search. A search for “ProPublica,” for example, will bring up dozens of foundations that have given us grants to fund our reporting (as well as a few filings that reference Nonprofit Explorer itself).

Just another example: When private foundations have investments or ownership interest in for-profit companies, they have to list those on their tax filings as well. If you want to research which foundations have investments in a company like ExxonMobil, for example, you can simply search for the company name and check which organizations list it as an investment.

The possibilities are nearly limitless. You can search for the names or addresses of independent contractors that made more than $100,000 from a nonprofit, or for addresses, keywords in mission statements or descriptions of accomplishments. You can even use advanced search operators, so, for instance, you can find any filing that mentions either “The New York Times,” “nytimes” or “nytimes.com” in a single search.

The new feature contains every electronically filed Form 990, 990-PF and 990-EZ released by the IRS from 2011 to date. That’s nearly 3 million filings. The search does not include forms filed on paper.

So please, give this search a spin. If you write a story using information from this search, or you come across bugs or problems, drop us a line! We’re excited to see what you all do with this new superpower.

read more...

New Plugin Launch: Republication Tracker Tool

INN Labs is happy to announce our newest plugin, the Republication Tracker Tool.

The Republication Tracker Tool allows publishers to share their stories with other websites and then track engagement with those shared stories in Google Analytics. The technology behind this tracking is similar to ProPublica’s PixelPing.

Why Might You Want to Use This Plugin?

  • Grow your audience and pageviews: Other publishers and readers can acquire and redistribute your content under a Creative Commons license.
  • Better republishing reporting: See which publishers are republishing your content and analyze engagement with it.
  • Foster collaborations: Gather supporting data to build relationships with other publishers who may be republishing your content.

How Publishers Republish Your Content

A simple “Republish This Story” button is added to your posts through a WordPress widget. This lets other sites republish your stories and lets you track engagement with those republished stories via Google Analytics.

Sample republication button (style can be customized).

Track Republished Posts in WordPress

Once one of your stories has been republished, you will easily be able to see how many times it has been republished, how many republished views it has, who has republished it, and the URL of where it was republished, all from the WordPress edit screen for that story.

Example of republication data in the edit screen of a WordPress post.

Track Republished Posts in Google Analytics

Another valuable feature of the Republication Tracker Tool is that all of your republished post data is also tracked in your Google Analytics account. Once you have your Google Analytics ID configured in the Republication Tracker Tool settings, you will be able to log into Google Analytics and view who has republished your stories, who is republishing your stories most often, and more.

Example of republication data within Google Analytics.

More Information and Feedback

For more information about how the plugin works:

You can download the Republication Tracker Tool from the WordPress.org plugin repository or through your website’s WordPress plugin page.

The initial release of this plugin was made possible by valuable INN member testing and feedback. If your organization uses the plugin, please let us know and continue sending us suggestions for improvement. Thank you!

The Republication Tracker Tool is one of the many WordPress plugins maintained by INN Labs, the tech and product team at the Institute for Nonprofit News.

Announcing Version 1.0 of the Link Roundups Plugin

INN Labs is pleased to announce an important update to the Link Roundups plugin!

If you run a daily or weekly newsletter collecting headlines from around the state, region, or within a particular industry, the Link Roundups plugin will make it easier to build and feed your aggregation posts into MailChimp.

The Link Roundups plugin helps editors aggregate links from around the web and save them in WordPress as “Saved Links”. You can publish these curated links in a Link Roundup (more below), display Saved Links and Link Roundups in widgets and posts on your WordPress site, or push Link Roundups directly to subscribers via MailChimp. It's designed to replace scattered link-gathering workflows that span email, Slack, Google Docs and spreadsheets, and it streamlines collaboration between different staffers.

Why might you want to use this plugin? Here are a few reasons:

  • It creates a single destination for collecting links and metadata
  • On sites that publish infrequently, it provides recently published (curated) content for your readers
  • Weekly roundup newsletters or posts are a great way to recap your own site's coverage and build and diversify your audience, which can increase donations

Saved Links

The central function of the Link Roundups plugin is the Saved Link. It's a way of storing links in your WordPress database, alongside metadata such as the link's title, source site, and your description of the link's contents.

A screenshot of the Saved Links interface, showing many saved links and their respective metadata: authors, links, descriptions, and tags.

Save to Site Bookmarklet

When WordPress 4.9 removed the "Press This" functionality, this plugin's bookmarklet broke. This release's updates to the Saved Links functionality include a renewed "Save to Site" bookmarklet, based on the canonical Press This plugin's functions. If your site has the WordPress-maintained Press This plugin active, your site's users will be able to generate new bookmarklets. We include instructions on how to use the bookmarklet in the latest release.

A screenshot of the "Save to Site" button and its copy button

Once you've accumulated a few Saved Links, you can display them on your site using the Saved Links Widget or start to create Link Roundups (see next).

Saved Links Widget

Common uses of this widget include "coverage from around the state" or "recommended reads" or "from our partners" links.

It's a good way to point your readers to expert coverage from newsrooms you partner with. With the ability to sort Saved Links by tag, you can easily filter a widget to show only a selection of all the links saved on your site. Here's how Energy News Network uses the widget:

A screenshot of the widget as it appears at Energy News Network, showing a selection of links from the last day.

Link Roundups

Link Roundups are one of the best ways to present Saved Links to your readers. Collect links with Saved Links, then create a Link Roundup post with the week's curated links. The person who assembles the Link Roundup doesn't have to deal with messy cut-and-paste formatting or composing blurbs — when your users create Saved Links, they're already adding headlines, blurbs, and sources.

Add some opening and closing text, and you're most of the way to having composed a morning or weekly news roundup.

Link Roundups are a custom post type with all the Classic Editor tools and an easy interface for creating lists of Saved Links. As a separate post type, they can be integrated into your site's standard lists of posts or kept separate in their own taxonomies. You don't have to integrate the roundups into your standard posts flow; that's why we provide a Link Roundups widget to fulfill your widget area needs.

MailChimp Integration

Link Roundups don't have to stay on your site. If you configure your site to connect to the MailChimp API and create a newsletter template with editable content areas, you can send a Link Roundup directly to MailChimp from WordPress.

From the Link Roundup editor, you can choose a mailing list, create MailChimp campaign drafts, send test emails, and send drafted campaigns directly. If you'd rather open a draft campaign in MailChimp to finalize the copy, there's a handy link to your draft campaign.

A screenshot of a settings metabox: choose a campaign type of regular or text. Choose a list to send to: the Link Roundups Mailchimp Tools Test list, with the group "are they Ben" option chosen: "Ben". The campaign title will be "Test Title Three Title", the test subject will be "Test Title Three Subject", and the template will be "Link Roundups Test 2"
Here are the MailChimp settings for the Link Roundups campaign editor: many of the controls that you'd want to use to create and send a draft campaign.

More information

For more information about how the plugin works, see the Largo guide for administrators, the plugin's documentation on GitHub, or drop by one of our weekly open office hours sessions with your questions. You can also reach us by email at support@inn.org.

If you already have the Link Roundups plugin installed, keep an eye out for an update notice in your WordPress dashboard. If you'd like to install it, download it from the WordPress.org plugin repository or through your site dashboard's plugin page.

This update was funded in part by Energy News Network and Fresh Energy, with additional funding thanks to the generous support of the Democracy Fund, Ethics and Excellence in Journalism Fund, Open Society Foundation, and the John S. and James L. Knight Foundation.

Link Roundups is one of the many WordPress plugins maintained by INN Labs, the tech and product team at the Institute for Nonprofit News.

The Ticket Trap: Front to Back

Millions of motorists in Chicago have gotten a parking ticket. So when we built The Ticket Trap — an interactive news application that lets people explore ticketing patterns across the city — we knew that we’d be building something that shines a spotlight on an issue that affects people from all walks of life.

But we had a more specific story we needed to tell.

At ProPublica Illinois, we’d been reporting on Chicago’s aggressive parking and vehicle compliance ticket system for months. Our stories revealed a system that disproportionately punishes black and low-income residents and generates millions of dollars every year for the city by pushing massive debt onto Chicago’s poorest residents — even sending thousands into bankruptcy.

So when we thought about building an interactive database that allows the public, for the first time, to see all 54 million tickets issued over the last two decades, we wanted to make sure users understood the findings of the overall project. That’s why we centered the user experience around the disparities in the system, such as which wards have the most ticket debt and which have been hit hardest because residents can’t pay.

The Ticket Trap is a way for users to see lots of different patterns in tickets and to see how their wards fit into the bigger picture. It also gives civically active folks tools for talking about the issue of fines imposed by the city and helps them hold their elected officials accountable for how the city imposes debt.

The project also gave us an opportunity to try a bunch of technical approaches that could help a small organization like ours develop sustainable news apps. Although we’re part of the larger ProPublica, I’m the only developer in the Illinois office, so I want to make careful choices that will help keep our “maintenance debt” — the amount of time future-me will need to spend keeping old projects up and running — low.

Managing and minimizing maintenance debt is particularly important to small organizations that hope to do ambitious digital work with limited resources. If you’re at a small organization, or are just looking to solve similar problems, read on: These tools might help you, too.

In addition to lowering maintenance debt, I also wanted the pages to load quickly for our readers and to cost us as little as possible to serve. So I decided to eliminate, as much as possible, having executable code running on a server just to load pages that rarely change. That decision required us to solve some problems.

We built the site on the JAMstack model: a static front end paired with microservices that handle the dynamic features.

The learning curve for these technologies is steep (don’t worry if you don’t know what it all means yet). And while there are lots of good resources to learn the components, it can still be challenging to put them all together.

So let’s start with how we designed the news app before descending into the nerdy lower decks of technical choices.

Design Choices

The Ticket Trap focuses on wards, Chicago’s primary political divisions and the most relevant administrative geography. Aldermen don’t legislate much, but they have more power over ticketing, fines, punishments and debt collection policies than anyone except the mayor.

We designed the homepage as an animated, sortable list that highlights the wards, instead of a table or citywide map. Our hope was to encourage users to make more nuanced comparisons among wards and to integrate our analysis and reporting more easily into the experience.

The top of the interface provides a way to select different topics and then learn about what they mean and their implications before seeing how the wards compare. If you click on “What Happens if You Don’t Pay,” you’ll learn that unpaid tickets can trigger late penalties, but they can also lead to license suspensions and vehicle impoundments. Even though many people from vulnerable communities are affected by tickets in Chicago, they’re not always familiar with the jargon, which puts them at a disadvantage when trying to defend themselves. Empowering them by explaining some basic concepts and terms was an important goal for us.

Below the explanation of terms, we display some small cards that show you the location of each ward, the alderman who represents it, its demographic makeup and information about the selected topic. The cards are designed to be easy to “skim and dive” and to make visual comparisons. You can also sort the cards based on what you’d like to know.

We included some code in our pages to help us track how many people used different features. About 50 percent of visitors selected a new category at least once, and 27 percent sorted once they were in a category. We’d like to increase those numbers, but they’re in line with engagement patterns we saw for our Stuck Kids interactive graphic and better than we did on the interactive map in The Bad Bet, so I consider it a good start.

For more ward-specific information, readers can also click through to a page dedicated to their ward. We show much of the same information as the cards but allow you to home in on exactly how your ward ranks in every category. We also added some more detail, such as a map showing where every ticket in your ward has been issued.

We decided against showing trends over time on ward pages because the overall trend in the number of tickets issued is too big and complex a subject to capture in simple forms like line charts. As interesting as that may have been, it would have been outside the journalistic goals of highlighting systemic injustices.

For example, here’s the trend over time for tickets in the 42nd Ward (downtown Chicago). It’s not very revealing. Is there an upward trend? Maybe a little. But the chart says little about the overall effect of tickets on people’s lives, which is what we were really after.

On the other hand, the distributions of seizures/suspensions and bankruptcy are very revealing and show clear groupings and large variance, so each detail page includes visualizations of these variables.

Looking forward, there’s more we can do with these by layering on more demographic information and adding visual emphasis.

One last point about the design of these pages: I’m not a “natural” designer and look to colleagues and folks around the industry for inspiration and help. I made a map of some of those influences to show how many people I learned from as I worked on the design elements:

These include ProPublica news applications developer Lena Groeger’s work on Miseducation, as well as NPR’s Book Concierge, first designed by Danny DeBelius and most recently by Alice Goldfarb. I worked on both and picked up some design ideas along the way. Helga Salinas, then an engagement reporting fellow at ProPublica Illinois, helped frame the design problems and provided feedback that was crucial to the entire concept of the site.

Technical Architecture

The Ticket Trap is the first news app at ProPublica to take this approach to mixing “baked out” pages with dynamic features like search. It’s powered by a static site generator (GatsbyJS), a query layer (Hasura), a database (Postgres with PostGIS) and microservices (Serverless and Lambda).

Let’s break that down:

  • Front-end and site generator: GatsbyJS builds a site by querying for data and providing it to templates built in React that handle all display-layer logic, both the user interface and content.
  • Deployment and development tools: A simple Grunt-based command line interface for deploying and administrative tasks.
  • Data management: All data analysis and processing is done in Postgres. Using GNU Make, the database can be rebuilt at any time. The Makefile also builds map tiles and uploads them to Mapbox. Hasura provides a GraphQL wrapper around Postgres so that GatsbyJS can query it, and GraphQL is just a query language for APIs.
  • Search and dynamic services: Search is handled by a simple AWS Lambda function managed with Serverless that ferries simple queries to an RDS database.

It’s all a bit buzzword-heavy and trendy-sounding when you say it fast. The learning curve can be steep, and there’s been a persistent and sometimes persuasive argument that the complexity of modern JavaScript toolchains and frameworks like React is overkill for small teams.

We should be skeptical of the tech du jour. But this mix of technologies is the real deal, with serious implications for how we do our work. In my view, once I could put all the pieces together, there was significantly less complexity than when using MVC-style frameworks for news apps.

Front End and Site Generator

GatsbyJS provides data to templates (built as React components) that contain both UI logic and content.

The key difference here from frameworks like Rails is that instead of splitting up templates and the UI (the classic “change template.html then update app.js” pattern), GatsbyJS bundles them together using React components. In this model, you factor your code into small components that bundle data and interactivity together. For example, all the logic and interface for the address search is in a component called AddressSearch. This component can be dropped into the code anywhere we want to show an address search using an HTML-like syntax (<AddressSearch />) or even used in other projects.

We’ll skip over what I did here, which is best summed up by this Tweet:

lol pic.twitter.com/UCpQK131J6— Thomas Wilburn (@thomaswilburn) January 16, 2019

There are better ways to learn React than my subpar code.

GatsbyJS also gives us a uniform system for querying our data, no matter where it comes from. In the spirit of working backward, look at this simplified query snippet from the site’s homepage, which provides access to data about each ward’s demographics, ticketing summary data, responsive images with locator maps for each ward, site configuration and editable snippets of text from a Google spreadsheet.

export const query = graphql`
  query PageQuery {
    configYaml { slug title description }
    allImageSharp {
      edges {
        node {
          fluid(maxWidth: 400) { ...GatsbyImageSharpFluid }
        }
      }
    }
    allGoogleSheetSortbuttonsRow {
      edges {
        node { slug label description }
      }
    }
    iltickets {
      citywideyearly_aggregate {
        aggregate {
          sum { current_amount_due ticket_count total_payments }
        }
      }
      wards {
        ward
        wardDemographics { white_pct black_pct asian_pct latino_pct }
        wardMeta { alderman address city state zipcode ward_phone email }
        wardTopFiveViolations { violation_description ticket_count avg_per_ticket }
        wardTotals {
          current_amount_due
          current_amount_due_rank
          ticket_count
          ticket_count_rank
          dismissed_ticket_count
          dismissed_ticket_count_rank
          dismissed_ticket_count_pct
          dismissed_ticket_count_pct_rank
          …
        }
      }
    }
  }
`

Seems like a lot, and maybe it is. But it’s also powerful, because it’s the precise shape of the JSON that will be available to our template, and it draws on a variety of data sources: a YAML config file kept under version control (configYaml), images from the filesystem processed for responsiveness (allImageSharp), edited copy from Google Sheets (allGoogleSheetSortbuttonsRow) and ticket data from PostgreSQL (iltickets).

And data access in your template becomes very easy. Look at this snippet:

iltickets {
  wards {
    ward
    wardDemographics { white_pct black_pct asian_pct latino_pct }
  }
}

In our React component, accessing this data looks like:

{data.iltickets.wards.map((ward, i) => (
  <p>Ward {ward.ward} is {ward.wardDemographics.latino_pct}% Latino.</p>
))}

Every other data source works exactly the same way. The simplicity and consistency help keep templates clean and clear to read.

Behind the scenes, Hasura, a GraphQL wrapper for Postgres, is stitching together relational database tables and serializing them as JSON to pull in the ticket data.

Data Management

Hasura

Hasura occupies a small role in this project, but without it, the project would be substantially more difficult. It’s the glue that lets us build a static site out of a large database, and it allows us to query our Postgres database with simple JSON-esque queries using GraphQL. Here’s how it works.

Let’s say I have a table called “wards” with a one-to-many relationship to a table called “ward_yearly_totals”. Assuming I’ve set up the correct foreign key relationships in Postgres, a query from Hasura would look something like:

wards {
  ward
  alderman
  wardYearlyTotals { year ticket_count }
}

On the back end, Hasura knows how to generate the appropriate join and turn it into JSON.

This process was also critical in working out the data structure. I was struggling with this but I realized that I just needed to work backward. Because GraphQL queries are declarative, I simply wrote queries that described the way I wanted the data to be structured for the front end and worked backward to create the relational database structures to fulfill those queries.

Hasura can do all sorts of neat things, but even the most simple use case — serializing JSON out of a Postgres database — is quite compelling for daily data journalism work.
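To make that concrete, here's a minimal sketch of what querying Hasura from Python might look like, using the illustrative wards table and wardYearlyTotals relationship from above. The endpoint URL is a placeholder and the exact path can vary by Hasura version, so treat this as a sketch rather than the project's actual code:

import requests

HASURA_URL = "http://localhost:8080/v1/graphql"  # hypothetical local Hasura endpoint

QUERY = """
{
  wards {
    ward
    alderman
    wardYearlyTotals { year ticket_count }
  }
}
"""

# Hasura answers GraphQL queries over plain HTTP POST and returns JSON shaped
# exactly like the query you sent.
response = requests.post(HASURA_URL, json={"query": QUERY})
response.raise_for_status()

for ward in response.json()["data"]["wards"]:
    print(ward["ward"], ward["alderman"], len(ward["wardYearlyTotals"]))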

Data Loading

GNU Make powers the data loading and processing workflow. I’ve written about this before if you want to learn how to do this yourself.

There’s a Python script (with tests) that handles cleaning up unescaped quotes and a few other quirks of the source data. We also use the highly efficient Postgres COPY command to load the data.
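As an illustration of the clean-then-COPY pattern (not the project's actual loader, which is open sourced separately), the workflow might look something like this, with the file name, database name and cleanup logic simplified:

import csv
import psycopg2

def clean_row(row):
    # Strip stray whitespace; the real script also repairs unescaped quotes
    # and other quirks of the source data.
    return [field.strip() for field in row]

with open("parking_2017.csv", newline="") as src, \
     open("parking_2017_clean.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    for row in csv.reader(src):
        writer.writerow(clean_row(row))

conn = psycopg2.connect("dbname=iltickets")  # hypothetical database name
with conn, conn.cursor() as cur, open("parking_2017_clean.csv") as f:
    # COPY streams the whole file into Postgres far faster than row-by-row INSERTs.
    cur.copy_expert("COPY parking FROM STDIN WITH CSV HEADER", f)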

The only other notable wrinkle is that our source data is split up by year. That gives us a nice way to parallelize the process and to load partial data during development to speed things up.

At the top of the Makefile, we have these years:

PARKINGYEARS = 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018

To load four years worth of data, processing in parallel across four processor cores looks like this:

PARKINGYEARS="2015 2016 2017 2018" make -j 4 parking

Make, powerful as it is for filesystem-based workflows and light database work, has been more than a bit fussy when working so extensively with a database. Dependencies are hard to track without hacks, which means not all steps can be run without remembering and running prior steps. Future iterations of this project would benefit from either more clever Makefile tricks or a different tool.

However, being able to recreate the database quickly and reliably was a central tenet of this project, and the Makefile did just that.

Analysis and Processing for Display

To analyze the data and deliver it to the front end, we wrote a ticket loader (open sourced here) to use SQL queries to generate a series of interlinked views of the data. These techniques, which I learned from Joe Germuska when we worked together at the Chicago Tribune, are a very powerful way of managing a giant data set like the 54 million rows of parking ticket data used in The Ticket Trap.

The fundamental trick to the database structure is to take the enormous database of tickets and crunch it down into smaller tables that aggregate combinations of variables, then run all analysis against those tables.

Let’s take a look at an example. The query below groups by year and ward, along with several other key variables such as violation code. By grouping this way, we can easily ask questions like, “How many parking meter tickets were issued in the 3rd Ward in 2005?” Here’s what the summary query looks like:

create materialized view wardsyearly as
select
    w.ward,
    p.violation_code,
    p.ticket_queue,
    p.hearing_disposition,
    p.year,
    p.unit_description,
    p.notice_level,
    count(ticket_number) as ticket_count,
    sum(p.total_payments) as total_payments,
    sum(p.current_amount_due) as current_amount_due,
    sum(p.fine_level1_amount) as fine_level1_amount
from wards2015 w
join blocks b on b.ward = w.ward
join geocodes g on b.address = g.geocoded_address
join parking p on p.address = g.address
where g.geocode_accuracy > 0.7
    and g.geocoded_city = 'Chicago'
    and (
        g.geocode_accuracy_type = 'range_interpolation'
        or g.geocode_accuracy_type = 'rooftop'
        or g.geocode_accuracy_type = 'intersection'
        or g.geocode_accuracy_type = 'point'
        or g.geocode_accuracy_type = 'ohare'
    )
group by
    w.ward,
    p.year,
    p.notice_level,
    p.unit_description,
    p.hearing_disposition,
    p.ticket_queue,
    p.violation_code;

The virtual table created by this view looks like this:

This is very easy to query and reason about, and significantly faster than querying the full parking data set.

Let’s say we want to know how many tickets were issued by the Chicago Police Department in the 1st Ward between 2013 and 2017:

select sum(ticket_count) as cpd_tickets
from wardsyearly
where ward = '1'
    and year >= 2013
    and year <= 2017
    and unit_description = 'CPD'

The answer is 64,124 tickets. This query took 119 milliseconds on my system when I ran it, while a query to obtain the equivalent data from the raw parking records takes minutes rather than fractions of a second.

The Database as the “Single Source of Truth”

I promised myself when I started this project that all calculations and analysis would be done with SQL and only SQL. That way, if there's a problem with the data in the front end, there's only one place to look, and if there's a number displayed in the front end, the only transformation it undergoes is formatting. There were moments when I wondered if this was crazy, but it has turned out to be perhaps my best choice in this project.

With common table expressions (CTE), part of most SQL environments, I was able to do powerful things with a clear, if verbose, syntax. For example, we rank and bucket every ward by every key metric in the data. Without CTEs, this would be a task best accomplished with some kind of script with gnarly for-loops or impenetrable map/reduce functions. With CTEs, we can use impenetrable SQL instead! But at least our workflow is declarative and ensures any display of the data can and should contain no additional data processing.

Here’s an example of a CTE that ranks wards on a couple of variables using the intermediate summary view from above. Our real queries are significantly more complex, but the fundamental concepts are the same:

with year_bounds as (
    select 2013 as min_year, 2017 as max_year
),
wards_toplevel as (
    select
        ward,
        sum(ticket_count) as ticket_count,
        sum(total_payments) as total_payments
    from wardsyearly, year_bounds
    where (year >= min_year and year <= max_year)
    group by ward
)
select
    ward,
    ticket_count,
    dense_rank() over (order by ticket_count desc) as ticket_count_rank,
    total_payments,
    dense_rank() over (order by total_payments desc) as total_payments_rank
from wards_toplevel;

Geocoding

Geocoding the data — turning handwritten or typed addresses into latitude and longitude coordinates — was a critical step in our process. The ticket data is fundamentally geographic and spatial. Where a ticket is issued is of utmost importance for analysis. Because the input addresses can be unreliable, the address data associated with tickets was exceptionally messy. Geocoding this data was a six-month, iterative process.

An important technique we use to clean up the data is very simple. We “normalize” the addresses to the block level by turning street numbers like “1432 N. Damen” into “1400 N. Damen.” This gives us fewer addresses to geocode, which made it easier to repeatedly geocode some or all of the addresses. The technique doesn’t improve the data quality itself, but it makes the data significantly easier to work with.
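A minimal sketch of that normalization step, assuming addresses that begin with a street number (this illustrates the idea rather than reproducing the project's exact code):

import re

def normalize_to_block(address):
    # "1432 N. Damen" -> "1400 N. Damen"; addresses without a leading
    # street number are returned unchanged.
    match = re.match(r"^(\d+)\s+(.+)$", address.strip())
    if not match:
        return address
    number, street = match.groups()
    block = (int(number) // 100) * 100
    return f"{block} {street}"

print(normalize_to_block("1432 N. Damen"))  # 1400 N. Damen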

Ultimately, we used Geocodio and were quite happy with it. Google's geocoder is still the best we've used, but Geocodio is close and has a more flexible license that allowed us to store, display and distribute the data, including in our Data Store.

We found that the underlying data was hard to manually correct because many of the errors were because of addresses that were truly ambiguous. Instead, we simply accepted that many addresses were going to cause problems. We omitted addresses that Geocodio wasn't confident about or couldn't pinpoint with enough accuracy. We then sampled and tested the data to find the true error rate.

About 12 percent of addresses couldn’t be used. Of the remaining addresses, sampling showed them to be about 94 percent accurate. The best we could do was make the most conservative estimates and try to communicate and disclose this clearly in our methodology.

To improve accuracy, we worked with Matt Chapman, a local civic hacker, who had geocoded the addresses without normalization using another service called SmartyStreets. We shared data sets and cross-validated our results. SmartyStreets’ accuracy was very close to Geocodio's. I attempted to see if there was a way to use results from both services. Each service did well and struggled with different types of address problems, so I wanted to know if combining them would increase the overall accuracy. In the end, my preliminary experiments revealed this would be technically challenging with negligible improvement.

Deployment and Development Tools

The rig uses some simple shell commands to handle deployment and building the database. For example:

make all
make db
grunt publish
grunt unpublish
grunt publish --target=production

Dynamic Search With Microservices

Because we were building a site with static pages and no server runtime, we had to solve the problem of offering a truly dynamic search feature. We needed to provide a way for people to type in an address and find out which ward that address is in. Lots of people don’t know their own wards or aldermen. But even when they do, there’s a decent chance they wouldn’t know the ward for a ticket they received elsewhere in the city.

To allow searching without spinning up any new servers, we used Mapbox's autocomplete geocoder, AWS Lambda to provide a tiny API, our Amazon Aurora database, and Serverless to tie it all together.

Mapbox provides suggested addresses, and when the user clicks on one, we dispatch a request to the back-end service with the latitude and longitude, which are then run through a simple point-in-polygon query to determine the ward.

It’s simple. We have a serverless.yml config file that looks like this:

service: il-tickets-query

plugins:
  - serverless-python-requirements
  - serverless-dotenv-plugin

custom:
  pythonRequirements:
    dockerizePip: non-linux
    zip: true

provider:
  name: aws
  runtime: python3.6
  stage: ${opt:stage,'dev'}
  environment:
    ILTICKETS_DB_URL: ${env:ILTICKETS_DB_URL}
  vpc:
    securityGroupIds:
      - sg-XXXXX
    subnetIds:
      - subnet-YYYYY

package:
  exclude:
    - node_modules/**

functions:
  ward:
    handler: handler.ward
    events:
      - http:
          method: get
          cors: true
          path: ward
          request:
            parameters:
              querystrings:
                lat: true
                lng: true

Then we have a handler.py file to execute the query:

try:
    import unzip_requirements
except ImportError:
    pass

import json
import logging
import numbers
import os

import records

log = logging.getLogger()
log.setLevel(logging.DEBUG)

DB_URL = os.getenv('ILTICKETS_DB_URL')


def ward(event, context):
    qs = event["queryStringParameters"]
    db = records.Database(DB_URL)
    rows = db.query("""
        select ward
        from wards2015
        where st_within(
            st_setsrid(ST_GeomFromText('POINT(:lng :lat)'), 3857),
            wkb_geometry
        )
    """, lat=float(qs['lat']), lng=float(qs['lng']))

    wards = [row['ward'] for row in rows]

    if len(wards):
        response = {
            "statusCode": 200,
            "body": json.dumps({"ward": wards[0]}),
            "headers": {
                "Access-Control-Allow-Origin": "projects.propublica.org",
            },
        }
    else:
        response = {
            "statusCode": 404,
            "body": "No ward found",
        }

    return response
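During development, it can be handy to poke at the deployed endpoint directly. Here's a quick sketch of doing that from Python; the API Gateway URL is a placeholder, since Serverless generates the real one when it deploys the function:

import requests

# Hypothetical endpoint; Serverless prints the real URL after deployment.
API_URL = "https://example.execute-api.us-east-1.amazonaws.com/dev/ward"

resp = requests.get(API_URL, params={"lat": 41.8781, "lng": -87.6298})
if resp.ok:
    print(resp.json())  # e.g. {"ward": "42"} for a downtown address
else:
    print(resp.status_code, resp.text)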

That’s all there is to it. There are plenty of ways it could be improved, such as making the cross-origin resource sharing policies configurable based on the deployment stage. We’ll also be adding API versioning soon to make it easier to maintain different site versions.

Minimizing Costs, Maximizing Productivity

The cost savings of this approach can be significant.

Using Amazon Lambda costs pennies per month (or less), while running even the smallest servers on Amazon’s Elastic Compute Cloud service usually costs quite a bit more. The thousands of requests and tens of thousands of milliseconds of computing time used by the app in this example are, by themselves, well within Amazon’s free tier. Serving static assets from Amazon’s S3 service also costs only pennies per month.
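To put rough numbers on that, here's a back-of-the-envelope sketch in Python. The request volume is hypothetical and the rates are Lambda's published pay-per-use prices as of this writing, so check current AWS pricing before relying on it:

# Hypothetical monthly usage for a small search endpoint like this one.
requests_per_month = 50_000
avg_duration_ms = 200
memory_gb = 128 / 1024  # a 128 MB function, expressed in GB

# Published pay-per-use rates at the time of writing (check current pricing).
price_per_million_requests = 0.20
price_per_gb_second = 0.0000166667

gb_seconds = requests_per_month * (avg_duration_ms / 1000) * memory_gb
cost = (
    (requests_per_month / 1_000_000) * price_per_million_requests
    + gb_seconds * price_per_gb_second
)

print(f"{gb_seconds:,.0f} GB-seconds, about ${cost:.2f}/month before the free tier")
# The monthly free tier (1 million requests and 400,000 GB-seconds) covers this entirely.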

Hosting costs are a small part of the puzzle, of course — developer time is far more costly, and although this system may take longer up front, I think the trade-off is worth it because of the decreased maintenance burden. The time a developer will not have to spend maintaining a Rails server is time that he or she can spend reporting or writing new code.

For The Ticket Trap app, I only need to worry about a single, highly trusted and reliable service (our database) rather than a virtual server that needs monitoring and could experience trouble.

But where this system really shines is in its increased resiliency. When using traditional frameworks like Rails or Django, functionality like search and delivering client code are tightly coupled. So if the dynamic functionality breaks, the whole site will likely go down with it. In this model, even if AWS Lambda were to experience problems (which would likely be part of a major, internet-wide event), the user experience would be degraded because search wouldn’t work, but we wouldn’t have a completely broken app. Decoupling the most popular and engaging site features from an important but less-used feature minimizes the risks in case of technical difficulties.

If you’re interested in trying this approach, but don’t know where to begin, identify what problem you’d like to spend less time on, especially after your project is launched. If running databases and dynamic services is hard or costly for you or your team, try playing with Serverless and AWS Lambda or a similar provider supported by Serverless. If loading and checking your data in multiple places always slows you down, try writing a fast SQL-based loader. If your front-end code is always chaotic by the end of a development cycle, look into implementing the reactive pattern provided by tools like React, Svelte, Angular, Vue or Ractive. I learned each part of this stack one at a time, always driven by need.

read more...

Want to Start a Collaborative Journalism Project? We’re Building Tools to Help.

Today we’re announcing new tools, documentation and training to help news organizations collaborate on data journalism projects.

Newsrooms, long known for being cutthroat competitors, have been increasingly open to the idea of working with one another, especially on complex investigative stories. But even as interest in collaboration grows, many journalists don’t know where to begin or how to run a sane, productive partnership. And there aren’t many good tools available to help them work together. That’s where our project comes in.

We’ll be sharing some of the software we built, and the lessons we learned, while creating our Documenting Hate project, which tracks hate crimes and bias-motivated harassment in the U.S.

The idea to launch Documenting Hate came shortly after Election Day 2016, in response to a widely reported uptick in hate incidents. Because data collection on hate crimes and incidents is so inadequate, we decided to ask people across the country to tell us their stories about experiencing or witnessing them. Thousands of people responded. To cover as many of their stories as we could, we organized a collaborative effort with local and national newsrooms, which eventually included more than 160 of them.

We’ll be building out and open-sourcing the tools we created to do Documenting Hate, as well as our Electionland project, and writing a detailed how-to guide that will let any newsroom do crowd-powered data investigations on any topic.

Even newsrooms without dedicated developers will be able to launch a basic shared investigation, including gathering tips from the public through a web-based form and funneling those tips into a central database that journalists can use to find stories and sources. Newsrooms with developers will be able to extend the tools to enable collaboration around any data sets.

We’ll also provide virtual trainings about how to use the tools and how to plan and launch crowd-powered projects around shared data sets.

This work will be a partnership with the Google News Initiative, which is providing financial support.

Launched in January 2017, ProPublica’s Documenting Hate project is a collaborative investigation of hate crimes and bias incidents in the United States. The Documenting Hate coalition is made up of more than 160 newsrooms and several journalism schools that collect tips from the public and records from police to report on hate. Together we’ve produced close to 200 stories. That work will continue in 2019.

We’re already hard at work writing a how-to guide on collaborative, crowd-powered data projects. We’ll be talking about it at the 2019 NICAR conference in Newport Beach, California, in March. We are also hiring a contract developer to work on this; read the job description and apply here.

The first release of the complete tools and playbook will be available this summer, and online trainings will take place in the second half of the year.

There are a thousand different ways to collaborate around shared data sets. We want to hear from you about what would be useful in our tools, and we’d like to hear from newsrooms that might be interested in testing them. Sign up for updates here.

read more...

Chasing Leads and Herding Cats: Shaping a New Role in the Newsroom

In this ever-changing industry, new roles are emerging that redefine how we do journalism: audience engagement director, social newsgathering reporter, Snapchat video producer. At ProPublica, I’ve been part of developing a new role for our newsroom. My title is partner manager, and I lead a large-scale collaboration: Documenting Hate, an investigative project to track and report on hate crimes and bias incidents.

ProPublica regularly collects large amounts of information that we can’t process by ourselves, including documents gathered in our reporting, tips solicited by our engagement journalists, and data published in our news applications.

Since the beginning, we’ve seen collaboration as a key way to make sure that all of this reporting material can be used to fulfill our mission: to make an impact in the real world. Collaboration has been a fundamental part of ProPublica’s journalism model. We make our stories available to republish for free through Creative Commons and usually co-publish or co-report stories with other news outlets. When it comes to large data sets, we often offer up our findings to journalists or the public to enable new reporting. It’s a way of spreading the wealth, so to speak. Collaborations are typically a core responsibility of each editor in the newsroom, but some of our projects have large-scale collaborations at their center, and they require dedicated and sustained attention.

My role emerged after Electionland 2016, one of the largest-ever journalism collaborations, which many ProPublica staff members pitched in to organize. While the project was a journalistic success, its editors learned a key lesson about the need for somebody to own the relationship with partner newsrooms. In short, we came to think that the collaboration itself was something that needed editing, including recruiting partners, making sure they saw the reporting tips they needed to see, and tracking what partners were publishing. It also reinforced the need for a more strategic tip-sharing approach after the success of large engagement projects, like Lost Mothers and Agent Orange, which garnered thousands of leads — and more stories than we had time to tell.

That’s how my role was born. Soon after the 2016 election, ProPublica launched Documenting Hate. Hiring a partner manager was the first priority. We also hired a partner manager to work on Electionland 2018, which will cover this year’s midterm elections.

Our newsroom isn’t alone in dedicating resources to this type of role. Other investigative organizations, such as Reveal from the Center for Investigative Reporting and the International Consortium of Investigative Journalists, staffed up to support their collaborations. Heather Bryant — who founded Project Facet, which helps newsrooms work together — told me there are at least 10 others who manage long-term collaborations at newsrooms across the country, from Alaska, to Texas, to Pennsylvania.

What I Do

My job is a hybrid of roles: reporter, editor, researcher, social media producer, recruiter, trainer and project manager.

I recruited our coalition of newsrooms, and I vet and onboard partners. To date, we have more than 150 national and local newsrooms signed on to the project, plus nearly 20 college newspapers. I speak to a contact at each newsroom before they join, and then I provide them with the materials they need to work on the project. I’ve written training materials and conduct online training sessions so new partners can get started more quickly.

The core of this project is a shared database of tips about hate incidents that we source from the public. For large collaborations like Documenting Hate and Electionland, our developer Ken Schwencke builds these private central repositories, which are connected directly to our tip submission form. We use Screendoor, a form-building service, to host the tip form.

In large-scale collaborations, we invite media organizations to be part of the newsgathering process. For Documenting Hate, we ask partners to embed this tip submission form to help us gather story leads. That way, we can harness the power of different audiences around the country, from Los Angeles Times readers, to Minnesota Public Radio listeners, to Univision viewers. At ProPublica, we try to talk about the project as much as we can in the media and at conferences to spread the word to both potential tipsters and partners.

The tips we gather are available to participating journalists — helping them to do their job and produce stories they might otherwise not have found. ProPublica and our partners have reported more than 160 stories, including pieces about hate in schools, on public transportation and on the road, in the workplace, and at places of worship, and incidents involving the president’s name and policies, to name just a few. Plus, each authenticated tip acts as a stepping stone for other partners to build on their reporting.

At ProPublica, we’ve been gathering lots of public records from police on hate crimes to do our own reporting and sharing those records with partners, too. Any time we produce an investigation in-house, I share the information we have available so reporters can republish or localize the story.

As partner manager, I’m a human resource to share knowledge. I’ve built expertise in the hate beat and serve as a kind of research desk for our network, pointing reporters to sources and experts. I host a webinar or training once a month to help reporters understand the project or to build this beat, and I send out a weekly internal newsletter.

Another part of my job is being an air-traffic controller, sending out incoming tips to reporters who might be interested and making sure that multiple people aren’t working on the same tip at the same time. This is especially important in a project like ours; given the sensitivity of the subject, we don’t want to scare off tipsters by having multiple reporters reach out at once. I pitch story ideas based on patterns I’ve identified to journalists who might want to dig further. I’m constantly researching leads to share with our network and with specific journalists working on relevant stories.

And I’m also a signal booster: When partners publish reporting on hate, we share their work on our social channels to make sure these stories get as big an audience as possible. We keep track of all of the stories that were reported with sourcing from the project to make them available in one place.

The Challenges

While the Documenting Hate project has produced some incredible work, this is not an easy job.

Many journalists are eager to work with ProPublica, but not always with each other; it can be a process to get buy-in from editors to collaborate with a network of newsrooms, especially at large ones where there are layers of hierarchy. Some reporters agree to join but don’t make it all the way through onboarding, which involves several steps that may require help from others in their newsrooms. Some explore the database and don’t see anything they want to follow up on right away, and then lose interest. And occasionally journalists are so overwhelmed with their day-to-day work that I rarely hear back from them after they’ve joined.

Turnover and layoffs, which are depressingly common in our industry, mean having to find and onboard new contacts in partner newsrooms, or relying on bounce-back emails to figure out who’s left. It also means that sometimes engaged reporters move into positions at new companies where they don’t cover hate, leaving a gap in their old newsrooms. A relentless news cycle doesn’t help, either. For example, after the 2017 violence in Charlottesville, Virginia, caused a renewed surge in interest in the hate beat, a series of deadly hurricanes hit, drawing a number of reporters onto the natural disaster beat for a time.

And because of the sensitivity of the incidents, tipsters sometimes refuse to talk after they’ve written in, which can be discouraging for reporters. Getting a story may mean following up on a dozen tips rather than just one or two. Luckily, since we’ve received thousands of tips and hundreds of records, active participants in our coalition have found plenty of material to work on.

The Future of Partnerships

While collaborations aren’t always easy, I believe projects like Documenting Hate are likely to be an important part of the future of journalism. Pooling resources and dividing and conquering on reporting can help save time and money, which are in increasingly short supply.

Some partnerships are the fruit of necessity, linking small newsrooms in the same region or state, like Coast Alaska, or creating stronger ties between affiliates within a large network, like NPR. I think there’s huge potential for more local collaborations, especially with shrinking budgets and personnel. Other partnerships emerge out of opportunity, like the Panama Papers investigation, which was made possible by a massive document leak. If more newsrooms resisted the urge for exclusivity — a concept that matters far more to journalists than to the public — more partnerships could be built around data troves and leaks.

Another area of potential is to band together to request and share public records or to pool funding for more expensive requests; these costs can prevent smaller newsrooms from doing larger investigations. I also think there’s a ton of opportunity to collaborate on specific topics and beats to share knowledge, best practices and reporting.

With new partnerships comes the need for someone at the helm, navigating the ship. While many newsrooms’ finances are shrinking, any collaborative project can have a coordinator role baked into the budget. An ideal collaborations manager is a journalist who understands the day-to-day challenges of newsrooms, is fanatical about project management, is capable of sourcing and shaping stories, and can track the reach and impact of work that’s produced.

We all benefit when we work together — helping us reach wider audiences, do deeper reporting and better serve the public with our journalism.

read more...