All posts by media-man

Mercy Street Season 2 Digital Assets

Mercy Street returns this Sunday, January 22, and a wide variety of digital assets and opportunities are available to support the second season of this exciting drama.

Promotional Assets

A full suite of promotional assets is available now to help you continue your support of Season 2 of Mercy Street, including several recently-posted video elements and more to roll out during the season. Check out the assets listed below on the Source, and learn more about some of the newest elements on myPBS.

Website and Social Media
Building on the dynamic Mercy Street website from Season 1, new assets include an expanded character hub, a Season 2 episode guide and sneak-peek photo galleries, a host of Season 2 costumes and costume sketches, and more information about the real people, battles, and places behind this historical series.

The Mercy Street Revealed blog follows series production news, cast interviews and more. In-season, look out for historical analysis and GIF recaps for each episode — check back each Monday! Two bloggers have been added to the expert lineup: Kenyatta Berry, co-host of Genealogy Roadshow, and Jeffrey S. Reznick, PhD, of the National Library of Medicine at the National Institutes of Health. The Tintype Me photo maker is back (and available as a Bentomatic) and the Mercy Street Social Hub has been expanded to highlight all the #MercyStreetPBS social activities.

360°
Lastly, immerse yourself in two 360° projects! First, experience “A Letter Home,” a narrative 360° VR video that takes place in the Mansion House Hospital, the main location for Mercy Street. Next, spin around 360° images and experience a behind-the-scenes look at the making of Season 2.

As with Season 1, all episodes of Mercy Street will be available for two weeks after broadcast via PBS.org, on PBS apps for iOS and Android devices, and via station-branded digital platforms including Roku, AppleTV, Amazon Fire TV and Chromecast. After the two-week free streaming window, episodes will be available in Passport.

Follow along with the Mercy Street social accounts using @PBS and @MercyStreetPBS. A social media outline highlighting in-season assets for Mercy Street will be made available shortly, with suggested language to promote upcoming web assets (including the new 360° projects, sneak-peek photo galleries, and GIF recaps) as well as episodic preview videos. Other assets include costume sketches, a story map with episodic tie-ins, and a "Meet the New Characters" video.

TechCon 2017 Registration is Now Open

Viva Las Vegas!

It's that time of year again when public media descends upon Las Vegas to get our Tech on.

TechCon 2017 registration is now open. This year's digital track will focus on multi-platform strategy and tactics, with sessions covering a variety of topics. More specifics on the digital track will be available in the coming weeks, but look for sessions that explore:

Social Media Collaborations and Facebook LIVE

Building a Digital Department

Optimizing Production Workflows and Digital First Content

Serving Niche Audiences through Digital Marketing

Engaging Audiences through a Multi-platform Approach

and many more!

Also, check out the Digital Immersion Project, a professional development opportunity created by PBS Digital with support from CPB that will be launching at this year's TechCon. More information can be found here.

The agenda can be found here, and will be updated as we get closer to conference time.

PBS has reserved a block of rooms at Caesar's Palace, and hotel information can be found here.

Key dates to remember:
  • Early Bird registration deadline: Friday, March 17, 2017
  • Hotel cut-off date: Friday, March 17, 2017
  • Advance Registration closes: Friday, April 7, 2017

Stay tuned for more information... Register now!

APPLY NOW: PBS Digital Launches the Digital Immersion Project, Kicking off at TechCon 2017

It’s a new year, and with it comes the urge to set new goals and objectives, buoyed by the promise of new possibilities.

In the spirit of harnessing those ambitions, PBS Digital is excited to announce a new professional development opportunity for stations and their digital professionals aimed at improving local efforts across platforms.

The Digital Immersion Project, developed by PBS Digital with support from the Corporation for Public Broadcasting, is a unique opportunity that mixes in-depth training, hands-on workshops, and collaborative mentorship to improve overall expertise in digital strategies and tactics. Twenty-five (25) digital professionals from PBS member stations will be selected to become Digital Immersion Partners. 

The professional development program also focuses on strategic and organizational tactics, with the selected participants being able to draw on the project’s learnings and a national network of public media contacts to further digital success at the local level. Follow the link below to apply, or keep reading for more details.

Follow this link to apply: https://www.surveymonkey.com/r/digitalimmersionproject
Deadline: February 28, 2017


KEY DIGITAL IMMERSION PROJECT ACTIVITIES

After an initial onboarding, meet and greet, and assessments, the core in-person training week of the project begins with participants sponsored to attend PBS TechCon 2017 from April 19-21. In addition to general conference activities, Digital Immersion Partners will attend a curated program of workshops, sessions, and events that will lead into post-conference follow-up and goal setting.

For more specifics on activities and schedule, check out the first page of the application.

Following the initial immersion training experience, participants will work with each other and mentors for six additional months to not only gain knowledge, but also achieve a goal specific to their station.

Key concepts covered by the training curriculum include:
  • Digital-first Production
  • Distribution Platforms
  • Key Infrastructure Technologies
  • Audience Development 
  • Organizational Structure and Culture
  • Multi-platform Strategy and Tactics 
  • Digital Marketing
  • Data Analysis and Metrics
Upon completion, each participant will also have completed a Digital Strategy exercise that takes the project’s learnings and maps them to their station’s goals, complete with tactics and deliverables.

APPLICANT CRITERIA AND DETAILS
The primary requisite for applicants is responsibility for digital strategy or execution at a PBS Member Station, whether direct or part of a larger set of job duties. Recipients will be selected from a wide range of skillsets and experience in public media.

Ideal applicants have demonstrated leadership in their field of expertise and a commitment to strengthening public media’s reach across platforms to communities around the country.

Since the project emphasizes ongoing learning, applicants should articulate how they would use this digital training to improve their station’s overall digital efforts. A letter of support from a supervisor or GM is also required for the application.

Recipients will be required to participate in all activities detailed in the application introduction, available here.

The Digital Immersion Project application deadline is February 28, 2017, with recipients being announced in March.

Our goal for the Digital Immersion Project is to work with individuals from stations of all sizes and license types. Applications will be reviewed by a multi-discipline group of colleagues from PBS and the Digital Media Advisory Council (DMAC), a representative body of station leaders focused on advocating for station digital needs.

PBS Digital is excited to offer this unique opportunity to stations and their digital professionals with CPB’s support. We hope the talented professionals across the system join us in further developing our industry’s multi-platform capabilities.

If you have any questions, feel free to reach out to PBS Digital at spi@pbs.org.


DIGITAL IMMERSION PROJECT OPPORTUNITY SUMMARY
What: An immersive professional development opportunity developed by PBS Digital, with support from the Corporation for Public Broadcasting, that mixes in-depth training, hands-on workshops, and collaborative mentorship to improve overall expertise in digital strategies and tactics. Participants will attend PBS TechCon 2017 for an in-person training week, followed by six months working with each other and mentors not only to gain knowledge but also to achieve a goal specific to their station.
Application Deadline: February 28, 2017
Application Link: https://www.surveymonkey.com/r/digitalimmersionproject
Core Applicant Criteria: PBS Member Station employee; any discipline or skillset with total or partial responsibility for station digital efforts; all levels and all station sizes/types welcome
Project Duration: 7 months (April - October 2017)
Total Available Slots: Twenty-five (25)


Hacks/Hackers enters 2017

Welcome to a new year, hacks and hackers! 2016 was a tumultuous one for many Hacks/Hackers groups around the world, and 2017 may or may not be more stable. Last week we featured NiemanLab’s...

Visit Hacks/Hackers to read the full post and join our community.

Innovative Ideas & Amazing Possibilities: The Digital Media Advisory Council


by Cheraine Stanford | Senior Producer/Director | WPSU

What do you get when you gather a group of smart, thoughtful people from public media stations around the country in a room to talk about the future of digital media?

The answer is a lot of innovative ideas and strategies and a clear sense of the amazing possibilities on the horizon for public media.

For two days in November, members of the PBS Digital Media Advisory Council (DMAC) gathered for the council's annual Summit at PBS Headquarters to discuss the future of the digital space in public media. We shared successes and challenges and spent some time defining our role in advocating for digital in public media. Members serve as a voice for stations large and small.

As a newly minted member of the council, this was my first DMAC Summit. I filled pages with notes about new program and funding ideas and information learned from presenters. I also shared feedback and ideas from my experiences at WPSU.

A few highlights from the Summit:
  • We heard from PBS Digital & Marketing about work they are doing to better understand how younger audiences consume media as well as how we can successfully incorporate services like Passport.
  • We heard from a representative from Facebook Strategic Partnerships who helped us think about ways to use the platform creatively to reach our desired audiences. He noted that successful organizations are creating platform-specific content.
  • We heard presentations from DMAC members about successful events around phenomena like Pokémon GO and ways those events brought in people who had never visited their stations.
  • DMAC members shared advice and questions to ask as we navigate the digital world. The most important ones for me were: what are your goals and who are you trying to reach? It’s very easy to get caught up in new technologies or the latest social media platform and lose sight of our goals and how these trends can help us serve our communities.
  • We discussed ways to work across platforms to collect data and how to use analytics to determine whether we are reaching and serving our audiences.
  • We heard from PBS Digital Studios about ways they can work with stations to develop projects, give advice and help them navigate digital. PBS is a great resource for stations.
  • We discussed the exciting sessions coming up at the PBS TechCon (the digital and technology conference) which will be held April 19-21 in Las Vegas.

I realized that things that in the past would have been seen as hindrances for smaller stations (fewer people, less funding) don’t have to be limitations in the digital space. There are ways to use a smaller station’s size and ability to move quickly to your advantage.

Innovations in digital media are already impacting every aspect of the work we do. We have the opportunity to use cutting-edge technology to tell compelling stories in new ways, to broaden and diversify our audiences, to engage with our communities in ways we couldn’t before, to reach new supporters and find new ways of fundraising.

The possibilities are endless, which can be daunting, but it’s been important to me to focus on how digital can help public media better fulfill its mission of “using media to educate, inspire, entertain and express the diversity of perspectives.”

Above all else, DMAC reinforced a realization I have had several times during this past year when I have been a public media Next Generation Leadership Fellow. That is, that as public media stations, we are our best resources. Stations in major and smaller markets are doing incredible work, taking chances, making mistakes, learning from them and having great successes.

One of the most important roles of the DMAC is fostering this kind of station collaboration.

In a time when technology, platforms, and modes of storytelling are changing rapidly, it’s important for all of us in public media to be sharing our successes and challenges. No one has all the answers, there is no one-size-fits-all strategy, but we can all be working together to learn and grow and make an impact.

Because, as our very wise DMAC Chair Colleen Wilson noted: Digital is our present. Digital is our future.


2016 Press Picks: Best of Radiotopia

2016 was a banner year for Radiotopia. We performed our first live show, ran a hugely successful Podquest competition (look for Ear Hustle this summer!), added three new podcasts, and much more. We are so grateful to our fans for the love and support this year. To give 2016 a proper send-off, we gathered up the ‘Best of 2016’ podcast episode articles published this month, many of which included Radiotopia episodes. Check out our roundup below and take a listen, or load up our playlist. More audio goodness to come in 2017!

The Memory Palace

From The Atlantic – The Wheel and Below, from Above

From Vulture – Gallery 742

From IndieWire – Finishing Hold

From Outside Magazine – Artist in Landscape

The Heart

From Vulture, Audible Feast, and The Atlantic – Mariya, Extended Cut

From The Guardian and The Atlantic – Silent Evidence (four parts)

From Audible Feast – My Everything, My Bear

Criminal

From Wired – Money Tree

From The Atlantic – One-Eyed Joe

From Audible Feast – The Editor

From IndieWire – Melinda and Judy (two parts)

Theory of Everything

From The Atlantic – Honeypot and Sudculture (two parts)

From Outside Magazine – A Light Touch and a Slight Nudge

Love + Radio

From The Guardian, Thrillist, and Outside Magazine – A Girl of Ivory

From The Atlantic – The Man in the Road

Millennial

From The Atlantic – Double Life and You Can’t Go Home Again

From New Statesman – Men, Moms & Money

Mortified

From The Atlantic – Totally Juvenile Election Special

From The Guardian – Summer Camp Spectacular

99% Invisible

From The Guardian – Mojave Phone Booth

From Wired – Miss Manhattan

From Outside Magazine – The Green Book

From Audible Feast – The Giftschrank

From NPR – America’s Last Top Model

Song Exploder

From The Guardian – MGMT’s Time to Pretend

From Audible Feast – Weezer’s Summer Elaine and Drunk Dori

The Allusionist

From New Statesman – Getting Toasty and Please

Radio Diaries

From Thrillist – Majd’s Diary

From Audible Feast – From Flint to Rio

Strangers

From Thrillist – The Truth

From IndieWire – Jo & Fayaz

The Truth

From IndieWire – Commentary Track

Check out our playlist with (almost) all the episodes here.

 

The post 2016 Press Picks: Best of Radiotopia appeared first on PRX.

Sherlock Season 4: Tune in or Streaming Event


To take advantage of the new season of Sherlock on Masterpiece, episodes will be available to stream in COVE directly after the East Coast broadcast. This opportunity will allow stations to promote either broadcast tune-in or online streaming starting at 10PM ET.

Streaming release times for these episodes
  • January 1, 2017: Episode 1, The Six Thatchers | PBS Video Link
  • January 8, 2017: Episode 2, The Lying Detective
  • January 15, 2017: Episode 3, The Final Problem
These episodes will now be available by the time each East Coast broadcast concludes.

Each episode will be available for two weeks after broadcast. Passport rights are not available for this program. Promotional assets for Sherlock on Masterpiece are available on Pressroom and the Source.

Announcing the Project Catapult Cohort!

PRX is excited to announce the first cohort of Project Catapult, an innovative podcast training project for public media stations, made possible by a $1 million grant from the Corporation for Public Broadcasting (CPB).

The project initially intended to include five stations, but will now total seven. “The final pool of applicants was so strong, we found a way to expand the first Catapult class to seven station teams,” said PRX CEO Kerri Hoffman.

The stations are located across the US, have varying market sizes and represent diverse production teams and topics. They’ll kick things off at the PRX Podcast Garage in Cambridge, MA in January with a podcast bootcamp, and will continue with an intensive 20-week production sprint.

The Catapult process will create a professional network of diverse talent across the country, and help the podcasters hone skills in digital content development, audience engagement and monetization. At the end of the curriculum each station, in co-production with PRX, will launch a new, or re-launch an existing, podcast.

PRX has hired Enrico Benjamin as Catapult’s project director. Benjamin is an Emmy award-winning producer with a background in video and digital production, most recently at KING-TV in Seattle. During his time at Stanford University, Benjamin was exposed to design thinking, a method that will guide Project Catapult.

“Through this innovative program, we’re pleased to help more stations increase their multimedia production capacity and increase the diversity of voices heard in public media,” said Erika Pulley-Hayes, CPB vice president, radio. “We hope the new podcasts that these stations produce will lay the groundwork for more multimedia content that connects with a broad range of audiences.”

“Project Catapult is an ambitious first step,” said PRX CEO Hoffman.  “We are investing in station capacity so they can make digital content that is sustainable and relevant, both locally and beyond.”

Project Catapult will culminate in an open listening session in Boston in May to show off the work and progress to date.

Project Catapult Stations

Inflection Point, KALW – San Francisco, CA
Extraordinary women are leading the change in our world. Join the KALW team as they tell their stories – to help us understand a moment when women are embracing their power as never before, and to inspire a future generation of women leaders.

Versify, Nashville Public Radio – Nashville, TN
Versify is a podcast with a twist on storytelling: Nashville poets travel to neighborhoods across the city, hear stories from people they’ve never met, and then capture them in verse.

Us & Them, West Virginia Public Radio
Stories of people on either side of the fault lines that divide Americans, from culture wars, to education and religion, to the basic beliefs about what defines Americans in a troubled time. From DuPont Award-winning producer Trey Kay.

We Live Here, St. Louis Public Radio – St. Louis, MO
We Live Here empowers you by untangling policy and systems so you can better understand how race and class influence everything from what we learn to how long we live.

Que Pasa Midwest, WNIN – Evansville, IN
Whether you speak Spanish, English, or both, come along on a rich journey of discovering El Sueño Americano, the many definitions and faces of the American Dream with Que Pasa Midwest.

Out of Blocks, WYPR – Baltimore, MD
Each episode is a collage of life-stories from a single city block. The episodes are rich with the sounds of people in their own spaces, talking about life on their own terms. The soundscape is enhanced when the natural sounds of the block are fused with an original musical score. There is no host; rather, the people on the block are the hosts.

Second Wave, KUOW – Seattle, WA
Thanh Tan takes the listener along on a quest to better understand her Vietnamese American identity and to explore the heartbreak and triumph of refugees who fled Southeast Asia en masse 40 years ago after the Vietnam War to pursue new lives in the United States.

—-

About PRX
PRX is shaping the future of public media content, talent and technology. PRX is a leading creator and distributor, connecting audio producers with their most engaged, supportive audiences across broadcast, web and mobile. A fierce champion of new voices, new formats, and new business models, PRX advocates for the entrepreneurial producer. PRX is an award-winning media company, reaching millions of weekly listeners worldwide.  For over a dozen years, PRX has operated public radio’s largest distribution marketplace, offering thousands of audio shows including This American Life, The Moth Radio Hour and Reveal. In 2015, PRX opened the Podcast Garage, a community recording studio and educational hub dedicated to the craft of audio storytelling. Follow us on Twitter at @prx.

The post Announcing the Project Catapult Cohort! appeared first on PRX.

How We Cleaned Up And Ranked Our Listeners’ Favorite Albums of 2016

All Songs Considered asks listeners for their favorite albums of 2016

At the beginning of December 2016, All Songs Considered followed a nice tradition and asked listeners for their favorite albums of 2016. Users could enter up to five different albums in a Google form, ranked according to their preferences. The poll was open for eight days and resulted in more than 4,500 entries.

In the end, the All Songs Considered team wanted a ranked list of the best albums. Sounds easy, right?

But data is always messy and there were a few problems to solve with this dataset. First, there were some obviously not-so-awesome things going on in the Google spreadsheet that gathered the results:

Different spelling, empty rows, multiple entries by one person: Ugh

In addition to cleaning up the data to make it usable, we had to decide on a weighting algorithm for the five different ranks and calculate it.

Since the whole project had a tight deadline, our process wasn’t pretty, but we did it. Here’s how:

Step 1: Combining Like Entries

The poll asks listeners to type in the artist and album, separated by a comma. But humans are faulty creatures who make spelling mistakes, don’t obey the rules, or don’t remember the name of an album correctly. This faultiness results in a nice compilation of a dozen different ways to write one and the same thing:

Bon Iver - 22, a Million
Bon Iver -22, A Million
Bon Iver 22 a million
Bon Iver, '22, A Million'
BON IVER-22 A MILLION
Bon Iver
bon iver, "22, a million"
Bon Iver, 22/10
Bon Iver, 20 a Million 
Bon Iver 22
bon iver, 22 a million
Bon Iver, 22, A Million
Bon Iver, 22a million
BonIver, 22 a million
Bon Iver, 33 a million
Bon Iver, 22 million
Bon Iver,22,a Million
22, A Million 
Bon Iver: 22, A million 
bon iver. 22,a million
…

…and that’s still a relatively easy album name. I rely on your imagination to think of all the possible ways to spell “A Tribe Called Quest, We Got It from Here… Thank You 4 Your Service”.

To fix that mess, we used a combination of cluster analysis in OpenRefine and “Find and Replace” in Google Spreadsheet.

First, OpenRefine. To run the cluster analysis on just one column instead of five different ones, we needed to transform the data from a “wide” format into a “long” format. This can be easily achieved, e.g. with R:

library(reshape2)
# Read the poll export and melt the five rank columns into one long column
d = read.csv("data.csv", stringsAsFactors = FALSE)
d = melt(d, id.vars = c('Timestamp'))
write.csv(d, "data_long.csv")
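If pandas is your tool of choice instead of R, the same wide-to-long reshape can be sketched with `DataFrame.melt` (the column names and sample rows below are illustrative, not the actual poll export):

```python
import pandas as pd

# Wide format: one row per response, one column per rank (illustrative names)
wide = pd.DataFrame({
    "Timestamp": ["12/01 10:02", "12/01 10:05"],
    "Rank.1": ["Bon Iver, 22, A Million", "Beyonce, Lemonade"],
    "Rank.2": ["Beyonce, Lemonade", "Bon Iver, 22, A Million"],
})

# Long format: one row per (response, rank) pair -- the pandas analogue of melt()
long = wide.melt(id_vars="Timestamp", var_name="rank", value_name="album")
```

After this, every artist-album string sits in a single `album` column, ready for clustering.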

Then we imported the CSV into OpenRefine, selected our one column that states all artist-album entries and chose Facet > Text Facet and then Cluster.

Text Facet in OpenRefine

So what is cluster analysis? Basically, OpenRefine can run different algorithms on the data to cluster similar entries. Depending on the algorithm, “similar” is defined differently. OpenRefine offers different methods and keying functions, and we used all of them one after another.
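OpenRefine's default "fingerprint" keying method, for instance, normalizes case, punctuation, and token order so that near-identical strings collide on the same key. A minimal Python re-implementation of that idea (a sketch of the concept, not OpenRefine's actual code) looks like this:

```python
import re
from collections import defaultdict

def fingerprint(s: str) -> str:
    # Lowercase, replace punctuation with spaces, then dedupe and sort the tokens
    tokens = re.sub(r"[^\w\s]", " ", s.lower()).split()
    return " ".join(sorted(set(tokens)))

entries = [
    "Bon Iver, 22, A Million",
    "bon iver  22 a million",
    "BON IVER-22 A MILLION",
]

# Entries whose fingerprints collide are candidates to merge under one name
clusters = defaultdict(list)
for e in entries:
    clusters[fingerprint(e)].append(e)
```

All three spellings above end up in a single cluster, which is exactly the merge decision OpenRefine then asks you to confirm.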

Clustering in OpenRefine

OpenRefine then lets us select and merge similar entries and give them all a new name.

After successfully running through lots of different cluster methods, our data was approximately 95 percent clean. Our Bon Iver entries looked like this:

Bon Iver, 22, A Million
Bon Iver, 22, A Million
Bon Iver, 22, A Million
Bon Iver, 22, A Million
Bon Iver
Bon Iver, 22, A Million
Bon Iver, 22, A Million
22, A Million 
Bon Iver, 22, A Million
Bon Iver, 22, A Million
…

So much better! But OpenRefine doesn’t take care of the cases in which only the album or artist is mentioned. So we imported the data back into Google Spreadsheet and took care of that by hand – with a combination of “Find and Replace” and sorting the list alphabetically (which places every bare “Bon Iver” right before “Bon Iver, 22, A Million”).

Step 2: Roughly clean up with a Python script

Once we made sure that the albums were written the same way, they were countable. But we still needed to count only entries from individual listeners who didn’t abuse the poll. To do so, we ran the cleaned data through a Python script. The Pandas library is a great choice for our first easy task, dropping the empty rows:

# Drop rows where all five rank columns are empty (keep partial entries)
albums = albums.dropna(subset=RANKS, how="all")

But Pandas proved to be a bad choice for the next task: deleting duplicate rows that appear within one hour. Doing that makes sure that we eliminate the entries that obviously come from one and the same person. We saw dozens of these copy-and-pasted entries (especially for the album Mind of Mine by Zayn). To get rid of all the duplicate entries within one hour, we first transformed the Pandas dataframe to a Python list and then checked for identical entries:

# Do row values match? If not, not a dupe
for rank in RANKS:
    if row1[rank] != row2[rank]:
        return False
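Putting that fragment in context, the whole dedup pass can be sketched as a self-contained function (the `time` key, row layout, and sorted-by-time assumption are ours for illustration; the published script differs in detail):

```python
from datetime import datetime, timedelta

RANKS = ["Rank.1", "Rank.2", "Rank.3", "Rank.4", "Rank.5"]

def is_dupe(row1, row2):
    # Two rows count as duplicates only if all five rank entries match
    return all(row1[r] == row2[r] for r in RANKS)

def drop_hourly_dupes(rows):
    """Keep a row only if no identical row appeared within the previous hour.

    Assumes `rows` is sorted by the "time" field, oldest first.
    """
    kept = []
    for row in rows:
        recent = [k for k in kept
                  if row["time"] - k["time"] <= timedelta(hours=1)]
        if not any(is_dupe(row, k) for k in recent):
            kept.append(row)
    return kept
```

A copy-and-paste voter who submits the same five albums twice in thirty minutes loses the second entry, while the same ballot a day later survives.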

The last piece is checking for mentions of the same album within one entry, e.g. “Beyonce, Lemonade” on rank 1 and on rank 3. We wanted to delete these rows as well. To do so, we used a solution that we found on Stack Overflow:

# Check if all elements in an iterable are identical
def all_identical(iterator):
    iterator = iter(iterator)
    try:
        first = next(iterator)
    except StopIteration:
        return True  # an empty iterable counts as identical
    return all(first == rest for rest in iterator)

That whole process removed 1,200 empty or duplicate rows and brought the CSV from 4,500 entries down to 3,300.

Step 3: Weight and rank with an R script

Wooooooohoo! We went from messy, human-made data to clean, machine-readable data! Next, we did the actual calculations that got us to a ranked list of the top albums.

To spice things up a little bit (or maybe because we have people with different favorite tools on the team), we did this part of the process not with Python, but with R.

After converting the data back into a long format, it looks like this:

Data with ranks in long format

Next, we gave each album a ranking value. To do so, we just replaced the rank columns with ranking values:

# Replace the rank labels with point values
d$rank[d$rank=="Rank.1"] = 5
d$rank[d$rank=="Rank.2"] = 4
d$rank[d$rank=="Rank.3"] = 3
d$rank[d$rank=="Rank.4"] = 2
d$rank[d$rank=="Rank.5"] = 1
d$rank = as.numeric(d$rank)  # the column was character, so convert before summing

Note here that we give the number-one albums the most points and the number-five albums the fewest. This means a sum of these points will surface the most popular albums.
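In pandas terms, the same weighting is a simple column mapping followed by a group sum (the sample data is hypothetical; we just mirror the R logic):

```python
import pandas as pd

# Long-format poll data: one row per (entry, rank) pair
long = pd.DataFrame({
    "album": ["Bon Iver, 22, A Million", "Beyonce, Lemonade",
              "Bon Iver, 22, A Million"],
    "rank":  ["Rank.1", "Rank.2", "Rank.1"],
})

# Rank 1 earns the most points, rank 5 the fewest
POINTS = {"Rank.1": 5, "Rank.2": 4, "Rank.3": 3, "Rank.4": 2, "Rank.5": 1}
long["points"] = long["rank"].map(POINTS)

# Summing points per album yields the popularity ordering
totals = long.groupby("album")["points"].sum().sort_values(ascending=False)
```

Here two first-place votes beat one second-place vote, which is exactly the behavior the weighting is meant to produce.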

With numerical rank values, we could try out different ranking methods and different ways of aggregating these ranks. We quickly found that artists like Zayn who had campaigns on their behalf had huge spikes on certain days in terms of entries:

The table shows how often Zayn’s Mind of Mine was mentioned on each day of the poll. He was really successful on the first and the second-to-last day.

In contrast, artists like Bon Iver have a very consistent number of entries each day. We decided to favor these consistent entries. Our final calculations gave back a rank of albums for each day and then summed these daily rankings.

To do so, we reduced the Timestamp column to the month and day with d$Timestamp = substr(d$Timestamp,1,5), which removes all characters after the first 5 characters. Then we used the dplyr library to sum up the rankings to calculate points for each album on each day:

d = d %>% 
  group_by(Timestamp,album) %>%
  summarise(points = sum(rank))

After getting rid of the n/a values, we sorted the albums by these points and gave each a rank number: the album with the most points per day gets rank “1”, the album with the second-most points per day gets rank “2”, etc.:

d = d %>% 
  arrange(Timestamp, -points, album) %>%
  group_by(Timestamp) %>%
  mutate(rank=row_number())
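The same group-and-rank step translates directly to pandas, for readers who prefer that toolchain (the daily point totals below are hypothetical):

```python
import pandas as pd

# Points per album per day, already summed (hypothetical values)
daily = pd.DataFrame({
    "Timestamp": ["12/01", "12/01", "12/02", "12/02"],
    "album":     ["A", "B", "A", "B"],
    "points":    [9, 12, 7, 3],
})

# Within each day, rank albums by points: most points -> rank 1
daily["rank"] = (daily.groupby("Timestamp")["points"]
                      .rank(method="first", ascending=False)
                      .astype(int))
```

`method="first"` breaks ties by row order, playing the role of dplyr's `row_number()` after the `arrange`.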

After transforming the data back to a wide format and summing up the ranking for each day, we arrive at the final ranking:

The final ranking: the sum of the rankings for each day.

For days where an album did not get mentioned, we used the ranking 200. We achieved this with d_wide[is.na(d_wide)] <- 200:

We replaced empty values with a high number, so that they didn’t show up at the top of the ranking

If we wanted to be more precise, we could find the maximum number of mentioned albums for each day and replace the n/a values with that maximum. Since we only wanted to show the very top albums, and they were all mentioned at least once every day, we didn’t need that method for our goal.
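That more precise variant — filling a missing day with that day's worst observed rank instead of a flat 200 — can be sketched in pandas (the wide table of daily ranks below is hypothetical):

```python
import pandas as pd

# Daily ranks in wide form: one row per album, one column per day; NaN = not mentioned
wide = pd.DataFrame(
    {"12/01": [1, 2, None], "12/02": [2, None, 1]},
    index=["A", "B", "C"],
)

# For each day (column), fill missing albums with that day's maximum (worst) rank
filled = wide.apply(lambda day: day.fillna(day.max()))

# Lower total = better overall ranking
total = filled.sum(axis=1).sort_values()
```

The flat-200 shortcut from the post would instead be `wide.fillna(200)`; both keep unmentioned albums away from the top, the per-day maximum just penalizes them less arbitrarily.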

We made it! To recap this complicated process, let’s look at the steps again:

  1. To unify the spelling of these albums, we ran some cluster analysis in OpenRefine and cleaned up the data in Google spreadsheet
  2. Then we wrote a Python script to remove duplicate rows/cells and empty rows
  3. At the end, we calculated the ranking for each album per day and summed them up with an R script

The final ranking is also published on All Songs Considered. Next time we’ll do an autocomplete survey, yeah?

PRX Remix Picks: Dreams Deceived, Deferred, Fulfilled

This month, I’m featuring stories about dreams: the tale of a woman who dreams of a bigger apartment, the consequences of a jail system that puts dreams on hold, and a mother-daughter team helping each other to fulfill lifelong ambitions.

Time to “biggerize”

“Quadraturin” from Jon Earle and Emma Wiseman

A young woman lives in a New York City apartment so cramped there’s no room for a couch. She doesn’t even need to get out of bed to open the door. So, why wouldn’t she participate in a bizarre science experiment to “biggerize” her digs? After all, as the story’s protagonist exclaims, “there’s nothing on the lease about ‘biggerization!’”

This is the situation in “Quadraturin,” a captivating piece of audio fiction from producer Jon Earle and actress Emma Wiseman, based on a short story by Sigizmund Krzhizhanovsky. The piece won Best New Artist at the 2016 Sarah Awards.

Earle and Wiseman use scenes and natural sounds to great effect, turning the sonic apartment into an imaginary stage on which the story unfolds. In the booming audio fiction genre, it’s especially nice to hear a story that relies on smart staging and careful dialogue instead of the ‘found recording’ crutch often used in other pieces to drive narrative. To understand this crutch in the visual world, imagine a sudden plethora of TV shows with plots hinging on faux-archival videos.

“Off The Block” from KCRW’s Independent Producer Project

Palm trees distract from the largest jail system in the country

The Los Angeles jail system is the largest in the country, with 17,000 people incarcerated at any given time. The consequences are explored in “Off The Block,” a six-part series from KCRW. Bail, mental health, and jailhouse weddings are some of the topics covered in the series, which explains that even a short stint in the system can have numerous lasting impacts.

The episodes are short—most well under 10 minutes—and not an exhaustive investigation into the issues presented. But the series does a good job finding characters whose experiences and perspectives provide an access point for listeners who aren’t directly impacted by the jail system themselves.

“Now There’s Only Time To Live Forever” from Jessica Ripka

It’s a story within a story!

When I listened to this piece I felt like I’d emerged from diving underwater, when the world looks slightly different than it did before the plunge. It’s the mark of a good story, one that shifts your life experience by just a few degrees so everything feels a bit shinier and more surreal.

There are two main stories nestled into one here. First, producer Jessica Ripka tells the story of her mother, Penelope DeWitt, whose creative dreams lay dormant for decades due to fear and insecurity. A car crash renews her interest in pursuing those dreams. Ripka then uses her mother’s story to reflect on her own life: how she quit her desk job to pursue a dream career in radio storytelling. This piece represents an important first step toward that dream.

It’s a joy to follow the mother and daughter pair along on their overlapping journey to fulfill lifelong ambitions. Ripka’s piece is funny, surprising, and, perhaps unsurprisingly given the relationship between producer and subject, very tenderly told.

This piece was produced at the Fall 2016 Transom Story Workshop.

How To Listen to PRX Remix:
Download the PRX Remix app or go to prx.mx and press ‘play’. If you’re a satellite radio kind of person, check out channel 123 on Sirius XM or XM radio. If you’re a traditionalist and stick to the radio dial, check these listings to find Remix on a station near you.

Josh Swartz is the curator of PRX Remix. Email him at remix@prx.org

The post PRX Remix Picks: Dreams Deceived, Deferred, Fulfilled appeared first on PRX.

Mercy Street Season 2 Digital Content


Season 2 of Mercy Street premieres January 22, and to help you with promotion, below is an overview of some of the digital support you’ll see in the coming weeks:

Website
Continuing to build on the dynamic Mercy Street website from last season, there will be new content and experiences to excite your viewers for Season 2.

New assets include:
Popular returning assets:
Even more exciting content, including 360 videos and photos, will be rolling out in January, so stay tuned for more website information.

Social Media – #MercyStreetPBS and @MercyStreetPBS
An in-season social media outline, including brand-new videos, will be available the week of January 16. Leading up to the premiere, an overview of this content will highlight the new characters and provide a countdown calendar.

PBS LearningMedia
Here are the Mercy Street education materials available:
Make sure you check out the wide collection of promotional assets on the Source – don’t miss the brand new program guide feature for use around Black History Month – and publicity assets on PBS PressRoom.




Radiotopia Fall Fundraiser 2016: We Did it Again!

We recently wrapped up our Radiotopia 2016 fall fundraiser, and were blown away by the love and support from our fans, both old and new. With every drive, we gain new and important insights into the podcast fundraising universe and our dedicated fanbase. We’re always keen to learn how to best engage with listeners, make a genuine appeal, and secure the funds our shows need to keep creating quality, independent content.

Our fans: Whether they’ve been with us since the beginning, or just started listening…they’re the best.

This campaign taught us just how dedicated, generous and committed our fans truly are. A whopping 80% of our recurring 2015 donors stayed on as part of our active donor community this year. We aimed to steward existing relationships while encouraging steadfast donors to expose friends, partners, siblings and co-workers to the quality craft produced within Radiotopia. It worked!

As a surprise, we rewarded active sustaining members with our second challenge coin, this time Radiotopia-themed.

Interestingly, of the 6,300 donors who contributed to this campaign, 64% had never before donated to Radiotopia.

Partnerships: Work together to drive donations.

Last year, we began a tradition of bringing our sponsors into the fundraiser to help provide donor challenges. These partnerships have become important tools that generate fan excitement and showcase our corporate sponsors.

This year, Podster Magazine—a digital magazine dedicated to podcasts—jump-started the fundraiser by offering to chip in $10,000 if we hit 1,000 donors in the first two days. When we missed our goal by a few hours, our fans sprang into action and helped us ultimately secure the challenge funds from Podster (by the way, you can still get a free subscription). A big thank you to Podster!

A few days later, our friends at FreshBooks—who offer cloud-based accounting software for small businesses—issued another key challenge: a $40,000 donation if we snagged another 5,000 donors by the end of the campaign. This helped energize our fans to spread the word to friends and family, allowing us to soar beyond that goal to finish with over 6,000 donors. Thanks again to FreshBooks!

Producer rewards: Engaging, unique and original premiums.

This year, our producers offered up their time and talent to create exclusive, custom reward items that were incredibly popular with donors. Some rewards showcased their creative talents, like the curated mixtape from Song Exploder’s Hrishikesh Hirway (which quickly sold out), and the custom recording from Criminal’s Phoebe Judge.

Others gave lucky fans the opportunity to engage on a more personal level. These included a VIP dinner with the Kitchen Sisters, one-on-one phone calls with Megan Tan from Millennial, a virtual documentary viewing with team Mortified, a museum tour with Nate DiMeo of The Memory Palace… oh, and a wedding ceremony officiated by Helen Zaltzman of The Allusionist. Overall, we found the personalized gifts were a great way to drive excitement and, sometimes, laughter.

Benefit without the reward: The choice of no gift.

Fully 40% of donors opted for no reward at all. Despite the long-time association of public media with t-shirts and tote bags, these donors chose to support us directly. This allows our independent producers to keep more of the funds, and lets Radiotopia save on fulfillment expenses and benefit more directly from the campaign’s success.

The result: The reach of Radiotopia’s message is impressive (if we do say so ourselves).

  • We surpassed our original goal of 5,000 donors by more than 1,000 people
  • 64% of donors were brand new to our community
    • The industry average is 20% new donors for any fundraising drive
  • 80% of our sustaining members from last year maintained their monthly commitments
  • 12% of donors who had previously cancelled their recurring donations came back in 2016
  • We had donors from all 50 states and 73 countries/territories

The post Radiotopia Fall Fundraiser 2016: We Did it Again! appeared first on PRX.

Be our design/code/??? intern for winter/spring 2017!

Semi-Automatic Weapons Without A Background Check Can Be Just A Click Away. Map by Visuals Team intern Brittany Mayes

Are you data-curious, internet savvy, and interested in journalism? Do you draw, design, or write code? We are looking for you.

We’ve had journalists who are learning to code, programmers who are learning about journalism, designers who love data graphics, designers who love UX, reporters who love data, and illustrators who make beautiful things.

Does this sound like you? Please join our team! It isn’t always easy, but it is very rewarding. You’ll learn a ton and you’ll have a lot of fun.

The internship runs from January 9, 2017 to April 21, 2017. Applications are due October 23, 2016 at 11:59pm eastern.

Here’s how to apply

Read about our expectations and selection process and then apply now!

Into images? Check out our photo editing internship.

What makes a great photo editing intern (Apply now for Winter/spring 2017!)

NPR interns at work. Photo by Rachael Ketterer

This is not your standard photo internship!

This internship is an opportunity to learn more about the world of photo editing. Our goal isn’t to make you into a photo editor; we view this internship as a chance for you to understand what it is like to be an editor and improve your visual literacy, which can help you become a better photographer.

The internship runs from January 9, 2017 to April 21, 2017. Applications are due October 23, 2016 at 11:59pm eastern.

What you will be doing

  • Editing: You’ll be working closely with the Visuals Team’s photo editors (Ariel and Emily) on fast-paced deadlines – we’re talking anywhere from 15 minutes to publication, to short-term projects that are a week out. You’ll dig into news coverage and photo research, learning how to communicate about what makes a good image across a range of news topics, including international, national, technology, arts and more.

  • Photography: Depending on the news cycle, there may be opportunities to photograph DC-area assignments. This can mean you’d have one or two shoots in a week, or maybe just a couple shoots in a month. You’ll work closely with a radio or web reporter while out in the field, and a photo editor will go through your work and provide feedback for each assignment. There will also be a chance to work on portraiture and still lifes in our studio.

  • We also encourage each intern to create a self-directed project to work on throughout the semester. It can be an Instagram series, video, photo essay, text story or anything in-between. You can work independently or with another intern or reporter.

You will be part of NPR’s intern program, which includes 40-50 interns each semester, across different departments. There will be coordinated training and intern-focused programming throughout the semester, which includes meeting NPR radio hosts, career development and other opportunities. As an intern, you will be treated as a member of the team. Many NPR employees are former interns and they’re always willing to help current interns.

Eligibility

Any student (undergraduate or graduate), or person who has graduated no more than 12 months prior to the start of the internship period to which he/she is applying is eligible. Interns must be authorized to work in the United States.

Who should apply

We’re looking for candidates who have a strong photojournalism background. An interest in editing, or experience with video/photo editing, is a nice plus. It’s also helpful if you’ve completed at least one photojournalism-focused internship prior to applying (let us know if you have!), though it’s not necessary. A portfolio, however, is required.

We also want folks who can tell us what they would like to accomplish during their time at NPR. What do you want to learn? What do you want to try? We try to shape each internship around our intern, so we rely on you to tell us what goals you have for your time with us!

So how do I apply?

Does this sound like you? Read about our expectations and selection process and then apply now!

Into code, design, and data? Check out our design/development internship.

Pym.js v1.0.0 release – what you need to know

The NPR Visuals Team is happy to announce the release of Pym.js v1.0.0. We want to share with all of you the goals that we hope to achieve with it and the design process that led us to the new release.

But wait, what is Pym.js for?

Pym.js embeds and resizes an iframe responsively (width and height) within its parent container while bypassing the usual cross-domain related issues.
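To make that one-liner concrete, here is a minimal sketch of the idea behind Pym.js, not its actual API: the child (the embedded page) measures its own height and reports it to the parent page, which resizes the iframe to match. In a real embed the two sides talk via `window.postMessage`, which is what lets Pym.js bypass cross-domain restrictions; here a direct callback stands in for that message channel, and the function and variable names are our own.

```javascript
function makeChildSide(measureHeight, sendMessage) {
    return {
        // In the real library this fires on load and on every resize.
        reportHeight: function () {
            sendMessage({ type: 'height', value: measureHeight() });
        }
    };
}

function makeParentSide(iframe) {
    return {
        // In the real library this is a window 'message' listener
        // that also verifies the sender's origin.
        onMessage: function (msg) {
            if (msg.type === 'height') {
                iframe.height = msg.value + 'px';
            }
        }
    };
}

var iframe = { height: '0px' };  // stand-in for the DOM iframe element
var parentSide = makeParentSide(iframe);
var childSide = makeChildSide(function () { return 420; }, parentSide.onMessage);
childSide.reportHeight();        // iframe.height becomes '420px'
```

The real library wraps this handshake with origin checks, resize listeners and configuration; see the Pym.js user documentation for the actual `pym.Parent`/`pym.Child` API.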

Pym.js v1.0.0 Goals

  • Fix Pym.js loading issues and integration problems with certain CMSes.
  • Add automated unit testing to improve reliability moving forward.
  • Serve Pym.js through a canonical CDN, while leaving room for the library to evolve.
  • Clean up small issues and merge pull requests made by the community.

Loading Pym.js in complicated environments

Pym.js v1.0.0 development has been driven by the need to support Pym.js in certain CMSes used by NPR member stations, along with other use cases found by our collaborators. For these users, the Pym.js loading process broke, making the embeds unusable.

Some content management systems prevent custom JavaScript from being embedded on the page, others use pjax to load content, and still others use RequireJS to load libraries. Although Pym.js was designed as a library with support for inclusion via AMD and CommonJS, we encountered CMS scenarios where Pym.js broke or did not load at all. Pym.js v1.0.0 development was geared toward solving these issues.

That’s why we created pym-loader.js, an additional script that acts as a wrapper to deal with all the nitty gritty details to successfully load Pym.js in many common cases. pym-loader.js was developed after much thought and discussion with developers using Pym.js.

Rather than pollute the Pym.js library itself with the special needs of certain CMSes, we separated the loading logic for these situations into a script that wraps and loads Pym.js.

We want to keep Pym.js loading and invocation as manageable as possible. Due to the extensive use of Pym.js in many different environments, we encourage implementers to create special loaders if their integrations require it.

If you have a reasonable amount of control over your CMS’s Pym.js implementation, we recommend including Pym.js directly. If you do not have that control, are having problems loading Pym.js directly, or simply prefer more protection against future changes to your CMS, use the loader script.

Testing Pym.js

Unit testing makes maintaining the library more reliable and efficient going forward, so this v1.0.0 release introduces a unit test suite for Pym.js.

The testing suite uses a combination of Karma, Jasmine and Sauce Labs to improve our browser coverage (Sauce Labs provides a nice free tier solution for open source projects).

We have found some caveats using Sauce Labs as a testing platform for open source projects. Sauce Labs manages parts of its service, specifically badges, with a user-based approach instead of a project-based one. If you need to test more than one open source project, you have to resort to creating virtual users, which is not a good long-term solution.

When we talked to Sauce Labs support about this, they pointed us to their product ideas website to request the feature. If you work with open source projects and would like to include tests for multiple projects under the same user, go ahead and support our feature idea.

Versioning Pym.js

Starting with Pym.js v1.0.0, the library follows the semantic versioning pattern MAJOR.MINOR.PATCH.

  • MAJOR version changes for backwards-incompatible API changes.
  • MINOR version for new backwards-compatible functionality.
  • PATCH version for backwards-compatible bug fixes.

NPR will host and serve pym.js and pym-loader.js through a canonical CDN at pym.nprapps.org. We recommend linking directly there so you benefit immediately from patch and minor releases. Specifically, you can link to:

To minimize the impact on our current and future customers, we expose only the major version in the hosted filenames. That way we can ship PATCH and MINOR releases without requiring any change to our customers’ code, while keeping the possibility of new major releases that break compatibility with previous versions of the library.

If for any reason you want to pin to a particular release instead, just head over to our GitHub releases page and download the version you are looking for.

Issues & Pull Requests

With Pym.js v1.0.0 release we have fixed 7 open issues and integrated 7 pull requests.

Most of the issues were related to improving documentation and fixing integration problems with various CMSes.

Most of the pull requests also added configuration options to Pym.js and solved integration issues.

Summary

We hope that this release of Pym.js will extend its ability to be used by NPR member stations and other customers thanks to the new pym-loader.js implementation.

Interested in using Pym.js? Please refer to the user documentation and API documentation.

We would like to thank all of our collaborators and contributors for their insightful feedback and thorough discussion. A special shout-out goes to Hearken for the progress on their Pym.js fork and their willingness to merge back so that the projects do not diverge, helping us grow Pym.js together.

How we built a VR project using web technologies

A screenshot of [Standing At The Edge Of Geologic Time](http://apps.npr.org/rockymountain-vr) in virtual reality.

Last Wednesday, the NPR Visuals Team published a virtual reality story about the geologic history of Rocky Mountain National Park. It was weird! Making a virtual reality project on the web presented a lot of new challenges for us. This blog post will explore some of the challenges and how we solved them.

Making the Web Experience

We had three main goals when creating the web experience out of these assets:

  1. Make an immersive experience out of the 360º photos we had created and the binaural audio we recorded.
  2. Ensure the experience worked across devices, on phones, desktops and Cardboards.
  3. Do this on the web. We weren’t interested in Oculus or other things that required users to install software.

Given these requirements, we wanted to work with WebVR. The experimental JavaScript API is not yet supported in mainstream browsers, but work on making WebVR a reality is active, and a few projects have sprung up in an attempt to get people working with WebVR today.

Google VR has created VR View, an incredibly simple way of creating a 360º image viewer. The code is all open source, and we could have modified the experience however we wanted, but the starting point is so opinionated that making an experience that integrated well with our audio and design style felt onerous. But for just getting an image on the page, VR View is as simple as it gets.

Boris Smus maintains the WebVR Boilerplate, a Three.js-based starting point that has been used by our friends at the LA Times and National Geographic. It is a great starting point, and we would have used it, but we found a project based on Boris’s work called A-frame, spearheaded by Mozilla’s VR group.

An Introduction to A-frame

A-frame’s key feature is its markup-based scene-building system. Instead of building your entire scene in JavaScript, A-frame gives you the ability to build scenes using custom HTML tags. Because A-frame defines custom HTML tags for you, they are treated by the browser as DOM elements, making them manipulable in JavaScript just like any other DOM element.

A simple A-frame scene might look like this:

<a-scene>
    <a-sky src="url/to/my/image.jpg"></a-sky>
</a-scene>

This would build a VR scene that projects an equirectangular image across a 360º sphere. In three lines of markup, we have the basis of our app.

Every A-frame document begins and ends with an <a-scene> tag, just like an HTML document starts and ends with an <html> tag. And just like a valid HTML document, you can only have one.

The <a-sky> tag demonstrates the basic functionality of A-frame. A-frame is based on the “entity-component-system” pattern. The structure of entity-component-system is worth reading in detail, but it basically works like this:

Entities are general objects that, by themselves, do nothing, like an empty <div>. A-frame represents entities as tags. Components define aspects of entities, such as their size, color, geometry or position. These appear in A-frame as attributes of those tags (perhaps confusingly, standard HTML attributes like class still work). Components can have multiple properties; for example, the camera component has a fov property which defines the field of view, an active property which defines whether or not the camera is active and more. Importantly, components are reusable — they do not rely on certain entities to work.

Returning to our example, <a-sky> is our entity, and src is a component that loads an image and projects it into the sky.

There is one caveat to this: <a-sky> is technically not an entity. A-frame provides one extra convenience layer beyond entities and components: primitives. Primitives look like entities, but are in fact an extension of entities that make it easier to perform common tasks, like projecting a 360º image in a 3D scene. In short, they are entities with pre-defined components. An <a-sky> is an entity with a pre-defined geometry component.

Building multiple scenes

In our story, we wanted to display multiple equirectangular images in a sequence tied to our audio story. This poses a problem: A-frame allows only one scene, and when A-frame builds that scene, it renders everything at once. So how can you move between multiple scenes inside your one scene? You show and hide entities.

A component available to all entities in A-frame is the visibility component. It works simply: add visible: false to any entity tag and the entity is no longer visible.

Thus, the basic structure of our A-frame scene looked like this:

<a-scene>
    <a-entity class="scene" id="name-of-scene-1">
        <a-sky src="path/to/image1.jpg" visible="true"></a-sky>
    </a-entity>
    <a-entity class="scene" id="name-of-scene-2">
        <a-sky src="path/to/image2.jpg" visible="false"></a-sky>
    </a-entity>
    <a-entity class="scene" id="name-of-scene-3">
        <a-sky src="path/to/image3.jpg" visible="false"></a-sky>
    </a-entity>
    <a-entity class="scene" id="name-of-scene-4">
        <a-sky src="path/to/image4.jpg" visible="false"></a-sky>
    </a-entity>
</a-scene>

We timed switching visible scenes with certain points in our audio file. By hooking into the HTML5 audio timeupdate event, we could know the current position of our audio at any time. We attached the time we wanted scenes to switch as data attributes on the scene entities. Again, A-frame entities are just DOM elements, so you can do anything with them that you can do to another DOM element.

<a-entity class="scene" id="name-of-scene" data-checkpoint="end-time-in-seconds">
    <a-sky src="path/to/image1.jpg" visible="true"></a-sky>
</a-entity>
…

Using the timeupdate event, we switched the visible scene once we passed the end time of the currently visible scene. This is a technique we’ve used many times in the past, and you can read more about it here.
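The switching logic described above can be sketched as follows. The element selectors and audio wiring here are our own illustration, not the production code; the core is a small function that, given each scene’s data-checkpoint end time, picks the scene that should be visible at the audio’s current position.

```javascript
// Given an ordered array of scene end times (seconds) and the audio's
// current time, return the index of the scene that should be visible:
// the first scene whose end time hasn't passed yet.
function findActiveScene(checkpoints, currentTime) {
    for (var i = 0; i < checkpoints.length; i++) {
        if (currentTime < checkpoints[i]) {
            return i;
        }
    }
    return checkpoints.length - 1; // past the end: stay on the last scene
}

// In the page, this would be driven by the HTML5 audio element,
// roughly like so (DOM wiring shown as comments only):
//
// audio.addEventListener('timeupdate', function () {
//     var scenes = document.querySelectorAll('.scene');
//     var checkpoints = [].map.call(scenes, function (el) {
//         return parseFloat(el.dataset.checkpoint);
//     });
//     var active = findActiveScene(checkpoints, audio.currentTime);
//     // toggle the visible attribute on each scene's children
// });

findActiveScene([29, 65, 120], 40); // → 1 (the second scene)
```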

Animation

Another core piece of A-frame is the ability to animate elements within a scene. We used A-frame’s animation engine to control the “hands-free” experience we offered on desktop.

To do this, we animated A-frame’s camera. The camera itself is an entity within the scene. To animate an entity, you create the animations as tags that are children of the entity. For example:

<a-scene>
    <a-entity camera drag-look-controls>
        <a-animation attribute="rotation" duration="40000" from="10 -80 0" to="0 15 0"></a-animation>
    </a-entity>
</a-scene>

This animation rotates the camera over 40 seconds.

You can also begin and end animations based on events. You pass the names of the events as attributes on the animation tag:

<a-animation attribute="rotation" duration="40000" from="10 -80 0" to="0 15 0" begin="enter-scene" end="cancel-animation"></a-animation>

Then, in JavaScript, you can have the camera (or any entity) emit an event, which will either begin or end the animation.

var camera = document.querySelector('a-entity[camera]');
camera.emit('enter-scene');

To make our guided experience work, we had an animation for each of our scenes. When we entered the scene at the correct place in the audio, we emitted the proper event that started the animation.

Putting It All Together

While it is great that A-frame is a markup-based system, having the team manage the entire experience by modifying markup would have been frustrating and difficult. So we turned to a system we have been using for years: spreadsheet-driven templating. Using a spreadsheet allowed us to put each entity in its own row. Then, columns corresponded to components on the entity or other data we needed to attach to the entity via data attributes.

A simplified version of the spreadsheet looks like this:

Using Jinja templates and our copytext library, we were able to loop through each row and build our scene. For example, the first row in our sheet would result in the following:

<a-entity class="scene" id="dream-lake" data-name="Dream Lake" data-checkpoint="29" data-fov="80" >
    <a-entity class="sky" visible="false">
        <a-sky src="dl-615.jpg" rotation="0 -250 0"></a-sky>
    </a-entity>
</a-entity>

In a separate spreadsheet, we built each animation we wanted for guided mode. Using the id of the scene, we could effectively join the two sheets together on the id. Here’s a sample of the animation spreadsheet:

Then, within the camera entity as demonstrated above, we can loop through this spreadsheet and build each animation. The first row of the spreadsheet would build this:

<a-entity camera drag-look-controls>
    <a-animation attribute="rotation" dur="40000" from="-10 80 0" to="0 15 0" begin="enter-dream-lake" end="cancel" easing="linear"></a-animation>
    …
</a-entity>

Take note of the begin attribute. By using the id of the scene, each scene’s animation can begin independently. In our JavaScript, we would emit that event as soon as the scene switched.

Combining these two concepts, our A-frame scene looks like this in a Jinja template:

<a-scene>
    <a-entity camera drag-look-controls>
        {% for row in COPY.vr_animations %}
        <a-animation attribute="{{ row.attribute }}" dur="{{ row.duration }}" from="{{ row.from_value }}" to="{{ row.to_value }}" begin="enter-{{ row.id }}" end="cancel" easing="linear"></a-animation>
        {% endfor %}
    </a-entity>
    {% for row in COPY.vr %}
    <a-entity class="scene" id="{{ row.id }}" data-name="{{ row.name }}" data-checkpoint="{{ row.end_time }}" data-fov="{{ row.fov }}">
        <a-entity class="sky" visible="false">
            <a-sky src="{{ row.image }}" rotation="{{ row.image_rotation }}"></a-sky>
        </a-entity>
    </a-entity>
    {% endfor %}
</a-scene>

There are more things unique to our particular UI that I did not include here for the sake of simplicity, but you can see the complete HTML file here.

Nine Miscellaneous Tips About Building In A-Frame

There are lots of little things we encountered building a VR experience that didn’t fit in the explanation above but would be good to know.

  1. We used jPlayer to handle our audio experience. While A-frame provides a sound component, it had strange issues with playback, sometimes placing all the audio in one ear or the other. It was also more apparent with jPlayer how to provide a responsive UI for users to interact with the audio. Also, separating concerns between the playing audio file and the switching of scenes was easier using separate libraries.
  2. Three.js, which ultimately does all of the projection into 360º space, expects most assets to have power-of-two dimensions. For example, our equirectangular images were sized to 2¹² × 2¹¹ pixels (4096 × 2048).
  3. A-frame has to be included on the page before the <a-scene> is invoked; otherwise, the tags will not be recognized. We included it in the <head>.
  4. Because A-frame has to be included early, it’s smart to use some critical CSS to ensure something loads on your page in a timely manner. A-frame is a very large library. Our app-header.js file is 214 KB, most of which is A-frame.
  5. Ensure that users cannot enter the VR experience before all assets are loaded. This is as simple as disabling your UI until JavaScript’s native load event fires.
  6. Exiting VR mode on iOS and Android is totally different. On iOS, you rotate your device to portrait mode. On Android, you use the device’s native back button instead of rotating, because Android goes into fullscreen mode. Make sure your instructions to the user are accurate for both types of device.
  7. Ultimately, A-frame renders your scene to a canvas element. You can do anything with that canvas element. We chose to fade the canvas to black and fade back up when switching scenes.
  8. To date, text in A-frame is hard. There are some plugins and extensions that provide the ability to write on your scene, but it is almost certainly easier at this point to make a transparent PNG and project it onto your scene. In VR mode for our app, we used transparent PNGs to project an annotation telling the user where they were in Rocky Mountain National Park, as seen in the screenshot at the top of this post.
  9. A-frame ships with a controls component called “look-controls”. We used a plugin called “drag-look-controls”, which is largely the same, except it inverts the click-and-drag experience so that the photo moves in the direction you drag.
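Tip 5 above, gating the UI on the load event, can be sketched like this. The button and the event-target stub here are hypothetical stand-ins so the logic is self-contained; in the browser you would pass `window` and a real DOM button.

```javascript
// Keep the "enter VR" control disabled until the target's 'load'
// event fires. In the browser, 'load' fires only after all
// sub-resources (images, audio, scripts) have finished downloading,
// unlike DOMContentLoaded, which fires as soon as the DOM is parsed.
function gateOnLoad(target, button) {
    button.disabled = true;
    target.addEventListener('load', function () {
        button.disabled = false;
    });
}

// Minimal event-target stub standing in for `window`.
function makeTarget() {
    var handlers = {};
    return {
        addEventListener: function (name, fn) { handlers[name] = fn; },
        fire: function (name) { handlers[name](); }
    };
}

var target = makeTarget();
var button = { disabled: false };
gateOnLoad(target, button);
var wasDisabled = button.disabled; // true while assets are "loading"
target.fire('load');
// button.disabled is now false: the user may enter the experience
```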

In the coming days, we will publish a couple more things about our project, including how we made our images and soundscapes and what we’ve learned from analytics about how people used our VR project. Stay tuned!

Useful Scraping Techniques

A recent NPR project that collects structured data about gun sale listings from Armslist.com demonstrates several of my favorite tricks for writing simple, fast scrapers with Python.

The code for the Armslist scraper is available on Github.

Can you scrape?

Scraping is a complicated legal issue. Before you start, make sure your scraping is acceptable. At minimum, check the terms of service and robots.txt of the site you’d like to scrape. And if you can talk with a lawyer, you should.

Data model classes

The Armslist scraper encapsulates scraped data in model classes.

Here’s the basic idea. You provide the model class with all the HTML it should scrape. The class performs the scrape and stores each piece of data in an instance property. Then, you access the scraped attributes in your code via those instance properties. Look at this lightly modified example of the model class code from the project.

class Listing:
    """Encapsulates a single gun sale listing."""

    def __init__(self, html):
        self._html = html
        self._soup = BeautifulSoup(self._html, 'html.parser')

    @property
    def title(self):
        """Return listing title."""
        return self._soup.find('h1').string.strip()

To use this class, instantiate it with an HTML string as the first argument, then start accessing properties:

html = '<html><body><h1>The title</h1></body></html>'
mylisting = Listing(html)
mylisting.title

Every listing instance takes an HTML string which can be downloaded during a scrape or provided from another source (e.g. from a file in an automated test). The Listing class uses the @property decorator to create methods that “look like” instance properties but perform some computation before returning a value.

This makes it easy to test and understand each computed value. Want to double-check that we’re grabbing the price correctly? The method is simple enough that you don’t have to know much about the rest of the system to understand how it works:

class Listing:
    #...
    @property
    def price(self):
        span_contents = self._soup.find('span', {'class': 'price'})
        price_string = span_contents.string.strip()
        if price_string.startswith('$'):
            junk, price = price_string.split('$ ')
            return price
        else:
            return price_string
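
To illustrate how testable this style is, here's a hedged sketch that mirrors the price logic above as a standalone function (the function name and sample strings are ours, not the project's):

```python
def parse_price(price_string):
    """Mirror Listing.price: strip a leading '$ ' prefix when present."""
    price_string = price_string.strip()
    if price_string.startswith('$'):
        junk, price = price_string.split('$ ')
        return price
    return price_string

parse_price(' $ 350 ')         # '350'
parse_price('Contact seller')  # 'Contact seller'
```

Because the logic lives in one small, pure function, spot checks like these can run in an automated test without any network access.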

Controller scripts

The model class is then used in a simple script which makes the actual HTTP request based on a URL provided as an argument and prints a single CSV line.

Here’s a lightly modified version of our controller script:

#!/usr/bin/env python

import sys
import requests
import unicodecsv as csv

from models.listing import Listing

def scrape_listing(url):
    writer = csv.writer(sys.stdout)
    response = requests.get(url)
    listing = Listing(response.content)
    writer.writerow([
        url,
        listing.post_id,
        listing.title,
        listing.listed_date,
        # ...
    ])


if __name__ == '__main__':
    if len(sys.argv) != 2:
        sys.exit('url required')

    url = sys.argv[1]
    scrape_listing(url=url)

This script makes it easy to check that the scraper is working properly: just invoke it on the command line with the URL to be scraped.

Parallelization with GNU parallel

The framework above almost seems too simple. And indeed, scraping the 80,000+ listing pages on Armslist one by one would be far too slow.

Enter GNU parallel, a wonderful tool for parallelization.

Parallelization means running multiple processes concurrently instead of one after another. This is particularly useful for scraping because so much time is spent simply initiating the network request and downloading data. A few seconds of network overhead per request really adds up when you have thousands of URLs to scrape.

Modern processors have multiple cores, which hypothetically makes this easy. But it’s still a tricky problem in common scripting languages like Python. The programming interfaces are clunky, managing input and output is mysterious, and weird problems like leaving thousands of file handles open can crop up.

Most importantly, parallelization libraries make it easy to lose hardware abstraction, one of the most powerful features of modern scripting languages. Sprinkling multiprocessing magic through a Python scraper makes it much harder for anyone with basic programming skills to read and understand the code. In an ideal world, a Python script shouldn’t need to worry about how many CPU cores are available.

This is why GNU parallel is such a useful tool. Parallel elegantly handles parallelizing just about any command that can be run on the command line. Here’s a simple example from the Armslist scraper:

csvcut -c 1 cache/index.csv | parallel ./scrape_listing.py {} > cache/listings.csv

The csvcut command grabs the first column from a CSV containing URLs and some metadata about each one. The scrape_listing.py command takes a URL as an argument and outputs one processed, comma-separated line of extracted data. By piping the output of csvcut to a parallel command that calls scrape_listing.py, the scraper automatically runs simultaneously on all the system’s processors.

Parallel is smart about output – normal Unix output redirection works the way you would expect when using parallel. Because the commands are running simultaneously and timing will vary, the order of the records in the listings.csv file will not exactly match that of the index.csv file. But all the output of the parallelized scrape operation will be dumped into listings.csv correctly.

The upshot is that scrape_listing.py is still as understandable as it was before we added parallelization. Plus it’s easy to run one-off scrapes by passing scrape_listing.py a URL and seeing what happens.

Getting close to the source

It never hurts to figure out where the server you’d like to scrape is, physically, to see if you can cut down on network latency. The Maxmind GeoIP2 demo lets you geolocate a small number of IP addresses.

When I plugged the Armslist.com IP address into the demo, I found something very interesting: The location is in Virginia and the ISP is Amazon. That’s the big east coast Amazon data center (aka us-east-1).

Because NPR Visuals also uses Amazon Web Services, we were able to set up our scraping machine in the same data center as the target server. Putting your scraper in the same data center as the host you’re scraping eliminates about as much network overhead as humanly possible.

We were admittedly lucky, but the principle generalizes: if you are hosting your scraper on Amazon and find that the server you’d like to scrape is on the West Coast of the US, you can set up your EC2 instance in a West Coast data center to shave off a little extra latency.

Choosing the right EC2 server

We used an Amazon c3.8xlarge server, which is a compute optimized instance with 32 virtual processors available. We chose a compute-optimized instance because the scraper doesn’t use a lot of memory or disk. It doesn’t use that much CPU either, but it’s more CPU intensive than anything else, and the c3.8xlarge is cheaper than any other option with more than 16 CPUs.

On a c3.8xlarge, scraping roughly 80,000 URLs took less than 16 minutes, which comes out to less than $0.50 to run a full scrape.
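
As a back-of-the-envelope check on that figure, assuming on-demand c3.8xlarge pricing at the time was roughly $1.68 per hour (our assumption, not a number from the post):

```python
# Rough cost of a 16-minute scrape at an assumed ~$1.68/hour instance rate.
hourly_rate = 1.68
scrape_minutes = 16
cost = hourly_rate * scrape_minutes / 60
round(cost, 2)  # about $0.45, consistent with "less than $0.50"
```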

Putting it all together

The full scraper actually carries out two operations:

  • Scrape the Armslist.com index pages to harvest listing URLs and write the list to a CSV file. To speed up the process, this step is parallelized by state. It could be refactored to be even more efficient, but it works well enough for our purposes.
  • Scrape each listing URL from the index CSV file, using parallel to scrape as many URLs simultaneously as possible.

Analyzing the data

We do further post-processing for our analysis using shell scripts and PostgreSQL using a process similar to the one described here. If you’d like to check our work, take a look at the armslist-analysis code repository.

A quick shoutout

I learned many of these techniques – particularly model classes and using GNU parallel – from developer Norbert Winklareth while we were working on a Cook County Jail inmate scraper in Chicago.

We love this Tumblr but…

…we haven’t updated it in a while. We feel bad about that. It’s because we’ve been focusing on building oodles of storytelling resources over at NPR’s Editorial Training website.

So this Tumblr is on vacation. But there’s still lots of great stuff here. Scroll down. Check it out. And we’re still here. Still listening, reading, looking for the best examples of great storytelling. You can always find us on Twitter or at TrainingTeam@npr.org.


A Better Way To Track Listening

A screenshot of our elections app titlecard during Mega Tuesday on March 15, 2016.

For the entirety of the primary season, we have been running our elections app at elections.npr.org, focusing both on live event coverage during primary nights and updated content between events to keep users up-to-date on the events taking place each day.

A major component of our election coverage is audio-driven, whether through our live event coverage during primary nights or the NPR Politics Podcast in between events. Part of our decision to focus our app around audio stemmed from our newsroom putting a significant effort behind the audio coverage, but we also wanted to learn more about how our audience engages with audio on the internet. We treated our election app as a huge opportunity to do so.

We wanted to be fair to ourselves and treat our audio online like we treat audio on the radio. That means applying a much stricter standard to what we call a “listener.” In the calculations that follow, we count as listeners those who listened to at least five minutes of audio, which is how we count listeners in our radio ratings.
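
In code, that definition is a simple threshold. Here's a sketch of how such a classification might look; the session structure and field name are hypothetical, not NPR's actual analytics schema:

```python
# A session counts as a listening session at five or more minutes of audio.
FIVE_MINUTES = 5 * 60  # seconds

def listen_rate(sessions):
    """Fraction of sessions that qualify as listening sessions."""
    listeners = sum(1 for s in sessions if s['seconds_listened'] >= FIVE_MINUTES)
    return listeners / len(sessions)

sessions = [
    {'seconds_listened': 45},    # bounced early
    {'seconds_listened': 300},   # exactly five minutes: a listener
    {'seconds_listened': 2640},  # a long, 44-minute listening session
    {'seconds_listened': 0},     # never started audio
]
listen_rate(sessions)  # 0.5
```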

Given this definition, just 10% of our total user base qualifies as “listeners.” That said, we haven’t had audio in the experience 24/7, and sometimes we haven’t had audio during high-traffic primary events.

For the purposes of this analysis, I am going to focus on times when we were broadcasting a live election night special, as those are the moments throughout the primary season that we have gotten a significant amount of traffic and we have consistently had audio to work with.

Overall performance

Screenshots of the first two cards of our app during our live broadcast on Mega Tuesday, March 15, 2016.

As of this writing, NPR has broadcast 11 election night specials, and we have carried all of them inside the app. If a user arrived at the app, the special would autoplay upon swiping or clicking past the titlecard.

While the broadcast was live, we served over 475,000 sessions, and over 100,000 of those were listening sessions. In other words, 22.4% of live event sessions became listening sessions by playing at least five minutes of audio. Compared with listen rates across npr.org, or with imagining a five-minute bar for a “view” on a Facebook or YouTube video, that’s a pretty good number, and we’re happy with it.

But it is a sobering reality: even when we advertise our app as a listening experience (as we often did on social media) and autoplay the content, only 22% of our users stick around for more than five minutes. Of course, our election app is not exclusively an audio app, and the other 78% of sessions still may have gotten what they needed out of the app, like a quick checkup on the results.

On a given night, our live specials would run anywhere from one hour to four hours. I have data at the hourly level, which means I can analyze the performance of the special hour by hour. Aggregating all of our sessions into hourly blocks, it is clear that performance of our live specials degrades the longer we go on. 26% of our sessions that began in the first hour became listening sessions, while just 18% of the sessions that began in the fourth hour became listening sessions.

What do we know about our listeners?

We know a whole bunch of other things about our app, most of which are out of scope for this blog post. But since we know which sessions were listening sessions, we can examine the behavior of our listeners as compared to our non-listeners.

The first, most obvious thing we can determine is that our listeners spend more time total on the app than non-listeners. This is not surprising – after all, they spent at least five minutes listening to audio. However, the proportion is surprising.

Users overall spent an average of about eight minutes on the app, while listeners spent an average of 44 minutes, whether they were actively listening for all 44 minutes or not.

A screenshot of our donation card

At the end of February, we added a new type of card to our app: a card that asked users to donate to their local member station. We tested a few different prompts throughout the duration of the primary, but no matter what test we were running, we consistently found that listeners were more likely to click the button than non-listeners.

A simple statistical test shows that listeners are 93.9% more likely to click the donate button than non-listeners, and we can say this with 99% confidence.
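
The post doesn't name the test used. A common choice for comparing two click-through rates is a two-proportion z-test; here's a sketch with made-up counts (NPR's actual click numbers aren't given):

```python
import math

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Z statistic for the difference between two click-through proportions."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts: listeners click at 3%, non-listeners at 1.5%.
z = two_proportion_z(300, 10_000, 600, 40_000)
# |z| > 2.58 corresponds to significance at the 99% confidence level.
```

With samples this large, even a modest difference in click rates yields a z statistic far beyond the 99% threshold.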

That being said, because we had far more non-listeners than listeners, we actually got more total clicks from non-listeners. This is worth taking into account.

Finally, we know that our listeners are far more likely to be desktop users than non-listeners. 65% of our listeners were desktop users, compared to just 40% of non-listeners.

What have we learned?

By limiting our definition of who a listener is, we can know much more about our most engaged users, and we can adjust for the future knowing these new things. While this analysis does not necessarily provide answers, it provokes questions to ask about next steps.

We know that the majority of our users, despite autoplaying the content for them, will not listen long enough to be considered listeners. We also know that the beginnings of our broadcasts perform much better than the end of our broadcasts. How can we make our content more accessible for people jumping in in the middle?

We know that engaging users with our audio makes them more likely to click a donate button. How can we optimize the donation experience for people who are listening to our audio?

At the same time, we have a majority of users who are not listening to our audio. How can we make donation seem more compelling to them?

We know that users engaged with our audio spend a lot more time in general on our app than users who do not. How can we take better advantage of the 44 minutes listeners spend on our app? Again, are there better ways to use that time to prompt them for donations? Can we surface more information in a compelling way to keep them better informed?

We know that listeners are more likely to be desktop users, while nonlisteners are more likely to be mobile users. Knowing from the other data that listeners take more desirable actions, like clicking donate buttons, how can we convert more of our mobile users into listeners?

Why definitions matter

Of course, you can do this type of deep analysis with numbers from Facebook or YouTube or SoundCloud or wherever you host your timed media. But definitions matter. Facebook infamously counts three watched seconds as a view, even though it autoplays videos in a user’s timeline. If we followed their lead and defined our baseline metric as three seconds listened, we would learn to read those numbers first, and then we would optimize content to make that number perform better. Facebook, YouTube, and the rest make their shallow definitions of engagement too prominent to ignore.

The cynical interpretation is that timed media platforms are goosing their metrics so that they can compete with TV and charge higher advertising rates. It might even be the correct interpretation. What I do know is that it doesn’t serve our audience to assume that such a low bar for engagement says anything about what our audience actually values.

With a tougher, better definition of a listener, we can learn more about our audience’s needs and desires. Instead of learning how to hook someone to a page with a headline, or how to catch more people’s eyes in a timeline of autoplaying videos, we will learn what keeps an audience engaged, what makes them share, what makes them learn.

So get out in front of it and define what listenership or viewership means for you. Learn what resonates with your audience at a deeper level and optimize for that. I guarantee you will ask better questions of your content strategy and come up with better answers.

How Libraries Are Curating Current Events, Becoming Community Debate Hubs

This piece is part of a special series on Libraries + Media. Click here for the whole series.

When the Pew Research Center tracks where Americans get their news, we hear about Reddit, Twitter and Facebook, television, newspapers and radio. Libraries don’t make the list. You might not expect them to.

Base image via Shutterstock; photo illustration by Kerry Conboy. Click the image for the full series.


But there’s another side to this story. Pew studies also report that Americans do head to libraries, online and in person, to read news and research topics of interest. People do value the services of reference librarians. And they do trust libraries to help them decide what information is trustworthy.

Libraries are now turning that trust into an opportunity: Around the world, they’re experimenting with more direct participation in the issues that affect their communities. Library teams are selecting topics of local importance, compiling resource guides that keep up with evolving issues, and inviting public discussion and debate.

Librarians are curating current events.

Hot Topics in America

At California’s Alameda County Library, guides to current events feature the “news you are talking about.” Feeds from local and international news sources dovetail with guides to newsworthy subjects such as privacy, elections and guns and violence. While research guides are old news for libraries, these modernized versions are created with content management tools like LibGuides that let librarians organize selected sources and incorporate live news feeds, reader polls and other interactive features.

Pennsylvania’s Monroeville Public Library invests nearly a third of the library’s home page real estate in “Hot Topics.” The goal, according to the library’s promotions, is to “guide you to reliable online news and information on the issues you want to know about.” Whatever the topic — from the Islamic State to the U.S. presidential primaries to local tax issues — visitors get quick access to useful resources vetted by a team of librarians.

It’s a thoughtful service with a conscious goal: to support the informed citizenry a strong democracy depends on.

“Librarians can help people gain the basic knowledge and understanding they need to participate in debates, engage in effective political action, and make the societies they live in more democratic,” said Mark Hudson, head of adult services and one of several librarians who contribute to the Hot Topics research, in an email interview.

Every few weeks, Monroeville librarians identify a new topic and present a short list of resources from the web, the community and the library collection to help readers learn the basics and find out where to dig deeper. Along with mainstream news sources, links may include public interest organizations, independent media and analysis.

Mark Hudson, head of adult services at the Monroeville Public Library.

Resources are balanced and diverse. “In the Hot Topics, we try to include a full range of perspectives on every issue,” Hudson said. “Not because we are neutral on these issues ourselves, but because we always want to help people understand the debate on a given issue and what is truly at stake for society in that debate, so they can form their own independent viewpoints based on real knowledge and understanding of the issue.”

Sometimes the library also develops programs related to the Hot Topics: It has hosted, for example, speakers and panel discussions on climate change, voter identification laws, post-traumatic stress disorder, county property assessments, air quality, privacy and surveillance, and immigration policy.

To connect with librarians engaged in similar efforts, Hudson and Norwegian librarian and freelance writer Anders Ericson started a Facebook group, Libraries Improving Public Participation and Democracy, in late 2015. Within a few months, group membership grew to represent seven countries.

Taking Up the Case in Norway

In Norway, Ericson argues for librarians to adopt an activist approach to democracy, building comprehensive portals that “take up the case” and expose hidden content, such as primary sources, that citizens might not find with a basic web search.

“The abundance of information on the web and in media conceals the fact that news and data are often insufficient, unbalanced, and/or very complex, and often as a result, poorly or not at all organized,” wrote Ericson in a blog post. “Dealing with such deficiencies has always been part of the library mission, and libraries should be the first to take action.” The research service, Ericson suggests, could be useful to journalists as well as the public.

Ericson cites as an example the Global Surveillance portal from the University Library of Oslo. For his part, he has documented a curation process and built a portal on a current topic of national interest: a controversial effort to consolidate Norway’s regional governments, merging nearly 500 municipalities into just 100. Hosted by the county library of Nord-Trøndelag, the portal highlights breaking news, coming events, documents, political statements and other resources. While special attention is paid to the impact of the civic mergers on libraries, the portal, Ericson believes, is the most comprehensive resource available to Norwegians on the larger issue of municipal consolidation.

“No library is in the position of competing with any local or national commercial news services with regard to the general, comprehensive news coverage communities need and deserve,” said Ericson. But with issue-oriented research portals and related discussions, he believes, libraries can “take up specific cases of utmost importance to the community.”

In Norway, Anders Ericson runs a library portal on the controversial subject of municipal consolidation. Photo courtesy of Anders Ericson.


“Debate Libraries” in Scandinavia

Moving offline, Scandinavian libraries have taken up the debate in their physical civic spaces. Denmark’s libraries host “debate cafes,” programs offered jointly by public libraries, the Aarhus University Press, Danish National Radio and the newspaper Jyllands Posten. Dubbed “Tænkepauser” (“Pause to Think”), these open forums address broad themes like hope, trust, terror and truth. On a given date, community members can reflect on the chosen theme at the library or listen to national radio broadcasts on the same subject. A companion book from the university press is free to download, inexpensive to buy in print or available to borrow from the library. Authors lead off the coordinated national discussions.

“The library’s role has changed,” Tine Vind, head of libraries for the Danish Agency for Culture, recently wrote in the Scandinavian Library Quarterly. “Many libraries endeavor to ensure that the citizens reflect on the knowledge they acquire, and provide the opportunity for independent opinion shaping.”

Working with schools and other organizations, she believes, libraries can help equip people to become critical thinkers who can thoughtfully express ideas. “The library is in a position to safeguard one of modern democracy’s most important building blocks: freedom of speech,” Vind said.

Freedom of speech is not without controversy. In Sweden and Norway, updates to the countries’ 2014 national library acts require libraries to serve as independent arenas for public debate, Ericson reported in Information for Social Change. In Norway’s “debate libraries,” each library’s leader is charged with a role analogous to that of a newspaper editor, setting the content and form of debates just as an editor would guide a publication or a librarian would curate a collection.

Debates and discussions have been held in urban and rural libraries on international issues and hyperlocal concerns ranging from racism to energy futures to municipal consolidation—the subject of Ericson’s complementary web portal.

How is the media involved? “Some ideas have arisen about cooperation between smaller, local newspapers and public libraries,” Ericson said. “The idea is that the newspaper will need the library’s research competence and the library may take advantage of editorial resources. There have been a number of debates where the two institutions have cooperated, with a journalist as a moderator and on the library premises.”

A Bias Toward Democracy

As the conversation continues, librarians and journalists are beginning to find new ways to work toward a common goal: the informed community.

“Freedom, prosperity and the development of society and of individuals… will only be attained through the ability of well-informed citizens to exercise their democratic rights and to play an active role in society,” points out the IFLA/UNESCO Public Library Manifesto. And so libraries, anchored by the independence of their physical and virtual spaces, lean into democracy.

“As librarians,” said Monroeville’s Mark Hudson, “I believe we have a responsibility to be partisan on the side of democracy, human rights, social inclusion and social justice.”

Perhaps not all would agree. There’s one more hot topic to discuss at a library debate.

Laurie Putnam is a communications consultant and a lecturer at the San Jose State University School of Information. By day she coaches students on communications and helps high-tech clients tell their stories; by night she continues her work as founder of the Library and Information Science Publications Wiki. You’ll find her online @NextLibraries.

 

The Libraries + Media series is sponsored by the School of Information at San José State University. Your source for master’s degrees and professional certificates to propel your career in the information professions.

Makers Gonna Make, Innovators Gonna Innovate at WVU’s Women’s IoT Makeathon

The following is coverage from West Virginia University of a recent ‘Hack the Gender Gap’ event, which MediaShift co-hosted with the Reed School of Media. See more coverage here.

Makers are going to make, so why not gather them in the same place to collaborate? That was the idea of the Women’s Internet of Things (IoT) makeathon, held April 1-3 at West Virginia University’s Media Innovation Center.

More than 50 students from across the nation attended the makeathon, which was developed to empower women to experiment with and dip their toes into the IoT field. The weekend was packed with women tapping their potential to solve a current problem in the media industry.

Day 1: Technology Meet-and-Greet

The first night of the makeathon was all about getting comfortable with one another and meeting other women, from students to professionals.

The main activity was the introduction of do-it-yourself electronics in the “petting zoo” stations. There were gadgets and gizmos galore, from a hobby-computer called an Arduino to a 3D printer. WVU StreamLab’s technology called a “riffle,” a thinner Arduino, also made an appearance. Participants learned how to solder robot pins with blinky eyes that many of them proudly wore the entire weekend.

Women learn how to solder a soon-to-be blinky robot in the Maker Lab. Photo Credit: David Smith.


I was impressed at how quickly everyone picked up the new technology in such a short amount of time. These women were unafraid and ready to innovate.

Day 2: Makers Gonna Make, Innovators Gonna Innovate

The morning was a blur, filled with more meeting and greeting and a kick-off message via Google Hangout with Umbreen Bhatti. Bhatti, a human-centered design coach who focuses on real human needs, gave us a pep talk about how to be problem solvers, empathize with users and use journalism for the greater good.

And then came the challenge: Identify a media or journalism problem and solve it using the IoT. The “media” parameter was deliberately set wide, so solutions could range from fixing a problem for media professionals to simply building an app. From there, we were set loose to start working in teams.

The Media Innovation Center at West Virginia University overlooks Morgantown. Photo Credit: David Smith


The organizers had divided us into teams, and our team name was derived from an object that each participant received when she arrived at the makeathon. My team – Team Scissors – was composed of Mary Chuff, a journalism student from Penn State; Birdie Hawkins, a strategic communications and recreation, parks and tourism resource student at WVU; Chelsea Bricker, a strategic communications student at WVU; and me, a journalism and wildlife & fisheries resource management student at WVU. Our mentor-facilitator was Kate Boeckman, a tracker of wearables and other new technologies for Thomson Reuters.

Team Scissors crafts ideas in a huddle room at the Media Innovation Center. Photo Credit: David Smith.


Team Scissors jumped on the wearables wagon early and tried to solve a problem of communication between younger and older generations. We nailed down a decent idea about a conversation bracelet to stimulate communication between young and old, but our IoT element was lacking.

That was especially apparent after a thorough explanation of IoT from BuzzFeed fellow Christine Sunu. She inspired us with her talk about devices that automatically connect to the internet, like a coffee maker that starts making coffee an hour before someone’s first appointment on a Google calendar.

After going back to our group huddle room, we shifted our attention to a problem in the media. We wanted to focus our wearable on working women rather than the generation gap. Women in their mid-to-late 20s sometimes are unsure of how to make new friends, especially in a new location, and we wanted to create a way for working women to find each other, stick together and help each other.

After a hysterical and motivational presentation from Tiffany Shackelford of the Association of Alternative Newsmedia about how to make money off an idea, we returned to our room and pulled our two concepts together.

Thus was born the Bonobo (not to be confused with the men’s clothing brand). It’s a keychain that “checks in” a woman when she arrives at work via GPS. Her location pops up on the app, which includes the locations of other women in the network to show that, “We’re all in this together. We’re all strong women at work now.” Other features on the app include an advice wall, event postings and coupons for socials.

The name stems from bonobo apes because, in their society, females are the dominant sex. The Bonobo reflected the entire weekend of empowering women to chase after their dreams, despite societal boundaries.

After an exhausting day of thinking and planning, with the presentation looming, we called lights-out around 9:45 p.m.

Day 3: Loud and Proud

The last day’s wake-up call was a presentation by Gina Dahlia, an associate professor of journalism at WVU. We learned all about how to give a killer presentation in fewer than five minutes, lessons that we directly applied to our pitch.

Our pitch was the focus of a two-hour brainstorming session on Sunday, and my team was excited and prepared. Putting the presentation together helped sharpen business skills I didn’t know I had.

At last, it was time for us to present our ideas to the judges. Seven other groups also presented unique solutions to completely different problems. All had excellent, impressive solutions. And then came the waiting game as the judges deliberated. Boeckman treated Team Scissors to coffee, and we nervously stirred our iced caramel macchiatos for what felt much longer than 30 minutes.

At last, they announced the winners. The Bonobo was not the winning idea, but that was OK. The judges were lovely and helpful and went through all projects, offering advice and tips for everyone.

Although we didn’t win, I was really proud of Team Scissors. We were five different women with totally different backgrounds who banded together to encourage others to do the same. The makeathon showed me that competition among women isn’t necessary and that we should build one another up instead of tearing each other down.

During the weekend, I witnessed feminine genius in a unique way. We will continue to hack the gender gap. I’m confident the gap will close one day, and I was glad to be one small part of the process.

Women learn the inner workings of an Arduino. Photo Credit: David Smith.

Jillian Clemente is a journalism and wildlife & fisheries resource management sophomore at West Virginia University. She loves to tinker with all aspects of the Internet of Things and hopes to bridge the technology and outdoor realms to help better the environment. She is a writer, hunter, bird nerd and pun lover. @jillyclementine

Be our design/code/??? intern for fall 2016!

Increasingly, we're finding more ways to celebrate women older than 50. Illustration by viz team intern Annette Elizabeth Allen!

Hey! You! With the weird talent!

We have two internships on the Visuals team. One is for photo editing, the other, well, it’s weird.

We’ve had journalists who are learning to code, programmers who are learning about journalism, designers who love graphics, designers who love UX, reporters who love data, and illustrators who make beautiful things!

Does any of this sound like you? Please join our team! You’ll learn a ton and it’ll be fun.

Here’s how to apply

Read our post about how to write a cover letter and then apply now!

The deadline for applications is May 22, 2016, 11:59pm EST.

What makes a great photo editing intern (Apply now for Fall 2016!)

NPR interns at work. Photo by Rachael Ketterer

This is not your standard photo internship!

This internship is an opportunity to learn more about the world of photo editing. Our goal isn’t to make you into a photo editor; we view this internship as a chance for you to understand what it is like to be an editor and improve your visual literacy, which can help you become a better photographer.

What you will be doing

  • Editing: You’ll be working closely with the Visuals Team’s photo editors (Ariel and Emily) on fast-paced deadlines – we’re talking anywhere from 15 minutes to publication, to short-term projects that are a week out. You’ll dig into news coverage and photo research, learning how to communicate about what makes a good image across a range of news topics, including international, national, technology, arts and more.

  • Photography: Depending on the news cycle, there may be opportunities to photograph DC-area assignments. This can mean you’d have one or two shoots in a week, or maybe just a couple shoots in a month. You’ll work closely with a radio or web reporter while out in the field, and a photo editor will go through your work and provide feedback for each assignment. There will also be a chance to work on portraiture and still lifes in our studio.

  • We also encourage each intern to create a self-directed project to work on throughout the semester. It can be an Instagram series, video, photo essay, text story or anything in-between. You can work independently or with another intern or reporter.

You will be part of NPR’s intern program, which includes 40-50 interns each semester, across different departments. There will be coordinated training and intern-focused programming throughout the semester, which includes meeting NPR radio hosts, career development and other opportunities. As an intern, you will be treated as a member of the team. Many NPR employees are former interns and they’re always willing to help current interns.

Eligibility

Any student (undergraduate or graduate), or person who has graduated no more than 12 months prior to the start of the internship period to which he/she is applying is eligible. Interns must be authorized to work in the United States.

Who should apply

We’re looking for candidates who have a strong photojournalism background. An interest in editing, or experience with video/photo editing, is a nice plus. It’s also helpful if you’ve completed at least one photojournalism-focused internship prior to applying (let us know if you have!), though it’s not necessary. A portfolio, however, is required.

We also want folks who can tell us what they would like to accomplish during their time at NPR. What do you want to learn? What do you want to try? We try to shape each internship around our intern, so we rely on you to tell us what goals you have for your time with us!

So how do I apply?

Does this sound like you? Read our post about how to write a cover letter and then apply now!

The deadline for applications is May 22, 2016, 11:59pm EST.

How Virtual Reality Will Revolutionize Multiple Industries

A version of this piece first appeared on Medium from the Tow-Knight Center for Entrepreneurial Journalism at CUNY’s Graduate School of Journalism.

Goldman Sachs VR/AR software market assumptions for year 2025

Virtual and augmented reality aren’t just the latest cool thing. Serious reports are signaling tectonic shifts poised to disrupt a wide range of businesses.

According to a recent report from Goldman Sachs Research, VR and AR have the potential to become the next big computing platform. The report indicates that “VR and AR can reshape existing ways of doing things, from buying a new home to interacting with a doctor or watching a concert.”

Let’s consider some examples…

Video Games

The video game industry is the main force behind the rise of today’s virtual and augmented reality market. Historically, gamers’ demand for the best graphics has stimulated computer markets. There are already great game titles for HTC Vive, Oculus Rift and PlayStation VR, and more are on the way. According to a recent report from SuperData Research, the worldwide market for VR gaming will reach $5.1 billion in revenue in 2016, with an install base of 55.8 million consumers. (Remember that PlayStation VR can already draw on the 36 million-strong PS4 user base.) This is an important sign that video games will drive the early adoption of virtual reality. Other industries should watch the lessons learned closely.

Healthcare

One of the biggest early adopters of virtual and augmented reality has been the healthcare industry. For years, VR & AR technology has been a critical element in training health professionals for surgery. The greatest advantage of this technology is that it offers training in a safe environment.

Real Estate

Architects and brokers can benefit from VR & AR to sell houses in a highly engaging way. Sotheby’s real estate agents and the sales office for Luma — a new 24-story condominium development in Seattle’s First Hill neighborhood — have already started using VR headsets as a sales tool. Virtual reality offers depth and space that no video can show. The ability to show a realistic view of “currently non-existing spaces” can be quite engaging for potential customers.

Engineering

Virtual and augmented reality technology enables engineers to see and test their projects in a safe environment. Engineers can use VR and AR from the first concept designs through the whole implementation process and make changes accordingly.

Education & Training

One of the biggest application areas for VR & AR is education and training. These applications may be used by the military, schools, or public and private organizations. VR & AR’s ability to simulate real-life scenarios in a safe environment creates endless possibilities to enhance the way we educate kids, train personnel or visualize complex theories without spending vast amounts of money.

What Can We Do To Innovate Our Businesses?

In a blog post, I announced my new Tow-Knight Entrepreneurial Journalism project Haptical as a VR & AR news service. I simply started by curating a weekly newsletter with an aim to give a better understanding of the VR & AR markets.

Over four weeks, I received very positive feedback from Haptical’s followers. The subscription list kept growing — not in impressive quantity, but with “quality.”

Currently there are around 125 people on the list — mainly from media, tech, education and marketing industries. The open rate is above 40 percent for the newsletter, which is not a bad number considering how busy our inboxes may be.

This small but significant interest in the Haptical newsletter has encouraged me to think more about what I could do next.

After thorough market research, interviews with potential users and the advice of my valuable Tow-Knight professors and mentors, I’ve decided to take Haptical to a new level.

While billions of dollars are being poured into the VR & AR market, a recent study finds that two-thirds of Americans are still unaware of what VR & AR offer.

The reason for this lack of awareness isn’t clear, but there appears to be an information gap that needs to be filled with expertise.

What’s needed are the kind of experts motivated to connect the dots between what businesses need and what new technologies offer. These are the experts who will eventually turn VR & AR into a human platform, rather than just a device-centric technology platform — which is one of the main reasons behind the public’s current lack of awareness.

In order to fill this information gap, Haptical will offer a strategy service dedicated to helping organizations innovate and grow through next-generation computing platforms, like VR and AR.

Haptical will advise public organizations and private companies on the exciting possibilities that VR & AR offer to transform the way we communicate, entertain ourselves, educate our kids, train our employees and add exceptional value to our businesses. Haptical will also organize events, workshops and trainings, and publish market reports over time.

This is a long journey of discovery.

If you are excited like me about what VR & AR may bring to our lives, I’d be more than happy to talk to you!

Deniz Ergurel, 2016 Tow-Knight Fellow, is a tech journalist, advisor and entrepreneur. Find him on Twitter or Facebook. Follow Haptical newsletter on Medium or subscribe here.

Lunchbox Update: We’re Dropping Support For Electron

Last year, NPR Visuals sent a team to OpenNews’s Portland Code Convening to create Lunchbox, a suite of newsroom tools for making images for social media sharing, designed to be easily deployable in newsrooms.

We decided to experiment with a new way of distributing newsroom technology – desktop apps, built with the brilliant library Electron. Electron allows you to build webapps with JavaScript and package them into native software. We also maintained the ability to deploy the app as a static webapp on Amazon S3 or a fileserver.

And truth be told, we’re still using Lunchbox as a web app, not as a desktop app. As it turns out, installing desktop apps across our newsroom with a corporate IT policy is pretty much impossible for us, and other Lunchbox users have faced similar problems across newsrooms.

Truth be told again, the Electron app for Windows was always super buggy in perplexing ways.

After talking to a few of our biggest users about Lunchbox, we’ve decided to drop Electron support for Lunchbox. We are now encouraging you to deploy the app to Amazon S3 or another fileserver. The processes for doing this are documented.

Making Lunchbox web-app-first requires one change to Waterbug: because of cross-domain issues, loading images into Waterbug from external URLs is unreliable and pretty much impossible from our end. So we’ve removed that feature from the app – users will need to download the image locally and then upload it into Waterbug.

Despite removing support, I’ve left all the Electron code (basically a fab command and some npm config) in the app, in case anyone wants to continue building desktop apps (or fix the Windows app!). But we will not be actively developing or building desktop app versions of Lunchbox in the future.
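For anyone who does pick that work back up, the npm side of Electron packaging typically comes down to a couple of scripts along these lines. This is an illustrative package.json fragment using the third-party electron-packager tool, not the actual Lunchbox configuration (the repo's real fab command and npm config may differ):

```json
{
  "scripts": {
    "start": "electron .",
    "package-mac": "electron-packager . Lunchbox --platform=darwin --arch=x64 --out=dist",
    "package-win": "electron-packager . Lunchbox --platform=win32 --arch=x64 --out=dist"
  },
  "devDependencies": {
    "electron": "*",
    "electron-packager": "*"
  }
}
```

With config like this, `npm run package-win` would emit a native Windows build into `dist/` — which is exactly the artifact we're no longer maintaining.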

If you would like to contribute to Lunchbox in any way, the repo is here. Feel free to open issues and submit pull requests!

Can We Save Journalism?

This piece first appeared on Medium from the Tow-Knight Center at CUNY’s Graduate School of Journalism.

Whatever happens next, I am always going to blame Hunter Page and his damn fool questions for starting me off on this journey. Hunter: “Pete, how are we going to save journalism?” Me: “Save journalism? I dunno.” Hunter: “But you must, we must. How are we going to know what’s going on when all the journos are sacked?”

That was almost four years ago, and yes, journalism did look a tad sick. I for one was out of a job, working out what to do next after taking a payout from the Sydney Morning Herald. (My best answer was starting PolitiFact Australia, an offspring of the Pulitzer Prize-winning fact-checking service currently doing wonders for Donald Trump.)

Peak Content?

Chugach Mountains, Alaska. Photo by Paxson Woelber and used with Creative Commons license.

You’d be forgiven for thinking that journalism still appears a bit unwell, even though I can see plenty of reasons to be optimistic. Most legacy media companies are having to cut away at the cost base to cope with falling revenues. It would be great to see them paying as much attention to revenue innovation as cost reduction. But, hey, that’s another article.

Yes, journalism is going through a massive change and many journos have been sacked. About 1600 full-time positions have gone in Australia over the past five years. It is happening across the world.

But we, the public, still know what’s going on, or, to be more accurate, there’s still plenty of content out there. Masses of it. Sure, roughly half of it seems to be about cats and Kardashians, but if you have the time, you can read more, and more widely, now than ever before.

In fact, as Kevin Anderson recently argued in Media Briefing, we’ve probably reached Peak Content: the point at which the glut of “things to read, watch and listen to becomes completely unsustainable”.

Anderson, a media consultant, online innovator and former executive editor at Gannett, writes: “One of the few workable business models in this age of digital disruption has been to produce as much content as cheaply as possible.

“But flooding a glutted market only leads to a deflationary spiral until it becomes completely uneconomic to produce that commodity. It is a simple matter of economics, and it doesn’t matter whether that commodity is maize or media.”

It’s not only the journos pumping out the content, though my guess is that most journalists in full-time employment are pushing out about twice as much journalism now as they did a decade or so ago  —  and across an ever-growing number of platforms. I mean, who’d have thought the Wall Street Journal would launch on Snapchat?

By the middle of last year, 400 hours of video were being uploaded to YouTube every minute, Anderson notes. That’s probably grown by more than 10 per cent since then. And that’s before we start counting the daily avalanche of shared social media posts.

I don’t know if we’ve actually reached peak content — will someone shout “OK, everybody, you can stop filing” when we get there? — but the glut of material is why another of Hunter’s questions rings far truer and louder than his first: “In a world awash with content, how do you know who or what to trust?”

And, to add my own, who’s going to pay for all this journalism? (Anderson contends the media industry has reached the point where ad revenues won’t support current outputs.)

And how are we — the audience — going to value it?

Yes, that’s something to have a crack at, for sure. Jeff Jarvis, the resident agent provocateur at the City University of New York’s graduate school of journalism, where I am currently studying, argues that journalism needs to redefine its mission: seeing ourselves as content creators is a trap. A more productive and sustainable idea, he argues in “Geeks Bearing Gifts,” is journalist as service provider.

“Consider journalism as a service. Content is that which fills something. Service is that which accomplishes something. To be a service, news must be concerned with outcomes rather than products. What should journalism’s result be? That seems obvious: better-informed individuals and a better informed society.”

This all sounds, well, nice and kinda common sense. A better-informed public? What’s not to like? Most journalists would consider what they do a service to readers.

But, as Jarvis argues, it is the journalists who have defined the terms of that service rather than the public. That’s the bit that has changed. Forever.

“This idea of outcomes-oriented journalism requires that we respect the public and what it knows and needs to know. It forces us to stop thinking that we know better than the public. It leads us to create systems to gather the public’s knowledge.”

The Divine Right of the Editor

Image from Wikipedia.

And, to be frank, most journalists and editors are pretty poor at understanding, or even wanting to understand, what the public thinks. The Divine Right of the Editor is an addictive drug, hard to quit. I know.

I also know that despite the co-dependence of journalists and their readers (certainly from the journo side), many editors still have trouble seeing the relationship as an equal partnership or in fact one in which the audience should by rights have the upper hand.

A small example: one of my proudest moments as editor-in-chief of the Herald was to establish an independent in-house readers’ editor/quasi ombudsman, a senior member of staff whose job it was to question the decisions, practices and methods of the masthead and its editors in response to reader inquiries.

Soon after I left the Herald, my successor closed the job down. His rationale: why waste good money paying someone to shit in the paper’s own nest? Sure, he’s more than entitled to make hard calls. They were, and still are, tough times. But what sort of message did that action send to the readers? They are the last people you want to put offside. Right?

Anyway, back to Hunter, whom I should have mentioned isn’t a journalist. His mum and sister are. He’s in finance.

Hunter: “In a world awash with content, who are you going to trust?”

Well, I think we still can and should trust journalists and journalism, but, as Jarvis says and I’ve opined about before, we also have to listen to what the audience wants and needs.

We have to make, find or share the tools that allow that listening to happen, and we have to be entirely transparent about how, what and why we are doing it.

One of the many lessons from PolitiFact was the importance of listening, explaining and having an ongoing relationship with the audience. You can never do enough. On reflection, I should have done more. Next time. This time.

So, with the words of Page, Jarvis, Anderson and many others ringing in my ears, I’ve been thinking about a new journalism service, currently called GoClevr, that will curate and aggregate the sharpest, most insightful and, sometimes the most surprising, analysis, opinion and commentary by journalists.

It will give readers who don’t have the time to find their favourite bylined journalists or discover new ones, a daily digest of the best stuff going around.

Initially, I will curate the material (and if you are interested, please sign up to the coming-soon newsletter; details on the site) but the plan is to give readers the ability to pick the subjects and the authors they like.

GoClevr will listen and learn from the readers. It will give readers the information to make informed decisions about current events.

This is not the first idea in this space and it won’t be the last. But most other aggregators mainly offer news articles (and let’s face it, news is everywhere) and they don’t curate via the byline or author.

I have a few more ideas about how GoClevr will stand out from the pack, how it will work across publishers and most importantly, how it will deliver value to readers, journalists and publishers but I will save them for the next post.

Right now this is a voyage of discovery. Working with others at CUNY and with people in Sydney and Zurich, I am trying to work out whether it is something readers will value. I don’t know the answer yet, and I won’t really know until GoClevr starts publishing and gathering feedback.

But I do think gaining insights into current events is a valuable commodity and many journalists, young and old, known and unknown, have something to contribute.

The challenge of the age is to climb up and over Peak Content and bring back only the good stuff. That is the challenge I want to take on, with or without oxygen. Watch out for more updates.

I’d like to acknowledge the support of Sakura Sky, Hunter Page, Richard McLaren, Alistair Munro, and my peers and colleagues at University Technology Sydney and CUNY, especially Jeremy Caplan.

Peter Fray is an Australian editor, journalist and recent academic. He is currently a digital entrepreneurial fellow at CUNY and a professor of journalism practice at University of Technology Sydney and the former editor-in-chief or editor of The Sydney Morning Herald, The Sun-Herald, The Sunday Age and the Canberra Times. In 2013, he founded the fact-checking PolitiFact Australia and until joining UTS in late 2015, was the deputy editor of The Australian. After 30 years in journalism, he is starting to listen to the audience.

Reporter/Project Coordinator, Great Lakes Regional Journalism Collaborative

Position Summary

WNED | WBFO, Buffalo, is seeking an experienced journalist to serve as a reporter and project coordinator for the Great Lakes Regional Journalism Collaborative. The position splits its time evenly between the two roles and reports to the Managing Editor. It is responsible for multimedia reporting of news and information stories related to the Great Lakes and for coordinating overall project activities to accomplish the objectives of the RJC.

Duties and Responsibilities

Multimedia Journalist/Reporter

WOUB is accepting applications for a Multimedia Journalist/Reporter. This position produces high quality broadcast pieces for radio, TV and online focused on topics related to the areas surrounding the Ohio River. Must be able to mine for stories and conduct interviews for broadcast on regional radio and TV, with national submission. Position works primarily independently. Must have thorough knowledge of journalistic ethics and AP Style. Need to possess on-air hosting skills, and have extensive knowledge of technical aspects of digital recording and editing for both audio and video.

Director of Development

Director of Development – KRWG at New Mexico State University seeks an experienced fundraising leader who will plan, develop, implement and manage marketing and fundraising activities, including philanthropic initiatives (planned giving and major gift programs); direct advancement campaigns on-air and online; and pursue corporate support for the federally licensed university radio and television stations, in accordance with federal and state regulations.

Does anything HAPPEN in your story?

In public radio, we cover a lot of policy issues affecting LOTS of people. These are hard stories to tell. We gather hours of tape with policymakers and people affected by policies – but it’s challenging to turn that into a compelling narrative.

Renata Sago of WMFE (edited by Brett Neely, NPR) solved that problem by narrating one thing happening to one person over the course of her story (you can listen above). She was reporting on Florida ex-felons trying to get their voting rights back. They must appeal to a powerful clemency board. 

Renata went to the clemency hearing and identified a person who could serve as a main character. Here’s how that one person is introduced:

SAGO: Justin drove seven hours for a five-minute chance to make his case. He waits in the back of the room, clutching an Expando file full of court papers. They date back to one mistake.

JUSTIN: In 1994, Miami, I was snatching a gold chain, and I did 31 months.

SAGO: He was 16.

JUSTIN: I never thought that snatching a gold chain would lead to this, that I’m in State Capitol at 38 years old trying to ask them for my rights back.

As the story proceeds, we hear the big picture – Florida’s difficult clemency process and its history – but we stick around to discover what happens to Justin. And in the end, we find out.

This is a simple story structure: the narrative arc of one person’s experience, surrounded by context. It’s simple, but vastly more listenable than a story that offers no protagonist and no reason to keep listening until the end.

                                                                        – Alison

PS - From NPR’s Editorial Training website, here are more strategies for structuring your stories, in order to give them narrative arcs.

Be our design/code/??? intern for summer 2016!

Increasingly, we're finding more ways to celebrate women older than 50. Illustration by viz team intern Annette Elizabeth Allen!

Hey! You! With the weird talent!

We have two internships on the Visuals team. One is for photo editing, the other, well, it’s weird.

We’ve had journalists who are learning to code, programmers who are learning about journalism, designers who love graphics, designers who love UX, reporters who love data, and illustrators who make beautiful things!

Does any of this sound like you? Please join our team! You’ll learn a ton and it’ll be fun.

Here’s how to apply

Read our post about how to write a cover letter and then apply now!

The deadline for applications is January 3, 2016, 11:59pm EST.

We’re looking for a developer to help us build Carebot

The NPR Visuals team

We’re looking for a programmer to join our team for a few months.

Your mission? Break the news’s addiction to pageviews, by bringing meaningful analytics to journalists.

Why?

At NPR Visuals, our goal is to make people care. To get them to give a shit about tough problems and people they’ve never met. It’s our job to create empathy in the world.

If that’s our goal, how do we know if we’re accomplishing it? How do we celebrate success?

Enter the Carebot!

Because what you choose to celebrate is super important. If your organization celebrates pageviews, people will create work that gets more pageviews. But it’s not our job to get clicks. Our job is to touch hearts. And so we must celebrate stories that do that.

Basic web analytics don’t help us do that. So we applied for a Knight grant to build something we’re calling Carebot.

Carebot will be a little system for gathering, analyzing and distributing better analytics. (Specifically, there’ll be some JavaScript, some back-end server and API work, and a bunch of notification channels like email and Slack bots.) We’ve only got a few months to work on it, so we’re building a prototype. It will help us test an idea: that better analytics make for better journalism.
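To make that concrete, here's a hypothetical sketch of the kind of metric such a system might compute instead of pageviews. This is not the actual Carebot code, and the event names ("heartbeat", "story-end") and the 30-second threshold are invented for illustration:

```javascript
// Hypothetical sketch: score a story by how many visitors engaged
// meaningfully, not by how many pages loaded. Assumes invented
// client-side events: a 5-second "heartbeat" ping while the reader
// is active, and a "story-end" event when they reach the end.
function careScore(events) {
  const sessions = new Map();
  for (const e of events) {
    if (!sessions.has(e.session)) {
      sessions.set(e.session, { seconds: 0, completed: false });
    }
    const s = sessions.get(e.session);
    if (e.type === 'heartbeat') s.seconds += 5; // each ping = ~5s of attention
    if (e.type === 'story-end') s.completed = true; // read to the end
  }
  const all = [...sessions.values()];
  // "Engaged" = at least 30 seconds of activity, or read to the end.
  const engaged = all.filter((s) => s.seconds >= 30 || s.completed);
  return {
    visitors: all.length,
    engaged: engaged.length,
    engagedRate: all.length ? engaged.length / all.length : 0,
  };
}

// Three visitors: one bounced, one read to the end, one lingered.
const report = careScore([
  { session: 'a', type: 'heartbeat' },
  { session: 'b', type: 'story-end' },
  { session: 'c', type: 'heartbeat' },
  { session: 'c', type: 'heartbeat' },
  { session: 'c', type: 'heartbeat' },
  { session: 'c', type: 'heartbeat' },
  { session: 'c', type: 'heartbeat' },
  { session: 'c', type: 'heartbeat' },
]);
console.log(report.visitors, report.engaged); // 3 2
```

A number like `engagedRate` is something a newsroom could actually celebrate — and it's the sort of figure the notification bots would push to email or Slack.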

Who? When? Where?

Carebot will be built by a small team next winter/spring. You’ll be working closely with UX expert Livia Labate, our lead architect David Eads, and other members of the Visuals team.

We’re based in Washington, DC. It’s cool if you work remotely, but we’ll want you here a couple times during the project. (We’d cover those travel costs.)

It’s a three-month gig, Februaryish-Aprilish.

Interested? Email bboyer@npr.org.

Know somebody who’d love this? Please spread the word!!!

What makes a great photo editing intern (Apply now for Summer 2016!)

NPR interns at work. Photo by Rachael Ketterer

This is not your standard photo internship!

This internship is an opportunity to learn more about the world of photo editing. Our goal isn’t to make you into a photo editor; we view this internship as a chance for you to understand what it is like to be an editor and improve your visual literacy, which can help you become a better photographer.

What you will be doing

  • Editing: You’ll be working closely with the Visuals Team’s daily news photo editors (Ariel and Emily) on fast-paced deadlines – we’re talking anywhere from 15 minutes to publication, to short-term projects that are a week out. You’ll dig into news coverage and photo research, learning how to communicate about what makes a good image across a range of news topics, including international, national, technology, arts and more.

  • Photography: Depending on the news cycle, there may be opportunities to photograph DC-area assignments. This can mean you’d have one or two shoots in a week, or maybe just a couple shoots in a month. You’ll work closely with a radio or web reporter while out in the field, and a photo editor will go through your work and provide feedback for each assignment. There will also be a chance to work on portraiture and still lifes in our studio.

  • We also encourage each intern to create a self-directed project to work on throughout the semester. It can be an Instagram series, video, photo essay, text story or anything in-between. You can work independently or with another intern or reporter.

You will be part of NPR’s intern program, which includes 40-50 interns each semester, across different departments. There will be coordinated training and intern-focused programming throughout the semester, which includes meeting NPR radio hosts, career development and other opportunities. As an intern, you will be treated as a member of the team. Many NPR employees are former interns and they’re always willing to help current interns.

Eligibility

Any student (undergraduate or graduate), or person who has graduated no more than 12 months prior to the start of the internship period to which he/she is applying is eligible. Interns must be authorized to work in the United States.

Who should apply

We’re looking for candidates who have a strong photojournalism background. An interest in editing, or experience with video/photo editing, is a nice plus. It’s also helpful if you’ve completed at least one photojournalism-focused internship prior to applying (let us know if you have!), though it’s not necessary. A portfolio, however, is required.

We also want folks who can tell us what they would like to accomplish during their time at NPR. What do you want to learn? What do you want to try? We try to shape each internship around our intern, so we rely on you to tell us what goals you have for your time with us!

So how do I apply?

Does this sound like you? Read our post about how to write a cover letter and then apply now!

The deadline for applications is January 3, 2016, 11:59pm EST.

Gut check: How to capture the emotion of a moment

What do you do after finishing a bunch of interviews in the field? 

You double check your recorder… Was I actually rolling? Phew!… You record a bunch more ambience… The producers will always ask me for more… What else?

NPR’s Sam Sanders does what he calls “an emotional gut check” (see photo above for Sam’s demonstration). He pauses to ask himself, “How do I feel? How do the folks I talked to feel? How does the entire moment feel?” 

Sam explains why: 

“I do it because so often in our reporting, we’re focused on the facts – how big was the crowd, who said what, what might happen next, etc. But a lot of times, the color and the FLAVOR of the piece will come from the mood of the scene, something that isn’t always found in your audio when you’re playing it back later. I’ve found that, at least for me, I have to actually stop myself, to ask what the mood is.”

For Sam, these gut checks frequently result in down-to-earth, relatable pieces of writing. Like this, from a story about mega-Ben Carson fans:

ADDY EARHART: I was shaken.

SANDERS: You were shaken.

EARHART: Yeah.

SANDERS: Why were you shaken?

EARHART: I was just so nervous.

SANDERS: She was like a teenage fan meeting a Taylor Swift or a One Direction. At every event, there are attempts at weird quick hugs, selfies taken before handlers quickly push fans away and tears…

Sam explains that he wouldn’t have come up with the comparison to Taylor Swift and One Direction if he hadn’t paused to ask himself, “What does this scene feel like?” 

And another thing: Sam always runs these sentiments by his editor – because every gut check needs a gut check.

                                                                        – Alison 

Tow Report Details the Power and Promise of Crowdsourcing


First published Nov. 23, 2015 on Mediashift.org.

Jan Schaffer co-authored this report with Mimi Onuoha, a Fulbright-National Geographic fellow and data specialist, and Jeanne Pinder, founder of ClearHealthCosts.com, which crowdsources medical costs.

When CNN recently announced it was ending its longstanding iReport crowdsourcing efforts to, instead, source stories directly from social media streams, it was a notable marker signaling how news organizations are making different choices about audience growth and engagement.

It also affirmed the findings in our Guide to Crowdsourcing, released Nov. 20 by Columbia’s Tow Center for Digital Journalism.

As far as engagement around creating content, our team saw two paths clearly emerging: One involves news organizations investing major resources into inviting and organizing input from their audiences. The other involves culling non-solicited contributions from social media to help either create a story or identify story ideas.

The label “crowdsourcing” has been applied to both. Indeed, the term has become conflated with many things over the last decade. Some regard all story comments as crowdsourcing. Others apply it to any user-generated content, distributed reporting, collaborative journalism, networked journalism, participatory journalism and social journalism as well. To be sure, all of these share attributes.

Our task, we decided, was to zero in on journalism efforts that involve specific call-outs. Then, through interviews, a survey, and case studies, we developed a new typology to spotlight how journalists are using crowdsourcing.

OUR DEFINITION
Here’s our definition: Journalism crowdsourcing is the act of specifically inviting a group of people to participate in a reporting task — such as newsgathering, data collection, or analysis — through a targeted, open call for input, personal experiences, documents, or other contributions.

Using that definition, we found that most crowdsourcing generally takes two forms:

  • An unstructured call-out, which is an open invitation to vote, email, call, or otherwise contact a journalist with information.
  • A structured call-out, which engages in targeted outreach to ask people to respond to a specific request. Responses can enter a newsroom via multiple channels, including email, SMS, a website, or Google form. Often, they are captured in a searchable database.

We assert that crowdsourcing requires a specific call-out rather than simply harvesting information available on the social web. We believe that the people engaging in crowdsourcing need to feel they have agency in contributing to a news story to be considered a “source.”

While crowdsourcing efforts don’t fit neatly into classifications, for this guide, we’ve organized our typologies by six different calls to action:

  1. Voting — prioritizing which stories reporters should tackle.
  2. Witnessing — sharing what you saw during a breaking news event or natural catastrophe.
  3. Sharing personal experiences — divulging what you know about your life experience. “Tell us something you know that we don’t know.”
  4. Tapping specialized expertise — contributing data or unique knowledge. “We know you know stuff. Tell us the specifics of what you know.”
  5. Completing a task — volunteering time or skills to help create a news story.
  6. Engaging audiences — joining in call-outs that range from informative to playful.

We found that crowdsourcing has produced some amazing journalism. Look at ProPublica’s efforts on Patient Safety, political ad spending, or Red Cross disaster assistance. Or check out The Guardian’s efforts to chronicle people killed by police in the U.S., or track expenditures from Members of Parliament. See what WNYC has done to map winter storm cleanup. Or look what stories listeners wanted CNN Digital’s John Sutter to do in its 2 Degrees project on climate change.

Crowdsourcing made all these stories possible.

It has also made journalism more iterative – turning it from a product into a process. It enables newsrooms to build audience entry points at every stage of the process — from story assigning, to pre-data collection, to data mining, to sharing specialized expertise, to collecting personal experiences and continuing post-story conversations on Facebook and elsewhere. Moreover, experienced practitioners are learning how to incrementally share input in ways that tease out more contributions.

We see how today’s crowdsourcing would not be possible without advances in web technologies that have made it easier for journalists to identify and cultivate communities; organize data; and follow real-time, breaking-news developments.


Journalistic Tensions

Still, crowdsourcing produces some tensions within the industry. Some journalists worry about giving the audience too much input into what their newsrooms cover. Others doubt the accuracy of the contributions citizens make — a concern that long-time crowdsourcers dismiss. Many investigative reporters, in particular, recoil at telegraphing their intentions through an open call for contributions.

Others balk at committing the resources. Crowdsourcing can be a high-touch activity. Journalists must strategize about the type of call-out to make, the communities to target for outreach, the method for collecting responses, and the avenues for connecting and giving back to contributors to encourage more input. That is all before the contributions are even turned into journalism.

We found that, for all its potential, crowdsourcing is widespread and systemic at just a few big news organizations — ProPublica, WNYC, and The Guardian, for example. At other mainstream news organizations, only a handful of reporters and editors — and not the institutions themselves — are the standard bearers.


Crowdsourcing and Support for News

There are intriguing clues that there is a business case for crowdsourcing. Indeed, some crowdsourcing ventures, such as Hearken and Food52, are turning into bona fide businesses.

For digital-first startups, in particular, crowdsourcing provides a way to cultivate new audiences from scratch and produce unique journalism. Moreover, once communities of sources are built, they can be retained indefinitely — if news organizations take care to maintain them with updates and ongoing conversation.

Amanda Zamora, ProPublica’s senior engagement editor, credits their crowdsourcing initiatives with building pipelines directly to the people who are affected.

“We are creating lists of consumers interested in our stories,” she said in an interview.

She recently spearheaded the creation of the Crowd-Powered News Network, a venue for journalists to share ideas.

Jim Schachter, vice president for news at WNYC, said the engagement levels seen in crowdsourcing help the station get grants and bolster its outreach to donors.

Within the news industry, however, we think wider systemic adoption awaits more than enthusiasm from experienced practitioners and accolades from sources who welcome contact. Ways of measuring the impact of engaging in crowdsourcing initiatives and analyzing its value to a newsroom must be further developed.

We ask, for instance, whether crowdsourced stories have more real-world impact, such as prompting legislative change, than other types of journalism do.

To that end, we advocate for more research and evidence exploring whether crowdsourcing can foster increased support for journalism. That support might take the form of audience engagement, such as attention, loyalty, time spent on a site, repeat visits, or contributing personal stories. Or it might involve financial support from members or donors, from advertisers who want to be associated with the practice, or from funders who want to support it.


“The soufflé collapses” and other writing that surprises

The man in this photo is Ilya Marritz. He is NOT a football player. He’s the host of WNYC’s podcast “The Season,” which ends its season this week. Ilya has been narrating, in serialized form, the story of the underdog Columbia football team.*


Also, Ilya is a great writer. What he does so well is describe things in surprising and specific ways. Here are just a few examples:

  • When Columbia almost wins its first game in 2 years and then blows it, “the soufflé collapses.”
  • When Ilya describes a post-game locker room, it smells of “Lycra marinated in sweat.”
  • When the team misses a field goal during a game in Ithaca, NY, “the ball flies off in the direction of Syracuse.”

In these examples, the descriptions aren’t predictable and, because of that, they’re especially evocative. “Lycra” (specific) and “marinated” (surprising) is much better than “uniforms” and “soaked.” 

Ilya’s writing is also restrained. If every sentence were peppered with this kind of description, it would be too heavy on the ears. So he’s sparing; every once in a while, Ilya drops in a gem. 

You can do that, too. In any story – long or short – you can offer what some people call “a grace note,” or “spark,” or a moment of “flair.” Just a word or phrase. In this post from the NPR Editorial Training website, it’s described as dropping “gold coins along the path… every 60 seconds or so.”

                                                                                 – Alison

*(Disclaimer: Ilya is a friend of mine.)

Photo credits: Matt Collette (above, Ilya in the uniform); WNYC (below, podcast logo)


Do Visual Stories Make People Care?

Since we published Borderland in April of 2014, the NPR Visuals Team has been iterating on a style of storytelling we call “sequential visual stories.” They integrate photography, text, and sometimes audio, video or illustration into a slideshow-like format. My colleague Wes Lindamood already wrote more eloquently than I can about the design approach we took to evolving these stories, and you should absolutely read that.

In this blog post, I will use event tracking data from Google Analytics to evaluate the performance of certain features of our sequential visual storytelling, focusing on our ability to get users to start and finish our stories.

With a few exceptions, we have consistently tracked user engagement on these stories and, with over 2 million unique pageviews across our sequential visual stories, we can come to some conclusions about how users interact with this type of storytelling.

Why Do This?

At NPR Visuals, our mission is to make people care. In order to determine whether or not we are making people care, we need a better tool than the pageview.

You may have heard the Visuals Team recently received a Knight Prototype Grant to build a product we’re calling Carebot. We’re hoping the Carebot can help us determine, automatically and quickly, whether people cared about our stories. Consider this exploration a very manual, very early, very facile version of what Carebot might do.

Clear Calls To Action Work

A consistent feature among our set of stories is a titlecard that presents a clear call to action, often asking users to “Go” or “Begin”, which advances the user to the next slide. Using Google Analytics, we were able to track clicks on these buttons. Of the 16 stories for which we tracked begin rates, nine have begin rates greater than 70%.

An example titlecard

For the stories where begin performance fell flat, we can point to a clear reason: “Put on your headphones” prompts or similar notices that audio will be a part of the experience. Of all users who saw a titlecard without an audio notice, 74.4% clicked to the next slide. If an audio notice was on the slide, only 59.8% of users faced with that titlecard moved forward. The lowest-performing titlecard was one that prompted users to “Listen” instead of “Begin.”
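As a rough sketch, the begin rate is just begin-button clicks divided by titlecard views. The function and counts below are hypothetical, chosen to mirror the percentages above; they are not NPR’s actual analytics data.

```python
def begin_rate(titlecard_views, begin_clicks):
    """Share of users who saw the titlecard and clicked to the next slide."""
    return begin_clicks / titlecard_views

# Hypothetical event counts, picked so the ratios match the rates cited above.
no_audio = begin_rate(titlecard_views=10_000, begin_clicks=7_440)
with_audio = begin_rate(titlecard_views=10_000, begin_clicks=5_980)
print(f"no audio notice:   {no_audio:.1%}")    # 74.4%
print(f"with audio notice: {with_audio:.1%}")  # 59.8%
```

In practice these two counts would come from Google Analytics event totals: one event fired when the titlecard renders, another when the button is clicked.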

It is also worth noting that we have tried audio notices at other places in our stories, and we see similar levels of dropoff. In Drowned Out and Arab Art Redefined, we placed the audio notice on a second slide. With Drowned Out, only 61.28% of users got past both slides, while with Arab Art Redefined, only 44.3% did. Though these are two examples with lower traffic than most stories, it seems clear that this is not a more effective way of getting users into the story.

Does this mean we should remove audio notices from titlecards? Or stop making sequential visual stories that integrate audio altogether? Not necessarily. As we will see later, stories with audio perform better on metrics that factor out the begin rate.

People Read — Or Watch! — Sequential Visual Stories

One of the most important metrics for determining the success of our stories is completion rate. Completion is defined as when a user reaches the last slide of content in a sequential visual story.

We can calculate the mean completion rate for our sequential visual stories by taking the overall completion rate of each story, adding them together, and dividing by the total number of stories. This places equal weight on each story rather than letting certain stories with outsized traffic numbers skew the results.

Across our sequential visual stories, this method shows a 35.4% completion rate on average.
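The unweighted average described above can be sketched in a few lines of Python. Only the three completion rates used here are cited in this post; the full 16-story dataset is NPR’s own and is not reproduced.

```python
def mean_completion_rate(rates):
    """Unweighted mean: every story counts equally, regardless of traffic."""
    return sum(rates) / len(rates)

# Completion rates for the three stories cited in this post:
# Borderland, The Unthinkable, Plastic Rebirth.
story_rates = [0.201, 0.576, 0.332]
print(f"{mean_completion_rate(story_rates):.1%}")  # 37.0% for these three
```

Averaging the rates themselves, rather than pooling raw completions over raw pageviews, is what keeps a single high-traffic story from dominating the figure.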

Compare that to Chartbeat data about the average web page, where 55% of users spend less than 15 seconds on a page. Chartbeat never talked about completion rate, but if the average web page were to compete with our sequential visual stories, 85-90% of users who spend more than 15 seconds with a page would have to finish the page. That seems unlikely.

However, completion rates varied wildly across stories. In our first sequential visual story, Borderland, we achieved a completion rate of only 20%. It was also 130 slides long, nearly twice as many slides as any other sequential visual story we’ve done. Meanwhile, The Unthinkable, a heavy story about the “war on civilians” in Yemen, managed a completion rate of 57.6%, our highest ever. It clocked in at 35 slides.

Despite these two data points, there seems to be no correlation between number of slides and completion rate. For example, Plastic Rebirth, a relatively quick story about plastic surgery in Brazil, had only 33 slides and a completion rate of 33.2% (a number we were still pretty happy with).

A Better Completion Rate

However, as demonstrated by the wide variance in begin rate across stories, completion rate is highly influenced by the ability for the titlecard to entice people to continue into the story. So I created a new metric, what I call “engaged user completion rate,” to find which of our stories were doing the best at pulling an engaged user all the way through. Engaged user completion rate uses the number of users who began the story as the denominator instead of the number of unique pageviews.
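The difference between the two metrics is purely in the denominator, and can be sketched as follows. The counts are hypothetical and the function names are mine, for illustration only.

```python
def completion_rate(completions, unique_pageviews):
    """Users reaching the last slide, out of everyone who loaded the story."""
    return completions / unique_pageviews

def engaged_completion_rate(completions, begins):
    """Users reaching the last slide, out of those who clicked past the titlecard."""
    return completions / begins

# Hypothetical story: 100k unique pageviews, 70k begins, 35k completions.
pageviews, begins, completions = 100_000, 70_000, 35_000
print(f"completion rate:              {completion_rate(completions, pageviews):.1%}")  # 35.0%
print(f"engaged user completion rate: {engaged_completion_rate(completions, begins):.1%}")  # 50.0%
```

Because begins are always a subset of pageviews, the engaged user completion rate is always at least as high as the plain completion rate, which is why it better isolates how well the body of a story holds readers.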

Our average engaged user completion rate across stories was 50.9%. But the data gets more interesting when we start dividing by story subtypes — particularly the divide between stories that integrate audio and those that do not. In that divide, the average engaged user completion rate for stories with audio is 54.5%, compared to 48.5% without.

(Note that for all of these calculations, in the case of Drowned Out and Arab Art Redefined, I considered “beginning” the story to be getting past the audio notice on the second slide.)

So what’s the answer? I think the jury is still out on whether integrating audio into our sequential visual stories makes them perform better or worse, because our sample size is still quite small, but early indicators point toward audio being better for users who choose to engage. And A Photo I Love: Reid Wiseman is our highest-performing story overall with regard to engaged user completion rate, so we have evidence that, at its best, combining audio and visuals can make a compelling, engaging story.

So, Did We Make People Care?

Maybe? It’s clear that we are achieving high completion rates even on our lowest performing stories. Consider that Borderland, our lowest performing story with a completion rate of 20.1% and engaged user completion rate of 31.6%, was over 2,500 words long.

Of course, in order to determine how successful we were, we often track other metrics such as shares per pageview, as well as qualitative measures like sampling Facebook comments and Twitter replies.

Ultimately, making people care is about the quality of the story itself, not about the format in which we tell it. But I think that, with stories where text plays a large role, we are capable of making people read stories longer than they normally would because of how sequential visual storytelling allows us to pace the story.

Of course, this is not an argument for telling all stories in the sequential visual story format. Sequential visual stories work when the visuals are strong enough for the treatment. Not all of our stories have worked. But when they do, we can tell important stories in a way that pulls people through to the end.

To truly evaluate the success of our sequential visual stories, it would help to see data from other organizations who have tried this type of storytelling. If you have insights to share, please share them with me in the comments, on Twitter or through email at tfisher@npr.org. Or, even better, write a blog post!