Free Intelligent Conversation Manifesto
Intelligence is a process of acquiring knowledge, not an achievement.
Knowledge is a resource that is dispersed amongst the members of society and manifests itself in different ways. An intelligent conversation is one where we gain knowledge from the person we are speaking with. Making intelligent conversations a regular part of our lives requires us to continually seek to learn from the people we encounter. The process of seeking to learn from those we encounter will provide us each with the greatest intelligence.
Intelligence is often perceived as unidimensional and we tend to determine how intelligent a person is based on how much they know or achieve by society’s standards. These standards often oversimplify how we perceive intelligence because they fail to notice how knowledge can be gained from personal life experience. By overvaluing these typical measures of intelligence, we undermine the process of gaining knowledge through lived experiences. Intelligence has many forms, and it is a mistake to place significant value in one form while depreciating the others.
With Free Intelligent Conversation we want to learn from everyone we meet and believe that the value of our individual insights is reason enough to seek each other out. We believe that the most socially responsible, noble, and honorable thing to do with the knowledge one has acquired is to share it with other members of society, in an attempt to benefit humanity. We believe that by continually seeking to learn from others we can improve ourselves and the world around us.
Social Freedom of Speech is an understanding that anything can be said and everyone gets the benefit of the doubt.
The exchange and evaluation of ideas is our greatest mechanism for individual and collective development. The freedom to think and express oneself permits opportunities for the truth to surface and prevail in one’s life. Exchanging ideas with others is also therapeutic, as it provides an outlet for thoughts causing internal dissonance. Individuals who exchange ideas are more dynamic; their minds are breeding grounds for new ideas, innovations, and introspection. By repeatedly engaging with other ideas, we refine our own thought process and allow ourselves to fully reach our truest potential.
Despite the benefits of exchanging ideas, we perpetually hinder this process by unwittingly discouraging people from sharing their thoughts. In our day-to-day interactions, people are often apprehensive about speaking freely due to a fear of unfavorable judgement and punishment. This fear is rooted in the suspicion that the other participants are keener on making a rash judgement than on listening and understanding. Allowing this censorship to carry on is detrimental to our well-being, as we will squander opportunities to learn from one another and improve our lives. The act of engaging in free dialogue with others is so beneficial that it should be regularly encouraged, continuously sought out, and practiced routinely.
With Free Intelligent Conversation we create sacred spaces where no topic is taboo and participants know they can speak freely and without the fear of harsh judgement. A place where people can be vulnerable and unapologetically speak their truths. A playground where ideas can interact, be challenged, and refined. A venue where people seek to reach their full potential and help others do the same through dialogue. A place where, no matter what is said, participants strive to communicate with each other respectfully. A place of conversation, not quarrel.
Free Intelligent Conversation promotes social freedom of speech: an understanding between communicating parties that the environment is one where anything can be said and everyone gets the benefit of the doubt. In creating these places we also hope to create a culture of people who are willing to talk to anyone about anything. A culture willing to talk about the uncomfortable and unfamiliar. A culture welcoming people they disagree with and inviting them to speak freely. A culture that can listen to others’ opinions without needing to convince them of their own. A culture of people who acknowledge their own ignorance and can disagree, without becoming adversaries.
Free Intelligent Conversation actively creates outlets for others to share their most intimate and deeply felt thoughts. We are devoted to defending environments that encourage people to share whatever is on their mind. We want to inspire others to speak freely by encouraging social freedom of speech.
Our differences should be distinctive, not divisive.
The shared perception that our differences are significant is more responsible for our inability to collaborate than the differences themselves. The focus on our differences — whether social, cultural, racial, or economic — is distorted. This distortion has led us to be, at times, overly divisive; sometimes causing us to treat people as stereotypes, not individuals. We often perpetuate this divisiveness out of shyness, uneasiness, and fear of vulnerability. To avoid discomfort, we surround ourselves with like-minded individuals.
While it is convenient and at times necessary for us to congregate with people like us, we must make a regular practice of seeking out those we label as different. Otherwise, we risk developing a myopic understanding and depriving ourselves of potentially transformative experiences. We will also miss out on the joy and growth associated with learning something new. Discomfort is a small price to pay for knowledge, and not paying this fee leaves us exposed to the larger penalty of negative stereotypes, prejudice, and bigotry.
Free Intelligent Conversation believes in paying the price of knowledge. We believe our differences should be distinctive, not divisive. We believe it to be in our individual and collective interest to inform ourselves about each other’s differences. Learning about other perspectives not only helps us understand one another, but also increases self-awareness. The differences between things are best revealed in high contrast. We can never truly know what makes us unique without context and people to benchmark against. We can only begin appreciating and learning from others when we come to see them as individuals. Seeing people as individuals involves two conscious choices: the first is to approach them without prematurely categorizing them; the second is to listen as if you have something to learn from them. In our current sociopolitical climate — given global tension over refugees, immigration, racism, hatred, and harassment — seeing people as individuals has revolutionary implications.
We believe the best way to learn from our differences is to engage in conversation. Conversation is our greatest tool for collaborating in an open-ended way. By having conversations with people who hold different views, we lay the foundation for goodwill and empathy toward one another. Free Intelligent Conversation creates places where we seek out, learn from, and celebrate each other’s differences through conversation.
Cities are enhanced when the communities within are connected.
Half of humanity — 3.5 billion people — lives in cities today and this number is expected to grow. Cities are centers of activity for ideas, commerce, productivity, science, and social development. Cities provide the best opportunity for social and economic advancement. To capitalize on the advantages that cities offer there must be ongoing opportunities to convene, collaborate, and converse amongst the various communities within. In a city there are plenty of activities, but few public places that actively facilitate fellowship.
Along with the individuals who live in these growing communities, tourists also commonly interact with cities. Aside from work-related purposes, people primarily visit cities to get a sense of what life in that city may be like. Despite this desire, visitors often leave cities without ever having tapped into their culture. Even the most intentional of visitors are likely to struggle to find locals who are willing and able to interact. As a result, visitors often find themselves shopping and eating at the same chain franchises they frequent at home, leaving much of the city unexplored and failing to experience what it truly has to offer.
With Free Intelligent Conversation we believe that the most profound way for a visitor to experience a city is to spend time in conversation with people from the city, as locals have the best insight on what’s to love about the city. We create public places of active fellowship, where residents and tourists can meet and have meaningful conversations.
We are aligned with the United Nations goal to create Sustainable Cities and Communities — the 11th goal within the 2030 Sustainable Development Goals — which plainly stated is: “To make cities inclusive, safe, resilient and sustainable.”
Free Intelligent Conversation creates opportunities for socio-economic mixing and generates positive contact between people of different social groups, by means of meaningful face-to-face conversations in public spaces. Our approach is an innovative and inexpensive solution to building sustainable, vibrant cities. With Free Intelligent Conversation we believe we will vitalize business, friendship, and community development and create more inclusive, safe, resilient, and sustainable cities.
Meaningful conversations catalyze long-term changes in behavior, attitude, and/or perspective.
Small-talk is best used as a bridge to meaningful conversations and should not comprise the core of a discussion. Though at times necessary, small-talk is mostly unmemorable and leaves participants uninspired. It may even become a burden that causes unnecessary anxiety in undesirable but socially mandated interactions. To combat the hollowness of small-talk we must be intentional about our attitudes when talking to one another, as that is the best way to transform our empty dialogues into meaningful conversations.
Meaningful conversations are conversations that catalyze long-term changes in behavior, attitude, and/or perspective. They are the conversations that call us to be vulnerable and empathetic. They call us to speak our truths and also to hear the truths of others — the truth about their experiences and their ideas. The truth about who they are, and why they are. Meaningful conversations are the most desirable conversations, yet we constantly pass up the opportunities to have them.
With Free Intelligent Conversation we want to catalyze a culture that encourages people to seek out meaningful conversations. We minimize small-talk and trivial pleasantries, and instead prioritize learning about the underlying ideas and stories that compose people’s unique identities. We encourage others to discuss the things that excite, scare, and move them.
In our society, there are few places designated for having meaningful conversations. In an attempt to satiate our desire for meaningful interactions, we sometimes resort to trivial social gatherings, awkward parties, and dull evenings at the bar or club, hoping to stumble into an interesting conversation.
With Free Intelligent Conversation we create public spaces for meaningful interactions. We create places where conversations happen for conversations’ sake. A designated space where nothing is being sold, and no one is pushing a religious or political agenda. A public place one can reliably turn to for connecting with people. A place where, by virtue of being present, you indicate that you are interested and willing to talk to anyone about anything. A place where people can be unapologetic about their appetite for talking to, connecting with, and learning from others.
To this end, Free Intelligent Conversation intends to first inspire a culture of people that seeks out meaningful conversations, and then create public places where they can happen.
Face-to-face interaction improves our ability to communicate and connect with people.
For the first time in history, we are immersed in the digital world and detached from the real one. In the age of digital technology, it’s difficult to tell whether we interact more through our screens than we do in person. The majority of our conversations — business or personal — take place via text message, email, instant message, and social media platforms. Our advances in technology have made communicating far more convenient, but at a price: with our increased dependency on technology, we decrease opportunities for face-to-face interactions.
Face-to-face interactions are necessary for strong relationships, proper socialization, and the development of great communication skills. As our day-to-day lives become increasingly busy, we must keep in mind that despite the convenience our technology provides, it cannot replace the need for face-to-face communication altogether.
In-person interactions are among the basic components of our social system. They are a significant part of individual socialization and consequently central to the development of the groups and organizations those individuals compose. In face-to-face interactions we are able to communicate with our whole bodies. Non-verbal cues are just as crucial as the words we say: our facial expressions, posture, gestures, tone, and eye contact give powerful suggestions about what we’re trying to communicate — much is expressed through a simple smile or nod.
Face-to-face interactions provide better communication feedback loops: participants are able to give immediate visual feedback, which reveals whether communicating parties are understanding each other and how participants feel about the discussion. In contrast, the appeal of written communication is that it allows us to craft our message with precision through extensive revisions and edits. Though this medium encourages greater accuracy, it does not allow us to gauge reader reception. Much of our tone is left to the reader’s imagination, which at times leads to miscommunication. Miscommunication is essentially inevitable; face-to-face interactions, however, allow us to immediately identify and resolve misunderstandings.
Face-to-face interactions call us to be present by requiring our full attention: we can’t email, text, or be secretly distracted. Face-to-face interactions better allow people to address sensitive issues, which is necessary to build trust, empathy, and strong relationships. Face-to-face interactions are instances of shared real-life experience that can enhance future communication between people. These are some of the aspects that make face-to-face more meaningful than other methods of communication.
As we spread our online relationships wide and thin on social media, the value of each connection is lessened and the benefits we gain from each connection decrease. It is often difficult, if not impossible, to recreate on social media the conditions that define deep, intimate relationships. At best, our social media friends are a supplement to, not a substitute for, real-life interactions with people. Real-life relationships help us learn about others and ourselves. Online friendships, while certainly valuable in many ways, do not satisfy our deepest needs for intimacy derived from proximity. We should seek out our online friends, rekindle lost connections, and revisit childhood friendships, as long as it is not at the expense of real-life relationships.
Our online interactions provide anonymity that is often used to create misleading identities. Written communication allows us to assemble our words until they fit the image we want to project, something that is more difficult to do in a real life conversation. Since we can craft and edit our message, it is much easier to be duplicitous or even self-deluding.
As the coming generations grow up with technology at their fingertips, we must encourage the development of real-life interpersonal skills. We’ve learned to boldly share and defend our thoughts and opinions online, but are apprehensive of doing so in person. We’ve gained social media skills while neglecting our people skills.
While technology has streamlined business communication and social media has provided us with glimpses into other people’s lives, we don’t want to forget the important, intimate parts of human connection that only face-to-face interaction provides. We are often distracted from, and end up belittling, our lived experiences: we attend social functions and prefer to use our cell phones, tablets, and computers rather than engage in conversations. The technology that promised to bring us closer together, by connecting us better than we had ever been before, has instead made some of us reclusive and has put us in ideological echo chambers. This isolation has negative effects on our physical, mental, and emotional health. Having close and frequent connections with friends and community promotes healthy behaviors that encourage a more socially oriented populace.
With the rise of communication technology, we have, as a society, lost our appreciation for the art of conversation and in-person interaction. As a consequence, we have undermined our best tool for building deep, real-life, meaningful relationships. As our online networks grow, we increasingly feel lonely and disconnected from those around us. At some point we must acknowledge that we can’t have it both ways: we cannot be impersonal and expect to have deep connections. If we want deep connections with our community, we must be willing to work for them.
Free Intelligent Conversation is creating face-to-face conversations as a counter-cultural response to social isolation. We want to be present and remove distractions to have focused, uninterrupted dialogue. We appreciate conversation as an art form, relishing the beauty of communication subtleties that are only felt face-to-face. We want to look people in the eyes while we talk about sensitive issues. We want to read body language and end our interactions with handshakes and hugs. We want to celebrate with high fives, and feed off of the energy of the people around us. We want to witness people laughing out loud, shaking their heads, or rolling on the floor laughing.
We want to improve our ability to communicate and connect with people without hiding behind technology. Free Intelligent Conversation is not a protest against technology or social media; we just don’t want social media to become the crutch that our relationships lean on. Though we use our devices and screens to stay in contact, we organize in public spaces to celebrate life.
We want to be as intentional as we can about engaging with and maintaining relationships with our community and encourage others to do the same. We will provide a place where people — especially across generations — can meet, learn from each other, and build deep relationships. With Free Intelligent Conversation we want to encourage a culture of people who understand the value of face-to-face interactions, seek them out, and encourage their community to engage in them likewise.
Conversation with strangers is an invitation to see the world from another person’s perspective.
The world is a big crowd we often feel alone in. We have friends and family in certain pockets of the world, but most of our interactions involve strangers. We often feel isolated even though we share spaces with and are surrounded by people. This seems unnecessary, considering that strangers are just people we don’t know yet.
The people we know provide a sense of intimacy and connection, but it’s important to remember that strangers can also contribute to our sense of belonging. Even though momentary, a quick hello, or a nod of acknowledgement can make us feel less isolated. On some days this is our only sense of human connection — without it, we feel empty. Especially in large cities, talking to strangers reminds us that we all live there together.
The unexpected interruption of a conversation with a stranger disrupts the natural order of our day and calls us to our full attention. In response, we have a heightened sense of awareness of the world around us. It calls us to be awake, in the moment, and attentive to our surroundings. Surprisingly, it is sometimes easier to share intimate parts of our lives with strangers than it is with people we know. When talking to strangers, we can be vulnerable because we feel that we have nothing to lose. We can openly share our stories, opinions, and secrets without the fear of long-term consequences. In this context, you may find yourself sharing feelings you hadn’t talked about yet, or details about your life that only a few people know. Strangers are likely to reciprocate, which strengthens the connection. Overall, the heightened awareness and the freedom to be vulnerable facilitate intimacy between absolute strangers.
Conversation with strangers is an invitation to see the world from another person’s perspective. When we take the time to connect with strangers, even briefly, it moves us away from fear and towards empathy. People develop a preference for things merely because they are familiar with them. The more familiar we are with a person, the more we’re likely to interact with and have positive experiences with them. Positive experiences with one person from a given social group reduce prejudice toward the entire group to which that person belongs.
Talking with strangers can be a life-changing experience, but it is difficult to initiate for various reasons. For one, we often don’t have an “excuse” to start conversations. We also don’t know what’s acceptable behavior in this context, since we don’t want to be perceived as disruptive or abrasive. These uncertainties, however, are what make conversations with strangers captivating. Many of us would like to have more of these unexpected, intimate moments with strangers — we just don’t think we have a reason for them to happen. Those of us who are more willing to approach strangers need some kind of cue that lets us know we are welcome.
With Free Intelligent Conversation we provide an excuse to spark up conversation between strangers. We create places where strangers can meet, talk, and have public opportunities for positive interactions. We encourage strangers not only to talk with us, but also with each other. We believe that by making conversation with strangers more accessible we can positively transform both individuals and the communities, cities, and countries that they live in.
We know that getting strangers to talk to each other is a tall order. We’ve been told that we’re too idealistic. Some people think we want their money. Others think it’s a trick or hoax. Some people are suspicious and want to know “who we work for” or “why we’re really here.” Some find it impossible to conceive that there could be a group of people interested in just talking to others. Some recognize that — in some small way — we want to change the world; and they smile sympathetically at us, feeling sure that the world can’t really be changed like this.
What they don’t know is that we’ve been changed.
We’ve been inspired. We’ve broken out of our comfort zones. We’ve heard stories that have made us laugh, and we’ve had conversations that have brought us to tears. We’ve gotten dates and we’ve been offered jobs. We’ve learned how to be present. We’ve learned about different cultures, and how to appreciate them. We’ve gained insight about ourselves. We’ve learned how to disagree without becoming enemies. We’ve had long chats with our elders. We’ve been given great advice, and have been a listening ear to many. We’ve heard new ideas and we’ve heard dumb ideas. We’ve changed our long-held opinions and we’ve let go of previously held prejudices. We’ve made great friends — all from taking time to talk with strangers.
We want to encourage people to have meaningful, face-to-face conversations and we will create designated places for them to happen. A place where we can learn about the experiences of others, and be vulnerable, honest, and inquisitive. We want to create these places in every city. Our hope is that these designated places will attract and nurture a particular kind of person. The kind of person who tries to learn from everyone they meet. The kind of person who cultivates authentic relationships, and puts away their devices to interact with the real world. The kind of person who looks for ways to engage with their community and broaden their social networks in meaningful ways. The kind of person who can communicate across generational, cultural, and ideological boundaries. We believe this kind of person will change the world for the better.
There are a lot of problems in the world and though we don’t have the answers to them, we believe that the first step towards a solution is to get people to talk to each other. We believe that we can solve these problems one conversation at a time.
If you see us holding a sign that reads “Free Intelligent Conversation,” come talk to us. About what? About anything and everything. The only thing needed for an intelligent conversation is willing individuals. We’re willing.
Are you?
—
Edit 8/7/2017: You can find a read-through of the Manifesto here:
—
13 TCA Takeaways: Goodbye ‘Shameless,’ ‘Saul’ and Hank Azaria’s Apu – Hello ‘LOTR’ Cast, More ‘AHS’
After 13 days, the 2020 Television Critics Association (TCA) Winter Press Tour has come to a close.
Below are 13 of our top takeaways from the never-ending TV media event occupying the Langham Huntington Pasadena ballroom. Thanks for all of the free coffee, but now we need some sleep.
Some Awards Shows Have Hosts, Some Don’t
The Oscars don’t need a host, and the Golden Globes got two.
ABC chief Karey Burke told those in attendance that the 2020 Academy Awards will have “no traditional host” again this year. Hey, it sure worked out last year — check out these TV ratings.
We also got this year’s Emmys date from ABC. Nominations for the Sunday, Sept. 20 show will come out on Tuesday, July 14, ABC said, and “Host(s) and producers for the telecast will be announced at a later date.”
TheWrap asked network reps if that sentence in the press release means there will definitely be a host this year, to which a spokeswoman replied, “Details are not yet firm.”
You know what will definitely not go host-less? The 2021 Golden Globes. NBC announced at TCA that Tina Fey and Amy Poehler will return as hosts for the annual January kickoff to awards season.
Extra Reality
Thanks in large part to its hit reality import “The Masked Singer,” Fox won the fall in Nielsen ratings. How do you keep that momentum going?
For starters, Fox is debuting “Masked Singer” Season 3 immediately following its Super Bowl LIV broadcast, which means it will be all of, like, a month and a half between runs. In case that’s not enough “Masked” madness, Fox and Ellen DeGeneres are going into business together on a spinoff: “The Masked Dancer.”
Yes, that one is literally being adapted from an “Ellen” joke.
Not to be outdone, ABC is also spinning off its key unscripted property. Because “The Bachelor,” “The Bachelorette” and “Bachelor in Paradise” can’t consume all of primetime six nights a week, 52 weeks per year, ABC has ordered some kind of confusing new “Bachelor” installment: the music-based “Listen to Your Heart.” (Try to) Learn all about that here.
No Room for Cinemax at HBO Max
Despite the obvious name association, AT&T will not be bringing Cinemax to its upcoming streaming service, HBO Max. Not only that, but HBO Max chief Kevin Reilly said Cinemax will stop producing original content altogether. We don’t know yet what that means for shows like “Jett” and “Warrior.” “Strike Back” debuts its eighth and final season Feb. 14.
Don’t worry, Cinemax will still be a channel. “I think it still serves an important value for its customers in terms of its movie offerings,” Michael Quigley, executive vice president of content acquisitions for HBO Max, said.
And though it wasn’t announced at TCA, we learned during TCA that AT&T’s Audience Network, home to “Mr. Mercedes,” “Condor” and “Loudermilk,” will also no longer be a home for original programming. Instead, AT&T is turning the premium network into a promo channel for HBO Max.
A rep for AT&T told TheWrap that “any future use of Audience Network content will be assessed at a later date.”
No Plans for ‘Modern Family’ Spinoff Yet (But We Know Who Would Be Up for It)
Despite ABC chief Karey Burke’s hopes for some kind of continuation of the ABC series, which wraps up this spring, co-creator Steve Levitan told us nothing is planned.
“The short answer right now is, there are no plans,” he said.
When TheWrap followed up by asking which cast members would be interested in seeing their characters continue, both Reid Ewing (who plays Dylan) and Aubrey Anderson-Emmons (Lily) raised their hands. They’re two of the younger cast members in the large ensemble — “young” being relative on this family show, as Ewing is 31 years old and Anderson-Emmons is 12 — so at least there’d be a long runway for stories with their respective characters.
Julie Bowen, however, gave us her stipulations for continuing on as Claire Dunphy. “Is the spinoff as good as ‘Modern Family?’ Do we get to have the amazing writers? Do we get to have the amazing cast? The incredible hours? Do I get to work in LA and see my kids? Then yeah.”
Got all that, ABC?
We’re Losing Some Shows…
It just wouldn’t be TCA without hearing news that some of our favorite shows are ending, and the 2020s are apparently no different.
Showtime is finally bringing an end to “Shameless” after 11 seasons (and eyeing an end date for “Ray Donovan”); the final season arrives this summer. Yes, that means that the William H. Macy-led drama will finish its 10th season and air its final run all in the same year. Why?
“We wanted it on earlier, because we wanted to strengthen our summer and we also wanted to provide a great lead-in for ‘On Becoming a God’ in its second season,” Showtime’s entertainment president Gary Levine explained to us. “Homeland” also ends its eight-season run beginning next month.
Meanwhile, AMC is getting ready to say goodbye to Albuquerque for the second time, announcing that its “Breaking Bad” prequel “Better Call Saul” will wrap up next year with its sixth and final season (it returns for Season 5 in February). That means that next year we’ll finally get that payoff for Gene Takovic, the alias that Saul — er, Jimmy, has been using for his post-“Breaking Bad” exploits.
…But There Are Still Too Many Shows
In his annual survey of scripted programming on TV, FX chief John Landgraf revealed that the number of shows on TV surpassed the 500 mark for the first time in 2019. The coming years will surely only see that number rise, and networks made the most of their time these past two weeks teasing what they’ve got in the pipeline.
Here’s just a taste of what’s coming and when to expect it:
Update on NBC’s Investigation Into Gabrielle Union’s ‘America’s Got Talent’ Exit
NBC boss Paul Telegdy told reporters that the network’s ongoing investigation into Gabrielle Union’s contentious departure as a judge on “America’s Got Talent” should wrap up by the end of the month. “We’re very confident if we learn something… we’ll put new practices in place, if necessary, and we certainly take anyone’s critique of what it means to come to work here, incredibly seriously,” the executive said.
Union’s exit was accompanied by multiple news reports describing behind-the-scenes clashes between her and the show’s producers over what was described as a “toxic” workplace culture. Former “AGT” judges Howard Stern and Sharon Osbourne have since spoken out against the “boys’ club” environment on the show, which they said was facilitated by executive producer-turned-judge Simon Cowell.
For her part, former judge Heidi Klum says her experience on the show was nothing short of “amazing.”
“I didn’t experience the same thing,” she said. “To me, everyone treats you with utmost respect.”
Amazon Finally Casts Two of Its Most Anticipated Shows
Amazon’s push into big-budget blockbuster programming has been in the works for years, but the streamer finally shared some casting news for both its highly anticipated “Lord of the Rings” TV series and its international spy franchise from “Avengers” directors Joe and Anthony Russo.
“Quantico” star Priyanka Chopra and “Game of Thrones” alum Richard Madden have signed on to star on “Citadel,” the U.S. installment of the Russo Brothers’ franchise, which will be accompanied by three other interconnected local-language series based in Italy, India and Mexico.
The streamer’s “LOTR” series, meanwhile, cast a whopping 13 series regulars: Owain Arthur, Nazanin Boniadi, Tom Budge, Ismael Cruz Córdova, Ema Horvath, Markella Kavenagh, Joseph Mawle, Tyroe Muhafidin, Sophia Nomvete, Megan Richards, Dylan Smith, Charlie Vickers and Daniel Weyman. They join previously reported stars Robert Aramayo and Morfydd Clark in the eight-episode series.
Multiplicity Is In
Multiple networks picked up multiple shows for multiple seasons during this winter’s press tour.
Ryan Murphy’s “American Horror Story” is coming up on Season 10 — which TheWrap learned exclusively will feature the return of “AHS” staple Sarah Paulson — and has now been renewed for three more, meaning the anthology is guaranteed to run at least 13 seasons. (Isn’t that just poetic?)
Adding to the additional-seasons trend are NBC, which has ordered three more years of its Ryan Eggold-led medical drama “New Amsterdam,” TBS, which has picked up Seth MacFarlane’s “American Dad” for two more seasons, and Comedy Central, which has renewed “Tosh.0” for four more installments as part of the channel’s new first-look deal with creator Daniel Tosh.
Why HBO Picked Dragons Over Naomi Watts
HBO finally ordered one of its several potential “Game of Thrones” spinoffs to series last October — only it wasn’t the one everyone was expecting would get picked up. The pay TV channel scrapped its Naomi Watts-led “GoT” prequel, referred to internally as “Bloodmoon,” after shooting the pilot and opted to give a straight-to-series order to George R.R. Martin and “Colony” co-creator Ryan Condal’s “House of the Dragon,” a show about the Targaryens’ history.
When TheWrap sat down with HBO programming chief Casey Bloys, we asked why — and the answer wasn’t so simple.
“In general for a pilot, and this is very much the case in this one, there’s not one thing that I would say, ‘Oh, this went terribly wrong,'” Bloys told us. “Sometimes a pilot comes together, sometimes it doesn’t. Sometimes even the best aspects don’t totally gel, sometimes they do. That’s kind of the little bit of luck and magic in doing shows — and sometimes they come together and sometimes they don’t.”
“Bloodmoon” was co-created by Martin and “Kingsman” screenwriter Jane Goldman and set thousands of years before the events of the original “Game of Thrones” series. “House of the Dragon,” on the other hand, is based on Martin’s “Fire & Blood” book, which details the Targaryen family lineage, and takes place just 300 years before the events of “GoT.”
Bloys added: “One of the advantages of ‘House of the Dragon’ is you’ve got history and text from George in terms of the history of the Targaryens. So you had a little bit more of a roadmap. So that made it easier to say go straight to series on that. Also, in general with ‘Game of Thrones,’ one of the things going into it we knew — that we know from the development in general — is very few things you get right the first time. And so that’s why we did multiple scripts. And we would have been very fortunate had the one pilot worked and gone straight to series and that would have been that. But you also had to make plans for if that didn’t happen. So we wanted to have a lot of options, so that’s why we went in very deliberately trying to go at it a number of different ways.”
Seriously, Jussie Smollett Is Not Coming Back to ‘Empire’
Fox would love for us all to stop bringing this up, but since “Empire” showrunner Brett Mahoney hasn’t shut the door on the idea, we had to ask again: Will Jussie Smollett reprise his role as Jamal for this spring’s series finale of the hip-hop drama?
“He will not be coming back,” Fox entertainment boss Michael Thorn told TheWrap. “As you would expect when you’re finishing an iconic series like ‘Empire,’ that Brett, as the showrunner, along with his producing partners, would certainly have discussions about what’s the best way to finish the show. In this case, Jussie will not be coming back for the finale.”
Smollett left the show toward the end of last season, shortly after Chicago law enforcement accused him of staging a high-profile hate crime against himself. The producers, responding to intense public pressure, wrote Smollett out of the final episodes of Season 5 despite protests from Smollett’s supporters on the cast.
“Our hope at Fox — and I know the producers feel the same way — is that the show, to us, is much bigger than some of the personal stuff that’s unfortunately happened for Jussie, where we just want the ending to be as epic as the beginning,” Thorn said.
Hank Azaria Is Officially Done With Apu
The “Brockmire” star confirmed once and for all that he will no longer be the voice of the Indian American convenience-store proprietor Apu Nahasapeemapetilon on Fox’s “The Simpsons.”
“I won’t be doing the voice anymore, that’s all we know. Unless there’s some way to transition it or something,” Hank Azaria told reporters after the panel for his IFC series, which is ending with its upcoming fourth season.
“What they’re going to do with the character is their call, it’s up to them, they haven’t sorted that out yet,” Azaria said of “The Simpsons” team. “All we agreed on is that I won’t do the voice anymore. We all had made the decision together, we all feel it was the right thing and good about it.”
Apple TV+ Joins the Party
Apple TV+ made its first-ever appearance at TCA during this winter’s tour, landing the final-day slot on Sunday, which isn’t exactly a coveted one, since you’re presenting to a room full of really tired journalists.
(We have to note here that before the new streaming service kicked things off in Pasadena, its direct competitor Disney+ started the morning with an interestingly timed investment in Twitter promotion. Now back to Apple, the belle of Sunday’s ball.)
The tech giant opened its presentations with a buff-looking Kumail Nanjiani and his wife and co-writer Emily V. Gordon, who appeared via satellite to discuss their series “Little America.” They and the other executive producers (Alan Yang, Lee Eisenberg, Sian Heder, Joshua Bearman) talked about how a show that depicts immigrants in an empathic light is inherently political, despite their efforts to focus on the characters’ personal stories rather than the American immigration system.
Hilde Lysiak, the 13-year-old journalist who went viral with her exclusive report of a hometown murder in 2016, was on hand to discuss “Home Before Dark,” the drama series based on her investigative reporting in which Brooklynn Prince plays her and Jim Sturgess plays her father.
Diversity was a big topic throughout Apple’s day, particularly during the panels for upcoming docuseries “Visible: Out on Television” and the Kristen Bell and Josh Gad-voiced cartoon “Central Park.”
When “It’s Always Sunny in Philadelphia” creator Rob McElhenney and star David Hornsby showed up to talk about their gamer comedy “Mythic Quest: Raven’s Banquet,” they were, of course, immediately asked about “Sunny” — specifically, the similarities and differences between the two shows.
“The Morning Show” executive producer Michael Ellenberg said he has “no update” on the possibility of Steve Carell returning to the series for Season 2, but that they are “exploring” it — and everyone else remained tight-lipped about the next season, including Jennifer Aniston, Reese Witherspoon, Billy Crudup and director Mimi Leder.
Apple wrapped up its day with more eye candy via satellite in the form of Chris Evans, who was on hand to promote his upcoming series “Defending Jacob.”

Four seasons into "This Is Us" and the only thing fans can rely on more than the fact that they're sure to get at least one twist or turn when they tune in each week is that they are definitely going to shed at least one -- and usually more -- tear per episode. Ahead of the NBC family drama's return from winter hiatus on Jan. 14, TheWrap has rounded up the show's biggest tearjerker moments -- both good and bad -- so far. Obviously, spoilers ahead.
How Randall became the third triplet
Even before the big time-jump twist was revealed in the pilot of "This Is Us," the tears were already flowing when it was revealed that one of Jack and Rebecca's triplets didn't make it through childbirth. Struck by a moment of inspiration amid their grief, the Pearsons decide to adopt a baby who had been abandoned and brought to the hospital that same day.
Jack's fate
First, the show dropped the bombshell that Jack has been gone for so long that Randall's kids consider someone else entirely their "grandpa." We shouldn't have been surprised to learn, then, that the Pearson family patriarch is actually dead, and had been since about 2006. Kate seems to have a hard time moving on from it, still maintaining a tradition of watching every Steelers game with her dad -- even if it's just his ashes left now.
The truth about William
We knew William was dying pretty early on, but that didn't make him trustworthy. It took Beth gently chiding Randall about his inherent goodness and William's shady behavior to get the truth out on the table: William isn't still doing drugs, he's been disappearing for hours on end each day in order to take the bus to his house to feed his cat. "Well now I feel like a bitch," Beth quipped, and we wept.
Toby's past
The heartbreaking moments don't belong exclusively to the Pearsons. When Kate pushes, Toby reveals exactly why he's not with his gorgeous ex-wife Josie anymore: Turns out, she was so horrible to him that he became suicidal and gained 100 pounds in a year. Ouch.
Kate the princess
After being ostracized by her friends in the cruelest way possible, little Kate's spirit was brought back to life thanks to a bit of magical storytelling by Jack, who hands her his t-shirt and tells her she can be anything she wants when she wears it, though she's always a princess in his eyes. Who's chopping onions around here?
Rebecca and Shakespeare
Finding herself unable to bond with her adopted son after the death of one of her triplets, Rebecca seeks out the baby's birth father instead of telling her husband. Their heart-to-heart, where she refuses his request for visitation and he gives her the inspiration to name the baby Randall, is one of the show's most gut-wrenching so far.
Randall just wants to fit in
When the Pearsons discover little Randall is gifted, it seems like a great thing, until Jack gets it out of him that he's been pretending to be not as smart as he is in order to fit in with his siblings.


Randall's trip down memory lane
Only on this show could a bad hallucinogenic mushroom take the characters and the audience on a gut-wrenching emotional journey. As Randall reels from his mother's betrayal, a vision of his dead father takes him down memory lane to reveal just how hard and lonely it must have been for his mother to keep such a secret to herself.


Kate and the rage-drums
During a seemingly hokey fat camp exercise, Kate taps into some very real emotions, mostly about her father's death, and lets out a primal scream so heartbreaking we still can't get it out of our minds and ears.
Trouble in paradise
After an episode in which their happy marriage was contrasted with Miguel and Shelly's crumbling relationship, Jack and Rebecca were left on an ominous note when she revealed she wants to go on tour with her band -- led by ex-boyfriend Ben.
Randall's breakdown
The stress of work and William's impending death finally takes its toll on Randall, who has a full-blown breakdown. The scene is made all the more emotional when the ever-self-absorbed Kevin picks up on the strain in his brother's voice and races to his side to cradle him in his arms as he cries.
All of "Memphis"
William finally passes at the end of this heartbreaking episode, which centers around his road trip home to Memphis with Randall. The hour is spent going between flashbacks of his youth and the road he traveled to leave his mother and music career behind and get mixed up with drugs. Randall is with his biological father when he passes and the drive back home is both the most cathartic and heart-wrenching scene we've ever watched.

Jack and Rebecca's separation
Though the break was ultimately a very short one, as Rebecca soon came to pick Jack up from Miguel's place, the end of Season 1 left us on a cliffhanger as we wondered if and when Rebecca and Jack would reunite after a huge fight. Trying to keep from bawling for the next few months as we awaited the answer at the beginning of Season 2 was our biggest problem over the summer hiatus.
Kate's miscarriage
This episode packed more of a punch than anyone was ready for, even though we found out at the end of the previous episode what was to come. Watching Kate lose her and Toby's baby in "Number Two" and the different ways they chose to grieve was truly a once in a lifetime TV experience. But it was definitely something we only wanna witness once.
Jack's father's death
To say that Jack and his father had a tumultuous relationship would be an understatement. But when we saw him on his deathbed, we forgot all about their horrible history and watched Kate and Rebecca say goodbye to a man they never knew, because Jack wasn't there.
Kevin and Sophie's break-up
Seeing as we have yet to witness how these two lovebirds originally split, it was really hard to watch Kevin dump Sophie because of his new addiction. The two were high school sweethearts who divorced at a young age after Kevin cheated, and after this upsetting scene we don't know if fans can handle seeing that first go-round.
Kevin's addiction
Kevin developed an addiction to painkillers and alcohol following an on-set injury at the beginning of Season 2. After watching “Number One” and learning how his already bad knee was ruined during a football game — thus ending his dream career — the tears started flowing and haven’t let up since.
Drunk driving with a stowaway
Remember the time that Kevin was upset and drove off from his brother Randall’s house drunk? Yep, that was a dumb move. But what made it even worse was the little stowaway in the backseat. Yes, the midseason finale of Season 2 saw Kevin behind the wheel of a car while under the influence with Tess right there behind him. She snuck away when Randall and Beth were saying goodbye to Deja, a teenager they were fostering who was taken back by her biological mother. While we knew before the end of the episode everyone was safe (thank goodness), Kevin was arrested on a DWI -- in front of his niece. And of course Randall and Beth were waiting at home to kill him. Now these were probably some rage tears.
Rehab scene/The Big Three "bench moment"
“The Fifth Wheel” delivers one of the most tense moments of the series, when Rebecca and the Big Three enter a family therapy session while Kevin is in rehab. Kevin shares that he felt like a fifth wheel growing up because Kate had Jack and Randall had Rebecca (this is a scenario anyone who comes from a three-sibling family can probably relate to). Rebecca ends up admitting that yes, she was closer with Randall because he was “easier” to parent. It’s a tough blow, and everyone leaves the session pretty pissed at each other. But the siblings convene later on in the episode to look back objectively on their childhood and all is well, leaving us touched and sad all at once.
The Sidekicks/"Star Wars" speech
Meanwhile, Toby, Beth and Miguel decide to get some drinks, as they are not invited to family therapy. They bond over feeling excluded from the crazy-tight bond the remaining Pearsons have with one another, and compare themselves to the side characters in “Star Wars.” But when Beth and Toby start to talk about how they were never able to meet Jack, and feel like his kids put him on a pedestal, Miguel sweeps in in Jack’s defense and shuts that conversation down -- reminding us, again, of the giant hole Jack left after his death.
Kate's dog issues
Kate and that dog. Have you ever seen someone more conflicted about something so adorable? We first learned of Kate’s serious issues with canines when she just couldn’t bring herself to adopt the cutest furry friend for her fiancé Toby, who desperately wants a dog of his own. She got so close to picking him up, but then bailed, prompting a heavy flood of tears from viewers. However, she ultimately decided making Toby happy was more important than her issues, and the tears came even harder when she revealed the dog, Audio, to Toby, his new owner.
That damn slow cooker
We'd like to start out by saying sorry to both the fans and Crock-Pot for the amount of anguish this caused them. At the end of the episode "That'll Be the Day" -- which recounts Jack's last day alive -- we find out that a very old slow cooker with a faulty switch was what ignited the fire that burned down the Pearson family home. Yes, Crock-Pot took the heat for that one, to the point where Milo Ventimiglia and the show graciously stepped in to do a promo for the product to prove it was totally Jack Pearson-approved.
Jack’s cause of death/Rebecca keeping it together for the kids
In the big post-Super Bowl episode (aptly named “Super Bowl Sunday”), we finally learn the true cause of Jack’s death. Jack and the rest of the family make it out of the house fire safely, and Jack even had time to go back in to save Kate’s dog and a few other family keepsakes. But that turned out to be his downfall: He inhaled too much smoke, causing him to have a heart attack at the hospital and die unexpectedly. Mandy Moore delivers the most heartbreaking scene at the end of the episode when Jack dies: Not only do we see her break down completely in the hospital, but she’s also able to pull it together to tell the kids. The whole thing left us crying into our leftover guacamole from the Super Bowl.
Literally "The Car" -- just, yeah, "The Car"
This whole episode was even harder to watch than “Super Bowl Sunday.” We’ve been ugly crying at every mention of Jack’s death since Season 1, but seeing Dr. K console Rebecca not only emphasized the fact that Jack was dead but reminded us that he was able to finally find love again after his wife died. When Rebecca took the kids to Jack’s tree and they all agreed to go to the Bruce Springsteen concert in his honor, we were a puddle on the floor.
Kate, Randall, Lean Pockets and “Sex & the City”
Leave it to "This Is Us" to make an episode about bachelor and bachelorette parties about sibling love. In "Vegas, Baby" we see why Kate has never really become close with her sister-in-law Beth. Turns out, Kate and Randall were very close as teens -- Lean Pockets and "Sex & the City" viewing parties, of course -- and when he met Beth, she knew she'd "lose him." Of course Randall reassures his sister their bond just can't be broken. Also, he wasn't really a "Sex & the City" fan, he just watched it to spend time with Kate. Aww.
That Deja episode
What is it with falling in love with the grandparents on this show? In “This Big, Amazing, Beautiful Life,” we get to learn a lot more about Deja’s backstory and tough upbringing--and meet her amazing grandma Joyce, who mostly takes care of her because her mom, Shauna, was only 16 years old when she had Deja. When Joyce dies, it marks the point where things really start to go downhill for Deja and Shauna, and they end up sleeping in their car until Beth and Randall find them. When Shauna realizes Deja can have a better life living with the Pearsons and leaves in the middle of the night, it broke us all.

When Deja vandalizes Randall's car
At Toby and Kate's wedding, Toby's mom mistakes Deja for Randall's biological daughter, and let's just say the mistake does not make her happy. Deja's already upset, given that her mom's parental rights have been revoked, and she takes out her emotions by taking a baseball bat to her foster dad's fancy car -- which hits us right in the heartstrings.
Jack talking to little Kate
As Kate walks down the aisle, we hear a voiceover of Jack talking to a younger "Katie-girl."
"One day, a long time from now, you're going to meet someone who's better than me," Jack tells her after she asks if she can marry him someday. "He's gonna be stronger, and handsomer, maybe better at board games than me. And when you find him, when you find that guy, that's the guy you're going to marry." But no one could be as strong and handsome as you, Jack!! Needless to say, this scene juxtaposed with Kate walking down the aisle had us in a puddle of tears.
"It's time to go see her, Tess"
Hold. On. We've added a third timeline to this story? Of course we have. At the very end of the Season 2 finale, we get a quick glimpse of future Tess and an older Randall says ominously, "It's time to go see her, Tess."
Tess responds that she's not ready, and Randall says he's not either -- but we are ready to find out who the mystery "her" is. The scene is cut in a way that makes us think Beth might be dying (but that was ruled out by Susan Kelechi Watson herself) or something with Deja, but we know enough by now to expect a twist. Just seeing older Tess and Randall together is enough to make us teary.
Kate and Toby are rejected by an IVF doctor
In the first episode of Season 3, Kate and Toby find out they are poor candidates for in vitro fertilization -- on her birthday.
Jack and Rebecca's $9 first date
The night is a disaster due to Jack's low budget, which he doesn't want to tell Rebecca about until the very end. But when he does, all is forgiven.
When Future Randall called Future Toby about "her"
'Cause we all thought it was Kate and OMG.
Kate gets pregnant again -- with "one shot" from IVF
That positive test brought on the waterworks.
Kevin finds out where Jack's necklace came from
The necklace Kevin received from his dad when he broke his leg originally belonged to a Vietnamese woman who Jack helped when he was in the war. So sweet.
Jack and Rebecca's road trip to L.A.
Oh, so many romantic moments. So many tears.
Tess comes out to Kate -- and later to Beth and Randall
In a very special episode on a show full of very special episodes, Tess gets her first period and confesses to her aunt that she has feelings for girls. Not long after, she comes out to her parents, and it's impossible not to weep over how beautiful the moment is.
When "Her" is finally revealed to be Rebecca
Don't pretend like hearing Future Beth say everyone was going to see "Randall's mother" didn't kill you.
When we found out Nicky was alive
And everything changed.
The toll Randall's campaign for city council takes on his family
We really did get worried about Randall and Beth there for a few episodes.
When Nicky finds out Jack is dead
In his first-ever meeting with his adult niece and nephews. His subsequent first-ever meeting with Rebecca packs even more of a punch, for both Nicky and the audience.
Beth's backstory episode
The whole thing. All of it. Her tireless effort at a teen dancing career, her dad's death, the moment she bumps into Randall at college, present-day Beth going to a dance studio. Everything.
The whole family waits to see if Kate and her premature baby are going to be OK
The things that come out in "The Waiting Room" episode. And then Kate and Toby name their baby Jack. Wow.
Randall and Beth's love story, juxtaposed with their current relationship issues
"R&B" forever and ever and ever.
Kevin and Zoe breakup
Because he wants kids and Zoe knows it, she ends it -- and it's really rough to watch.
The first time we see Future Rebecca
And obviously begin to worry more about her clearly failing health.
The reveal that Baby Jack is blind -- and an incredibly talented singer in the future
The one-two punch of the Season 4 premiere's big reveal rivaled that of the emotional rollercoaster that the series premiere took us on. That song? Tears.
Deja and Malik's adorable young love
Their adorable ditch day in Philly was one of the sweetest teen love stories we've ever seen. Of course the real tears came when their parents tried to put the brakes on the whole thing.
Rebecca's dad making it clear he doesn't think Jack is good enough for his daughter
We know that they make it through fine -- obviously -- but this backstory is still rough to watch.
Nicky, Kevin and Cassidy getting sober together
Multiple moments from this arc have had us sobbing, as these three keep banding together to help each other through some really rough battles to stay clean.

Jack getting jealous over Randall's relationship with his only Black teacher
Jack realizes he can't be everything for his son, because he isn't exactly like his son. But when he accepts this, he learns how to let Randall connect with others who can give him what he needs.
Rebecca admitting to Randall her memory is getting so bad she needs help
After an entire Thanksgiving episode in which she gets lost, we find out that storyline is actually a flash-forward to a point when the problem has gotten much more serious.
Actually, it’s about Ethics, AI, and Journalism: Reporting on and with Computation and Data
Photo | Scanned from the 1964 handout for the IBM Pavilion at the World's Fair. Scan from Jeff Roth, The New York Times.
We live in a data society. Journalists are becoming data analysts and data curators, and computation is an essential tool for reporting. Data and computation reshape the way a reporter sees the world and composes a story. They also control the operation of the information ecosystem she sends her journalism into, influencing where it finds audiences and generates discussion.
So every reporting beat is now a data beat, and computation is an essential tool for investigation. But digitization is affected by inequities, leaving gaps that often reflect the very disparities reporters seek to illustrate. Computation is creating new systems of power and inequality in the world. We rely on journalists, the “explainers of last resort”[1], to hold these new constellations of power to account.
We report on computation, not just with computation.
While a term with considerable history and mystery, artificial intelligence (AI) represents the most recent bundling of data and computation to optimize business decisions, automate tasks, and, from the point of view of a reporter, learn about the world. (For our purposes, AI and “machine learning” will be used interchangeably when referring to computational approaches to these activities.) The relationship between a journalist and AI is not unlike the process of developing sources or cultivating fixers. As with human sources, artificial intelligences may be knowledgeable, but they are not free of subjectivity in their design — they also need to be contextualized and qualified.
Ethical questions about introducing AI in journalism abound. But since AI has once again captured the public imagination, it is hard to have a clear-eyed discussion about the issues involved in journalism’s call to both report on and with these new computational tools. And so our article will alternate a discussion of issues facing the profession today with a “slant narrative,” set off in italics.
The slant narrative starts with the 1964 World’s Fair and a partnership between IBM and The New York Times, winds through commentary by Joseph Weizenbaum, a famed figure in AI research in the 1960s, and ends in 1983 with the shuttering of one of the most ambitious information delivery systems of the time. The simplicity of the role of computation in the slant narrative will help us better understand our contemporary situation with AI. But we begin our article with context for the use of data and computation in journalism — a short, and certainly incomplete, history before we settle into the rhythm of alternating narratives.
Data, Computation, and Journalism
Reporters depend on data, and through computation they make sense of that data. This reliance is not new. Joseph Pulitzer listed a series of topics that should be taught to aspiring journalists in his 1904 article “The College of Journalism.” He included statistics in addition to history, law, and economics. “You want statistics to tell you the truth,” he wrote. “You can find truth there if you know how to get at it, and romance, human interest, humor and fascinating revelations as well. The journalist must know how to find all these things—truth, of course, first.”[2] By 1912, faculty who taught literature, law, and statistics at Columbia University were training students in the liberal arts and social sciences at the Journalism School, envisioning a polymath reporter who would be better equipped than a newsroom apprentice to investigate courts, corporations, and the then-developing bureaucratic institutions of the 20th century.[3]
With these innovations, by 1920, journalist and public intellectual Walter Lippmann dreamt of a world where news was less politicized punditry and more expert opinion based on structured data. In his book Liberty and the News, he envisioned “political observatories” in which policy experts and statisticians would conduct quantitative and qualitative social scientific research, and inform lawmakers as well as news editors.[4] The desire to find and present facts, and only facts, has been in journalism’s DNA since its era of professionalization in the early 20th century, a time when the press faced competition from a new—and, like photography and cinema, far more visceral—medium: radio.[5] It was also a time when a sudden wave of consolidations and monopolization rocked the press, prompting print journalists to position themselves as a more accurate, professional, and hopefully indispensable labor force. Pulitzer and William Randolph Hearst endowed journalism schools, news editor associations were formed and codes of ethics published, regular reporting beats emerged, professional practices such as the interview took shape, and “objectivity” became the term du jour in editorials and the journalistic trade press.[6]
“Cold and impersonal” statistics could “tyrannize over reason,”[7] Lippmann knew, but he had also seen data interpreted simplistically and stripped of context. He advocated for journalists to think of themselves as social scientists, but he cautioned against wrapping unsubstantiated claims into objective-looking statistics and data visualizations. However, it took decades until journalism eased into data collection and analysis. Most famously, the journalist and University of North Carolina professor Philip Meyer promoted a new style of journalistic objectivity, known from the title of his practical handbook as “precision journalism.”[8]
Precision journalism gained popularity as journalists took advantage of the Freedom of Information Act (FOIA), passed in 1966, and used it and other resources to make up for some chronic deficiencies of shoe-leather reporting. Meyer recounts reporting with Louise Blanchard[9] in the late 1950s on fire and hurricane insurance contracts awarded by Miami’s public schools. Nobody involved in the awards would talk about the perceived irregularities in the way no-bid contracts were granted, so Meyer and Blanchard analyzed campaign contributions to school board members and compared them to insurance companies receiving contracts. The process required tabulation and sorting—two key tasks at which mainframe computers would excel a decade later.[10] Observing patterns—as opposed to single events or soundbites from interviews—introduced one of the core principles of computer assisted reporting (CAR): analyzing the quantitative relationship between occurrences, as opposed to only considering those occurrences individually.
As computation improved, most reporters began to use spreadsheets and relational databases to collect and analyze data. This new agency augments investigative work: individual journalists can collect, receive, and interpret data to characterize situations that might otherwise be invisible. Meyer’s early stories helped make computation feel like the logical, and increasingly necessary, work of a reporter. But this is only the beginning of the advances in reporting and news production that computation represents for the profession. There is more ahead for computational journalism—that is, journalism using machine learning and AI—and we will explore the complexities in the remainder of this article.
This historical presentation has been brief and incomplete, but it is meant to provide a context for the use of data and computation in journalism. We are now going to retreat to a point in the middle of this history, the early 1960s, and begin our narrative with the 1964 World’s Fair.
1964—An elaboration of human-scale acts
For the IBM Pavilion at the 1964 World’s Fair in New York, Charles Eames and Eero Saarinen designed an environment that was meant to demystify the workings of a computer, cutting through the complexity and making its operations seem comprehensible, even natural. The central idea was that computer technologies—data, code, and algorithms—are ultimately human.
“… [W]e set forth to create an environment in which we could show that the methods used by computers in the solution of even the most complicated problems are merely elaborations of simple, human-scale techniques, which we all use daily.
“… [T]he central idea of the computer as an elaboration of human-scale acts will be communicated with exciting directness and vividness.”
Designer’s Statement by Charles Eames and Eero Saarinen, 1964
At the heart of their pavilion was The Information Machine, a 90-foot high ovoid theater composed of 22 staggered screens that filled a visitor’s field of view. The audience took their seats along a steep set of tiers that were hoisted into the dome theater. The movie, or movies, that played emphasized that the usefulness of computation goes beyond any specific solution. The film is really about human learning as a goal—about how we gain insight from the process leading to and following a computation. The next quote comes from an IBM brochure describing The Information Machine—it is the film’s final narration followed by the stage direction for its close.
Narrator: “…[T]he specific answers that we get are not the only rewards or even the greatest. It is in preparing the problem for solution, in these necessary steps of simplification, that we often gain the richest rewards. It is in this process that we are apt to get an insight into the true nature of the problem. Such insight is of great and lasting value to us as individuals and to us as society.”
With a burst of music, the pictures on the screens fade away, your host comes back to say goodbye. Below you, the great doors swing open… The show is over.
IBM Description of “The Information Machine”, 1964
In subsequent reviews of the pavilion, the design of The Information Machine was said by some critics to walk a fine line, attempting to make the computer seem natural on the one hand, but conveying its message through pure spectacle on the other—with the dome theater, the coordinated screens, and the large, hydraulic seating structure. The end effect is not demystification but “surprise, awe and marvel.”[11] In contemporary presentations of computation, and in particular AI and machine learning, we often lose the human aspects and instead see these methods just with “surprise, awe and marvel” — a turn that can rob us of our agency to ask questions about how these systems really operate and their impacts.
Contemporary Reporting and Its Relation to AI
Creating data, filling the gaps
The commentary in Eames’ film reminds us that computational systems are human to the core. This is a fact that we often forget, whether or not it is expressed in an advertising campaign for IBM. Reporting depends on a kind of curiosity about the world, a questioning spirit that asks why things work the way they do. In this section, we describe how journalists employ AI and machine learning to elaborate their own human-scale reporting. How does AI extend the reach of a reporter? What blind spots or inequalities might the partnership between people and computers introduce? And how do we assess the veracity of estimations and predictions from AI, and judge them suitable for journalistic use?
The job of the data or computational journalist still involves, to a large extent, creation of data about situations that would otherwise be undocumented. Journalists spend countless hours assembling datasets: scraping texts and tables from websites, digitizing paper documents obtained through FOIA requests, or merging existing tables of data that have common classifiers. While we focus on the modeling and exploration of data in this section, some of the best reporting starts by filling gaps with data sets. From the number of people killed in the US by police[12] to the fates of people returned to their countries of origin after being denied asylum in the US,[13] data and computation provide journalists with tools that reveal facts about which official records are weak or nonexistent.
In another kind of gap, computational sophistication is uneven across publications. Some outlets struggle to produce a simple spreadsheet analysis, while large organizations might have computing and development resources located inside the newsroom or available through an internal R&D lab. Organizations such as Quartz’s AI Studio[14] share their resources with other news organizations, providing machine learning assistance to reporters. Journalism educators have slowly begun to include new data and computing tools in their regular curricula and in professional development. Many of the computational journalism projects we discuss in this article are also collaborations among multiple newsrooms, while others are collaborations with university researchers.
The Two Cultures
Querying data sources has become an everyday practice in most newsrooms. One recent survey found that almost half of polled reporters used data every week in their work.[15] Reporters are experimenting with increasingly advanced machine learning or AI techniques, and, in analyzing these data, also encounter different interpretations of machine learning and AI. As statistician Leo Breiman wrote in 2001, two “cultures” have grown up around the questions of how data inform models and how models inform narrative.[16]
Breiman highlights the differences between these two groups using a classical “learning” or modeling problem, in which we have data consisting of “input variables” that, through some unknown mechanism, are associated with a set of “response variables.”
As an example, political scientists might want to judge Democratic voters’ preferences for candidates in the primary (a response) based on a voter’s demographics, their views on current political topics like immigration, and characteristics of the county where they live (the inputs). The New York Times recently turned to this framework when trying to assess whether the Nike Vaporfly running shoe gave runners an edge in races (again, a response) based on the weather conditions for the race, the gender and age of the runner, their training regime, and their previous performances in races (inputs).[17] Finally, later in the article we will look at so-called “predictive policing,” which attempts to predict where crime is likely to occur in a city (response) based on historical crime data, as well as aspects of the city itself, like the location of bars or subway entrances (inputs).
Given these examples, in one of Breiman’s modeling cultures, the classically statistical approach, models are introduced to reflect how nature associates inputs and responses. Statisticians rely on models for hints about important variables influencing phenomena, as well as the form that influence takes, from simply reading the coefficients of a regression table, to applying post-processing tools like LIME (Local Interpretable Model-Agnostic Explanations)[18] to describe the more complex machine learning or AI models’ dependence on the values of specific variables. In this culture, models are rich with narrative potential—reflecting something of how the system under study operates—and the relationship between inputs and outputs offer new opportunities for reporting and finding leads.
The second modeling approach deals more with predictive accuracy — the performance of the model is important, not the specifics of how each input affects the output. In this culture, a model does not have to bear any resemblance to nature as long as its predictions are reliable. This has given rise to various algorithmic approaches in which the inner workings of a model are not immediately evident or even important. Instead, the output of a model—a prediction—carries the story. Journalists using these models focus on outcomes, perhaps to help skim through a large data set for “interesting” cases. We can judge the ultimate “fairness” of a model, for example, by examining the predicted outcomes for various collections of inputs. Since Breiman’s original paper in 2001, the lines between these two cultures have begun to erode as journalists and others call for the explainability of algorithmic systems.
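Breiman's contrast can be made concrete with a small sketch. Using synthetic data and simple stand-in models (not any of the analyses cited in this article), we fit a least-squares model whose coefficients can be read off and narrated, alongside a nearest-neighbor "black box" that is judged only by its held-out prediction error:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a response that depends linearly on two inputs, plus noise.
n = 200
X = rng.normal(size=(n, 2))  # inputs (think: demographics, weather, ...)
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=n)

train, test = slice(0, 150), slice(150, None)

# Culture 1: model nature's mechanism, then read the coefficients.
A = np.column_stack([np.ones(150), X[train]])
coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
print("fitted coefficients:", np.round(coef[1:], 2))  # close to 3.0 and -1.5

# Culture 2: judge only predictive accuracy. A 1-nearest-neighbor predictor
# is a "black box" whose inner structure says nothing about the mechanism.
def knn_predict(x):
    d = np.linalg.norm(X[train] - x, axis=1)
    return y[train][np.argmin(d)]

preds = np.array([knn_predict(x) for x in X[test]])
rmse = float(np.sqrt(np.mean((preds - y[test]) ** 2)))
print("held-out RMSE:", round(rmse, 2))
```

The first model supports a narrative about how inputs influence the response; the second supports only claims about its predictions, which is exactly the distinction Breiman drew.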
Classical Learning Problems and Narratives
The distinction between supervised and unsupervised learning problems is also a useful one. So far, we have considered models for so-called supervised learning, linking inputs to responses. ProPublica’s “Surgeon Scorecard,” for example, involved mixed-effects logistic regressions to assess nationwide surgeon and hospital performance on eight elective procedures, controlling for variables like a patient’s age, sex and “HealthScore.” Their modeling involved four years of hospital billing data from Medicare, 2.3 million data points in all. Buzzfeed’s “Spies in the Skies” series trained a machine learning model to recognize the flight patterns of FBI and DHS surveillance aircraft and applied the model to the flight paths of 20,000 other planes over four months, using the data to identify potential spy planes. In both cases, the news organizations published their data and code.[19][20] Recalling Breiman’s two cultures, in supervised learning the narrative can be produced from the outputs of a model (“Which flight paths correspond to spy planes?” or “Which surgeons are better than others?”) or the structure of the model itself.
In unsupervised learning, on the other hand, we don’t have responses. Instead, we use models to find patterns in a dataset—cluster analysis is a common example. Unsupervised learning, with its emphasis on exploratory analysis, has been used to identify patterns to create a narrative. The Marshall Project’s “Crime in Context” performed a form of analysis called hierarchical clustering to group cities in the United States based on the time series of historical crime statistics. The reporters were asking a succinct question: What are the broader patterns of crime in cities over the last 40 years? Similarly, FiveThirtyEight’s “Dissecting Trump’s Most Rabid Online Following” applied an unsupervised procedure known as latent semantic analysis (LSA) to describe the character of the comment boards on Reddit, with two subreddits being similar if many users post to both. This results in what they call “subreddit algebra” — “adding one subreddit to another and seeing if the result resembles some third subreddit, or subtracting out a component of one subreddit’s character and seeing what’s left.” Again, both organizations, The Marshall Project and FiveThirtyEight, released data and/or code for their stories.[21]
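The clustering idea behind a story like “Crime in Context” can be sketched in miniature. Here toy synthetic series stand in for decades of per-city crime rates, and a minimal single-linkage agglomerative clustering (a simplified version of the hierarchical clustering the reporters used) separates cities with rising trends from cities with falling ones:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for 40 years of per-city crime rates: two latent trends.
years = np.arange(40)
rising = 0.5 * years + rng.normal(scale=2, size=(4, 40))       # cities 0-3
falling = 20 - 0.4 * years + rng.normal(scale=2, size=(4, 40))  # cities 4-7
series = np.vstack([rising, falling])

# Minimal single-linkage agglomerative clustering down to 2 groups.
clusters = [[i] for i in range(len(series))]

def dist(a, b):
    # single linkage: distance between the closest pair of member series
    return min(np.linalg.norm(series[i] - series[j]) for i in a for j in b)

while len(clusters) > 2:
    # merge the two closest clusters
    p, q = min(((p, q) for p in range(len(clusters))
                       for q in range(p + 1, len(clusters))),
               key=lambda pq: dist(clusters[pq[0]], clusters[pq[1]]))
    clusters[p] += clusters.pop(q)

print(sorted(sorted(c) for c in clusters))
```

With real data the reporters' question ("What are the broader patterns of crime over 40 years?") is answered by inspecting which cities the procedure groups together.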
In all of the cases mentioned, from the Surgeon Scorecard to FiveThirtyEight’s analysis of Reddit, AI has helped journalists scale up their reporting, asking questions of systems that would otherwise be beyond their reach. But difficult technical questions come with this stretch—questions about whether a model adequately describes the data it draws on, and about whether we are more inclined to trust a model because it agrees with beliefs we held before we performed an analysis. Have we merely given our prior opinions an air of scientific validity? The use of machine learning in forming narratives from data requires a degree of caution. Consulting technical experts is always sensible in complex situations—each of the stories we cited had a statistician, a computer scientist, or a data scientist contributing to or performing the analysis.
Dealing with Uncertainty and Criticism
As with any statistical procedure, conveying uncertainty is an important component of these stories. When journalists express a model’s uncertainty in their stories, they show that the data and the fitted model are inherently “noisy,” that not all data can be fully explained by the model. They also tell their readers that they are aware of the limits and constraints of their analysis. ProPublica, for example, incorporated uncertainty assessments along with its “Surgeon Scorecard” visualizations — publishing individualized data of this kind can raise ethical concerns, and the data could easily give an incorrect impression without the proper assignment and representation of uncertainty. BuzzFeed explicitly noted the fallibility of its algorithm—the algorithm would mistake circular flight paths over skydiving centers for surveillance operations—and relied on traditional reporting to check for false positives. False negatives— in this case, surveillance planes with uncharacteristic behaviors—could escape the attention of the reporters, they noted.
Predictions of the winner of the 2016 presidential election, computed by many journalistic outlets, often lacked any prominent assessment of uncertainty. Large percentage indicators in red and blue type held many anxious voters’ attention captive for months. Then on election night, The New York Times faced sharp criticism for a controversial piece of code, the JITTER.get() command in its infamous “needle” data visualization—while the needle’s jitter, or random vibration, was designed to reflect the uncertainty in the selection of the current frontrunner, readers instead viewed it as expressing live, incoming data updates. In response, during the lead-up to the 2018 midterms, The Times explored a real-time polling approach that tried to unpack the mechanisms behind modern surveys. The new project promised “you’ll learn what the so-called margin of error means in a more visceral way than ‘+/- 4%’ can ever convey,” and that you would understand the difficulties in conducting an accurate poll.[22]
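The “+/- 4%” shorthand itself comes from a simple calculation: a 95% normal-approximation interval for a polled proportion, sketched here with invented poll numbers:

```python
import math

# A poll of n respondents, a share p favoring one candidate. The familiar
# "+/- X points" is roughly a 95% normal-approximation interval half-width.
def margin_of_error(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(0.52, 600)  # hypothetical: 52% support, 600 respondents
print(f"52% +/- {100 * moe:.1f} points")
```

A poll of 600 yields a margin of roughly four points, which is why a 52–48 race is, statistically speaking, too close to call from such a survey alone.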
Communicating uncertainty effectively is already a complex problem, more so when it comes to investigations. As Jonathan Stray, research fellow at The Partnership on AI, points out,[23] some forms of reporting require journalists to be certain of wrongdoing before they publish a story about a crime. It is unlikely that a machine learning prediction will ever meet that standard, that degree of certainty. This forces journalism into new territory.
Journalists tend to be tool users rather than tool makers, and the application of advanced AI methods is not without growing pains. When a journalist’s reporting leads them into the realm of original research, they are open to the criticisms more commonly found in the academic review process, and not solely the questions posed by their editor. Explaining a computational methodology might fall into a “nerd box” published with a story, or there might also be a separate white paper and GitHub repo with source code if the complexity warrants it. ProPublica, for example, sought expert advice from academics, anticipating that different communities would examine their work. In the case of FiveThirtyEight’s Reddit story, the explanation of LSA served as the backbone of the story. In this way, machine learning is helping to create new narrative forms that allow deeper explanations of complex stories.
Implications for Practice
The journalistic examples discussed so far are neither unique nor even fully representative of the variety and rate of change taking place in the field. But they do highlight some of the problems that journalists have to contend with when partnering with AI. The first is simply the choice of available data that are used to represent some phenomenon being studied. This, together with the machine learning method selected, defines a kind of perceptual engine through which the journalist will ask questions, dig deeply into some situation, and follow her curiosity. If there are blind spots, like missing bits of data, or classifiers that fail to capture important circumstances of a story, she has to engineer ways to identify them, checking on her methods.
Her work needs to be clear and well-documented—“Show your work!” is a longstanding appeal from the CAR community. To test the veracity of one reporter’s computation, some newsrooms even have a second reporter reproduce their results independently. In some cases, it is only the results of the model—or the implications of the results—that are verified independently, as in BuzzFeed’s surveillance article and the FiveThirtyEight Reddit piece. It did not matter how the conclusion was derived, only that it was correct. When journalists produce data-based observations on their own, their work is also open to other lines of critique. Standards from statistics, the social sciences, or even engineering may apply.
Standards for journalism have emerged from sessions at professional conferences, like the “Regression in the Newsroom” session at SRCCON 2018[24], an annual conference focusing on the intersection of journalism and technology, as well as a standards session on advanced modeling at 2019’s Online News Association meeting. Journalists have begun to recognize that the profession has to think through what tools it employs and how it should test the results of their use. Standardization is not purely the result of measured consideration among professionals: through reporting on AI, journalists are also defining a set of standards for their reporting with AI.
1964—“The Fair? We’re There!”
The New York Times was part of IBM’s presence at the 1964 World’s Fair. In a corner of the IBM pavilion was a machine trained to recognize handwritten dates. A visitor could write down a date—month, day, and year—on a card and feed it into an optical scanner. Before the fair, IBM researchers had visited The Times’ headquarters on 43rd Street in Manhattan and associated each date with an important news event reported in the paper.
Way back last summer when the Fair was mostly a gleam in Robert Moses’s eye, I.B.M. researchers took over what used to be the Information Department on the 3rd floor, and began the exhaustive job of poring through microfilm editions of The Times, day by day since Sept. 18, 1851. They extracted from each day’s paper the most important news story and compressed it into headlines. This is the material that was fed into the I.B.M. machine.
The Times’ first issue appeared on September 18, 1851—although it was then known as the New York Daily Times—meaning the machine had full access to the paper’s archives. In their report of the IBM pavilion, Popular Science described the handwriting recognition machine in this way:[25]
“… [W]rite the date on a card and watch an electronic Ouija board gobble it up, read the handwriting, and seconds later spell out the banner headline of that day.”


Souvenir Card from IBM Pavilion at the 1964/1965 New York World’s Fair, from the Collections of The Henry Ford.
The card that was produced referenced a Times story on the front—with the headline written by an IBM researcher—and then a detailed explanation of the method used to recognize your date on the back. In describing the algorithm for a brochure accompanying the exhibition, the company described how an electronic beam outlines “the contours of each number by traveling around it in a series of continuous circles, in much the same way that children trace letters for penmanship exercises.”[26] The explanation was simple, keeping with the theme that computers enact “elaborations of human-scale acts.”
The front of the card, however, was the result of a myriad of human-scale acts, these primarily editorial. How did the IBM researchers select an important headline for a day? Was it the A1 story? Was it something that might have seemed less important on the requested date in 1900, but was seen as a pivotal moment by 1964? Did the headline have to involve familiar people and places for popular appeal? There is no explanation, at least none that we could find.
The handwriting recognition machine was also prepared for people who made unreasonable requests. When one visitor asked for a headline from April 22, 9999, for example, the machine returned a card reading “Since the date you have requested is still in the future, we will not have access to the events of this day for 2,934,723 days.”[27] For IBM, the purpose of the exhibition seemed to be less about exhibiting The Times’ content than demonstrating handwriting recognition and “the progress being made in finding better ways to enter data into computers,” so that IBM could “reduce the time and cost now required to get this information from the people who create it to the high-speed computers that process it.”[28]

Printout of The Times’ headline database for the IBM Pavilion’s handwriting recognition system. Image by Mark Hansen.
The “database” of Times headlines assembled by IBM researchers, binding each day with an event, was printed out and bound as a series of books that eventually found their way into The Times’ clipping room or “morgue.” Ultimately, most of them were thrown out when The Times moved to their new building on 8th Avenue in Manhattan in 2007, but the current “keeper” of the morgue, Jeff Roth, saved three.
The News as Data
In IBM’s handwriting recognition machine, algorithms are an “elaboration of human-scale acts,” computational systems that could replace or augment human effort—in that case, tedious data input. We have already seen how AI and machine learning can help journalists report, but computation affects other kinds of human work supporting both the profession and the business. Algorithms recommend, personalize, moderate, and even assess fairness. In designing “editorial” systems, much can be learned about the original, human problem that AI or computation broadly is designed to address. In some cases, computation augments or replaces human labor. In others, algorithms perform “at scale” tasks that were not possible before, raising ethical questions. These editorial advances start with the view that the news itself can be thought of as data.
A1 v. AI: Maximizing (and Moderating) Engagement
Contemporary newsgathering relies heavily on computation. We start with a substrate of recorded observations of the world—digital recordings of firsthand accounts, traditional documents and spreadsheets, trends on social media, even real-time sensor data. These inputs can then be analyzed via AI to identify (and sometimes even author) stories, with roles for both supervised and unsupervised learning. When stories are published on the web and across social media, a digital cloud of audience reactions forms around them. And journalism itself is data, open as such to a range of new treatments. In addition to the content of a story, modern news as an industrial, digital, and commercial product is bristling with metadata describing who produced it and what it is about.
Every story from The New York Times is “tagged” to categorize its content—a much more sophisticated system than IBM used in its index at the 1964 World’s Fair. The Times creates these tags using rules drawn partly from the frequency of words and phrases in the article, or their appearance in the headline or lede. The rules are passed on to a human in the newsroom who verifies or changes the tags. Eventually the “Taxonomy Team” adjusts rules and introduces new tags to The Times category vocabulary.[29] These tags might be used to help publishers make recommendations, or issue alerts when stories appear on a topic (one or more tags) that readers are interested in.
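The Times’ actual rules and taxonomy are not public in detail, so the following is only a toy illustration of the general kind of system described: word-and-phrase frequency scores with extra weight for headline matches, with the output routed to a human for verification. The rule words and threshold here are invented:

```python
# Hypothetical tag rules keyed on words; headline hits count double.
RULES = {
    "elections": ["ballot", "voters", "primary"],
    "climate": ["emissions", "warming", "carbon"],
}

def suggest_tags(headline, body, threshold=2.0):
    head = headline.lower().split()
    text = body.lower().split()
    scores = {}
    for tag, words in RULES.items():
        score = sum(text.count(w) for w in words)       # body frequency
        score += 2 * sum(head.count(w) for w in words)  # headline bonus
        if score >= threshold:
            scores[tag] = score
    # ranked suggestions, to be confirmed or changed by a human in the newsroom
    return sorted(scores, key=scores.get, reverse=True)

print(suggest_tags("Voters head to the primary",
                   "Turnout among voters was high as the primary opened"))
```

The human-in-the-loop step matters: the rules only propose tags, and the newsroom’s Taxonomy Team adjusts the rules and vocabulary over time.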
Because stories are data, a reader’s engagement with content is now data, as well. Computation and AI recommend content to readers, or add it to personal feeds. They may also adapt its form. Decisions about story placement are slowly shifting from morning news meetings to include personalization algorithms.
New strategies arise as organizations mix traditional “A1” (front-page) layout judgment with AI that sorts potentially A1-worthy stories according to its own logic. Small experiments are constantly running on many news sites—organized via classical A/B testing[30] and, recently, contextual “multi-armed bandits.” These tools collect user actions to choose among stories placed on a site’s home page, among different potential headlines for each story, or among placements of different “page furniture” like illustrations, charts, or photographs. These AI systems attempt to maximize engagement. Other user-facing functions are being incorporated into “smart” paywalls that use AI to estimate the probability that a reader will respond in a specific way to an action taken by the publisher. Instead of capping your monthly allotment of free articles at 10, a smart paywall might extend your free period by 5 articles if it estimates your propensity to subscribe will increase significantly as a result.
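The bandit idea can be sketched with the simplest strategy, epsilon-greedy: mostly show the headline with the best observed click-through rate, but occasionally explore the alternatives. All of the rates below are invented for illustration, and real newsroom systems use more sophisticated, contextual variants:

```python
import random

random.seed(7)

# Hypothetical click-through rates for three candidate headlines.
true_ctr = [0.03, 0.05, 0.04]
shows = [0, 0, 0]
clicks = [0, 0, 0]

def choose(eps=0.1):
    # explore with probability eps, otherwise exploit the best rate so far
    if random.random() < eps or sum(shows) == 0:
        return random.randrange(3)
    return max(range(3), key=lambda a: clicks[a] / shows[a] if shows[a] else 0.0)

for _ in range(20000):
    a = choose()
    shows[a] += 1
    clicks[a] += random.random() < true_ctr[a]

print("impressions per headline:", shows)
```

Over many impressions, the bandit routes most traffic to the best-performing headline without a fixed test period, which is its advantage over a classical A/B test.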
These calculations might involve studying histories of readers’ actions on the site. Reader data retention—and the kinds of computations performed on that data—take on special importance for news organizations that report skeptically on other companies’ use or misuse of personal data, but personalize their own content to maximize engagement and sell advertising. Just recently, Chris Wiggins, the chief data scientist at The Times, announced that “We’re moving away from tracking analytics on people and towards tracking analytics on stories.”[31] As an example of this flipped emphasis, The Times’ advertising team wondered if they could “accurately predict the emotions that are evoked by Times articles?” The answer was “Yes,” and the team determined “that readers’ emotional response to articles is useful for predicting advertising engagement.”[32]
As objects of computation, stories circulate across the web and on social media sites, each collecting its own data and employing its own AI systems to guide content sharing. Again, data about engagement (sharing and likes, say) can be optimized by a publisher, adapting messages that lead readers to their content. The AI systems run by social media platforms are typically opaque to news outlets, however, and many outlets suffer when algorithms change, directing traffic and attention away from their sites.[33]
AI also shapes audience engagement with comment moderation. For many sites, comments from readers can quickly degenerate into harsh disagreements and away from anything resembling civil discourse. The Times once manually edited comments, but in 2017 gave much of this work over to Google, using a machine learning tool trained on 16 million Times-moderated comments dating back to 2007.[34] The tool estimated the probability that a comment would be rejected by a moderator, leaving the final judgment to a human working from a sorted list of comments.
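The workflow described, in which a model scores comments and a human moderates from a risk-sorted list, can be sketched with hypothetical comments and scores (the real tool and its training are, of course, far more involved):

```python
# Hypothetical comments with a model's estimated rejection probability.
scored = [
    ("Thanks for this reporting.", 0.02),
    ("You people are all idiots.", 0.91),
    ("I disagree; here is a source.", 0.10),
    ("Spam spam buy now!!!", 0.97),
]

# The model only ranks. A human moderator still makes the final call,
# working from the riskiest comments down the list.
queue = sorted(scored, key=lambda c: c[1], reverse=True)
for text, p in queue:
    print(f"{p:4.2f}  {text}")
```

Sorting rather than auto-rejecting is the design choice that keeps the human judgment in the loop while letting the machine absorb the volume.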
Equity Questions
Collections of tagged news stories are at the core of many natural language processing toolkits,[35] providing a stable training bed with consistent language and stylistic standards. Text classification courses typically derive an automated tagger from a collection of news articles. But we can go much farther than just keywords now, using or developing tools that can treat text as data.
GenderMeme, for example, was a platform created by a pair of graduate students in Stanford’s Statistics Department.[36] It used various features from the text to identify the (binary) gender of the sources quoted in a story. The tool has found its way into two prominent news outlets, one of which has incorporated it into its content management system. And so in addition to a spell check option, and a list of suggested tags, reporters and editors can conduct a gender check—a simple report that “Of your 13 sources, 9 are male and 4 are female.” While the initial application is simply informational, the tool addresses real concerns in newsrooms about equity in sourcing.
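The reporting step of such a gender check is simple once sources have been identified and classified; the hard part, identifying quoted sources in text, is omitted here, and the source list below is hypothetical:

```python
from collections import Counter

# Hypothetical output of a source-identification step: each quoted source
# with the (binary, as in the original tool) gender the system inferred.
sources = ["male", "female", "male", "male", "female",
           "male", "male", "female", "male", "female",
           "male", "male", "male"]

tally = Counter(sources)
print(f"Of your {len(sources)} sources, "
      f"{tally['male']} are male and {tally['female']} are female.")
```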
Examining television news broadcasts, or even 24-hour news networks, is a heavier computational lift. Video is simply harder to analyze than flat text: computers readily recognize letters, numbers, and other characters, but they require far more sophisticated processing to identify the objects represented by the pixels of a photo or a still image taken from a video. But computer vision has advanced to the point that it can reliably estimate who is doing the talking, as well as who and what is being talked about. Another Stanford Computer Science project, called Esper, has partnered with the Internet News Archive to tease apart gender and other equity issues on news outlets like CNN and Fox News.[37] Doing this at scale provides a view across months or years of data.
The work of assigning tasks to these systems, whether human, machine, or cyborg, reduces to a set of institutional choices that beat reporters will recognize from their own work: What questions do we ask, and who gets to ask them? These are “elaborations of human-scale acts” and reflect the character of power in the newsroom.
Automated Stories
We can see news as data in two ways. In this section, we have examined the use of news as input used to categorize, creating structured data for recommendation engines, or to assess equity in sourcing. But news can also be the product of AI systems, auto-generated textual data describing the who, what, when, where, why, and how of an event in story form. Many large newsrooms have used this technology for some time in sports and financial reporting: Box scores in, news story out. Narrative Science, founded in 2010, was an early player, at first hand-crafting rule-based systems (called “heuristic programming”) and later building rules using machine learning. The AP outlined this technology and other AI applications in a popular recent whitepaper.[38] The AP’s “robo-journalism” about publicly-traded companies’ earnings reports “synthesizes information from firms’ press releases, analyst reports, and stock performance and are widely disseminated by major news outlets a few hours after the earnings release,” according to a recent study, which said the automated stories can affect “trading volume and liquidity.”[39] Much of the initial writing about AI-assisted story generation focused on workflows and human labor, but this study suggests impacts on our dynamic information ecosystem.
The widespread use of AI or machine learning to generate certain classes of stories, even if only in draft form, seems inevitable. The profession is working through the best way to augment reporters’ capabilities with those of a learning system—which stories are collaborations and which are fully automated? Various research projects are also looking at how these systems could be designed by journalists and not just for journalists, a transition that seems inevitable as the profession takes responsibility for more of its technology development — building methods and tools that reflect their unique values and ethics.
A Short Comment on Fakery
Importantly from our perspective in 2019, there have been several attempts to use AI to identify “fake news.” Approaches to misinformation range from automating fact-checking to identifying telltale patterns of coordination on social media designed to amplify a message—the “who” behind promoting a story is important data here. One recent approach involves defining a special set of tags for a news story known as “credibility indicators.”[40] “These indicators originate from both within an article’s text as well as from external sources or article metadata”—from how “representative” the title is, to the tone of the article, to whether it was fact-checked. The indicators are meant to provide signals to third parties, including AI systems built to detect misinformation. As news stories have become data, we can scan them for signals that they are, well, news. The use of AI to tackle fakery is in its infancy. Some approaches focus on multimedia analysis to detect doctored images and video,[41] while another applies neural networks in an adversarial way, using AI to generate fake stories and building detectors from the features of the generated stories.[42]
1966—To Explain is to Explain Away
From 1964 to 1966, Joseph Weizenbaum was developing his ELIZA program, an experiment in human-computer “conversation”—in short, a chatbot. Much has been written about ELIZA and its success, and the program has been republished in modern languages like Python.[43] At its core, ELIZA searches a person’s input to the conversation for certain keywords. If a keyword is found, a rule is applied to transform the input and craft ELIZA’s response. Weizenbaum’s program is an example of a kind of artificial intelligence that was often referred to as “heuristic programming”—it depends on “‘Hints’, ‘suggestions’, or ‘rules of thumb’”, to quote the AI pioneer Marvin Minsky.[44]
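The keyword-and-rule mechanism can be sketched in a few lines. The rules below are illustrative stand-ins, not Weizenbaum's original script, which ranked keywords and carried far richer transformations.

```python
import re

# A few illustrative rules in the spirit of ELIZA's DOCTOR script; the
# original ranked keywords and included memory and pronoun reflection.
RULES = [
    (r"\bi am (.*)", "How long have you been {0}?"),
    (r"\bmy (.*)", "Tell me more about your {0}."),
    (r"\bbecause (.*)", "Is that the real reason?"),
]
DEFAULT_RESPONSE = "Please go on."

def respond(user_input):
    """Scan the input for a keyword pattern and apply its transformation rule."""
    text = user_input.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(*match.groups())
    return DEFAULT_RESPONSE

print(respond("I am unhappy."))      # How long have you been unhappy?
print(respond("It rained today."))   # Please go on.
```

Exhibited this way, the "conversation" is plainly a mere collection of procedures, which is precisely the demystification Weizenbaum was after.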
ELIZA worked a little too well, often convincing its users that they were interacting with a real person. This horrified Weizenbaum, who eventually went on a campaign to explain its simple heuristics, “to further rob ELIZA of the aura of magic.”[45] His motivation was to expose the program as a set of computer instructions. He starts his 1966 paper on ELIZA this way:
It is said that to explain is to explain away. This maxim is nowhere so well fulfilled as in the area of computer programming, especially in what is called heuristic programming and artificial intelligence. For in those realms machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer. But once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away; it stands revealed as a mere collection of procedures, each quite comprehensible. The observer says to himself ‘I could have written that’.
ELIZA is ultimately shown as the product of “human-scale acts,” but ELIZA’s own acts amount to impersonation. Here we see the balance between how the public receives AI and its underlying design—the magic versus the very real human dynamics expressed in data, code, and algorithms.
Explainers of Last Resort
The ubiquity of computation has led to questions about how these systems actually work. Ideas like “explainability” have been added to the older concerns about speed and memory usage. In journalism, explainable systems help with reporting. When explainable, AI is open to direct interrogation, and, if the AI itself is open source, can be examined line by line. This was Weizenbaum’s approach with ELIZA—demystify by “explaining away.” In his case, the rules ELIZA followed were one and the same as his algorithm, and “explaining” those rules was simply a matter of exhibiting code. With modern learning systems, the priority has been prediction accuracy, not explainability, and so we have produced black boxes that we can only examine through patterns of inputs and outputs. Given the proprietary nature of many of our important software systems, journalists have a difficult time holding AI’s power to account.
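Examining a black box from its inputs and outputs can itself be sketched simply: perturb one input at a time and record how the output shifts. The scoring function here is a hypothetical stand-in for a proprietary model whose internals we cannot read.

```python
# A toy "black box": a stand-in for a proprietary scorer we can only query.
# (Hypothetical function; we pretend we cannot see inside it.)
def black_box_score(features):
    length, caps_ratio, link_count = features
    return min(1.0, 0.1 + 0.5 * caps_ratio + 0.2 * link_count)

def probe(model, baseline, deltas):
    """Estimate each input's influence by perturbing it and watching the output."""
    base = model(baseline)
    influence = {}
    for i, delta in enumerate(deltas):
        perturbed = list(baseline)
        perturbed[i] += delta
        influence[i] = model(tuple(perturbed)) - base
    return influence

baseline = (40, 0.1, 0)  # comment length, share of capital letters, link count
effects = probe(black_box_score, baseline, deltas=[10, 0.2, 1])
for i, shift in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
    print(f"input {i}: output shifts by {shift:+.2f}")
```

Perturbation probes of this kind are the intuition behind published explanation methods such as LIME (Ribeiro et al., cited in the bibliography), though those fit local surrogate models rather than single-feature differences.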
While Weizenbaum was reacting to impressions of ELIZA as “magic” in 1966, today’s journalists must contend with perceptions of “objectivity” and “efficiency.” AI, informed by past data, is deployed to help news organizations “optimize” decisions. And, as a mathematical object, AI offers cover—the machine made the decision and we can pretend it is free of subjectivity. But there is rarely a single correct characterization of anything, including what model should be employed to capture the relationship between “input” and “response” variables. What is being optimized when human journalists fit the model? Which human-scale acts should we elaborate? Our choices reflect our values, our biases, our blind spots.
Ultimately, we want to know, is it fair?
Algorithmic Accountability
AI tools themselves can be newsworthy. Sometimes they are applied to governance or management decisions; sometimes they contribute to social or political inequality in some way. Reporting on them is often called “algorithmic accountability,” and it is a thriving field. Nicholas Diakopoulos is an early figure in this area[46], and he maintains a website of potential reporting assignments on algorithms.[47]
A widely cited and early example is ProPublica’s article “Machine Bias” by Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, which described statistical models—algorithms—for computing “risk assessment scores” used by officials to set parole, grant probation, and even sentence the convicted. “If computers could accurately predict which defendants were likely to commit new crimes, the system could be fairer and more selective about who is incarcerated and for how long,”[48] officials reasoned. The reporters demonstrated that one such algorithm tended to score blacks as higher-risk than whites, and made more mistakes when scoring blacks. The debate between the company producing these scores and ProPublica focused, in part, on the different mathematical definitions of “fairness” used by each group, and highlighted the ethical problem of assigning a numerical value to human behavior and social situations. For example, ProPublica reported that although the company did not ask about race in its risk assessment questionnaire, it did ask whether a defendant’s friends used drugs or if their parents had been in jail. The measurement tool then used the responses to these questions as proxies to infer the risk of violence or recidivism.
While explainability makes accountability assessments easier, it also impacts the kinds of power a learning system enacts. Take, for example, The Marshall Project’s reporting on “predictive policing.”[49] Increasingly, municipalities outsource decisions about where to physically deploy officers during their shifts. An AI system attempts to predict where crime will occur and puts “cops on dots” for the next shift.
The programs typically overlay a grid system onto the city, dividing it into cells that measure a block or so on each side. One system, PredPol, makes predictions for each block using a proprietary algorithm inspired by mathematical models of earthquakes, the impact of a crime running through a neighborhood like an aftershock. It draws entirely on historical crime data to make its predictions and its code is entirely proprietary.[50] An alternative approach could model the probability of crime in a grid cell using both previous crimes and aspects of the city near the cell.
Developers of AI systems to predict crime could incorporate material and social aspects of these same environments, re-embedding their algorithmic system in that more complex setting. Those aspects are a natural extension of the data from which the algorithm learns: Is there a subway stop nearby? Are there bars nearby? Is there a full moon? So-called risk terrain modeling (RTM)[51] is a method that examines historical crime data along with the relationship between crime and the geographic features of the places where it occurs. It takes a machine-learning approach to the problem and lets the AI system decide what conditions are dangerous. With an explainable model making the predictions, community stakeholders could be invited to work with law enforcement to diagnose what makes certain conditions dangerous, determining the underlying causes of crime. They can then have a say in the appropriate responses, since not all crimes are best prevented by deploying more officers in an area. In this way, reporting on systems of power that coalesce into AI systems can be more than an analysis of inputs and outputs, or even code audits — it can be a rich socio-technical exercise.
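An explainable grid-cell model of this kind can be sketched as a transparent weighted sum over named features. The weights and features below are invented for illustration; an actual RTM system would learn them from historical crime and geographic data.

```python
# Illustrative risk-terrain-style scoring of city grid cells. Weights and
# features are invented for this sketch; a real RTM system learns them
# from historical crime and geographic data.
WEIGHTS = {"past_crimes": 0.6, "bars_nearby": 0.25, "subway_stop": 0.15}

def cell_risk(cell):
    """Score a grid cell as a transparent weighted sum of named features."""
    return sum(WEIGHTS[name] * cell.get(name, 0) for name in WEIGHTS)

def explain(cell):
    """Break a score into per-feature contributions stakeholders can inspect."""
    return {name: WEIGHTS[name] * cell.get(name, 0) for name in WEIGHTS}

grid = {
    "A1": {"past_crimes": 3, "bars_nearby": 2, "subway_stop": 1},
    "B4": {"past_crimes": 1, "bars_nearby": 0, "subway_stop": 0},
}
for cell_id, features in grid.items():
    print(cell_id, round(cell_risk(features), 2), explain(features))
```

Because each contribution is named and inspectable, stakeholders can argue about the features and weights themselves, which is exactly what an opaque "cops on dots" model forecloses.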
While it is expected that journalism, as a profession, would try to harness the power represented by data, code and algorithms, it is not necessarily a natural fit. Journalism education is becoming increasingly computational, venturing out from spreadsheet work into full-fledged coding courses that provide reporters with better understandings of technological systems. But what is the next step? Journalists call attention to systems of inequality, but should they also produce prototypes for the ethical application of AI? While we report on unfair algorithmic business practices, must we ensure that our own AI-powered paywalls behave fairly? If we seek explainability in algorithmic decision making, do we have the same degree of transparency in our own reporting?
The pedagogical effects have extended beyond journalism. Data science, interpreted broadly as an interdisciplinary field drawing inspiration from many research traditions and methodologies, could incorporate lessons from reporting on AI, and from the use of AI to support journalism. Through this interplay, journalism has attained a degree of data and computational rigor, while data science might find a public mission. At the very least, journalism can and is playing a role in helping AI practitioners adopt standards for responsible application of these tools.
1969—The Information Bank
The first automated retrieval system for news began at The New York Times in the early sixties.[52] Initially, the goal was to computerize access to The Times’ “Index,” a reference work sent to libraries and universities and used by students and researchers.
Later, the plan included creating a database from the files in The Times’ clipping library or “morgue.” Accessing information in the morgue was complex—it consisted of 20 million clippings (articles from The Times and other publications) with an indexing system described as “haphazard, inconsistent, inaccurate” and that only allowed each article to be filed under a single subject. The complexity was growing quickly, with about 10,000 clippings added to the library each week.[53]
Managers at The Times were receptive to the idea of computerization from a cost-cutting perspective—given the recent 1962-63 newspaper strike, they hoped an automated system could make their staff more efficient. Because of the expense of the project, The Times decided it could sell access to “outsiders” like libraries and universities, and create a new source of revenue. This was the first time the public could have a peek into The Times’ morgue.
Here is a portion of a 1969 press release announcing The Times’ Information Bank. It describes their aspirations—bold stuff.
We envision that the instantaneous accessibility of a gigantic store of background information on virtually every subject of human research and inquiry will prove to be of immeasurable value not only to major reference and research libraries, general business services and other media, but also to individuals engaged in all forms of research… For example, the services could be put to invaluable use by government agencies engaged in social research, scholars preparing such major documents as doctoral dissertations, general business services conducting research in specific areas for various clients, journalists marshalling material for books and articles.
Ivan Veit, VP of The Times, March 1969.
The Times presented further justification for their project publicly in an editorial published on August 9, 1969. It begins by linking the then-recent lunar landing on July 20 to the Information Bank and its technological achievement in communications.
Today one can transmit a story so fast that the record of an event is instantaneous if not simultaneous with the event itself. The speed and diversity of communications have overwhelmed the world with reportage, so facilities have had to be devised and techniques developed for handling, storing and retrieving the vast proliferation of current events data to provide a means of bringing some order to the chaos of information and the possibility of reflecting on what it all means.
…[T]he automated information bank… will put the recorded events within instant reach, but it will be up to the human researcher to grasp them.
Editorial, “Reaching for the Record,” August 9, 1969, The New York Times.
IBM performed the programming for the retrieval system (132 person-months[54]) which ran on IBM hardware. The Information Bank was operational by the Fall of 1972. New York Magazine included the Information Bank in Chris Welles’s profile of The Times from 1972.
…[A] student anywhere in the United States interested in the Cambodian incursion, or in what else was happening on the day Lincoln was shot, could simply step into his local university library to converse with the computerized morgue by means of a coaxial umbilical cord direct to New York…
The Times cannot be accused of thinking small. The information seeker would be able to talk to this deus ex machina of information in plain ordinary English—no intricate programming language like BAL, PL/1, or COBOL would be necessary.
Chris Welles, “Harder Times at ‘The Times’”, New York Magazine, January 17, 1972.
The natural language interface was important to the project, and The Times likened it to consulting with a librarian. To the reporters who first used the system, however, something was lost in translation. This loss was best captured in Welles’ New York Magazine story.
As to the reporters/writers, it is hard to find a good word for the new venture among them. The system is being designed by technicians who aren’t newspapermen, and there is a great fear of loss of the serendipity factor. Going through clippings by hand often leads to unexpected results (one of my best articles came from an ad on the back of a clipping).
Chris Welles, “Harder Times at ‘The Times’”, New York Magazine, January 17, 1972.
Aside from the loss of serendipity and surprise from the physical representation of clippings as clippings, Welles further wondered whether it would even be possible to cross-index obscure recollections from reporters searching for specific events.
On April 30, 1969, the Information Bank’s principal advocates and visionaries at The Times, Dr. John Rothman, director of information services, and Mr. Robert November, director of the library services and information division, testified before a subcommittee of the House of Representatives. The hearings had to do with HR 8809, “a bill to amend Title IX of the National Defense Education Act of 1958 to provide for the establishment of a national information retrieval system for scientific and technical information.”[55] The meeting was largely informational, to help the subcommittee members better understand the challenges in creating such an ambitious system and its costs.
While the members of the subcommittee expressed admiration for what the Information Bank was attempting to do, there were questions. Some had to do with its anticipated high price, others with the underlying data. Here is an exchange between Hon. Roman Pucinski—the representative from Chicago presiding over the meeting—and Dr. Rothman. (For reference, this discussion took place between The Times’ announcement of the Information Bank in March of 1969 and its editorial in August of 1969 referencing the Apollo lunar landing.)
Mr. Pucinski: I want to make one observation, that the New York Times of 1969 is a far cry from that of 1920. I once came across an editorial in my research on the New York Times in 1920 in which it was suggested that Dr. Goddard be fired. In effect, the editorial called him an imbecile. It stated that anyone who would suggest a rocket can be launched out of the force of gravity and then propel itself into outer space and around the moon must be completely out of his mind, and any further expenditures on that kind of project is just a waste of taxpayers’ money.
That was a very fine editorial in the New York Times in 1920. I am very happy to know in 1969 there is considerably different thinking at the New York Times.
Dr. Rothman: May I respond to that? I have been asked by someone—and I don’t know whether he was trying to be funny or whether he was being incredibly naive—whether in this system we are going to go back and correct incorrect material.
The answer is no.
U.S. Congress, House, Committee on Education and Labor, National Science Research Data Processing and Information: Hearings before the General Subcommittee on Education, 91st Cong., 2nd sess., 1969.
“The speed and diversity of communications”—When AI Learns to Discover Stories
The New York Times imagined the Information Bank both helping reporters with historical research and bringing “some order to the chaos” of the volume of near real-time events. The archive was transformed into a computational object, indexed and categorized, growing and becoming richer with time. It would move out from The Times headquarters on 43rd Street to libraries and universities across the country. It was a bold vision for 1969.
With the development of social media, we now truly have access to a seemingly never-ending stream of commentary and observations. Topics emerge and fade, changing over time, from country to country, and from city to city. But in this stream are indications of news events, broken out by time and location. Even Google searches respond quickly enough to certain classes of new material that they can be used to help in (near) real time reporting—a Google News search for the term “police involved shooting” was the basis for independent projects[56][57] attempting to come up with more realistic counts of the number of people killed by police than those reported by the FBI or the Centers for Disease Control.
One class of application of AI to journalism involves story discovery from platforms like Twitter and Instagram—“social media events monitoring.” Tools like Dataminr, NewsWhip, and Reuters’s News Tracer all attempt to distill important events from streams of posts. The difficulty is in determining what is newsworthy and, given our current information ecosystem, what is likely true and not “fake.” News Tracer makes use of AI trained to “mimic human judgement” when poring over Twitter posts. A similar, earlier project, CityBeat[58] from Cornell Tech, attempted to identify newsworthy events from Instagram instead.
The CityBeat team admitted that “one of the main challenges while developing their project was in translating into algorithm the notion of a newsworthy event.” Part of the problem stemmed from reliance on the volume of posts in a geographic region as a predictor of an event taking place. They termed the problem “popularity bias”—whereas local news “focuses on exclusive, small-scale stories,” CityBeat would respond to events that a large group of people found worthy of sharing on Instagram. A pillow fight might outrank a house fire. To get around this, the team introduced a post-processing step to their algorithm that relied on human judgment—they used Amazon’s Mechanical Turk, asking human raters to decide which of the identified events were truly newsworthy. (Recall The Times editorial’s insistence that despite automation, interpretation will be up to the human researcher.) The CityBeat team wanted the consensus of human judgment to train its algorithm.
This step came with its own set of problems. Not only did the team find that different newsrooms had their own definitions of newsworthiness, but the introduction of “novice” editors via Mechanical Turk was also not well received. They were said to lack “journalistic intuition” and, in the end, the CityBeat team used these ratings simply as another signal they passed along to the newsroom as part of the tool’s output.
CityBeat began in 2013 as a tool for “hyper-local aggregation” of social media posts, semi-automating the way journalists scan social media for signals of breaking news. From the moment journalists took to social media—identifying what’s happening, deciding who to talk to, finding facts about an incident—there have been concerns about how to verify what they found. But since the lead-up to the 2016 election and its coordinated misinformation campaigns, everyone worries about attempts to mislead machine-sourced news.
Full Fact has developed a database of “fact checks” and routinely scans media for claims that match them. According to a recent Poynter article,[59] they have also created “‘robo-checks’ that automatically match statistical claims with official government data. In May of this year Full Fact announced that it—along with Chequeado, Africa Check and the Open Data Institute—won a Google.org grant” of $2 million to use “AI to partially automate fact-checking.”
Poynter quotes a Full Fact press release: “In three years, we hope our project will help policymakers understand how to responsibly tackle misinformation, help internet platforms make fair and informed decisions and help individual citizens know better who and what they can trust,” it says. The project plans to develop tools for non-English speaking communities, a notorious gap in the development of natural language applications.
Clearly, questions of selection (Which clippings are included in a file? Which events are “newsworthy”?) and inclusion (Who makes these decisions?) and verification (Did this really happen?) are as critical in 2019 as in 1969.
1972—All the news that [was once] fit to print
Part of the frustration with the Information Bank came from its indexing, which depended on a thesaurus of searchable terms. While extensive, the thesaurus did not include “newly coined jargon, slang, acronyms or technical terms… [T]he reporters especially were frustrated when they could see these words in abstracts but could not use them as search terms.”[60] In short, the Information Bank did not support free text searching. This made quality indexing all the more important. But, as reporters at The Times worried, this important “abstracting” was being done by “hired hands” outside the newsroom.
Another perhaps more serious critique of the Information Bank came not from its organization but from the nature of the archive itself. The morgue, after all, was made up of reports of events which were, in turn, the product of human editorial choices at The Times. Given the page constraint of the paper, not every event occurring on a given day could be included. If the Information Bank became a crucial part of research at libraries and universities it was feared that
“…The Times could easily become an even more powerful arbiter of history than it now is. For too many researchers, what isn’t in the Bank simply won’t have happened.”
Erik Sandberg-Diment, “All the News That’s Fit to Print Out”, New York Magazine, January 17, 1972.
Weizenbaum went farther (much farther) and declared the Information Bank was, in fact, destroying history.
The computer has thus begun to be an instrument for the destruction of history… The New York Times has already begun to build a “data bank” of curated events. Of course, only those data that are easily deliverable as by-products of typesetting machines are admissible to the system. As the number of subscribers to the system grows, and as they learn more and more to rely on “all the news that [was once] fit to print,” as the Times proudly identifies its editorial policy, how long will it be before what counts as fact is determined by the system, before all other knowledge, all memory, is simply declared illegitimate?
Joseph Weizenbaum, Computer Power and Human Reason, 1976.
The Information Bank did not have any serious competition in the “online newspaper business” until 1976 when the Boston Globe and MTL sought to create their own text editing and retrieval system.[61] Development on The Times project continued until the early 1980s, with Dr. Rothman even pitching an expansion into a collection of statistical databases. The Information Bank came to an official end in 1983, and the clippings that had been indexed and digitized were repackaged and sold to Mead—owner of the then-separate Lexis and Nexis databases.
As an enactment of Weizenbaum’s warning about the Information Bank’s threat to history, The Times itself then seemed to simply erase the Information Bank effort from its own history.
The Information Bank apparently lost the battle of the bottom line, experiencing losses in every year except one after it became a commercial venture, and consuming a reported $20 million investment… The Times then appeared to forget that this pioneering service, one of their most ambitious, most expensive, and most visionary outreach projects ever existed. A recent book published by the Times about its history over its entire life span included not a single word about the Information Bank or its fourteen years of very visible public activity…
Bourne, Charles P. and Hahn, Trudi B. A History of Online Information Services 1963-1976, 2003.
Conclusion
We rely on journalism to stay informed about “what’s on” in the world. But even in a simple statement like that, there is a lot to examine. What do we recognize as an event? How and who decides if it is newsworthy? Who sees our reporting and in what form? How do we trust the people narrating? Journalism is a complex cultural package. It gets even more complex when humanly-designed and deployed machines are added to the mix—machines which are capable of scaling journalistic work, extending our reporting capabilities, and optimizing tasks like distribution and personalization. They are powerful in image and deed, but these learning systems are ultimately the result of human effort and human decision making. In short, they are far from objective partners and arise from technical cultures with different values.
The slant story told here reminded us of this fact, casting a computer system as an “elaborator of human-scale acts.” We followed a news organization attempting to literally become a “newspaper of record” by extending its audience to include not only the reading public but machines as well. This story continues in the present day when the “human-scale acts” are often overshadowed by machine-scale acts, and when news becomes data involuntarily, every day. No external force called “AI” shapes journalism against its will. In newsrooms, reporters and developers react to the inevitable shifts that machine learning and AI enable, but adapting century-old working habits and organizational structures is always a rocky process.
As AI and machine learning take their place in the newsroom, we rehearse the experiences of countless other professions that have faced the effects of digitization and computation—from business to law and the humanities. Remembering that these learning systems are ultimately human inventions, we reckon that journalists need to continue to develop their technical skills to become tool makers rather than mere tool users. It is in this way that we control, and shape, our relationship to AI and can perhaps create an entirely new technical form, replete with our profession’s ethics and values.
Acknowledgements
We would like to thank Samuel Thielman, Charles Berret and Priyanjana Bengani for insightful comments and careful edits. Laura Kurgan, Michael Krisch and Juan Saldarriaga were invaluable inspirations as our story unfolded. Finally, we owe a significant debt to Jeff Roth from The New York Times’ morgue, who helped us research the significant technological innovations taken up by The Times and IBM in both their exhibition at the 1964 World’s Fair and the eventual design and deployment of the Information Bank.
Bibliography
Anderson, Christopher W. Apostles of Certainty. Oxford: Oxford University Press, 2018.
Anderson, Christopher W.; Bell, Emily & Shirky, Clay. “Post-Industrial Journalism: Adapting to the Present.” Tow Center for Digital Journalism (2012): 17.
Angwin, Julia; Larson, Jeff; Mattu, Surya & Kirchner, Lauren. “Machine Bias,” ProPublica, May 23, 2016.
Blankespoor, Elizabeth. “Capital market effects of media synthesis and dissemination: evidence from robo-journalism.” Review of Accounting Studies 23, no. 1 (March 2018): 1-36.
Bourne, Charles P. and Hahn, Trudi B. A History of Online Information Services, 1963-1976. Cambridge, M.A.: MIT Press, 2003.
Boylan, James R. Pulitzer’s School: Columbia University’s School of Journalism, 1903-2003. New York: Columbia University Press, 2003.
Breiman, Leo. “Statistical Modeling: The Two Cultures (with comments and a rejoinder by the author).” Statistical Science 16, no. 3 (2001): 199-231.
Brown, Pete. “Facebook struggles to promote ‘meaningful interactions’ for local publishers, data shows.” Columbia Journalism Review, 2018.
Chammah, Maurice & Hansen, Mark. “Policing the Future,” The Marshall Project, February 3, 2016.
Cohn, Nate. “Live Polls of the Midterm Elections,” The New York Times. September 6, 2018.
Diakopoulos, Nicholas. Automating the News: How Algorithms Are Rewriting the Media. Cambridge, MA: Harvard University Press, 2019.
Etim, Bassey. “The Times Sharply Increases Articles Open for Comments, Using Google’s Technology.” The New York Times, June 13, 2017.
Fisher, Sarah. “NYT dropping most social media trackers,” Axios Media Trends, November 19, 2019.
Funke, Daniel. “These fact-checkers won $2 million to implement AI in their newsrooms.” Poynter.org, May 10, 2019.
Lippmann, Walter. “Elusive Curves.” The Boston Globe, April 13, 1935.
———. Liberty and the News. Princeton, NJ: Princeton University Press, 2008.
Marconi, Francesco; Siegman, Alex & Machine Journalist. “The Future of Augmented Journalism: A guide for newsrooms in the age of smart machines.” Associated Press white paper, 2018.
Meyer, Philip. Paper Route: Finding My Way to Precision Journalism. Bloomington, IN: iUniverse, 2012. Pp. 192-201.
———. Precision Journalism: A Reporter’s Guide to Social Science Methods. Bloomington, IN: Indiana University Press, 1973.
Minsky, Marvin L. “Some Methods of Artificial Intelligence and Heuristic Programming.” In National Physical Laboratory. Mechanisation of Thought Processes I (London, 1959): 3-28.
Nerone, John. Media and Public Life. Cambridge, UK: Polity, 2015. P. 172.
Parrucci, Jennifer. “Metadata and the Tagging Process at The New York Times.” IPTC.org blog, March 14, 2018.
Pierce, Olga & Allen, Marshall. “Assessing surgeon-level risk of patient harm during elective surgery for public reporting,” ProPublica white paper, August 4, 2015.
Pulitzer, Joseph. “The College of Journalism,” North American Review 178, no. 570 (May 1904): 641-80.
Ribeiro, Marco Tulio; Singh, Sameer; & Guestrin, Carlos. “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier.” In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM (KDD ’16): 1135-1144.
Rogers, Simon; Schwabish, Jonathan & Bowers, Danielle. “Data Journalism in 2017.” Google News Lab report. September 2017.
Rose, Tony; Stevenson, Mark & Whitehead, Miles. “The Reuters Corpus Volume 1 – from Yesterday’s News to Tomorrow’s Language Resources.” Reuters Technology Innovation Group report, 2002.
Schudson, Michael. “Political observatories, databases & news in the emerging ecology of public information.” Daedalus 139, no. 2 (Spring 2010): 100-109.
Schwartz, Raz; Naaman, Mor & Teodoro, Rannie. “Editorial Algorithms: Using Social Media to Discover and Report Local News.” The Ninth International AAAI Conference on Weblogs and Social Media (ICWSM, May 2015).
Spangher, Alexander. “How Does This Article Make You Feel?” Times Open, October 31, 2018.
Stein, Jesse Adams. “Eames Overload and the Mystification Machine: The IBM Pavilion at the 1964 New York World’s Fair.” Seizure 2 (2011).
Stray, Jonathan. “Making Artificial Intelligence Work for Investigative Journalism.” Digital Journalism, (January 2019).
Student. “The Probable Error of a Mean,” Biometrika 6, no. 1 (March 1908): 1–25.
Wang, Shan. “BuzzFeed’s strategy for getting content to do well on all platforms? Adaptation and a lot of A/B testing,” Nieman Lab, September 15, 2017.
Wang, Sheng-Yu; Wang, Oliver; Owens, Andrew; Zhang, Richard; Efros, Alexei A. “Detecting Photoshopped Faces by Scripting Photoshop.” CoRR abs/1906.05856 (2019).
Weizenbaum, Joseph. Computer Power and Human Reason. New York: W.H. Freeman & Co., 1976.
Weizenbaum, Joseph. “ELIZA: A Computer Program For the Study of Natural Language Communication Between Man And Machine.” Communications of the ACM 9, no. 1 (January 1966): 36-45.
Zellers, Rowan; Holtzman, Ari; Rashkin, Hannah; Bisk, Yonatan; Farhadi, Ali; Roesner, Franziska & Choi, Yejin. “Defending Against Neural Fake News.” CoRR abs/1905.12616 (2019).
Zhang, Amy X.; Ranganathan, Aditya; Metz, Sarah Emlen; Appling, Scott; Sehat, Connie Moon; Gilmore, Norman; Adams, Nick B.; Vincent, Emmanuel; Lee, Jennifer 8.; Robbins, Martin; Bice, Ed; Hawke, Sandro; Karger, David & Mina, An Xiao. “A Structured Response to Misinformation: Defining and Annotating Credibility Indicators in News Articles,” in: WWW ’18 Companion Proceedings of The Web Conference 2018, 603-612.
Citations
[1] Christopher W. Anderson, Emily Bell & Clay Shirky, “Post-Industrial Journalism: Adapting to the Present.” Tow Center For Digital Journalism (2012), 17.
[2] Ibid.
[3] Boylan, James R. Pulitzer’s School: Columbia University’s School of Journalism, 1903-2003. (New York: Columbia University Press, 2003), 56-59.
[4] Michael Schudson, “Political observatories, databases & news in the emerging ecology of public information.” Daedalus 139, no. 2 (Spring 2010), 100-109.
[5] John Nerone, Media and Public Life (Cambridge, UK: Polity, 2015), 172.
[6] Walter Lippmann, Liberty and the News (Princeton, NJ: Princeton University Press, 2008), 45.
[7] Walter Lippmann, “Elusive Curves.” The Boston Globe, April 13, 1935, 17.
[8] Philip Meyer, Precision Journalism: A Reporter’s Guide to Social Science Methods. (Bloomington, IN: Indiana University Press, 1973).
[9] Philip Meyer. Paper Route: Finding My Way to Precision Journalism. (Bloomington, IN: iUniverse, 2012), 192-201.
[10] Christopher W. Anderson, Apostles of Certainty. (Oxford: Oxford University Press, 2018), 99-100.
[11] Jesse Adams Stein, “Eames Overload and the Mystification Machine: The IBM Pavilion at the 1964 New York World’s Fair”. Seizure 2 (2011).
[12] The Guardian: “The Counted” database. URL: https://www.theguardian.com/us-news/series/counted-us-police-killings
[13] Sarah Stillman, “When Deportation is a Death Sentence,” The New Yorker, January 8, 2018. URL: https://www.newyorker.com/magazine/2018/01/15/when-deportation-is-a-death-sentence
[14] https://qz.ai/
[15] Simon Rogers et al. “Data Journalism in 2017.” Google News Lab report. September 2017. URL: https://newslab.withgoogle.com/assets/docs/data-journalism-in-2017.pdf
[16] Leo Breiman, “Statistical Modeling: The Two Cultures (with comments and a rejoinder by the author).” Statistical Science 16, no. 3 (2001): 199-231.
[17] Kevin Quealy & Josh Katz, “Nike Says Its $250 Running Shoes Will Make You Run Much Faster. What if That’s Actually True?”, The New York Times, July 18, 2018. URL: https://www.nytimes.com/interactive/2018/07/18/upshot/nike-vaporfly-shoe-strava.html
[18] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin, “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier.” In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM (KDD ’16): 1135-1144.
[19] Olga Pierce & Marshall Allen, “Assessing surgeon-level risk of patient harm during elective surgery for public reporting,” ProPublica white paper, August 4, 2015. URL: https://static.propublica.org/projects/patient-safety/methodology/surgeon-level-risk-methodology.pdf
[20] BuzzFeed’s codebase for “Spies in the Skies” https://buzzfeednews.github.io/2017-08-spy-plane-finder/
[21] The Marshall Project’s data https://github.com/themarshallproject/city-crime and FiveThirtyEight’s codebase https://github.com/fivethirtyeight/data/tree/master/subreddit-algebra
[22] Nate Cohn, “Live Polls of the Midterm Elections,” The New York Times. September 6, 2018. URL: https://www.nytimes.com/2018/09/06/upshot/midterms-2018-polls-live.html
[23] Jonathan Stray, “Making Artificial Intelligence Work for Investigative Journalism.” Digital Journalism, (July 2019).
[24] SRCCON 2018 session: Regression in the newsroom: When to use it and thinking about best practices. URL: https://2018.srccon.org/sessions/#proposal-stats-newsroom
[25] “Inside IBM’s World’s Fair ‘Egg.’” Popular Science, July 1964.
[26] New York World’s Fair, IBM Computer Application Area memo. URL: https://www.worldsfairphotos.com/nywf64/documents/ibm-computer-applications-area.pdf
[27] “The Fair? Sure, We’re There.” Times Talk, April 1964.
[28] Ibid. 8.
[29] Jennifer Parrucci, “Metadata and the Tagging Process at The New York Times.” IPTC.org blog, March 14, 2018. URL: https://iptc.org/news/metadata-and-the-tagging-process-at-the-new-york-times/
[30] Shan Wang, “BuzzFeed’s strategy for getting content to do well on all platforms? Adaptation and a lot of A/B testing,” Nieman Lab, September 15, 2017. URL: https://www.niemanlab.org/2017/09/buzzfeeds-strategy-for-getting-content-to-do-well-on-all-platforms-adaptation-and-a-lot-of-ab-testing/
[31] Sarah Fisher, “NYT dropping most social media trackers,” Axios Media Trends, November 19, 2019. URL: https://www.axios.com/newsletters/axios-media-trends-a189a865-c7ed-4a0a-86ca-7182692eb74f.html?chunk=3&utm_term=twsocialshare#story3
[32] Alexander Spangher, “How Does This Article Make You Feel?” Times Open, October 31, 2018. URL: https://open.nytimes.com/how-does-this-article-make-you-feel-4684e5e9c47
[33] Pete Brown, “Facebook struggles to promote ‘meaningful interactions’ for local publishers, data shows.” Columbia Journalism Review, 2018. URL: https://www.cjr.org/tow_center/facebook-local-news.php
[34] Bassey Etim, “The Times Sharply Increases Articles Open for Comments, Using Google’s Technology.” The New York Times, June 13, 2017. https://www.nytimes.com/2017/06/13/insider/have-a-comment-leave-a-comment.html?module=inline
[35] Tony Rose et al. “The Reuters Corpus Volume 1 – from Yesterday’s News to Tomorrow’s Language Resources.” Reuters Technology Innovation Group report, 2002. URL: https://pdfs.semanticscholar.org/3e4b/dc7f8904c58f8fce199389299ec1ed8e1226.pdf
[36] GenderMeme. URL: https://gendermeme.org/
[37] Stanford Cable TV News Analyzer. URL: https://esper-tv.stanford.edu/
[38] Francesco Marconi et al. “The Future of Augmented Journalism: A guide for newsrooms in the age of smart machines.” Associated Press white paper, 2018.
[39] Elizabeth Blankespoor. “Capital market effects of media synthesis and dissemination: evidence from robo-journalism.” Review of Accounting Studies 23, no. 1 (March 2018): 1-36.
[40] Amy X. Zhang et al. “A Structured Response to Misinformation: Defining and Annotating Credibility Indicators in News Articles,” in WWW ’18 Companion Proceedings of the The Web Conference 2018, 603-612.
[41] Sheng-Yu Wang et al. “Detecting Photoshopped Faces by Scripting Photoshop.” CoRR abs/1906.05856.
[42] Rowan Zellers et al. “Defending Against Neural Fake News.” CoRR abs/1905.12616 (2019).
[43] https://github.com/jeffshrager/elizagen/tree/master/eliza/version
[44] Marvin L. Minsky, “Some Methods of Artificial Intelligence and Heuristic Programming.” In National Physical Laboratory. Mechanisation of Thought Processes I (London, 1959): 3-28.
[45] Joseph Weizenbaum, “ELIZA: A Computer Program For the Study of Natural Language Communication Between Man And Machine.” Communications of the ACM 9, no. 1 (January 1966): 36-45.
[46] Nicholas Diakopoulos, “Algorithmic Accountability Reporting,” in: Automating the News: How Algorithms Are Rewriting the Media (Cambridge, M.A.: Harvard University Press, 2019).
[47] http://algorithmtips.org
[48] Julia Angwin, Jeff Larson, Surya Mattu & Lauren Kirchner, “Machine Bias,” ProPublica, May 23, 2016. URL: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
[49] Maurice Chammah & Mark Hansen, “Policing the Future,” The Marshall Project, February 3, 2016. URL: https://www.themarshallproject.org/2016/02/03/policing-the-future
[50] PredPol website URL: https://www.predpol.com/
[51] Risk Terrain Modeling website URL: http://www.riskterrainmodeling.com/
[52] Charles P. Bourne & Trudi B. Hahn, A History of Online Information Services, 1963-1976 (Cambridge, MA: MIT Press, 2003).
[53] Ibid.
[54] Ibid.
[55] Hearings Before the General Subcommittee on Education, Ninety-First Congress, First Session on H.R. 8809, April 29 and 30, 1969. URL: https://archive.org/details/ERIC_ED060893/page/n371
[56] The Washington Post’s GitHub repository for their “Fatal Force” project in which they have been counting the number of fatal police shootings in the US since 2015 https://github.com/washingtonpost/data-police-shootings
[57] The Guardian: “The Counted” (captures police shootings in 2015 and 2016 only). https://www.theguardian.com/us-news/ng-interactive/2015/jun/01/the-counted-police-killings-us-database
[58] Raz Schwartz, Mor Naaman & Rannie Teodoro. “Editorial Algorithms: Using Social Media to Discover and Report Local News.” The Ninth International AAAI Conference on Weblogs and Social Media (ICWSM, May 2015).
[59] Daniel Funke, “These fact-checkers won $2 million to implement AI in their newsrooms.” Poynter.org, May 10, 2019. URL: https://www.poynter.org/fact-checking/2019/these-fact-checkers-won-2-million-to-implement-ai-in-their-newsrooms/
[60] Ibid. 39.
[61] Ibid. 39.
[62] Ibid.
Bernat Ivancsics and Mark Hansen study digital journalism at Columbia Journalism School. Bernat Ivancsics is a PhD candidate and focuses on the emergent trends of computational journalism. Mark Hansen is a Professor, and Director of the Brown Institute for Media Innovation.