I get it: I’m one of the last people you’d expect to hear warning about the danger of conspiracies and lies. I’ve built a career on pushing the limits of propriety and good taste. I portrayed Borat, the first fake news journalist, along with satirical characters such as Ali G, a wannabe gangster, and Bruno, a gay fashion reporter from Austria. Some critics have said my comedy risks reinforcing old racial and religious stereotypes.
I admit that most of my comedy over the years has been pretty juvenile. However, when Borat was able to get an entire bar in Arizona to sing “throw the Jew down the well,” it revealed people’s indifference to anti-Semitism. When, as Bruno, I started kissing a man in a cage fight in Arkansas and nearly started a riot, it showed the violent potential of homophobia. And when, disguised as an ultra-woke developer, I proposed building a mosque in one rural community, prompting a resident to proudly admit, “I am racist, against Muslims,” it showed a wide acceptance of Islamophobia.
The ugliness my jokes help reveal is why I’m so worried about our pluralistic democracies. Demagogues appeal to our worst instincts. Conspiracy theories once confined to the fringe are going mainstream, fueled in part by President Donald Trump, who has spread such paranoid lies more than 1,700 times to his 67 million Twitter followers. It’s as if the Age of Reason – the era of evidential argument – is ending, and now knowledge is delegitimised and scientific consensus is dismissed. Democracy, which depends on shared truths, is in retreat, and autocracy, which thrives on shared lies, is on the march. Hate crimes are surging, as are murderous attacks on religious and ethnic minorities.
All this hate and violence actually has something in common: It’s being facilitated by a handful of Internet companies that amount to the greatest propaganda machine in history.
Facebook, YouTube, Twitter and other social media platforms reach billions of people. The algorithms these platforms depend on deliberately amplify content that keeps users engaged – stories that appeal to our baser instincts and trigger outrage and fear. That’s why fake news outperforms real news on social media; studies show that lies spread faster than truth.
On the Internet, everything can appear equally legitimate. Breitbart resembles the BBC, and the rantings of a lunatic seem as credible as the findings of a Nobel Prize winner. We have lost a shared sense of the basic facts upon which democracy depends.
When I, as Ali G, asked the astronaut Buzz Aldrin, “What woz it like to walk on de sun?” the joke worked, because we, the audience, shared the same facts. If you believe the moon landing was a hoax, the joke was not funny.
When Borat got that bar in Arizona to agree that “Jews control everybody’s money and never give it back,” the joke worked because the rest of us knew that the depiction of Jews as miserly is a conspiracy theory originating in the Middle Ages.
Social media platforms make it easier for people who share the same false premises to find one another, and then the technology acts as an accelerant for toxic thinking. When conspiracies take hold, it’s easier for hate groups to recruit, easier for foreign intelligence agencies to interfere in our elections and easier for a country like Myanmar to commit genocide against the Rohingya.
Yes, social media companies have taken some steps to reduce hate and conspiracies on their platforms. Yet these steps have been mostly superficial, and the next 12 months could be pivotal: British voters will go to the polls next month while online conspiracists promote the despicable “great replacement” theory, which holds that white Christians are being deliberately replaced by Muslim immigrants. Americans will vote for president while trolls and bots perpetuate the disgusting lie of a “Hispanic invasion.” And after years of YouTube videos calling climate change a “hoax,” the United States is on track, a year from now, to formally withdraw from the Paris agreement.
Unfortunately, the executives of these platforms don’t appear interested in a close look at how they’re spreading hate, conspiracies and lies. Look at the speech Facebook founder and chief executive Mark Zuckerberg delivered last month that warned against new laws and regulations on companies like his.
Zuckerberg tried to portray the issue as one involving “choices” around “free expression.” But freedom of speech is not freedom of reach. Facebook alone already counts about a third of the world’s population among its users. Social media platforms should not give bigots and paedophiles a free platform to amplify their views and target victims.
Zuckerberg claimed that new limits on social media would “pull back on free expression.” This is utter nonsense. The First Amendment says that “Congress shall make no law” abridging freedom of speech, but this does not apply to private businesses. If a neo-Nazi comes goose-stepping into a restaurant and starts threatening other customers and saying he wants to kill Jews, would the restaurant owner be required to serve him an elegant eight-course meal? Of course not. The restaurant owner has every legal right, and, indeed, a moral obligation, to kick the Nazi out. So do Internet companies.
Zuckerberg seemed to equate regulation of companies like his to the actions of “the most repressive societies.” This, from one of the six people who run the companies that decide what information so much of the world sees: Zuckerberg at Facebook; Sundar Pichai at Google; Larry Page and Sergey Brin at Google’s parent company, Alphabet; Brin’s ex-sister-in-law, Susan Wojcicki, at YouTube; and Jack Dorsey at Twitter. These super-rich “Silicon Six” care more about boosting their share price than about protecting democracy. This is ideological imperialism – six unelected individuals in Silicon Valley imposing their vision on the rest of the world, unaccountable to any government and acting like they’re above the reach of law. Surely, instead of letting the Silicon Six decide the fate of the world order, our democratically elected representatives should have at least some say.
Zuckerberg speaks of welcoming a “diversity of ideas,” and last year, he gave us an example. He said he found posts denying the Holocaust “deeply offensive,” but he didn’t think Facebook should take them down “because I think there are things that different people get wrong.” This is madness. The Holocaust is a historical fact, and those who deny it aim to encourage another one. There’s no benefit in pretending that “the Holocaust is a hoax” is simply a “thing” that “different people get wrong.” Zuckerberg says that “people should decide what is credible, not tech companies.” But two-thirds of millennials say they haven’t even heard of Auschwitz. How are they supposed to know what’s “credible”? How are they supposed to know that the lie is a lie?
When it comes to removing content, Zuckerberg asked, “where do you draw the line?” Yes, that can be difficult, but here’s what he’s really saying: Removing lies and conspiracies is just too expensive.
Facebook, Google, and Twitter are unthinkably rich, and they have the best engineers in the world. They could fix these problems if they wanted to. Twitter could deploy an algorithm to remove more white supremacist hate speech, but they reportedly haven’t because it would eject some very prominent politicians. Facebook could hire enough monitors to actually monitor, work closely with groups such as the Anti-Defamation League and the NAACP and purge deliberate lies from their platforms.
But they won’t, because their entire business model relies on generating more engagement, and nothing generates more engagement than lies, fear and outrage.
These companies pretend they’re something bigger, or nobler, but what they really are is the largest publishers in history – after all, they make their money on advertising, just like other publishers. They should abide by basic standards and practices just like the ones that apply to newspapers, magazines, television and movies. I’ve had scenes in my movies cut or truncated to abide by those standards. Surely companies that publish material to billions of people should have to abide by basic standards just like film and television studios do.
Zuckerberg said social media companies should “live up to their responsibilities,” but he’s totally silent about what should happen when they don’t. By now, it’s pretty clear that they cannot be trusted to regulate themselves. In other industries, you can be sued for the harm you cause: Publishers can be sued for libel; people can be sued for defamation. I’ve been sued many times. But social media companies are almost completely protected from liability for the content their users post – no matter how indecent – by Section 230 of, get ready for it, the Communications Decency Act.
That immunity has warped their whole worldview. Take political ads. Fortunately, Twitter finally banned them, and Google says it will make changes, too. But if you pay Facebook, it will run any “political” ad you want, even if it’s a lie. It’ll even help you micro-target those lies to users for maximum effect. Under this twisted logic, if Facebook were around in the 1930s, it would have allowed Adolf Hitler to post 30-second ads on his “solution” to the “Jewish problem.” Here’s a good way for Facebook to “live up to” its responsibilities: Start fact-checking political ads before running them, stop micro-targeted lies immediately, and when ads are false, don’t publish them.
Section 230 was amended last year so that tech companies can be held responsible for paedophiles who use their sites to target children. Let’s also hold them responsible for users who advocate for the mass murder of children because of their race or religion. And maybe fines are not enough. Maybe it’s time for Congress to tell Zuckerberg and his fellow CEOs: You already allowed one foreign power to interfere in US elections; you already facilitated one genocide; do it again and you go to prison.
In the end, we have to decide what kind of world we want. Zuckerberg claims his main goal is to “uphold as wide a definition of freedom of expression as possible.” Yet our freedoms are not only an end in themselves, but they’re also a means to another end – to our right to life, liberty and the pursuit of happiness. And today these rights are threatened by hate, conspiracies and lies.
A pluralistic democratic society should make sure that people are not targeted, not harassed and not murdered because of who they are, where they come from, who they love or how they pray. If we do that – if we prioritize truth over lies, tolerance over prejudice, empathy over indifference and experts over ignoramuses – maybe we have a chance of stopping the greatest propaganda machine in history. We can save democracy. We can still have a place for free speech and free expression.
And, most important, my jokes will still work.
© The Washington Post 2019
Messenger API to support Instagram
Today, we are announcing updates to the Messenger API to support Instagram messaging, giving businesses new tools to manage their customer communications on Instagram at scale. The new API features enable businesses to integrate Instagram messaging with their preferred business applications and workflows, helping drive more meaningful conversations, increase customer satisfaction and grow sales. The updated API is currently in beta with a limited number of developer partners and businesses.
Instagram is a place for emerging culture and trend creation, and discovering new brands is a valuable part of this experience. Messaging plays a central role in helping people connect with brands in personal ways through story replies, direct messages, and mentions. Over the last year, total daily conversations between people and businesses on Messenger and Instagram grew over 40 percent. For businesses, the opportunity to drive sales and improve customer satisfaction by having meaningful interactions with people on Instagram messaging is huge.
“Instagram is a platform for community building, and we’ve long approached it as a way for us to connect with our customers in a place where they are already spending a lot of their time. With the newly launched Messenger API support for Instagram, we are now able to increase efficiency, drive even stronger user engagement, and easily maintain a two-way dialogue with our followers. This technology has helped us create a new pipeline for best-in-class service and allows for a direct line of communication that’s fast and easy for both customers and our internal team.” – Michael Kors Marketing
Works with your tools and workflows
Businesses want to use a single platform to respond to messages on multiple channels. The Messenger API now allows businesses to manage messages initiated by people throughout their Instagram presence, including Profile, Shops, and Stories. It will be possible for businesses to use information from core business systems right alongside Instagram messaging, enabling more personal conversations that drive better business outcomes. For example, businesses integrating with a CRM system can give agents a holistic view of customer loyalty. Furthermore, existing investments in people, tools, and workflows to manage other communication channels can be leveraged and extended to support customers on Instagram. This update will also bring Facebook Shops messaging features to the Messenger API so businesses can create more engaging and connected customer experiences.
To get started, businesses can easily work with developers to integrate Instagram messaging with their existing tools and systems.
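As a rough illustration of the integration described above, the sketch below builds a reply to a customer who messaged a brand's Instagram account. It is a hypothetical example modeled on the publicly documented Messenger Send API: the payload shape, the version string, the Page ID, and the notion of an Instagram-scoped user ID (IGSID) are assumptions about how the beta works, not confirmed details of it.

```python
import json

GRAPH_API = "https://graph.facebook.com/v9.0"  # version string is an assumption

def build_reply_payload(igsid, text):
    """Build a Send API-style payload addressed to an Instagram-scoped
    user ID (IGSID). Mirrors the Messenger Send API payload shape."""
    return {
        "recipient": {"id": igsid},
        "message": {"text": text},
    }

def send_url(page_id, page_access_token):
    """URL for the Send API call; assumes the Facebook Page is linked
    to the business's Instagram account."""
    return f"{GRAPH_API}/{page_id}/messages?access_token={page_access_token}"

# Example: reply to a customer inquiry that arrived via Instagram messaging.
payload = build_reply_payload(
    "1234567890", "Thanks for reaching out! An agent will follow up shortly.")
print(json.dumps(payload))
```

In practice, a business's integration would POST this payload to the Send URL from its existing CRM or helpdesk tooling, which is exactly the "single platform for multiple channels" workflow the API update is meant to enable.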
Increases responsiveness and customer satisfaction
Customers value responsiveness when they have questions or need help from businesses. For the first time on Instagram, we’re introducing new features that will allow businesses to respond immediately to common inquiries using automation, while ensuring people are seamlessly connected to live support for more complex inquiries. One of our alpha partners, Clarabridge, reported their client brands had improved their response rate by up to 55% since being able to manage Instagram DMs through their platform.
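The routing idea in the paragraph above, answering common inquiries instantly while handing complex ones to a live agent, can be sketched in a few lines. This is a minimal illustration of the concept, not the actual automation feature; the keywords and replies are invented for the example.

```python
# Canned answers for common inquiries (illustrative content only).
FAQ_REPLIES = {
    "hours": "We're open 9am-6pm, Monday through Saturday.",
    "shipping": "Standard shipping takes 3-5 business days.",
    "returns": "Returns are free within 30 days of delivery.",
}

def route_message(text):
    """Return (reply, handled): an instant automated answer when a known
    keyword matches, otherwise a handoff message for a live support agent."""
    lowered = text.lower()
    for keyword, reply in FAQ_REPLIES.items():
        if keyword in lowered:
            return reply, True
    return "Connecting you with a member of our support team.", False
```

A real integration would make the `handled=False` branch open a ticket or assign the conversation to an agent queue, so the customer never hits a dead end.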
The updates to the Messenger API are part of our overall effort to make it easier for businesses to reach their customers across our family of apps.
Messenger API support for Instagram is currently in beta, with a focus on providing high-quality, personalized messaging experiences on Instagram while increasing business efficiency. Adidas, Amaro, Glossier, H&M, MagazineLuiza, Michael Kors, Nars, Sephora and TechStyle Fashion Group, among other consumer brands, are already participating in the beta program. We are excited about the early results some businesses saw during alpha testing, including higher response rates, reduced resolution times and deeper customer insights as a result of integrations. We’re also testing with a limited number of developer partners and are delighted at the initial response.
“On average, brands have saved at least four hours per agent per week by streamlining social community management within the Khoros platform, plus shortened response rates during business hours — which is crucial to meet as customers who message brands on social media expect a quick reply.” – Khoros
Required migration to token-based access for User Picture and oEmbed endpoints
As part of our Graph API 8.0 release, we announced breaking changes to how developers can access certain permissions and APIs. Starting October 24, 2020, developers need to migrate to token-based access in order to access User Picture and oEmbed endpoints for Facebook and Instagram.
This post outlines these changes and the necessary steps developers need to take to avoid disruption to their app.
Facebook will now require client or app access tokens to access a user’s profile picture. Beginning on October 24, 2020, queries for profile pictures made against user IDs without an access token will return a generic silhouette rather than a profile picture. This is a breaking change for partners. While client or app tokens will be required for user ID queries, they will remain a best practice (though not required) for ASID queries for the time being.
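The before/after of this migration can be sketched as a URL-building helper. The `/{user-id}/picture` path is the documented Graph API edge and `APP_ID|APP_SECRET` is the standard app-access-token form, but treat the version string and placeholder values here as illustrative.

```python
GRAPH = "https://graph.facebook.com"

def picture_url(user_id, access_token=None, version="v8.0"):
    """Build a profile-picture request URL. After October 24, 2020,
    a user ID query without an access token returns a generic
    silhouette instead of the actual profile picture."""
    url = f"{GRAPH}/{version}/{user_id}/picture"
    if access_token:
        url += f"?access_token={access_token}"
    return url

# Legacy style (returns a silhouette after the cutoff):
legacy = picture_url("4")
# Migrated style, authenticated with a client or app access token:
migrated = picture_url("4", access_token="APP_ID|APP_SECRET")
```

The only change apps need to make is appending a valid token to requests they already issue; the response format is otherwise unchanged.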
Facebook and Instagram oEmbed
We are also deprecating the existing legacy oEmbed endpoints for Facebook and Instagram on October 24, 2020, replacing them with new Graph API endpoints. Developers who continue to call the existing oEmbed API after that date will have their requests fail and receive an error message instead. These new endpoints will require client or app access tokens or ASID queries.
Ready to make the switch? You can read more about these changes in our developer documentation for User Picture, and visit our changelog for Facebook oEmbed and Instagram oEmbed for details on how to start calling these Graph API endpoints.
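A migrated oEmbed call looks roughly like the sketch below. The endpoint names (`oembed_post` for Facebook posts, `instagram_oembed` for Instagram) follow the Graph API v8.0 changelog, but confirm the exact paths and parameters against the current documentation before relying on them.

```python
from urllib.parse import urlencode

GRAPH = "https://graph.facebook.com/v8.0"

def oembed_request_url(content_url, access_token, endpoint="oembed_post"):
    """Build a token-authenticated Graph API oEmbed request URL,
    replacing the deprecated legacy oEmbed endpoints."""
    query = urlencode({"url": content_url, "access_token": access_token})
    return f"{GRAPH}/{endpoint}?{query}"

# Facebook post embed (placeholder URL and token for illustration):
fb_embed = oembed_request_url(
    "https://www.facebook.com/somepage/posts/123", "APP_ID|APP_SECRET")
# Instagram post embed uses a separate endpoint:
ig_embed = oembed_request_url(
    "https://www.instagram.com/p/abc123/", "APP_ID|APP_SECRET",
    endpoint="instagram_oembed")
```

The key difference from the legacy API is the mandatory `access_token` query parameter; an unauthenticated request will fail with an error after the deprecation date.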
PyTorch Tutorials Refresh – Behind the Scenes
Hi, I’m Jessica, a Developer Advocate on the Facebook Open Source team. In this blog, I’ll take you behind the scenes to show you how Facebook supports and sustains our open source products – specifically PyTorch, an open source deep learning library. With every new release version, PyTorch pushes out new features, updates existing ones, and adds documentation and tutorials that cover how to implement these new changes.
On May 5, 2020, PyTorch released an improved Tutorials homepage with new content and a fresh usability experience for the community (see the Twitter thread). We introduced keyword-based search tags and a new recipes format (bite-sized, ready-to-deploy examples), and more clearly highlighted helpful resources, resulting in the fresh homepage style you see today.
As the framework grows with each release, we’re continuously collaborating with our community to not only create more learning content, but also make learning the content easier.
The tutorials refresh project focused on re-envisioning the learning experience by updating the UX and updating the learning content itself.
Our 3 major goals for the refresh were:
- Reduce blocks of text and make it easy for users to find important resources (e.g. PyTorch Cheat Sheet, New to PyTorch tutorials)
- Improve discoverability of relevant tutorials and surface more information for users to know about the available tutorial content
- Create content that allows users to quickly learn and deploy commonly used code snippets
And we addressed these goals by:
- Adding callout blocks with direct links to highlight important resources such as the beginner tutorial, the PyTorch Cheat Sheet and new recipes
- Adding filterable tags to help users easily find relevant tutorials; and formatting the tutorials cards with summaries so users know what to expect without having to click in
- Creating a new learning format, Recipes, and 15 brand new recipes covering some of the most popular PyTorch topics such as interpretability and quantization as well as basics such as how to load data in PyTorch
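To give a flavor of the recipe format described above, here is a bite-sized, runnable example of one of the topics mentioned: loading data in PyTorch with `Dataset` and `DataLoader`. This is an illustrative toy dataset written for this post, not one of the published recipes.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class SquaresDataset(Dataset):
    """A toy dataset pairing each integer with its square."""
    def __init__(self, n):
        self.x = torch.arange(n, dtype=torch.float32)

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        return self.x[idx], self.x[idx] ** 2

# DataLoader handles batching (and, optionally, shuffling and workers).
loader = DataLoader(SquaresDataset(8), batch_size=4, shuffle=False)
for inputs, targets in loader:
    print(inputs.shape, targets.shape)  # each batch holds 4 samples
```

A recipe like this is deliberately self-contained and immediately actionable, in contrast to a full-length tutorial that walks through an end-to-end training pipeline.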
Why We Made These Changes
Now, what drove these changes? These efforts were driven by community feedback, which reached us through two channels:
- UX Research study – Earlier in 2020, we conducted a UX research study of our website in collaboration with the Facebook UX Research team to understand how our developer community is using the website and evaluate ways we can improve it to better meet their needs
- In-person events and online feedback – The team gathered community feedback about existing tutorials to help identify learning gaps.
We used these channels of input to fuel the re-envisioning of our learning experience.
Rethinking the Learning Experience
Given the feedback from the UX Research study and the in-person workshop, we went back and rethought the current learning experience.
We settled on three levels of learning content:
- Level 1: API docs. These already exist, and they contain code snippets that give an easily understandable (and reproducible) example of how to implement a particular API.
- Level 2: Recipes, the missing puzzle piece. We realized we were missing something in between: content that is short, informative and actionable. That’s how recipes were born. Recipes are bite-sized, actionable examples of how to use specific PyTorch features, different from our full-length tutorials.
- Level 3: Tutorials. These ideally provide an end-to-end experience that shows users how to take data, train a model and deploy it into a production setting using PyTorch. They exist, but needed to be pruned of outdated content and cleaned up to better fit this model.
What Was the Process
This took a large team effort, and it was more of a marathon than a sprint. Let’s look at the process:
Timeline of the process:
Overall, the project took about 6 months, not including the UX research and prior feedback collection time. It started off with the kickoff discussion to align on the changes. We assessed the existing tutorials, pruned outdated content and decided on new recipe topics and assigned authors. In the meantime, marketing and documentation engineers collaborated with our web design team on the upcoming UI needs, created mocks to preview with the rest of the team and built out the infrastructure.
For logistics, we created a roadmap and set milestones for the team of authors. We held weekly standup meetings, and the team bounced ideas around in chat. The changes were all made in a staging branch on GitHub, which allowed us to create previews of the final build. Next came the build process. Many of the recipe authors were first-time content creators, so we held a live onboarding session covering teaching mindset, writing with an active voice, outlining, and code standards and requirements; all of this was captured in a new set of content creation documentation.
The bulk of the process was spent in building out the content, copy editing and implementing the UI experience.
With the product out the door, we took some time to perform a team retrospective, asking: What went well? What went poorly? What can we do better next time? In addition, we continue to gather ongoing feedback from the community through GitHub issues.
Moving forward, we are brainstorming and forming a longer-term plan for the PyTorch learning experience as it relates to docs and tutorials.
Ways to Improve
Looking back on ways we could have improved:
- Timeline – Our timeline ended up longer than anticipated because the project was coupled with a version release, and team members were serving double duty, working on release content as well as tutorials refresh content. As the release approached, we took a step back and realized we needed more time to put out a more polished product.
- Testing – In software development, if there is an impending deadline, typically testing is the first thing to get squished; however, more focused testing will always save time in the bigger picture. For us, we would always welcome more time for more CI tests of the tutorial build, as well as beta tests of the user experience. Both of these are ongoing works in progress, as we continue to improve the tutorials experience overall.
So what’s next? We understand that this was just one change in a larger landscape of the overall PyTorch learning experience, but we are excited to keep improving this experience for you, our dedicated PyTorch user.
We would like to hear from you about your experience in the new tutorials. Found a tutorial you loved? Tweet about it and tag us (@PyTorch). Ran into an issue you can help fix? File an issue in https://github.com/pytorch/tutorials. We are excited to continue building the future of machine learning with you!