Social media is a lifeline for desperate Indians. And a threat for Narendra Modi



By Diksha Madhok, CNN Business

Updated 8:37 PM ET, Sat May 1, 2021

On most days, Network Capital, a business networking group with over 67,000 members on Facebook (FB), focuses on providing its community with information on job vacancies, higher education, and careers.

Recently, however, the group has been flooded with posts from users looking for hospital beds, oxygen and medicines, as a devastating second wave of Covid-19 sweeps India. Critics of Prime Minister Narendra Modi say his handling of the pandemic is to blame, pointing to decisions to allow mass gatherings of people.

Members of the Facebook group, mostly Indian professionals, have responded swiftly to appeals for help, at times sharing extensive Google (GOOGL) spreadsheets with details of medical suppliers and volunteer organizations.

“In such times of political polarization, it leaves you with a lot of hope when you see people come together like this,” said Utkarsh Amitabh, a former Microsoft (MSFT) employee who started Network Capital on Facebook in 2016.

He isn’t the only one organizing relief efforts on social media.

Over the last couple of weeks, as India’s Covid-19 crisis has deepened, American social media giants have become platforms of hope for millions of people. The world’s second most populous country has recorded over 18 million cases since the pandemic began — and its health care infrastructure has crumbled under pressure, with hospitals running out of oxygen and medicines.

Family members of Covid-19 patients waiting to refill empty oxygen cylinders in Manesar, India.

With authorities struggling to provide adequate information, distressed patients and their families have turned to Twitter (TWTR), Facebook, WhatsApp, Instagram or LinkedIn, begging for help.

Social media influencers, from Bollywood actors and cricketers to comic artists and entrepreneurs, have been amplifying SOS calls on their accounts. Others have offered to cook meals, clean homes and walk pets for Covid-19 patients. Some have even managed to find help for friends using dating app Tinder.

On LinkedIn, companies and nonprofit organizations have launched donation initiatives, Ashutosh Gupta, the company’s country manager for India, said in an email. Raheel Khursheed, Twitter’s former head of news in India, said amplifying messages was one way Indians could feel like they were helping.

“It is endearing to watch others help Covid-19 patients on Twitter, but it is also distressing to see how little we can do,” said Khursheed, who now runs a video streaming company. “We don’t know what to do in a pandemic. I don’t have an oxygen cylinder lying at home, so other than amplifying, I can’t do much.”

But even as Indians turn to social media during one of the country’s darkest hours, Modi seems to be cracking down on the major platforms in an attempt to stifle dissent. Last month, Twitter removed several tweets about Covid-19 at the request of the Indian government, including some that were critical of the Prime Minister’s handling of the pandemic.

New Delhi’s intervention has put the social media companies in a difficult position in one of their biggest markets, wedged between their users and a government that recently introduced new rules that could make them liable for not removing controversial posts.

In this aerial picture taken on April 26, burning pyres of victims who lost their lives due to Covid-19 are seen at a cremation ground in New Delhi.

Censorship fears

Each day, images are shared on social media of the anguish unfolding in India, amid mounting public anger against the ruling Bharatiya Janata Party (BJP) for not doing enough to control the brutal second wave. As well as asking for help, people are posting critical comments using trending hashtags including #ResignModi, #SuperSpreaderModi, and #WhoFailedIndia.

Twitter declined to reveal the number of Covid-related posts on its platform in India and, when asked about its India-related traffic during this surge, Facebook sent CNN Business a list of seven community groups working on pandemic-related issues.

Prime Minister Narendra Modi's political party continued to hold election rallies in April despite the crisis.

In a statement last week, India’s Ministry of Electronics and Information Technology said it had asked Twitter, Facebook and others to remove around 100 posts by users it accused of spreading fake or misleading information. The users had created “panic” about the latest Covid-19 wave by “using unrelated, old and out of the context images or visuals, communally sensitive posts and misinformation about Covid-19 protocols,” the ministry said.

A Twitter spokesperson confirmed the company had withheld those tweets in India, but users outside the country could still see them. Modi is particularly active on Twitter, with over 41 million followers.

The government order angered many users on social media, who criticized New Delhi for focusing on its own image, instead of the crisis.

Pratik Sinha, co-founder of fact-checking website Alt News, said he does not buy the government’s explanation that it was going after fake news. “There are hundreds of thousands of posts with fake news on social media during the pandemic, why take down only these 100 and let the others stay?” he said. “A lot of the tweets [which were removed] were in the form of opinion with no element of misinformation,” he added.

Some of the tweets were posted by opposition politicians, who blamed Modi for the devastating Covid-19 surge.

Pawan Khera, spokesperson for opposition party Congress, sent a legal notice to Twitter seeking reinstatement of his post, in which he questioned the Modi government for allowing mass gatherings at Kumbh Mela — one of the largest religious pilgrimages on Earth — and holding election rallies. The notice said the removal of his tweets was “arbitrary” and “illegal.” Twitter has not responded to a request for comment.

Supporters of Modi's Bharatiya Janata Party (BJP) wave towards a helicopter carrying the prime minister as he arrives at a rally on April 10.

New uncertainty

Days after Twitter blocked posts critical of Modi’s response to the crisis, the police in the BJP-run state of Uttar Pradesh pressed criminal charges against 26-year-old Shashank Yadav who used the platform to try to find an oxygen cylinder for his dying grandfather, according to the BBC.

Yadav has “been booked for spreading misleading information” about oxygen supply, a police officer told The Indian Express newspaper.

On Friday, India’s Supreme Court told states not to target citizens communicating their grievances on social media. “Let us hear their voices. We will treat this as contempt if any citizen is harassed if they want bed or oxygen,” India’s top court said.

This isn’t the first time Twitter has been caught in the government’s efforts to crack down on dissent.

In February this year, as farmers protested Modi’s new agriculture laws, the company clashed with the Indian government over its order to take down accounts. While Twitter ultimately complied with part of the order, it refused to take action against journalists, activists or politicians.

“What I am surprised by is that this time Twitter actively removed these tweets — in what seems like an act of censorship — when they had stood up to the government in February,” said Nikhil Pahwa, an internet activist and founder of tech website MediaNama.

So what’s changed since then? Pahwa pointed to India’s new rules for Big Tech firms, which were unveiled shortly after Twitter showed resistance. According to the new guidelines, large social media companies have to appoint a chief compliance officer, who may be held liable in any proceedings if flagged content is not removed, legal observers say.

“The officer can be personally liable in criminal proceedings relating to hosted content, if the platform fails to satisfy a number of obligations now imposed on social media companies, including an obligation to take down content based on a government order,” Anirudh Rastogi, founder of tech law firm Ikigai Law, told CNN Business.

Soon after the release of the new rules, the government reacted to reports about company employees risking arrest if they fail to comply, saying it has never “threatened the employees of any of the social media platforms of jail term.”

Stuck between a rock and a hard place

Twitter isn’t the only company that drew attention last week for taking down posts.

On Wednesday, Facebook blocked posts with #ResignModi for several hours. “We temporarily blocked this hashtag by mistake, not because the Indian government asked us to, and have since restored it,” Facebook said in a statement.

Google’s CEO Sundar Pichai told CNN’s Poppy Harlow last week that his company hasn’t received any recent requests from the government to remove content.

And Pichai remains optimistic about being able to work amicably with authorities in the country. “I think one of India’s strengths is a deeply rooted democratic tradition, based in freedom of expression and allowing for diversity of viewpoints … In the past we’ve been able to work constructively with governments around the world, and we’ll continue that approach here,” said Pichai.

India is one of the largest markets for Big Tech firms, and it would be tricky for them to stand their ground if the Modi government continues to put pressure on them.

Facebook, which also owns WhatsApp and Instagram, has 400 million users in India, more than in any other country. Twitter does not break down user data for India, but third-party research suggests it is one of its larger markets. Professional social network LinkedIn counts India as its second-biggest market with more than 76 million users.

For now, most of these companies are tight-lipped about the impact of the new rules on their operations. Experts don't think they have much choice but to comply if they want to continue operating in the fast-growing market.

“I do hope Twitter stands up for its users and rolls back [their decision to block tweets,]” said Khursheed. “But there is not much wriggle room in terms of compliance because now there is jail time for this sort of stuff.”

    “Institutions that protect free speech in the US are way stronger than they are in India.”



    Meet the Developers – Linux Kernel Team (David Vernet)





    Credit: Larry Ewing and The GIMP for the original design of Tux the penguin.


    For today’s interview, we have David Vernet, a core systems engineer on the Kernel team at Meta. He works on the BPF (Berkeley Packet Filter) and the Linux kernel scheduler. This series highlights Meta Software Engineers who contribute to the Linux kernel. The Meta Linux Kernel team works with the broader Linux community to add new features to the kernel and makes sure that the kernel works well in Meta production data centers. Engineers on the team work with peers in the industry to make the kernel better for Meta’s workloads and to make Linux better for everyone.

    Tell us about yourself.

    I’m a systems engineer who’s spent a good chunk of his career in the kernel space, and some time in the user-space as well working on a microkernel. Right now, I’m focusing most of my time on BPF and the Linux kernel scheduler.

    I started my career as a web developer after getting a degree in math. After going to grad school, I realized that I was happiest when hacking on low-level systems and figuring out how computers work.

    As a kernel developer at Meta, what does your typical day look like?

    I’m not a maintainer of any subsystems in the kernel, so my typical day consists almost exclusively of coding and engineering. That being said, participating in the upstream Linux kernel community is one of the coolest parts of being on the kernel team, so I still spend some time reading over upstream discussions. A typical day goes something like this:

    1. Read over some of the discussions taking place on various upstream lists, such as BPF and mm. I usually spend about 30 to 60 minutes per day on this, though it depends on the day.

    2. Hack on the project that I’m working on. Lately, that’s adding a user-space ringbuffer map type to BPF.

    3. Work on drafting an article for

    What have you been excited about or incredibly proud of lately?

    I recently submitted a patch-set to enable a new map type in BPF. This allows user-space to publish messages to BPF programs in the kernel over the ringbuffer. This map type is exciting because it sets the stage to enable frameworks for user-space to drive logic in BPF programs in a performant way.

    Is there something especially exciting about being a kernel developer at a company like Meta?

    The Meta kernel team has a strong upstream-first culture. Bug fixes that we find in our Meta kernel, and features that we’d like to add, are almost always first submitted to the upstream kernel, and then they are backported to our internal kernel.

    Do you have a favorite part of the kernel dev life cycle?

    I enjoy architecting and designing APIs. Kernel code can never crash and needs to be able to run forever. I find it gratifying to architect systems in the kernel that make it easy to reason about correctness and robustness and provide intuitive APIs that make it easy for other parts of the kernel to use your code.

    I also enjoy iterating with the upstream community. It’s great that your patches have a whole community of people looking at them to help you find bugs in your code and suggest improvements that you may never have considered on your own. A lot of people find this process to be cumbersome, but I find that it’s a small price to pay for what you get out of it.

    Tell us a bit about the topic you presented at the Linux Plumbers Conference this year.

    We presented the live patch feature in the Linux kernel, describing how we have used it at Meta and how our hyper-scale has surfaced some unique challenges with the feature.

    What are some of the misconceptions about kernel or open source software development that you have encountered in your career?

    The biggest misconception is that it’s an exclusive, invite-only club to contribute to the Linux kernel. You certainly must understand operating systems to be an effective contributor and be ready to receive constructive criticism when there is scope for improvement in your code. Still, the community always welcomes people who come in with an open mind and want to contribute.

    What resources are helpful in getting started in kernel development?

    There is a lot of information out there that people have written on how to get integrated into the Linux kernel community. I wrote a blog post on how to get plugged into Linux kernel upstream mailing list discussions, and another on how to submit your first patch. There is also a video on writing and submitting your first Linux kernel patch from Greg Kroah-Hartman.

    In terms of resources to learn about the kernel itself, there are many resources and books, such as:

    Where can people find you and follow your work?

    I have a blog where I talk about my experiences as a systems engineer. I publish articles that range from totally newcomer-friendly topics to more advanced ones that discuss kernel code in more detail. Feel free to check it out and let me know if there’s anything you’d like me to discuss.

    To learn more about Meta Open Source, visit our open source site, subscribe to our YouTube channel, or follow us on Twitter, Facebook and LinkedIn.



    Get started with WhatsApp Business Platform in Minutes with Postman





    Our collaboration brings tools you already use to the WhatsApp Business Platform's APIs

    Postman is a best-in-class API platform used by 20M developers worldwide. Using Postman simplifies each step of the API lifecycle and streamlines collaboration.

    Postman’s strong platform and broad adoption in the developer community made it an easy decision for our WhatsApp Business Platform product team to work with Postman to deliver a robust developer experience.

    What Postman means for your WhatsApp projects

    The benefits of this collaboration for developers are clear – you can easily leverage Postman’s platform with your Meta projects to onboard, collaborate, and contribute to documentation and best practices as you build out your integrations.

    Fast Onboarding

    Via Postman, the WhatsApp team offers an API collection that pre-fills environment variables and walks you through your initial test requests, helping developers dive right into using the Cloud API. Our product managers show how easy it is to get started with Postman in this session from Conversations.
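    As a rough sketch of what one of those first test requests looks like outside Postman, here is how the message payload is typically assembled (the API version, phone number ID and token below are illustrative placeholders; the payload shape follows the Cloud API documentation):

```python
import json

# Hypothetical placeholders -- in the Postman collection these come
# from pre-filled environment variables.
PHONE_NUMBER_ID = "YOUR_PHONE_NUMBER_ID"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"

def build_text_message(recipient: str, body: str) -> dict:
    """Build the JSON payload for a simple Cloud API text message."""
    return {
        "messaging_product": "whatsapp",
        "to": recipient,
        "type": "text",
        "text": {"body": body},
    }

url = f"https://graph.facebook.com/v15.0/{PHONE_NUMBER_ID}/messages"
headers = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "Content-Type": "application/json",
}
payload = build_text_message("15551234567", "Hello from the Cloud API")

# To actually send it, POST `payload` to `url` with `headers`, for example
# with the `requests` library: requests.post(url, headers=headers, json=payload)
print(json.dumps(payload, indent=2))
```

    The same request is what the collection's "send test message" step issues for you, with the environment variables substituted in.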

    Foster Collaboration

    The public Postman workspace fosters collaboration – allowing environments, collections, and documentation augmentation to happen in one place.

    Enhance Documentation

    Postman’s API documentation tools augment our own documentation and allow developers to contribute directly to the community’s shared knowledge, building a strong reference library for all developers and encouraging new, innovative use cases.

    The Results

    Working with Postman from the beginning helps create a developer-friendly experience for the WhatsApp Business Platform – allowing you to get started quickly, build community, and share knowledge.

    Want to know more about our partnership with Postman? Check out their case study, follow along with the video above, or dive right into the Postman Workspace for the WhatsApp Business Platform.



    Summer of open source: building more efficient AI with PyTorch





    Note: Special thanks to Less Wright, Partner Engineer, Meta AI, for review of and additional insights into the post.

    This post on creating efficient artificial intelligence (AI) is the second in the “Summer of open source” series. This series aims to provide a handful of useful resources and learning content in areas where open source projects are creating impact across Meta and beyond. Follow along as we explore other areas where Meta Open Source is moving the industry forward by sharing innovative, scalable tools.

    PyTorch: from foundational technology to foundation

    Since its initial release in 2016, PyTorch has been widely used in the deep learning community, and its roots in research are now consistently expanding for use in production scenarios. In an exciting time for machine learning (ML) and artificial intelligence (AI), where novel methods and use cases for AI models continue to expand, PyTorch has reached the next chapter in its history as it moves to the newly established, independent PyTorch Foundation under the Linux Foundation umbrella. The foundation is made up of a diverse governing board including representatives from AMD, Amazon Web Services, Google Cloud, Microsoft Azure and Nvidia, and the board is intended to expand over time. The mission includes driving adoption of AI tooling through vendor-neutral projects and making open source tools, libraries and other components accessible to everyone. The move to the foundation will also enable PyTorch and its open source community to continue to accelerate the path from prototyping to production for AI and ML.

    Streamlining AI processes with Meta open source

    PyTorch is a great example of the power of open source. As one of the early open source deep learning frameworks, PyTorch has allowed people from across disciplines to experiment with deep learning and apply their work in wide-ranging fields. PyTorch supports everything from experiments in search applications to autonomous vehicle development to ground-penetrating radar, and these are only a few of its more recent applications. Pairing a versatile library of AI tools with the open source community unlocks the ability to quickly iterate on and adapt technology at scale for many different uses.

    As AI is being implemented more broadly, models are trending up in size to tackle more complex problems, but this also means that the resources needed to train these models have increased substantially. Fortunately, many folks in the developer community have recognized the need for models to use fewer resources—both from a practical and environmental standpoint. This post will explore why quantization and other types of model compression can be a catalyst for efficient AI.

    Establishing a baseline for using PyTorch

    Most of this post explores some intermediate and advanced features of PyTorch. If you are a beginner looking to get started, or an expert currently using another library, it’s easiest to begin with some basics. Check out the beginner’s guide to PyTorch, which includes an introduction to a complete ML workflow using the Fashion MNIST dataset.

    Here are some other resources that you might check out if you’re new to PyTorch:

    • PyTorch Community Stories: Learn how PyTorch is making an impact across different industries like agriculture, education, travel and others
    • PyTorch Beginner Series: Explore a video playlist of fundamental techniques including getting started with tensors, building models, training and inference in PyTorch.

    Quantization: Applying time-tested techniques to AI

    There are many pathways to making AI more efficient. Codesigning hardware and software to optimize for AI can be highly effective, but bespoke hardware-software solutions take considerable time and resources to develop. Creating faster and smaller architectures is another path to efficiency, but many of these architectures suffer from accuracy loss when compared to larger models, at least for the time being. A simpler approach is to find ways of reducing the resources that are needed to train and serve existing models. In PyTorch, one way to do that is through model compression using quantization.

    Quantization is a mathematical technique that has been used to create lossy digital music files and convert analog signals to digital ones. By executing mathematical calculations with reduced precision, quantization allows for significantly higher performance on many hardware platforms. So why use quantization to make AI more efficient? Results show that in certain cases, using this relatively simple technique can result in dramatic speedups (2-4 times) for model inference.

    The parameters that make up a deep learning model are typically decimal numbers in floating point (FP) precision; each parameter requires either 16 bits or 32 bits of memory. When using quantization, numbers are often converted to INT4 or INT8, which occupy only 4 or 8 bits. This reduces how much memory models require. Additionally, chip manufacturers include special arithmetic units that make operations on integers faster than operations on decimals.

    There are three methods of quantization that can be applied to models: dynamic, static and quantization-aware training (QAT). A brief overview of the benefits and weaknesses of each is given in the table below. To learn how to implement each of these in your AI workflows, read the Practical Quantization in PyTorch blog post.

    Quantization Method: Benefits and Weaknesses

    Dynamic
    • Easy to use with only one API call
    • More robust to distribution drift, resulting in slightly higher accuracy
    • Works well for long short-term memory (LSTM) and Transformer models
    Weakness: Additional overhead in every forward pass

    Static (also known as PTQ)
    • Faster inference than dynamic quantization by eliminating overhead
    Weakness: May need regular recalibration for distribution drift

    Quantize-Aware Training (QAT)
    • Higher accuracy than static quantization
    • Faster inference than dynamic
    Weakness: High computational cost
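    As a concrete illustration of the "one API call" dynamic path above, here is a minimal sketch (assuming a recent PyTorch build; the toy model, module set and dtype are illustrative):

```python
import torch
import torch.nn as nn

# A toy model: dynamic quantization targets its Linear layers.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10)).eval()

# One API call: weights are stored as INT8, while activations are quantized
# on the fly in every forward pass (the per-pass overhead noted above).
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

out = quantized(torch.randn(1, 64))
print(out.shape)  # torch.Size([1, 10])
```

    The quantized model is a drop-in replacement for the original at inference time; only its internal Linear layers have been swapped for dynamically quantized equivalents.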

    Additional features for speeding up your AI workflow

    Quantization isn’t the only way to make PyTorch-powered AI more efficient. Features are updated regularly, and below are a few other ways that PyTorch can improve AI workflows:

    • Inference mode: This mode can be used when your PyTorch code only needs to run inference. Inference mode changes some of the assumptions when working with tensors to speed up inference. By telling PyTorch that you won’t use tensors for certain applications later (in this case, autograd), it adjusts to make code run faster in these specific scenarios.

    • Low precision: Quantization works only at inference time, that is, after you have trained your model. For the training process itself, PyTorch uses AMP, or automatic mixed precision training, to find the best format based on which tensors are used (FP16, FP32 or BF16). Low-precision deep learning in PyTorch has several advantages. It can help lower the size of a model, reduce the memory that is required to train models and decrease the power that is needed to run models. To learn more, check out this tutorial for using AMP with CUDA-capable GPUs.

    • Channels last: When it comes to vision models, NHWC, otherwise known as channels-last, is a faster tensor memory format in PyTorch. Having data stored in the channels-last format accelerates operations in PyTorch. Formatting input tensors as channels-last reduces the overhead that is needed for conversion between different format types, resulting in faster inference.

    • Optimize for inference: This TorchScript prototype implements some generic optimizations that should speed up models in all environments, and it can also prepare models for inference with build-specific settings. Primary use cases include vision models on CPUs (and GPUs) at this point. Since this is a prototype, you may run into issues; if you do, raise them on the PyTorch GitHub repository.
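    The inference mode described in the first bullet above can be sketched in a few lines (assuming PyTorch 1.9 or later, where torch.inference_mode was introduced):

```python
import torch

x = torch.randn(3, 3, requires_grad=True)

# Inside inference_mode, PyTorch skips autograd bookkeeping entirely,
# so tensor operations carry less overhead.
with torch.inference_mode():
    y = x * 2

print(y.requires_grad)   # False: no autograd graph was recorded
print(y.is_inference())  # True: y cannot participate in autograd later
```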
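    The low-precision training described above can be sketched as a single AMP step. This is a minimal sketch, not a production recipe: the GradScaler is only enabled when CUDA is available, so the same loop also runs on CPU (where autocast defaults to BF16):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(16, 4).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

x = torch.randn(8, 16, device=device)
target = torch.randn(8, 4, device=device)

optimizer.zero_grad()
# autocast chooses a reduced precision per op (FP16 on CUDA, BF16 on CPU).
with torch.autocast(device_type=device):
    pred = model(x)
    loss = (pred - target).pow(2).mean()  # simple MSE loss

# Scaling the loss guards FP16 gradients against underflow;
# with the scaler disabled, these calls pass through unchanged.
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```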
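    Switching to the channels-last format mentioned above is a one-line change per tensor or module; a minimal sketch with an illustrative toy convolution:

```python
import torch

x = torch.randn(8, 3, 32, 32)  # default NCHW layout

# Reinterpret the same data in channels-last (NHWC) memory format.
x_cl = x.to(memory_format=torch.channels_last)

conv = torch.nn.Conv2d(3, 16, kernel_size=3).to(memory_format=torch.channels_last)
out = conv(x_cl)

print(x_cl.is_contiguous(memory_format=torch.channels_last))  # True
print(out.shape)  # torch.Size([8, 16, 30, 30])
```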
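    And the optimize-for-inference prototype from the last bullet can be tried on a scripted model like so (a sketch; the toy model and input shape are illustrative):

```python
import torch

# optimize_for_inference expects a scripted model in eval mode.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3),
    torch.nn.BatchNorm2d(8),
    torch.nn.ReLU(),
).eval()

scripted = torch.jit.script(model)
# Freezes the model and applies inference-specific optimizations,
# such as folding the BatchNorm into the preceding convolution.
optimized = torch.jit.optimize_for_inference(scripted)

out = optimized(torch.randn(1, 3, 16, 16))
print(out.shape)  # torch.Size([1, 8, 14, 14])
```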

    Unlocking new potential in PyTorch

    Novel methods for accelerating AI workflows are regularly explored on the PyTorch blog. It’s a great place to keep up with techniques like the recent BetterTransformer, which improves speed and throughput in Transformer models by up to 2x for common execution scenarios. If you’re interested in learning how to implement specific features in PyTorch, the recipes page allows you to search by categories like model optimization, distributed training and interpretability. This post is only a sampling of how tools like PyTorch are moving open source and AI forward.

    To stay up to date with the latest in Meta Open Source for artificial intelligence and machine learning, visit our open source site, subscribe to our YouTube channel, or follow us on Facebook, Twitter and LinkedIn.
