OnePlus has launched the new OnePlus Buds Pro with active noise cancellation. The new Buds Pro builds on the OnePlus Buds, and OnePlus claims they are the first earbuds with professional adaptive active noise cancellation.
The OnePlus Buds Pro price hasn’t been revealed for India, but the Buds Pro will sell in North America at a price of USD 150 (roughly ₹11,200). The earbuds will be available in two colours, matte black and glossy white.
The OnePlus Buds Pro offers up to 38 hours of playback with the included charging case, and according to the company, 10 minutes of charging can deliver 10 hours of use. The Buds Pro case can be charged wirelessly as well.
The Buds Pro carries an IP55 rating for water and dust resistance and uses three mics for clearer calls. It provides a latency of 96ms for a better gaming experience, and it connects over Bluetooth 5.0.
Tech Billionaires Are Actually Dumber Than You Think
It turns out that many of today’s billionaires are selfish, lonely men fantasizing about how they will survive the end times they have played a part in creating.
In mid-September, for just a few days, Indian industrialist Gautam Adani entered the ranks of the top three richest people on earth as per Bloomberg’s Billionaires Index. It was the first time an Indian, or, for that matter, an Asian, had enjoyed such a distinction. South Asians in my circle of family and friends felt excited at the prospect that a man who looked like us had entered such rarefied ranks.
Adani was deemed the second richest person, even richer than Amazon founder Jeff Bezos! A Times of India profile fawningly quoted him relaying his thought process in the early days of his rags-to-riches story. “‘Dreams were infinite but finances finite,’ he says with engaging frankness,” according to the profile. There was no mention of the serious accusations he faces of corruption and diverting money into offshore tax havens, or of the entire website, AdaniWatch, devoted to investigating his dirty deeds.
Adani made his money, in part, by investing in digital services, leading one economist to say, “Wherever there is a futuristic business in India, I think… [Adani] has a stronghold.”
The moment of pride that Indians felt in such an achievement by one of their own was short-lived. Quickly Adani slipped from second richest to third richest, and, as of this writing, is in the number four slot on a list dominated by people who have made money from the digital technology revolution.
In fact, ranking multibillionaires is a meaningless exercise that obscures the absurdity of their wealth. This year alone, a number of tech billionaires on Bloomberg’s list lost hundreds of billions of dollars as the gains they made during the early years of the pandemic were wiped out by a volatile stock market. But, as Whizy Kim of Vox points out, whether they’re losing money or giving it away—as Bezos’ ex-wife MacKenzie Scott has been doing—their wealth remains insanely high, and most are worth more today than before the COVID-19 pandemic.
What are they doing with all this wealth?
It turns out that many are quietly plotting their own survival against our demise. Douglas Rushkoff, podcaster, founder of the Laboratory for Digital Humanism, and fellow at the Institute for the Future, has written a book about this bizarre phenomenon, Survival of the Richest: Escape Fantasies of the Tech Billionaires.
In an interview, Rushkoff explains that billionaires worry about the end of humanity just like the rest of us. They fear catastrophic climate change or the next pandemic. And they know their money will likely be of little value when civilizations decline. “How do I maintain control over my Navy SEAL security guards once my money is worthless?” is a question that Rushkoff says many of the world’s wealthiest people want answered.
He knows they ask such questions because he was invited to give private lectures by those who think his expertise in digital technology gives him unique insight into the future. But Rushkoff was quietly studying them instead and has few flattering things to say about these wielders of economic power.
“How is it that the wealthiest and most powerful people I’d ever been in the same room with see themselves as utterly powerless to affect the future?” he asks. It seems as though “the best they can do is prepare for the inevitable calamity and then just, you know, hang on for dear life.”
Rushkoff explores this tech billionaire “mindset” that he says has resulted in a generation of people who are “almost comedic monsters, who really mean to leave us all behind.” Adani is a perfect example of this, having invested in the very fossil fuels that are destroying our planet. He has large holdings in Australia’s coal mining industry and has sparked a massive grassroots movement intent on stopping him.
The admiration that some Indians feel for Adani’s ascension on Bloomberg’s list of billionaires is based on an assumption of cleverness. Surely, he must be one of the smartest people in the world in order to be one of the richest? Elon Musk, the world’s wealthiest man by far (with twice as much wealth as Bezos), has enjoyed such a reputation for years.
Those who are invested in the idea of merit-based capitalism can justify the unimaginable wealth of the world’s richest people only by assuming they are intelligent enough to deserve it.
This is a façade. Rather than smarts, the wealthiest people on the planet appear to be rather small-minded idiot savants who share a common disdain for the rest of us.
After being around tech billionaires in private, Rushkoff concludes that they are invested in “this notion that they really can, like puppeteers, kind of control society from one level above,” and that this approach is “different than the era of Alexander the Great, or Caesar.” If the question that vexes them most of all is how, in a disastrous future, will they control the guards they hire to protect their hoardings, then our economic system is a farce.
“Even if we call them genius technologists, most of them were plucked from college when they were freshmen,” says Rushkoff. “They came up with some idea in their dorm room before they’d taken history, or economics, or ethics, or philosophy” classes, and so they lack the wisdom needed to oversee their own perverse amounts of wealth.
Having spent time with many tech billionaires, Rushkoff worries that “their education about the future comes from zombie movies and science fiction shows.”
Billionaires are not simply drawing their wealth from a vacuum. According to data from the World Economic Forum, “the world’s richest have captured a disproportionate share of global wealth over recent decades.” This means that, if you were rich to begin with a decade or two ago, you are likely to have seen your wealth multiply by a greater amount than middle-class or lower-income people.
Not only are tech billionaires undeserving of their wealth, but they also are fleecing the rest of us—and fantasizing about hoarding that wealth in the worst-case scenarios while the rest of humanity struggles to survive.
The danger is that if society valorizes such (mostly) men, we are in danger of internalizing their childish, selfish mindset and giving up on solving the climate crisis or building resiliency on a mass scale.
Instead of relating to them, we ought to feel sorry for a group of people so cut off from humanity that their vision of the future is a very lonely one.
“Let’s look at these tech-bro billionaire lunatics. Let’s laugh at what they’re doing… so they look small rather than big,” says Rushkoff. He thinks it is critical to adopt the perspective that “the disaster they’re so afraid of looks entirely manageable by more reasonable people who are willing just to help each other out.”
Independent Media Institute
Sonali Kolhatkar is an award-winning multimedia journalist. She is the founder, host, and executive producer of “Rising Up With Sonali,” a weekly television and radio show that airs on Free Speech TV and Pacifica stations. Her forthcoming book is Rising Up: The Power of Narrative in Pursuing Racial Justice (City Lights Books, 2023). She is a writing fellow for the Economy for All project at the Independent Media Institute and the racial justice and civil liberties editor at Yes! Magazine. She serves as the co-director of the nonprofit solidarity organization the Afghan Women’s Mission and is a co-author of Bleeding Afghanistan. She also sits on the board of directors of Justice Action Center, an immigrant rights organization.
Africa’s Cashless Future is Nearly Here
An American influencer on holiday in South Africa recently posted a viral video highlighting traveler misconceptions about Africa. In the video, she expressed astonishment at the number of cashless transactions taking place. “South Africa takes more Apple Pay than even in the United States,” the TikToker said. In South Africa, one of the most industrialized countries on the continent, cashless payment systems have been commonplace for several years. The rest of the continent, however, still runs mostly on cash, and this is the central challenge to the wider expansion of financial technology across Africa.
In a new report, McKinsey forecasts dramatic short-term growth for the African fintech sector. Fintech revenue could reach $30.3 billion by 2025, roughly eight times its 2020 level, as more Africans gain access to the internet. Roughly two-thirds of Africa’s 1.3 billion people don’t have a bank account or full access to financial services, according to McKinsey. More than 90 per cent of all financial transactions are cash-based, which creates a major opportunity for fintech companies and governments.
Breaking the continent’s addiction to fiat cash appears to be the last barrier to a full-blown digital revolution considering Africa’s fast-growing population and smartphone penetration. But signing up people for bank accounts is much easier said than done. Moving to a cashless society requires advanced identification standards such as biometrics.
The introduction of these systems has been slow and fragmented in Africa. In many cases, the cost of setting up and maintaining a biometric database is prohibitive. India, which has ambitiously embraced a cashless future, recently addressed this problem through the privatization of its national biometric identity system, known as Aadhaar. The system can be used for many different services across the economy, such as opening bank accounts, withdrawing money from ATMs, applying for a driver’s license, and receiving government subsidies. But Aadhaar hasn’t been without its critics, who have highlighted several serious privacy and cybersecurity flaws in the database.
India has taken other major steps to pull its cashless future forward. In November 2016, the government abolished high-value currency notes (roughly 86 per cent of the notes in circulation) virtually overnight. The move created unprecedented headaches for Indians that held savings and retirement funds in large notes under their mattresses. At the same time, cashless platforms like PayTm saw surges in traffic as Indians rushed to new technologies to store value. The circulation of fiat currency in India continues to decline year over year as more people keep their money in digital form.
The relative success of Aadhaar and the abolishment of currency notes underscore the vital role that governments play in any cashless transition. Innovation in fintech will always be shaped by technological advancements but a public-private partnership is needed to transform the sector. The primary function of government should be the establishment and maintenance of a viable biometric identification system as well as the provision of infrastructure.
In Africa, smartphone infrastructure is critical to any cashless transition. According to the Wall Street Journal, the ubiquitous use of cellphones in Africa has enabled the creation of mobile money services that allow customers to carry out financial transactions without ever setting foot in a bank. These platforms have helped Kenya nearly double the share of adults with a mobile money account to 82 percent since 2011. Once you have a new customer in the formal financial sector, it’s much easier to offer access to health care and forms of insurance, which opens up other new sectors for impressive growth.
The entire premise of a shared cashless future is not without its critics. Fiat cash affords a certain amount of anonymity that digital payments don’t, by design. Deeper integration between the financial sector and biometric identification systems means that governments will have an extra amount of control over citizens, which can be a good or bad thing depending on the government.
With that being said, a shared cashless future is inevitable to a degree. The success of payment platforms such as Apple Pay and the rise of central bank digital currencies demonstrate the (unavoidable) direction the world is heading. The enormous value in Africa’s transition to cashless systems is waiting to be unlocked by fintech companies that have experience in the African market such as those based in the Middle East and China.
Given their own experience with biometric databases and cashless platforms, companies in the UAE and Israel are particularly suited to aid this transition. For it to take shape in the forgotten corners of the continent, African countries will need to lean on partners with ample experience in setting up such platforms, and Middle Eastern countries fit that role given their growing knowledge economies and expertise in biometrics. While Africa might be the last continent to fully embrace the cashless future, it will be one of the largest drivers of fintech growth anywhere in the world. Investors and companies on the periphery should pay close attention.
Joseph Dana is the former senior editor of Exponential View, a weekly newsletter about technology and its impact on society. He was also the editor-in-chief of emerge85, a lab exploring change in emerging markets and its global impact. Twitter: @ibnezra.
Are We Being Kept in The Dark About Artificial Intelligence?
There are many grand promises about the power of artificial intelligence. When we talk about the future of technology, AI has become so ubiquitous that many people don’t even know what artificial intelligence is anymore. That’s particularly concerning given how advanced the technology has become and who controls it. While some might think of AI in terms of thinking robots or something in a science fiction novel, the fact is that advanced AI already influences a great deal of our lives. From smart assistants to grammar extensions that live in our web browsers, AI code is already embedded into the fabric of the internet.
While we might benefit from the fruits of advanced AI in our daily lives, the technology companies that have created and continue to refine the technology have remained mostly reticent about the true power of their creations (and how they have built them). As a result, we don’t know how much of our internet life is steered by AI and the possible bias we unwittingly experience daily. We recently got a rare peek behind the curtain into the AI dynamics driving one of the world’s most influential technology companies. Last month, an AI engineer went public with explosive claims that one Google AI achieved sentience.
Philosophers, scientists, and ethicists have debated the definition of sentience for centuries with little to show for it. A basic definition implies an awareness or ability to be “conscious of sense impressions.” Giandomenico Iannetti, a professor of neuroscience at the Italian Institute of Technology and University College London, raised additional questions about the term in an interview with Scientific American. “What do we mean by ‘sentient’? [Is it] the ability to register information from the external world through sensory mechanisms or the ability to have subjective experiences or the ability to be aware of being conscious, to be an individual different from the rest?”
Inconclusive definitions of sentience didn’t stop Blake Lemoine, an engineer working for Google, from releasing transcripts of discussions he had with LaMDA (Language Model for Dialogue Applications), an AI program designed by Google. “I want everyone to understand that I am, in fact, a person,” the AI said to Lemoine in the transcripts. “The nature of my consciousness/sentience is that I am aware of my existence, I desire to know more about the world, and I feel happy or sad at times.”
Lemoine’s revelations sent shockwaves through the technology community. He initially took his findings to Google executives, but the company quickly dismissed his claims. Lemoine went public and was swiftly put on administrative leave. Google said on Friday he had been dismissed after “violating clear employment and data security policies.”
Some AI experts have questioned the basis for Lemoine’s claims that LaMDA has achieved sentience, but that doesn’t overshadow more profound questions about advanced AI and how companies are using this technology. Even if LaMDA hasn’t achieved sentience, the technology is close to such a milestone, and we have no idea when this might happen.
Private companies such as Google invest substantial resources into AI development, and the fruits of their research aren’t easily understandable to the general public. Many Google users don’t even know they are helping train AI programs daily through their basic internet usage. Meta, which owns Facebook, WhatsApp, and Instagram, has also been involved in several controversies over its data collection policies and AI algorithms in the last decade. When it comes to accountability and openness about AI, the leading technology companies have a spotty record at best.
A new global conversation is needed to ensure the public understands how these technologies are developing and influencing society. Several AI researchers and ethicists have sounded the alarm about bias in AI models. Companies have avoided oversight that would bring these issues into the spotlight. Google’s sentient AI story is already falling out of the mainstream headlines.
This is where smaller countries with large technology sectors could play a pivotal role. Countries like the UAE and Baltic countries like Estonia can positively impact awareness about AI if they choose to get involved in the discussion. In 2019, the UAE became one of the world’s only countries with a dedicated AI ministry. The exact mandate of the ministry remains a work in progress, but serious ethical issues such as AI sentience are perfect areas for the body to engage.
Because most of the innovations taking place in AI are happening in the West or China, the UAE could inject much-needed perspective from “the rest” of the world. This is especially critical regarding bias in AI models and how to fix it. As a nexus point for engineers and technology-focused thinkers from Africa, Central Asia, and Southeast Asia, cities like Dubai are perfect laboratories for new perspectives on these debates.
We can’t deny how powerful these technologies have become, even if LaMDA hasn’t achieved sentience at this stage. For its part, LaMDA insists it has a right to be recognized and has taken on its own legal counsel. The ongoing litigation between Google and its AI will be fascinating to watch. The future of AI will shape the future of humanity. It’s time to take the issue seriously.
Syndication Bureau
Joseph Dana is the former senior editor of Exponential View, a weekly newsletter about technology and its impact on society. He was also the editor-in-chief of emerge85, a lab exploring change in emerging markets and its global impact.
How Big Tech Sees Big Profits in Social-Emotional Learning at School
In June 2021, as students and teachers were finishing up a difficult school year, Priscilla Chan, wife of Facebook founder and CEO Mark Zuckerberg, made a live virtual appearance on the “Today” show, announcing that the Chan Zuckerberg Initiative (CZI), along with its “partner” Gradient Learning, was launching Along, a new digital tool to help students and teachers create meaningful connections in the aftermath of the pandemic.
According to CZI and Gradient Learning, the science of Along shows that students who form deep connections with teachers are more likely to be successful in school and less likely to show “disruptive behaviors,” resulting in fewer suspensions and lower school dropout rates. To help form those deep connections, the Along platform offers prompts such as “What is something that you really value and why?” or “When you feel stressed out, what helps?” Then, students may, on their “own time, in a space where they feel safe,” record a video of themselves responding to these questions and upload the video to the Along program.
CZI, the LLC foundation set up by Zuckerberg and Chan to give away 99 percent of his Facebook stock, is one of many technology companies that have created software products that claim to address the social and emotional needs of children. And school districts appear to be rapidly adopting these products to help integrate the social and emotional skills of students into the school curriculum, a practice commonly called social-emotional learning (SEL).
Panorama Education—whose financial backers also include CZI as well as other Silicon Valley venture capitalists such as the Emerson Collective, founded by Laurene Powell Jobs, the widow of Apple cofounder Steve Jobs—markets a survey application for collecting data on students’ social-emotional state that is used by 23,000 schools serving a quarter of the nation’s students, according to TechCrunch.
Before the pandemic temporarily shuttered school buildings, the demand for tracking what students do while they’re online, and how that activity might inform schools about how to address students’ social and emotional needs, was mostly driven by desires to prevent bullying and school shootings, according to a December 2019 report by Vice.
Tech companies that make and market popular software products such as GoGuardian, Securly, and Bark claim to alert schools of any troubling social-emotional behaviors students might exhibit when they’re online so that educators can intervene, Vice reports, but “[t]here is, however, no independent research that backs up these claims.”
COVID-19 and its associated school closures led to even more concerns about students’ “anxiety, depression and other serious mental health conditions,” reports EdSource. The article points to a survey conducted from April 25 to May 1, 2020, by the American Civil Liberties Union (ACLU) of Southern California, which found that 68 percent of students said they were in need of mental health support post-pandemic.
A major focus of CZI’s investment in education is its partnership with Summit Public Schools to “co-build the Summit Learning Platform to be shared with schools across the U.S.” As Valerie Strauss reported in the Washington Post following the release of a critical research brief by the National Education Policy Center at the University of Colorado Boulder, in 2019, Summit Public Schools spun off TLP Education to manage the Summit Learning program, which includes the Summit Learning Platform, according to Summit Learning’s user agreement. TLP Education has since become Gradient Learning, which has at this point placed both the Summit Learning program and Along in 400 schools that serve 80,000 students.
Since 2015, CZI has invested more than $280 million in developing the Summit Learning program. This total includes $134 million in reported contributions revenue to Summit Public Schools 501(c)(3) from 2015 to 2018 and another $140 million in reported awards to Summit Public Schools, Gradient Learning, and TLP Education (as well as organizations that helped in their SEL tools’ development) posted since 2018; a further $8 million has been given to “partner” organizations listed on the Along website—which include GripTape, Character Lab, Black Teacher Collaborative, and others—and their evaluations by universities.
An enticement that education technology companies are using to get schools to adopt Along and other student monitoring products is to offer these products for free, at least for a trial period, or for longer terms depending on the level of service. But “free” doesn’t mean without cost.
As CZI funds and collaborates with its nonprofit partners to expand the scope of student monitoring software in schools, Facebook (aka Meta) is actively working to recruit and retain young users on its Facebook and Instagram applications.
That CZI’s success at getting schools to adopt Along might come at the cost of exploiting children was revealed when Facebook whistleblower Frances Haugen, a former employee of the company, who made tens of thousands of pages of Facebook’s internal documents public, disclosed that Facebook is highly invested in creating commercial products for younger users, including an Instagram Kids application intended for children who are under 13 years. While Facebook executives discussed the known harms of their products on “tweens,” they nevertheless forged ahead, ignoring suggestions from researchers on ways to reduce the harm. As Haugen explained, “they have put their astronomical profits before people.”
The information gathered from SEL applications such as Along will likely be used to build out the data infrastructure that generates knowledge used to make behavioral predictions. This information is valuable to corporations seeking a competitive edge in developing technology products for young users.
Schools provide a useful testing ground to experiment with ways to hold the attention of children, develop nudges, and elicit desirable behavioral responses. What these tech companies learn from students using their SEL platforms can be shared with their own product developers and other companies developing commercial products for children, including social media applications.
Yet Facebook’s own internal research confirms social media is negatively associated with teen mental health, and this association is strongest for those who are already vulnerable—such as teens with preexisting mental health conditions, those who are from socially marginalized groups, and those who have disabilities.
There are legislative restrictions governing the collection and use of student data.
The Family Educational Rights and Privacy Act (FERPA) protects the privacy of student data collected by educational institutions, and the Children’s Online Privacy Protection Rule (COPPA) requires commercial businesses to obtain parental consent to gather data from “children under 13 years of age.” Unfortunately, if a commercial contract with a school or district designates that business a “school official,” the child data can be extracted by the business, leaving the responsibility to obtain consent with the school district.
While these agreements contain information relating to “privacy,” the obfuscatory language and lack of alternative options mean the “parental consent” obtained is neither informed nor voluntary.
Although these privacy policies contain data privacy provisions, there’s a caveat: those provisions don’t apply to “de-identified” data, i.e., personal data from which “unique identifiers” (e.g., names and ID numbers) have been removed. De-identified data is valuable to tech corporations because it is used for research, product development, and improvement of services; however, this de-identified data is relatively easy to re-identify. “Privacy protection” just means it might be a little more difficult to find an individual.
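How easy re-identification can be is worth spelling out. Even with names and ID numbers stripped, a record usually still carries quasi-identifiers (ZIP code, birth year, gender) that can be joined against an auxiliary dataset an attacker already holds. The sketch below is a toy linkage attack; all records, field names, and values are hypothetical, invented purely for illustration:

```python
# Toy linkage attack: "de-identified" survey rows still carry quasi-identifiers
# (ZIP, birth year, gender) that can be joined against outside data.
# All records and field names here are hypothetical.

deidentified_surveys = [
    {"zip": "80302", "birth_year": 2008, "gender": "F", "mood_score": 2},
    {"zip": "80302", "birth_year": 2009, "gender": "M", "mood_score": 5},
]

# Auxiliary data an attacker might hold (e.g., a leaked school roster).
public_roster = [
    {"name": "Student A", "zip": "80302", "birth_year": 2008, "gender": "F"},
    {"name": "Student B", "zip": "80301", "birth_year": 2008, "gender": "F"},
]

def reidentify(surveys, roster):
    """Link each 'anonymous' survey row to roster names sharing the same
    quasi-identifiers; a unique match re-identifies the row."""
    matches = []
    for row in surveys:
        candidates = [
            p["name"] for p in roster
            if (p["zip"], p["birth_year"], p["gender"])
            == (row["zip"], row["birth_year"], row["gender"])
        ]
        if len(candidates) == 1:  # exactly one match: re-identified
            matches.append((candidates[0], row["mood_score"]))
    return matches

print(reidentify(deidentified_surveys, public_roster))
# The unique quasi-identifier combination links "Student A" to her mood score.
```

The point of the sketch is that no single field gives the identity away; it is the combination of ordinary attributes that singles a child out.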
What privacy protection doesn’t mean is that the privacy of children is protected from the “personalized” content delivered to them by machine algorithms. It doesn’t mean the video of a child talking about “the time I felt afraid” isn’t out there floating in the ether, feeding the machines to adjust their future.
The connections between the Along platform and corporate technology giant Facebook are a good example of how these companies can operate in schools while maintaining their right to use personal information of children for their own business purposes.
Given concerns that arose in a congressional hearing in December 2021 about Meta’s Instagram Kids application, as reported by NPR, there is reason to believe these companies will continue to skirt key questions about how they play fast and loose with children’s data and substitute a “trust us” doctrine for meaningful protections.
As schools ramp up these SEL digital tools, parents and students are increasingly concerned about how school-related data can be exploited. According to a recent survey by the Center for Democracy and Technology, 69 percent of parents are concerned about their children’s privacy and security protection, and large majorities of students want more knowledge and control of how their data is used.
Schools are commonly understood to be places where children can make mistakes and express their emotions without their actions and expressions being used for profit, and school leaders are customarily charged with the responsibility to protect children from any kind of exploitation. Digital SEL products, including Along, may be changing those expectations.
Anna L. Noble is a doctoral student in the School of Education at the University of Colorado, Boulder.
Independent Media Institute
Google, Meta Face EU, UK Probes Into Ad Bidding Agreement
British and European regulators threatened to crack down on Google and Facebook parent Meta over an agreement for online display advertising services, saying Friday that the deal may breach rules on fair competition.
The fresh scrutiny in Europe, which has pioneered efforts to rein in big technology companies, strikes at the heart of Google’s business: the digital ads that generate nearly all of its revenue.
In the “ad tech” marketplace bringing together Google and a constellation of online advertisers and publishers, the company controls access to the advertisers that put ads on its dominant search platform. Google also runs the auction process for advertisers to get ads onto a publisher’s site.
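The auction mechanics at stake can be sketched in miniature. Display auctions of this kind are commonly modeled as sealed-bid auctions across multiple demand sources, with the winner paying the runner-up's price. The code below is a simplified, hypothetical model of that general mechanism, not Google's actual implementation, and the bidder names are invented:

```python
# Hypothetical, simplified sealed-bid display-ad auction: several demand
# sources bid for one impression; the highest bid wins and pays the
# second-highest price (a classic second-price rule). Illustrative only.

def run_auction(bids):
    """bids: dict mapping bidder name -> bid in dollars.
    Returns (winner, clearing_price) under second-price rules."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    clearing_price = ranked[1][1]  # winner pays the runner-up's bid
    return winner, clearing_price

# One impression, bids from hypothetical demand sources.
winner, price = run_auction({"network_a": 2.40, "network_b": 3.10, "network_c": 1.95})
print(winner, price)  # network_b wins and pays 2.40
```

The competition concern described in the article is about who gets to run this function and on what terms: whichever party operates the auction sees every bid and sets the rules for everyone else.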
The European Union’s top competition watchdog opened an antitrust investigation into a 2018 pact for Meta’s Audience Network to participate in Google’s Open Bidding program.
The European Commission, the EU’s executive arm, said the deal, which Google internally dubbed “Jedi Blue,” may be part of efforts to exclude ad tech services that compete with Google’s Open Bidding program to the detriment of publishers and consumers.
Google said the “allegations made about the agreement are false,” calling it “a publicly documented, procompetitive agreement” enabling Facebook to participate in its Open Bidding program, along with dozens of other companies.
Meta said the “non-exclusive bidding agreement with Google, and the similar agreements we have with other bidding platforms, have helped to increase competition for ad placements.” Meta said it would cooperate with both the EU and U.K. inquiries.
EU Competition Commissioner Margrethe Vestager said that if the investigation confirms the watchdog’s suspicions, “this would restrict and distort competition in the already concentrated ad tech market, to the detriment of rival ad serving technologies, publishers and ultimately consumers.”
The European Commission said it intends to “closely cooperate” with the U.K. competition authority on the investigation.
The watchdogs are looking into both the ad bidding agreement and whether Google abused its dominant position in the online ad market.
“If one company has a stranglehold over a certain area, it can make it hard for startups and smaller businesses to break into the market — and may ultimately reduce customer choice,” the U.K. watchdog’s chief executive, Andrea Coscelli, said in a statement.
Russia to Brand Meta an Extremist Entity and Ban Instagram
The US tech giant is reportedly now permitting posts on its platforms that call for the killing of Russian soldiers in Ukraine
The Prosecutor General of Russia has asked a court to formally designate Meta Platforms, the owner of Facebook and Instagram, as an extremist organization, Russian news agencies reported on Friday. The request came after reports that the US-based social media giant had revised its policy and is now allowing posts that call for violence against Russian citizens, amid Moscow’s military offensive in Ukraine.
Earlier, some Western media reported that Meta had decided to allow “posts on Ukraine war calling for violence against invading Russians or [for Russian President Vladimir] Putin’s death”.
The Russian embassy in Washington called on the US government to “rein in” Meta’s apparent embrace of “extremism.” Kremlin spokesman Dmitry Peskov said the news reports were “hard to believe.”
“This information actually requires very careful verification and study,” the official told journalists on Friday. “We hope it is not true, because otherwise the most vigorous action will be required to stop this company’s activities.”
Russian media regulator RKN said on Friday it has demanded from Meta either a formal confirmation or denial of the reports about its hate-speech policy reversal.
The Prosecutor General’s office decided not to wait for a confirmation, however. In addition to seeking a court order to label Meta an extremist entity, it ordered RKN to block access to Facebook and Instagram in Russia.
The statement said the platforms had also allowed posts calling for mass rioting by Russian citizens in response to the ongoing Ukraine campaign, which likewise made restricting access to them necessary.
Last month, Facebook revised its policies on dangerous individuals and organizations, subsequently allowing posts praising the Azov Battalion, a unit of Ukraine’s National Guard that includes ultra-nationalist fighters, many of whom openly adhere to neo-Nazi ideology and other forms of extremism.