
TECH

Google To Pay $700 Million In Antitrust Settlement With States

Google app is seen on a smartphone in this illustration taken, July 13, 2021. REUTERS/Dado Ruvic/File Photo

Google has agreed to pay $700 million and make several other concessions to settle allegations that it had been stifling competition against its Android app store — the same issue that went to trial in another case that could result in even bigger changes.

Although Google struck the deal with state attorneys general in September, the settlement’s terms weren’t revealed until late Monday in documents filed in San Francisco federal court.

The settlement with the states includes $630 million to compensate U.S. consumers who were funneled into a payment processing system that state attorneys general alleged drove up the prices for digital transactions within apps downloaded from the Play Store. That store caters to the Android software that powers most of the world’s smartphones.

Like Apple does in its iPhone app store, Google collects commissions ranging from 15% to 30% on in-app purchases — fees that state attorneys general contended drove prices higher than they would have been had there been an open market for payment processing.

Those commissions generated billions of dollars in profit annually for Google, according to evidence presented in the recent trial focused on its Play Store.

Another $70 million of the pre-trial settlement will cover the penalties and other costs that Google is being forced to pay to the states.

Google also agreed to make other changes designed to make it even easier for consumers to download and install Android apps from other outlets besides its Play Store for the next five years. It will refrain from issuing as many security warnings, or “scare screens,” when alternative choices are being used.

The makers of Android apps will also gain more flexibility to offer alternative payment choices to consumers instead of having transactions automatically processed through the Play Store and its commission system. Apps will also be able to promote lower prices available to consumers who choose an alternate to the Play Store’s payment processing.

Washington D.C. Attorney General Brian Schwalb hailed the settlement as a victory for the tens of millions of people in the U.S. who rely on Android phones to help manage their lives. “For far too long, Google’s anticompetitive practices in the distribution of apps deprived Android users of choices and forced them to pay artificially elevated prices,” Schwalb said.

Although the state attorneys general hailed the settlement as a huge win for consumers, it didn’t go far enough for Epic Games, which spearheaded the attack on Google’s app store practices with an antitrust lawsuit filed in August 2020.

Epic, the maker of the popular Fortnite video game, rebuffed the settlement in September and instead chose to take its case to trial, even though it had already lost on most of its key claims in a similar trial targeting Apple and its iPhone app store in 2021.

The Apple trial, though, was decided by a federal judge rather than a jury. In the Google case, a jury vindicated Epic with a unanimous verdict finding that Google had built anticompetitive barriers around the Play Store. Google has vowed to appeal the verdict.

But the trial’s outcome nevertheless raises the specter of Google potentially being ordered to pay even more money as punishment for its past practices and making even more dramatic changes to its lucrative Android app ecosystem.

Google faces an even bigger legal threat in another antitrust case targeting its dominant search engine that serves as the centerpiece of a digital ad empire that generates more than $200 billion in sales annually. Closing arguments in a trial pitting Google against the Justice Department are scheduled for early May before a federal judge in Washington D.C.

AP/HuffPost


TECH

Apple To Stop Some Watch Sales In U.S. Over Patent Dispute

SAN FRANCISCO, CA - OCTOBER 20: The Apple Pay logo is displayed in a mobile kiosk sponsored by Visa and Wells Fargo to demonstrate the new Apple Pay mobile payment system on October 20, 2014 in San Francisco, California. Apple's Apple Pay mobile payment system launched today at select banks and retail outlets. (Photo by Justin Sullivan/Getty Images)

If two of the latest Apple Watches are on your holiday shopping list, don’t dawdle for much longer because the devices won’t be available to buy in the U.S. later this week if the White House doesn’t intervene in an international patent dispute.

Apple plans to suspend sales of the Series 9 and Ultra 2 versions of its popular watch for online U.S. customers beginning Thursday afternoon and in its stores on Sunday.

The move stems from an October decision by the International Trade Commission restricting Apple’s watches with the Blood Oxygen measurement feature as part of an intellectual property dispute with medical technology company Masimo.

The White House had 60 days to review the ITC order issued on Oct. 26, meaning Apple could have kept selling the two affected models in the U.S. through Christmas.

But the Cupertino, California, company said in a Monday statement that it is pausing sales early to ensure it complies with the ITC order.

If the ITC’s sales ban isn’t overturned, Apple pledged to “take all measures” to resume sales of the Series 9 and Ultra 2 models in the U.S. as soon as possible.

The Apple Watch SE, which lacks the Blood Oxygen feature, will remain on sale in the U.S. after Christmas Eve. Previously purchased Apple Watches equipped with the Blood Oxygen feature aren’t affected by the ITC order.

HuffPost


TECH

European Union Investigates Elon Musk’s X Over Possible Social Media Law Breaches

Elon Musk, Chief Executive Officer of SpaceX and Tesla and owner of X, is funding the formation of a new school in Austin, Texas. Gonzalo Fuentes/Reuters

The European Union is looking into whether Elon Musk’s online platform X breached tough new social media regulations in the first such investigation since the rules designed to make online content less toxic took effect.

“Today we open formal infringement proceedings against @X” under the Digital Services Act, European Commissioner Thierry Breton said in a post on the platform Monday.

“The Commission will now investigate X’s systems and policies related to certain suspected infringements,” spokesman Johannes Bahrke told a press briefing in Brussels. “It does not prejudge the outcome of the investigation.”

The investigation will look into whether X, formerly known as Twitter, failed to do enough to curb the spread of illegal content, and whether its measures to combat “information manipulation,” especially through its Community Notes feature, were effective.

The EU will also examine whether X was transparent enough with researchers and will look into suspicions that its user interface, including for its blue check subscription service, has a “deceptive design.”

“X remains committed to complying with the Digital Services Act, and is cooperating with the regulatory process,” the company said in a prepared statement. “It is important that this process remains free of political influence and follows the law. X is focused on creating a safe and inclusive environment for all users on our platform, while protecting freedom of expression, and we will continue to work tirelessly towards this goal.”

A raft of big tech companies faced stricter scrutiny after the EU’s Digital Services Act took effect earlier this year, threatening penalties of up to 6% of their global revenue — which could amount to billions — or even a ban from the EU.

The DSA is a set of far-reaching rules designed to keep users safe online and stop the spread of harmful content that’s either illegal, such as child sexual abuse or terrorism content, or violates a platform’s terms of service, such as promotion of genocide or anorexia.

The EU has already called out X as the worst place online for fake news, and officials have exhorted owner Musk, who bought the platform a year ago, to do more to clean it up. The European Commission quizzed X over its handling of hate speech, misinformation and violent terrorist content related to the Israel-Hamas war after the conflict erupted.

AP



TECH

Artificial Intelligence Now Decides Who Lives and Dies

The armed drones carry relatively simple artificial intelligence (AI) that can identify human forms and target them with missiles. AP Photo

Let’s start with the uncomfortable truth. We have lost control of artificial intelligence. This shouldn’t be too surprising, considering we likely never had any control over it. The maelstrom at OpenAI over the abrupt dismissal of its chief executive, Sam Altman, raised accountability questions inside one of the world’s most powerful AI companies. Yet even before the boardroom drama, our understanding of how AI is created and used was limited.

Lawmakers worldwide are struggling to keep up with the pace of AI innovation and thus can’t provide basic frameworks of regulations and oversight. The conflict between Israel and Hamas in Gaza has raised the stakes even further. AI systems are currently being used to determine who lives and dies in Gaza. The results, as anyone can see, are terrifying.

In a wide-ranging investigation carried out by the Israeli publication +972 Magazine, journalist Yuval Abraham spoke with several current and former officials about the Israeli military’s advanced AI war program, called “the Gospel.” According to the officials, the Gospel produces “AI-generated targeting recommendations through the rapid and automatic extraction of intelligence.” Recommendations are matched with identifications carried out by a human soldier. The system relies on a matrix of data points with checkered misidentification histories, such as facial recognition technology.

The result is the production of “military” targets in Gaza at an astonishingly high rate. In previous Israeli operations, the military was slowed by a lack of targets because humans took time to identify targets and determine the potential for civilian casualties. The Gospel has sped up this process with dramatic effect.

Thanks to the Gospel, Israeli fighter jets can’t keep up with the number of targets these automated systems provide. The sheer gravity of the death toll over the past six weeks of fighting speaks to the deadly nature of this new technology of war. According to Gaza officials, more than 17,000 people have been killed, including at least 6,000 children. Citing several reports, American journalist Nicholas Kristof said that “a woman or child has been killed on average about every seven minutes around the clock since the war began in Gaza.”

“Look at the physical landscape of Gaza,” Richard Moyes, a researcher who heads Article 36, a group that campaigns to reduce harm from weapons, told the Guardian. “We’re seeing the widespread flattening of an urban area with heavy explosive weapons, so to claim there’s precision and narrowness of force being exerted is not borne out by the facts.”

Militaries around the world with similar AI capabilities are closely watching Israel’s assault on Gaza. The lessons learned in Gaza will be used to refine other AI platforms for use in future conflicts. The genie is out of the bottle. The automated war of the future will use computer programs to decide who lives and who dies.

While Israel continues to pound Gaza with AI-directed missiles, governments and regulators worldwide are struggling to keep up with the pace of AI innovation taking place in private companies. Lawmakers and regulators simply cannot keep pace with the programs being created.

The New York Times notes that this gap has been compounded by “an AI knowledge deficit in governments, labyrinthine bureaucracies, and fears that too many rules may inadvertently limit the technology’s benefits.” The net result is that AI companies can develop with little or no oversight. This situation is so dramatic that we don’t even know what these companies are working on.

Consider the fiasco over the management of OpenAI, the company behind the popular AI platform ChatGPT. When CEO Sam Altman was unexpectedly fired, the internet rumor mill began fixating on unconfirmed reports that OpenAI had developed a secret and mighty AI that could change the world in unforeseen ways. Internal disagreement over its usage led to a leadership crisis at the company.

We might never know if this rumor is true, but given the trajectory of AI and the fact that we cannot understand what OpenAI is doing, it seems plausible. The general public and lawmakers can’t get a straight answer about the potential of a super-powerful AI platform, and that is the problem.

Israel’s Gospel and the chaos at OpenAI mark a turning point in AI. It’s time to move beyond the hollow elevator pitches that AI will deliver a brave new world. AI might help humanity achieve new goals, but it won’t be a force for good if it is developed in the shadows and used to kill people on battlefields. Regulators and lawmakers can’t keep up with the pace of the technology and don’t have the tools to practice sound oversight.

While powerful governments around the world watch Israel test AI algorithms on Palestinians, we can’t harbor false hopes that this technology will only be used for good. Given the failure of our regulators to establish guardrails on the technology, we can hope that the narrow interests of consumer capitalism will serve as a governor on the true reach of AI to transform society. It’s a vain hope, but it is likely all we have at this stage.

Joseph Dana is a writer based in South Africa and the Middle East. He has reported from Jerusalem, Ramallah, Cairo, Istanbul, and Abu Dhabi. He was formerly editor-in-chief of emerge85, a media project based in Abu Dhabi exploring change in emerging markets. Twitter: @ibnezra


TECH

The AI Regulation Battle Is Only Just Beginning

Outmanned, and out-resourced by Russia, Ukraine is hoping smart use of artificial intelligence will turn the tide in the war, both on the battlefield and on the messaging front. Getty Images

Given the pace of development in artificial intelligence in recent years, it’s remarkable that the United States has only just released clear regulations concerning the technology. At the end of October, President Joe Biden issued an executive order to ensure “safe, secure, and trustworthy artificial intelligence.” The directive sets out new standards for all matters of AI safety, including new privacy safeguards designed to protect consumers. While Congress has yet to enact comprehensive laws dictating the use and development of AI, the executive order is a much-needed step toward sensible regulation of this rapidly developing technology.

Casual observers might be surprised to learn that the US didn’t already have any such AI protections on the books. A gathering of 28 governments for the AI Safety Summit in the UK last week revealed that the rest of the world is even further behind. Held at the historic former spy base Bletchley Park, the summit produced an agreement to work together on safety research to avert the “catastrophic harm” that could come from AI. The declaration, whose signatories include the US, China, the EU, Saudi Arabia and the United Arab Emirates, was a rare diplomatic coup for the UK but light on detail. The US used the event to brandish its own new guardrails as something that the rest of the world should follow.

You don’t need a degree in computing to understand that AI is a crucial part of one of the most profound technological shifts humanity has ever experienced. AI has the power to change how we think and educate ourselves. It can change how we work and make certain jobs redundant. AI systems require massive amounts of data generally collected on the open internet to deliver these results. Chances are that some of your data is being fed into large language models that power AI platforms like ChatGPT. 

This is just the tip of the iceberg. AI is currently being deployed in Israel’s operations in Gaza to help make decisions of life and death. Israel’s Military Intelligence Directorate said the military uses AI and other “automated” tools to “produce reliable targets quickly and accurately.” One unnamed senior officer said the new AI-powered tools are being used for the “first time to immediately provide ground forces in the Gaza Strip with updated information on targets to strike.”

This is a grave escalation in the use of AI, not just for Palestinians but for the international community. The technology being tested in Gaza will almost certainly be exported as part of Israel’s large and powerful weapons technology sector. Put simply, the AI algorithms used to attack Palestinian targets could soon crop up in other conflicts from Africa to South America. 

Biden’s executive order specifically addresses issues related to AI safety, consumer protection, and privacy. The order requires new safety assessments of new and existing AI platforms, equity and civil rights guidance, and research on AI’s impact on the labor market. Some AI companies will now be required to share safety test results with the US government. The Commerce Department has been directed to create guidance for AI watermarking and a cybersecurity program to develop AI tools that help identify flaws in critical software.

While the US and other Western countries have been slow to draft comprehensive AI regulations, there has been some movement in recent years. Earlier this year, the National Institute of Standards and Technology (NIST) outlined a comprehensive AI risk management framework. The document became the basis for the Biden administration’s executive order. Critically, the Biden administration has empowered the Commerce Department, which houses the NIST, to help implement aspects of the order. 

The challenge will now be securing buy-in from leading American technology companies. Without their cooperation and a legal framework to punish companies that don’t follow the rules, Biden’s order won’t amount to much.

There is still a lot of work to be done. Technology companies have largely been able to develop with little oversight over the past two decades. This is partially due to the interconnected world of tech, where companies have created new products or services outside the US. Amazon’s groundbreaking AWS cloud hosting technology, for example, was largely created and developed in Cape Town, South Africa, far from the reach of American regulators.

With honest buy-in from leading companies, the Biden administration could seek more comprehensive laws and regulations. Direct government involvement in technology always runs the risk of stifling innovation. Yet, there is a clear opportunity for smaller countries with knowledge economies to step in. Countries like Estonia and the UAE that have invested in their knowledge economies and have small populations (and regulatory environments) can follow Biden’s lead with AI safeguards. This would have a powerful effect in cities like Dubai, where multinational tech companies have set up regional offices. Because there is less red tape in these smaller countries, AI regulations can be pushed through quickly and, perhaps more importantly, amended if they stifle development too aggressively.

Given the hyper-connected world of technology development, the international community can’t wait for larger countries or blocs like the United States and the European Union to push through legislation first. Instead, new markets that have their own tech economies to consider should push ahead with regulations that work for their needs.

The development of AI technology is happening at a remarkable pace. Because it is so essential to the overall technology sector, we don’t have the luxury of waiting for world leaders to act first. It’s time to lead by example, and AI regulations are an ideal place to start.

Joseph Dana is a writer based in South Africa and the Middle East. He has reported from Jerusalem, Ramallah, Cairo, Istanbul, and Abu Dhabi. He was formerly editor-in-chief of emerge85, a media project based in Abu Dhabi exploring change in emerging markets. Twitter: @ibnezra


TECH

iFFALCON Enters Indonesian Market with Innovative Smart TVs

iFFALCON e-commerce representatives and Kevien Willieady, iFFALCON marketing representative, believe that iFFALCON Smart TV will answer the lifestyle needs of young consumers in Indonesia. Photo credit: Achmad Marendes CBCOMM

iFFALCON, a global smart TV brand, has entered the Indonesian market with the vision of “bringing an endless experience to young people worldwide.” The launch event, called the iFFALCON Offline Grand Launch, was attended by several celebrities and influencers, such as Raditya Dika, GadgetBox, Joerdy S, and Riyuka Bunga. The company showcased its innovative smart TV features during the event.

The iFFALCON smart TV is designed to create a new and unique lifestyle for the younger generation, and it has already sold three million units in 16 countries worldwide, including the UK, France, Italy, Spain, Australia, Russia, India, Pakistan, Vietnam, Singapore, Japan, and others.

iFFALCON Product Manager Kevien Willieady spoke at the grand launch at The Breeze BSD (08/04) and assured attendees that iFFALCON would display the best products that meet the needs of young people for advanced audio-visual equipment. According to Kevien Willieady, “iFFALCON makes it easier to access various modern programs and applications such as online streaming services, Google Play, and YouTube with one command using the iFFALCON S52 Smart TV’s voice control remote control.”


iFFALCON also launched two premium products during the Grand Launch event: the iFFALCON S52 Series, billed as its best Android TV, and the iFFALCON U62 Series, billed as its best Google TV for young people. The S52 Series is available in three sizes — 32, 40, and 43 inches — with two resolution options: HD for the 32-inch model and FHD for the 40- and 43-inch models. The S52 runs Android TV as its operating system, which offers a comprehensive and up-to-date application selection, and adds a voice control feature that lets users give commands to the TV easily.

For those who want higher resolutions, the U62 Series is equipped with 4K resolution in all sizes, 43 inches, 50 inches, and 55 inches, and has a higher color depth of up to 1.07 billion colors, as well as supporting HDR10 content. The U62 also has HDMI 2.1, which is suitable for gamers to enjoy all games. The U62 Series also uses the latest version of Android TV called Google TV, which has a higher response rate and a fresher user interface.

Despite being a new brand in Indonesia, iFFALCON’s product quality is unquestionable, proven by its global sales of over three million units and numerous positive reviews on various foreign forums. The company has also provided more than 150 service centers across Indonesia for user convenience. iFFALCON held its first Grand Launch event on the Shopee platform, the largest market share platform in Indonesia, with the theme “Where Infinity Begins,” to introduce itself to the Indonesian market. Daniel Minardi, Director of Brand Management and Digital Products at Shopee, said that Shopee is suitable for increasing brand awareness.


TECH

Identity crisis: The challenges of naming the dead in the wake of mass disaster

The huge death toll in the Türkiye-Syria earthquakes is now approaching 50,000 people – the worst "event" in the region in 100 years, according to the United Nations, and among the worst earthquakes anywhere this century.

Every natural disaster – fire, flood, avalanche, volcano, tsunami, cyclone – brings its own extraordinary challenges in rescue, but also in trying to name the dead. This is called disaster victim identification, or DVI, a topic that the head of Monash’s Department of Forensic Medicine, Professor Richard Bassed, specialises in. Professor Bassed is also deputy director of the Victorian Institute of Forensic Medicine (VIFM). He says within five years, funding permitting, a fledgling project with the Department of Defence using AI facial recognition to better identify the dead could be up and running.

“Very early stages,” he says, “but we are thinking that we might be able to identify quite large numbers of people, and thus reduce the cost and time required to identify the remainder.”

We’ll return to that. First, Türkiye-Syria, and the sheer scale of it, in a highly earthquake-prone part of the world.

“Remember that in an earthquake like this, you lose your infrastructure,” Professor Bassed says. “Hospitals damaged, mortuaries damaged, probably no power and water. There’ll be massive problems in gathering the deceased, recovering them all, and there’ll also be massive problems in working out where to take them to. You need to have one central location where all the deceased people can be taken to, or a few, if they’re widely spaced geographically.”

Identifying the deceased

The next step is figuring out who is missing and figuring out the names of the dead.

“If a whole family is dead under rubble from an apartment, and no one knows them very well, how are you going to even know they’re missing? It’s about trying to get an accurate toll of the number of missing people, and then collecting all the ante-mortem information you can about the deceased to try and identify them, too.”

Both Türkiye and Syria are member countries of Interpol, which designed and implemented the standard operating procedure for disaster victim identification. It has four stages:

  • Recovering bodies and gathering any identifying information at the scene
  • Examining human remains in the morgue, looking at teeth, fingerprints, DNA and distinguishing features such as tattoos
  • The ante-mortem phase: gathering dental and medical records and DNA from family
  • A “reconciliation” phase of comparing the above to identify victims.

But current news reports say bodies are being buried in mass graves on site in Türkiye-Syria, many without being formally identified, with a shortage of forensic investigators on the ground.

An Australian study published in Forensic Science, Medicine and Pathology journal after the Bali bombings and Black Saturday bushfires states that in the aftermath of a mass disaster, a “critical” issue is:

“… speedy and accurate identification of the dead with repatriation to their country of origin. An essential requirement for this process is the establishment of effective and functional mortuaries to enable systematic processing of the victims with appropriate documentation and quality-control processes to ensure that identifications are correct and that processes are capable of audit.”

Professor Bassed says that after a mass disaster, with infrastructure levelled and perhaps hundreds of thousands dead, a confronting question can be asked by governments – “a cold calculus of cost-benefit analysis,” he says, “of ‘Are you better doing things for the living, repairing houses and infrastructure, or do you spend money on a massive recovery and identification process for the deceased?'”

Complex geopolitics can also prevent disaster victim identification.

“The war graves in Iraq and Iran and Syria and everywhere else? People aren’t trying to do anything there, partly because of the political problems in some of those countries, but also because of the huge, huge cost and effort to do it. There’s not enough expertise in the world. When they did Srebrenica, after the Balkans War, the mass-fatality exhumations and identifications took them 10 years to identify 30,000 people. So imagine how long it takes to identify 300,000.”

Utilising facial recognition technology

In the reasonably near future things might be different, which brings us back to the VIFM research collaboration with the Department of Defence and its Defence Science and Technology Group.

The aim is to test the effectiveness of current facial recognition technologies to help identify the deceased.

“At the moment we’re comparing commercial facial recognition systems and how successful they are in recognising the dead. They’re deceased people who are either freshly deceased, so they look the same as they did when they were alive, all the way through to really traumatised or decomposed people. We think what’s going to happen is that we’re going to need to either tweak a current algorithm or create our own facial recognition algorithm that will work reliably with photographs of them when the body is found.”

“Then you can start comparing the photos while alive with the post-mortem images of the deceased through a machine-learning facial recognition algorithm. It won’t work in bushfires as well, because there are often no facial features left.”

Another promising VIFM project involves getting full-body CT scans of every Victorian body that goes to the institute. About 100,000 have already been collected since 2005. The scan shows the bones, but it also shows how a face sits on the bones. The idea is to build an algorithm that can rebuild a face on an unknown skull, and can even be combined with DNA to build hair colour, eye colour or, for example, nose shape.

“When I first started in this business,” says Professor Bassed, “it was a plaster cast of the skull, and you stick pins in it to where you thought the depth of the tissue was over bony points, and you build a face with clay. There was no science in it.”

VIFM is also working on a project with Monash’s Faculty of Information Technology – featured here by the ABC – developing “virtual” autopsies using an augmented reality headset and 3D visualised bodies, reducing the need for invasive physical autopsies.
