https://www.technologyreview.com

  • The hard lessons of Harvard’s failed geoengineering experiment | MIT Technology Review
    https://www.technologyreview.com/2024/04/04/1090626/the-hard-lessons-of-harvards-failed-geoengineering-experiment/?truid=a497ecb44646822921c70e7e051f7f1a

    In late March of 2017, at a small summit in Washington, DC, two Harvard professors, David Keith and Frank Keutsch, laid out plans to conduct what would have been the first solar geoengineering experiment in the stratosphere.

    Instead, it became the focal point of a fierce public debate over whether it’s okay to research such a controversial topic at all.

    The basic concept behind solar geoengineering is that by spraying certain particles high above the planet, humans could reflect some amount of sunlight back into space as a means of counteracting climate change.

    The Harvard researchers hoped to launch a high-altitude balloon, tethered to a gondola equipped with propellers and sensors, from a site in Tucson, Arizona, as early as the following year. After initial equipment tests, the plan was to use the aircraft to spray a few kilograms of material about 20 kilometers (12.4 miles) above Earth and then fly back through the plume to measure how reflective the particles were, how readily they dispersed, and other variables.

    But the initial launch didn’t happen the following year, nor the next, the next, or the next—not in Tucson, nor at a subsequently announced site in Sweden. Complications with balloon vendors, the onset of the covid pandemic, and challenges in finalizing decisions between the team, its advisory committee, and other parties at Harvard kept delaying the project—and then fervent critiques from environmental groups, a Northern European Indigenous organization, and other opponents finally scuttled the team’s plans.

    Critics, including some climate scientists, have argued that an intervention that could tweak the entire planet’s climate system is too dangerous to study in the real world, because it’s too dangerous to ever use. They fear that deploying such a powerful tool would inevitably cause unpredictable and dangerous side effects, and that the world’s countries could never work together to use it in a safe, equitable, and responsible way.

    These opponents believe that even discussing and researching the possibility of such climate interventions eases pressures to rapidly cut greenhouse-gas emissions and increases the likelihood that a rogue actor or solitary nation will one day begin spraying materials into the stratosphere without any broader consensus. Unilateral use of the tool, with its potentially calamitous consequences for some regions, could set nations on a collision course toward violent conflicts.

    Indeed, there are numerous indicators of growing interest in researching this field and providing funding for it. As noted, the US government is developing a research program. The Environmental Defense Fund is considering supporting scientists in the area and recently held a meeting to discuss guardrails that should govern such work. And a number of major philanthropies that haven’t supported the field in the past are in advanced discussions to provide funding to research groups, sources tell MIT Technology Review.

    Meanwhile, under Keith, the University of Chicago is working to hire 10 faculty researchers in this area.

    He says he wouldn’t look to lead an outdoor experiment himself at this point, but he does hope that people working with him at the Climate Systems Engineering Initiative would, if it could offer insights into the scientific questions they’re exploring.

    “I absolutely want to see experiments happen from the University of Chicago,” he says.

    #Geoengineering #Solar_engineering #Le_retour

  • How a tiny Pacific Island became the global capital of cybercrime
    https://www.technologyreview.com/2023/11/02/1082798/tiny-pacific-island-global-capital-cybercrime

    2.11.2023 by Jacob Juda - Despite having a population of just 1,400, until recently, Tokelau’s .tk domain had more users than any other country’s. Here’s why.

    Tokelau, a necklace of three isolated atolls strung out across the Pacific, is so remote that it was the last place on Earth to be connected to the telephone—only in 1997.

    Just three years later, the islands received a fax with an unlikely business proposal that would change everything.

    It was from an early internet entrepreneur from Amsterdam, named Joost Zuurbier. He wanted to manage Tokelau’s country-code top-level domain, or ccTLD—the short string of characters that is tacked onto the end of a URL.

    Up until that moment, Tokelau, formally a territory of New Zealand, didn’t even know it had been assigned a ccTLD. “We discovered the .tk,” remembered Aukusitino Vitale, who at the time was general manager of Teletok, Tokelau’s sole telecom operator.

    Zuurbier said “that he would pay Tokelau a certain amount of money and that Tokelau would allow the domain for his use,” remembers Vitale. It was all a bit of a surprise—but striking a deal with Zuurbier felt like a win-win for Tokelau, which lacked the resources to run its own domain. In the model pioneered by Zuurbier and his company, now named Freenom, users could register a free domain name for a year, in exchange for having advertisements hosted on their websites. If they wanted to get rid of ads, or to keep their website active in the long term, they could pay a fee.

    In the succeeding years, tiny Tokelau became an unlikely internet giant—but not in the way it may have hoped. Until recently, its .tk domain had more users than any other country’s: a staggering 25 million. But only one website actually from Tokelau has ever been registered with the domain: the page for Teletok. Nearly all the others that have used .tk have been spammers, phishers, and cybercriminals.

    Everyone online has come across a .tk, even if they didn’t realize it. Because .tk addresses were offered for free, unlike most others, Tokelau quickly became the unwitting host to the dark underworld by providing a never-ending supply of domain names that could be weaponized against internet users. Scammers began using .tk websites to do everything from harvesting passwords and payment information to displaying pop-up ads or delivering malware.

    Many experts say that this was inevitable. “The model of giving out free domains just doesn’t work,” says John Levine, a leading expert on cybercrime. “Criminals will take the free ones, throw it away, and take more free ones.”

    Tokelau, which for years was at best only vaguely aware of what was going on with .tk, has ended up tarnished. In tech-savvy circles, many painted Tokelauans with the same brush as their domain’s users or suggested that they were earning handsomely from the .tk disaster. It is hard to quantify the long-term damage to Tokelau, but reputations have an outsize effect for tiny island nations, where even a few thousand dollars’ worth of investment can go far. Now the territory is desperately trying to shake its reputation as the global capital of spam and to finally clean up .tk. Its international standing, and even its sovereignty, may depend on it.
    Meeting modernity

    To understand how we got here, you have to go back to the chaotic early years of the internet. In the late ’90s, Tokelau became the second-smallest place to be assigned a domain by the Internet Corporation for Assigned Names and Numbers, or ICANN, a group tasked with maintaining the global internet.

    These domains are the address books that make the internet navigable to its users. While you can create a website without registering a domain name for it, it would be like building a house without an easily findable postal address. Many domains are familiar. The UK has .uk, France .fr, and New Zealand .nz. There are also domains that are not tied to specific countries, such as .com and .net.

    Most countries’ domains are run by low-profile foundations, government agencies, or domestic telecom companies, which usually charge a few dollars to register a domain name. They usually also require some information about who is registering and keep tabs to prevent abuse.

    But Tokelau, with just 1,400 inhabitants, had a problem: it simply didn’t have the money or know-how to run its own domain, explains Tealofi Enosa, who was the head of Teletok for a decade before stepping down in July 2023. “It would not be easy for Tokelau to try and manage or build the local infrastructure,” Enosa says. “The best arrangement is for someone else from outside to manage it, trade it, and bring in money from it.”

    This is precisely what Zuurbier, the businessman from Amsterdam, wanted to do.

    Zuurbier had come across Tokelau while chasing the internet’s next big idea. He was convinced that just as people had adopted free email addresses by the millions, the natural next step was for them to have their own free websites. Zuurbier intended to put advertisements on those sites, which could be removed for a small fee. All he needed to turn this billion-dollar idea into reality was a place with a ccTLD that had not yet found a registrar.

    Tokelau—the last corner of the British Empire to be informed about the outbreak of World War I, where regular shortwave radio wasn’t available until the ’70s and most people were yet to even see a website—was the perfect partner.

    Representatives from Tokelau and Zuurbier met in Hawaii in 2001 and put pen to paper on a deal. Quickly, .tk domain names began to pop up as people took advantage of the opportunity to create websites for free. He still had to convince ICANN, which oversees the domain name system, that Tokelau couldn’t host its own servers—one of the criteria for ccTLDs. But Tokelau—which switched off its power at midnight—would still need a reliable internet connection to keep in touch. In 2003 Zuurbier took a grueling 36-hour boat ride from Samoa to Tokelau to install internet routers that he had bought for $50 on eBay.

    Gone was the unreliable dial-up. Tokelau had met modernity. “He provided all the equipment, got all the three atolls connected up, and then he also provided some funding which I used to share with the community,” says Vitale, who established internet cafés that could be used for free by anybody from Tokelau’s four hamlets.

    For the first time, thousands of Tokelauans in New Zealand were able to easily connect with home. “What was important for Tokelau was that we were getting some money that could help the villages,” says Vitale. Many of the initial sign-ups on .tk were completely innocuous individuals wanting to blog about thoughts and holidays, as well as gaming communities and small businesses.


    Zuurbier sent Teletok regular reports about .tk’s growth, and they indicated that the free-domain model was working better than anybody expected. Tiny Tokelau, which was being paid a small cut of the profits Zuurbier was making, was going global.

    “We were hearing how successful .tk was. We were bigger than China,” says Vitale. “We were surprised, but we didn’t know what it meant for Tokelau. What was more meaningful at the time was that we were getting money to help the villages. We didn’t know about the other side of it then.”

    As the decade wore on, however, it looked to Vitale as if things were beginning to blow off course. “We went in blind,” he says. “We didn’t know how popular it would be.”
    Things fall apart

    It took until the late 2000s for Vitale to realize that something had gone badly wrong. After problems first arose, Zuurbier invited ministers and advisors from Tokelau to the Netherlands, paid for their flights, and explained the business’s nuts and bolts in an effort to reassure them. They went to watch Samoa play at the Rugby World Cup in France.

    “He [Zuurbier] appeared to be a really nice person,” Vitale remembers. “There was all this nice stuff that felt homely, warm fuzzies.” .Tk had hit the milestone of 1 million domain users.

    But soon after this trip, he says, Zuurbier started falling behind on scheduled payments to Tokelau worth hundreds of thousands of dollars. (MIT Technology Review requested an interview with Zuurbier. He initially accepted but subsequently did not answer the phone or respond to messages.)

    Meanwhile, Vitale had begun receiving complaints from concerned members of the “internet community.” He and his peers started to become aware that criminals and other questionable figures had cottoned onto the benefits that registering free domains could bring—providing an almost unlimited supply of websites that could be registered with virtual anonymity.

    “It was obvious from the start that this was not going to turn out well,” says Levine, coauthor of The Internet for Dummies. “The only people who want those domains are crooks.”

    Levine says that .tk had started attracting unsavory characters almost immediately. “The cost of the domain name is tiny compared to everything else that you need to do [to set up a website], so unless you’re doing something weird that actually needs lots of domains—which usually means criminals—then the actual value in free domains is insignificant,” he says.

    What started as techies complaining to Vitale about spamming, malware, and phishing on .tk domains soon turned into more worrisome complaints from the New Zealand administrator tasked with overseeing Tokelau, asking him whether he was aware of who .tk’s users were. Allegations surfaced that .tk websites were being used for pornography. Researchers had found jihadists and the Ku Klux Klan registering .tk websites to promote extremism. Chinese state-backed hackers had been found using .tk websites for espionage campaigns.

    “Satanic stuff” is how Vitale describes it: “There were some activities that were not really aligned with our culture and our Christianity, so that didn’t work very well for Tokelau.” With Zuurbier not replying to worried emails, Vitale moved to unplug him. He opened negotiations with Internet NZ, the registry that runs New Zealand’s squeaky-clean domain, about how Tokelau might be able to wiggle out of its arrangement. He didn’t manage to get an answer before he moved on from Teletok.

    His successor, Enosa, tried to set the relationship on a new footing and signed new deals with Zuurbier on the understanding that he would clean up .tk. However, that never happened. One of Enosa’s final acts as general manager at Teletok, in the summer of 2023, was to reopen negotiations with Internet NZ about how Tokelau might be able to extricate itself from the deal once and for all.

    Meanwhile, most of Tokelau’s residents weren’t even aware of what was happening. Elena Pasilio, a journalist, saw firsthand how much this was hurting her home. When she was studying in New Zealand a few years ago, people—knowing that she was Tokelauan—started to tag her on social media posts complaining about .tk.

    At first, she felt confused; it took time before she even realized that .tk meant Tokelau. “I was really surprised by how many users it had, but then I realized that a lot of people were using .tk to make dodgy websites, and then I felt embarrassed. I was embarrassed because it had our name on it,” Pasilio explains. “It has got our name tangled up there with crimes that people here would not even begin to understand.”

    There is a sense from both Vitale and Enosa that Zuurbier cared little as Tokelau’s reputation was dragged through the mud. “I would argue with Joost,” Enosa says, adding that he would remind him he was the custodian for a legal asset that belonged to Tokelau alone. According to Enosa, he would shoot back: “I built this infrastructure from my own pocket. I spent millions of dollars building it. Do you think that was easy? Do you think that Tokelau can build this kind of infrastructure itself?”

    “I said: ‘Okay. Understood,’” Enosa recalls. “I understood how a white man looks at it. You know? This is how white men look at things. I understand that.”
    Digital colonialism

    What has happened to Tokelau is not unique. The domains of small islands across the Pacific are cited in numerous stories either celebrating dumb luck or complaining of massive abuse.

    Tuvalu has managed to turn .tv into approximately 10% of its annual GDP. Micronesia’s .fm has been pushed heavily at radio stations and podcasters. Tonga’s .to has been favored by torrent and illegal streaming websites. Anguilla, in the Caribbean, is heavily marketing its .ai at technology startups.

    But these success stories seem to be the exception. In 2016, the Anti-Phishing Working Group found that alongside .tk and .com, the Australian Cocos Islands (.cc) and Palau (.pw) together represented 75% of all malicious domain registrations. They had been flooded by phishers attacking Chinese financial institutions. The Cocos Islands made headlines in Australia when websites allegedly hosting child sexual abuse images were recently found on its domain.

    Those domains whose names—by linguistic luck—seemed to mean something tended to attract better managers. Sharks seem to have circled around those that did not, or had a market that was less clear.

    While the abuse of Pacific Islands’ domains has waxed and waned over the years, the islands’ tiny size means that even small associations with crime can have damaging consequences.

    “There is a problem in Polynesia,” says Pär Brumark, a Swede who represents the Pacific island of Niue abroad. “You had these internet cowboys running around taking domains everywhere.”

    Niue lost control over the domain .nu after it was “stolen” by an American in the late 1990s, Brumark says. Its management was given to the Swedish Internet Foundation—which manages Sweden’s native .se—in a “shady deal” in 2013, he claims. .Nu has been wildly popular in Sweden, as it translates directly to “now.” Niue, which is also linked to New Zealand, is now fighting a David-versus-Goliath battle in the Swedish courts. It is seeking as much as $20 million in lost revenue—almost one year’s worth of Niue’s annual GDP.

    “Digital colonialism,” claims Brumark. “They exploit resources from another country without giving anything back. They have never spoken to the government. They have no permissions. They exploit. Colonialism to me is if you take resources from a country that you do not have the permission to take.”

    But now there may finally be some accountability—at least in the case of Zuurbier.

    In December 2022, courts in the Netherlands found in favor of an investor suing Freenom, the company that managed .tk and four other domains—those of Gabon, Equatorial Guinea, the Central African Republic, and Mali—that were subsequently added to the model it pioneered. The courts found that Freenom had fallen foul of various reporting rules and appointed a supervisory director.

    And in March of this year, Meta, which owns Facebook, Instagram, and WhatsApp, also sued Freenom for damages, claiming that sites hosted on .tk and the four African domains were engaging in cybersquatting, phishing, and trademark infringement. Meta provided examples of websites that appeared to be registered at .tk with the express purpose of deceiving users, such as faceb00k.tk, whatsaap.tk, and Instaqram.tk.

    In an interview with the Dutch newspaper NRC, Zuurbier denied Meta’s allegations about the “proliferation of cybercrime.” But the Cybercrime Information Center recently reported that “in past years Freenom domains were used for 14% of all phishing attacks worldwide, and Freenom was responsible for 60% of the phishing domains reported in all the ccTLDs in November 2022.” Zuurbier says that Freenom distributed an API to over 90 trusted organizations, including Meta, that allowed them to take down offending sites, and that Meta itself stopped using it. But many in the tech industry resent what they see as Freenom shifting the cost of policing its domains onto others.

    As of January 2023, it is no longer possible to register a .tk domain. All four African countries—many thousands of times larger than Tokelau—have broken ties with Freenom. Tokelau, which did not seem aware that there were other countries in the same boat, is still trying to figure out what to do next.

    It now looks as if Freenom is essentially finished as a company. But Enosa doesn’t believe that’ll stop Zuurbier from pursuing more shady schemes. “Joost always wins,” he says.
    Shifting tactics

    Without access to the unlimited pool of free domain names that were available through .tk and the four other Freenom ccTLDs, many cybercrime groups that relied on them are being forced to adapt. Certain scattergun approaches to spamming and phishing are likely to go out of fashion. “Spammers are fairly rational,” explains Levine, the spam expert. “If the spam is cheap and the domains are free, they can afford to send out a lot of spam even though the likelihood of response is lower. If they actually have to pay for the domains, then they are likely to make it a lot more targeted.”

    “Bad things online require a domain name at some point,” says Carel Bitter, head of data at the Spamhaus Project, which tracks malicious activities online. “You need people to go somewhere to fill in their account details. If you can’t get domains for free, you will have to get them somewhere else.” Analysts have noted upticks in malicious use of cheap “new” generic TLDs such as .xyz, .top, and .live, whose reputations have been wrecked by dodgy dealers.

    While other domains may only cost $1, a drop in the ocean for the largest gangs, the fact that they now need to be purchased may limit the damage, says Bitter: “Any cybercrime business that relies on domain names will have some sort of natural limit that determines how much they can spend on domain names.” Others, though, may seek to compromise existing websites with low security.

    It is likely that “basement” operations—so-called “ankle-biters”—will feel the biggest pinch. “What is possible is that the guys that are just doing it as a dabble won’t want to put the money up, but the professionals are not going away,” says Dave Piscitello, director of research activity at the Cybercrime Information Center. “They will go elsewhere. If you are staging a revolution and the cost of a Kalashnikov goes from $150 to $250, you aren’t going to say ‘Forget it.’ It is the business.”
    An existential issue

    The media sometimes reports that Tokelau makes millions from the use of .tk. Zuurbier himself claims on his LinkedIn that his relationship with Tokelau adds over 10% to the atolls’ GDP.

    “Bullshit,” says Enosa when asked. “That’s a lie.”

    Enosa claims that .tk provided a “very small” proportion of Teletok’s income: “It doesn’t give us good money. .Tk was nothing to my revenue.”

    While the arrival of the internet on Tokelau promised to zip information across the Pacific instantaneously, the islands have remained isolated. Even while I was reporting this story, it took weeks to get in touch with Pasilio and other sources there. Interviews were repeatedly delayed because of the price of data packages. Internet in Tokelau is among the most expensive in the world, and NZ$100 (US$60) worth of data can sometimes last only 24 hours at a time. Phone calls to Tokelau from Europe did not connect.

    “I feel sorry for our Tokelau,” Pasilio says. “We have been taken advantage of. I think people would be shocked if they knew what had been going on with .Tk.”

    Even many Tokelau elders had not fully understood the problem, at least until recently.

    There are other, arguably more existential problems the islands need to deal with, including climate change, emigration, and the atolls’ future relationship with New Zealand. “Our islands are already shrinking as it is, with the sea levels rising,” says Pasilio. She says her father tells her about reefs and sand banks that have sunk beneath the Pacific. “They would rather worry about things that they can see physically and that they know more about, rather than fighting back on this .Tk thing,” she says.

    But the issue of the abused .tk domain was recently raised in the General Fono, or Parliament, indicating that the issue had finally broken out of its technical niche and into the wider public.

    Those existential issues facing the islands are not wholly unrelated to .tk. Questions over the future of the domain have arisen at the same time that a debate over Tokelau’s political future has been revived.

    Tokelau is classified by the United Nations as a “non-self-governing territory” under the oversight of the Special Committee on Decolonization. In 2006 and 2007, referenda were held on whether Tokelau would enter “free association” with New Zealand—a possible stepping stone toward eventual independence. A majority voted in favor both times, but not enough of Tokelau’s population did so to meet the required threshold. In May 2022, it was decided that another referendum on Tokelau’s future would be held ahead of the centenary of New Zealand rule in 2025.

    Repairing Tokelau’s devastated international reputation by cleaning up .tk will be a necessity if the atolls are to make any serious bid for sovereignty. Vitale is now the general manager of Tokelau’s government and wants to see its internet domain make a triumphant return to make it clear that the islands are turning a new page.

    “We are building nationhood here,” he explains. “We are on a pathway toward self-determination. We want to use the .tk as a catalyst to promote our nationhood and be proud of it—our domain name and our identity among the internet community.”

    All of Tokelau’s email and website addresses are currently hosted on New Zealand’s .nz. “What does that mean to people? It means that we are in New Zealand,” says Vitale with a sigh. “We should be selling ourselves as being in Tokelau, because .tk is the domain—the identity—for Tokelau.”

    “When you have people coming to knock on your door with attractive packages,” he adds, “you see it as an opportunity you hook onto—not realizing what the consequences will be further down the road.”

    Correction: This story has been updated post-publication as the previous version incorrectly stated that Antigua was the Caribbean island with the .ai domain. It is in fact Anguilla. Our apologies.

    #Tokelau #Pays-Bas #Nouvelle-Zélande #internet

  • Inside the quest to engineer climate-saving “super trees” | MIT Technology Review
    https://www.technologyreview.com/2023/06/08/1074287/inside-the-quest-to-engineer-climate-saving-super-trees

    We don’t know how it works... but we’re going to plant genetically modified trees at the edge of forests anyway!
    Scientific hubris or technological hype... either way, a good short-term business; society will deal with the long-term problems if they ever arise.
    And meanwhile we keep deforesting, mismanaging forests, and destroying the water cycle.

    At Living Carbon, Mellor is trying to design trees that grow faster and grab more carbon than their natural peers, as well as trees that resist rot, keeping that carbon out of the atmosphere. In February, less than four years after he co-founded it, the company made headlines by planting its first “photosynthesis-enhanced” poplar trees in a strip of bottomland forests in Georgia.

    This is a breakthrough, clearly: it’s the first forest in the United States that contains genetically engineered trees. But there’s still much we don’t know. How will these trees affect the rest of the forest? How far will their genes spread? And how good are they, really, at pulling more carbon from the atmosphere?

    Living Carbon has already sold carbon credits for its new forest to individual consumers interested in paying to offset some of their own greenhouse gas emissions. They’re working with larger companies, to which they plan to deliver credits in the coming years. But academics who study forest health and tree photosynthesis question whether the trees will be able to absorb as much carbon as advertised.

    Even Steve Strauss, a prominent tree geneticist at Oregon State University who briefly served on Living Carbon’s scientific advisory board and is conducting field trials for the company, told me in the days before the first planting that the trees might not grow as well as natural poplars. “I’m kind of a little conflicted,” he said, “that they’re going ahead with this—all the public relations and the financing—on something that we don’t know if it works.”

    “One of the things that concerns me is [Living Carbon is] just focusing on carbon acquisition,” says Marjorie Lundgren, a researcher at Lancaster University in the UK who has studied tree species with natural adaptations leading to increased photosynthetic efficiency. She notes that trees need more than just carbon and sunlight to grow; they need water and nitrogen, too. “The reason they have such a high growth rate is because in the lab, you can just super-baby them—you can give them lots of water and fertilizer and everything they need,” she says. “Unless you put resources in, which is time and money, and not great for the environment, either, then you’re not going to have those same outcomes.”

    Living Carbon’s paper acknowledges as much, citing nitrogen as a potential challenge and noting that how the trees move carbon may become a limiting factor. The extra sugars produced through what the company calls “enhanced photosynthesis” must be transported to the right places, something trees haven’t typically evolved to do.

    And of course this runs on the carbon-credit scam.

    Living Carbon funds its plantings—and makes its profits—by selling credits for the extra carbon the trees absorb. Currently, the company is offering “pre-purchases,” in which companies make a commitment to buy a future credit, paying a small portion of the fee up front to help Living Carbon survive long enough to deliver results.


    The company has found that these buyers are more interested in projects with ecosystem benefits, which is why the first project, in Georgia, has become an outlier. There has been a subsequent planting in Ohio; this and all currently planned plantings are not near sawmills or in active timber harvesting regions. Thus, the company does not expect those trees to be harvested.

    Wherever they plant trees—whether atop an old minefield or in a timber-producing forest—Living Carbon will pay the landowner an annual per-acre fee and cover the cost of plant site preparation and planting. At the end of the contract, after 30 or 40 years, the landowner can do whatever they want with the trees. If the trees grow as well as is hoped, Living Carbon assumes that even on timber land, their size would mean they’d be turned into “long-duration wood products,” like lumber for construction, rather than shredded to make pulp or paper.

    Until recently, Living Carbon was also selling small-scale credits to individual consumers. When we spoke in February, Mellor pointed me toward Patch, a software company with a carbon-credit sales platform. The Georgia project was marketed there as “biotech-enhanced reforestation.” The credits were offered as a monthly subscription, at a price of $40 per metric ton of carbon removed.

    When I pressed Mellor for details about how the company calculated this price, given the lack of any solid data on the trees’ performance, he told me something the company had not acknowledged in any public-facing documentation: 95% of the saplings at the Georgia site were not photosynthesis-enhanced. The GE poplar trees were planted in randomized experimental plots, with controls for comparison, and contribute only a small amount to the site’s projected carbon savings. Despite the advertising, then, customers were really paying for a traditional reforestation project with a small experiment tucked inside.

    #OGM #Arbres #Hubris #Mais_quelle_bande_de_cons

  • Making an image with generative AI uses as much energy as charging your phone | MIT Technology Review
    https://www.technologyreview.com/2023/12/01/1084189/making-an-image-with-generative-ai-uses-as-much-energy-as-charging

    This is the first time the carbon emissions caused by using an AI model for different tasks have been calculated.

    (according to a #étude_récente that, for once, doesn’t look like an #étude_à_la_con)

  • Deepfakes of Chinese influencers are livestreaming 24/7 | MIT Technology Review
    https://www.technologyreview.com/2023/09/19/1079832/chinese-ecommerce-deepfakes-livestream-influencers-ai

    Scroll through the livestreaming videos at 4 a.m. on Taobao, China’s most popular e-commerce platform, and you’ll find it weirdly busy. While most people are fast asleep, there are still many diligent streamers presenting products to the cameras and offering discounts in the wee hours.

    But if you take a closer look, you may notice that many of these livestream influencers seem slightly robotic. The movement of their lips largely matches what they are saying, but there are always moments when it looks unnatural.

    These streamers are not real: they are AI-generated clones of the real streamers. As technologies that create realistic avatars, voices, and movements get more sophisticated and affordable, the popularity of these deepfakes has exploded across China’s e-commerce streaming platforms.

    Today, livestreaming is the dominant marketing channel for traditional and digital brands in China. Influencers on Taobao, Douyin, Kuaishou, or other platforms can broker massive deals in a few hours. The top names can sell more than a billion dollars’ worth of goods in one night and gain royalty status just like big movie stars. But at the same time, training livestream hosts, retaining them, and figuring out the technical details of broadcasting comes with a significant cost for smaller brands. It’s much cheaper to automate the job.

    The technology has mostly been known for its problematic use in revenge porn, identity scams, and political misinformation. While there have been attempts to commercialize it in more innocuous ways, it has always remained a novelty. But now, Chinese AI companies have found a new use case that seems to be going quite well.

    Back then, Silicon Intelligence needed 30 minutes of training videos to generate a digital clone that could speak and act like a human. The next year, it was 10 minutes, then three, and now only one minute of video is needed.

    And as the tech has improved, the service has gotten cheaper too. Generating a basic AI clone now costs a customer about 8,000 RMB ($1,100). If the client wants to create a more complicated and capable streamer, the price can go up to several thousand dollars. Beyond the generation itself, that fee also covers a year of maintenance.

    Once the avatar is generated, its mouth and body move in time with the scripted audio. While the scripts were once pre-written by humans, companies are now using large language models to generate them too.

    Now, all the human workers have to do is input basic information such as the name and price of the product being sold, proofread the generated script, and watch the digital influencer go live. A more advanced version of the technology can spot live comments and find matching answers in its database to answer in real time, so it looks as if the AI streamer is actively communicating with the audience. It can even adjust its marketing strategy based on the number of viewers, Sima says.
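
    The report doesn’t spell out how that comment matching works. As a rough illustration only, a naive keyword-lookup version might look like the sketch below; the keywords and canned answers are invented, and the real systems presumably use far more sophisticated retrieval and language models.

    ```python
    # Hedged sketch of comment-to-answer matching for an AI livestream host.
    # Keywords and canned answers are invented; this is not the system described above.

    CANNED_ANSWERS = {
        "price":    "This jacket is 199 RMB during tonight's stream only.",
        "shipping": "Orders ship within 48 hours from our warehouse.",
        "size":     "Sizes run from S to XXL; see the size chart in the listing.",
    }

    KEYWORDS = {
        "price":    ["price", "how much", "cost"],
        "shipping": ["ship", "delivery", "arrive"],
        "size":     ["size", "fit", "small", "large"],
    }

    def respond(comment):
        """Return the first canned answer whose keywords appear in a live comment."""
        text = comment.lower()
        for topic, words in KEYWORDS.items():
            if any(word in text for word in words):
                return CANNED_ANSWERS[topic]
        return None  # no match: fall back to the scripted sales pitch

    print(respond("How much is this one?"))       # price answer
    print(respond("When will my order arrive?"))  # shipping answer
    ```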

    These livestream AI clones are trained on the common scripts and gestures seen in e-commerce videos, says Huang Wei, the director of virtual influencer livestreaming business at the Chinese AI company Xiaoice. The company has a database of nearly a hundred pre-designed movements.

    “For example, [when human streamers say] ‘Welcome to my livestream channel. Move your fingers and hit the follow button,’ they are definitely pointing their finger upward, because that’s where the ‘Follow’ button is on the screen of most mobile livestream apps,” says Huang. Similarly, when streamers introduce a new product, they point down—to the shopping cart, where viewers can find all products. Xiaoice’s AI streamers replicate all these common tricks. “We want to make sure the spoken language and the body language are matching. You don’t want it to be talking about the Follow button while it’s clapping its hands. That would look weird,” she says.

    Spun off from Microsoft Software Technology Center Asia in 2020, Xiaoice has always been focused on creating more human-like AI, particularly avatars that are capable of showing emotions. “Traditional e-commerce sites just feel like a shelf of goods to most customers. It’s cold. In livestreaming, there is more emotional connection between the host and the viewers, and they can introduce the products better,” Huang says.

    After piloting with a few clients last year, Xiaoice officially launched its service of generating under-$1,000 digital clones this year; like Silicon Intelligence, Xiaoice only needs human streamers to provide a one-minute video of themselves.

    And like its competitors, Xiaoice clients can spend more to fine-tune the details. For example, Liu Jianhong, a Chinese sports announcer, made an exquisite clone of himself during the 2022 FIFA World Cup to read out the match results and other relevant news on Douyin.

    A cheap replacement for human streamers

    These generated streamers won’t be able to beat the star e-commerce influencers, Huang says, but they are good enough to replace mid-tier ones. Human creators, including those who used their videos to train their AI clones, are already feeling the squeeze from their digital rivals to some extent. It’s harder to get a job as an e-commerce livestream host this year, and the average salary for livestream hosts in China went down 20% compared to 2022, according to the analytics firm iiMedia Research.

    But companies can use the technology to complement human work, keeping the livestream going during the hours when fewer people are watching and when it’s hard to justify the cost of hiring real streamers.

    That’s already happening. In the post-midnight hours, many of the streaming channels on popular e-commerce platforms like Taobao and JD feature these AI-generated streamers.

    Previous examples have shown that deepfake technologies don’t need to be perfect to deceive viewers. In 2020, a scammer posed as a famous Chinese actor with the aid of crude face-swapping tools and still managed to get thousands of dollars from unsuspecting women who fell in love with his videos.

    “If a company hires 10 livestream hosts, their skill levels are going to vary. Maybe two or three streamers at the top would contribute to 70% to 80% of the total sales,” says Chen Dan, the CEO of Quantum Planet AI, a company that packages technologies like Xiaoice’s and sells them to corporate clients. “A virtual livestream host can replace the rest—six or seven streamers that contribute less and have lower ROI [return on investment] rates. And the costs would come down significantly.”

    Chen says he has witnessed a lot more interest from brands in AI streamers this year, partly because everyone is looking to “降本增效”—lower costs and improve efficiency, the new buzzword among Chinese tech companies as the domestic economy slows down.

    Chen has over 100 clients using Xiaoice’s service now, and these virtual streamers have brokered millions of dollars in sales. One Xiaoice streamer brought in over 10,000 RMB ($1,370) in revenue in just one hour.

    There are still drawbacks, he says. For example, many of his clients are furniture brands, and although the AI is clever enough to speak and use gestures, it can’t really sit on a sofa or lie in a bed, so the streams lack the appeal of real users testing the products.

    The rising popularity of AI-generated livestreams has also caught the attention of video platforms like Douyin, the Chinese version of TikTok—though it’s taking a different approach than other tech giants. It is seemingly more concerned with transparency: it said in a May document that all videos generated by AI should be labeled clearly as such on the platform, and that virtual influencers need to be operated by real humans. The platform has always banned the use of recorded videos as livestreams. AI-generated livestreaming, with no recorded footage but also little real-time human input, straddles the line on that rule.

    The Chinese government has passed several laws in the past two years on synthetic media and generative AI that would apply to their use in e-commerce streaming. But the effects of government and platform regulations remain to be seen, because the technology is still too new to have faced serious enforcement.

    For Silicon Intelligence, its next step is to add “emotional intelligence” to the AI streamers, Sima says: “If there are abusive comments, it will be sad; if the products are selling well, it will be happy.” The company is also working on making AI streamers interact and learn from each other.

    The company has had a fascinating and sort of terrifying goal since its beginning: it wants to create “100,000,000 silicon-based laborers” by 2025. For now, Sima says, the company has generated 400,000 virtual streamers. There’s still a long way to go.

    #Intelligence_artificielle #Médias_de_synthèse #Chine #Streamers
    #Commerce_electronique

  • Behind the painstaking process of creating Chinese computer fonts | MIT Technology Review
    https://www.technologyreview.com/2021/05/31/1025599/history-first-chinese-digital-computer-fonts

    Bruce Rosenblum switched on his Apple II, which rang out a high F note followed by the clatter of the floppy drive. After a string of thock thock keystrokes, the 12-inch Sanyo monitor began to phosphoresce. A green grid appeared, 16 units wide and 16 units tall. This was “Gridmaster,” a program Bruce had cooked up in the programming language BASIC to build one of the world’s first Chinese digital fonts. He was developing the font for an experimental machine called the Sinotype III, which was among the first personal computers to handle Chinese-language input and output.

    At the time, in the late 1970s and early 1980s, there were no personal computers being built in China. So to make a “Chinese” PC, Rosenblum’s team was reprogramming an Apple II to operate in Chinese. His list of tasks was long. He had to program an operating system from scratch, since Apple II’s DOS 3.3 simply wouldn’t allow the inputting and outputting of Chinese-character texts. Likewise, he had to program the Chinese word processor itself, a job he worked on tirelessly for months.
    A photograph of the Sinotype III monitor shows the Gridmaster program and the digitization process of the Chinese character 电 (dian, electricity).

    While Gridmaster may have been a simple program, the task that it would be used to accomplish—creating digital bitmaps of thousands of Chinese characters—posed profound design challenges. In fact, creating the font for Sinotype III—a machine developed by the Graphics Arts Research Foundation (GARF) in Cambridge, Massachusetts—took far longer than programming the computer itself. Without a font, there would be no way to display Chinese characters on screen, or to output them on the machine’s dot-matrix printer.

    For each Chinese character, designers had to make 256 separate decisions, one for each potential pixel in the bitmap. (A bitmap is a way of storing images digitally—whether as a JPEG, GIF, BMP, or other file format—using a grid of pixels that together make up a symbol or an image.) Multiplied across thousands of characters, this amounted to literally hundreds of thousands of decisions in a development process that took more than two years to complete.

    Programming Gridmaster—which in hindsight Rosenblum described to me as “clunky to use, at best”—enabled his father, Louis Rosenblum, and GARF to farm out the responsibility of creating the digital font. Using any Apple II machine and running Gridmaster off a floppy disk, data entry temps could create and save new Chinese character bitmaps remotely. Once these bitmaps were created and stored, the Rosenblums could install them on the Sinotype III by using a second program (also designed by Bruce) that ingested them and their corresponding input codes into the system’s database.
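
    To make concrete what one of those glyphs amounts to in memory, here is a minimal sketch, in Python rather than the original BASIC, of packing a 16-by-16 bitmap into 32 bytes, two bytes per row. It is an illustrative encoding only, not Gridmaster’s or the Sinotype III’s actual storage format, which isn’t described here.

    ```python
    # Minimal sketch: packing a 16x16 character bitmap into 32 bytes (2 bytes per row).
    # Illustrative encoding only, not the actual Gridmaster/Sinotype III format.

    def pack_bitmap(grid):
        """grid: 16 rows of 16 ints (0 or 1). Returns 32 bytes."""
        assert len(grid) == 16 and all(len(row) == 16 for row in grid)
        out = bytearray()
        for row in grid:
            value = 0
            for bit in row:
                value = (value << 1) | (bit & 1)   # leftmost pixel becomes the most significant bit
            out += value.to_bytes(2, "big")        # one 16-pixel row fits in 2 bytes
        return bytes(out)

    def unpack_bitmap(data):
        """Inverse of pack_bitmap: 32 bytes back to a 16x16 grid of 0/1."""
        assert len(data) == 32
        grid = []
        for i in range(16):
            value = int.from_bytes(data[2 * i : 2 * i + 2], "big")
            grid.append([(value >> (15 - col)) & 1 for col in range(16)])
        return grid

    # Each glyph occupies exactly 32 bytes (256 bits), as the article notes.
    glyph = [[0] * 16 for _ in range(16)]
    glyph[7][7] = 1                                # a single "on" pixel
    assert len(pack_bitmap(glyph)) == 32
    assert unpack_bitmap(pack_bitmap(glyph)) == glyph
    ```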

    Sinotype III was never commercially released. Nevertheless, the painstaking work that went into its development—including the development of this bitmap Chinese font—was central to a complex global effort to solve a vexing engineering puzzle: how to equip a computer to handle Chinese, one of the most widely used languages on Earth.
    A photograph of a Sinotype III monitor displaying the Chinese bitmap font.

    At the advent of computing and word processing in the West, engineers and designers determined that a low-resolution digital font for English could be built upon a 5-by-7 bitmap grid—requiring only five bytes of memory per symbol. Storing all 128 low-resolution characters in the American Standard Code for Information Interchange (ASCII), which includes every letter in the English alphabet, the numerals 0 through 9, and common punctuation symbols, required just 640 bytes of memory—a tiny fraction of, for example, the Apple II’s 64 kilobytes of onboard memory.

    But there are tens of thousands of Chinese characters, and a 5-by-7 grid was too small to make them legible. Chinese required a grid of 16 by 16 or larger—i.e., at least 32 bytes of memory (256 bits) per character. Were one to imagine a font containing 70,000 low-resolution Chinese characters, the total memory requirement would exceed two megabytes. Even a font containing only 8,000 of the most common Chinese characters would require approximately 256 kilobytes just to store the bitmaps. That was four times the total memory capacity of most off-the-shelf personal computers in the early 1980s.
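
    As a quick back-of-the-envelope check, the figures above follow directly from the grid sizes (a sketch only; a real font would also need index tables and other overhead):

    ```python
    # Rough arithmetic behind the memory figures quoted above.

    ascii_font = 128 * 5                     # 128 ASCII glyphs at 5 bytes each (5x7 bitmap)
    print(ascii_font)                        # 640 bytes

    bytes_per_hanzi = 16 * 16 // 8           # one 16x16 bitmap = 256 bits = 32 bytes
    print(bytes_per_hanzi)                   # 32

    print(70_000 * bytes_per_hanzi)          # 2,240,000 bytes: over two megabytes
    print(8_000 * bytes_per_hanzi)           # 256,000 bytes: roughly 256 KB
    print(8_000 * bytes_per_hanzi / 64_000)  # about 4x the Apple II's 64 KB of memory
    ```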

    As serious as these memory challenges were, the most taxing problems confronting low-res Chinese font production in the 1970s and 1980s were ones of aesthetics and design. Long before anyone sat down with a program like Gridmaster, the lion’s share of work took place off the computer, using pen, paper, and correction fluid.

    Designers spent years trying to fashion bitmaps that fulfilled the low-memory requirements and preserved a modicum of calligraphic elegance. Among those who created this character set, whether by hand-drawing drafts of bitmaps for specific Chinese characters or digitizing them using Gridmaster, were Lily Huan-Ming Ling (凌焕銘) and Ellen Di Giovanni.
    Draft bitmap drawings of Chinese characters for the Sinotype III font.

    The core problem that designers faced was translating between two radically different ways of writing Chinese: the hand-drawn character, produced with pen or brush, and the bitmap glyph, produced with an array of pixels arranged on two axes. Designers had to decide how (and whether) they were going to try to re-create certain orthographic features of handwritten Chinese, such as entrance strokes, stroke tapering, and exit strokes.

    In the case of the Sinotype III font, the process of designing and digitizing low-resolution Chinese bitmaps was thoroughly documented. One of the most fascinating archival sources from this period is a binder full of grids with hand-drawn hash marks all over them—sketches that would later be digitized into bitmaps for many thousands of Chinese characters. Each of these characters was carefully laid out and, in most cases, edited by Louis Rosenblum and GARF, using correction fluid to erase any “bits” the editor disagreed with. Over top of the initial set of green hash marks, then, a second set of red hash marks indicated the “final” draft. Only then did the work of data entry begin.
    A close-up of a draft bitmap drawing of bei (背, back, rear) showing edits made using correction fluid.

    Given the sheer number of bitmaps that the team needed to design—at least 3,000 (and ideally many more) if the machine had any hopes of fulfilling consumers’ needs—one might assume that the designers looked for ways to streamline their work. One way they could have done this, for example, would have been to duplicate Chinese radicals—the base components of a character—when they appeared in roughly the same location, size, and orientation from one character to another. When producing the many dozens of common Chinese characters containing the “woman radical” (女), for example, the team at GARF could have (and, in theory, should have) created just one standard bitmap, and then replicated it within every character in which that radical appeared.

    No such mechanistic decisions were made, however, as the archival materials show. On the contrary, Louis Rosenblum insisted that designers adjust each of these components—often in nearly imperceptible ways—to ensure they were in harmony with the overall character in which they appeared.

    In the bitmaps for juan (娟, graceful) and mian (娩, to deliver), for example—each of which contains the woman radical—that radical has been changed ever so slightly. In the character juan, the middle section of the woman radical occupies a horizontal span of six pixels, as compared with five pixels in the character mian. At the same time, however, the bottom-right curve of the woman radical extends outward just one pixel further in the character mian, and in the character juan that stroke does not extend at all.
    The bitmap characters for juan (娟, graceful) and mian (娩, to deliver) from the Sinotype III font, recreated by the author.

    Across the entire font, this level of precision was the rule rather than the exception.

    When we juxtapose the draft bitmap drawings against their final forms, we see that more changes have been made. In the draft version of luo (罗, collect, net), for example, the bottom-left stroke extends downward at a perfect 45° angle before tapering into the digitized version of an outstroke. In the final version, however, the curve has been “flattened,” beginning at 45° but then leveling out.
    A comparison of two draft versions of the character luo (罗, collect, net).

    Despite the seemingly small space in which designers had to work, they had to make a staggering number of choices. And every one of these decisions affected every other decision they made for a specific character, since adding even one pixel often changed the overall horizontal and vertical balance.

    The unforgiving size of the grid impinged upon the designers’ work in other, unexpected ways. We see this most clearly in the devilish problem of achieving symmetry. Symmetrical layouts—which abound in Chinese characters—were especially difficult to represent in low-resolution frameworks because, by the rules of mathematics, creating symmetry requires odd-sized spatial zones. Bitmap grids with even dimensions (such as the 16-by-16 grid) made symmetry impossible. GARF managed to achieve symmetry by, in many cases, using only a portion of the overall grid: just a 15-by-15 region within the overall 16-by-16 grid. This reduced the amount of usable space even further.
    Symmetry and asymmetry in the characters shan (山, mountain), zhong (中, middle), ri (日, sun), and tian (田, field).
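
    A tiny sketch (illustrative only, not GARF’s workflow) shows the constraint: a 16-pixel-wide row has no single center column, so a mirror-symmetric stroke cannot be centered on it, while a 15-column sub-grid restores a center.

    ```python
    # Illustrative only: why left-right symmetry wants an odd number of columns.

    def is_horizontally_symmetric(grid, width):
        """True if every row reads the same forwards and backwards
        within the first `width` columns."""
        return all(row[:width] == row[:width][::-1] for row in grid)

    # A one-pixel vertical stroke centered in a 15-column region (column 7 of 0..14):
    glyph = [[1 if col == 7 else 0 for col in range(16)] for _ in range(16)]

    print(is_horizontally_symmetric(glyph, 15))  # True: symmetric about column 7
    print(is_horizontally_symmetric(glyph, 16))  # False: a 16-wide row has no center column
    ```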

    The story becomes even more complex when we begin to compare the bitmap fonts created by different companies or creators for different projects. Consider the water radical (氵) as it appeared in the Sinotype III font (below and on the right), as opposed to another early Chinese font created by H.C. Tien (on the left), a Chinese-American psychotherapist and entrepreneur who experimented with Chinese computing in the 1970s and 1980s.
    A comparison of the water radical (氵) as it appeared in the Sinotype III font (right) versus an early Chinese font created by H.C. Tien (left).

    As minor as the above examples might seem, each represented yet another decision (among thousands) that the GARF design team had to make, whether during the drafting or the digitization phase.

    Low resolution did not stay “low” for long, of course. Computing advances gave rise to ever denser bitmaps, ever faster processing speeds, and ever diminishing costs for memory. In our current age of 4K resolution, retina displays, and more, it may be hard to appreciate the artistry—both aesthetic and technical—that went into the creation of early Chinese bitmap fonts, as limited as they were. But it was problem-solving like this that ultimately made computing, new media, and the internet accessible to one-sixth of the global population.

    Tom Mullaney is a professor of Chinese history at Stanford University, a Guggenheim fellow, and the Kluge Chair in Technology and Society at the Library of Congress. He is the author or lead editor of six books, including The Chinese Typewriter, Your Computer Is on Fire, and the forthcoming The Chinese Computer—the first comprehensive history of Chinese-language computing.

    #Chine #Caractères #Bitmap #Histoire_informatique #Tom_Mullaney

  • How a ubiquitous keyboard app puts hundreds of millions of Chinese users at risk | MIT Technology Review
    https://www.technologyreview.com/2023/08/21/1078207/sogou-keyboard-app-security-loophole/?truid=a497ecb44646822921c70e7e051f7f1a

    For millions of Chinese people, the first software they download on a new laptop or smartphone is always the same: a keyboard app. Yet few of them are aware that it may make everything they type vulnerable to spying eyes.

    Since dozens of Chinese characters can share the same latinized phonetic spelling, the ordinary QWERTY keyboard alone is incredibly inefficient. A smart, localized keyboard app can save a lot of time and frustration by predicting the characters and words a user wants to type. Today, over 800 million Chinese people use third-party keyboard apps on their PCs, laptops, and mobile phones.

    But a recent report by the Citizen Lab, a University of Toronto–affiliated research group focused on technology and security, revealed that Sogou, one of the most popular Chinese keyboard apps, had a massive security loophole.

    “This is an app that handles very sensitive information—specifically, every single thing that you type,” says Jeffrey Knockel, a senior research associate at the Citizen Lab and coauthor of the report. “So we wanted to look into that in greater detail and see if this app is properly encrypting this very sensitive data that it’s sending over the network—or, as we found, is it improperly doing it in a way that eavesdroppers could decipher?”

    Indeed, what he and his colleagues found was that Sogou’s encryption system could be exploited to intercept and decrypt exactly what people were typing, as they were typing it.

    Sogou, which was acquired by the tech giant Tencent in 2021, quickly fixed this loophole after the Citizen Lab researchers disclosed it to the company.

    “User privacy is fundamental to our business,” a Sogou spokesperson told MIT Technology Review. “We have addressed the issues identified by the Citizen Lab and will continue to work so that user data remains safe and secure. We transparently disclose our data processing activities in our privacy policy and do not otherwise share user data.”

    But there’s no guarantee that this was the only vulnerability in the app, and the researchers did not examine other popular keyboard apps in the Chinese market—meaning this ubiquitous class of software will likely remain a security risk for hundreds of millions of people. And, alarmingly, such flaws make otherwise encrypted communications by Chinese users—in apps like Signal, for example—vulnerable to systems of state surveillance.
    An indispensable part of Chinese devices

    Officially called input method editors (IMEs), keyboard apps are necessary for typing in languages that have more characters than a common Latin-alphabet keyboard allows, like those with Japanese, Korean, or Indic characters.

    For Chinese users, having an IME is almost a necessity.

    “There’s a lot more ambiguity to resolve when typing Chinese characters using a Latin alphabet,” says Mona Wang, an Open Technology Fund fellow at the Citizen Lab and another coauthor of the report. Because the same phonetic spelling can be matched to dozens or even hundreds of Chinese characters, and these characters also can be paired in different ways to become different words, a keyboard app that has been fine-tuned to the Chinese language can perform much better than the default keyboard.
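
    As a toy illustration (a Python sketch with made-up frequency counts, not Sogou’s or anyone else’s actual algorithm), an IME can be thought of as a lookup from a phonetic spelling to a frequency-ranked list of candidate characters and words:

        # Hypothetical pinyin-to-candidate table with invented usage counts;
        # a real IME uses vastly larger dictionaries plus context and cloud data.
        CANDIDATES = {
            "ma":    [("妈", 9200), ("吗", 8700), ("马", 6100), ("骂", 900)],
            "shi":   [("是", 20000), ("十", 7000), ("时", 6500), ("事", 6000)],
            "nihao": [("你好", 15000), ("拟好", 40)],
        }

        def suggest(pinyin, top_n=3):
            """Return the top-N candidates for a spelling, best guess first."""
            options = CANDIDATES.get(pinyin, [])
            ranked = sorted(options, key=lambda pair: pair[1], reverse=True)
            return [char for char, _count in ranked[:top_n]]

        print(suggest("ma"))      # ['妈', '吗', '马']
        print(suggest("nihao"))   # ['你好', '拟好']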

    Starting in the PC era, Chinese software developers released all kinds of IME products to expedite typing, some even ditching phonetic spelling and allowing users to draw or choose the components of a Chinese character. As a result, downloading third-party keyboard software became standard practice for everyone in China.

    Released in 2006, Sogou Input Method quickly became the most popular keyboard app in the country. It was more capable than any competitor in predicting which character or word the user actually wanted to type, and it did that by scraping text from the internet and maintaining an extensive library of Chinese words. The cloud-based library was updated frequently to include newly coined words, trending expressions, or names of people in the news. In 2007, when Google launched its Chinese keyboard, it even copied Sogou’s word library (and later had to apologize).

    In 2014, when the iPhone finally enabled third-party IMEs for the first time, Chinese users rushed to download Sogou’s keyboard app, leaving 3,000 reviews in just one day. At one point, over 90% of Chinese PC users were using Sogou.

    Over the years, its market dominance has waned; as of last year, Baidu Input Method was the top keyboard app in China, with 607 million users and 46.4% of the market share. But Sogou still had 561 million users, according to iiMedia, an analytics firm.
    Exposing the loophole

    A keyboard app can access a wide variety of user information. For example, once Sogou is downloaded and added to the iPhone keyboard options, the app will ask for “full access.” If it’s granted, anything the user types can be sent to Sogou’s cloud-based server.

    Connecting to the cloud is what makes most IMEs successful, allowing them to improve text prediction and enable other miscellaneous features, like the ability to search for GIFs and memes. But this also adds risk since content can, at least in theory, be intercepted during transmission.

    It becomes the apps’ responsibility to properly encrypt the data and prevent that from happening. Sogou’s privacy policy says it has “adopted industry-standard security technology measures … to maximize the prevention of leak, destruction, misuse, unauthorized access, unauthorized disclosure, or alteration” of users’ personal information.

    “People generally had suspicions [about the security of keyboard apps] because they’re advertising [their] cloud service,” says Wang. “Almost certainly they’re sending some amount of keystrokes over the internet.”

    Nevertheless, users have continued to grant the apps full access.

    When the Citizen Lab researchers started looking at the Sogou Input Method on Windows, Android, and iOS platforms, they found that it used EncryptWall, an encryption system it developed itself, instead of Transport Layer Security (TLS), the standard international cryptographic protocol that has been in use since 1999. (Sogou is also used on other platforms like MacOS and Linux, but the researchers haven’t looked into them.)

    One critical difference between the two encryption systems, the Citizen Lab found, is that Sogou’s EncryptWall remained vulnerable to an exploit revealed in 2002 that can turn encrypted data back into plain text; TLS was updated to protect against it in 2003. Sure enough, when the researchers used that exploit method on Sogou, they managed to decrypt the exact keystrokes they’d typed.
    Example of recovered data; line 19 contains the user-typed text and line 2 contains the package name of the app in which the text is being typed.
    THE CITIZEN LAB
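
    The Citizen Lab report describes the flaw at a higher level than code, but the general class of weakness it points to is well known: block-cipher encryption in CBC mode with no message authentication, where a server that reacts differently to bad padding than to other errors hands attackers a “padding oracle.” The Python sketch below (using the third-party cryptography package) shows that vulnerable pattern in the abstract; it is illustrative only and is not Sogou’s EncryptWall code.

        # Illustrative sketch of the vulnerable pattern: home-rolled AES-CBC
        # with no authentication tag. NOT Sogou's actual implementation.
        import os
        from cryptography.hazmat.primitives import padding
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        KEY = os.urandom(32)

        def encrypt(plaintext: bytes) -> bytes:
            iv = os.urandom(16)
            padder = padding.PKCS7(128).padder()
            padded = padder.update(plaintext) + padder.finalize()
            enc = Cipher(algorithms.AES(KEY), modes.CBC(iv)).encryptor()
            return iv + enc.update(padded) + enc.finalize()

        def decrypt(blob: bytes) -> bytes:
            iv, ciphertext = blob[:16], blob[16:]
            dec = Cipher(algorithms.AES(KEY), modes.CBC(iv)).decryptor()
            padded = dec.update(ciphertext) + dec.finalize()
            unpadder = padding.PKCS7(128).unpadder()
            # If bad padding produces a *distinguishable* error (say, a different
            # HTTP status), an attacker who can replay tampered ciphertexts gains
            # a padding oracle and can recover the plaintext byte by byte.
            return unpadder.update(padded) + unpadder.finalize()

        print(decrypt(encrypt(b"nihao")))  # b'nihao'

    The standard remedy, and the one Tencent ultimately adopted, is not to patch a custom scheme but to move the traffic onto TLS, which encrypts and authenticates the channel with well-vetted code.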

    The existence of this loophole meant that users were vulnerable to all kinds of hacks. The typed content could be intercepted when it went through VPN software, home Wi-Fi routers, and telecom providers.

    Not every word is transmitted to the cloud, the researchers found. “If you type in nihao [‘hello’ in Chinese] or something like that, [the app] can answer that without having to use the cloud database,” says Knockel. “But if it’s more complicated and, frankly, more interesting things that you’re typing in, it has to reach out to that cloud database.”

    Along with the content being typed, Knockel and his Citizen Lab colleagues also obtained other information like technical identifiers of the user’s device, the app that the typing occurred in, and even a list of apps installed on the device.

    A lot of malicious actors would be interested in exploiting a loophole like this and eavesdropping on keystrokes, the researchers note—from cybercriminals after private information (like street addresses and bank account numbers) to government hackers.

    (In a written response to the Citizen Lab, Sogou said the transmission of typed text is required to access more accurate and extensive vocabularies on the cloud and enable a built-in search engine, and the uses are stated in the privacy agreement.)

    This particular loophole was closed when Tencent updated the Sogou software across platforms in late July. The Citizen Lab researchers found that the latest version effectively fixed the problem by adopting the TLS encryption protocol.
    How secure messaging becomes insecure

    Around the world, people who are at high risk of being surveilled by state authorities have turned to apps that offer end-to-end encryption. But if keyboard apps are vulnerable, then otherwise encrypted communication apps like Signal or WhatsApp are now also unsafe. What’s more, once a keyboard app is compromised, even an otherwise offline app, like the built-in notebook app, can be a security risk too.

    (Signal and WhatsApp did not respond to MIT Technology Review’s requests for comment. A spokesperson from Baidu said, “Baidu Input Method consistently adheres to established security practice standards. As of now, there are no vulnerabilities related to [the encryption exploit Sogou was vulnerable to] within Baidu Input Method’s products.”)

    As early as 2019, Naomi Wu, a Shenzhen-based tech blogger known as SexyCyborg online, had sounded the alarm about the risk of using Chinese keyboard apps alongside Signal.

    “The Signal ‘fix’ is ‘Incognito Mode’ aka for the app to say ‘Pretty please don’t read everything I type’ to the virtual keyboard and count on Google/random app makers to listen to the flag, and not be under court order to do otherwise,” she wrote in a 2019 Twitter thread. Since keyboard apps have no obligation to honor Signal’s request, “basically all hardware here is self-compromised 5 minutes out of the box,” she added.

    Wu suspects that the use of Signal was the reason some Chinese student activists talking to foreign media were detained by the police in 2018.

    In January 2021, Signal itself tried to clarify that its Incognito Keyboard feature (which only works for users on Android systems, which are more vulnerable than iOS) was not a foolproof privacy solution: “Keyboards and IME’s can ignore Android’s Incognito Keyboard flag. This Android system flag is a best effort, not a guarantee. It’s important to use a keyboard or IME that you trust. Signal cannot detect or prevent malware on your device,” the company added to its article on keyboard security.

    #Chine #Keyboard_apps #Surveillance #Chiffrement

  • Worldcoin just officially launched. Here’s why it’s being investigated. | MIT Technology Review
    https://www.technologyreview.com/2023/08/07/1077250/worldcoin-officially-launched-why-its-being-investigated/?truid=a497ecb44646822921c70e7e051f7f1a

    It’s a project that claims to use cryptocurrency to distribute money across the world, though its bigger ambition is to create a global identity system called “World ID” that relies on individuals’ unique biometric data to prove that they are humans. It officially launched on July 24 in more than 20 countries, and Sam Altman, the CEO of OpenAI and one of the biggest tech celebrities right now, is one of the cofounders of the project.

    The company makes big, idealistic promises: that it can deliver a form of universal basic income through technology to make the world a better and more equitable place, while offering a way to verify your humanity in a digital future filled with nonhuman intelligence, which it calls “proof of personhood.” If you’re thinking this sounds like a potential privacy nightmare, you’re not alone.

    “Our investigation revealed wide gaps between Worldcoin’s public messaging, which focused on protecting privacy, and what users experienced. We found that the company’s representatives used deceptive marketing practices, collected more personal data than it acknowledged, and failed to obtain meaningful informed consent.”

    What’s more, the company was using test users’ sensitive, but anonymized, data to train artificial intelligence models, but Eileen and Adi found that individuals did not know their data was being used that way.

    Importantly, a core objective of the Worldcoin project is to perfect its “proof of personhood” methodology, which requires a lot of data to train AI models. If its proof-of-personhood system becomes widely adopted, this could be quite lucrative for its investors, particularly during an AI gold rush like the one we’re seeing now.

    The company announced this week that it will allow other companies and governments to deploy its identity system.

    “Worldcoin’s proposed identity solution is problematic whether or not other companies and governments use it. Of course, it would be worse if it were used more broadly without so many key questions being answered,” says Eileen. “But I think at this stage, it’s clever marketing to try to convince everyone to get scanned and sign up so that they can achieve the ‘fastest’ and ‘biggest onboarding into crypto and Web3’ to date, as Blania told me last year.”

    #Biométrie #Vie_privée #Données_personnelles #Worldcoin #Proof_of_personhood

  • Next-gen content farms are using AI-generated text to spin up junk websites | MIT Technology Review
    https://www.technologyreview.com/2023/06/26/1075504/junk-websites-filled-with-ai-generated-text-are-pulling-in-money-from-programmatic-ads/?truid=a497ecb44646822921c70e7e051f7f1a

    To really understand the phenomenon (the scam!) and the role of the platforms (here, Google), one good book: Le grand Krach de l’attention by Tim Hwang
    https://cfeditions.com/krach

    The news: AI chatbots are filling junk websites with AI-generated text that attracts paying advertisers. More than 140 major brands are paying for ads that end up on unreliable AI-written sites, likely without their knowledge, according to a new report.

    Making money from junk: Most companies that advertise online automatically bid on spots to run those ads through a practice called “programmatic advertising.” As a result, big brands end up paying for ad placements on sites that they may have never heard of before, with little to no human oversight. To take advantage, content farms have sprung up where low-paid humans use AI to churn out low-quality content to attract maximum ad revenue.

    Why it matters: Ninety percent of the ads from major brands found on these AI-generated news sites were served by Google, in violation of the company’s own policies. The practice threatens to hasten the arrival of a glitchy, spammy internet that is overrun by AI-generated content, as well as wasting massive amounts of ad money.

    #Economie_attention #Tim_Hwang #Google

  • Meta’s former CTO has a new $50 million project : ocean-based carbon removal | MIT Technology Review
    https://www.technologyreview.com/2023/06/06/1074124/metas-former-cto-has-a-new-50-million-project-ocean-based-carbon-removal/?truid=a497ecb44646822921c70e7e051f7f1a

    A former CTO of Facebook is getting into geoengineering... do you think anything could go wrong?

    Note this sentence: “The way you get started is by doing,” he says. “And by moving, in particular, the science forward and making sure that the people who can answer these fundamental questions have the resources and time to answer them thoroughly.” The traditional Silicon Valley motto: act first, think later, and pay researchers to justify what you have already done. And when the research becomes inconvenient, sweep it under the rug, as whistleblower Frances Haugen showed.

    And this one too, from one of the “scientists” pushing such projects: “It’s a huge operation, of course, similar to fossil fuels or coal mining,” he says. “So these are all side effects we have to take into account.” ... exactly what Meta does, isn’t it?

    A nonprofit formed by Mike Schroepfer, Meta’s former chief technology officer, has spun out a new organization dedicated to accelerating research into ocean alkalinity enhancement—one potential means of using the seas to suck up and store away even more carbon dioxide.

    Additional Ventures, cofounded by Schroepfer, and a group of other foundations have committed $50 million over five years to the nonprofit research program, dubbed the Carbon to Sea Initiative. The goals of the effort include evaluating potential approaches; eventually conducting small-scale field trials in the ocean; advancing policies that could streamline permitting for those experiments and provide more public funding for research; and developing the technology necessary to carry out and assess these interventions if they prove to work well and safely.

    The seas already act as a powerful buffer against the worst dangers of climate change, drawing down about a quarter of human-driven carbon dioxide emissions and absorbing the vast majority of global warming. Carbon dioxide dissolves naturally into seawater where the air and ocean meet.

    But scientists and startups are exploring whether these global commons can do even more to ease climate change, as a growing body of research finds that nations now need to both slash emissions and pull vast amounts of additional greenhouse gas out of the atmosphere to keep warming in check.

    Ocean alkalinity enhancement (OAE) refers to various ways of adding alkaline substances, like olivine, basalt, or lime, into seawater. These basic materials bind with dissolved inorganic carbon dioxide in the water to form bicarbonates and carbonates, ions that can persist for tens of thousands of years in the ocean. As those CO2-depleted waters reach the surface, they can pull down additional carbon dioxide from the air to return to a state of equilibrium.
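
    In textbook terms (a simplified sketch of the carbonate chemistry, not a description of any specific Carbon to Sea experiment), dissolved CO2 sits in equilibrium with bicarbonate and carbonate ions, and dissolving an alkaline mineral such as olivine converts CO2 into bicarbonate, letting the surface water take up more CO2 from the air:

        \[ \mathrm{CO_2 + H_2O \;\rightleftharpoons\; H^+ + HCO_3^- \;\rightleftharpoons\; 2\,H^+ + CO_3^{2-}} \]
        \[ \mathrm{Mg_2SiO_4\ (olivine) + 4\,CO_2 + 4\,H_2O \;\rightarrow\; 2\,Mg^{2+} + 4\,HCO_3^- + H_4SiO_4} \]

    By this standard weathering stoichiometry, each mole of olivine ends up storing roughly four moles of CO2 as dissolved bicarbonate, which is why proponents describe the approach as both scalable and long-lived.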

    The ground-up materials could be added directly to ocean waters from vessels, placed along the coastline, or used in onshore devices that help trigger reactions with seawater.

    Carbon to Sea is effectively an expansion of the Ocean Alkalinity Enhancement R&D Program, which Additional Ventures launched in late 2021 with the Astera Institute, the Grantham Environmental Trust, and others. Ocean Visions, a nonprofit research group working to advance ocean-based climate solutions, is also a partner, though not a funder. Early last year, the organizations began accepting applications for research grants for “at least $10 million” that could be put to use over the next five years. The program has committed $23 million to the research field so far.

    Schroepfer, who will serve as a board chair of Carbon to Sea, said that he decided to support the field of ocean alkalinity enhancement because he consistently heard that it was a promising approach to carbon removal that needed to be closely studied, but “nobody was stepping up to do the actual funding of the work.”

    “The way you get started is by doing,” he says. “And by moving, in particular, the science forward and making sure that the people who can answer these fundamental questions have the resources and time to answer them thoroughly.”

    Antonius Gagern, previously the program director for ocean carbon dioxide removal at Additional Ventures, is leading the new organization.

    “In looking at the different ways that the ocean is already using natural carbon pumps to sequester CO2 permanently, ocean alkalinity enhancement has emerged as, for us, the most promising one for a number of reasons,” Gagern says.

    It’s “extremely scalable,” it’s “very permanent,” and it “doesn’t mess with” biological systems in the ways that other ocean-based approaches may, he adds.
    ‘A substantial climatic impact’

    Other observers also consider ocean alkalinity enhancement a promising approach, in part because it’s one of the major ways the planet already pulls down carbon dioxide over very long time scales: rainwater dissolves basic rocks, producing calcium and other alkaline compounds that eventually flow into the oceans through rivers and streams.

    These processes naturally sequester hundreds of millions of tons of carbon dioxide per year, by some estimates. And the planet has more than enough of the reactive materials required to bond with all the carbon dioxide humans have emitted throughout history.

    There are potentially some additional benefits as well. Alkaline substances could reduce ocean acidification locally and might provide beneficial nutrients to certain marine organisms.

    Andreas Oschlies, a climate modeler at the Helmholtz Centre for Ocean Research in Kiel, Germany, agrees that it’s one of the few carbon removal approaches that could “really deliver at scale and have a substantial climatic impact.”

    “The minerals are not limiting and the reservoir, the ocean, is not limiting,” he says.

    (Oschlies hasn’t received research grants from the Additional Ventures consortium but is a senior advisor to a project that has.)

    He’s quick to stress, however, that there are significant challenges in scaling it up, and that far more research is needed to understand the most effective approaches and secondary impacts of such interventions.

    Notably, some approaches would require mining, grinding, and moving around massive amounts of alkaline materials, all of which entails a lot of energy and environmental impacts.

    “It’s a huge operation, of course, similar to fossil fuels or coal mining,” he says. “So these are all side effects we have to take into account.”

    (Not all these concerns would necessarily be raised by other methods, however, like using electrochemistry to remove acid from seawater or processing existing waste from mines.)

    There are additional challenges and uncertainties as well.

    Several recent lab experiments found that these approaches didn’t work as well or easily as expected. Indeed, in some instances the addition of such substances reduced alkalinity as well as the uptake of carbon dioxide. This raises the possibility that these methods may only work in limited areas or circumstances, or could be more costly or complex to implement than hoped.

    Some of the minerals contain trace heavy metals, which can collect in marine ecosystems. They could also alter the light conditions and biogeochemistry of the waters in ways that might harm or help various organisms.

    Finally, the fact that carbon removal happens as a second step in the process makes it challenging to accurately monitor and measure how much CO2 the process really removes, particularly with approaches that occur in the turbulent, variable open oceans. That, in turn, could make it difficult to incentivize and monetize such efforts through carbon markets.

    CarbonPlan, a San Francisco nonprofit that evaluates the scientific integrity of carbon removal projects and techniques, ranks ocean alkalinity enhancement on the low end of its “verification confidence levels,” which evaluate the degree to which long-term carbon removal and storage “can be accurately quantified” with existing tools and approaches.

    “There is a lot of natural variability associated with these processes, which means it can be hard to discern a signal from the noise,” Freya Chay, program lead for carbon removal at CarbonPlan, said in an email.

    “We’re still in exploration mode when it comes to OAE—there is a lot to learn about how to measure, monitor, and effectively deploy these technologies,” she added.
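
    To get a feel for that signal-to-noise problem, here is a crude Python sketch with entirely invented numbers (not CarbonPlan’s or anyone else’s real data): a small OAE-driven shift in a measured quantity, buried in much larger natural variability, only becomes statistically detectable once the number of measurements gets large.

        # Toy illustration of the verification difficulty; all numbers are invented.
        import random
        import statistics

        random.seed(0)
        NATURAL_SD = 10.0   # hypothetical natural variability of the measurement
        SIGNAL = 2.0        # hypothetical shift caused by alkalinity addition

        def detectable(n_samples):
            """Crude test: is the observed shift bigger than 2 standard errors?"""
            baseline = [random.gauss(2000.0, NATURAL_SD) for _ in range(n_samples)]
            treated = [random.gauss(2000.0 + SIGNAL, NATURAL_SD) for _ in range(n_samples)]
            se = (statistics.variance(baseline) / n_samples
                  + statistics.variance(treated) / n_samples) ** 0.5
            return statistics.mean(treated) - statistics.mean(baseline) > 2 * se

        for n in (10, 100, 1000):
            print(n, detectable(n))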
    ‘Getting the science right’

    These challenges are precisely why it’s crucial to fund a coordinated research program into ocean alkalinity research, Gagern says. One of Carbon to Sea’s top priorities will include “getting the science right,” he says, by supporting studies designed to assess what approaches work most effectively and safely, and under what conditions.

    He says that improving systems for monitoring, reporting, and verifying the carbon actually removed through these processes will also be a “major, major focus,” with efforts to develop, test, and refine sensors and models. Finally, Carbon to Sea will also prioritize “community building” in the nascent field, striving to draw in more researchers across disciplines and encourage collaborations through conferences, workshops, and fellowships.

    One of Carbon to Sea’s initial grantees is the Ocean Alk-Align consortium, an international group of researchers studying the potential and environmental safety of ocean alkalinity enhancement.

    “The award from Carbon to Sea enables us to rigorously investigate the promise of OAE for meaningful climate change mitigation and provides us with significant resources to tackle important questions through independent scientific study,” said Katja Fennel, who leads the consortium and is chair of the department of oceanography at Dalhousie University, in a prepared statement.

    The program’s additional funding will likely go to a mix of research groups and startups.

    #Meta #Geoengineering #Hubris

  • The messy, secretive reality behind OpenAI’s bid to save the world
    https://www.technologyreview.com/2020/02/17/844721/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secret

    17.2.2020 by Karen Hao - Every year, OpenAI’s employees vote on when they believe artificial general intelligence, or AGI, will finally arrive. It’s mostly seen as a fun way to bond, and their estimates differ widely. But in a field that still debates whether human-like autonomous systems are even possible, half the lab bets it is likely to happen within 15 years.

    In the four short years of its existence, OpenAI has become one of the leading AI research labs in the world. It has made a name for itself producing consistently headline-grabbing research, alongside other AI heavyweights like Alphabet’s DeepMind. It is also a darling in Silicon Valley, counting Elon Musk and legendary investor Sam Altman among its founders.

    Above all, it is lionized for its mission. Its goal is to be the first to create AGI—a machine with the learning and reasoning powers of a human mind. The purpose is not world domination; rather, the lab wants to ensure that the technology is developed safely and its benefits distributed evenly to the world.

    The implication is that AGI could easily run amok if the technology’s development is left to follow the path of least resistance. Narrow intelligence, the kind of clumsy AI that surrounds us today, has already served as an example. We now know that algorithms are biased and fragile; they can perpetrate great abuse and great deception; and the expense of developing and running them tends to concentrate their power in the hands of a few. By extrapolation, AGI could be catastrophic without the careful guidance of a benevolent shepherd.

    OpenAI wants to be that shepherd, and it has carefully crafted its image to fit the bill. In a field dominated by wealthy corporations, it was founded as a nonprofit. Its first announcement said that this distinction would allow it to “build value for everyone rather than shareholders.” Its charter—a document so sacred that employees’ pay is tied to how well they adhere to it—further declares that OpenAI’s “primary fiduciary duty is to humanity.” Attaining AGI safely is so important, it continues, that if another organization were close to getting there first, OpenAI would stop competing with it and collaborate instead. This alluring narrative plays well with investors and the media, and in July Microsoft injected the lab with a fresh $1 billion.
    OpenAI’s logo hanging in its office.

    Christie Hemm Klok

    But three days at OpenAI’s office—and nearly three dozen interviews with past and current employees, collaborators, friends, and other experts in the field—suggest a different picture. There is a misalignment between what the company publicly espouses and how it operates behind closed doors. Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration. Many who work or worked for the company insisted on anonymity because they were not authorized to speak or feared retaliation. Their accounts suggest that OpenAI, for all its noble aspirations, is obsessed with maintaining secrecy, protecting its image, and retaining the loyalty of its employees.

    Since its earliest conception, AI as a field has strived to understand human-like intelligence and then re-create it. In 1950, Alan Turing, the renowned English mathematician and computer scientist, began a paper with the now-famous provocation “Can machines think?” Six years later, captivated by the nagging idea, a group of scientists gathered at Dartmouth College to formalize the discipline.

    “It is one of the most fundamental questions of all intellectual history, right?” says Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence (AI2), a Seattle-based nonprofit AI research lab. “It’s like, do we understand the origin of the universe? Do we understand matter?”

    The trouble is, AGI has always remained vague. No one can really describe what it might look like or the minimum of what it should do. It’s not obvious, for instance, that there is only one kind of general intelligence; human intelligence could just be a subset. There are also differing opinions about what purpose AGI could serve. In the more romanticized view, a machine intelligence unhindered by the need for sleep or the inefficiency of human communication could help solve complex challenges like climate change, poverty, and hunger.

    But the resounding consensus within the field is that such advanced capabilities would take decades, even centuries—if indeed it’s possible to develop them at all. Many also fear that pursuing this goal overzealously could backfire. In the 1970s and again in the late ’80s and early ’90s, the field overpromised and underdelivered. Overnight, funding dried up, leaving deep scars in an entire generation of researchers. “The field felt like a backwater,” says Peter Eckersley, until recently director of research at the industry group Partnership on AI, of which OpenAI is a member.
    A conference room on the first floor named Infinite Jest.

    Christie Hemm Klok

    Against this backdrop, OpenAI entered the world with a splash on December 11, 2015. It wasn’t the first to openly declare it was pursuing AGI; DeepMind had done so five years earlier and had been acquired by Google in 2014. But OpenAI seemed different. For one thing, the sticker price was shocking: the venture would start with $1 billion from private investors, including Musk, Altman, and PayPal cofounder Peter Thiel.

    The star-studded investor list stirred up a media frenzy, as did the impressive list of initial employees: Greg Brockman, who had run technology for the payments company Stripe, would be chief technology officer; Ilya Sutskever, who had studied under AI pioneer Geoffrey Hinton, would be research director; and seven researchers, freshly graduated from top universities or plucked from other companies, would compose the core technical team. (Last February, Musk announced that he was parting ways with the company over disagreements about its direction. A month later, Altman stepped down as president of startup accelerator Y Combinator to become OpenAI’s CEO.)

    But more than anything, OpenAI’s nonprofit status made a statement. “It’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest,” the announcement said. “Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world.” Though it never made the criticism explicit, the implication was clear: other labs, like DeepMind, could not serve humanity because they were constrained by commercial interests. While they were closed, OpenAI would be open.

    In a research landscape that had become increasingly privatized and focused on short-term financial gains, OpenAI was offering a new way to fund progress on the biggest problems. “It was a beacon of hope,” says Chip Huyen, a machine learning expert who has closely followed the lab’s journey.

    At the intersection of 18th and Folsom Streets in San Francisco, OpenAI’s office looks like a mysterious warehouse. The historic building has drab gray paneling and tinted windows, with most of the shades pulled down. The letters “PIONEER BUILDING”—the remnants of its bygone owner, the Pioneer Truck Factory—wrap around the corner in faded red paint.

    Inside, the space is light and airy. The first floor has a few common spaces and two conference rooms. One, a healthy size for larger meetings, is called A Space Odyssey; the other, more of a glorified phone booth, is called Infinite Jest. This is the space I’m restricted to during my visit. I’m forbidden to visit the second and third floors, which house everyone’s desks, several robots, and pretty much everything interesting. When it’s time for their interviews, people come down to me. An employee trains a watchful eye on me in between meetings.
    The Pioneer Building.

    wikimedia commons / tfinc

    On the beautiful blue-sky day that I arrive to meet Brockman, he looks nervous and guarded. “We’ve never given someone so much access before,” he says with a tentative smile. He wears casual clothes and, like many at OpenAI, sports a shapeless haircut that seems to reflect an efficient, no-frills mentality.

    Brockman, 31, grew up on a hobby farm in North Dakota and had what he describes as a “focused, quiet childhood.” He milked cows, gathered eggs, and fell in love with math while studying on his own. In 2008, he entered Harvard intending to double-major in math and computer science, but he quickly grew restless to enter the real world. He dropped out a year later, entered MIT instead, and then dropped out again within a matter of months. The second time, his decision was final. Once he moved to San Francisco, he never looked back.

    Brockman takes me to lunch to remove me from the office during an all-company meeting. In the café across the street, he speaks about OpenAI with intensity, sincerity, and wonder, often drawing parallels between its mission and landmark achievements of science history. It’s easy to appreciate his charisma as a leader. Recounting memorable passages from the books he’s read, he zeroes in on the Valley’s favorite narrative, America’s race to the moon. (“One story I really love is the story of the janitor,” he says, referencing a famous yet probably apocryphal tale. “Kennedy goes up to him and asks him, ‘What are you doing?’ and he says, ‘Oh, I’m helping put a man on the moon!’”) There’s also the transcontinental railroad (“It was actually the last megaproject done entirely by hand … a project of immense scale that was totally risky”) and Thomas Edison’s incandescent lightbulb (“A committee of distinguished experts said ‘It’s never gonna work,’ and one year later he shipped”).
    Greg Brockman, co-founder and CTO.

    Christie Hemm Klok

    Brockman is aware of the gamble OpenAI has taken on—and aware that it evokes cynicism and scrutiny. But with each reference, his message is clear: People can be skeptical all they want. It’s the price of daring greatly.

    Those who joined OpenAI in the early days remember the energy, excitement, and sense of purpose. The team was small—formed through a tight web of connections—and management stayed loose and informal. Everyone believed in a flat structure where ideas and debate would be welcome from anyone.

    Musk played no small part in building a collective mythology. “The way he presented it to me was ‘Look, I get it. AGI might be far away, but what if it’s not?’” recalls Pieter Abbeel, a professor at UC Berkeley who worked there, along with several of his students, in the first two years. “‘What if it’s even just a 1% or 0.1% chance that it’s happening in the next five to 10 years? Shouldn’t we think about it very carefully?’ That resonated with me,” he says.

    But the informality also led to some vagueness of direction. In May 2016, Altman and Brockman received a visit from Dario Amodei, then a Google researcher, who told them no one understood what they were doing. In an account published in the New Yorker, it wasn’t clear the team itself knew either. “Our goal right now … is to do the best thing there is to do,” Brockman said. “It’s a little vague.”

    Nonetheless, Amodei joined the team a few months later. His sister, Daniela Amodei, had previously worked with Brockman, and he already knew many of OpenAI’s members. After two years, at Brockman’s request, Daniela joined too. “Imagine—we started with nothing,” Brockman says. “We just had this ideal that we wanted AGI to go well.”

    By March of 2017, 15 months in, the leadership realized it was time for more focus. So Brockman and a few other core members began drafting an internal document to lay out a path to AGI. But the process quickly revealed a fatal flaw. As the team studied trends within the field, they realized staying a nonprofit was financially untenable. The computational resources that others in the field were using to achieve breakthrough results were doubling every 3.4 months. It became clear that “in order to stay relevant,” Brockman says, they would need enough capital to match or exceed this exponential ramp-up. That required a new organizational model that could rapidly amass money—while somehow also staying true to the mission.
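
    A quick back-of-the-envelope calculation (mine, not the article’s) shows why that doubling rate reads as an exponential ramp-up: a quantity that doubles every 3.4 months grows in one year by

        \[ 2^{12/3.4} \approx 11.5\times \]

    and in two years by roughly \(2^{24/3.4} \approx 130\times\), so keeping pace with the frontier over even a single funding cycle meant budgeting for orders of magnitude more compute than a conventional nonprofit grant could cover.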

    Unbeknownst to the public—and most employees—it was with this in mind that OpenAI released its charter in April of 2018. The document re-articulated the lab’s core values but subtly shifted the language to reflect the new reality. Alongside its commitment to “avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power,” it also stressed the need for resources. “We anticipate needing to marshal substantial resources to fulfill our mission,” it said, “but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.”

    “We spent a long time internally iterating with employees to get the whole company bought into a set of principles,” Brockman says. “Things that had to stay invariant even if we changed our structure.”
    From left to right: Daniela Amodei, Jack Clark, Dario Amodei, Jeff Wu (technical staff member), Greg Brockman, Alec Radford (technical language team lead), Christine Payne (technical staff member), Ilya Sutskever, and Chris Berner (head of infrastructure).

    Christie Hemm Klok

    That structure change happened in March 2019. OpenAI shed its purely nonprofit status by setting up a “capped profit” arm—a for-profit with a 100-fold limit on investors’ returns, albeit overseen by a board that’s part of a nonprofit entity. Shortly after, it announced Microsoft’s billion-dollar investment (though it didn’t reveal that this was split between cash and credits to Azure, Microsoft’s cloud computing platform).

    Predictably, the move set off a wave of accusations that OpenAI was going back on its mission. In a post on Hacker News soon after the announcement, a user asked how a 100-fold limit would be limiting at all: “Early investors in Google have received a roughly 20x return on their capital,” they wrote. “Your bet is that you’ll have a corporate structure which returns orders of magnitude more than Google ... but you don’t want to ‘unduly concentrate power’? How will this work? What exactly is power, if not the concentration of resources?”

    The move also rattled many employees, who voiced similar concerns. To assuage internal unrest, the leadership wrote up an FAQ as part of a series of highly protected transition docs. “Can I trust OpenAI?” one question asked. “Yes,” began the answer, followed by a paragraph of explanation.

    The charter is the backbone of OpenAI. It serves as the springboard for all the lab’s strategies and actions. Throughout our lunch, Brockman recites it like scripture, an explanation for every aspect of the company’s existence. (“By the way,” he clarifies halfway through one recitation, “I guess I know all these lines because I spent a lot of time really poring over them to get them exactly right. It’s not like I was reading this before the meeting.”)

    How will you ensure that humans continue to live meaningful lives as you develop more advanced capabilities? “As we wrote, we think its impact should be to give everyone economic freedom, to let them find new opportunities that aren’t imaginable today.” How will you structure yourself to evenly distribute AGI? “I think a utility is the best analogy for the vision that we have. But again, it’s all subject to the charter.” How do you compete to reach AGI first without compromising safety? “I think there is absolutely this important balancing act, and our best shot at that is what’s in the charter.”

    For Brockman, rigid adherence to the document is what makes OpenAI’s structure work. Internal alignment is treated as paramount: all full-time employees are required to work out of the same office, with few exceptions. For the policy team, especially Jack Clark, the director, this means a life divided between San Francisco and Washington, DC. Clark doesn’t mind—in fact, he agrees with the mentality. It’s the in-between moments, like lunchtime with colleagues, he says, that help keep everyone on the same page.

    In many ways, this approach is clearly working: the company has an impressively uniform culture. The employees work long hours and talk incessantly about their jobs through meals and social hours; many go to the same parties and subscribe to the rational philosophy of “effective altruism.” They crack jokes using machine-learning terminology to describe their lives: “What is your life a function of?” “What are you optimizing for?” “Everything is basically a minmax function.” To be fair, other AI researchers also love doing this, but people familiar with OpenAI agree: more than others in the field, its employees treat AI research not as a job but as an identity. (In November, Brockman married his girlfriend of one year, Anna, in the office against a backdrop of flowers arranged in an OpenAI logo. Sutskever acted as the officiant; a robot hand was the ring bearer.)

    But at some point in the middle of last year, the charter became more than just lunchtime conversation fodder. Soon after switching to a capped-profit, the leadership instituted a new pay structure based in part on each employee’s absorption of the mission. Alongside columns like “engineering expertise” and “research direction” in a spreadsheet tab titled “Unified Technical Ladder,” the last column outlines the culture-related expectations for every level. Level 3: “You understand and internalize the OpenAI charter.” Level 5: “You ensure all projects you and your team-mates work on are consistent with the charter.” Level 7: “You are responsible for upholding and improving the charter, and holding others in the organization accountable for doing the same.”

    The first time most people ever heard of OpenAI was on February 14, 2019. That day, the lab announced impressive new research: a model that could generate convincing essays and articles at the push of a button. Feed it a sentence from The Lord of the Rings or the start of a (fake) news story about Miley Cyrus shoplifting, and it would spit out paragraph after paragraph of text in the same vein.

    But there was also a catch: the model, called GPT-2, was too dangerous to release, the researchers said. If such powerful technology fell into the wrong hands, it could easily be weaponized to produce disinformation at immense scale.

    The backlash among scientists was immediate. OpenAI was pulling a publicity stunt, some said. GPT-2 was not nearly advanced enough to be a threat. And if it was, why announce its existence and then preclude public scrutiny? “It seemed like OpenAI was trying to capitalize off of panic around AI,” says Britt Paris, an assistant professor at Rutgers University who studies AI-generated disinformation.
    Jack Clark, policy director.

    Christie Hemm Klok

    By May, OpenAI had revised its stance and announced plans for a “staged release.” Over the following months, it successively dribbled out more and more powerful versions of GPT-2. In the interim, it also engaged with several research organizations to scrutinize the algorithm’s potential for abuse and develop countermeasures. Finally, it released the full code in November, having found, it said, “no strong evidence of misuse so far.”
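
    Because the full GPT-2 weights have been public since that November release, the push-button generation described above is easy to reproduce today. Here is a minimal sketch using the third-party Hugging Face transformers package (an assumption for illustration; it is not how OpenAI originally distributed the model):

        # Generate text from the publicly released GPT-2 weights.
        # Requires: pip install transformers torch
        from transformers import pipeline

        generator = pipeline("text-generation", model="gpt2")

        prompt = "In a shocking finding, scientists discovered a herd of unicorns"
        outputs = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.9)
        print(outputs[0]["generated_text"])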

    Amid continued accusations of publicity-seeking, OpenAI insisted that GPT-2 hadn’t been a stunt. It was, rather, a carefully thought-out experiment, agreed on after a series of internal discussions and debates. The consensus was that even if it had been slight overkill this time, the action would set a precedent for handling more dangerous research. Besides, the charter had predicted that “safety and security concerns” would gradually oblige the lab to “reduce our traditional publishing in the future.”

    This was also the argument that the policy team carefully laid out in its six-month follow-up blog post, which they discussed as I sat in on a meeting. “I think that is definitely part of the success-story framing,” said Miles Brundage, a policy research scientist, highlighting something in a Google doc. “The lead of this section should be: We did an ambitious thing, now some people are replicating it, and here are some reasons why it was beneficial.”

    But OpenAI’s media campaign with GPT-2 also followed a well-established pattern that has made the broader AI community leery. Over the years, the lab’s big, splashy research announcements have been repeatedly accused of fueling the AI hype cycle. More than once, critics have also accused the lab of talking up its results to the point of mischaracterization. For these reasons, many in the field have tended to keep OpenAI at arm’s length.
    Cover images of OpenAI’s research releases hang on its office wall.

    Christie Hemm Klok

    This hasn’t stopped the lab from continuing to pour resources into its public image. As well as research papers, it publishes its results in highly produced company blog posts for which it does everything in-house, from writing to multimedia production to design of the cover images for each release. At one point, it also began developing a documentary on one of its projects to rival a 90-minute movie about DeepMind’s AlphaGo. It eventually spun the effort out into an independent production, which Brockman and his wife, Anna, are now partially financing. (I also agreed to appear in the documentary to provide technical explanation and context to OpenAI’s achievement. I was not compensated for this.)

    And as the blowback has increased, so have internal discussions to address it. Employees have grown frustrated at the constant outside criticism, and the leadership worries it will undermine the lab’s influence and ability to hire the best talent. An internal document highlights this problem and an outreach strategy for tackling it: “In order to have government-level policy influence, we need to be viewed as the most trusted source on ML [machine learning] research and AGI,” says a line under the “Policy” section. “Widespread support and backing from the research community is not only necessary to gain such a reputation, but will amplify our message.” Another, under “Strategy,” reads, “Explicitly treat the ML community as a comms stakeholder. Change our tone and external messaging such that we only antagonize them when we intentionally choose to.”

    There was another reason GPT-2 had triggered such an acute backlash. People felt that OpenAI was once again walking back its earlier promises of openness and transparency. With news of the for-profit transition a month later, the withheld research made people even more suspicious. Could it be that the technology had been kept under wraps in preparation for licensing it in the future?
    Ilya Sutskever, co-founder and chief scientist.

    Christie Hemm Klok

    But little did people know this wasn’t the only time OpenAI had chosen to hide its research. In fact, it had kept another effort entirely secret.

    There are two prevailing technical theories about what it will take to reach AGI. In one, all the necessary techniques already exist; it’s just a matter of figuring out how to scale and assemble them. In the other, there needs to be an entirely new paradigm; deep learning, the current dominant technique in AI, won’t be enough.

    Most researchers fall somewhere between these extremes, but OpenAI has consistently sat almost exclusively on the scale-and-assemble end of the spectrum. Most of its breakthroughs have been the product of sinking dramatically greater computational resources into technical innovations developed in other labs.

    Brockman and Sutskever deny that this is their sole strategy, but the lab’s tightly guarded research suggests otherwise. A team called “Foresight” runs experiments to test how far they can push AI capabilities forward by training existing algorithms with increasingly large amounts of data and computing power. For the leadership, the results of these experiments have confirmed its instincts that the lab’s all-in, compute-driven strategy is the best approach.
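
    Scaling experiments of this kind are conventionally summarized as an empirical power law (the generic form below is a sketch, not the Foresight team’s actual published result): loss falls smoothly and predictably as compute grows,

        \[ L(C) \approx \left(\frac{C_0}{C}\right)^{\alpha}, \qquad \alpha > 0, \]

    so each additional order of magnitude of compute buys a roughly constant improvement. A curve like that is exactly the kind of evidence that would confirm the leadership’s instinct to keep scaling.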

    For roughly six months, these results were hidden from the public because OpenAI sees this knowledge as its primary competitive advantage. Employees and interns were explicitly instructed not to reveal them, and those who left signed nondisclosure agreements. It was only in January that the team, without the usual fanfare, quietly posted a paper on one of the primary open-source databases for AI research. People who experienced the intense secrecy around the effort didn’t know what to make of this change. Notably, another paper with similar results from different researchers had been posted a few months earlier.
    Photograph of AI books

    Christie Hemm Klok

    In the beginning, this level of secrecy was never the intention, but it has since become habitual. Over time, the leadership has moved away from its original belief that openness is the best way to build beneficial AGI. Now the importance of keeping quiet is impressed on those who work with or at the lab. This includes never speaking to reporters without the express permission of the communications team. After my initial visits to the office, as I began contacting different employees, I received an email from the head of communications reminding me that all interview requests had to go through her. When I declined, saying that this would undermine the validity of what people told me, she instructed employees to keep her informed of my outreach. A Slack message from Clark, a former journalist, later commended people for keeping a tight lid as a reporter was “sniffing around.”

    In a statement responding to this heightened secrecy, an OpenAI spokesperson referred back to a section of its charter. “We expect that safety and security concerns will reduce our traditional publishing in the future,” the section states, “while increasing the importance of sharing safety, policy, and standards research.” The spokesperson also added: “Additionally, each of our releases is run through an infohazard process to evaluate these trade-offs and we want to release our results slowly to understand potential risks and impacts before setting loose in the wild.”

    One of the biggest secrets is the project OpenAI is working on next. Sources described it to me as the culmination of its previous four years of research: an AI system trained on images, text, and other data using massive computational resources. A small team has been assigned to the initial effort, with an expectation that other teams, along with their work, will eventually fold in. On the day it was announced at an all-company meeting, interns weren’t allowed to attend. People familiar with the plan offer an explanation: the leadership thinks this is the most promising way to reach AGI.

    The man driving OpenAI’s strategy is Dario Amodei, the ex-Googler who now serves as research director. When I meet him, he strikes me as a more anxious version of Brockman. He has a similar sincerity and sensitivity, but an air of unsettled nervous energy. He looks distant when he talks, his brows furrowed, a hand absentmindedly tugging his curls.

    Amodei divides the lab’s strategy into two parts. The first part, which dictates how it plans to reach advanced AI capabilities, he likens to an investor’s “portfolio of bets.” Different teams at OpenAI are playing out different bets. The language team, for example, has its money on a theory postulating that AI can develop a significant understanding of the world through mere language learning. The robotics team, in contrast, is advancing an opposing theory that intelligence requires a physical embodiment to develop.

    As in an investor’s portfolio, not every bet has an equal weight. But for the purposes of scientific rigor, all should be tested before being discarded. Amodei points to GPT-2, with its remarkably realistic auto-generated texts, as an instance of why it’s important to keep an open mind. “Pure language is a direction that the field and even some of us were somewhat skeptical of,” he says. “But now it’s like, ‘Wow, this is really promising.’”

    Over time, as different bets rise above others, they will attract more intense efforts. Then they will cross-pollinate and combine. The goal is to have fewer and fewer teams that ultimately collapse into a single technical direction for AGI. This is the exact process that OpenAI’s latest top-secret project has supposedly already begun.
    Dario Amodei, research director.

    Christie Hemm Klok

    The second part of the strategy, Amodei explains, focuses on how to make such ever-advancing AI systems safe. This includes making sure that they reflect human values, can explain the logic behind their decisions, and can learn without harming people in the process. Teams dedicated to each of these safety goals seek to develop methods that can be applied across projects as they mature. Techniques developed by the explainability team, for example, may be used to expose the logic behind GPT-2’s sentence constructions or a robot’s movements.

    Amodei admits this part of the strategy is somewhat haphazard, built less on established theories in the field and more on gut feeling. “At some point we’re going to build AGI, and by that time I want to feel good about these systems operating in the world,” he says. “Anything where I don’t currently feel good, I create and recruit a team to focus on that thing.”

    For all the publicity-chasing and secrecy, Amodei looks sincere when he says this. The possibility of failure seems to disturb him.

    “We’re in the awkward position of: we don’t know what AGI looks like,” he says. “We don’t know when it’s going to happen.” Then, with careful self-awareness, he adds: “The mind of any given person is limited. The best thing I’ve found is hiring other safety researchers who often have visions which are different than the natural thing I might’ve thought of. I want that kind of variation and diversity because that’s the only way that you catch everything.”

    The thing is, OpenAI actually has little “variation and diversity”—a fact hammered home on my third day at the office. During the one lunch I was granted to mingle with employees, I sat down at the most visibly diverse table by a large margin. Less than a minute later, I realized that the people eating there were not, in fact, OpenAI employees. Neuralink, Musk’s startup working on computer-brain interfaces, shares the same building and dining room.
    Daniela Amodei, head of people operations.

    Christie Hemm Klok

    According to a lab spokesperson, out of the over 120 employees, 25% are female or nonbinary. There are also two women on the executive team and the leadership team is 30% women, she said, though she didn’t specify who was counted among these teams. (All four C-suite executives, including Brockman and Altman, are white men. Out of over 112 employees I identified on LinkedIn and other sources, the overwhelming number were white or Asian.)

    In fairness, this lack of diversity is typical in AI. Last year a report from the New York–based research institute AI Now found that women accounted for only 18% of authors at leading AI conferences, 20% of AI professorships, and 15% and 10% of research staff at Facebook and Google, respectively. “There is definitely still a lot of work to be done across academia and industry,” OpenAI’s spokesperson said. “Diversity and inclusion is something we take seriously and are continually working to improve by working with initiatives like WiML, Girl Geek, and our Scholars program.”

    Indeed, OpenAI has tried to broaden its talent pool. It began its remote Scholars program for underrepresented minorities in 2018. But only two of the first eight scholars became full-time employees, even though they reported positive experiences. The most common reason for declining to stay: the requirement to live in San Francisco. For Nadja Rhodes, a former scholar who is now the lead machine-learning engineer at a New York–based company, the city just had too little diversity.

    But if diversity is a problem for the AI industry in general, it’s something more existential for a company whose mission is to spread the technology evenly to everyone. The fact is that it lacks representation from the groups most at risk of being left out.

    Nor is it at all clear just how OpenAI plans to “distribute the benefits” of AGI to “all of humanity,” as Brockman frequently says in citing its mission. The leadership speaks of this in vague terms and has done little to flesh out the specifics. (In January, the Future of Humanity Institute at Oxford University released a report in collaboration with the lab proposing to distribute benefits by distributing a percentage of profits. But the authors cited “significant unresolved issues regarding … the way in which it would be implemented.”) “This is my biggest problem with OpenAI,” says a former employee, who spoke on condition of anonymity.
    Photograph: OpenAI office space. (Christie Hemm Klok)

    “They are using sophisticated technical practices to try to answer social problems with AI,” echoes Britt Paris of Rutgers. “It seems like they don’t really have the capabilities to actually understand the social. They just understand that that’s a sort of a lucrative place to be positioning themselves right now.”

    Brockman agrees that both technical and social expertise will ultimately be necessary for OpenAI to achieve its mission. But he disagrees that the social issues need to be solved from the very beginning. “How exactly do you bake ethics in, or these other perspectives in? And when do you bring them in, and how? One strategy you could pursue is to, from the very beginning, try to bake in everything you might possibly need,” he says. “I don’t think that that strategy is likely to succeed.”

    The first thing to figure out, he says, is what AGI will even look like. Only then will it be time to “make sure that we are understanding the ramifications.”

    Last summer, in the weeks after the switch to a capped-profit model and the $1 billion injection from Microsoft, the leadership assured employees that these updates wouldn’t functionally change OpenAI’s approach to research. Microsoft was well aligned with the lab’s values, and any commercialization efforts would be far away; the pursuit of fundamental questions would still remain at the core of the work.

    For a while, these assurances seemed to hold true, and projects continued as they were. Many employees didn’t even know what promises, if any, had been made to Microsoft.

    But in recent months, the pressure of commercialization has intensified, and the need to produce money-making research no longer feels like something in the distant future. In sharing his 2020 vision for the lab privately with employees, Altman’s message is clear: OpenAI needs to make money in order to do research—not the other way around.

    This is a hard but necessary trade-off, the leadership has said—one it had to make for lack of wealthy philanthropic donors. By contrast, Seattle-based AI2, a nonprofit that ambitiously advances fundamental AI research, receives its funds from a self-sustaining (at least for the foreseeable future) pool of money left behind by the late Paul Allen, a billionaire best known for cofounding Microsoft.

    But the truth is that OpenAI faces this trade-off not only because it’s not rich, but also because it made the strategic choice to try to reach AGI before anyone else. That pressure forces it to make decisions that seem to land farther and farther away from its original intention. It leans into hype in its rush to attract funding and talent, guards its research in the hopes of keeping the upper hand, and chases a computationally heavy strategy—not because it’s seen as the only way to AGI, but because it seems like the fastest.

    Yet OpenAI is still a bastion of talent and cutting-edge research, filled with people who are sincerely striving to work for the benefit of humanity. In other words, it still has the most important elements, and there’s still time for it to change.

    Near the end of my interview with Rhodes, the former remote scholar, I ask her to name the one thing about OpenAI that I shouldn’t omit from this profile. “I guess in my opinion, there’s problems,” she begins hesitantly. “Some of them come from maybe the environment it faces; some of them come from the type of people that it tends to attract and other people that it leaves out.”

    “But to me, it feels like they are doing something a little bit right,” she says. “I got a sense that the folks there are earnestly trying.”

    Update: We made some changes to this story after OpenAI asked us to clarify that when Greg Brockman said he didn’t think it was possible to “bake ethics in… from the very beginning” when developing AI, he intended it to mean that ethical questions couldn’t be solved from the beginning, not that they couldn’t be addressed from the beginning. Also, that after dropping out of Harvard he transferred straight to MIT rather than waiting a year. Also, that he was raised not “on a farm,” but “on a hobby farm.” Brockman considers this distinction important.

    In addition, we have clarified that while OpenAI did indeed “shed its nonprofit status,” a board that is part of a nonprofit entity still oversees it, and that OpenAI publishes its research in the form of company blog posts as well as, not in lieu of, research papers. We’ve also corrected the date of publication of a paper by outside researchers and the affiliation of Peter Eckersley (former, not current, research director of Partnership on AI, which he recently left).

    #capitalisme #benevolat #intelligence_artificielle #USA #idéologie #effective_altruism

    • OpenAI and Stability.AI, the company that built Stable Diffusion, say that they have introduced fixes to mitigate the biases ingrained in their systems, such as blocking certain prompts that seem likely to generate offensive images.

      I don’t understand the logic: the underlying reference data is biased, and I don’t see how censoring certain “descriptions/prompts” fixes the problem. If you type “manager” and get back only white guys, you’re not going to block the “manager” prompt just to “solve” the problem.
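
      A minimal sketch (in Python, with purely hypothetical blocklist entries and function names) of the kind of prompt filter described above: it refuses a handful of flagged prompts but leaves whatever the underlying model learned from its training data untouched.

          from typing import Optional

          # Hypothetical blocklist-style prompt filter, for illustration only.
          BLOCKED_TERMS = {"blocked term 1", "blocked term 2"}  # placeholder entries

          def filter_prompt(prompt: str) -> Optional[str]:
              """Refuse the prompt if it contains a blocked term; otherwise pass it through."""
              lowered = prompt.lower()
              if any(term in lowered for term in BLOCKED_TERMS):
                  return None   # prompt refused outright
              return prompt     # reaches the model unchanged

          # "manager" is on no plausible blocklist, so it passes straight through --
          # and the bias baked into the training data is untouched.
          print(filter_prompt("a portrait of a manager"))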

  • Troll farms reached 140 million Americans a month on Facebook before 2020 election | MIT Technology Review
    https://www.technologyreview.com/2021/09/16/1035851/facebook-troll-farms-report-us-2020-election

    As of October 2019, around 15,000 Facebook pages with a majority US audience were being run out of Kosovo and Macedonia, known bad actors during the 2016 election.
    Collectively, those troll-farm pages—which the report treats as a single page for comparison purposes—reached 140 million US users monthly and 360 million global users weekly.

    (this dates from a year and a bit ago)

  • A startup says it’s begun releasing particles in the atmosphere, in an effort to tweak the climate | MIT Technology Review
    https://www.technologyreview.com/2022/12/24/1066041/a-startup-says-its-begun-releasing-particles-into-the-atmosphere-i

    A startup claims it has launched weather balloons that may have released reflective sulfur particles in the stratosphere, potentially crossing a controversial barrier in the field of solar geoengineering.

    Geoengineering refers to deliberate efforts to manipulate the climate by reflecting more sunlight back into space, mimicking a natural process that occurs in the aftermath of large volcanic eruptions. In theory, spraying sulfur and similar particles in sufficient quantities could potentially ease global warming.

    It’s not technically difficult to release such compounds into the stratosphere. But scientists have mostly (though not entirely) refrained from carrying out even small-scale outdoor experiments. And it’s not clear that any have yet injected materials into that specific layer of the atmosphere in the context of geoengineering-related research.

    That’s in part because it’s highly controversial. Little is known about the real-world effect of such deliberate interventions at large scales, but they could have dangerous side effects. The impacts could also be worse in some regions than others, which could provoke geopolitical conflicts.

    #géoingénierie #climat #startup #écologie #solutionnisme_technologique #ingénieurs

  • The biggest technology failures of 2022 | MIT Technology Review
    https://www.technologyreview.com/2022/12/21/1065625/worst-technology-2022

    We’re back with our latest list of the worst technologies of the year. Think of these as anti-breakthroughs, the sort of mishaps, misuses, miscues, and bad ideas that lead to technology failure. This year’s disastrous accomplishments range from deadly pharmaceutical chemistry to a large language model that was jeered off the internet.

    One theme that emerges from our disaster list is how badly policy—the rules, processes, institutions, and ideals that govern technology’s use—can let us down. In China, a pervasive system of pandemic controls known as “zero covid” came to an abrupt and unexpected end. On Twitter, Elon Musk intentionally destroyed the site’s governing policies, replacing them with a puckish and arbitrary mix of free speech, personal vendettas, and appeals to the right wing of US politics. In the US, policy failures were evident in the highest levels of overdose deaths ever recorded, many of them due to a 60-year-old chemical compound: fentanyl.

    The impact of these technologies could be measured in the number of people affected. More than a billion people in China are now being exposed to the virus for the first time; 335 million on Twitter are watching Musk’s antics; and fentanyl killed 70,000 in the US. In each of these messes, there are important lessons about why technology fails.

    #Technologie #Régulation

  • The AI myth Western lawmakers get wrong | MIT Technology Review
    https://www.technologyreview.com/2022/11/29/1063777/the-ai-myth-western-lawmakers-get-wrong

    While the US and the EU may differ on how to regulate tech, their lawmakers seem to agree on one thing: the West needs to ban AI-powered social scoring.

    As they understand it, social scoring is a practice in which authoritarian governments—specifically China—rank people’s trustworthiness and punish them for undesirable behaviors, such as stealing or not paying back loans. Essentially, it’s seen as a dystopian superscore assigned to each citizen. 

    The EU is currently negotiating a new law called the AI Act, which will ban member states, and maybe even private companies, from implementing such a system.

    The trouble is, it’s “essentially banning thin air,” says Vincent Brussee, an analyst at the Mercator Institute for China Studies, a German think tank.

    Back in 2014, China announced a six-year plan to build a system rewarding actions that build trust in society and penalizing the opposite. Eight years on, it’s only just released a draft law that tries to codify past social credit pilots and guide future implementation. 

    There have been some contentious local experiments, such as one in the small city of Rongcheng in 2013, which gave every resident a starting personal credit score of 1,000 that could be raised or lowered depending on how their actions were judged. People are now able to opt out, and the local government has removed some controversial criteria.

    But these have not gained wider traction elsewhere and do not apply to the entire Chinese population. There is no countrywide, all-seeing social credit system with algorithms that rank people.

    As my colleague Zeyi Yang explains, “the reality is, that terrifying system doesn’t exist, and the central government doesn’t seem to have much appetite to build it, either.” 

    What has been implemented is mostly pretty low-tech. It’s a “mix of attempts to regulate the financial credit industry, enable government agencies to share data with each other, and promote state-sanctioned moral values,” Zeyi writes. 

    Kendra Schaefer, a partner at Trivium China, a Beijing-based research consultancy, who compiled a report on the subject for the US government, couldn’t find a single case in which data collection in China led to automated sanctions without human intervention. The South China Morning Post found that in Rongcheng, human “information gatherers” would walk around town and write down people’s misbehavior using a pen and paper. 

    The myth originates from a pilot program called Sesame Credit, developed by Chinese tech company Alibaba. This was an attempt to assess people’s creditworthiness using customer data at a time when the majority of Chinese people didn’t have a credit card, says Brussee. The effort became conflated with the social credit system as a whole in what Brussee describes as a “game of Chinese whispers.” And the misunderstanding took on a life of its own. 

    The irony is that while US and European politicians depict this as a problem stemming from authoritarian regimes, systems that rank and penalize people are already in place in the West. Algorithms designed to automate decisions are being rolled out en masse and used to deny people housing, jobs, and basic services. 

    For example, in Amsterdam, authorities have used an algorithm to rank young people from disadvantaged neighborhoods according to their likelihood of becoming a criminal. They claim the aim is to prevent crime and help offer better, more targeted support.

    But in reality, human rights groups argue, it has increased stigmatization and discrimination. The young people who end up on this list face more stops from police, home visits from authorities, and more stringent supervision from school and social workers.

    It’s easy to take a stand against a dystopian algorithm that doesn’t really exist. But as lawmakers in both the EU and the US strive to build a shared understanding of AI governance, they would do better to look closer to home. Americans do not even have a federal privacy law that would offer some basic protections against algorithmic decision making. 

    There is also a dire need for governments to conduct honest, thorough audits of the way authorities and companies use AI to make decisions about our lives. They might not like what they find—but that makes it all the more crucial for them to look.

    #Chine #Crédit_social

  • China just announced a new social credit law. Here’s what it says. | MIT Technology Review
    https://www.technologyreview.com/2022/11/22/1063605/china-announced-a-new-social-credit-law-what-does-it-mean

    The West has largely gotten China’s social credit system wrong. But draft legislation introduced in November offers a more accurate picture of the reality.
    By Zeyi Yang
    November 22, 2022

    It’s easier to talk about what China’s social credit system isn’t than what it is. Ever since 2014, when China announced a six-year plan to build a system to reward actions that build trust in society and penalize the opposite, it has been one of the most misunderstood things about China in Western discourse. Now, with new documents released in mid-November, there’s an opportunity to correct the record.

    For most people outside China, the words “social credit system” conjure up an instant image: a Black Mirror–esque web of technologies that automatically score all Chinese citizens according to what they did right and wrong. But the reality is, that terrifying system doesn’t exist, and the central government doesn’t seem to have much appetite to build it, either. 

    Instead, the system that the central government has been slowly working on is a mix of attempts to regulate the financial credit industry, enable government agencies to share data with each other, and promote state-sanctioned moral values—however vague that last goal in particular sounds. There’s no evidence yet that this system has been abused for widespread social control (though it remains possible that it could be wielded to restrict individual rights). 

    While local governments have been much more ambitious with their innovative regulations, causing more controversies and public pushback, the countrywide social credit system will still take a long time to materialize. And China is now closer than ever to defining what that system will look like. On November 14, several top government agencies collectively released a draft law on the Establishment of the Social Credit System, the first attempt to systematically codify past experiments on social credit and, theoretically, guide future implementation. 

    Yet the draft law still left observers with more questions than answers. 

    “This draft doesn’t reflect a major sea change at all,” says Jeremy Daum, a senior fellow of the Yale Law School Paul Tsai China Center who has been tracking China’s social credit experiment for years. It’s not a meaningful shift in strategy or objective, he says. 

    Rather, the law stays close to local rules that Chinese cities like Shanghai have released and enforced in recent years on things like data collection and punishment methods—just giving them a stamp of central approval. It also doesn’t answer lingering questions that scholars have about the limitations of local rules. “This is largely incorporating what has been out there, to the point where it doesn’t really add a whole lot of value,” Daum adds. 

    So what is China’s current system actually like? Do people really have social credit scores? Is there any truth to the image of artificial-intelligence-powered social control that dominates Western imagination? 

    First of all, what is “social credit”?
    When the Chinese government talks about social credit, the term covers two different things: traditional financial creditworthiness and “social creditworthiness,” which draws data from a larger variety of sectors.

    The former is a familiar concept in the West: it documents individuals’ or businesses’ financial history and predicts their ability to pay back future loans. Because the market economy in modern China is much younger, the country lacks a reliable system to look up other people’s and companies’ financial records. Building such a system, intended to help banks and other market players make business decisions, is an essential and not very controversial mission. Most Chinese policy documents refer to this type of credit with a specific word: “征信” (zhengxin, which some scholars have translated as “credit reporting”).

    The latter—“social creditworthiness”—is what raises more eyebrows. Basically, the Chinese government is saying there needs to be a higher level of trust in society, and to nurture that trust, the government is fighting corruption, telecom scams, tax evasion, false advertising, academic plagiarism, product counterfeiting, pollution … almost everything. And not only will individuals and companies be held accountable, but legal institutions and government agencies will as well.

    This is where things start to get confusing. The government seems to believe that all these problems are loosely tied to a lack of trust, and that building trust requires a one-size-fits-all solution. So just as financial credit scoring helps assess a person’s creditworthiness, it thinks, some form of “social credit” can help people assess others’ trustworthiness in other respects. 

    As a result, so-called “social” credit scoring is often lumped together with financial credit scoring in policy discussions, even though it’s a much younger field with little precedent in other societies. 

    What makes it extra confusing is that in practice, local governments have sometimes mixed up these two. So you may see a regulation talking about how non-financial activities will hurt your financial credit, or vice versa. (In just one example, the province of Liaoning said in August that it’s exploring how to reward blood donation in the financial credit system.) 

    But on a national level, the country seems to want to keep the two mostly separate, and in fact, the new draft law addresses them with two different sets of rules.

    Has the government built a system that is actively regulating these two types of credit?
    The short answer is no. Initially, back in 2014, the plan was to have a national system tracking all “social credit” ready by 2020. Now it’s almost 2023, and the long-anticipated legal framework for the system was just released in the November 2022 draft law. 

    That said, the government has mostly figured out the financial part. The zhengxin system—first released to the public in 2006 and significantly updated in 2020—is essentially the Chinese equivalent of American credit bureaus’ scoring and is maintained by the country’s central bank. It records the financial history of 1.14 billion Chinese individuals (and gives them credit scores), as well as almost 100 million companies (though it doesn’t give them scores). 

    On the social side, however, regulations have been patchy and vague. To date, the national government has built only a system focused on companies, not individuals, which aggregates data on corporate regulation compliance from different government agencies. Kendra Schaefer, head of tech policy research at the Beijing-based consultancy Trivium China, has described it in a report for the US government’s US-China Economic and Security Review Commission as “roughly equivalent to the IRS, FBI, EPA, USDA, FDA, HHS, HUD, Department of Energy, Department of Education, and every courthouse, police station, and major utility company in the US sharing regulatory records across a single platform.” The result is openly searchable by any Chinese citizen on a recently built website called Credit China.

    But there is some data on people and other types of organizations there, too. The same website also serves as a central portal for over three dozen (sometimes very specific) databases, including lists of individuals who have defaulted on a court judgment, Chinese universities that are legitimate, companies that are approved to build robots, and hospitals found to have conducted insurance fraud. Nevertheless, the curation seems so random that it’s hard to see how people could use the portal as a consistent or comprehensive source of data.

    How will a social credit system affect Chinese people’s everyday lives?
    The idea is to be both a carrot and a stick. So an individual or company with a good credit record in all regulatory areas should receive preferential treatment when dealing with the government—like being put on a priority list for subsidies. At the same time, individuals or companies with bad credit records will be punished by having their information publicly displayed, and they will be banned from participating in government procurement bids, consuming luxury goods, and leaving the country.

    The government published a comprehensive list detailing the permissible punishment measures last year. Some measures are more controversial; for example, individuals who have failed to pay compensation decided by the court are restricted from traveling by plane or sending their children to costly private schools, on the grounds that these constitute luxury consumption. The new draft law upholds a commitment that this list will be updated regularly. 

    So is there a centralized social credit score computed for every Chinese citizen?
    No. Contrary to popular belief, there’s no central social credit score for individuals. And frankly, the Chinese central government has never talked about wanting one. 

    So why do people, particularly in the West, think there is? 
    Well, since the central government has given little guidance on how to build a social credit system that works in non-financial areas, even in the latest draft law, it has opened the door for cities and even small towns to experiment with their own solutions. 

    As a result, many local governments are introducing pilot programs that seek to define what social credit regulation looks like, and some have become very contentious.

    The best example is Rongcheng, a small city of only half a million people that has implemented probably the most famous social credit scoring system in the world. In 2013, the city started giving every resident a base personal credit score of 1,000 that could be raised or lowered by their good and bad deeds. For example, in a 2016 rule that has since been overhauled, the city decided that “spreading harmful information on WeChat, forums, and blogs” meant subtracting 50 points, while “winning a national-level sports or cultural competition” meant adding 40 points. In one extreme case, one resident lost 950 points in the span of three weeks for repeatedly distributing letters online about a medical dispute.
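
    As a rough illustration of the mechanics just described, here is a hypothetical sketch in Python of a Rongcheng-style point ledger. The two point values are the ones reported for the since-overhauled 2016 rule; everything else (names, structure) is invented for illustration and is not anything the city actually runs.

        # Hypothetical sketch of a Rongcheng-style point ledger (illustration only).
        RULES = {
            "spreading harmful information online": -50,                     # reported 2016 penalty
            "winning a national-level sports or cultural competition": 40,   # reported 2016 reward
        }

        class ResidentScore:
            def __init__(self, base: int = 1000):   # every resident starts at 1,000
                self.score = base

            def record(self, action: str) -> int:
                """Apply the point change for a recorded action and return the new score."""
                self.score += RULES.get(action, 0)
                return self.score

        resident = ResidentScore()
        print(resident.record("spreading harmful information online"))  # 950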

    Such scoring systems have had very limited impact in China, since they have never been elevated to provincial or national levels. But when news of pilot programs like Rongcheng’s spread to the West, it understandably rang an alarm for activist groups and media outlets—some of which mistook it as applicable to the whole population. Prominent figures like George Soros and Mike Pence further amplified that false idea. 

    How do we know those pilot programs won’t become official rules for the whole country?
    No one can be 100% sure of that, but it’s worth remembering that the Chinese central government has actually been pushing back on local governments’ rogue actions when it comes to social credit regulations. 

    In December 2020, China’s state council published a policy guidance responding to reports that local governments were using the social credit system as justification for punishing even trivial actions like jaywalking, recycling incorrectly, and not wearing masks. The guidance asks local governments to punish only behaviors that are already illegal under China’s current legislative system and not expand beyond that. 

    “When [many local governments] encountered issues that are hard to regulate through business regulations, they hoped to draw support from solutions involving credits,” said Lian Weiliang, an official at China’s top economic planning authority, at a press conference on December 25, 2020. “These measures are not only incompatible with the rule of law, but also incompatible with the need of building creditworthiness in the long run.” 

    And the central government’s pushback seems to have worked. In Rongcheng’s case, the city updated its local regulation on social credit scores and allowed residents to opt out of the scoring program; it also removed some controversial criteria for score changes. 

    Is there any advanced technology, like artificial intelligence, involved in the system?
    For the most part, no. This is another common myth about China’s social credit system: people imagine that to keep track of over a billion people’s social behaviors, there must be a mighty central algorithm that can collect and process the data.

    But that’s not true. Since there is no central system scoring everyone, there’s not even a need for that kind of powerful algorithm. Experts on China’s social credit system say that the entire infrastructure is surprisingly low-tech. While Chinese officials sometimes name-drop technologies like blockchain and artificial intelligence when talking about the system, they never talk in detail about how these technologies might be utilized. If you check out the Credit China website, it’s no more than a digitized library of separate databases. 

    “There is no known instance in which automated data collection leads to the automated application of sanctions without the intervention of human regulators,” wrote Schaefer in the report. Sometimes the human intervention can be particularly primitive, like the “information gatherers” in Rongcheng, who walk around the village and write down fellow villagers’ good deeds by pen.

    However, as the national system is being built, it does appear there’s the need for some technological element, mostly to pool data among government agencies. If Beijing wants to enable every government agency to make enforcement decisions based on records collected by other government agencies, that requires building a massive infrastructure for storing, exchanging, and processing the data. 

    To this end, the latest draft law talks about the need to use “diverse methods such as statistical methods, modeling, and field certification” to conduct credit assessments and combine data from different government agencies. “It gives only the vaguest hint that it’s a little more tech-y,” says Daum.

    How are Chinese tech companies involved in this system?
    Because the system is so low-tech, the involvement of Chinese tech companies has been peripheral. “Big tech companies and small tech companies … play very different roles, and they take very different strategies,” says Shazeda Ahmed, a postdoctoral researcher at Princeton University, who spent several years in China studying how tech companies are involved in the social credit system.

    Smaller companies, contracted by city or provincial governments, largely built the system’s tech infrastructure, like databases and data centers. On the other hand, large tech companies, particularly social platforms, have helped the system spread its message. Alibaba, for instance, helps the courts deliver judgment decisions through the delivery addresses it collects via its massive e-commerce platform. And Douyin, the Chinese version of TikTok, partnered with a local court in China to publicly shame individuals who defaulted on court judgments. But these tech behemoths aren’t really involved in core functions, like contributing data or compiling credit appraisals.

    “They saw it as almost like a civic responsibility or corporate social responsibility: if you broke the law in this way, we will take this data from the Supreme People’s Court, and we will punish you on our platform,” says Ahmed.

    There are also Chinese companies, like Alibaba’s fintech arm Ant Group, that have built private financial credit scoring products. But the result, like Alibaba’s Sesame Credit, is more like a loyalty rewards program, according to several scholars. Since the Sesame Credit score is mostly calculated on the basis of users’ purchase history and lending activities on Alibaba’s own platforms, the score is not reliable enough to be used by external financial institutions and has very limited effect on individuals.

    Given all this, should we still be concerned about the implications of building a social credit system in China?
    Yes. Even if there isn’t a scary algorithm that scores every citizen, the social credit system can still be problematic.

    The Chinese government did emphasize that all social-credit-related punishment has to adhere to existing laws, but laws themselves can be unjust in the first place. “Saying that the system is an extension of the law only means that it is no better or worse than the laws it enforces. As China turns its focus increasingly to people’s social and cultural lives, further regulating the content of entertainment, education, and speech, those rules will also become subject to credit enforcement,” Daum wrote in a 2021 article.

    Moreover, “this was always about making people honest to the government, and not necessarily to each other,” says Ahmed. When moral issues like honesty are turned into legal issues, the state ends up having the sole authority in deciding who’s trustworthy. One tactic Chinese courts have used in holding “discredited individuals” accountable is encouraging their friends and family to report their assets in exchange for rewards. “Are you making society more trustworthy by ratting out your neighbor? Or are you building distrust in your very local community?” she asks.

    But at the end of the day, the social credit system does not (yet) exemplify abuse of advanced technologies like artificial intelligence, and it’s important to evaluate it on the facts. The government is currently seeking public feedback on the November draft document for one month, though there’s no expected date on when it will pass and become law. It could still take years to see the final product of a nationwide social credit system.

    #Chine #Crédit_social

  • YouTube is launching Shorts videos for your TV | MIT Technology Review
    https://www.technologyreview.com/2022/11/07/1062868/youtube-wants-to-take-on-tiktok-with-shorts-videos-for-your-tv/?truid=a497ecb44646822921c70e7e051f7f1a

    YouTube Shorts, the video website’s TikTok-like feature, has become one of its latest obsessions, with more than 1.5 billion users watching short-form content on their devices every month.

    And now YouTube wants to expand that number by bringing full-screen, vertical videos into your TV, MIT Technology Review can reveal.

    From today, users worldwide will see a row of videos from Shorts high up on their display in YouTube’s smart TV apps. The videos, which will be integrated into the standard homepage of YouTube’s TV app and will sit alongside longer, landscape videos, are presented on the basis of previous watch history, much as in the YouTube Shorts tab on cell phones and the YouTube website.

    “It is challenging taking a format that’s traditionally a mobile format and finding the right way to bring it to life on TV,” says Brynn Evans, UX director for the YouTube app on TV.

    The time spent developing the TV app integration is testament to the importance of Shorts to YouTube, says Melanie Fitzgerald, UX director at YouTube Community and Shorts. “Seeing the progression of short-form video over several years, from Vine to Musical.ly to TikTok to Instagram and to YouTube, it’s very clear this format is here to stay.”

    One major challenge the designers behind YouTube Shorts’ TV integration had to consider was the extent to which Shorts videos should be allowed to autoplay. At present, the initial design will require viewers to manually scroll through Shorts videos once they’re playing and move on to the next one by pressing the up and down arrows on their TV remote.

    “One piece we were playing with was how much do we want this to be a fully lean-back experience, where you turn it on and Shorts cycle through,” says Evans, whose team decided against that option at launch but does not rule out changing future iterations.

    The design presents a single Shorts video at a time in the center of the TV screen, surrounded by white space that changes color depending on the overall look of the video.

    One thing YouTube didn’t test—at least as of now? Filling the white space with ads. YouTube spokesperson Susan Cadrecha tells MIT Tech Review that the experience will initially be ad-free. The spokesperson did say that ads would likely be added at some point, but how those would be integrated into the Shorts on TV experience was not clear.

    Likewise, the YouTube Shorts team is investigating how to integrate comments into TV viewing for future iterations of the app. “For a mobile format like this, you’d be able to maybe use your phone as a companion and leave some comments and they can appear on TV,” says Evans.

    YouTube’s announcement follows TikTok’s own move into developing a TV app. First launched in February 2021 in France, Germany, and the UK and expanded into the United States and elsewhere in November that year, TikTok’s smart TV app hasn’t largely altered how the main app works. (Nor, arguably, has it become an irreplaceable part of people’s living room habits.)

    However, the shift to fold Shorts into the YouTube experience on TV suggests how important YouTube feels the short-form model is to its future. “It’s very clearly a battle for attention across devices,” says Andrew A. Rosen, founder and principal at media analyst Parqor. “The arrival of Shorts and TikTok on connected TVs makes the competitive landscape that much more complex.” Having ceded a head start to TikTok, YouTube now seems determined to play catchup.

    The team behind the initiative still isn’t fully certain how adding short-form video into the YouTube on TV experience will be embraced. “It still remains to be seen how and when people will consume Shorts,” admits Evans—though she tells MIT Tech Review that informal polling and qualitative surveys, plus tests within the Google community, suggest “a very positive impression of Shorts from people who are watching YouTube on TV.” (YouTube declined to share its own data on how much time the average user currently spends watching YouTube content on TV but did point to Nielsen data showing that viewers worldwide spent 700 million hours a day on that activity.)

    “Will it be a game-changer in the living room? Yes and no,” says Rosen. “Yes in the sense that it will turn 15-second to 60-second clips into competition for every legacy media streaming service, and Netflix is betting billions on content to be consumed on those same TVs. No, because it’s not primed to become a new default of consumption.”
    by Chris Stokel-Walker

    #YouTube #Shorts #Télévision #Médias #Média_formats

  • Here’s how a Twitter engineer says it will break in the coming weeks | MIT Technology Review
    https://www.technologyreview.com/2022/11/08/1062886/heres-how-a-twitter-engineer-says-it-will-break-in-the-coming-weeks/?truid=a497ecb44646822921c70e7e051f7f1a

    One insider says the company’s current staffing isn’t able to sustain the platform.
    By Chris Stokel-Walker
    November 8, 2022

    On November 4, just hours after Elon Musk fired half of the 7,500 employees previously working at Twitter, some people began to see small signs that something was wrong with everyone’s favorite hellsite. And they saw it through retweets.

    Twitter introduced retweets in 2009, turning an organic thing people were already doing—pasting someone else’s username and tweet, preceded by the letters RT—into a software function. In the years since, the retweet and its distant cousin the quote tweet (which launched in April 2015) have become two of the most common mechanics on Twitter.
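
    For readers unfamiliar with the convention, the original mechanic is simple enough to sketch in a few lines of Python (a hypothetical illustration of the format described above, not Twitter’s actual code):

        # Hypothetical sketch of the old "manual retweet" convention: prepend
        # "RT @username:" to someone else's tweet text, trimming to the length limit.
        def manual_retweet(username: str, tweet_text: str, limit: int = 280) -> str:
            combined = f"RT @{username}: {tweet_text}"
            return combined[:limit]

        print(manual_retweet("jack", "just setting up my twttr"))
        # RT @jack: just setting up my twttr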

    But on Friday, a few users who pressed the retweet button saw the years roll back to 2009. Manual retweets, as they were called, were back.

    The return of the manual retweet wasn’t Elon Musk’s latest attempt to appease users. Instead, it was the first public crack in the edifice of Twitter’s code base—a blip on the seismometer that warns of a bigger earthquake to come.

    A massive tech platform like Twitter is built upon very many interdependent parts. “The larger catastrophic failures are a little more titillating, but the biggest risk is the smaller things starting to degrade,” says Ben Krueger, a site reliability engineer who has more than two decades of experience in the tech industry. “These are very big, very complicated systems.” Krueger says one 2017 presentation from Twitter staff includes a statistic suggesting that more than half the back-end infrastructure was dedicated to storing data.

    While many of Musk’s detractors may hope the platform goes through the equivalent of thermonuclear destruction, the collapse of something like Twitter happens gradually. For those who know what to look for, gradual breakdowns are a warning sign that a larger crash could be imminent. And that’s what’s happening now.
    It’s the small things

    Whether it’s manual RTs appearing for a moment before retweets slowly morph into their standard form, ghostly follower counts that race ahead of the number of people actually following you, or replies that simply refuse to load, small bugs are appearing at Twitter’s periphery. Even Twitter’s rules, which Musk linked to on November 7, went offline temporarily under the load of millions of eyeballs. In short, it’s becoming unreliable.

    Estimates from Bot Sentinel suggest that more than 875,000 users deactivated their accounts between October 27 and November 1, while half a million more were suspended.

    “Sometimes you’ll get notifications that are a little off,” says one engineer currently working at Twitter, who’s concerned about the way the platform is reacting after vast swathes of his colleagues who were previously employed to keep the site running smoothly were fired. (That last sentence is why the engineer has been granted anonymity to talk for this story.) After struggling with downtime during its “Fail Whale” days, Twitter eventually became lauded for its team of site reliability engineers, or SREs. Yet this team has been decimated in the aftermath of Musk’s takeover. “It’s small things, at the moment, but they do really add up as far as the perception of stability,” says the engineer.

    The small suggestions of something wrong will amplify and multiply as time goes on, he predicts—in part because the skeleton staff remaining to handle these issues will quickly burn out. “Round-the-clock is detrimental to quality, and we’re already kind of seeing this,” he says.

    Twitter’s remaining engineers have largely been tasked with keeping the site stable over the last few days, since the new CEO decided to get rid of a significant chunk of the staff maintaining its code base. As the company tries to return to some semblance of normalcy, more of their time will be spent addressing Musk’s (often taxing) whims for new products and features, rather than keeping what’s already there running.

    This is particularly problematic, says Krueger, for a site like Twitter, which can have unforeseen spikes in user traffic and interest. Krueger contrasts Twitter with online retail sites, where companies can prepare for big traffic events like Black Friday with some predictability. “When it comes to Twitter, they have the possibility of having a Black Friday on any given day at any time of the day,” he says. “At any given day, some news event can happen that can have significant impact on the conversation.” Responding to that is harder to do when you lay off up to 80% of your SREs—a figure Krueger says has been bandied about within the industry but which MIT Technology Review has been unable to confirm. The Twitter engineer agreed that the percentage sounded “plausible.”

    That engineer doesn’t see a route out of the issue—other than reversing the layoffs (which the company has reportedly already attempted to roll back somewhat). “If we’re going to be pushing at a breakneck pace, then things will break,” he says. “There’s no way around that. We’re accumulating technical debt much faster than before—almost as fast as we’re accumulating financial debt.”
    The list grows longer

    He presents a dystopian future where issues pile up as the backlog of maintenance tasks and fixes grows longer and longer. “Things will be broken. Things will be broken more often. Things will be broken for longer periods of time. Things will be broken in more severe ways,” he says. “Everything will compound until, eventually, it’s not usable.”

    Twitter’s collapse into an unusable wreck is some time off, the engineer says, but the telltale signs of process rot are already there. It starts with the small things: “Bugs in whatever part of whatever client they’re using; whatever service in the back end they’re trying to use. They’ll be small annoyances to start, but as the back-end fixes are being delayed, things will accumulate until people will eventually just give up.”

    Krueger says that Twitter won’t blink out of life, but we’ll start to see a greater number of tweets not loading, and accounts coming into and out of existence seemingly at a whim. “I would expect anything that’s writing data on the back end to possibly have slowness, timeouts, and a lot more subtle types of failure conditions,” he says. “But they’re often more insidious. And they also generally take a lot more effort to track down and resolve. If you don’t have enough engineers, that’s going to be a significant problem.”

    The juddering manual retweets and faltering follower counts are indications that this is already happening. Twitter engineers have designed fail-safes that the platform can fall back on so that the functionality doesn’t go totally offline but cut-down versions are provided instead. That’s what we’re seeing, says Krueger.

    Alongside the minor malfunctions, the Twitter engineer believes that there’ll be significant outages on the horizon, thanks in part to Musk’s drive to reduce Twitter’s cloud computing server load in an attempt to claw back up to $3 million a day in infrastructure costs. Reuters reports that this project, which came from Musk’s war room, is called the “Deep Cuts Plan.” One of Reuters’s sources called the idea “delusional,” while Alan Woodward, a cybersecurity professor at the University of Surrey, says that “unless they’ve massively overengineered the current system, the risk of poorer capacity and availability seems a logical conclusion.”
    Brain drain

    Meanwhile, when things do go kaput, there’s no longer the institutional knowledge to quickly fix issues as they arise. “A lot of the people I saw who were leaving after Friday have been there nine, 10, 11 years, which is just ridiculous for a tech company,” says the Twitter engineer. As those individuals walked out of Twitter offices, decades of knowledge about how its systems worked disappeared with them. (Those within Twitter, and those watching from the sidelines, have previously argued that Twitter’s knowledge base is overly concentrated in the minds of a handful of programmers, some of whom have been fired.)

    To be fair, it was already aging out of relevance before Musk took over.

    Unfortunately, teams stripped back to their bare bones (according to those remaining at Twitter) include the tech writers’ team. “We had good documentation because of [that team],” says the engineer. No longer. When things go wrong, it’ll be harder to find out what has happened.

    Getting answers will be harder externally as well. The communications team has been cut down from between 80 and 100 people to just two, according to one former team member whom MIT Technology Review spoke to. “There’s too much for them to do, and they don’t speak enough languages to deal with the press as they need to,” says the engineer.

    When MIT Technology Review reached out to Twitter for this story, the email went unanswered.

    Musk’s recent criticism of Mastodon, the open-source alternative to Twitter that has piled on users in the days since the entrepreneur took control of the platform, invites the suggestion that those in glass houses shouldn’t throw stones. The Twitter CEO tweeted, then quickly deleted, a post telling users, “If you don’t like Twitter anymore, there is awesome site [sic] called Masterbatedone [sic].” Accompanying the words was a photo of his laptop screen, open to Paul Krugman’s Mastodon profile and showing the economics columnist trying multiple times to post. Despite Musk’s attempt to highlight Mastodon’s unreliability, its success has been remarkable: nearly half a million people have signed up since Musk took over Twitter.

    It’s happening at the same time that the first cracks in Twitter’s edifice are starting to show. It’s just the beginning, expects Krueger. “I would expect to start seeing significant public-facing problems with the technology within six months,” he says. “And I feel like that’s a generous estimate.”
    by Chris Stokel-Walker

    #Twitter #Equipe_technique

  • The smart city is a perpetually unrealized utopia | MIT Technology Review
    https://www.technologyreview.com/2022/06/24/1053969/smart-city-unrealized-utopia/?truid=a497ecb44646822921c70e7e051f7f1a

    While urban theorists somewhat myopically trace the concept of the “smart city” back to the 1990s, when IBM arguably first coined the term, the research of Los Angeles’s Community Analysis Bureau (CAB) represents one of the earliest large-scale efforts to model the urban environment through “big data.” Utilizing a combination of computerized data gathering and storage, statistical cluster analysis techniques, aerial-based color infrared photography (what we today call remote sensing), and direct “on the ground” (i.e., driving around the city) validation of the aerial images, the CAB’s analysis was decidedly different from previous attempts. The CAB partitioned the city into clusters representing social-geographic features that sound straight out of today’s social media playbook: “LA singles,” “the urban poor,” “1950s-styled suburbs.” What the cluster analysis truly revealed were correlations between socioeconomic forces that could be used as predictors for which neighborhoods were falling into poverty and “urban blight.”
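
    The kind of cluster analysis described above can be approximated in a few lines of Python. The feature names and numbers below are invented stand-ins, not the CAB’s actual data; this is only a sketch of the general technique.

        # Hypothetical sketch of statistical cluster analysis on neighborhood data,
        # loosely in the spirit of the approach described above. Values are invented.
        import numpy as np
        from sklearn.cluster import KMeans

        # rows: neighborhoods; columns: median income, vacancy rate, median building age
        neighborhoods = np.array([
            [82000, 0.03, 12],
            [31000, 0.18, 55],
            [45000, 0.11, 40],
            [90000, 0.02, 8],
            [28000, 0.21, 60],
        ])

        # normalize each column so no single feature dominates the distance metric
        normalized = (neighborhoods - neighborhoods.mean(axis=0)) / neighborhoods.std(axis=0)

        # group neighborhoods into clusters of similar socioeconomic profiles
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(normalized)
        print(labels)  # e.g. [0 1 1 0 1]: the kind of grouping a planner might then label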

    Though innovative for the time, the CAB’s harnessing of punch cards and computer-based databases was not an isolated endeavor. It was part of a much larger set of postwar experiments focused on reimagining the urban through computational processes. The urban theorist Kevin Lynch’s 1960 Image of the City spurred years of research into cognitive science on how we map typological elements in urban space (paths, edges, nodes, districts, and landmarks). Cyberneticians such as Jay Forrester at MIT sought to apply complex systems dynamics by way of computer simulations to understand the feedback loops within urban development, involving everything from population and housing to the influence of industry on growth. With Forrester, Lynch, and others, the foundations for smart cities were being laid, just as sensing and computing were entering into the public consciousness.

    The contemporary vision of the smart city is by now well known. It is, in the words of IBM, “one of instrumentation, interconnectedness, and intelligence.” “Instrumentation” refers to sensor technologies, while “interconnectedness” describes the integration of sensor data into computational platforms “that allow the communication of such information among various city services.” A smart city is only as good as the imagined intelligence that it either produces or extracts. The larger question, however, is what role human intelligence has in the network of “complex analytics, modeling, optimization, visualization services, and last but certainly not least, AI” that IBM announced. The company actually trademarked the term “smarter cities” in November 2011, underlining the reality that such cities would no longer fully belong to those who inhabited them.

    When we assume that data is more important than the people who created it, we reduce the scope and potential of what diverse human bodies can bring to the “smart city” of the present and future. But the real “smart” city consists not only of commodity flows and information networks generating revenue streams for the likes of Cisco or Amazon. The smartness comes from the diverse human bodies of different genders, cultures, and classes whose rich, complex, and even fragile identities ultimately make the city what it is.

    Chris Salter is an artist and professor of immersive arts at the Zurich University of the Arts. His newest book, Sensing Machines: How Sensors Shape Our Everyday Life, has just been published by MIT Press.

    #Smart_cities #Senseurs #Réseaux #Urbanisme