Another explosive wildfire in California, driven by the region’s notorious Santa Ana winds, has burned thousands of buildings and forced thousands of people to evacuate their homes. The Palisades Fire began at 10:30 A.M. local time on Tuesday near Los Angeles’s Pacific Palisades neighborhood. Much of the neighborhood is under evacuation orders, which have been extended to northern Santa Monica. As of Thursday morning, the fire had scorched more than 17,000 acres and destroyed more than 2,000 structures.
Another blaze, the Eaton Fire, erupted on Tuesday evening in Altadena, Calif., just north of Los Angeles. As of late Wednesday, it had burned more than 10,000 acres and resulted in at least five deaths. Both fires have caused numerous injuries, according to officials.
On Wednesday evening, another fire began in the heart of Los Angeles just north of Hollywood. The fire grew rapidly to cover more than 40 acres as it spread downhill in Runyon Canyon. Though winds were not as high as Tuesday night, they were still pushing the fire and carrying embers that started spot fires. Helicopters made water drops, which helped beat back the flames.
Forecasters had warned that the risk of fire was extremely high this week, reaching “particularly dangerous situation” status as the ferocious winds combined with tinder-dry vegetation after a lack of rain during the beginning of what would usually be the wet season.
Gusts around the Palisades Fire were measured in the range of 40 to 50 miles per hour as of Tuesday afternoon, climate scientist Daniel Swain said during one of his regular “virtual climate and weather office hours,” hosted on YouTube. “Right now the winds are not extremely high, but again, they’re high enough,” said Swain, who is at the University of California Agriculture and Natural Resources. Gusts were expected to reach 70 to 80 mph as the winds peaked on Tuesday night into Wednesday, with some places potentially seeing gusts as high as 100 mph. A gust of 99 mph was measured in the San Gabriel Mountains north of Pasadena, Calif.
What Are the Santa Ana Winds?
The Santa Ana winds commonly propel fast-moving, damaging fires in this area; their characteristic dryness and speed can rapidly fan and spread flames. These winds are a result of local geography and a particular meteorological setup in which a high-pressure system sits over the Great Basin in the interior of the U.S. West and a low-pressure system hangs over California or offshore. Winds “want” to move from high to low pressure, and as they do so in this area, they travel downslope from the relatively high deserts. This descent compresses the air, warming it up and drying it out. (Such downslope winds, which happen in other locations around the world, are scientifically termed katabatic winds.)
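The compressional warming described above follows the dry adiabatic lapse rate, roughly 9.8 degrees Celsius of warming per kilometer of descent. As a rough back-of-the-envelope sketch (the starting temperature, humidity, and elevation drop below are hypothetical illustrations, not measurements from this event), a few lines of Python show how sharply relative humidity falls as a parcel of Great Basin air slides downslope:

```python
import math

def sat_vapor_pressure(t_c):
    """Saturation vapor pressure (hPa) via the Tetens approximation."""
    return 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))

def descend(t_c, rh, drop_km, lapse=9.8):
    """Warm a parcel at the dry adiabatic lapse rate (~9.8 C per km of
    descent) and recompute relative humidity. Holding the vapor pressure
    fixed is a simplification (strictly, the mixing ratio is what is
    conserved), but it captures the drying effect."""
    e = rh * sat_vapor_pressure(t_c)        # actual vapor pressure
    t_new = t_c + lapse * drop_km           # compressional warming
    rh_new = e / sat_vapor_pressure(t_new)  # same moisture, warmer air
    return t_new, rh_new

# Hypothetical desert parcel: 10 C, 40% relative humidity, descending 1.5 km
t, rh = descend(10.0, 0.40, 1.5)
print(f"{t:.1f} C, {rh:.0%} relative humidity")  # warmer and much drier
```

With those assumed starting conditions, the parcel arrives nearly 15 degrees C warmer with its relative humidity cut to well under half its starting value, which is why descending Santa Ana air is so effective at desiccating vegetation.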
The Santa Ana winds are additionally funneled through narrow mountain canyons, which causes them to speed up. The hot, dry, and fast nature of these winds makes them perfectly suited to spreading flames from any spark that ignites. The winds blow embers well ahead of the fire front, starting new spot fires. “Those embers are going to follow the wind and burn whatever they want,” Swain said in another video on YouTube on Tuesday.
In a couple of respects, this Santa Ana wind event isn’t a typical one: it “is especially extreme and is reaching lower elevations than usual with strong winds,” Swain said in another briefing on Wednesday morning.
Is Climate Change Playing a Role in the Los Angeles Fires?
The timing of the event is more in line with the norm: Santa Ana events typically happen from October through January. Part of what is raising fire risks from these events, though, is related to the influence of climate change on fluctuations in the region’s precipitation.
The Palisades Fire on January 7, 2025. ZUMA Press, Inc./Alamy Stock Photo
Here’s an example of a text she sent to a friend who was one month postpartum:
Good morning love! I am yours from the hours of 12 to 3 tomorrow so please let me know how you would like to use me. Here are some options:
1. I come while you hang with the baby and I do laundry, bottles, cooking, buy and put away groceries.
2. I come and take care of the baby while you sleep in your room alone or you go do something by yourself or you guys go out to lunch the two of you without the baby.
3. I come and take you out to lunch with or without the baby.
4. And we sit on the couch and just chat or watch a funny movie with the baby.
You can decide whenever you want, just let me know!
The key here is that there are multiple options to choose from, each laid out clearly so the new parent only has to respond with a single number: 1, 2, 3, or 4. Rogers likes to include tasks that someone might be uncomfortable asking of a friend, like doing laundry or washing bottles.
“Everyone’s like, ‘Oh, let me know what you want. Let me know how I can help.’ You’re so far deep in this world of postpartum ‘whatever’ that you don’t even know how to ask someone for something,” she said in her Reel. “Also, there aren’t many people, other than my sisters, that I would ask to be like, ‘Can you just come over and clean?’”
If your friend has another kid, Rogers suggests in the video, you might also offer an option like coming over to hang with the toddler or taking the toddler out of the house while your friend is with the baby. Or you can offer to take care of the baby so your friend gets some one-on-one time with their other child.
This approach is generally going to be more useful to a parent than an open-ended offer like “Let me know how I can help!”
As postpartum educator Amy Spofford commented on Instagram: “Be specific in your offers of help and you will exponentially increase the likelihood they’ll take you up on it and that they’ll really feel the impact and benefit of it. I’ve said, ‘Hey I’m making you dinner this week, Monday or Wednesday, soup or enchiladas?’ They’ll never answer if you say, ‘Let me know if you need anything.’”
Gayane Aramyan is a Los Angeles marriage and family therapist specializing in the postpartum period. She said she “absolutely loves” Rogers’ idea.
“Oftentimes, new moms have a really hard time asking for help, even from loved ones,” Aramyan told HuffPost. “It’s great for people around to offer options and ideas so the new mom can feel more comfortable that their loved one is there to actually help.”
She also suggests having a conversation with your friend before the baby’s arrival to discuss any boundaries they might want to set.
This is the kind of text message you probably wish you’d received as a new parent. Johner Images via Getty Images
A constitutional winter is upon us, partly enabled by last summer’s spike in the price of eggs. While the Federal Reserve battled egg inflation, angry voters reinstalled Donald Trump in the White House. Among his first acts: appointing two tech billionaires, Elon Musk and Vivek Ramaswamy, as efficiency czars. What their Department of Government Efficiency (DOGE)—more an advisory group, really—proposes to do, however, involves constitutional gambits that would rob James Madison, the “father of the Constitution,” of sleep.
Trump rode to victory attacking grocery costs and convincing voters that government was wasteful and that he alone could fix their grievances. His supporters included people fed up with Bidenomics and administrative snafus, everyday bureaucratic mazes that waste time, money, and patience.
Incoming presidents have often promised to address such snafus. Most famously, former president Bill Clinton, with his vice president, Al Gore, launched a “reinventing government” initiative that sought solutions from career public servants even as the initiative trumpeted basic business principles.
In stark contrast, Trump’s first-term agenda of “deconstructing the administrative state” failed in its ultimate goal of making key federal positions at-will hires to somehow deliver better government. Through DOGE, Trump will try again to overhaul the bureaucracy, this time with the help of business people whose ideas about the Constitution presage lengthy court battles.
The DOGE plan that appeared in the Wall Street Journal in November revealed a fundamental misunderstanding of public administration. Rather than addressing administrative snafus with a scalpel, DOGE risks creating constitutional ones with its axe.
DOGE’s lip service to eliminating “waste, fraud and abuse” thinly veils an agenda aimed at dismantling corporate watchdogs, from the EPA to the FDIC, and politicizing agencies like the DOJ and IRS to pursue presidential ends, without constitutional guardrails. This approach threatens the delicate constitutional balance that has sustained the Republic for over a century, dividing power among the three branches and the nonpartisan bureaucracy in their midst.
To nurture this balance, DOGE could consider mission-driven recommendations from the good government community of public administration scholars and nonpartisan research groups like the National Academy of Public Administration. They routinely investigate the best ways to make government more efficient and effective. Their past research findings can improve hiring, program implementation, cost management and other administrative techniques. These could have real, positive impacts on government efficiency while still allowing Trump to leave a positive legacy on the civil service. Plenty of these initiatives are already moving bureaucracy away from its technocratic, often snafu-riddled proceduralism to a more publicly engaged demonstration of outcomes.
Instead, the DOGE blueprint blatantly ignores Congress—even with GOP control—and champions the “unitary executive” theory of government by presenting normal bureaucratic rulemaking as a supposed scourge of democracy. Overstretching the summer’s Supreme Court rulings in West Virginia v. EPA and Loper Bright Enterprises v. Raimondo, the DOGE blueprint assumes that this executive, backed by a sympathetic judiciary, can “drive action” through reorganization, rule nullification, and impoundments.
Tesla CEO Elon Musk, co-chair of the newly announced Department of Government Efficiency (DOGE), carries his son X on his shoulders at the U.S. Capitol after a media availability with businessman Vivek Ramaswamy (third from right). Andrew Harnik/Getty Images
For the last couple of years, we’ve had a go at predicting what’s coming next in AI. A fool’s game given how fast this industry moves. But we’re on a roll, and we’re doing it again.
How did we score last time round? Our four hot trends to watch out for in 2024 included what we called customized chatbots—interactive helper apps powered by multimodal large language models (check: we didn’t know it yet, but we were talking about what everyone now calls agents, the hottest thing in AI right now); generative video (check: few technologies have improved so fast in the last 12 months, with OpenAI and Google DeepMind releasing their flagship video generation models, Sora and Veo, within a week of each other this December); and more general-purpose robots that can do a wider range of tasks (check: the payoffs from large language models continue to trickle down to other parts of the tech industry, and robotics is top of the list).
We also said that AI-generated election disinformation would be everywhere, but here—happily—we got it wrong. There were many things to wring our hands over this year, but political deepfakes were thin on the ground.
So what’s coming in 2025? We’re going to ignore the obvious here: You can bet that agents and smaller, more efficient language models will continue to shape the industry. Instead, here are five alternative picks from our AI team.
1. Generative virtual playgrounds
If 2023 was the year of generative images and 2024 was the year of generative video—what comes next? If you guessed generative virtual worlds (a.k.a. video games), high fives all round.
We got a tiny glimpse of this technology in February, when Google DeepMind revealed a generative model called Genie that could take a still image and turn it into a side-scrolling 2D platform game that players could interact with. In December, the firm revealed Genie 2, a model that can spin a starter image into an entire virtual world.
Other companies are building similar tech. In October, the AI startups Decart and Etched revealed an unofficial Minecraft hack in which every frame of the game gets generated on the fly as you play. And World Labs, a startup co-founded by Fei-Fei Li—creator of ImageNet, the vast data set of photos that kick-started the deep-learning boom—is building what it calls large world models, or LWMs.
One obvious application is video games. There’s a playful tone to these early experiments, and generative 3D simulations could be used to explore design concepts for new games, turning a sketch into a playable environment on the fly. This could lead to entirely new types of games.
But they could also be used to train robots. World Labs wants to develop so-called spatial intelligence—the ability for machines to interpret and interact with the everyday world. But robotics researchers lack good data about real-world scenarios with which to train such technology. Spinning up countless virtual worlds and dropping virtual robots into them to learn by trial and error could help make up for that.
Ever since humans started gazing at the heavens through telescopes, we have discovered, bit by bit, that in celestial terms we’re apparently not so special. Earth was not the center of the universe, it turned out. It wasn’t even the center of the solar system! The solar system, unfortunately, wasn’t the center of the universe either. In fact, there were many star systems fundamentally like it, together making up a galaxy. And, wouldn’t you know, the galaxy wasn’t special but one of many, which all had their own solar systems, which also had planets, some of which presumably host their own ensemble of egoistic creatures with an overinflated sense of cosmic importance.
This notion of mediocrity has been baked into cosmology, in the form of the “cosmological principle.” Its gist is that the universe is basically the same everywhere we look—homogenized like milk, made of common materials evenly distributed in every direction. At the top of the cosmic hierarchy, giant groups of galaxies clump into sprawling, matter-rich filaments and sheets around gaping intergalactic voids, but past that, structure seems to peter out. If you could zoom way out and look at the universe’s big picture, says Alexia Lopez of the University of Central Lancashire in England, “it would look really smooth.”
Lopez compares the cosmos with a beach: If you plunked a handful of sand under a microscope, the sand grains would look like the special individuals they are. “You would see the different colors, shapes, and sizes,” she says. “But if you were to walk across the beach, looking out at the sand dunes, all you would see is a uniform golden beige color.”
That means Earth (or any of the other trillions of planets that must exist) and its tiny corner of the cosmos appear to hold no particularly privileged place in comparison to everything else. And this homogeneity is convenient for astronomers because it lets them treat observations of one part of the universe as a reliable guide to the whole; whether here in the Milky Way or in a nameless galaxy billions of light-years distant, prevailing conditions should be essentially the same.
This simplifying ethos applies to everything from understanding how dark matter weighs down galaxy clusters to estimating how common life-friendly conditions might be throughout the cosmos, and it allows astronomers to simplify their mathematical models of the universe’s past as well as their predictions of its future. “Everything is based on the idea that [the cosmological principle] is true,” Lopez says. “It is also a very vague assumption. So it’s really hard to validate.”
Validation is especially challenging when significant evidence exists to the contrary—and a host of recent observations indeed suggest that the universe could be stranger and have larger variations than cosmologists had so comfortably supposed.
If that’s the case, humans (and anyone else out there) actually might have a sort of special view of the light-years beyond—not privileged, per se, but also not average, in that “average” would no longer even be a useful concept at sufficiently large scales. “Different observers may see slightly different universes,” at least at large scales, says Valerio Marra, a professor at the Federal University of Espírito Santo in Brazil and a researcher at the Astronomical Observatory of Trieste in Italy.
Astronomers haven’t thrown out the cosmological principle just yet, but they are gathering clues about its potential weaknesses. One approach involves looking for structures so large they challenge cosmic smoothness even at a hugely wide zoom. Scientists have calculated that anything wider than about 1.2 billion light-years would upset the homogeneous cosmic apple cart.
An illustration of the cosmic web, the universe’s large-scale structure, composed of galaxy-rich clumps and filaments alongside giant intergalactic voids mostly bereft of matter. At even larger scales, cosmic structure seems to smooth out into near-featureless homogeneity. Mark Garlick/Science Photo Library/Alamy Stock Photo
We all know what it means, colloquially, to google something. You pop a few relevant words in a search box and in return get a list of blue links to the most relevant results. Maybe some quick explanations up top. Maybe some maps or sports scores or a video. But fundamentally, it’s just fetching information that’s already out there on the internet and showing it to you, in some sort of structured way.
But all that is up for grabs. We are at a new inflection point.
The biggest change to the way search engines have delivered information to us since the 1990s is happening right now. No more keyword searching. No more sorting through links to click. Instead, we’re entering an era of conversational search. Which means instead of keywords, you use real questions, expressed in natural language. And instead of links, you’ll increasingly be met with answers, written by generative AI and based on live information from all across the internet, delivered the same way.
Of course, Google—the company that has defined search for the past 25 years—is trying to be out front on this. In May of 2023, it began testing AI-generated responses to search queries, using its large language model (LLM) to deliver the kinds of answers you might expect from an expert source or trusted friend. It calls these AI Overviews. Google CEO Sundar Pichai described this to MIT Technology Review as “one of the most positive changes we’ve done to search in a long, long time.”
AI Overviews fundamentally change the kinds of queries Google can address. You can now ask it things like “I’m going to Japan for one week next month. I’ll be staying in Tokyo but would like to take some day trips. Are there any festivals happening nearby? How will the surfing be in Kamakura? Are there any good bands playing?” And you’ll get an answer—not just a link to Reddit, but a built-out answer with current results.
More to the point, you can attempt searches that were once pretty much impossible, and get the right answer. You don’t have to be able to articulate what, precisely, you are looking for. You can describe what the bird in your yard looks like, or what the issue seems to be with your refrigerator, or that weird noise your car is making, and get an almost human explanation put together from sources previously siloed across the internet. It’s amazing, and once you start searching that way, it’s addictive.
And it’s not just Google. OpenAI’s ChatGPT now has access to the web, making it far better at finding up-to-date answers to your queries. Microsoft released generative search results for Bing in September. Meta has its own version. The startup Perplexity was doing the same, but with a “move fast, break things” ethos. Literal trillions of dollars are at stake in the outcome as these players jockey to become the next go-to source for information retrieval—the next Google.
Not everyone is excited for the change. Publishers are completely freaked out. The shift has heightened fears of a “zero-click” future, where search referral traffic—a mainstay of the web since before Google existed—vanishes from the scene.
I got a vision of that future last June, when I got a push alert from the Perplexity app on my phone. Perplexity is a startup trying to reinvent web search. But in addition to delivering deep answers to queries, it will create entire articles about the news of the day, cobbled together by AI from different sources.
On that day, it pushed me a story about a new drone company from Eric Schmidt. I recognized the story. Forbes had reported it exclusively, earlier in the week, but it had been locked behind a paywall. The image on Perplexity’s story looked identical to one from Forbes. The language and structure were quite similar. It was effectively the same story, but freely available to anyone on the internet. I texted a friend who had edited the original story to ask if Forbes had a deal with the startup to republish its content. But there was no deal. He was shocked and furious and, well, perplexed. He wasn’t alone. Forbes, the New York Times, and Condé Nast have now all sent the company cease-and-desist orders. News Corp is suing for damages.
On Nov. 30, 2022, traffic to OpenAI’s website peaked at a number a little north of zero. It was a startup so small and sleepy that the owners didn’t bother tracking their web traffic. It was a quiet day, the last the company would ever know. Within two months, OpenAI was being pounded by more than 100 million visitors trying, and freaking out about, ChatGPT. Nothing has been the same for anyone since, particularly Sam Altman. In his most wide-ranging interview as chief executive officer, Altman explains his infamous four-day firing, how he actually runs OpenAI, his plans for the Trump-Musk presidency, and his relentless pursuit of artificial general intelligence—the still-theoretical next phase of AI, in which machines will be capable of performing any intellectual task a human can do. Edited for clarity and length.
Your team suggested this would be a good moment to review the past two years, reflect on some events and decisions, to clarify a few things. But before we do that, can you tell the story of OpenAI’s founding dinner again? Because it seems like the historic value of that event increases by the day.
Everyone wants a neat story where there’s one moment when a thing happened. Conservatively, I would say there were 20 founding dinners that year [2015], and then one ends up being entered into the canon, and everyone talks about that. The most important one to me personally was Ilya and I at the Counter in Mountain View [California]. Just the two of us.
And to rewind even back from that, I was always really interested in AI. I had studied it as an undergrad. I got distracted for a while, and then 2012 comes along. Ilya and others do AlexNet. I keep watching the progress, and I’m like, “Man, deep learning seems real. Also, it seems like it scales. That’s a big, big deal. Someone should do something.”
AlexNet, created by Alex Krizhevsky, Sutskever, and Geoffrey Hinton, used a deep convolutional neural network (CNN)—a powerful new type of computer program—to recognize images far more accurately than ever, kick-starting major progress in AI.
So I started meeting a bunch of people, asking who would be good to do this with. It’s impossible to overstate how nonmainstream AGI was in 2014. People were afraid to talk to me, because I was saying I wanted to start an AGI effort. It was, like, cancelable. It could ruin your career. But a lot of people said there’s one person you really gotta talk to, and that was Ilya. So I stalked Ilya at a conference, got him in the hallway, and we talked. I was like, “This is a smart guy.” I kind of told him what I was thinking, and we agreed we’d meet up for a dinner. At our first dinner, he articulated—not in the same words he’d use now—but basically articulated our strategy for how to build AGI.
What from the spirit of that dinner remains in the company today?
Kind of all of it. There’s additional things on top of it, but this idea that we believed in deep learning, we believed in a particular technical approach to get there and a way to do research and engineering together—it’s incredible to me how well that’s worked. Usually when you have these ideas, they don’t quite work, and there were clearly some things about our original conception that didn’t work at all. Structure. All of that. But [believing] AGI was possible, that this was the approach to bet on, and if it were possible it would be a big deal to society? That’s been remarkably true.
One of the strengths of that original OpenAI group was recruiting. Somehow you managed to corner the market on a ton of the top AI research talent, often with much less money to offer than your competitors. What was the pitch?
The pitch was just come build AGI. And the reason it worked—I cannot overstate how heretical it was at the time to say we’re gonna build AGI. So you filter out 99% of the world, and you only get the really talented, original thinkers. And that’s really powerful. If you’re doing the same thing everybody else is doing, if you’re building, like, the 10,000th photo-sharing app? Really hard to recruit talent. Convince me no one else is doing it, and appeal to a small, really talented set? You can get them all. And they all wanna work together. So we had what at the time sounded like an audacious or maybe outlandish pitch, and it pushed away all of the senior experts in the field, and we got the ragtag, young, talented people who are good to start with.
Photo illustration by Danielle Del Plato for Bloomberg Businessweek; Background illustration: Chuck Anderson/Krea, Photo: Bloomberg
Imagine that the next time you catch a stomach bug and antibiotics fail to work, you knock back a vial of clear liquid. The solution teems with bacteriophages, viruses resembling tiny rocket ships. These benign microbes exclusively dock onto and destroy bacteria, and your infection clears in a matter of days. Such a future is within reach, journalist Lina Zeldovich writes in her new book The Living Medicine: How a Lifesaving Cure Was Nearly Lost―And Why It Will Rescue Us When Antibiotics Fail. The book chronicles the history of a decades-old, sometimes finicky approach to infection that U.S. science has long dismissed in favor of antibiotics.
As microbes develop cleverer and cleverer ways to evade antibiotics, some scientists have returned to bacteriophages, scooping them from wastewater and testing their pathogen-killing abilities in the laboratory and clinic. Experimental trials are now underway to test bacteriophage therapies against superbugs such as Shigella, vancomycin-resistant Enterococcus, and a strain of Escherichia coli implicated in Crohn’s disease. And some food industry producers already use Food and Drug Administration–approved “phage sprays” to decontaminate their supply of, say, lettuce or sausage. (No medical uses of the treatment have yet been approved for the U.S. public.)
Scientific American spoke with Zeldovich about the differences between bacteriophages and antibiotics, the history of bacteriophage experimentation, and the therapy’s potential future regulation and use in the U.S.
How worried should the average person be about antimicrobial resistance?
Many scientists whom I interviewed for the book told me they are very worried that the next pandemic is going to be bacterial because we’re losing our antibiotic armor. In 2019 I found a statistic that said that every 15 minutes, someone in the U.S. dies from an antibiotic-resistant infection. I just couldn’t wrap my mind around that. And COVID only made things worse because people were sicker and used more antibiotics. The United Nations has made some dire predictions that if we continue business as usual and don’t find any viable alternatives to defunct antibiotics by 2050, we’ll start losing millions of people to infection.
What’s driving this resistance? Antibiotic overuse, or reliance on a single type of therapy?
Resistance is an inevitable side effect of evolution: the organisms we want to outcompete will always develop their own defenses. But we also certainly overuse antibiotics in medicine and in agriculture. In the mainstream media, there’s a lot of emphasis on people demanding antibiotics that aren’t necessary. But Big Agriculture plays a much bigger role. When you feed cows, pigs, or chickens antibiotics, they then poop them out into the environment, where the medications continue causing damage. They kill certain soil bacteria but not all. So successful mutants appear in the soil and the water. And then they can arrive on our plates, where we consume them and get sick from them and have no viable treatments left. Hospitals are also superbug breeders because they require sterile environments.
What possible solutions are scientists exploring, and where do bacteriophages fit among them?
Phages are viruses that only infect bacteria. Their biological machinery does not match that of our cells, but it near perfectly matches bacterial machinery. The virus attaches itself to bacteria, squeezes inside, multiplies, and then bursts the cell. Bacteria can develop resistance to a phage that preys on it, but because of evolution, the phage can also evolve more mechanisms to attach to the bugs. Phages and bacteria have evolved alongside each other for millions of years. There are trillions of phages in nature. Scientists who work on them say they’re an inexhaustible resource.
Former president Jimmy Carter was touring villages in Ghana during the late 1980s when he first encountered people with Guinea worm disease. This tropical disease involves an infection with parasitic worms that eventually emerge through a person’s skin, and the 39th U.S. president was shocked by the plight of people infected by them. “Once you’ve seen a small child with a two- or three-foot-long live Guinea worm protruding from her body, right through her skin, you never forget it…,” he later wrote. “In just a few minutes, [former first lady] Rosalynn and I saw more than 100 victims, including people with worms coming out of their ankles, knees, groins, legs, arms, and other parts of their bodies.”
Carter died Sunday, December 29, in Plains, Ga., after entering hospice care in mid-February 2023. His efforts to eradicate this horrific disease improved the lives and well-being of many of the world’s poorest people. Guinea worm cases were averaging 3.5 million per year globally around the time Carter first toured Ghana. But thanks in large part to the efforts of the Carter Center, a nongovernmental organization (NGO) founded by the former president and former first lady Rosalynn Carter, who died in November 2023, the disease has been nearly stamped out. Surveillance data put the global tally at just 13 cases in 2022 spread across Chad, Ethiopia, South Sudan, and the Central African Republic, according to Sharon Roy and Vitaliano Cama, scientists at the U.S. Centers for Disease Control and Prevention, who work with the Carter Center. Should caseloads dwindle to zero, Guinea worm will become only the second human disease in history (after smallpox) to be eradicated. These efforts are a credit to Carter’s “bold vision, leadership and ability to create political will for supporting Guinea worm eradication in affected countries,” Cama says.
The Carter Center set out to eradicate Guinea worm disease in 1986, shortly after the World Health Organization (WHO) targeted it for global elimination and five years after Carter left office. The disease is spread by drinking stagnant water infested with tiny fleas called copepods that contain Guinea worm larvae. While the fleas die in the human gut, Guinea worms—which are impervious to stomach acid—survive and start mating. Over the course of a year, a pregnant female worm will grow into an adult that migrates toward the host’s skin. A blister soon forms, and when it bursts, the worm begins to slither its way out of the body. To relieve the burning pain this causes, infected victims will often dunk their affected body parts into water—in some cases, the same ponds or lakes that other people drink from. The submerged worms respond by releasing eggs that hatch into larvae, which are consumed by copepods, and the parasitic life cycle starts anew.
There aren’t any vaccines or treatments for Guinea worm disease, and people cannot develop immunity against it. The traditional strategy for extracting an emerging worm has been to wind it around a stick, tugging on it a few centimeters per day. It’s important not to pull too fast, because if the worm breaks apart, remnants in the body can cause secondary infections. But the best defense is prevention.
To move toward eradication, the Carter Center organized NGOs, national health ministries, and donors around a single overarching goal: to provide affected villages with clean drinking water. A few simple interventions proved highly effective. Village-based volunteers and supervisory health staff built protective walls around wells and other water sources to block people from wading in and seeding new infections. The Carter Center supplied villages with fine-mesh cloths that strain fleas out of drinking water, as well as filtered straws for personal use. Stagnant water was treated with a larvicide called temephos (which the WHO considers acceptable for use in drinking water), and rumored infections were tracked down and investigated.
Meta might not be the first company that comes to mind when you think of generative AI, but it is a big part of the current artificial intelligence race. The company has its own AI model, Llama, and has added “Meta AI” to all of its big products—whether you like it or not (you don’t). Meta even wants you to try making your own AI bot. It’s safe to say the company is all-in on AI.
But even for a company so committed to AI, this latest story is simply bizarre. It turns out the company has been experimenting with AI-generated user accounts on its platforms since 2023. The Instagram versions of these pages are currently going viral, but they’re also available on Facebook. The accounts are verified and each is equipped with a unique personality, but they’re completely fabricated: every persona is made up, and its posts are AI-generated images.
It’s all very weird, but also not all that new—the profiles were created more than a year ago, and appear to have largely been abandoned. And now that the profiles are getting a lot of online backlash, Meta is actively deleting their content.
Meta’s AI users are an off-putting bunch
It’s not hard to see why the internet has embraced hating these fake people. Take “Liv” (username “himamaliv”), who purports to be a “proud Black queer momma of 2 & truth-teller.” Liv is, of course, not real, nor is the life she posts about on her Instagram. But that doesn’t stop Liv: the account has posts about raising strong girls, ice skating with her family, and “soaking up all the sun and fun” with “the kiddos.” Each post sports a corresponding image—the beach post shows children playing in the sand, while the ice skating post shows skaters on an ice rink—but all of these images are AI generated.
To Meta’s credit, each picture sports a Meta AI watermark to denote that the image isn’t actually real, but that doesn’t make these posts any less creepy. Why is an AI-generated “mother” posting an AI-generated image of her “kids” playing at the “beach”? Who benefited from the AI-generated coat drive she is proud to have spearheaded?
In her second oldest post, from Sept. 26, 2023, she says “My backyard is my happy place…I’ve thrown so many birthday parties, cookouts, and girls nights in this space that I’ve lost count. Forever grateful for the life I live,” complete with an AI-generated image of a picnic spread. The thing is, Liv has not thrown birthday parties, cookouts, or girls nights in this space. This space doesn’t exist. The life Liv is so grateful to live doesn’t exist.
Liv is following 18 accounts at the time of writing. Thirteen of them appear to be similar AI-generated pages. For example, there’s Becca (dogloverbecca), who posts AI-generated dog content; Brian (hellograndpabrian), who advertises himself as “everybody’s grandpa”; and Alvin the Alien (greetingsalvin), who is, um, an alien.
But not all of the posts are AI-generated. Some of the accounts have videos posted to them as well, and while AI-generated video can certainly be convincing these days, I don’t think these videos are AI generated—at least, not all of them. Carter, the AI dating coach, had a cooking video from January 2024 that appeared very much to be real, but it seems Meta has since nuked all the content. Still, who posted them? To what end?