For the last couple of years, we’ve had a go at predicting what’s coming next in AI. A fool’s game given how fast this industry moves. But we’re on a roll, and we’re doing it again.
How did we score last time round? Our four hot trends to watch out for in 2024 included what we called customized chatbots—interactive helper apps powered by multimodal large language models (check: we didn’t know it yet, but we were talking about what everyone now calls agents, the hottest thing in AI right now); generative video (check: few technologies have improved so fast in the last 12 months, with OpenAI and Google DeepMind releasing their flagship video generation models, Sora and Veo, within a week of each other this December); and more general-purpose robots that can do a wider range of tasks (check: the payoffs from large language models continue to trickle down to other parts of the tech industry, and robotics is top of the list).
We also said that AI-generated election disinformation would be everywhere, but here—happily—we got it wrong. There were many things to wring our hands over this year, but political deepfakes were thin on the ground.
So what’s coming in 2025? We’re going to ignore the obvious here: You can bet that agents and smaller, more efficient language models will continue to shape the industry. Instead, here are five alternative picks from our AI team.
1. Generative virtual playgrounds
If 2023 was the year of generative images and 2024 was the year of generative video—what comes next? If you guessed generative virtual worlds (a.k.a. video games), high fives all round.
We got a tiny glimpse of this technology in February, when Google DeepMind revealed a generative model called Genie that could take a still image and turn it into a side-scrolling 2D platform game that players could interact with. In December, the firm revealed Genie 2, a model that can spin a starter image into an entire virtual world.
Other companies are building similar tech. In October, the AI startups Decart and Etched revealed an unofficial Minecraft hack in which every frame of the game gets generated on the fly as you play. And World Labs, a startup co-founded by Fei-Fei Li—creator of ImageNet, the vast data set of photos that kick-started the deep-learning boom—is building what it calls large world models, or LWMs.
One obvious application is video games. There’s a playful tone to these early experiments, and generative 3D simulations could be used to explore design concepts for new games, turning a sketch into a playable environment on the fly. This could lead to entirely new types of games.
But they could also be used to train robots. World Labs wants to develop so-called spatial intelligence—the ability for machines to interpret and interact with the everyday world. But robotics researchers lack good data about real-world scenarios with which to train such technology. Spinning up countless virtual worlds and dropping virtual robots into them to learn by trial and error could help make up for that.
Ever since humans started gazing at the heavens through telescopes, we have discovered, bit by bit, that in celestial terms we’re apparently not so special. Earth was not the center of the universe, it turned out. It wasn’t even the center of the solar system! The solar system, unfortunately, wasn’t the center of the universe either. In fact, there were many star systems fundamentally like it, together making up a galaxy. And, wouldn’t you know, the galaxy wasn’t special but one of many, which all had their own solar systems, which also had planets, some of which presumably host their own ensemble of egoistic creatures with an overinflated sense of cosmic importance.
This notion of mediocrity has been baked into cosmology, in the form of the “cosmological principle.” Its gist is that the universe is basically the same everywhere we look—homogenized like milk, made of common materials evenly distributed in every direction. At the top of the cosmic hierarchy, giant groups of galaxies clump into sprawling, matter-rich filaments and sheets around gaping intergalactic voids, but past that, structure seems to peter out. If you could zoom way out and look at the universe’s big picture, says Alexia Lopez of the University of Central Lancashire in England, “it would look really smooth.”
Lopez compares the cosmos with a beach: If you plunked a handful of sand under a microscope, the sand grains would look like the special individuals they are. “You would see the different colors, shapes, and sizes,” she says. “But if you were to walk across the beach, looking out at the sand dunes, all you would see is a uniform golden beige color.”
That means Earth (or any of the other trillions of planets that must exist) and its tiny corner of the cosmos appear to hold no particularly privileged place in comparison to everything else. And this homogeneity is convenient for astronomers because it lets them treat one part of the universe as a reliable stand-in for the whole; whether here in the Milky Way or in a nameless galaxy billions of light-years distant, prevailing conditions should be essentially the same.
This simplifying ethos applies to everything from understanding how dark matter weighs down galaxy clusters to estimating how common life-friendly conditions might be throughout the cosmos, and it allows astronomers to simplify their mathematical models of the universe’s past as well as their predictions of its future. “Everything is based on the idea that [the cosmological principle] is true,” Lopez says. “It is also a very vague assumption. So it’s really hard to validate.”
Validation is especially challenging when significant evidence exists to the contrary—and a host of recent observations indeed suggest that the universe could be stranger and have larger variations than cosmologists had so comfortably supposed.
If that’s the case, humans (and anyone else out there) actually might have a sort of special view of the light-years beyond—not privileged, per se, but also not average, in that “average” would no longer even be a useful concept at sufficiently large scales. “Different observers may see slightly different universes,” at least at large scales, says Valerio Marra, a professor at the Federal University of Espírito Santo in Brazil and a researcher at the Astronomical Observatory of Trieste in Italy.
Astronomers haven’t thrown out the cosmological principle just yet, but they are gathering clues about its potential weaknesses. One approach involves looking for structures so large that they would challenge cosmic smoothness even at the widest zoom. Scientists have calculated that anything wider than about 1.2 billion light-years would upset the homogeneous cosmic apple cart.
An illustration of the cosmic web, the universe’s large-scale structure composed of galaxy-rich clumps and filaments alongside giant intergalactic voids mostly bereft of matter. At even larger scales, cosmic structure seems to smooth out into near-featureless homogeneity. Mark Garlick/Science Photo Library/Alamy Stock Photo
We all know what it means, colloquially, to google something. You pop a few relevant words in a search box and in return get a list of blue links to the most relevant results. Maybe some quick explanations up top. Maybe some maps or sports scores or a video. But fundamentally, it’s just fetching information that’s already out there on the internet and showing it to you, in some sort of structured way.
But all that is up for grabs. We are at a new inflection point.
The biggest change to the way search engines have delivered information to us since the 1990s is happening right now. No more keyword searching. No more sorting through links to click. Instead, we’re entering an era of conversational search. Which means instead of keywords, you use real questions, expressed in natural language. And instead of links, you’ll increasingly be met with answers, written by generative AI and based on live information from all across the internet, delivered in the same conversational manner.
Of course, Google—the company that has defined search for the past 25 years—is trying to be out front on this. In May of 2023, it began testing AI-generated responses to search queries, using its large language model (LLM) to deliver the kinds of answers you might expect from an expert source or trusted friend. It calls these AI Overviews. Google CEO Sundar Pichai described this to MIT Technology Review as “one of the most positive changes we’ve done to search in a long, long time.”
AI Overviews fundamentally change the kinds of queries Google can address. You can now ask it things like “I’m going to Japan for one week next month. I’ll be staying in Tokyo but would like to take some day trips. Are there any festivals happening nearby? How will the surfing be in Kamakura? Are there any good bands playing?” And you’ll get an answer—not just a link to Reddit, but a built-out answer with current results.
More to the point, you can attempt searches that were once pretty much impossible, and get the right answer. You don’t have to be able to articulate what, precisely, you are looking for. You can describe what the bird in your yard looks like, or what the issue seems to be with your refrigerator, or that weird noise your car is making, and get an almost human explanation put together from sources previously siloed across the internet. It’s amazing, and once you start searching that way, it’s addictive.
And it’s not just Google. OpenAI’s ChatGPT now has access to the web, making it far better at finding up-to-date answers to your queries. Microsoft released generative search results for Bing in September. Meta has its own version. The startup Perplexity is doing the same, but with a “move fast, break things” ethos. Literal trillions of dollars are at stake in the outcome as these players jockey to become the next go-to source for information retrieval—the next Google.
Not everyone is excited for the change. Publishers are completely freaked out. The shift has heightened fears of a “zero-click” future, where search referral traffic—a mainstay of the web since before Google existed—vanishes from the scene.
I got a vision of that future last June, when I got a push alert from the Perplexity app on my phone. Perplexity is a startup trying to reinvent web search. But in addition to delivering deep answers to queries, it will create entire articles about the news of the day, cobbled together by AI from different sources.
On that day, it pushed me a story about a new drone company from Eric Schmidt. I recognized the story. Forbes had reported it exclusively, earlier in the week, but it had been locked behind a paywall. The image on Perplexity’s story looked identical to one from Forbes. The language and structure were quite similar. It was effectively the same story, but freely available to anyone on the internet. I texted a friend who had edited the original story to ask if Forbes had a deal with the startup to republish its content. But there was no deal. He was shocked and furious and, well, perplexed. He wasn’t alone. Forbes, the New York Times, and Condé Nast have now all sent the company cease-and-desist orders. News Corp is suing for damages.
On Nov. 30, 2022, traffic to OpenAI’s website peaked at a number a little north of zero. It was a startup so small and sleepy that the owners didn’t bother tracking their web traffic. It was a quiet day, the last the company would ever know. Within two months, OpenAI was being pounded by more than 100 million visitors trying, and freaking out about, ChatGPT. Nothing has been the same for anyone since, particularly Sam Altman. In his most wide-ranging interview as chief executive officer, Altman explains his infamous four-day firing, how he actually runs OpenAI, his plans for the Trump-Musk presidency, and his relentless pursuit of artificial general intelligence—the still-theoretical next phase of AI, in which machines will be capable of performing any intellectual task a human can do. Edited for clarity and length.
Your team suggested this would be a good moment to review the past two years, reflect on some events and decisions, to clarify a few things. But before we do that, can you tell the story of OpenAI’s founding dinner again? Because it seems like the historic value of that event increases by the day.
Everyone wants a neat story where there’s one moment when a thing happened. Conservatively, I would say there were 20 founding dinners that year [2015], and then one ends up being entered into the canon, and everyone talks about that. The most important one to me personally was Ilya [1] and I at the Counter in Mountain View [California]. Just the two of us.
And to rewind even back from that, I was always really interested in AI. I had studied it as an undergrad. I got distracted for a while, and then 2012 comes along. Ilya and others do AlexNet. [2] I keep watching the progress, and I’m like, “Man, deep learning seems real. Also, it seems like it scales. That’s a big, big deal. Someone should do something.”
[2] AlexNet, created by Alex Krizhevsky, Sutskever, and Geoffrey Hinton, used a deep convolutional neural network (CNN)—a powerful new type of computer program—to recognize images far more accurately than ever, kick-starting major progress in AI.
So I started meeting a bunch of people, asking who would be good to do this with. It’s impossible to overstate how nonmainstream AGI was in 2014. People were afraid to talk to me, because I was saying I wanted to start an AGI effort. It was, like, cancelable. It could ruin your career. But a lot of people said there’s one person you really gotta talk to, and that was Ilya. So I stalked Ilya at a conference, got him in the hallway, and we talked. I was like, “This is a smart guy.” I kind of told him what I was thinking, and we agreed we’d meet up for a dinner. At our first dinner, he articulated—not in the same words he’d use now—but basically articulated our strategy for how to build AGI.
What from the spirit of that dinner remains in the company today?
Kind of all of it. There’s additional things on top of it, but this idea that we believed in deep learning, we believed in a particular technical approach to get there and a way to do research and engineering together—it’s incredible to me how well that’s worked. Usually when you have these ideas, they don’t quite work, and there were clearly some things about our original conception that didn’t work at all. Structure. [3] All of that. But [believing] AGI was possible, that this was the approach to bet on, and if it were possible it would be a big deal to society? That’s been remarkably true.
One of the strengths of that original OpenAI group was recruiting. Somehow you managed to corner the market on a ton of the top AI research talent, often with much less money to offer than your competitors. What was the pitch?
The pitch was just come build AGI. And the reason it worked—I cannot overstate how heretical it was at the time to say we’re gonna build AGI. So you filter out 99% of the world, and you only get the really talented, original thinkers. And that’s really powerful. If you’re doing the same thing everybody else is doing, if you’re building, like, the 10,000th photo-sharing app? Really hard to recruit talent. Convince me no one else is doing it, and appeal to a small, really talented set? You can get them all. And they all wanna work together. So we had what at the time sounded like an audacious or maybe outlandish pitch, and it pushed away all of the senior experts in the field, and we got the ragtag, young, talented people who are good to start with.
Photo illustration by Danielle Del Plato for Bloomberg Businessweek; Background illustration: Chuck Anderson/Krea, Photo: Bloomberg
Imagine that the next time you catch a stomach bug and antibiotics fail to work, you knock back a vial of clear liquid. The solution teems with bacteriophages, viruses resembling tiny rocket ships. These benign viruses exclusively dock onto and destroy bacteria, and your infection clears in a matter of days. Such a future is within reach, journalist Lina Zeldovich writes in her new book The Living Medicine: How a Lifesaving Cure Was Nearly Lost―And Why It Will Rescue Us When Antibiotics Fail. The book chronicles the history of a decades-old, sometimes finicky approach to infection that U.S. science has long dismissed in favor of antibiotics.
As microbes develop cleverer and cleverer ways to evade antibiotics, some scientists have returned to bacteriophages, scooping them from wastewater and testing their pathogen-killing abilities in the laboratory and clinic. Experimental trials are now underway to test bacteriophage therapies against superbugs such as Shigella, vancomycin-resistant Enterococcus, and a strain of Escherichia coli implicated in Crohn’s disease. And some food industry producers already use Food and Drug Administration–approved “phage sprays” to decontaminate their supply of, say, lettuce or sausage. (No medical uses of the treatment have yet been approved for the U.S. public.)
Scientific American spoke with Zeldovich about the differences between bacteriophages and antibiotics, the history of bacteriophage experimentation, and the therapy’s potential future regulation and use in the U.S.
How worried should the average person be about antimicrobial resistance?
Many scientists whom I interviewed for the book told me they are very worried that the next pandemic is going to be bacterial because we’re losing our antibiotic armor. In 2019 I found a statistic that said that every 15 minutes, someone in the U.S. dies from an antibiotic-resistant infection. I just couldn’t wrap my mind around that. And COVID only made things worse because people were sicker and used more antibiotics. The United Nations has made some dire predictions that if we continue business as usual and don’t find any viable alternatives to defunct antibiotics by 2050, we’ll start losing millions of people to infection.
What’s driving this resistance? Antibiotic overuse, or reliance on a single type of therapy?
Resistance is an inevitable side effect of evolution: the organisms we want to outcompete will always develop their own defenses. But we also certainly overuse antibiotics in medicine and in agriculture. In the mainstream media, there’s a lot of emphasis on people demanding antibiotics that aren’t necessary. But Big Agriculture plays a much bigger role. When you feed cows, pigs, or chickens antibiotics, they then poop them out into the environment, where the medications continue causing damage. They kill certain soil bacteria but not all. So successful mutants appear in the soil and the water. And then they can arrive on our plates, where we consume them and get sick from them and have no viable treatments left. Hospitals are also superbug breeders because they require sterile environments: constant disinfection kills off most bacteria, so only the hardiest, most resistant mutants survive and spread.
What possible solutions are scientists exploring, and where do bacteriophages fit among them?
Phages are viruses that only infect bacteria. Their biological machinery does not match that of our cells, but it near perfectly matches bacterial machinery. The virus attaches itself to a bacterium, squeezes inside, multiplies, and then bursts the cell. Bacteria can develop resistance to a phage that preys on them, but because of evolution, the phage can also evolve new mechanisms to attach to the bugs. Phages and bacteria have evolved alongside each other for millions of years. There are trillions of phages in nature. Scientists who work on them say they’re an inexhaustible resource.
Former president Jimmy Carter was touring villages in Ghana during the late 1980s when he first encountered people with Guinea worm disease. This tropical disease involves an infection with parasitic worms that eventually emerge through a person’s skin, and the 39th U.S. president was shocked by the plight of people infected by them. “Once you’ve seen a small child with a two- or three-foot-long live Guinea worm protruding from her body, right through her skin, you never forget it…,” he later wrote. “In just a few minutes, [former first lady] Rosalynn and I saw more than 100 victims, including people with worms coming out of their ankles, knees, groins, legs, arms, and other parts of their bodies.”
Carter died Sunday, December 29, in Plains, Ga., after entering hospice care in mid-February 2023. His efforts to eradicate this horrific disease improved the lives and well-being of many of the world’s poorest people. Guinea worm cases were averaging 3.5 million per year globally around the time Carter first toured Ghana. But thanks in large part to the efforts of the Carter Center, a nongovernmental organization (NGO) founded by the former president and former first lady Rosalynn Carter, who died in November 2023, the disease has been nearly stamped out. Surveillance data put the global tally at just 13 cases in 2022, spread across Chad, Ethiopia, South Sudan, and the Central African Republic, according to Sharon Roy and Vitaliano Cama, scientists at the U.S. Centers for Disease Control and Prevention, who work with the Carter Center. Should caseloads dwindle to zero, Guinea worm will become only the second human disease in history (after smallpox) to be eradicated. These efforts are a credit to Carter’s “bold vision, leadership and ability to create political will for supporting Guinea worm eradication in affected countries,” Cama says.
The Carter Center set out to eradicate Guinea worm disease in 1986, shortly after the World Health Organization (WHO) targeted it for global elimination and five years after Carter left office. The disease is spread by drinking stagnant water infested with tiny fleas called copepods that contain Guinea worm larvae. While the fleas die in the human gut, Guinea worms—which are impervious to stomach acid—survive and start mating. Over the course of a year, a pregnant female worm will grow into an adult that migrates toward the host’s skin. A blister soon forms, and when it bursts, the worm begins to slither its way out of the body. To relieve the burning pain this causes, infected victims will often dunk their affected body parts into water—in some cases, the same ponds or lakes that other people drink from. The submerged worms respond by releasing eggs that hatch into larvae, which are consumed by copepods, and the parasitic life cycle starts anew.
There aren’t any vaccines or treatments for Guinea worm disease, and people cannot develop immunity against it. The traditional strategy for extracting an emerging worm has been to wind it around a stick, tugging on it a few centimeters per day. It’s important not to pull too fast, because if the worm breaks apart, remnants in the body can cause secondary infections. But the best defense is prevention.
To move toward eradication, the Carter Center organized NGOs, national health ministries, and donors around a single overarching goal: to provide affected villages with clean drinking water. A few simple interventions proved highly effective. Village-based volunteers and supervisory health staff built protective walls around wells and other water sources to block people from wading in and seeding new infections. The Carter Center supplied villages with fine-mesh cloths that strain fleas out of drinking water, as well as filtered straws for personal use. Stagnant water was treated with a larvicide called temephos (which the WHO considers acceptable for use in drinking water), and rumored infections were tracked down and investigated.
Meta might not be the first company that comes to mind when you think of generative AI, but it is a big part of the current artificial intelligence race. The company has its own AI model, Llama, and has added “Meta AI” to all of its big products—whether you like it or not (you don’t). Meta even wants you to try making your own AI bot. It’s safe to say the company is all-in on AI.
But even for a company so committed to AI, this latest story is simply bizarre. It turns out the company has been experimenting with AI-generated user accounts on its platforms since 2023. The Instagram versions of these pages are currently going viral, but they’re also available on Facebook. The accounts are verified, and each is equipped with a unique personality, but they’re completely fraudulent: every profile is entirely made up, with feeds of AI-generated images.
It’s all very weird, but also not all that new—the profiles were created more than a year ago, and appear to have largely been abandoned. And now that the profiles are getting a lot of online backlash, Meta is actively deleting their content.
Meta’s AI users are an off-putting bunch
It’s not hard to see why the internet has embraced hating these fake people. Take “Liv” (username “himamaliv”), who purports to be a “proud Black queer momma of 2 & truth-teller.” Liv is, of course, not real, nor is the life she posts about on her Instagram. But that doesn’t stop Liv: the account has posts about raising strong girls, ice skating with her family, and “soaking up all the sun and fun” with “the kiddos.” Each post sports a corresponding image—the beach post shows children playing in the sand, while the ice skating post shows skaters on an ice rink—but all of these images are AI generated.
To Meta’s credit, each picture sports a Meta AI watermark to denote that the image isn’t real, but that doesn’t make these posts any less creepy. Why is an AI-generated “mother” posting an AI-generated image of her “kids” playing at the “beach”? Who benefited from the AI-generated coat drive she is proud to have spearheaded?
In her second oldest post, from Sept. 26, 2023, she says “My backyard is my happy place…I’ve thrown so many birthday parties, cookouts, and girls nights in this space that I’ve lost count. Forever grateful for the life I live,” complete with an AI-generated image of a picnic spread. The thing is, Liv has not thrown birthday parties, cookouts, or girls nights in this space. This space doesn’t exist. The life Liv is so grateful to live doesn’t exist.
Liv is following 18 accounts at the time of writing. Thirteen of them appear to be similar AI-generated pages. For example, there’s Becca (dogloverbecca), who posts AI-generated dog content; Brian (hellograndpabiran), who advertises himself as “everybody’s grandpa;” and Alvin the Alien (greetingsalvin), who is, um, an alien.
But not all the posts are AI-generated. Some of them have videos posted to their accounts as well, and while AI-generated video can certainly be convincing these days, I don’t think these videos are AI generated—at least, not all of them. Carter, the AI dating coach, had a cooking video from January 2024 that appeared very much to be real, but it seems Meta nuked all the content. Still, who posted them? To what end?
Viral samples from a patient in Louisiana who was hospitalized with severe H5N1 avian influenza show genetic mutations that could make the pathogen spread more easily among humans, the Centers for Disease Control and Prevention announced in a statement issued on Thursday.
The mutations were found in samples taken from the patient—but not in those from the backyard poultry that were believed to be the source of the infection. This suggests the changes occurred within the patient. While this development has not changed the CDC’s official assessment of risk to the general public, it does indicate that the H5N1 virus is capable of adapting to human airways.
“The detection of a severe human case with genetic changes in a clinical specimen underscores the importance of ongoing genomic surveillance in people and animals, containment of avian influenza A(H5) outbreaks in dairy cattle and poultry, and prevention measures among people with exposure to infected animals or environments,” the CDC statement said.
On December 18 the CDC confirmed the patient in Louisiana had been hospitalized with the first known severe H5N1 infection in the U.S. this year. The virus has been spreading among wild birds for several years. It was detected in U.S. dairy cows in March, and it has since infected hundreds of herds across 16 states. The Louisiana patient’s viral sequence matches a different strain of the virus, called D1.1, which has been detected in wild birds and poultry in the U.S.
The mutations seen in the Louisiana patient’s samples are confined to the virus’s hemagglutinin gene, which encodes the protein that helps the virus bind to cells and infect them. These mutations are only rarely seen in people; a few have been reported in severe human cases, all outside of the U.S. One of the changes was detected in viral samples from a teenager in Canada who was hospitalized with a severe H5N1 infection in November. The Louisiana patient’s samples did not show any changes in the N1 neuraminidase section of the virus’s genome or other sections that could make the pathogen less susceptible to antiviral drugs. The sequences are also similar to those of existing H5N1 strains that can be used to make vaccines if needed.
A total of 65 confirmed human H5N1 infections have been detected in the U.S. so far this year. Most have been linked to exposure to infected cattle or poultry, and the majority have been mild. Infections have occurred in several other animals, including pet cats that may have consumed raw milk or meat from sick animals. The virus recently killed more than half of the big cats at a wildlife sanctuary in Washington State.
Against all expectations, Donald Trump won his second term in office. It’s a victory for which he can, in part, thank Gen Z men.
Voting data from The Wall Street Journal indicates that Trump overperformed among Generation Z, born between 1997 and 2012, compared with his marks from 2020 and made far greater gains among men than women.
Despite what many pundits (including myself) thought, it turns out the Trump-Vance campaign’s frat-bro conservatism strategy worked, and was a big part of what won them the White House.
Between 2020 and 2024, Gen Z men shifted 15 percentage points rightward, the largest age/gender swing in this election. Women of the same age range moved 7 points in the same direction.
But it wasn’t just the shift in men that pushed Trump over the top. He also saw spikes in support among minority voters, according to NBC News data.
Trump overperformed recent preelection polls, which indicated he had about a 20-point disadvantage among Gen Z, and even more of a deficit among likely Gen Z voters.
A big part of that shift has to do with the economy, the No. 1 issue for Gen Z as a whole and one they trust Trump with more than Vice President Kamala Harris. Before the election, 31% of Gen Z said the economy was their priority, an issue voters have long preferred Trump on, despite some recent tightening of the polls on that question.
Supporters watch former President Donald Trump campaign for reelection in Raleigh, N.C., on Nov. 4, 2024.
December’s spate of drone sightings, first in New Jersey and then nationwide, sure looks familiar. As does its associated media frenzy, culminating in memes and conspiracy theories about so-called “mystery drones.” The episode bears an eerie resemblance to the UFO phenomenon, lately rebranded as unidentified anomalous phenomena (UAP), which spiked in recent years and has led to significant congressional attention and legislation.
In a way, this is progress. The reason this outbreak looks so familiar is that such drone sightings would previously have been reported as UAP. It’s only after years of concerted education and transparency efforts by U.S. Department of Defense officials that UAP sightings have rightfully evolved into common drone identifications. That is not to say the drone sightings are any less of a concern, but fortunately we can address them without the contagion of the UFO community and the conspiracies associated with it.
Unfortunately, our response has been no less irrational.
A New Jersey state assemblyman has accused federal officials on CNN of “lying to us” about the drones. The president-elect suggested we “shoot them down!!!”—which is, almost (but sadly only almost) needless to say, a bad idea. So is wasting resources investigating nonsensical notions of advanced technology linked to Iran or, again, aliens. Calls to shoot objects down pose obvious safety risks, and they ignore the fact that Congress and the White House restricted such strikes over U.S. territory after the incidents involving the Chinese high-altitude balloon and other balloons, citing concerns about civilian safety.
There are a couple of things we need to make clear about the drone sightings. First, many of the sightings are mistaken interpretations of manned aircraft or satellites such as Starlink. The genuine drone sightings fall into two classes: those in restricted airspace and those in legal airspace. Restricted airspace surrounds airports as well as national security areas such as Air Force and Navy bases. Most reported sightings fall within the latter category and have been assessed as posing no immediate national security or flight safety risk, however much the public finds them annoying.
One fact that many people overlook, or at least don’t readily reason through, is that these drones have lights on them. That lights are present on various flying objects, including drones (also known as unmanned aerial vehicles, or UAVs), is something I often pointed out in my last job, heading a Pentagon office investigating UAP sightings. Lights on a drone are for collision avoidance. They are a safety feature. Flying drones with lights ensures they can be seen; if the operators meant to go unnoticed, they would turn off or disable the lights. In September 2023, the FAA changed its rules to allow drones to fly at night this way, which is likely a contributing factor in the increase in sightings. The public and elected officials in Congress nonetheless continue to treat lights in the sky as scary, particularly when they mistake crewed aircraft for drones.
Congressional officials and that unfortunate source of information, social media, continue to make unfounded claims of drone technologies far ahead of U.S. capabilities. The most recent example is the assertion that drones flew from an Iranian mother ship off the U.S. coast and demonstrated seven or eight hours of battery life. That fantastic assertion would require evidence that the drones originated from an Iranian ship and were tracked continuously to U.S. cities. There are no such tracks. The more rational explanation is that they originated near the place of the sighting—that is, from domestic operators.
However, that doesn’t mean some drone operations aren’t ill-intentioned.
Several hypotheses (apart from mistaken identity) might explain these drones in legal airspace. They might be academic, professional, or hobbyist domestic operators exploring a new technology. YouTube overflows with drone footage from amateur photographers all over the world, and flying in urban settings, in legal airspace, for photography—or perhaps for research such as high-resolution thermal or pollution measurements—is entirely plausible. Or they might be commercial operations. Increased commercial activity is inevitable as industry advances drone technology for delivery, remote sensing, and communications.