What does an eclipse look like from the moon? Firefly Aerospace’s Blue Ghost lander just sent back a stunning image from the lunar surface.
The commercial space company’s lander, which touched down on the moon without a hitch on March 2 as part of a mission for NASA, took a high-definition image of the eclipse from its top deck in the early hours of Friday — and it’s mighty beautiful.
“Blue Ghost got her first diamond ring! Captured at our landing site in the Moon’s Mare Crisium around 3:30 am CDT, the photo shows the sun about to emerge from totality behind Earth,” Firefly Aerospace wrote online. (A “diamond ring” effect happens during a total solar eclipse when the sun just begins to appear from behind Earth; it also happens just before Earth totally blocks the sun.)
From our perch on Earth, it was a total lunar eclipse, and from the moon it was a total solar eclipse. These unique events happen when the sun, Earth, and moon align, allowing Earth to cast a shadow on the moon and block most sunlight from reaching the lunar surface. But our planet’s atmosphere still allows red wavelengths of light to squeeze through and travel through space, illuminating the moon in reddish, rusty, orangish, or crimson colors.
According to the company, it’s “the first time in history a commercial company will be actively operating on the Moon and able to observe a total solar eclipse where the Earth blocks the sun and casts a shadow on the lunar surface.”
Firefly’s Blue Ghost Mission 1 launched on Jan. 15 and landed on the moon on March 2 after a 45-day trip — and the photos Blue Ghost has been sending back are breathtaking. In the above photo, you can also see the Blue Ghost lander’s NASA equipment, including a Lunar Environment heliospheric X-ray Imager, Lunar Magnetotelluric Sounder mast, and X-band antenna.
Blue Ghost moon lander just beamed back stunning photo of the eclipse
An Australian man in his forties has become the first person in the world to leave hospital with an artificial heart made of titanium. The device is used as a stopgap for people with heart failure who are waiting for a donor heart, and previous recipients of this type of artificial heart had remained in US hospitals while it was in place.
The man lived with the device for more than three months until he underwent surgery to receive a donated human heart. He is recovering well, according to a statement from St Vincent’s Hospital in Sydney, Australia, where the operations were conducted. The Australian is the sixth person globally to receive the device, known as BiVACOR, but the first to live with it for more than a month.
“This is certainly an important development in the field,” says Julian Smith, a cardiac surgeon at the Victorian Heart Institute at Monash University in Melbourne, Australia.
“It is incredibly innovative,” says Sarah Aitken, a vascular surgeon at the University of Sydney, but she adds that there are still many unanswered questions about the level of function that people with it can achieve and the ultimate cost of the device. “This kind of research is really challenging to do because it is very expensive” and the surgery involved is very high-risk, says Aitken.
The latest success will help researchers to understand how people cope with this device in the real world, says Joseph Rogers, a heart-failure cardiologist and president of the Texas Heart Institute in Houston. “They weren’t being constantly monitored by medical teams,” says Rogers, who led the first trial of the device in the United States last year.
In all cases, the BiVACOR was used as a temporary measure before a donor heart became available. Some cardiologists say that it could become a permanent option for people not eligible for transplants because of their age or other health conditions, although the idea still needs to be tested in trials. In the United States, close to 7 million adults live with heart failure, but only about 4,500 heart transplants were performed in 2023, in part because of a shortage of donors.
Suspended rotor
BiVACOR was invented by biomedical engineer Daniel Timms, who founded a company named after the device, with offices in Huntington Beach, California and Southport, Australia.
The device is a total heart replacement and works as a continuous pump in which a magnetically suspended rotor propels blood in regular pulses throughout the body. A cord tunnelled under the skin connects the device to an external, portable controller that runs on batteries by day and can be plugged into the mains at night.
Many mechanical heart devices support only the left side of the heart, and typically work by pooling blood in a sac, which flexes some 35 million times a year to pump blood. But these devices have many moving parts and often suffer failures. BiVACOR, which has only one moving part, should in theory experience fewer problems of mechanical wear, says Rogers.
US trials
The Australian recipient of BiVACOR had severe heart failure, and received the titanium device in a six-hour operation in November. In February, he was discharged from hospital, stayed in a residence close by and led a relatively normal life. In March, he received a donor heart.
The BiVACOR is a total heart replacement made of titanium. Jason Fochtman/Houston Chronicle via Getty Images
Imagine a world where everyone had brown skin. Tens of thousands of years ago, that was the case, say scientists at Pennsylvania State University. So, how did white people get here? The answer lies in that tricky component of evolution known as a genetic mutation.
Out of Africa
Scientists have long known that Africa is the cradle of humanity. There, our ancestors shed most of their body hair around 2 million years ago, and their dark skin protected them from skin cancer and other harmful effects of UV radiation. When humans began leaving Africa 20,000 to 50,000 years ago, a skin-whitening mutation appeared randomly in a sole individual, according to a 2005 Penn State study. That mutation proved advantageous as humans moved into Europe. Why? Because it allowed the migrants increased access to vitamin D, which is crucial to absorbing calcium and keeping bones strong.
“Sun intensity is great enough in equatorial regions that the vitamin can still be made in dark-skinned people despite the ultraviolet shielding effects of melanin,” explains Rick Weiss of The Washington Post, which reported on the findings. But in the north, where sunlight is less intense and more clothing must be worn to combat the cold, melanin’s ultraviolet shielding could have been a liability.
Just a Color
This makes sense, but did scientists identify a bona fide race gene as well? Hardly. As the Post notes, the scientific community maintains that “race is a vaguely defined biological, social, and political concept…and skin color is only part of what race is—and is not.”
Researchers still say that race is more of a social construct than a scientific one because people of purportedly the same race can have as many differences in their DNA as people of separate so-called races do. It’s also difficult for scientists to determine where one race ends and another begins, considering that people of supposedly different races may have overlapping features in terms of hair color and texture, skin color, facial features, and other characteristics.
Members of Australia’s aboriginal population, for example, sometimes have dark skin and blond hair of various textures. They share traits with people of African and European ancestry alike, and they are far from the only group not to fit squarely into any one racial category. Scientists posit that all people are roughly 99.5% genetically identical.
The Penn State researchers’ findings on the skin-whitening gene show that skin color accounts for a minuscule biological difference between humans.
“The newly found mutation involves a change of just one letter of DNA code out of the 3.1 billion letters in the human genome—the complete instructions for making a human being,” the Post reports.
Skin Deep
When the research was first published, scientists and sociologists feared that the identification of this skin-whitening mutation would lead people to argue that whites, Blacks, and others are somehow inherently different. Keith Cheng, the scientist who led the team of Penn State researchers, wants the public to know that’s not so. He told the Post, “I think human beings are extremely insecure and look to visual cues of sameness to feel better, and people will do bad things to people who look different.”
His statement captures what racial prejudice is in a nutshell. Truth be told, people may look different, but there’s virtually no difference in our genetic makeup. Skin color really is just skin deep.
Not So Black and White
Scientists at Penn State continue to explore the genetics of skin color. In a 2017 study published in the journal Science, researchers reported finding even greater variation in skin color genes among native Africans.
The same appears to be true of Europeans, given that, in 2018, researchers used DNA to reconstruct the face of the first British person, an individual known as the “Cheddar man” who lived 10,000 years ago. The scientists who took part in the reconstruction of the ancient man’s face say that he most likely had blue eyes and dark brown skin. While they do not know for sure what he looked like, their findings dispute the idea that Europeans have always had light skin.
Such diversity in skin color genes, says evolutionary geneticist Sarah Tishkoff, the lead author of the 2017 study, likely means that we can’t even speak of an African race, much less a white one. As far as people are concerned, the human race is the only one that matters.
In 2025, there have been at least three measles outbreaks in the U.S. The biggest outbreak to date is in Texas, with 198 cases, resulting in 23 hospitalizations and one death of a school-aged child, as of March 7, 2025. This child had not received the measles vaccine and had no underlying health conditions.
Unfortunately, amid these scary outbreaks, some misinformation about measles treatments and prevention has also been spreading—specifically, the idea that vitamin A can be used to treat and prevent measles.
This has led the American Academy of Pediatrics (AAP) to recently release online statements warning parents not to rely on vitamin A for measles prevention and cautioning that too much vitamin A can be dangerous for children.
The AAP emphasizes, “measles-mumps-rubella (MMR) vaccination remains the most important tool for preventing measles.” The AAP also underscores how extremely contagious measles is, noting the virus can remain in the air for up to two hours after an infected person has left the vicinity. As such, its focus is on vaccination as the primary preventive tool against measles.
Vaccines Are the Best Way To Prevent Measles
Therese Linnon, DO, a pediatrician at Akron Children’s Hospital, says that while vitamin A can be used to help with some symptoms of measles once a patient has been diagnosed, it is far better to prevent the infection in the first place by getting the measles vaccine.
“Vitamin A should not be a replacement for the vaccine. There is no dose of vitamin A that will protect them from getting the measles virus,” she explains.
Mahvash Madni, MD, a pediatrician and spokesperson for the AAP, agrees, referencing the hundreds of measles-related child deaths that occurred in the U.S. every year before a vaccine existed.
“Nutrition and a strong immune system are important in helping prevent disease but certain viruses that are very powerful can overwhelm the immune system regardless of our best efforts,” she says. “Measles is one of these viruses. That is why years of research and effort went into coming up with an extremely safe and effective vaccine which was put into effect decades ago.”
Per CDC guidelines, the current measles vaccine, the MMR vaccine, should be given in two doses: the first when a child is 12 to 15 months old, and the second between 4 and 6 years old. Two doses of the MMR vaccine are 97% effective against measles infection.
Can Vitamin A Be Used To Treat Measles?
First of all, it’s important to understand that the measles isn’t just a virus that causes an annoying rash.
“Children feel and look very ill,” explains Dr. Madni. “It can cause pneumonia, neurological problems like encephalitis and death.”
As Dr. Madni and the CDC note, for every 1,000 children who get the measles, between one and three of them will die.
Moreover, Dr. Madni emphasizes, there are no treatments or “cures” for the measles. “It has to run its course like most viruses,” she says. So why do some people suggest supplementing with vitamin A?
Ukraine’s mineral wealth has been a key factor in its negotiations with the U.S. as the two countries work out details for a ceasefire agreement in Ukraine’s war with Russia.
After a rocky start to those negotiations, officials from the U.S. and Ukraine announced an agreement on March 11, 2025. The U.S. would resume support and intelligence sharing with Ukraine, with some conditions, and both agreed to work toward “a comprehensive agreement for developing Ukraine’s critical mineral resources to expand Ukraine’s economy and guarantee Ukraine’s long-term prosperity and security.”
The initial announcement from Ukraine’s government stated that critical minerals would also “offset the cost of American assistance,” but that line was removed from the joint statement. Getting Russia to agree to a ceasefire would be the next step.
There’s no doubt that Ukraine has an abundance of critical minerals, or that these resources will be essential to its postwar reconstruction. But what exactly do those resources include, and how abundant and accessible are they?
The war has severely limited access to data about Ukraine’s natural resources. However, as a geoscientist with experience in resource evaluation, I have been reading technical reports, many of them behind paywalls, to understand what’s at stake. Here’s what we know.
Ukraine’s minerals fuel industries and militaries
Ukraine’s mineral resources are concentrated in two geologic provinces. The larger of these, known as the Ukrainian Shield, is a wide belt running through the center of the country, from the northwest to the southeast. It consists of very old, metamorphic and granitic rocks.
A multibillion-year history of fault movement and volcanic activity created a diversity of minerals concentrated in local sites and across some larger regions.
A second province, close to Ukraine’s border with Russia in the east, includes a rift basin known as the Dnipro-Donets Depression. It is filled with sedimentary rocks containing coal, oil, and natural gas.
Before Ukraine’s independence in 1991, both areas supplied the Soviet Union with materials for its industrialization and military. A massive industrial area centered on steelmaking grew in the southeast, where iron, manganese and coal are especially plentiful.
By the 2000s, Ukraine was a significant producer and exporter of these and other minerals. It also mines uranium, used for nuclear power.
In addition, Soviet and Ukrainian geoscientists identified deposits of lithium and rare earth metals that remain undeveloped.
However, technical reports suggest that assessments of these and some other critical minerals are based on outdated geologic data, that a significant number of mines are inactive due to the war, and that many employ older, inefficient technology.
That suggests critical mineral production could be increased by peacetime foreign investment, and that these minerals could provide even greater value than they do today to whomever controls them.
Granite being mined on February 26, 2025, in the Zhytomyr region of Ukraine. Despite the ongoing war, many mining companies across the country have continued their operations, extracting resources such as titanium, graphite, and beryllium. Kostiantyn Liberov/Libkos/Getty Images
We have discovered the oldest meteorite impact crater on Earth, in the very heart of the Pilbara region of Western Australia. The crater formed more than 3.5 billion years ago, making it the oldest known by more than a billion years. Our discovery is published today in Nature Communications.
Curiously enough, the crater was exactly where we had hoped it would be, and its discovery supports a theory about the birth of Earth’s first continents.
The very first rocks
The oldest rocks on Earth formed more than 3 billion years ago, and are found in the cores of most modern continents. However, geologists still cannot agree how or why they formed.
Nonetheless, there is agreement that these early continents were critical for many chemical and biological processes on Earth.
Many geologists think these ancient rocks formed above hot plumes that rose from above Earth’s molten metallic core, rather like wax in a lava lamp. Others maintain they formed by plate tectonic processes similar to modern Earth, where rocks collide and push each other over and under.
Although these two scenarios are very different, both are driven by the loss of heat from within the interior of our planet.
We think rather differently.
A few years ago, we published a paper suggesting that the energy required to make continents in the Pilbara came from outside Earth, in the form of one or more collisions with meteorites many kilometres in diameter.
As the impacts blasted up enormous volumes of material and melted the rocks around them, the mantle below produced thick “blobs” of volcanic material that evolved into continental crust.
Our evidence then lay in the chemical composition of tiny crystals of the mineral zircon, about the size of sand grains. But to persuade other geologists, we needed more convincing evidence, preferably something people could see without needing a microscope.
So, in May 2021, we began the long drive north from Perth for two weeks of fieldwork in the Pilbara, where we would meet up with our partners from the Geological Survey of Western Australia (GSWA) to hunt for the crater. But where to start?
Shatter cones in ancient rocks of the Pilbara, Western Australia. Tim Johnson, Curtin University
In 1971, a British doctor was trying to puzzle out a mystery: How can a child with no signs of external trauma or injury present with bleeding between the skull and brain? That doctor, A. Norman Guthkelch, was part of a wave of physicians and researchers newly concerned that an epidemic of severe child abuse had been passing, undetected, beneath doctors’ noses.
As one law-review article recounts, “Prior to the 1960s, medical schools provided little or no training on child abuse, and medical texts were largely silent on the issue.” A turning point was the publication of the 1962 article “The Battered-Child Syndrome,” which urged physicians to consider that severe child abuse may be at play when children came in with injuries such as bone fractures, subdural hematomas, and bruising.
The article goes beyond offering medical advice to prescribing an ethical framework that would take hold: “The bias should be in favor of the child’s safety; everything should be done to prevent repeated trauma, and the physician should not be satisfied to return the child to an environment where even a moderate risk of repetition exists.”
Armed with these new insights, Guthkelch hypothesized that the children showing up to his hospital were being abusively shaken. Although they did not show up with the usual fractures or visible forms of physical trauma, the presence of a subdural hematoma could indicate what would come to be widely known as “shaken baby syndrome.”
Decades later, Guthkelch would publicly worry that his hypothesis had been taken too far. After he reviewed the trial record and medical reports from one case in Arizona, NPR reported, he was “troubled” that the conclusion was abusive shaking when there were other potential causes. “I wouldn’t hang a cat on the evidence of shaking, as presented,” Guthkelch quipped.
The narrow claim that shaking a baby abusively can result in certain internal injuries morphed into the claim that if a set of internal injuries were present, then shaking must be the cause. On today’s episode of Good on Paper, I talk with a neuroscientist who found himself personally embroiled in this scientific and legal controversy when a caretaker was accused of shaking his child.
Cyrille Rossant is a researcher and software engineer at the International Brain Laboratory and University College London whose Ph.D. in neuroscience came in handy when he delved into the research behind shaken baby syndrome and published a textbook with Cambridge University Press on the scientific controversy that embroiled his family.
Jerusalem Demsas: Many forms of scientific expertise in criminal-justice proceedings have been debunked or come under scrutiny in recent years. Things like bite-mark analysis and blood-spatter analysis used to be commonly understood as rigorous empirical analysis. But these questionable theories often fall apart on closer inspection.
This is how science is supposed to work. Experts observe, they hypothesize, they test, and they revise their previous understandings of the world. And in academia and in scientific journals, that’s all well and good—but what happens when evolving science is brought into the courtroom? In a courtroom, no one is well positioned to rigorously evaluate a scientific debate: not judges, not jurors, and not even the people calling expert witnesses.
Solar flares are bursts of radiation from the sun’s surface, sometimes followed by a bubble of magnetized plasma particles called a coronal mass ejection (CME). If they happen to spray out in Earth’s direction, CMEs can cause geomagnetic storms that damage power systems on the ground or spacecraft in orbit. And solar flare radiation itself can disrupt communication networks and satellite operations.
Unfortunately, solar scientists cannot reliably predict when the sun will belch out a flare. After one is observed, every minute counts in the ensuing scramble to adjust power grids or move satellites before they get damaged.
Now researchers have used data from NASA’s Solar Dynamics Observatory to show that distinctive flickering in the huge loops of roiling plasma that arch up out of the sun’s atmosphere, called the corona, seems to signal that a large flare could soon occur. This link could help researchers brace for the flare and look out for signs that an incoming CME could hit Earth within a couple of days.
Emily Mason, a heliophysicist at San Diego-based research firm Predictive Science, and her colleagues observed coronal loops in magnetically active regions where 50 strong solar flares occurred. They found that the loops’ ultraviolet light output varied erratically a few hours before a flare, the team told a recent meeting of the American Astronomical Society in Maryland. “It gives us one to two hours’ warning, with 60 to 80 percent accuracy, that a flare is coming,” Mason says.
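The team’s analysis is far more sophisticated, but the core idea reported here, flagging when a coronal loop’s ultraviolet light output starts varying erratically relative to its quiet baseline, can be illustrated with a toy sketch. The window size, threshold, and synthetic light curve below are all invented for illustration and are not from the study:

```python
import statistics

def flickering_alert(intensity, window=10, threshold=2.0):
    """Toy heuristic: flag when recent variability in a light curve
    exceeds a multiple of the variability at the start of the record.
    (Invented parameters; real flare forecasting is far more involved.)"""
    baseline = statistics.stdev(intensity[:window])   # quiet-period spread
    recent = statistics.stdev(intensity[-window:])    # latest spread
    return recent > threshold * baseline

# Synthetic UV light curve: a quiet baseline followed by erratic flickering
quiet = [100 + (i % 3) for i in range(30)]
flicker = [100, 112, 95, 120, 90, 118, 93, 115, 88, 121]

print(flickering_alert(quiet))            # quiet throughout -> no alert
print(flickering_alert(quiet + flicker))  # erratic tail -> alert
```

A real pipeline would work on the near-real-time imager data stream mentioned below and would have to trade off lead time against false alarms, which is roughly what the reported 60 to 80 percent accuracy figure reflects.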
“If we want to be able to predict solar storms earlier, then we have to predict when the flare will happen,” says Mathew Owens, a space physicist at the University of Reading in England. “Small gains there are valuable.”
Crucially, the researchers used a near-real-time data stream with just an hour’s lag rather than working with data that have been processed to improve quality, which can take weeks. Mason and her team observed flares on the sun’s outer edges from our perspective, or limbs, because that is where their light can best be seen from Earth. Flares on the sun’s eastern limb will head away from Earth as the sun rotates, but those on the western limb may hit the planet’s atmosphere, Mason says.
For now, our viewpoint means we can’t easily see loops emanating from elsewhere on the sun. But the European Space Agency is planning to launch a spacecraft called Vigil in 2031 that should give us a side-on perspective. “Being able to see the sun from more different angles is the single most important thing that we can do to improve our predictions,” Mason says. She hopes predicting big flares can help keep astronauts and electrical systems safe.
Analyzing huge loops in the sun’s corona (its atmosphere) can predict potentially dangerous solar flares. DETLEV VAN RAVENSWAAY/Science Source
When we think of concrete, water is usually a key ingredient that comes to mind. But what if I told you it’s possible to create concrete without a single drop of water? This innovative approach is transforming the construction industry by offering sustainability and efficiency in areas where water is scarce or conservation is a priority.
In this article, I’ll explore the fascinating world of waterless concrete and how it’s changing the way we build. We’ll delve into:
The science behind waterless concrete: Understanding its composition and how it works.
Benefits and challenges: What makes it a game-changer and the hurdles it faces.
Applications and future potential: Where it’s being used today and what the future holds.
Join me as we uncover the potential of this groundbreaking material and its impact on modern construction.
Understanding Concrete Without Water
Concrete without water is a surprising twist on a staple construction material. Instead of liquid, it uses a dry mix activated by alternative binding agents. This innovative approach cuts down on water use, addressing scarcity issues. According to a study by Chen et al., waterless concrete can achieve strength comparable to traditional mixtures when specific polymers are introduced during mixing.
The core of this technique revolves around polymers and other chemical compounds that replace water. They initiate the hardening process, reducing the dependency on water. Research highlights that this method can decrease construction time, as some dry-mix formulas set faster. For instance, tests revealed that a polymer-based dry mix showed a 30% faster setting time compared to typical concrete.
By cutting water dependency, this technology not only offers sustainability but also expands construction possibilities in arid regions where water’s a vital yet scarce resource. It’s an exciting aspect of construction that could reshape how we think about building materials.
The Science Behind Dry Concrete
Dry concrete, or waterless concrete, relies on innovative technology to enable binding without liquid. The transformation from traditional to waterless methods represents a significant shift in construction.
Components and Composition
Dry concrete consists of cement, aggregates like sand or crushed stone, and specialized additives. These additives play a crucial role. Polymers and other chemicals replace water, allowing dry concrete to achieve necessary strength and durability. According to a study in the Journal of Advanced Concrete Technology, using polymers can lead to compressive strengths of up to 60 MPa, comparable to conventional concrete. This advanced composition reduces reliance on water while maintaining structural integrity.
Chemical Reaction Process
The chemical reactions in dry concrete differ from those in regular concrete. In standard mixes, water triggers hydration, bonding the cement particles. Here, polymers and other additives initiate the curing process. A report by Construction and Building Materials indicates that certain dry mixes set up to 30% faster due to this alternative reaction. These rapid reactions offer practical and logistical advantages, particularly in environments with limited water availability.
Benefits of Using Concrete Without Water
Concrete without water offers several benefits, making it an appealing choice in modern construction. By using innovative alternatives, it addresses sustainability and efficiency.
Environmental Advantages
Concrete without water significantly reduces water consumption, leading to greater sustainability. Producing traditional concrete consumes a substantial amount of potable water, sometimes up to 200 liters per cubic meter, according to USGS data. In contrast, waterless concrete eliminates this need, conserving precious water resources, which is especially crucial in arid regions. Additionally, this type of concrete often generates less dust during mixing and handling, decreasing air pollution and contributing to cleaner project sites.
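To put that figure in perspective, here’s a quick back-of-the-envelope sketch using the roughly 200 liters per cubic meter cited above. The pour volume is a hypothetical example, not a number from any study:

```python
# Back-of-the-envelope estimate of mixing water avoided by a waterless mix,
# using the ~200 L per cubic meter figure for traditional concrete.
WATER_PER_M3_L = 200  # liters of mixing water per cubic meter (traditional mix)

def water_saved_liters(pour_volume_m3: float) -> float:
    """Liters of mixing water avoided for a given pour volume."""
    return pour_volume_m3 * WATER_PER_M3_L

# Hypothetical example: a modest 50 cubic meter foundation pour
saved = water_saved_liters(50)
print(f"Water saved: {saved:,.0f} L")  # 10,000 L
```

Even a single mid-sized pour avoids thousands of liters of potable water, which is why the technique is most compelling in water-scarce regions.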
The question came innocently enough: What do you want to be when you grow up? Lindsay’s daughter, after a brief pause, looked up and confidently replied, “I want to be a client.”
The simplicity of the answer hid the complexity of what she had observed: The clients always seemed to get the very best version of her mother. In her daughter’s young mind, being a client meant holding a special place—one that commands focus, care, and an unwavering commitment.
As two mothers navigating full-time legal careers, that moment was not lost on either of us. It reveals a truth that is often glossed over in the narratives about working women, especially those of us balancing professional intensity with parenting. Beneath the thin veneer of “having it all,” we know all too well the quiet sacrifices and compromises that characterize our balancing act. The spotlight may be on our professional accomplishments, but in the shadows our children wait patiently for our attention, often competing with the demands of a profession that do not easily relent.
The Weight of Expectation
Too often the complexities of ambition, motherhood, and professional duty are distilled into stereotypes that seek to diminish rather than dignify. It’s a familiar story—the notion that a woman with power and responsibility must inevitably be lacking elsewhere. Or that her identity as a mother or partner is somehow contrary to her professional persona. These narratives, however veiled, carry weight.
But let’s say what that really means. It means that the diligence and tenacity we bring to our careers and our clients are identical to the dedication we offer to our families. It means that the long hours spent advocating for clients are juxtaposed with the quiet moments at home, where the stakes are equally high, even if measured in hugs rather than verdicts. It means that, despite the portrayal of women in leadership as one-dimensional, we are more. We are multifaceted, resilient, and deeply invested in both our professions and our roles as mothers.
Living with the Tension
The path of a working mother demands a constant recalibration of priorities where both career and family vie for equal attention and each carries its own form of guilt. The notion of “balance” is a fallacy. At least that’s what we’ve learned from years of trying to juggle our careers and motherhood. Instead, it’s a constant series of trade-offs and compromises leading us to understand that each day is unique.
There’s no neat division between “work” and “life” anymore. Mornings usually start early, with work beginning before the rest of the house wakes up. We often work with one eye on the clock, calculating the minutes until we sprint from the office to catch a school or sports event. And on days when there’s a sick child and no available caregiver, the idea of balance seems laughable. This has forced us to rethink how we define success—not by perfection but by flexibility and resilience. It’s about being okay with the days that feel like controlled chaos and accepting that sometimes one part of life will have to be put on pause for the other.
When our daughters see us in action, they don’t just witness the power, grace, and poise required of our profession; they see the weight of that responsibility and the effort and dedication it takes to give both our clients and our children the best of us.
The Lessons We Teach
As children we dreamed of becoming lawyers, mothers, or both, imagining these roles as ultimate markers of success and happiness. Our daughters, however, have grown up watching us navigate the realities of those choices and their dreams for us are different.