Wednesday, 30 November 2022

Sony Launches Personal Motion Tracking System for 3D Avatars

(Image: Sony)
Just about every household name in the tech space wants a piece of the metaverse pie, and Sony is no different. On Tuesday the company launched Mocopi, a motion-tracking system that translates one’s physical movements into animation in a virtual space.

Mocopi, named for its motion capture abilities, uses six wireless trackers worn on the wrists, ankles, head, and back to record a user’s movement in real time. The color-coded trackers are each about the size and shape of an Apple AirTag and attach to stretchy bands and a pants clip for easy wearing. As the user moves around, an algorithm translates information from the trackers’ tiny sensors into data received by a dedicated smartphone app.

As of now, it appears Sony’s Mocopi smartphone app serves only as a demo for the system’s motion-tracking capabilities. Its promotional video depicts in-sync 3D avatar dances that can be recorded and played back later. Per Sony’s Japanese press release, however, the company will release a software development kit (SDK) in mid-December. The SDK will allow developers to combine Mocopi’s motion-tracking with metaverse services and 3D development software to create interactive fitness and community experiences.

Though it hasn't yet confirmed any specific plans, Sony says it will eventually partner with other companies to create exclusive Mocopi experiences. Rather than requiring users to hold bulky controllers to move around in a virtual reality (VR) space, third parties could combine Mocopi with headset-centered experiences to make them more immersive and allow "new" movements like kicks—something handheld controllers can't facilitate.

Mocopi costs 49,500 yen, or approximately $356. Though that isn't prohibitively expensive on its own, it's a bit much for something that will likely only work with a handful of games that don't require a VR headset. With a headset, you're looking at quite an expensive setup: Mocopi, a VR headset, and a console or PC can together cost thousands. It's still cool technology, though, given it's easier to get into the zone when you aren't holding a bunch of hardware.

Mocopi's success will ultimately depend on how many virtual experiences it's compatible with. The metaverse hasn't been looking too hot lately, but even if it were to fail entirely, the VR and augmented reality (AR) markets might accept Mocopi with open arms. Animators might be interested in real-time motion-tracking hardware like Mocopi, too, as it (ideally) captures people's natural movements.




3D Radargram of Mars' North Pole Uncovers a 'Hidden Canyon'

Mars is the only known planet aside from Earth that has polar ice caps, but unlike Earth's, the Martian caps contain large amounts of frozen carbon dioxide ("dry ice") alongside water ice. Naturally, there's great interest in better understanding the Martian polar regions. A new analysis using data from NASA's Mars Reconnaissance Orbiter (MRO) has revealed previously hidden structures under the northern ice cap, seen above in a Mars Global Surveyor image. The researchers found wavy landscapes, impact craters, and even a large canyon, all buried under the ice.

The Mars Reconnaissance Orbiter has been observing the red planet from above since 2006. Among its suite of instruments is a special type of penetrating radar called Shallow Subsurface Radar (SHARAD). It emits radar waves between 15 and 25 megahertz, which can pass through up to 4 kilometers of material before bouncing back to the orbiter. It has a depth resolution of about 15 meters. This instrument has been returning data on the ice caps and other regions of Mars for years, but the team from the Planetary Science Institute (PSI) did something new with it.
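To illustrate how a sounding radar like SHARAD turns echo delays into depths, here is a minimal sketch. The permittivity value is a common assumption for water ice, not a figure from the instrument team, and the function is a simplification of the actual processing:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def echo_depth_m(round_trip_s: float, rel_permittivity: float = 3.15) -> float:
    """Estimate the depth of a subsurface reflector from a radar echo's
    round-trip time. The wave travels down and back (hence the divide by 2),
    slowed by the medium's relative permittivity (~3.15 for water ice)."""
    wave_speed = C / math.sqrt(rel_permittivity)
    return wave_speed * round_trip_s / 2.0

# A 10-microsecond round trip through ice corresponds to roughly 845 m.
print(round(echo_depth_m(10e-6)))
```

The same delay through a denser medium (higher permittivity) would map to a shallower depth, which is why assumptions about the subsurface composition matter when interpreting radargrams.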

SHARAD was included on the MRO to complement the MARSIS radar on the Mars Express orbiter. MARSIS can bounce its radar waves deeper into the planet, but SHARAD has a much higher resolution. And with clever data processing, the PSI team boosted the effective resolution. The team processed years of 2D scans from SHARAD using advanced 3D imaging methods to remove noise and interference. The result is a sharper 3D image of the planetary structures below the layers of polar ice. This 3D "radargram" makes it possible to identify things that are difficult or impossible to see otherwise.

The study has been published in the Planetary Science Journal. The image above is a composite, showing a single horizontal slice through the northern polar ice cap (known as Planum Boreum) at the bottom. The other sections are vertical slices. The dark area near the middle is a 300-kilometer region that MRO cannot see from its orbit. The data reveals surface details like the chasm to the right and the impact crater at the bottom of the horizontal slice.

The team believes this same technique could be used to create 3D representations of structures in other regions of the planet. Next, they plan to scour the Planum Boreum data for more buried impact craters and structures.




Trailblazing Orion Snaps Stunning Selfie With Earth, Moon

Image: NASA

Just over halfway through its 26-day mission, the Orion capsule has reached its greatest distance from Earth. Previously, the Apollo 13 mission held the all-time record: 248,000 miles. But at its farthest, Orion was 270,000 miles from the planet's surface. And while it was out there, it snapped this selfie with the Earth:

Image: NASA

In the full image, you can see the Orion capsule, with the Earth and Moon in the background. At this moment, the capsule was 268,547 miles from Earth and 43,318 miles from the Moon, traveling at 1,679 miles per hour. Telemetry remains nominal as of early Wednesday morning.

While Artemis 1 has no human crew members, the flight does still have passengers. Orion is carrying special mannequins, or ‘moonikins,’ whose prime directive is to test out next-generation space gear. NASA administrator Bill Nelson explained, during a Monday evening briefing from the agency’s Johnson Space Center in Houston:

“Many of us know: Arturo Campos was a NASA engineer who developed a plan to bring the crippled Apollo 13 crew home safely. For a mission where something terrible went wrong, it’s in the annals at NASA as one of our most successful missions — because they saved the crew. Well, on Orion now is Commander Moonikin Campos, his namesake. […] He’s outfitted with sensors to provide data on what crew members will experience in flight. And that data will continue Campos’ legacy of enabling human exploration in deep space.”

Beside Cmdr. Campos ride two other ‘moonikins,’ Helga and Zohar. All three are positively bristling with sensors that will tell NASA about the radiation environment and kinetic forces that lunar astronauts will experience. Cmdr. Campos is also wearing a radiation protection vest that the agency is testing for later Artemis flights. In addition, Helga and Zohar are both built to test out protective gear in more inclusive sizes.

To Boldly Go

NASA has a gender problem. The agency infamously asked its first female astronaut, Sally Ride, whether a hundred tampons would be enough for the two-week flight. Decades later, Ride was still laughing into her coffee about NASA's desperately oblivious ideas on what female astronauts might need. These men were literal, actual rocket scientists, Ride told the agency's History Office in a 2002 interview, and yet they thought space makeup was an essential part of a female astronaut's everyday carry. Never mind a zero-G toilet that can accommodate female anatomy. Gimme that space blush.

Surprising few, the kit was never used. Meanwhile, NASA finally fielded an anatomically inclusive toilet on the International Space Station — in 2020.

Space blush, oy. The powdery particulate alone should have killed this idea before it ever made it off a napkin. Besides, someone’s double standards are showing — I don’t see space guyliner.

But the joke falls flat when the spaceship safety harnesses and space suits don't fit astronauts right, because they're sized for just one type of male body. NASA had to shuffle the roster for a 2019 spacewalk outside the ISS because it simply didn't have enough mix-and-match space suit parts to garb both Anne McClain and Christina Koch, the two women who would have done the excursion, at the same time. Instead, a male astronaut took McClain's place on the spacewalk. Female bodies are statistically shorter and slenderer than male bodies. As a result, women sustain a disproportionate number of injuries in accidents and collisions. But the female-bodied 'moonikins' aboard Orion are the vanguard of a change.

What Happens Next

Orion just passed the halfway mark in its mission to the moon and back. It will remain in a distant retrograde lunar orbit (DRO) until a few days into December when it makes its first course correction burn to head back Earthside. In this mission itinerary, we’re between steps eleven and twelve:

Image: NASA

During a briefing on flight day 13 (Monday), Artemis 1 mission manager Mike Sarafin said that two-thirds of Orion’s docket is complete or in progress. Many of the spacecraft’s remaining “real-time objectives” take place during the descent phase. “We’re continuing to proceed along the nominal mission,” said Sarafin, “and we’ve passed the halfway point in terms of distance from earth, time in the mission plan, and in terms of mission objectives.”

Overall, the mission is going well. Artemis 1 lead flight director Rick Labrode said that the team opted out of the most recent of Orion's nineteen scheduled burns. Sarafin also reported that the mission has "close[d] one of our anomaly resolution teams associated with the star trackers and the random access memory built-in test hardware that we're seeing a number of funnies on."

“The next greatest test for Orion (after the launch),” said Nelson, “is the landing.” Orion will hit our atmosphere at around 25,000 mph. For reference, that’s about Mach 32. The capsule will dip into the atmosphere to slow itself to a mere 17,000 mph, or Mach 22, added Nelson. Artemis 1 will end when Orion splashes down in the Pacific on December 11.
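Those Mach figures follow from a simple conversion. As a rough sketch (using the sea-level speed of sound, about 343 m/s, as the reference; the actual speed of sound varies with altitude, so these numbers are approximate):

```python
MPH_TO_MS = 0.44704          # meters per second in one mile per hour
SOUND_SEA_LEVEL_MS = 343.0   # approximate speed of sound at sea level

def mach_at_sea_level(speed_mph: float) -> float:
    """Convert a speed in mph to a Mach number, referenced to sea level."""
    return speed_mph * MPH_TO_MS / SOUND_SEA_LEVEL_MS

print(f"{mach_at_sea_level(25_000):.1f}")  # entry interface: ~32.6
print(f"{mach_at_sea_level(17_000):.1f}")  # after slowing: ~22.2
```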

The inaugural Artemis flight had only its ‘moonikins’ aboard. However, future missions will carry human crew members. Artemis 2 will fly four human crew members to lunar orbit. “They are going to the Moon, to lunar orbit, in preparation for Artemis 3,” Nelson said. Rather than confining itself to lunar orbit, Artemis 3 will actually land humans on the lunar surface. For Artemis 3, Nelson said, “We will have four [astronauts] go into a lunar polar elliptical orbit, and we’ll then have two astronauts in the Lander go down to the surface. That will be the first woman, and the next man.”



Samsung’s New GDDR6W Graphics Memory Rivals HBM2

Samsung's new memory features Fan-Out, Wafer-Level Packaging. (Image: Samsung)

In the past, chip companies such as AMD have dabbled in High-Bandwidth Memory (HBM) instead of GDDR to increase memory bandwidth for GPUs. This vertically stacked memory boasts incredible bandwidth, but it's a costly endeavor. AMD abandoned it in favor of GDDR memory after its ill-fated R9 Fury and Vega GPUs. Now Samsung has created a new type of GDDR6 memory it says is almost as fast as HBM without needing an interposer. Samsung calls GDDR6W the industry's first "next-generation" graphics DRAM technology, and says it will empower more realistic metaverse experiences.

Samsung took its existing GDDR6 platform and built it with Fan-Out Wafer-Level Packaging (FOWLP). With this technology, the memory die is mounted to a silicon wafer instead of a printed circuit board (PCB). Redistribution layers are fanned out around the chip, allowing for more contacts and better heat dissipation. The memory chips are also double-stacked. Samsung says this has allowed it to increase bandwidth and capacity in the exact same footprint as before. Since there's no increase in die size, its partners can drop GDDR6W into existing and future designs without any modifications. This will theoretically reduce manufacturing time and costs.

Samsung’s Fan-Out, Wafer-Level Packaging allows for a smaller package thanks to the absence of a PCB. (Credit: Samsung)

The new memory offers double the I/O and bandwidth of GDDR6. Using its existing 24Gb/s GDDR6 as an example, Samsung says the GDDR6W version has twice the I/O because there are more contact points. It also doubles capacity from 16Gb to 32Gb per chip. As shown above, the height of the FOWLP design is just 0.7mm, which is 36 percent lower than Samsung's conventional GDDR6 package. Even though I/O and bandwidth have been doubled, Samsung says GDDR6W has the same thermal properties as existing GDDR6 designs.

Samsung says these advancements have allowed its GDDR6W design to compete with HBM2. It notes that second-generation HBM2 offers 1.6TB/s of bandwidth, with GDDR6W coming close with 1.4TB/s. However, that number from Samsung is using a 512-bit wide memory bus with 32GB of memory, which isn’t something found in current GPUs. Both the Nvidia RTX 4090 and the Radeon RX 7900 XTX have a 384-bit wide memory bus and offer just 24GB of GDDR6 memory. AMD uses GDDR6 while Nvidia has opted for the G6X variant made by Micron. Both cards have around 1TB/s of memory bandwidth, though, so Samsung’s offering is superior.
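The bandwidth figures in that comparison fall out of the usual back-of-the-envelope formula: bus width times per-pin data rate, divided by eight to convert bits to bytes. A quick sketch (this is standard GPU arithmetic, not Samsung's own math):

```python
def memory_bandwidth_gbps(bus_width_bits: int, pin_rate_gbit_s: float) -> float:
    """Peak memory bandwidth in GB/s: number of pins (bus width) times the
    per-pin data rate in Gb/s, divided by 8 to convert bits to bytes."""
    return bus_width_bits * pin_rate_gbit_s / 8

# A hypothetical 512-bit bus at GDDR6W's 22Gb/s per pin -> 1408 GB/s,
# matching the ~1.4TB/s figure Samsung cites.
print(memory_bandwidth_gbps(512, 22))
# The RTX 4090's 384-bit bus at GDDR6X's 21Gb/s per pin -> 1008 GB/s,
# the "around 1TB/s" of today's flagships.
print(memory_bandwidth_gbps(384, 21))
```

This also shows why Samsung's headline number depends on a 512-bit bus: on a current 384-bit design, GDDR6W's per-pin advantage over GDDR6X alone is marginal.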

The big news here is that thanks to Samsung’s chip-stacking, half the memory chips are required to achieve the same amount of memory as current packaging. This could result in reduced manufacturing costs. Overall, its maximum transmission rate per pin of 22Gb/s is very close to GDDR6X’s 21Gb/s. So the gains in the future probably won’t be for maximum performance, but rather memory capacity. You could argue nobody needs a GPU with 48GB of memory, but perhaps when we’re gaming at 16K that’ll change.

As far as products go, Samsung says it’ll be introducing GDDR6W soon in small form factor packages such as notebooks. It’s also working with partners to include it in AI accelerators and such. It’s unclear whether AMD or Nvidia will adopt it, but if they do it’ll likely be far in the future. That’s just because both companies are already manufacturing their current boards with GDDR6/X designs, so we doubt they’d swap until a new architecture arrives.




Tuesday, 29 November 2022

Microsoft to Offer Sony 10-Year Call of Duty License to Appease EU Regulators

(Photo by Drew Angerer/Getty Images)

Microsoft’s bid to acquire Activision Blizzard is on thin ice. Antitrust regulators from several regions have been reviewing the deal all year, and several have recently expressed concern that the acquisition could drastically reduce competition within the video game industry. With European Union watchdogs especially wary, Microsoft appears prepared to do whatever it takes to push the deal through—even if it means making concessions to Sony.

The EU opened a deeper probe into the bid in early November following a marked spike in concerns over Activision’s most successful franchises, particularly Call of Duty. Sony, Microsoft’s biggest gaming competitor, has repeatedly noted in no uncertain terms that Microsoft’s acquisition of Activision could mean a rapid loss of vital content for PlayStation players. Not only would Microsoft’s acquisition of Activision make Microsoft the third largest gaming company in the world, but from Sony’s perspective, sudden Call of Duty exclusivity could push former PlayStation devotees to PC or Xbox.

Microsoft has previously attempted to assuage these worries in two starkly different ways. At first, its tactic was to assure Sony (and the rest of the world) that its console compatibility agreements involving Call of Duty and other major Activision titles would remain in effect past their contracted timelines. Then it changed tack, telling Sony and antitrust regulators that Activision didn’t have any “must-have” titles. (Read: “So just stop stressing about it, okay?”)

(Image: Activision Blizzard)

Those strategies don’t seem to have had the effect Microsoft wanted. According to a new Reuters report, the EU is set to publish a formal list of competition concerns (called a “statement of objection”) regarding the deal in January. Microsoft, clearly eager to get ahead of whatever the EU has in store, is reportedly preparing to offer Sony a 10-year Call of Duty license to sweeten the deal.

The possible 10-year agreement is a touch ironic given Microsoft's previous insistence that it would keep major Activision titles available on PlayStation regardless, but of course it's always best to get those types of promises on paper. Still, even if Microsoft does formally submit such an offer, there's no guarantee that it'll be accepted by both Sony and the necessary authorities. If it is, legal experts believe the license could accelerate the review process and address any concerns raised in January.

This doesn’t mean Microsoft would be cleared for takeoff, however. Three sources told Politico last week that the US Federal Trade Commission (FTC) is likely to challenge the $69 billion deal via lawsuit. Though nothing has been filed yet, an FTC lawsuit could mean the end of Microsoft’s bid, which aims to wrap up its acquisition of Activision by July 2023.




Astronomers ‘Troubled’ as New Satellite Outshines Most Stars in the Sky

Trails in the night sky left by BlueWalker 3 are juxtaposed against the Nicholas U. Mayall 4-meter Telescope at Kitt Peak National Observatory in Arizona, a Program of NSF's NOIRLab. The breaks in the trail are caused by gaps between the four 20-second exposures that were stacked to create this image.

Cell phone towers in space may be the next frontier of mobile communication, but astronomers are starting to get worried. AST SpaceMobile successfully deployed its new BlueWalker 3 communication satellite, and the International Astronomical Union (IAU) says this enormous satellite is now one of the brightest objects in the sky. The IAU warns that the proliferation of objects like BlueWalker 3 could have disastrous effects on astronomy.

AST SpaceMobile launched BlueWalker 3 in September aboard a SpaceX Falcon 9 rocket, deploying its expansive communication array earlier this month. The satellite has a total surface area of 693 square feet (64 square meters), and it’s in a low-Earth orbit. Even before launch, many in the astronomical community feared this object would outshine nearly all the stars in the sky, and that’s exactly what happened.

BlueWalker 3 needs that gigantic antenna because of the way it intends to deliver connectivity directly to existing cell phones. The antenna in your phone is designed to talk to nearby towers on the ground — getting connected to a satellite is much harder. Satellite phones usually have bulky adjustable antennas, but no one wants to carry one of those around. Thus, BlueWalker 3 has its giant antenna array to deliver 4G and 5G service. The company plans to use BW3 to test services that could eventually come to partners like AT&T and Vodafone.

Officially, the IAU is “troubled” by the “unprecedented brightness” of BlueWalker 3, but it does not necessarily oppose the launch of such satellites. Increasing connectivity in underserved areas is a noble goal, but the group is asking companies to adopt technologies and designs that minimize the impact these satellites have on astronomy. To make the point, the IAU has provided some sample images of BlueWalker 3 photobombing telescopes. There is also concern that blasting cellular signals from space will increase interference at radio observatories, which are often built as far away from cell phone towers as possible.

Observation using the 0.6-meter Chakana telescope at the Ckoirama Observatory in Chile. This 10-second image shows BlueWalker 3 with a measured apparent magnitude of V=6.6 at a range of 865 km. Observers: Eduardo Unda-Sanzana, Christian Adam, and Juan Pablo Colque.
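For context on that magnitude figure: the astronomical magnitude scale is logarithmic, with every five magnitudes corresponding to a factor of 100 in brightness, and lower numbers meaning brighter objects. A quick sketch of the standard relation (the comparison star is our illustration, not the observers'):

```python
def brightness_ratio(mag_a: float, mag_b: float) -> float:
    """How many times brighter a magnitude-mag_a object is than a
    magnitude-mag_b object. Five magnitudes = a factor of 100."""
    return 100 ** ((mag_b - mag_a) / 5)

# At V=6.6, BlueWalker 3 sits near the naked-eye limit (~6.0); a
# first-magnitude star still outshines it by a factor of roughly 170.
print(round(brightness_ratio(1.0, 6.6)))
```

Note that the satellite's brightness varies with range and viewing geometry, so single-exposure magnitudes like this one are snapshots, not a fixed property.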

AST SpaceMobile is not alone in this quest to bring cellular service to space. Apple recently enabled Emergency SOS satellite communication via Globalstar, but it only supports text messaging with significant delays. Meanwhile, SpaceX and T-Mobile want to provide text and voice calls with next-gen Starlink v2 satellites. Astronomers are already up in arms about the existing Starlink constellation ruining images, and the larger Starlink v2 could be almost as bad for astronomy as BlueWalker 3. The skies are getting a lot more crowded, which makes space-based instruments like the James Webb Space Telescope all the more vital.




Epson to End All Laser Printer Sales by 2026

(Photo: Raysonho @ Open Grid Scheduler/Wikimedia Commons)
Epson, the Japanese hardware corporation best known for its printers, is sunsetting its laser printer division due to sustainability concerns. The company has quietly chosen to stop selling laser printer hardware by 2026, focusing instead on its more environmentally friendly inkjet printers, according to a statement obtained by The Register. Although the company stopped selling laser printers in the United States a while back, it had maintained the line in other markets, including Europe and Asia. Consumers will no longer be able to purchase new Epson laser printers as of 2026, but Epson has promised to continue supporting existing customers via supplies and spare parts.

Epson itself claims its inkjets are up to 85 percent more energy efficient than its laser units and produce 85 percent less carbon dioxide. These statistics might not matter to individuals who occasionally print at home, but they provide businesses and nonprofit organizations with a way to cut down on their energy bills and carbon footprint.

Inkjets typically require fewer single-use resources, too. While laser printers rely on toner, fusers, developer, and other disposable parts, inkjets simply use ink cartridges and a waste ink box. Not only do inkjet printers produce nearly 60 percent less e-waste than their laser counterparts, but their production is a bit kinder to the environment as well: creating one toner cartridge requires burning anywhere from half a gallon to a full gallon of oil.

(Photo: DragonLord/Wikimedia Commons)

The decision to end all laser printer sales is likely a part of Epson’s “Environmental Vision 2050,” a circular economic model the company first committed to in 2018 and revised last year. Its biggest focus is Epson’s promise to become carbon-negative and “underground resource free” by 2050.

That said, inkjet printers aren't the definitive solution to sustainable printing that Epson would like consumers to believe them to be. Inkjet cartridges dry out relatively quickly, resulting in some printer users buying more ink than they actually use. Inkjet printing also costs more per page, which means the energy savings gleaned from ditching a laser printer might be offset during use. Epson has also been in hot water recently for forcing some printer users to visit an authorized repair person to fix suddenly bricked machines. Some Epson L360, L130, L220, L310, and L365 users even have to replace their machines altogether, which only puts more money in Epson's pocket while producing seemingly unnecessary e-waste.




Tesla Releases Full Self-Driving Beta to Everyone in North America

Tesla has been promising its Full Self-Driving feature would be available “soon” for the last several years, and today might finally be the day. Tesla CEO Elon Musk has tweeted that the Full Self-Driving Beta is now live for anyone in North America who wants it — minus the most important feature. Of course, you need to have paid for Full Self-Driving in order to access it. Otherwise, you’ll be stuck with the lesser Autopilot features.

Full Self-Driving (FSD) mode has been in testing for years — most of those who bought the package with their cars have never even been able to use it. During the limited test, drivers had to log over 100 hours with Autopilot and hit a minimum driver safety score with Tesla’s integrated behavior tracking features. Only then would cars in North America unlock the Full Self-Driving beta. Why so much red tape? Because Full Self-Driving isn’t really what it sounds like. It is more capable than regular Autopilot, but you won’t be napping in the backseat.

Autopilot is one of the features that helped Tesla make its name among the more established automakers. All the company’s vehicles have basic Autopilot features like adaptive cruise control and lane-keeping — today, that’s not much of a differentiator, but a $6,000 upgrade adds Enhanced Autopilot to Tesla’s cars. This package includes Autopilot navigation on the highway, automatic lane changes, smart summon, and more.

At the top of Tesla’s self-driving pyramid is Full Self-Driving, which costs a whopping $15,000 upfront or $199 per month. This feature allows the car to see and react to traffic signals, and theoretically allows it to navigate autonomously on surface streets in addition to highways. However, that feature is still not available even in the beta. Still, Musk says anyone with the Full Self-Driving package in North America can opt into the beta now.

Tesla says that FSD is reliable, but safety advocates question that. Full Self-Driving is still just a “level 2” system, which means drivers are supposed to remain attentive at all times, but research has shown people using Autopilot spend less time looking at the road. It may be just good enough to make people think the car is driving itself. Some demonstrations also suggest pedestrian detection may be particularly bad at identifying children (and therefore stopping before hitting them).

Tesla is also facing increased scrutiny from regulators over the way it designs and markets its autonomous driving features. The National Highway Traffic Safety Administration is investigating a series of accidents in which Tesla vehicles in Autopilot mode collided with stationary emergency vehicles, and the Department of Justice has launched a parallel criminal investigation. Meanwhile, California is suing Tesla for misleading marketing around Full Self-Driving. It’s possible these cases could force changes to Full Self-Driving before the city street navigation feature debuts. Tesla has not given any indication of when that will be.




Hubble Telescope Captures a Surreal Galaxy Merger Resulting in a ‘Colossal Ring’

The galaxy merger Arp-Madore 417-391, around 670 million light-years away in the southern constellation Eridanus, as captured by the NASA/ESA Hubble Space Telescope's Advanced Camera for Surveys. The Arp-Madore catalogue is a collection of particularly peculiar galaxies spread throughout the southern sky; here, gravity has twisted the two colliding galaxies into a colossal ring, leaving their cores nestled side by side.

The Hubble Space Telescope has survived more than a decade in orbit without a servicing mission, but it still manages to produce some of the most stunning images and detailed astronomical observations. Case in point, a new snapshot of a galaxy merger known as Arp-Madore 417-391. Hubble pointed its mirror toward these distant interacting galaxies to discover an unusual ring-like formation.

Arp-Madore 417-391 sits around 670 million light-years away in the southern constellation Eridanus. The galaxies may eventually settle into a single, large elliptical galaxy, but for the time being, the collision looks quite messy. The two galactic cores are visible on one side of the formation, and a vast ring has bloomed on the other.

This image comes from Hubble’s Advanced Camera for Surveys (ACS), which was added to the telescope in a 2002 servicing mission. In the past 20 years, the ACS has been responsible for some of the most iconic images of the cosmos, including the Hubble Ultra Deep Field and Abell 1689 (with its 2-million-light-year-wide gravitational lens).

Hubble also used the ACS in 2012 to make accurate measurements of how our Milky Way galaxy is moving with respect to the nearby Andromeda galaxy. Astronomers concluded that the galaxies will collide in about 4 billion years (and it might not be our first). When it happens, the results could be similar to what we see happening with Arp-Madore 417-391.

Astronauts James H. Newman (on the remote manipulator system robotic arm) and Michael J. Massimino install ACS during Servicing Mission 3B. Credits: NASA

You may be wondering why we’re seeing Arp-Madore 417-391 via Hubble instead of the newer and more advanced James Webb Space Telescope — that’s actually the eventual goal. Webb, which just came online earlier this year after more than 20 years of planning and construction, is still in high demand among astronomers. It’s necessary to prioritize observations so that Webb can have the biggest possible impact on astronomy during its life. Unlike Hubble, Webb is too far away for servicing missions.

Thus, Hubble is helping out by conducting observations of previously unobserved objects like Arp-Madore 417-391. The team is fitting these observations in whenever they can in between other scheduled campaigns. Eventually, astronomers will build up a list of objects that warrant follow-up with Webb and advanced ground-based telescopes.




Nvidia Is Reportedly Ending Production of Its Most Popular Turing GPUs

(Credit: PCMag)
Now that Nvidia has launched the RTX 4090 and 4080, it is desperately trying to clear the channel of older GPUs. The end of crypto mining and general economic unease have resulted in a deluge of GPUs, often at bargain prices. At least, that's been true for AMD; Nvidia GPUs are still priced higher than expected. Still, Nvidia seems intent on giving shoppers fewer options when it comes to buying GPUs. One of the ways it's reportedly doing that is by ending production of two of its most popular series: the RTX 2060 and the GTX 1660.

Word of the impending production shutdown of these mainstream GPUs comes from Chinese media (via TechSpot). Nvidia reportedly ended production of the RTX 2060 cards in early November. Now it's adding the wallet-friendly GTX 1660 to the list as well. Both cards resonated with gamers seeking 1080p gaming on a budget: the RTX 2060 is currently the second most popular GPU in the Steam Hardware Survey, and the GTX 1660 is the eighth. The RTX 2060 lineup includes several models: the OG RTX 2060 from 2019 with 6GB of VRAM, the 12GB version from 2021, and the RTX 2060 Super with 8GB of VRAM. These GPUs range in price from around $170 to $400.

The Asus TUF GTX 1660, from an era where GPUs were tiny. (Credit: Asus)

The GTX 1660 was always a curiosity, seemingly released to counter the bad press Nvidia's RTX cards were getting. If you recall, the Turing line was the first to support ray tracing. However, very few titles supported it, and enabling it had a profound impact on performance. This seemingly prompted Nvidia to release a Turing GPU without ray tracing, aka the GTX 1660. The line comprises three GPUs: the original 1660, the 1660 Ti, and the Super version, all with 6GB of VRAM. This is a true bang-for-the-buck GPU, with some models going for a smidge over $100 on eBay.

This is seemingly the latest attempt by Nvidia to clear the field for its upcoming RTX 4060. The company is also trying to get rid of its existing RTX 3060 stock, so giving buyers fewer options could push people upwards in the GPU food chain. It's unclear what kind of pricing the RTX 4060 will have, but if precedent holds, it'll be expensive. Nvidia has increased pricing significantly for its 40-series GPUs. Although that's worked out fine for the flagship 4090, it's not the case with the $1,200 RTX 4080. Buyers are seemingly fed up with what they see as price gouging on these high-end models.

Despite Nvidia's efforts, these GPUs will still exist for some time, at least on eBay. Once they disappear, the market will see a dearth of affordable GPUs, with Ampere the only non-40-series option. The RTX 3050 is the most affordable card in that lineup, and it's almost $300; the RTX 3060 just goes up from there. It's not a fantasy to envision the RTX 4060 costing $499 or something similar, either. It's a bad situation that is seemingly only going to get worse—unless AMD can undercut Nvidia with its midrange cards the way it's doing with its high-end RDNA3 GPUs.




Monday, 28 November 2022

Old Zero-Day Vulnerabilities Remain Unpatched on Samsung, Google Phones

Google's Project Zero team is on the front lines of digital security, analyzing code, reporting bugs, and generally making the internet safer. However, not every vulnerability gets fixed in a timely manner. A recent batch of serious flaws in Arm's Mali GPU was reported by Project Zero and fixed by the chip designer. However, smartphone vendors never implemented the patches, among them Google itself. So, that's a little embarrassing.

The story starts in June 2022 when Project Zero researcher Maddie Stone gave a presentation on zero-day exploits — known vulnerabilities for which there is no available patch. The talk used a vulnerability identified as CVE-2021-39793 and the Pixel 6 as an example. This flaw allowed apps to access read-only memory pages, which can leak personal data. Following this, researcher Jann Horn started looking more closely at Arm Mali GPU code, finding five more vulnerabilities that could allow an attacker to bypass Android's permission model and take control of the system.

Some of these issues were allegedly available for sale on hacking forums, making them especially important to patch. Project Zero reported the issues to Arm, which followed up with source code patches for vendors to implement. Project Zero waited another 30 days to disclose the flaws, which it did in August and mid-September 2022. Usually, this would be the end of the story, but Project Zero occasionally circles back to assess the functionality of fixes. In this case, the team found a "patch gap."

Google believes the Mali issues it uncovered were already available in the zero-day market.

Although Arm released the patches over the summer, vendors hadn't integrated them into their regular Android updates. The issues affect numerous devices that run a system-on-a-chip featuring a Mali GPU, including Android phones from Samsung, Xiaomi, Oppo, and Google. Snapdragon chips are spared as they use Qualcomm's own Adreno GPU. So, Samsung phones in North America are safe, but those sold internationally with Exynos chips are at risk.

In past years, this might not have affected Google, but the company switched from Qualcomm to the custom Tensor chips for Pixel phones in 2021. Tensor uses a Mali GPU, so Google’s security team found flaws that the Pixel team failed to add to the regular software updates. Google is not alone in making this mistake, but it’s still not a great look. Google now says that the Mali patches will be added to Pixel phones “in the coming weeks.” Other vendors haven’t offered a timetable yet.




Hawaii’s Mauna Loa Volcano is Erupting for the First Time Since 1984

(Photo: USGS)
For the first time in nearly four decades, the world's largest active volcano—Hawaii's Mauna Loa, whose summit caldera is named Mokuʻāweoweo—has begun to erupt.

The US Geological Survey (USGS) issued a red alert late Sunday night at the first sign of activity. Mauna Loa's impact was confined to its summit at the time, so no immediate evacuations were ordered. On Monday morning, lava was still overflowing from the volcano's caldera. Though authorities weren't immediately concerned for any downhill communities, they warned that ash from the eruption could drift to and accumulate in nearby areas, presenting possible health and infrastructure concerns.

The USGS and the Hawaii County Civil Defense Agency first became wary of Mauna Loa’s impending eruption in September, when seismic activity near the volcano began to spike. The USGS shared that Mauna Loa was “in a state of heightened unrest” but assured the public that there were “no signs of an imminent eruption” at the time. Authorities prohibited backcountry hiking at Mauna Loa nonetheless. Just a month later, 36 small earthquakes occurred near the volcano’s base in just two days, extending Mauna Loa’s “heightened unrest” status.

Lava flow from Mauna Loa’s 1984 eruption. (Photo: National Park Service)

Mauna Loa—which is located just southwest of the Big Island's center—last erupted in 1984. That eruption lasted 22 days, with lava flow stopping just four miles from the nearby city of Hilo. It was also the first time scientists were able to thoroughly monitor Mauna Loa's lava flow, generating insights that are now invaluable for anticipating this year's eruption and its possible effects. Before 1984, Mauna Loa was estimated to erupt approximately every six years; the 38-year gap since then marks the volcano's longest known period of quiescence.

This time around, the public has multiple tools at its disposal through which to monitor Mauna Loa’s activity. Not only is the USGS delivering real-time updates through its Twitter page and its Mauna Loa web page, but people from around the world can view the eruption through the USGS live webcam, which sits on the volcano’s northwest rim.




New Wireless Smart Bandage Accelerates Chronic Wound Healing

(Photo: Jian-Cheng Lai, Bao Research Group/Stanford University)
Chronic wounds are an under-acknowledged medical concern. At any given time, more than 600,000 Americans are thought to experience physiologically stunted wounds that won't heal. Chronic wounds aren't just inconvenient and painful; they also rack up individual healthcare costs and prevent people from engaging in certain activities, resulting in a decreased quality of life.

Thanks to new research, this might not always be the case. A team of scientists at Stanford University has developed a wireless “smart bandage” that simultaneously monitors wound repair and helps to speed up healing. The bandage could shorten the time people suffer from chronic wounds while mitigating the physical damage and discomfort caused by conventional healing methods.

In a study published last week in Nature Biotechnology, the scientists describe a flexible, closed-loop device that seals wounds while transmitting valuable biodata to an individual’s smartphone. Hydrogel makes up the bandage’s base: While conventional bandages tug and tear at the skin when they’re pulled away, hydrogel allows the smart bandage to attach securely without causing secondary damage during removal. On top of the hydrogel sits the electronic layer responsible for wound observation and healing. At just 100 microns thick, this layer contains a microcontroller unit (MCU), electrical stimulator, radio antenna, memory, and a series of biosensors.

(Image: Jian-Cheng Lai, Bao Research Group/Stanford University)

The biosensors look for two types of information: changes in electrical impedance and temperature fluctuations. Impedance is known to increase as wounds heal, while temperature drops during wound resolution. Real-time insights into both indicators inform the smart bandage's repair-accelerating function, which uses electrical stimulation to encourage the activation of pro-regenerative genes. One of these genes, Apolipoprotein E, boosts muscle and soft tissue growth; another, Selenoprotein P, reduces inflammation and helps clear out pathogens.
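The closed-loop idea can be sketched in a few lines. This is only an illustration: the thresholds, sensor readings, and function name below are hypothetical, not values from the Stanford study.

```python
# Illustrative sketch of the smart bandage's closed-loop decision logic.
# Thresholds and readings here are made up for demonstration purposes.

def should_stimulate(impedance_ohms: float, temp_c: float,
                     baseline_impedance: float, baseline_temp: float) -> bool:
    """Keep stimulating while the wound still looks unhealed: impedance has
    not risen much above baseline and temperature has not dropped."""
    impedance_rising = impedance_ohms > baseline_impedance * 1.5
    temp_falling = temp_c < baseline_temp - 0.5
    healed = impedance_rising and temp_falling
    return not healed

# A wound that looks like it's resolving (impedance up, temperature down)
# no longer needs stimulation:
print(should_stimulate(1600.0, 33.0,
                       baseline_impedance=1000.0, baseline_temp=34.0))  # False
```

In the real device, the MCU would presumably run logic like this continuously, throttling the electrical stimulator as impedance climbs and temperature falls.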

When tested on mice, the smart bandage’s stimulation indeed promoted the activation of both genes while increasing the number of white blood cells in each test subject. Mice that received treatment via smart bandage healed 25 percent faster than control mice. Treated mice also experienced a 50 percent enhancement in dermal remodeling, suggesting an improved quality of treatment and physical resolution.

As of now, the scientists' smart bandage is just a prototype. The team hopes to scale the device's size to fit humans while finding ways to reduce cost. There might also be a case for adding biosensors that track pH, metabolites, and other data. Still, the bandage presents a bit of hope for those who struggle to heal from persistent, life-disrupting wounds.




Microplastics Found in Formerly ‘Pristine’ Antarctic Water, Air, Sediment

(Photo: Nekton Mission)
We’ve long thought Antarctica to be relatively free from human influence, thanks to its extreme climate, general lack of human presence, and distance from inhabited land. Unfortunately, what was once considered the last “pristine” wilderness might no longer be. An Antarctic research expedition has found microplastics in the continent’s water, air, and sediment, suggesting a level of pollution far higher than expected.

Nekton, a nonprofit ocean research initiative, partnered with forensic scientists at the University of Oxford to study microplastic pollution in the Weddell Sea—one of the Antarctic’s most remote regions. During an expedition in 2019, scientists gathered samples of the Weddell Sea’s air, subsurface seawater, sea ice, and benthic (underwater) sediment. The team used a polarized light microscope to examine each of the 82 samples for microplastics.

Every single sample contained some form of microplastic pollution. Polyester fibers, which are most often used in the production of synthetic textiles, were by far the most common, appearing in 60 percent of samples. Most other pollutants were determined to be nylon, polypropylene, and acrylic fragments of varying shapes and colors. Although the team believes some of these originate from nearby research vessels or from fishing gear used by fleets in the neighboring Scotia Sea, most microplastics appear to arrive by unexpected means: the wind.

Polarized light microscopy image of a polyester textile fiber from one of the team’s samples. (Photo: Nekton Mission)

One might expect seawater to contain the highest microplastic diversity, but a wider spread of microplastic categories was found in the team’s Weddell Sea air samples than anywhere else. Most microplastics appear to float in from South America. Once they arrive in the Antarctic, they’re typically there for good; as a result, the Weddell Sea acts as a sink for plastic particles from a whole other continent.

The expedition’s forensic results challenge the longstanding assumption that the Antarctic Circumpolar Current (ACC), a deep, eastward-flowing current with an associated polar front, isolates most of Antarctica from pollution affecting the rest of the ocean. Some scientists have theorized that the ACC would protect the Weddell Sea and other regions from the increasingly dire issue of microplastic pollution, but it’s now clear this isn’t the case. Worse, the expedition’s findings appear to suggest that microplastics that cross the ACC can get “stuck” there, creating the potential for buildup over time.

The news of Antarctica's surprising plastic pollution levels couldn't have come at a more critical time. This week, environmental representatives from more than 150 nations are meeting in Uruguay to draft an international, legally binding agreement on plastic pollution. The text will help coordinate a global response to microplastics' rapidly increasing presence in food, water, and (as this study has shown) even the air we breathe.

"The results shown here from a remote, arguably near-pristine system, further highlight the need for a global response to the plastic pollution crisis" to conserve marine systems, the researchers wrote in Frontiers in Marine Science.




MIT Is Working on Self-Assembling Robots

Today, humans build robots, but in the future, robots could be programmed to build more of themselves. Researchers at MIT’s Center for Bits and Atoms (CBA) have created robotic subunits called “voxels” that can self-assemble into a rudimentary robot, and then collect more voxels to assemble larger structures or even more robots.

The researchers, led by CBA Director Neil Gershenfeld, concede that we're still years away from a true self-replicating robot, but the work with voxels is answering some vital questions that will help us get there. For one, the team has shown that the assembler bot and the structural components of whatever it builds can be made from the same subunits — in this case, voxels.

Each robot consists of several voxels connected end-to-end. They use small but powerful magnets to latch onto additional subunits, which they can use to assemble new objects or make themselves larger. Eventually, a human operator might simply tell these self-assembling robots what they want built, allowing the machines to figure out the specifics.

For example, if one robot isn’t enough to build the required structure, it can make a copy of itself from the same voxel components to split the work. When building something large, the robots could also decide to make themselves bigger and thus more efficient for the task. It could also be necessary for large robots to split into smaller ones for more detailed work.

The voxels (a term borrowed from 3D modeling) are based on components developed for previous MIT experiments. However, those voxels were simply structural pieces. The voxels used in the new research have been enhanced with the ability to share power and data when connected. The add-on voxel components don’t have any moving parts, though. All the movement and smarts come from the base units, which are like feet that allow the robot to inch along the magnet-studded substrate.

A large part of this research is simply refining the algorithms that govern how the robots grow and replicate, ensuring they can work together without crashing into each other. Although computer simulation shows the system could build larger objects (and more robots), the current hardware is limited. The magnetic connections are only strong enough to support a few voxels, but the team is developing stronger connectors that will allow one robot to build another. By tweaking actuator force and joint strength, the researchers believe they can build on this success.




Friday, 25 November 2022

Unchecked Carbon Dioxide Is Shrinking Earth’s Upper Atmosphere

(Photo: Astro Alex/Wikimedia Commons)
We’re already aware of the consequences of unmitigated carbon emissions, particularly as they relate to climate change here on Earth. But according to two new studies, greenhouse gas buildup affects more than just our immediate surroundings. Researchers have found that rising carbon dioxide levels are causing Earth’s upper atmosphere to contract, which could have serious implications for future space operations.

A team of environmental scientists, atmospheric chemists, and space physicists analyzed data from NASA's Thermosphere, Ionosphere, Mesosphere Energetics and Dynamics (TIMED) mission, which launched in 2001. The TIMED satellite's Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument has been gathering insights on atmospheric infrared radiation, or heat, since just a year after launch. These nearly two decades of measurements are what the researchers used to assess how rising CO2 levels have impacted the atmosphere over time.

(Image: Creative Commons Attribution-Share Alike 4.0)

According to the dual studies they published in the Journal of Geophysical Research: Atmospheres, the researchers found that Earth's upper atmosphere is indeed contracting—something scientists have long suspected but have had difficulty confirming. In the dense lower atmosphere, CO2 absorbs and re-emits heat in all directions, creating a warming effect. But in the thin upper atmosphere, the heat that CO2 re-emits largely escapes to space, producing a gradual cooldown instead. This cooling effect causes not only the stratosphere to contract but also the mesosphere and lower thermosphere (together referred to as the MLT).

The MLT contracted by 1,333 meters in just under 20 years, and the researchers estimate that approximately 342 of those meters were a direct result of CO2 cooling. MLT cooling correlates negatively with atmospheric drag: as the MLT grows colder, atmospheric drag drops. Given that atmospheric drag is essential to spacecraft and satellites' ability to deorbit, unabated carbon buildup could end up impacting future (or even current long-term) missions, including the increasingly necessary task of removing space debris.
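A quick back-of-the-envelope check of those figures shows CO2 cooling accounts for roughly a quarter of the measured contraction:

```python
# Share of the ~20-year MLT contraction that the researchers attribute
# directly to CO2 cooling, using the figures reported above.
total_contraction_m = 1333
co2_contraction_m = 342

share = co2_contraction_m / total_contraction_m
print(f"{share:.0%}")  # prints "26%"
```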

The team believes CO2-related MLT cooling could affect the larger aerospace industry sooner than we think. “As long as carbon dioxide increases at about the same rate, we can expect these rates of temperature change to stay about constant too,” they write. “It seems likely that ongoing changes in space climate will become important issues in space law, space policy, and in the business of underwriting insurance for endeavors in space.”




Thursday, 24 November 2022

Tax Filing Websites Caught Sending Users’ Financial Data to Facebook

(Photo: Olga DeLawrence/Unsplash)
Filing your taxes is already stressful enough without the worry that your data will end up in the wrong hands. Thanks to several tax websites' newly discovered data-sharing practices, this concern is likely to loom over the 2023 tax filing season. H&R Block, TaxAct, and TaxSlayer have been found to send users' financial data to none other than Facebook.

The three websites—which together help more than 25 million Americans file their taxes annually—use Meta's JavaScript code (called "Meta Pixel") to capture user data and send it Facebook's way, according to the nonprofit tech investigations newsletter The Markup. H&R Block, one of the country's most recognizable tax filing firms, was found using Meta Pixel to obtain users' health savings account usage data as well as dependents' college expense information. TaxAct was caught using the code to track users' filing status, dependents, adjusted gross income, and refund totals. TaxAct appears to have made a halfhearted attempt to anonymize this data by scrambling dependent names and rounding income and refunds to the nearest thousand and hundred dollars, respectively; however, The Markup found the name scrambling to be easily reversible.
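The rounding reportedly applied is coarse but simple to reproduce. Here is a sketch of that kind of blurring; the dollar amounts are made up, and the function names are our own, not anything from TaxAct's code:

```python
# Illustration of rounding-based "anonymization" like that reportedly
# applied to income and refund figures. Amounts below are hypothetical.

def blur_income(income: float) -> int:
    """Round an income figure to the nearest thousand dollars."""
    return int(round(income / 1000) * 1000)

def blur_refund(refund: float) -> int:
    """Round a refund figure to the nearest hundred dollars."""
    return int(round(refund / 100) * 100)

print(blur_income(48_250))  # 48000
print(blur_refund(1_264))   # 1300
```

Rounding shrinks each figure to a bucket, but every user still lands in a narrow band of values, which is why this is considered weak anonymization when combined with other identifying signals.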

(Photo: H&R Block)

TaxSlayer appears to have used Meta Pixel to capture the most detailed user information. Using a "Meta Pixel Inspector" it developed earlier this year, The Markup found that TaxSlayer habitually gathered users' names, phone numbers, and dependent names. A version of TaxSlayer embedded in personal finance personality Dave Ramsey's websites also obtained users' income and refund totals. When The Markup asked Ramsey Solutions about its use of Meta Pixel, the company said it hadn't known about the code's data-grabbing element and says it has removed it from its sites. TaxAct similarly stopped capturing users' financial data but continued to record dependents' names.

But why? What incentive does Facebook have to grab Americans’ tax information? As it nearly always does, the answer comes down to money. Meta, Facebook’s parent company, regularly uses its approximately 2 million Meta Pixels to capture web users’ browsing activity, demographic data, and more. This information is then used to ensure Facebook and Instagram users are seeing ads they might actually click, thus supporting Meta’s lucrative marketing operations.

Now that the IRS has been made aware of the tax websites' Meta Pixel usage, Facebook's tax data harvesting could become financially ruinous for the sites involved. Websites that share users' tax information without consent can face steep fines, and as of Tuesday, H&R Block, TaxAct, and TaxSlayer lacked the disclosures necessary to claim consent.




Webb Telescope Collects First-Ever Atmospheric Data From an Exoplanet

The James Webb Space Telescope has been in space for less than a year, but it's already racked up an impressive list of firsts, from capturing the bones of another galaxy to the first-ever detection of what may be among the oldest galaxies in the universe. Now, Webb is making history again by collecting a full chemical profile of the atmosphere of a distant exoplanet. The new data, published by multiple international teams across five studies, makes WASP-39 b possibly the best-studied planet outside our solar system.

WASP-39 b orbits a sun-like star about 700 light-years from Earth, and it orbits extremely close: the exoplanet, which is roughly the size of Saturn, is eight times closer to its star than Mercury is to ours. The exoplanet was originally discovered using transit photometry, which watches for small dips in a star's light as evidence that a planet has passed in front of it. This method is the most effective one we have for identifying exoplanets, but it only works if the planet's orbital plane carries it in front of the star from our perspective.
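The dip transit photometry measures is roughly the ratio of the planet's disk to the star's disk. As a sketch, using the nominal radii of Saturn and the Sun as stand-ins (WASP-39 b's actual radius and its star's differ somewhat):

```python
# Transit photometry: the fractional dip in starlight during a transit is
# approximately (planet radius / star radius) ** 2. The radii below are
# nominal values for Saturn and the Sun, used purely for illustration.

R_PLANET_KM = 58_232   # Saturn's mean radius
R_STAR_KM = 696_340    # the Sun's radius

depth = (R_PLANET_KM / R_STAR_KM) ** 2
print(f"{depth:.2%}")  # a Saturn-sized planet dims a Sun-like star by ~0.7%
```

A dip that small is why space telescopes with very stable photometry are needed to find and characterize such planets.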

By the same token, Webb can watch for WASP-39 b to transit its star and gather data from the planet's atmosphere. As WASP-39 batters the planet with radiation, some of that energy is absorbed by molecules in the gas giant's atmosphere. Thus, it's possible to get data on the chemical processes at work, and there are a few notable things going on. For example, WASP-39 b is now the first exoplanet known to have sulfur dioxide in its air. The production of this molecule is powered by high-energy light from the star, and given its location, WASP-39 b has plenty of that. This is the first confirmation of photochemistry on an exoplanet.

Data from the studies (three of which are published in Nature, with two still pending) also showed the presence of molecules like carbon monoxide and carbon dioxide, confirming a previous Webb observation. There's also sodium, potassium, and lots of water vapor; the latter confirms some previous ground- and space-based observations. Knowing all these details helps scientists hypothesize about the formation of WASP-39 b, including the possibility that it became so enormous by swallowing up smaller planets inside the WASP-39 system, a likely conclusion based on the high ratio of sulfur to hydrogen. High oxygen content also suggests WASP-39 b formed farther from its host star before migrating inward.

This is only a hint of what the James Webb Space Telescope can do. Its ability to characterize exoplanet atmospheres is shaping up to be more robust than astronomers had dared hope. When turned toward small, rocky planets like those in the TRAPPIST-1 system, Webb could make even more incredible discoveries.




Wednesday, 23 November 2022

Vivo X90 Pro Plus Is the First Phone With the Snapdragon 8 Gen 2 Chip

Qualcomm announced its next flagship mobile processor just a few days ago, and Vivo is wasting no time while the new chip is still top of mind. The Chinese smartphone giant has announced the new X90 Pro Plus, the first smartphone to ship with the Snapdragon 8 Gen 2. It will go on sale in the coming days, but sadly, only in the Chinese market for the time being.

Vivo actually unveiled three versions of the X90, but the X90 and X90 Pro both rely on MediaTek's Dimensity 9200 chip. Only the Pro Plus gets the latest silicon from Qualcomm, but that's not all that sets the X90 Pro Plus apart. It also sports an enormous quad-camera array, anchored by the Sony IMX989, a 1-inch camera sensor that promises unparalleled light sensitivity, alongside a 2x telephoto, a 3.5x periscope telephoto, and an ultrawide sensor. The phone isn't shy about showing off all that camera hardware in the bulbous camera bump that dominates the back.

The rest of the specs are equally impressive. The X90 Pro Plus has a 6.78-inch, 1440 x 3200 OLED display that supports a 120Hz refresh rate and 1,800 nits of peak brightness. That's even brighter than the Samsung Galaxy S22 Ultra's display. The X90 Pro Plus also has 12GB of RAM and 256GB of the latest UFS 4.0 storage. The only possible hardware shortcoming is the battery, which rates at just 4,700mAh. That's a bit on the small side for a device with such a big display and the most powerful mobile processor on the market. Qualcomm claims the Snapdragon 8 Gen 2 is more efficient than its predecessor, but no one has used it in a retail device yet.

Qualcomm announced the Snapdragon 8 Gen 2 at its Snapdragon Summit last week, with the usual laundry list of performance and efficiency uplifts. The new chip has eight CPU cores, including the new Cortex-X3 "Prime Core." There are also four high-power CPU cores, split evenly between the Cortex-A710 and A715. The older A710s ensure the chip will have powerful native support for 32-bit apps, which are still common in China. Three efficiency cores (Cortex-A510) round out the cluster and can likewise run 32-bit apps.

The Snapdragon 8 Gen 2 also features an improved Adreno 740 GPU with support for the latest Vulkan 1.3 API and hardware ray tracing. The chip has Wi-Fi 7 support, but the X90 Pro Plus doesn’t implement that.

Vivo will start taking pre-orders for the X90 and X90 Pro today, and the X90 Pro Plus will follow on Nov. 30. All the phones will ship on Dec. 6 in China. There’s no news on an international launch, but you won’t have to wait long to get a Snapdragon 8 Gen 2 device in the US. Motorola, Samsung, and others are expected to announce flagship devices with the chip in late 2022 and early 2023.
