Tuesday, 31 May 2022

Astronomers Prepare to Turn Webb Telescope Toward Nearby Super-Earths

The James Webb Space Telescope (JWST) has been in space for about six months, which is just a fraction of the time NASA spent designing and building it. All that effort is about to pay off, though. Webb will begin science operations this summer, and some of its first targets will be a pair of nearby exoplanets. These planets, 55 Cancri e and LHS 3844 b, belong to a category of large rocky exoplanets known as super-Earths. Webb could provide scientists with the best view yet of terrestrial planets outside our solar system. 

NASA designed the JWST as a follow-up to the hugely successful Hubble Space Telescope. While Hubble needed a servicing mission after launch to work correctly, Webb appears to be a prime example of optical perfection. NASA reports the telescope’s instruments are “diffraction-limited,” which means it’s as good as it can possibly be given the size of the mirror.

So far, we’ve only seen a few test images from Webb, but they already show how much more powerful it is than past instruments. That could be a boon to the study of exoplanets, which have been too dim for past telescopes to observe in detail. Webb has a much larger mirror than even Hubble, and it can peer deeply into the infrared part of the spectrum. That could allow it to collect data from 55 Cancri e and LHS 3844 b like never before. 

55 Cancri e orbits a star in a binary system about 41 light years away and is believed to be about eight times Earth’s mass. This system is home to five known exoplanets, and 55 Cancri e is the innermost. It completes an orbit of its star once every 18 hours, and scientists believe its surface is molten. However, observations show that heat is distributed away from the sunward side of the alien world. That might be because 55 Cancri e has a thick atmosphere that moves heat around, or because lava effectively rains across the surface. Researchers will use the Near-Infrared Camera (NIRCam) and Mid-Infrared Instrument (MIRI) to capture thermal emission spectra from 55 Cancri e in hopes of identifying the mechanism for this heat transfer. 

The other target, LHS 3844 b, is a slightly cooler super-Earth that is only about 30 percent more massive than our planet. It also has a short orbital period (11 hours), but its star (48 light years away) is smaller and cooler than our own. Data from the Spitzer telescope suggest it has no atmosphere, but that could make it easier to gather data on its composition. Thermal emission spectra could be compared to known rocks, and if the planet is volcanically active, Webb might even be able to detect trace gases expelled from inside the planet. 

Astronomers believe getting this data from Webb will help us better understand the nature and evolution of Earth-like planets across the galaxy. There are more than 5,000 known exoplanets, so taking a look at two of them with Webb is just the start.

Microsoft Was Going to Release a Game Streaming Dongle, But Now It’s Starting From Scratch

Microsoft is putting a lot of energy into promoting its Game Pass subscription services. While it’s not included in the base subscription, Microsoft’s Xbox Cloud Gaming (formerly xCloud) could be its big play for the future, but that future might be a little further off than we thought. Microsoft confirms it was working on a streaming dongle code-named Keystone, but it has decided to scrap that project and start over.

Microsoft’s Game Pass offerings are a bit convoluted, spanning both PC and Xbox with platform-specific subscriptions and an all-in sub that adds cloud gaming. That $15-per-month Ultimate plan lets you download a selection of titles to your local gaming hardware (PC or Xbox), but you can also stream a subset of games to a browser, mobile device, Xbox, or PC. Adding a streaming dongle à la Chromecast to expand compatibility seems like an easy win, but Microsoft isn’t so sure. 

Keystone would have been a small HDMI-equipped device that could bring Xbox Cloud Gaming to any TV or monitor. According to Microsoft, it will take what it learned making Keystone and apply that to new game streaming hardware. However, it doesn’t have any timelines or even vague concepts to share right now. Microsoft is starting from scratch. 

When Google announced Stadia, the availability of inexpensive Chromecast devices was cited as a significant advantage, but Stadia hasn’t exactly been lighting up the internet. Developers seem mostly uninterested in porting AAA games to Google’s platform. It’s possible that Microsoft felt it needed the streaming stick, which it pre-announced in 2021, only as a foil to Google’s streaming platform. Xbox Cloud Gaming has some major advantages over Google’s service, even in what most consider a dry spell for Game Pass.

Microsoft is probably feeling much more secure in its cloud gaming prowess right now. Unlike Google, Microsoft has a raft of first-party titles from the studios it has gobbled up in recent years, like Bethesda, with Activision Blizzard on the way. It promises all of its first-party titles will come to Game Pass on day one, although they might not all be available to stream right away. 

We don’t know what form Microsoft’s streaming explorations will take. The company is committed to boosting Game Pass subscription numbers, but is it so committed that it will create a dongle that competes with the Xbox? Sony is set to roll out its updated PS Plus service, which supports downloadable and streaming games, but it will only work on Sony consoles and PC. Without a strong challenge from Google, there’s little reason for Microsoft to make a cheap streaming dongle.

Northwestern University Builds Tiny Robot Crabs

(Photo: Northwestern University)
Engineers at Northwestern University have created micro-robots that mimic the peekytoe crab—but at an almost unbelievably small scale. 

The half-millimeter “crabs” constitute the world’s smallest remote-control robots. Smaller than a flea, they’re able to walk along the edge of a penny or thread a sewing needle. Despite (or perhaps because of) their size, the micro-robots are able to “bend, twist, crawl, walk, turn and even jump,” giving researchers hope that tiny robots may someday be able to perform tasks for humans in highly constrained environments. 

“You might imagine micro-robots as agents to repair or assemble small structures or machines in industry or as surgical assistants to clear clogged arteries, to stop internal bleeding or to eliminate cancerous tumors—all in minimally invasive procedures,” said bioelectronics engineer John A. Rogers in a Northwestern University statement. Rogers and his colleague Yonggang Huang, a mechanical engineer, conducted their research in experimental and theoretical halves. The product of their work has since been published in the journal Science Robotics.  

If you’re wondering how Rogers and Huang packed complicated hardware into such a tiny structure, you’d be right to ask—because they didn’t. The micro-robots are made up of a shape-memory alloy that, when heated, returns to a “default” shape. A scanned laser beam quickly heats the micro-robot at multiple points throughout its body, while a thin glass coating helps it to return to its deformed shape. This rapid back-and-forth allows the micro-robot to move from one location to another, covering a distance equal to half its body length per second. The robot crab also moves in whichever direction the laser is scanned toward; if the operator points the laser to the right, the micro-robot travels right. 

This isn’t the first time Rogers and Huang have teamed up to engineer tiny tech. Less than a year ago, the duo unveiled the world’s smallest flying structure, a winged microchip about the size of an ant’s head. Before that, they worked with a team of biomedical researchers to create small bioresorbable cardiac pacemakers that can be left in the body to disintegrate after their temporary purposes are fulfilled. 

And because experimental engineering can in fact contain a bit of levity, the engineers chose to model their micro-robots on crabs simply because they were amused by the resulting movement. They were also able to produce micro-robots that looked and behaved like inchworms, beetles, and crickets, but it was the crabs they found funny and inspiring. “It was a creative whim,” Rogers said.

Sony: We Are Going to Build So Very Many PlayStation 5s

Sony says the PS5 is the preferred way to play Cyberpunk. Good luck finding one.

After more than a year and a half, it’s still almost impossible to purchase a new game console for retail price. Sure, there’s supply if you don’t mind a 50-75 percent markup, but everyone should mind that. Thankfully, Sony might be riding to the rescue soon. The PlayStation 5 maker promises it’s going to ramp up production to unprecedented levels. We’ll believe it when we see it, but it’s still encouraging to hear. 

The PlayStation 5 and Xbox Series X launched at the tail end of 2020, right in the midst of a historic shortage of components and a once-in-a-century viral pandemic that kept people at home and bored. It was possibly the worst time to go looking for an expensive new piece of gaming hardware. Predictably, scalpers managed to collect all the inventory and resell it at inflated rates. 

Retailers have implemented some policies to slow down resellers, but it’s still hard to find a PS5 in stock that isn’t marked up to $800 or $900 from the $500 MSRP. In a briefing with investors, Sony Interactive Entertainment President and CEO Jim Ryan pledged to increase supply of the consoles. It’s easy to see why investors would want that assurance. Sony only sold two million consoles in the first quarter of 2022, a significant decline from the previous quarter, but demand has not slumped. Sony says it can sell 80,000 PS5s in just 82 minutes, whereas it would have taken nine days to move that many PS4s at the 18-month mark. By not having more units, Sony is leaving a ton of money on the table. 
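For a sense of scale, those quoted figures work out to a striking difference in sell-through rate. A quick back-of-the-envelope sketch, using only the numbers above (Sony did not publish per-minute figures):

```python
# Units moved per minute at the 18-month mark, per Sony's quoted figures.
ps5_rate = 80_000 / 82             # PS5: 80,000 units in 82 minutes
ps4_rate = 80_000 / (9 * 24 * 60)  # PS4: the same volume over nine days
print(f"PS5: {ps5_rate:,.0f} units/min vs. PS4: {ps4_rate:.1f} units/min "
      f"(~{ps5_rate / ps4_rate:.0f}x faster)")
```

In other words, the PS5 is selling on the order of 150 times faster than the PS4 did at the same point, which is why supply, not demand, is the story.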

(Photo: Onur Binay/Unsplash)

Apparently, Sony believes it can boost production to a level that would allow the PS5 to surpass the PS4’s total unit sales in 2024. At that point in its life, the PS4 had moved between 40 and 50 million units, but the PS5 is currently hovering just under 20 million total. Sony will get there by working with more suppliers to guarantee access to components, but the rest of Ryan’s claims were sufficiently vague to be meaningless. 

Until Sony can boost production numbers, gamers may continue playing on older hardware. At this point in the PS4 era, there were just 36 million PS3 players. Currently, there are 84 million still using the PS4, and that number won’t sink much until you can swing by your local retailer (or Amazon listing) and buy the PS5 at its actual retail price.

Researchers Sequence the DNA of Man Who Died at Pompeii

(Photo: Denise Jones/Unsplash)
Despite how much we already know about the tragic volcanic eruption that occurred almost 2,000 years ago in Pompeii, there’s still a lot to be discovered about the people who lived there. Scientists have made a major stride in this area by sequencing a complete genome of a man who died that fateful day. 

The eruption at Pompeii infamously encased the city in pyroclastic flows, creating an eerie snapshot of its residents’ final moments. Because the ash and volcanic debris preserved everything from human bodies to food, scientists have been able to turn the city—just a few miles southeast of Naples, Italy—into an archaeological site that offers glimpses of life in 79 CE. One such glimpse can be found at the House of the Craftsman, a small structure in which the remains of two ash-engulfed humans were first found nearly a century ago.  

The remains belonged to one 50-year-old woman and one man believed to be between 35 and 40 years old. Dr. Serena Viva, an anthropology professor at Italy’s University of Salento, worked with geogeneticists to extract DNA from both skeletons. But according to a report published in Scientific Reports, the team was unable to obtain quality information from the woman’s DNA, leaving them capable of analyzing only the man’s DNA. 

Researchers believe the pair experienced a quick death as a cloud of superheated ash overtook the home. Their positions suggest they did not attempt to escape. (Photo: Notizie degli Scavi di Antichità, 1934)

A small amount of bone taken from the base of the man’s skull provided enough intact DNA for the researchers to sequence a complete genome. His genome revealed that while he shared genetic similarities with other people who lived in Italy during the Roman Imperial age, he also possessed genes typical of individuals from Sardinia, an island off Italy’s western coast. This tells the researchers that the Italian Peninsula may have harbored more genetic diversity than originally thought.  

The man’s remains also contained ancient DNA from Mycobacterium tuberculosis, the bacterium that causes tuberculosis, a disease that primarily affects the lungs. Because a few of the man’s vertebrae showed signs of disease, Dr. Viva’s team believes he was suffering from tuberculosis prior to the eruption. 

Thanks to the effectiveness of modern sequencing machines—and the success of this study—researchers are likely to continue analyzing DNA from preserved remains at Pompeii and other archaeological sites, like Herculaneum (which was also engulfed in volcanic ash). “Our initial findings provide a foundation to promote an intensive analysis of well-preserved Pompeian individuals,” the study reads. “Supported by the enormous amount of archaeological information that has been collected in the past century for the city of Pompeii, their paleogenetic analyses will help us to reconstruct the lifestyle of this fascinating population of the Imperial Roman period.”

Asteroid-Mining Company AstroForge Books Its First Test Mission

Bennu, as seen by OSIRIS-REx.

We’ve been hearing about asteroid mining for years, and while it wasn’t crazy to speculate on the possibility, there were plenty of barriers. However, humanity has recently studied asteroids up close, landed on them, and even shot one with a high-speed projectile. The day may be coming when asteroid mining will be viable, and a startup called AstroForge aims to be the first. This newly founded company has announced its plans to begin mining asteroids for rare metals, and it already has a test mission planned. 

Scientists estimate that even a small asteroid could hold billions of dollars worth of precious metals and other resources, but the problem is getting that material to Earth without breaking the bank. Past efforts to mine asteroids have made water their initial focus. With a supply of water, you can split it into oxygen and hydrogen for fuel. However, AstroForge is going right for the shiny stuff, saying there’s no market for fuel depots in space, and vessels like the upcoming Starship could potentially heft enough water into orbit that it’s not worth collecting it from asteroids. 

AstroForge intends to focus its efforts on resources that are in high demand here on Earth: platinum-group metals like osmium, iridium, palladium, and of course, platinum. Mining these materials on Earth is a dirty business, taking up large swaths of land and producing extensive pollution. The US, where AstroForge is based, is also not blessed with large deposits of platinum-group metals, so having an extraterrestrial source of these materials could be a boon to national security, AstroForge CEO Matt Gialich recently told Space.com. 

NASA’s OSIRIS-REx is believed to carry about 2kg of asteroid regolith, but the mission comes with an $800 million price tag.

The company claims to have developed a “lab tested” technology to process asteroid material in space so it can be returned to Earth. It has raised $13 million to fund its operations, including a flight on a SpaceX Falcon 9 rocket to test the tech in orbit. However, reaching an asteroid to mine it could be the real problem. 

So far, space agencies like NASA and JAXA have managed to get a few robotic probes to nearby asteroids. But “nearby” still means millions of miles. It takes years just to reach the target, and a return trip only adds to the expense. JAXA spent about $150 million on the Hayabusa2 mission to collect 5.4 grams of surface material from the asteroid Ryugu. It dropped the payload back home in 2020. Meanwhile, NASA’s OSIRIS-REx mission recently scooped up an estimated two kilograms of asteroid regolith, but this one has cost about $800 million so far — the sample won’t be back on Earth until 2023. Unless AstroForge’s mining technology is truly revolutionary, the economics of asteroid mining remains very questionable. Hopefully, we get an update after the upcoming test flight.
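To make the economics concrete, here is a rough sketch of what those missions paid per gram of returned material, using only the approximate figures cited above (not official program accounting):

```python
# Rough cost-per-gram math for the two sample-return missions mentioned
# above. Costs and sample masses are the approximate figures cited in
# the text.
missions = {
    "Hayabusa2": (150e6, 5.4),      # ~$150M for 5.4 grams of Ryugu
    "OSIRIS-REx": (800e6, 2000.0),  # ~$800M for ~2 kg of Bennu regolith
}
for name, (cost_usd, sample_g) in missions.items():
    print(f"{name}: ~${cost_usd / sample_g:,.0f} per gram returned")
```

Even the cheaper of the two works out to hundreds of thousands of dollars per gram, which is the gap any commercial mining venture has to close.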

Friday, 27 May 2022

Deathloop: Putting AMD’s FidelityFX Super Resolution 2.0 to the Test Against Nvidia’s DLSS

Earlier this month, AMD offered us the chance to preview FidelityFX Super Resolution (FSR) 2.0, courtesy of the game Deathloop. Deathloop is currently the only title to support both versions of FSR as well as Nvidia’s Deep Learning Super Sampling (DLSS), making this an excellent opportunity to take them both out for a collective spin.

Despite the name, FSR 2.0 is not an update to FSR 1.0. It’s an entirely new approach that’s designed to bypass some of FSR 1.0’s weaknesses. AMD’s first attempt at this kind of upscaling was based entirely on spatial upscaling and did not use temporal information. It also required a high-quality anti-aliasing method to work properly, which many games don’t support. Deathloop lacks this support, and FSR 1.0 in the game isn’t much to write home about.
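The spatial-versus-temporal distinction is the crux of the redesign. A toy one-dimensional illustration (hypothetical, not AMD’s actual algorithm) shows why temporal data helps: a single low-resolution frame has to interpolate the missing detail, while jittered samples accumulated across frames can recover it exactly.

```python
import numpy as np

# A "scene" with fine detail, sampled at half resolution.
scene = np.sin(np.linspace(0, 8 * np.pi, 64))  # native-resolution signal

# Spatial upscaling (FSR 1.0's approach): one half-res frame,
# interpolated back up. The missing pixels must be invented.
half = scene[::2]
spatial = np.interp(np.arange(64), np.arange(0, 64, 2), half)

# Temporal upscaling (FSR 2.0, DLSS): two half-res frames with a
# one-pixel jitter between them cover all 64 native sample positions
# when accumulated.
frame_a = scene[0::2]  # even pixels, frame N
frame_b = scene[1::2]  # odd pixels, frame N+1 (camera jittered)
temporal = np.empty(64)
temporal[0::2] = frame_a
temporal[1::2] = frame_b

print(np.abs(spatial - scene).max())   # interpolation error > 0
print(np.abs(temporal - scene).max())  # exact: every sample recovered
```

Real renderers must also reject stale samples when objects move, which is where most of the engineering effort in FSR 2.0 and DLSS goes, but the core advantage of reusing prior frames is the same.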

AMD is more confident in FidelityFX Super Resolution 2.0 than it ever was in FSR 1.0. FSR 1.0 was positioned as an alternative feature along the lines of Radeon Image Sharpening, while FSR 2.0 is positioned as more of a direct competitor to DLSS.

FSR 2.0 incorporates the temporal data that FSR 1.0 lacks and it doesn’t require a game to support high-quality antialiasing in order to render acceptable output. AMD previewed it back in March, but this is the first time we’ve gotten to play with it. We’ve taken FSR 2.0 out for a spin against DLSS on a 1440p panel to capture the rendering differences.

One thing to note before we get started: the small gray spots you see on some images are not errors introduced by any of these visual settings. They’re transient phenomena. There’s one place where FSR and DLSS both introduce errors in our comparison, and we’ll call it out when we get to it.

How to Compare DLSS and FSR in This Article

This article contains a mixture of directly embedded images as well as links out to Imgsli. Imgsli is an excellent free method for comparing two or more images in an A/B(C) method. Because of the number of comparison points, we’re going to queue comparisons in DLSS, FSR 2.0, and then directly head-to-head for our selected scenes. You can select which image you compare in Imgsli using the drop-down menu at the top of each image.

Nvidia DLSS: Beach

Let’s get started. Deathloop begins with your own murder, after which you wake up on a deserted beach. Here’s the zoomed-out, native 1440p version of the image with no AA or alternative upsampling:

Nvidia Native versus DLSS. Adjustable image comparison available at Imgsli.com

But zoomed out doesn’t give us the best view of what changes in each scene. It’s actually hard to tell what’s different across these frames. (From Nvidia’s perspective, that’s a good thing). A 600 percent zoom is a much better way to see rendering subtleties.

Nvidia DLSS quality comparison. Close-up of shot above. User-adjustable image available on Imgsli.

Quality DLSS settings substantially reduce jaggies compared to the 2560×1440 native resolution. This is expected — DLSS performs antialiasing, in addition to its other functions — but the difference is large. Shifting down to “Balanced” hardly impacts image quality at all. One downside to DLSS (and this is present in every mode) is that ground textures are a bit blurred compared to the native image. This is really only visible at tight zoom, however.

AMD FSR 2.0: Beach

According to AMD, FSR 2.0 is better than FSR 1.0 at every quality level. We focused our testing on FSR 2.0 for these AMD comparisons, but include some FSR 1.0 shots as well, to show the degree of uplift.

AMD native 1440p versus FSR 2.0 versus FSR 1.0. User-comparable results available on Imgsli.

The improvement from FSR 1.0 to FSR 2.0 is immediately obvious. FSR 1.0 blurs content heavily, and the line leading away from the pole is a vague smear. With FSR 2.0, it resolves into a distinct line. Switch to “Performance” for both tests, and you’ll immediately see how much better FSR 2.0 is compared to FSR 1.0. AMD claimed that every FSR 2.0 quality setting was better than every FSR 1.0 quality setting, but this comparison shows AMD is actually underselling its own feature. Even FSR 2.0’s “Performance” setting is better than FSR 1.0’s “Quality,” though Deathloop isn’t considered a great test case for FidelityFX Super Resolution’s first iteration. 

AMD 1440p native versus FSR 2.0 and FSR 1.0. User-adjustable comparison available at Imgsli.

FSR versus DLSS: Beach

When it comes to FSR 2.0 versus DLSS, FSR 2.0 wins the comparison in this set of images. Note: We’ve combined the standard shot and closeups in this comparison to try to keep the amount of clickable material to some kind of reasonable limit.

AMD FSR 2.0 versus Nvidia DLSS. User-adjustable comparison available at Imgsli.

FSR 2.0 is much less blurry than DLSS, at every detail setting. We’ve included both the zoomed-out and zoomed-in shots to illustrate the distinction in both modes. FSR 2.0’s “Balanced” preset offers better image quality than DLSS’ “Quality” preset. One thing we do encourage you to keep in mind is that the relative quality of DLSS and FSR can vary considerably depending on the suitability of the game engine for the format and the amount of work invested by the developer. These comparisons might play out differently in another title.

The gains from FSR 1.0 Ultra Quality to FSR 2.0’s “Quality” mode are quite impressive. Even at top quality, FSR 1.0 struggled to distinguish the wire strung up at the pole from background clutter, and lower-quality versions of the feature all but lose the strand. One of AMD’s promises for FidelityFX Super Resolution 2.0 was that the feature’s “Balanced” mode would be better than FSR 1.0’s “Ultra Quality” mode. In some ways, FSR 2.0’s “Performance” mode is better than UQ FSR 1.0, though we wouldn’t actually recommend using Performance mode.

Nvidia DLSS: Bunker

Let’s move from the beach to the interior of the starting area and check out a nearby underground bunker.

Nvidia native rendering versus DLSS. User-adjustable image comparison via Imgsli.

Nvidia DLSS and AMD’s FidelityFX Super Resolution both create a weird texture artifact on the ground in this scene. You might not know this was an accident if you didn’t look closely at other rendering modes — while it’s a bit odd looking, the texture doesn’t flicker or change dramatically as one moves around the room. Light across the scene is a fair bit different between 1440p with no DLSS and DLSS engaged, but you can see how DLSS prevents horizontal line shimmer where there are lines across surfaces. 

Apart from the introduced error, I consider DLSS Quality to improve the overall image (and FSR also creates the same error). DLSS Balanced, on the other hand, not so much. It’s not that DLSS Balanced doesn’t have any advantages over native 1440p, but there are trade-offs as well, especially considering the floor artifact. We’ll look at a few of these when we zoom in. Temporal AA delivers the best quality of all, if only because there’s no error on the ground. 

Our bunker close-up shot focuses on the map board at the back of the room. It’s striking how bad the default native rendering is. From our vantage point in front of the orange tarp, the close-up native line isn’t actually a solid line of string at all, but a series of dashes. DLSS Quality fixes both the dashes and the detailing on the metal box to the left of the map. DLSS Balanced and DLSS Quality are quite similar here.

Nvidia native rendering versus DLSS, bunker close-up. User-adjustable image comparison available on Imgsli.

Interestingly, Temporal AA is worse on this map closeup than the other settings, even if it looks better in the scene as a whole. Line weights and handwritten text on notes pinned to the board are both stronger with DLSS. Temporal AA manages to beat native, but the setting does not impress here.

AMD FSR: Bunker

The bunker on an AMD GPU has the same visual problem that it does with DLSS. Both DLSS and FSR change how shiny and reflective certain surfaces look. That isn’t necessarily bad, but it does stand out as a difference between enabling and disabling these technologies, even apart from the oddly textured floor. The floor problem, despite being quite visible in still shots, doesn’t really stand out during gameplay with FSR, either. 

AMD FidelityFX Super Resolution versus native resolution. User-adjustable comparison available on Imgsli.

Pan back and forth in the image comparison above between native 1440p and FSR 2.0 Quality, and you may notice that one of the lockers in the back appears to lose a line that defines an upper compartment. Zoom in, and it’s easier to see that while the line wasn’t removed, it shimmers less and is less visible. You can also see that FSR 2.0 improves the string rendering on the wall map in the back, even without zooming. FSR 1.0 Ultra Quality looks somewhat worse than native 1440p with no AA technology enabled. 

Comparison between native AMD rendering and FSR 2.0. User-adjustable comparison available on Imgsli.

Not much new to say here. Native looks bad on AMD as well, and FSR 2.0 is substantially better than FSR 1.0. I forgot to grab a “Balanced” screenshot for FidelityFX Super Resolution 2.0 for this one — my apologies. But this is an easy win for FSR 2.0, without much more to say about it.

FSR versus DLSS: Bunker

Nvidia DLSS versus AMD FSR 2.0. User-adjustable comparison available on Imgsli.

Both companies’ solutions create an error on the floor, so we’re going to call that a wash and compare on the basis of other characteristics. You may be hard-pressed to see much in the way of variance unless you zoom in, at which point some distinctions appear. Once again, FSR 2.0 is a slightly sharper solution while DLSS blurs just slightly more. Differences this small typically come down to personal preference — do you like a bit of blur to guard against shimmer and jagged lines, or do you prefer maximum detail?

DLSS versus FSR 2.0: Bunker Close-Up

Neither DLSS nor FSR 2.0 looks fabulous in this close-up shot, but DLSS gets the nod from us for its ability to create slightly more legible text. Line strength is better with FSR 2.0 than with Deep Learning Super Sampling, but we’d give the nod to Nvidia overall. 

User-adjustable image comparison available via Imgsli.

Nvidia DLSS – Panel Close-Up

We’ve pivoted (literally) towards the console panel you can see above, to get some close-up shots and measure DLSS versus FSR at minimum range. We’ll start with the DLSS comparisons, though we’ve also chucked an Nvidia run of FidelityFX Super Resolution into the mix, just to see how an Nvidia card fares when using AMD’s older rendering method.

Nvidia Native 1440p versus DLSS Quality. User-adjustable image comparison available at Imgsli.

DLSS Quality looks quite similar to native resolution here. While there’s a slight blurring, it’s not very much. AA methods often create at least a small amount of blur, after all. Balanced quality is noticeably worse, however, with significant gauge blur and fine detail loss. Temporal AA deals with some bright jagged lines that DLSS Quality doesn’t and changes the overall lighting a bit. FSR 1.0 does a reasonable job cleaning up the image in some places, but it creates text distortion in the gauge readouts.

Nvidia running FSR 1.0 versus AMD. Image isn’t perfectly aligned due to the need to swap GPUs. User-adjustable image comparison available at Imgsli.

Here, the slight blurring from DLSS Quality is preferable to the increased jaggies in the FSR 1.0 image. FSR 1.0 isn’t really the point of this article, but we wanted at least one comparison between Nvidia and AMD on this point. While FSR 1.0 output isn’t literally identical between the two companies — AMD’s text on the panels is ever-so-slightly blurrier than Nvidia’s — the two are close enough to demonstrate equivalent support.

AMD FSR 2.0: Panel Close-Up

Here’s AMD’s close-up on the instrument panel, compared across native resolution, FSR 2.0, and FSR 1.0.

Native resolution versus FSR 2.0 versus FSR 1.0. User-adjustable comparison available at Imgsli.

FSR 2.0 really shines here. The panel is higher quality with less blurring with FidelityFX Super Resolution 2.0 enabled in Quality mode than it is in native 1440p, as shown below:

A zoomed-in comparison of the two images shown above.

FSR 2.0 improves AMD’s image quality over and above baseline. That’s a trick FidelityFX 1.0 can’t match.

AMD FSR 2.0 versus DLSS: Panel

AMD’s FSR 2.0 wins this comparison against DLSS. The sharper rendering FSR 2.0 offers pays dividends here, making written text and gauge numbers easier to read than they are with DLSS. DLSS, in turn, renders significantly better text than FSR 1.0. Both technologies perform well here, and the gap between them isn’t huge. 

DLSS vs. FSR. User-adjustable comparison available on Imgsli.

While we preferred DLSS for the background map and text in our previous comparisons, we like FSR 2.0 more for the panels and associated gauges.

Putting It All Together: Who Comes Out on Top?

Between DLSS and FSR 2.0, I narrowly prefer FSR 2.0. Honestly, it’s a wash at anything less than a painstaking comparison — it’s not as if you notice a fractional difference in text that’s too blurry to read when playing the game normally. Both technologies broadly deliver what they say they will — namely, a performance improvement even at the highest quality settings.

What matters more for AMD is matching Nvidia’s ability to field an image-enhancing algorithm that improves performance instead of hurting it. In that regard, FSR 2.0 succeeds tremendously.

Technologies like FSR 2.0 could be particularly helpful for gaming on mobile and low-power devices, especially products like the Steam Deck. Tests show that technologies like DLSS and FSR can improve rendering performance by 20 to 40 percent depending on the title and your preferred settings. Improving performance that much would otherwise typically require buying a new, substantially more expensive GPU. 
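Those gains come from rendering fewer pixels. Using AMD’s published per-axis scale factors for FSR 2.0’s presets, here is a quick sketch of what each mode actually renders at a 1440p output (actual frame-rate gains vary by game and GPU):

```python
# Internal render resolution per FSR 2.0 preset at a 2560x1440 output.
# Scale factors are AMD's published per-axis ratios for each preset.
NATIVE_W, NATIVE_H = 2560, 1440
PRESETS = {"Quality": 1.5, "Balanced": 1.7, "Performance": 2.0}

for name, ratio in PRESETS.items():
    w, h = int(NATIVE_W / ratio), int(NATIVE_H / ratio)
    share = (w * h) / (NATIVE_W * NATIVE_H)
    print(f"{name}: renders {w}x{h} ({share:.0%} of native pixels)")
```

Quality mode shades well under half the pixels of native 1440p, which is why even the highest-quality preset still delivers a meaningful speedup.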

This shift has both short- and long-term implications. Because FSR 2.0 must be integrated into each game individually, the number of players who can take advantage of it today is small. Over time, however, the feature should become a mainstream capability across games and across every GPU AMD manufactures, and Intel will presumably follow suit. Once that happens, gamers can look forward to substantially better performance. 

Long term, we expect Intel, Nvidia, and AMD to shift their efforts towards a mixture of AI and non-AI techniques intended to improve image quality without paying the penalty of rendering pixels at their native resolution. FSR 2.0 is an important step on that journey.


Thursday, 26 May 2022

TCL Has No Idea When the PS5 Pro, Xbox Series Next Will Arrive

A slide TV manufacturer TCL presented during one of its briefings today has caused no small amount of ruckus online, given that it appears to point towards a next-gen console refresh cycle kicking off next year. There is no evidence that this is actually happening and a great many reasons to think it isn’t.

The only companies that know Sony and Microsoft's long-term upgrade plans for the current console generation are Sony, Microsoft, and AMD, the company that designs their SoCs. Anyone else is speculating. Sometimes a peripheral manufacturer or game developer will let something slip, but development knowledge like this is kept under tight wraps, and TCL isn't in the "need to know" category.

Beyond the fact that Sony and Microsoft wouldn't have shared this information, there are other facets to consider. While both companies have launched mid-cycle console refreshes in the past, the Xbox Series X and PlayStation 5 have fewer next-generation titles available at this point in their life cycles than any previous successful platform launch in history.

The Xbox Series X and PlayStation 5 are both roughly 18 months old. Typically, consoles launch with a mixture of platform exclusives and third-party games available on both the previous generation and the new one. Microsoft has put a huge focus on backwards compatibility this generation, so the lack of exclusives for its platform is less surprising, but Sony doesn't exactly have a ton of next-gen titles, either. If you strip out remakes and upgraded PS5 titles that also debuted on the PS4, the list is even shorter. Ratchet & Clank: Rift Apart and Returnal are both well regarded, and more games are shipping between now and mid-2023, but the chance of a near-term upgrade is vanishingly small given how little use players have gotten out of the hardware they already purchased.

Things Were Very Different in 2016

When the PS4 and Xbox One launched, one major reaction we heard from lots of players, especially Microsoft customers, boiled down to “This is it?” After losing gobs of money on the Xbox 360 and PS3, both manufacturers had targeted more restrained tech specs for their next iterations. Microsoft compounded its problem by making a $100 bet on the future of game controls that didn’t really pan out. Furthermore, 4K TVs, which had been quite new in 2013, were more common by 2016. With a new console generation still some years away, it made sense to launch high-end SKUs that would offer console enthusiasts a better platform to game on.

None of these factors are in play in 2022. The PS5 and Xbox Series X were fabulous deals at launch relative to the amount of gaming PC you could buy for the same money. Backwards compatibility and services like Xbox Game Pass have been big draws for Microsoft, while Sony has aimed for more of a traditional console cycle with exclusive launch titles, but adoption of both platforms has been depressed by semiconductor shortages and price gouging. Eighteen months after launch, at least some would-be adopters are still waiting for these factors to resolve themselves.

Finally, despite TCL's most fervent hopes, 8K TV is not on the horizon. 8K TV sales have actually fallen, accounting for just 0.15 percent of all TV shipments in 2021. This isn't going to change in the near future, for multiple reasons. Game engines and consoles are nowhere near ready to tackle 8K as a playable resolution, and there is no timetable for when that is likely to change. There is no push to introduce 8K content on any service. The semiconductor shortage shenanigans that have roiled the market for years are easing, but they aren't gone yet, and it will take at least another year before the market entirely returns to normal.

We haven’t heard so much as a whisper about a mid-generation upgrade for either console yet. We don’t expect to for a while.

 

AMD Shows Off Zen 4 Overclocking, But Questions Remain

Back in April, AMD made news by saying it was "gonna try to make a big splash with overclocking" with its upcoming Zen 4 CPUs. That would be something of a departure from Zen 3, which isn't exactly known for its overclocking headroom. Now that Computex has come and gone, we've been able to see Zen 4 in action. Despite AMD's statements, it's not clear how much the overclocking situation has changed. While Zen 4 clearly allows for overclocking, Zen 3 never impressed in this regard, and Zen 4 may not, either.

As a refresher, at Computex AMD showed a prototype 16C/32T Ryzen 7000-series “Raphael” CPU running Ghostwire: Tokyo. A CPU clock speed monitor was running in the corner, so we could see its clock speeds. Although clocks fluctuated in the low 5GHz range during the demo, the chip did hit a notable peak of 5.5GHz. Dr. Su said this is normal, as it will hit variable clocks depending on the workload. We don’t know what the actual boost clock of the chip is, but it’s clearly higher than the 5950X’s 4.9GHz.

In an interview with PCWorld, AMD's Robert Hallock confirmed nothing fancy was required to hit those clocks. He said they were using a standard 280mm AIO cooler you can buy online. This is a not-so-subtle reference to the time Intel was caught using a chiller to cool a 28-core desktop Xeon chip. Regardless, he said the CPU wasn't overclocked, and that "most of the threads" were running at 5.5GHz. This raises the question: if it can hit 5.5GHz on its own, how high can it go with an overclock?

There's one additional thing to point out. AMD released info on its upcoming AM5 chipsets (above), and you'll note it doesn't list overclocking as a feature on B650 boards. AMD has since clarified that B650 will indeed allow overclocking. This means every motherboard in the stack will support it, so it's open season when AM5 and Zen 4 launch this fall.

But what kind of results can we expect? Color us skeptical, but we're still not expecting much. For example, the 5.5GHz AMD showed off is the current high-water mark for 16-core CPUs; it's the single-core boost clock of Intel's binned Core i9-12900KS, after all. If AMD is allowing its 7000-series CPUs to hit 5.5GHz on their own, right out of the box, going even further might be a fool's errand. As we've stated before, if AMD could get the chip to run at 6GHz without fancy cooling, why limit it to 5.5GHz? Even if it's rated for a single-core boost of 5.5GHz and you get it up to 5.7GHz, that's still less than a four percent single-core overclock.
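The arithmetic behind that last figure is simple; a quick sketch using the clock speeds quoted above:

```python
# Overclocking headroom expressed as a percentage gain over the rated boost clock.
def oc_gain_percent(stock_ghz, overclock_ghz):
    return (overclock_ghz - stock_ghz) / stock_ghz * 100

# Pushing a 5.5GHz rated boost to 5.7GHz:
gain = oc_gain_percent(5.5, 5.7)
print(f"{gain:.1f}% gain")  # ~3.6 percent, under the four percent ceiling noted above
```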

Over the last five years, AMD has chosen to leave relatively little on the table for manual overclockers, preferring instead to ship CPUs that run quite close to their maximum possible frequencies out of the box. While it may still be possible to manually overclock a Ryzen CPU for a performance gain, we've had far more luck cranking up clocks on high-core-count CPUs like the Ryzen Threadripper 3990X, where all-core overclocks of 300-500MHz are possible given sufficient cooling. In that scenario, overclocking can still pay dividends over stock clocks, but Threadripper is a workstation platform, and one limited to artificially low clocks at that.

For Zen 4, AMD has cranked up the power requirements significantly, which will also allow it to raise clocks. It's gone from a 105W TDP on the 5950X to 170W, with a maximum socket power of 230W. That's a huge boost, and it will give AMD some added flexibility. Still, the song will likely remain the same: the lion's share of the benefits will probably come not from overclocking core clock speeds but from overclocking memory and Infinity Fabric, just as it did on Zen 3. Robert Hallock himself has noted that's where most of the gains have historically come from on AMD's CPUs, and reports that AMD is focusing heavily on memory overclocking with Zen 4, via its new EXPO technology, seemingly confirm it.

None of this is meant as a slight to AMD, because as we've said before, the world has changed when it comes to overclocking. For both AMD and Intel, the days when they could leave 20-30 percent of a CPU's clock headroom (or more) on the cutting room floor are long gone. As transistor density increases and process nodes shrink, it's becoming more difficult to achieve higher clock speeds while keeping thermals in check. This has been the pattern for some time now, and there's no reason to think it will suddenly change with Zen 4. Intel and AMD may make some limited carve-outs for overclockers, but we expect both companies to reserve the vast majority of their performance improvements for themselves.

Scientists Accidentally Turn Gene-Edited Hamsters Into Aggressive Bullies

(Photo: Andy Holmes/Unsplash)
Sometimes experiments don't go as planned. Case in point: when scientists set out to make hamsters more "peaceful" via gene editing, they accidentally made the fuzzy little rodents more aggressive instead.

Neuroscientists at Georgia State University (GSU) wanted to see how vasopressin, a mammalian hormone, influenced social behavior. They used a relatively novel technology called CRISPR-Cas9—which allows scientists to edit organisms’ genomes—to suppress a vasopressin receptor in Syrian hamsters. The expectation was that preventing the hamsters’ bodies from utilizing vasopressin would result in calmer, more peaceful behavior—but the result was anything but. 

Instead, cutting vasopressin out of the hamsters’ system resulted in “high levels of aggression,” particularly toward hamsters of the same sex. Though “normal” male hamsters are notoriously more aggressive than female ones, the startling change occurred in both sexes.

(Photo: Henri Tomic/Wikimedia Commons)

Previous studies have suggested that more, not less, vasopressin correlates with higher levels of cooperation. In 2016, researchers at the California Institute of Technology in Pasadena found that administering vasopressin to humans increased their tendency to "engage in mutually beneficial cooperation." This aligns with even earlier research, which showed that vasopressin may be responsible for regulating social behaviors related to sexual expression and aggression.

“This suggests a startling conclusion,” said H. Elliott Albers, a neuroscience professor and the leader of the study, in a GSU statement. “Even though we know that vasopressin increases social behaviors by acting within a number of brain regions, it is possible that the more global effects of the Avpr1a receptor are inhibitory. We don’t understand this system as well as we thought we did.”

Syrian hamsters are ideal test subjects for a number of research purposes, including those targeting social behavior, cancer, and even COVID-19. “Their stress response is more like that of humans than other rodents. They release the stress hormone cortisol, just as humans do. They also get many of the cancers that humans get,” said Professor Kim Huhman, Associate Director of the Neuroscience Institute at GSU. “Their susceptibility to the SARS-CoV-2 virus that causes COVID-19 makes them the rodent species of choice because they are vulnerable to it just as we are.”

GSU’s Neuroscience Institute and similar establishments intend to continue investigating the effects of suppressed or increased vasopressin in mammals. As the related body of research grows, so might treatments for depression and other mental illnesses.

The Death of Internet Explorer Is Just Weeks Away

Decades ago, Microsoft used its might to turn Internet Explorer into the de facto standard for web browsing. That got the company into hot water with regulators, but it was all for naught. Internet Explorer was on the decline for a long time before Microsoft moved to Edge. Now, it’s the end of the line for IE. Microsoft is following through with its plans to retire IE as announced last year, and the big day is just three weeks away on June 15th. 

As recently as the early 2000s, Microsoft's browser held almost the entire market. Then came the likes of Firefox and Opera, which began eating into Microsoft's lead. The tipping point came in 2009, when people began expecting more from their desktop browsers but Microsoft was hesitant to add modern features to IE. That's the year Google released Chrome, which caused a precipitous drop in IE usage. Chrome was neck and neck with IE by mid-2012, and then it left Microsoft's browser (and everyone else) in the dust.

Microsoft began updating Internet Explorer more regularly, but the damage was done. It moved to the Edge browser in 2015 with the release of Windows 10, but even that didn’t reverse the company’s online fortunes. It recently scrapped the old Edge and moved to a version based on the same open source Chromium code as Google Chrome. 

Microsoft has been moving toward killing IE for several years, but its usage share is still around 0.38 percent, according to StatCounter. Edge, meanwhile, enjoys a four percent market share, and Chrome is around 64 percent. While IE’s user base is tiny, even a fraction of a percent of the entire internet-using population is still a lot of people. Microsoft is naturally urging these folks to upgrade to Edge in advance of June 15th, and it’s not taking “no” for an answer. 
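A back-of-the-envelope estimate shows why. The five-billion figure for global internet users below is a rough 2022 assumption on our part, not a StatCounter number:

```python
# Back-of-the-envelope: even a sliver of the global internet population is big.
internet_users = 5_000_000_000   # rough 2022 estimate; an assumption, not from StatCounter
ie_share = 0.0038                # 0.38 percent, per StatCounter

ie_users = internet_users * ie_share
print(f"~{ie_users / 1_000_000:.0f} million people")  # ~19 million still on IE
```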

The upcoming deadline isn't just the end of support (Microsoft hasn't updated Internet Explorer since 2020); it's the point at which Microsoft will begin actively disabling Internet Explorer on Windows 10 systems. These computers will all have Edge pre-installed, so users will have another browser ready to go. In the days and weeks after June 15th, users who try to load Internet Explorer will find themselves redirected to Edge. Microsoft has recently clarified that it may also roll out a system update that streamlines the process.

Anyone still on an older (and unsupported) version of Windows could technically continue using Internet Explorer, but that setup is an inadvisable security nightmare. It's time for IE stragglers to get with the times, whether they want to or not. For those who need to access sites and services that inexplicably only play nicely with Microsoft's old browser, Edge offers an IE mode.

NY State Is Giving Robots to the Elderly

(Photo: Intuition Robotics)
New York State is helping hundreds of older residents remain connected with loved ones by distributing robots to their homes. 

The program is being organized by the New York State Office for the Aging (NYSOFA) in partnership with Intuition Robotics, an Israeli tech startup. Intuition Robotics’ central product is ElliQ, a robot “sidekick” tasked with preventing loneliness among the elderly. The voice-activated robot doesn’t perform physical tasks, but rather attempts to keep older adults in touch with their families and communities while monitoring basic wellness goals. 

The NYSOFA will give out more than 800 ElliQ robots as part of its ongoing effort to "battle social isolation and support aging-in-place," per the organization's press release. Older adults, especially those who "age in place" by remaining in their homes instead of moving to a care home or assisted living community, have always been at greater risk of isolation due to decreased mobility, and the aging of the baby boomer generation has only grown their numbers. The first year or so of the COVID-19 pandemic exacerbated the problem, given widespread advisories to stay home and limit face-to-face interaction. And elderly adults in the US are more likely to live alone than anywhere else in the world.

ElliQ aims to mitigate this issue by gently reminding its human companions to call their loved ones, something they can do using ElliQ itself. The aesthetically pleasing tabletop robot looks almost like a lamp and sits on a flat base, to which a speaker and a simple tablet are also attached. Older adults can use the tablet to conduct video calls with family, send text and photo messages, and participate in exercise programs. ElliQ can also present companions with news, weather, music, games, and other information or entertainment options. The robot uses daily check-ins and regular assessments to help companions track their mental and physical health, then, with the companion's consent, share that information with trusted loved ones.

(Photo: Intuition Robotics)

“Despite misconceptions and generalizations, older adults embrace new technology, especially when they see it is designed by older adults to meet their needs,” said NYSOFA Director Greg Olsen in the release. “For those who experience some form of isolation and wish to age in place, ElliQ is a powerful complement to traditional forms of social interaction and support from professional or family caregivers.”

NYSOFA case managers will determine eligibility for the ElliQ distribution program using a few criteria, like age, Wi-Fi access, and ease of socialization with those outside their homes. Once recipients are identified, Intuition Robotics will meet with them to provide installation and training. 

“We’ve long believed that connecting older adults with local communities via ElliQ will add an important element in providing holistic support to older adults aging in place,” said Intuition Robotics co-founder and CEO Dor Skuler. “This partnership with NYSOFA helps us further that mission through an innovative initiative that we are incredibly proud to be part of.”

NASA Moves Forward With Next-Gen Solar Sail Project

Getting from point A to point B in the solar system is no simple feat, and inefficient, heavy rockets aren’t always the best way. Therefore, NASA has announced it is moving ahead with a new solar sail concept that could make future spacecraft more efficient and maneuverable. The Diffractive Solar Sailing project is now entering phase III development under the NASA Innovative Advanced Concepts (NIAC) program, which could eventually lead to probes that use solar radiation to coast over the sun’s polar regions. 

The concept of solar sails is an old one; proposals go back decades. The gist is that you equip a vessel with a lightweight sail that translates the pressure of solar radiation into propulsion. The problem is that a solar sail has to be much larger than the spacecraft it's dragging along; even a low-thrust solar sail would need to be almost a square kilometer, and you need to keep it intact over the course of a mission. Plus, a conventional sail has little choice but to fly in the direction sunlight pushes it, forcing tradeoffs between power and navigation. Futuristic diffractive light sails could address these shortcomings.
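For a sense of scale, here's the textbook photon-pressure estimate for an ideal, perfectly reflective sail at Earth's distance from the sun. This is a first-principles simplification on our part, not a figure from the NIAC project:

```python
# Photon pressure on an ideal, perfectly reflective sail at 1 AU.
# Reflection doubles the momentum transfer, hence the (1 + reflectivity) factor.
SOLAR_IRRADIANCE = 1361.0   # W/m^2, solar constant at 1 AU
C = 2.998e8                 # speed of light, m/s

def sail_thrust_newtons(area_m2, reflectivity=1.0):
    """Continuous thrust on a flat sail facing the sun head-on."""
    pressure = (1 + reflectivity) * SOLAR_IRRADIANCE / C  # N/m^2
    return pressure * area_m2

# The square-kilometer sail mentioned above:
thrust = sail_thrust_newtons(1_000_000)
print(f"~{thrust:.1f} N")  # roughly 9 N of continuous thrust
```

Nine newtons is tiny by rocket standards, but it's continuous and propellant-free, which is the entire appeal of sailing.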

This work is being undertaken at the Johns Hopkins University Applied Physics Laboratory under the leadership of Amber Dubill and co-investigator Grover Swartzlander. The project progressed through phase I and II trials, in which the team developed concept and feasibility studies for diffractive light sails. The phase III award ensures $2 million in funding over the next two years to design and test the materials that could make diffractive light propulsion a reality.

A standard lightsail developed by the Planetary Society in 2019.

A diffractive light sail, as the name implies, takes advantage of a property of light known as diffraction: when light passes through a small opening, it spreads out on the other side. This could be used to make a light sail more maneuverable, so it doesn't have to travel in whatever direction sunlight pushes it.

The team will design its prototypes with several possible mission applications in mind. The technology most likely won't matter for missions to the outer solar system, where sunlight is weaker and the monumental distances demand faster modes of transportation. However, heliophysics is a great use case for diffractive lightsailing, as it would allow visits to the polar regions of the sun, which are difficult to access with current technology.

A lightsail with the ability to essentially redirect thrust from a continuous stream of sunlight would be able to enter orbit over the poles. It may even be possible to maneuver a constellation of satellites into this difficult orbit to study the sun from a new angle. In a few years, NASA may be able to conduct a demonstration mission. Until then, it’s all theoretical.
