Friday, 29 October 2021

Apple’s New MacBooks Include a Notch-Hiding Feature by Shrinking the Screen

(Photo: Apple)
Some despise it, some don’t mind it. Apple’s latest 14” and 16” MacBook Pros come with a notch at the top of the screen where the device’s camera lies, similar to that of the iPhone. But users report that since getting their hands on the new MacBooks, certain applications haven’t interacted well with the notched display; rather than considering the notch in their formatting, the applications act as though it isn’t there, thus losing a small chunk of their screen real estate. 

The issue has prompted Apple to push out a temporary workaround, according to a new support document. To avoid the notched area entirely, the company has given 2021 MacBook Pro users a way to display an app wholly below the camera area, at the cost of a slim band of screen space. Users can select the app, open its Info window, and check an option called “Scale to fit below built-in camera.” Apple notes that once the option is enabled, every open app that shares the same space will also appear below the camera until the scaled app is quit.
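For developers, the same behavior is reportedly exposed as an app-level compatibility flag in the bundle’s Info.plist. As a rough illustration only — the exact key name below is an assumption based on public reports, not confirmed from Apple’s documentation — you could check whether an app opts in:

```python
# Rough illustration: read an app bundle's Info.plist and report whether it
# opts in to scaling below the camera housing. The key name here is an
# assumption based on public reports; verify it before relying on it.
import plistlib
from pathlib import Path

ASSUMED_KEY = "NSPrefersDisplaySafeAreaCompatibilityMode"

def scales_below_camera(app_bundle: str) -> bool:
    """Return True if the bundle declares the (assumed) compatibility key."""
    info_plist = Path(app_bundle) / "Contents" / "Info.plist"
    with info_plist.open("rb") as fh:
        info = plistlib.load(fh)
    return bool(info.get(ASSUMED_KEY, False))

if __name__ == "__main__":
    print(scales_below_camera("/Applications/Safari.app"))
```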

(Photo: Apple)

The workaround is a band-aid fix intended only to be used until more apps interact with the camera notch properly. Prior to the fix, users complained that toolbars were losing their middle area to the notch, or that cursors would awkwardly jolt around the notch as if they didn’t know what to do with it. Some toolbar options have been accessible under the notch, while others haven’t. Overall, the UI involved in one of Apple’s shiny new MacBook Pro features has been disappointing, to say the least.

Apple’s temporary solution isn’t the most attractive one either, given that many new MacBook Pro owners were drawn to the device for its larger screen. With thicker bezels like those of the old MacBook Pros, it’s harder to show off that you have the latest model at your local coffee shop or in the office. In that way, the notch is a status symbol, something you want to notice and have others notice—just not in the way users have experienced so far.




Project Pele: Why the DoD is Betting on Tiny Nuclear Reactors to Solve Its Power Woes

In 2019, the government signed an order mandating that we develop an itty bitty nuclear reactor by 2027. In compliance with that order, the US Air Force is launching a “microreactor” pilot project at Eielson AFB in Alaska.

Per usual, the Air Force is playing its cards pretty close to the chest. As of October 27, the Office of Energy Assurance (OEA) hasn’t even announced whether it has chosen a specific reactor technology. But all evidence suggests that this new installation is part of an energy-resilience effort known as Project Pele. The goal of Project Pele, according to the Dept. of Defense’s Research and Engineering office, is to “design, build, and demonstrate a prototype mobile nuclear reactor within five years.” Three separate development contracts have been awarded, with the final “mature” design submissions TBA.

Project Pele has two main themes: the reactor has to be 1) small, and 2) safe. What we’ve learned from Chernobyl and Fukushima is that losing control of a reactor’s cooling can have terrible consequences; at Fukushima in particular, power failure to the cooling system is what allowed the fuel to become hot enough to melt down. Failure is simply unacceptable. With nuclear power, we also have to consider decay heat and spent fuel disposal. Inability to dispose of hazardous byproducts counts as being unsafe. Even worse, the same stuff we use to make the power can be used to make weapons. But the new Generation IV reactor designs are meant to address these problems.

Without getting all breathless, I want to talk about one of the three designs likely being put forth. One of the commercial contractors chosen to submit a design is a domestic outfit called X-energy, whose higher-ups come from NASA and the US Department of Energy. Its CEO, Clay Sell, previously served as Deputy Secretary of Energy, and founder Kam Ghaffarian operated a NASA service contractor that supported the former Mission Operations Data Systems at Goddard. The X-energy model is a Gen IV high-temperature gas-cooled pebble bed reactor. It uses TRISO fuel pellets or “pebbles” (TRISO stands for TRi-structural ISOtropic particle fuel) loaded into a column that’s then flooded with a heavy, nonreactive gas. And the whole thing is absolutely tiny: X-energy’s website describes its reactors not as building sites, but as modular products, shippable using existing road and rail.

The pebble-bed model used by X-energy is clearly meant to address many of the known failure points of nuclear power production. Whether it actually delivers on that promise remains to be seen, because this is all still in the planning stages, but the design principles are there. First and worst is meltdown, which X-energy is mitigating via the composition of the fuel itself. The TRISO pebbles are made of granules of uranium oxycarbide the size of poppyseeds, layered with pyrolytic graphite and embedded within a silicon carbide firebreak. Each finished pebble is about the size of a cue ball.

Silicon carbide is what NASA uses in the heat shielding for numerous spacecraft. It’s tough stuff, very strong under pressure, and very difficult to melt. Carbides aren’t melted and cast like regular metals, because their melting points exceed those of most metals. Instead, uranium oxycarbide is created using spark plasma sintering. TRISO pebbles are also passively governed by a negative-feedback mechanism that starves the fuel of neutrons as the temperature rises, independent of any active or mechanical control. Higher temperatures mean falling reaction power, enforced by the nature of the material itself. It’s hard to have a meltdown if your fuel just… won’t melt.
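In textbook terms — this framing is general reactor physics, not something X-energy has published specifically for Pele — that passive behavior is a negative fuel-temperature coefficient of reactivity:

\[ \rho(T) \approx \rho_0 + \alpha_F\,(T - T_0), \qquad \alpha_F < 0 \]

Reactivity \(\rho\) drops as the fuel temperature \(T\) climbs above its design point \(T_0\), so the chain reaction throttles itself back with no pumps, valves, or control logic involved.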

Explosions also present their own set of dangers, including particulate from burning fissile material or graphite shielding. In this design, the reaction is held at temperatures far above the annealing point of graphite. This prevents stray potential energy from neutron bombardment getting “stuck” in the graphite’s crystal lattice and eventually escaping in an uncontrolled burst, which is what happened in the Windscale fire. Pyrolytic carbon can burn in air if enough water is present to catalyze the reaction, but this design has no water-cooling loop — which also rules out a steam explosion.

The use of uranium oxycarbide instead of uranium oxide or carbide is intended to reduce the oxygen stoichiometry; carbides are strong under pressure but not under expansion, so the oxycarbide should produce less gas under decomposition. That means that even if one of the carbide pebbles ruptures, smothered in the heavier-than-air gas, it won’t catch fire. The coolant never leaves the gas phase. The design relies on simply placing a critical mass of fissile material inside a gas-cooled reaction vessel, where it will go critical on its own. It’s a bunch of angry jawbreakers sitting in the bottom of a tank, irritating one another into producing energy. Instead of shutting down to replace fuel rods, a pebble-bed reactor periodically collects a pebble from the bottom of the container by way of gravity, tests it, and recycles it to the top of the column.

Look at it. It’s the worst Gobstopper.

Once fully operational, the reactor will produce between one and five megawatts. That’s quite small for any power plant, and even more so for a nuclear plant — nuclear plants are often rated in the hundreds of megawatts or even the gigawatt range. At five megawatts it still barely clears a third of the Eielson base’s gross energy budget. But the micro-reactor isn’t being installed so that it can handle the base’s power consumption. This is a proof of concept, for both a reactor design that fails toward safety, and a portable energy source that doesn’t require a constant external fuel supply.

One serious weak spot this reactor could address is the way the armed forces get power in the field. For example, in Iraq and Afghanistan, the military used fuel convoys to truck in diesel to their installations, which ran on diesel generators. But generators are loud, dirty, expensive, and prone to breakdowns. They are also a hazard to human health: fuel-burning generators produce dangerous fumes and super-fine particulate. Furthermore, the convoys themselves were low-hanging fruit for insurgent attacks. All of this requires maintenance and lots of security. Much of the reason Eielson was chosen over any other site comes down to its reliance on fossil fuels that have to be transported in, like coal and diesel. The armed forces have a direct strategic interest in weaning their operations off petroleum fuels, to the extent they can.

What benefits the military, though, often ends up also improving civilian lives. Eielson AFB is only about a hundred miles south of the Arctic Circle. During the heating season, the base can burn 800 tons of coal every day. Like much of Alaska, it is beholden to energy supply lines prone to failure exactly when they’re most needed. Most of the state uses coal or diesel to provide electricity and heating. Much of Alaska is also only accessible by boat or plane. Juneau doesn’t even have a road connecting it to the outside world, because the terrain is so uncooperative. One failure point can easily line up with another. Eielson’s northerly location, along with its inexhaustible need for fuel, make it an excellent sandbox (snowbank?) for field testing the microreactor. Greater Alaska is also keenly interested: According to the Anchorage Daily News, “a cost-effective 1-5 MW power generator that doesn’t require refueling could represent a sea change for rural power in our state, as that range covers the needs of dozens of villages off the road system that currently have some of the most costly power in the state — and which are vulnerable to generator breakdowns in the dead of winter, when the consequences can be life-threatening.”

The issue of waste disposal remains unresolved. Shiny and chrome though these pebbles may be, they still embody about the same radioactivity per kilowatt hour as spent conventional fuel — it’s just spread across a larger volume. While this makes any generated waste hypothetically less awful to handle, there’s more of it, and that complicates the already manifold problems with waste handling and storage.

Final designs are to be chosen in fiscal 2022. From there, the DOD wants a reactor up and running by 2027.




Facebook Teases New High-End ‘Project Cambria’ VR Headset

Facebook’s current Oculus headsets clock in at much lower prices than the headsets of yesteryear, and yet they’re still impressive pieces of hardware. However, the social media giant is working on a headset that isn’t going to compete on price. The company teased the upcoming Project Cambria VR headset at the Connect conference, saying it will be a high-end experience rather than a replacement for the Quest. It won’t launch until 2022, though. 

It’s hard to judge from the teaser, but the new headset does look more svelte than the current Quest 2. A big part of that is apparently thanks to the “pancake” optics. These new lenses work by bouncing light back and forth several times to allow for a more compact form factor. According to Facebook founder and CEO Mark Zuckerberg, the result is a more compact, comfortable headset. The company also announced it was pulling a Google to reorganize under a new parent company called Meta. So technically, Oculus and Facebook are both Meta companies now. 

Project Cambria should also be a much more immersive experience than current VR headsets. According to Zuckerberg, the premium headset will include eye and face tracking, so your virtual avatar will be able to maintain eye contact and change expressions to match your own — it sounds a bit like Memoji on Apple devices. Zuck also hinted at body-tracking, which could let you interact with virtual and augmented reality spaces in a more intuitive way. 

It’s also a safe bet that Project Cambria will have higher resolution displays. The headset will include high-resolution cameras that can pass full-color video to its display. This opens the door to augmented reality applications and virtual workspaces that can be overlaid on your boring old desk.

We haven’t seen the device in the flesh yet, but the silhouette above sure does look like a recently leaked “Oculus Pro.” That headset is alleged to have body-tracking capabilities, augmented reality features, and a controller dock. However, the leaked videos hardly constitute confirmation. Facebook says the Project Cambria headset won’t launch until next year, and a lot can change in the meantime. 

Zuckerberg didn’t talk about pricing during his keynote except to say it would be more than the Quest 2, which starts at $299. That’s a good value for what is arguably the most capable and well-supported VR headset on the market. But how much can Facebook push the price before the burgeoning interest in VR peters out? Probably not as much as Zuck would like.




Thursday, 28 October 2021

NASA Wants Your Help Improving Perseverance Rover’s AI

NASA’s Perseverance rover is the most advanced machine ever sent to the red planet, with a boatload of cameras and a refined design that should stand the test of time. Still, it’s just a robot, and sometimes human intuition can help a robot smarten up. NASA is calling on any interested humans to contribute to the machine learning algorithms that help Perseverance get around. All you need to do is look at some images and label geological features. That’s something most of us can do intuitively, but it’s hard for a machine.

The project is known as AI4Mars, and it’s a continuation of a project started last year using images from Curiosity. That particular rover arrived on Mars in 2012 and has been making history ever since. NASA used Curiosity as the starting point when designing Perseverance. The new rover has 23 cameras, which capture a ton of visual data from Mars, but the robot has to rely on human operators to interpret most of those images. The rover has enhanced AI to help it avoid obstacles, and it will get even better if you chip in. 

The AI4Mars site lets you choose between Opportunity, Curiosity, and the new Perseverance images. After selecting the kind of images you want to scope out, the site will provide you with several different marker types and explanations of what each one is. For example, the NavCam asks you to ID sand, consolidated soil (where the wheels will get good traction), bedrock, and big rocks. There are examples of all these formations, so it’s a snap to get started. 

With all this labeled data, NASA will be able to better train neural networks to recognize terrain on Mars. Eventually, a rover might be able to trundle around and collect samples without waiting for mission control to precisely plan each and every movement. It’ll also help to identify the most important geological features, saving humans from blindly combing through gigabytes of image data. 

The outcome of Curiosity’s AI4Mars project is an algorithm called SPOC (Soil Property and Object Classification). It’s still in active development, but NASA reports that it can already identify geological features correctly about 98 percent of the time. The labeled images from Perseverance will further improve SPOC, and the new labels cover more subtle details, including float rocks (“islands” of rocks), nodule-like pebbles, and the apparent texture of bedrock. In some images, almost all of the objects will already be labeled, but others could be comparatively sparse.
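NASA hasn’t detailed SPOC’s internals here, but conceptually the crowdsourced labels feed a supervised image classifier. Below is a minimal sketch of that idea in Python; the class names come from the article, while the folder layout, ResNet backbone, and training settings are illustrative assumptions rather than NASA’s actual pipeline.

```python
# Illustrative sketch only: train a small terrain classifier on labeled rover
# images. The four class names come from the article; the folder layout,
# backbone, and hyperparameters are assumptions, not NASA's pipeline.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

LABELS = ["sand", "consolidated_soil", "bedrock", "big_rocks"]  # per the article

def train(data_dir: str, epochs: int = 3) -> nn.Module:
    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    # Expects data_dir/<label>/*.jpg -- ImageFolder infers classes from folder names.
    ds = datasets.ImageFolder(data_dir, transform=tfm)
    loader = torch.utils.data.DataLoader(ds, batch_size=32, shuffle=True)

    model = models.resnet18()  # randomly initialized small backbone for the sketch
    model.fc = nn.Linear(model.fc.in_features, len(ds.classes))
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for _ in range(epochs):
        for images, targets in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), targets)
            loss.backward()
            opt.step()
    return model
```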

The Curiosity AI project resulted in about half a million labeled images. The team would be happy with 20,000 for Perseverance, but they’ll probably get much more.




Microsoft Reportedly Working on Windows 11 SE, New Low-Cost Surface

One of Microsoft’s currently available Surface models. (Photo: Zarif Ali/Unsplash)
Microsoft may soon take on the classroom. In an effort to compete with Google’s Chromebooks, the company is working on building a low-cost Surface laptop and a scaled-back version of Windows 11, according to Windows Central. 

The more affordable Surface will likely have an 11.6-inch screen with a 1366 x 768 resolution. Sources for Windows Central claim the device—codenamed Tenjin during development—will possess a fully plastic exterior, ideal for changing hands every day in a classroom. It will also contain an Intel Celeron N4120 (commonly used in budget- and classroom-friendly laptops) and offer up to 8GB of RAM. Other specs include a full-sized keyboard and trackpad, a USB-A port, a USB-C port, a headphone jack, and a barrel-style charging port.

As for the device’s operating system, sources say Microsoft is in the process of creating a new edition of Windows 11 titled Windows 11 SE. This OS will likely offer fewer customization options and contain less bloatware than Windows 11, as it needs to be light enough to function smoothly on modest hardware. Though the general characteristics of Windows 11 will still be there, students may not be able to download apps outside of those offered on the Windows Store, including alternative web browsers. Microsoft has previously offered simplified versions of its operating systems using the “S” or “SE” moniker, such as with Windows 10 S, the last slimmed-down Windows OS to have entered the classroom.

Smaller laptops with scaled-back operating systems have made their way into the classroom in recent years. (Photo: Jeswin Thomas/Unsplash)

While there are a handful of durable laptops geared toward education on the market, Microsoft’s most obvious competitor would be the Chromebook. Available from a wide range of manufacturers such as Acer, HP, Lenovo, and Samsung, Chromebooks exclusively run Google’s Chrome OS and are intended to be affordable and easy to use. Laptops have been growing in popularity within the confines of K-12 classrooms for a few years now, but with the rise in hybrid learning, dependence on these devices has spiked. Even without the unique routines necessitated by Covid-19, more and more edtech companies are producing virtual learning and tutoring tools, making laptops and tablets all the more practical at school.  

As of now, the low-cost Surface and Windows 11 SE are just rumors (albeit pretty solid ones). Neither the device nor its OS has a public release date.




EU Opens Full Inquiry Into Nvidia’s $54 Billion ARM Acquisition

The European Union will conduct a full investigation into Nvidia’s bid to purchase ARM after concessions that the company offered earlier this month failed to assuage regulators’ concerns. While this isn’t the decision Nvidia would prefer, it’s also not a huge surprise.

Two weeks ago, news leaked that Nvidia had made unspecified early concessions as part of its offer, and that regulators had extended the deadline for the ruling until October 27 as part of their evaluation of Nvidia’s proposal. At the time, Bloomberg noted that the review was likely to run another four months and that the additional time would give the EU time to hammer out a more complex set of requirements. Today’s announcement confirms that timeline.

“Our analysis shows that the acquisition of ARM by Nvidia could lead to restricted or degraded access to ARM’s IP, with distortive effects in many markets where semiconductors are used,” said Margrethe Vestager, Commission Executive Vice-President for Competition Policy.

“Our investigation aims to ensure that companies active in Europe continue having effective access to the technology that is necessary to produce state-of-the-art semiconductor products at competitive prices.”

According to EENewsEurope, the EU is concerned that Nvidia’s purchase of ARM could allow it to degrade the market for semiconductor IP by restricting the terms under which it licenses that IP. Nvidia has pledged to maintain the ecosystem that ARM has fostered over the past few decades during its rise to power, but the EU is also concerned that ARM licensees might be less willing to share data with Nvidia, or that Nvidia might refocus ARM R&D towards business segments that are more profitable for itself and less useful to its licensees.

Attempting to parse whether or not Nvidia has such plans could be genuinely difficult given how quickly the silicon market is changing. More workloads are likely to shift towards AI, and it would be surprising if the CPUs we buy 10 years from now don’t have some new functionality (or practical use case) that is currently limited to the cutting edge or doesn’t exist yet. The rise of the IoT, edge computing, and AI has pushed silicon designs in different directions.

Silicon shipments keep growing, despite the near-term difficulty of increasing wafer shipments.

The current silicon shortage may also be adding to concerns around this issue. While ARM is already owned by a non-European company, worries about the EU’s ability to influence the chip foundry business and the paucity of EU-owned top silicon companies have raised more questions of sovereignty and national security than we might have otherwise seen. Silicon’s importance in the 21st century has been compared to oil’s prominence in the 20th, and the EU has been concerned about its own inability to secure foundry manufacturing priority during the COVID-19 pandemic. TSMC and Intel have gone around several times on this one; Intel is planning a major set of investments in European production facilities, while TSMC continues to insist that these facilities are unnecessary and redundant.

There’s a lot of uncertainty roiling the semiconductor industry, but a titanic amount of long-term earnings are in play as well. Nvidia seems to think (as of this writing) that it will be able to resolve the EU regulators’ concerns in the long term. There have also been rumors that RISC-V could be a major beneficiary if Nvidia buys ARM as companies bring up new designs — but the idea that companies are hedging bets with RISC-V isn’t the same as predicting it’ll emerge as a major competitor in the near term. While the ISA continues to see strong engagement and rapid growth, we aren’t quite at the point where top-end RISC-V cores from any company are ready to challenge ARM or x86 in premium devices.




New DMCA Exemptions Guarantee a De Facto Right to Repair

The S20 Ultra's giant camera assembly, courtesy of iFixit.

It stands to reason that if you own something, you should be able to tear it apart, tinker with it, and (hopefully) repair it. However, the great importance ascribed to copyright in US law makes that difficult. New copyright exemptions that went into effect today could help promote the right-to-repair movement, but hardware manufacturers still don’t have to make it easy on you.

These new exemptions are part of the regular rule-making process at the US Copyright Office. Every three years, the office seeks recommendations for exemptions to the Digital Millennium Copyright Act (DMCA). This legislation is what makes it illegal to circumvent any security measure that controls access to a copyrighted work. Over the years, the so-called Section 1201 exemptions have made it legal to carrier-unlock your cell phone or back up abandoned video games for archival purposes. It’s up to the Librarian of Congress to sign off on these exemptions every three years, and Gizmodo reports the latest round is more expansive than usual.

The changes take effect today (10/28), and they include recommendations from the Electronic Frontier Foundation, iFixit, and others. The gist is that the copyright office is no longer picking and choosing which devices are eligible for a 1201 exemption and which aren’t. As of today, you are allowed to hack around with any consumer product that is controlled by software. That covers a huge swath of devices, including phones, game consoles, and laptops.

Farmers hoping for help in their battle with John Deere were disappointed—the new rules focus on consumer hardware.

It’s not a complete free-for-all, though. There are some unexpected limitations, and some that make sense given the context. Most importantly, you can only take advantage of the exemption if your goal is “diagnosis, maintenance, and repair.” If you’re looking to modify a product on a whim, these changes won’t help you avoid legal consequences. There’s also a very narrow allowance for game consoles. Consumers are only allowed to circumvent copyright on consoles to repair the optical drive, which not all machines even have anymore. After the fix, you are also required to restore copyright protection features. Good luck enforcing that one, copyright cops. 

Farmers who were hoping for help in their battle with John Deere were disappointed. After spending hundreds of thousands of dollars on equipment, farmers feel they should be able to repair it without going through John Deere. However, the company refuses to give them access to the firmware, parts, and diagnostic tools they would need, and the 1201 exemptions don’t open things up. The new exemptions focus entirely on consumer devices. We’re also left with the age-old issue of manufacturer control. While it’s no longer illegal to undertake your own repairs on many devices, no one can make the original manufacturer sell you official parts. Manufacturers can also design products in such a way that bypassing the protections is even more onerous. Still, it’s a step in the right direction.




Microsoft Is Pushing the PC Health Check App to Windows 10 Machines

Microsoft is no stranger to frustrating OS updates after years of rolling out unblockable OTAs for Windows 10. Now there’s a new update rolling out, and this one adds a new app to your machine. Don’t want it? Tough, you’re getting the Microsoft PC Health Check app regardless of whether or not you’ve expressed interest in Windows 11. 

Ostensibly, the Health Check app tells you about the status of your PC, links to important settings, and helps you plan your upgrade to Windows 11. It’s that last item that probably encouraged Microsoft to push the app to everyone. The Windows 11 functionality is listed right at the top. One click, and you can find out if your PC is compatible. 

Making sure your PC is compatible is more important than it was for past Windows operating systems, as Microsoft has narrowed the OS’s hardware support with mandatory features like TPM 2.0 and a modern CPU. However, checking these things only takes a few minutes, and then you’re left with an app you’ll probably never open again.

The PC Health Check app examining a system that doesn’t have TPM 2.0, and thus can’t run Windows 11.

The PC Health Check app will arrive on Windows 10 systems as part of the KB5005463 update. Once it’s installed, you can’t even roll back the change. You can uninstall the app, but it will come back. You can, however, block the installation as long as you don’t mind mucking around in the registry. Head into the registry editor, navigate to [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\PCHC], and find “PreviousUninstall.” Change the value to “1,” and Windows won’t install the app again the next time you check for updates. Alternatively, you can ignore the app. It won’t run in the background or collect your data — but this is more about the principle.
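For anyone who prefers to script the tweak, here’s a minimal sketch using Python’s standard winreg module. The key path and value come straight from the description above; run it from an elevated (administrator) prompt, and be careful when editing HKLM.

```python
# Sketch of the registry tweak described above, using Python's standard
# winreg module. Requires an elevated (administrator) prompt on Windows;
# editing HKEY_LOCAL_MACHINE can affect system behavior, so proceed carefully.
import winreg

KEY_PATH = r"SOFTWARE\Microsoft\PCHC"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    # Setting PreviousUninstall to 1 tells Windows Update not to
    # reinstall the PC Health Check app, per the article.
    winreg.SetValueEx(key, "PreviousUninstall", 0, winreg.REG_DWORD, 1)

print("PreviousUninstall set to 1 under HKLM\\" + KEY_PATH)
```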

This is probably just one of many ways Microsoft is going to try and nudge people toward upgrading to Windows 11. The software launched earlier this month on new PCs, and as a free upgrade for some Windows 10 users. Existing machines will be upgraded slowly, so most people haven’t gotten a prompt to upgrade yet. However, Microsoft says this process is intentionally cautious. PCs will be upgraded in batches while Microsoft monitors any problems that arise. If you wait for Microsoft’s prompt, you might not have Windows 11 until next year. If you’re itching to upgrade though, you can manually install the update on a compatible system by heading to Microsoft’s site and getting the installation assistant.




Wednesday, 27 October 2021

Meet Starlab, a Private Space Station That Could Fly by 2027

(Image: Nanoracks)
With the rise in commercial space exploration comes a new orbital destination: a private space station called Starlab. It’s being developed via a partnership between Voyager Space, Nanoracks, and Lockheed Martin, and the companies announced late last week that they’re planning on having Starlab operational by 2027. Starlab will be the first free-flying commercial space station.

The purpose behind Starlab is twofold. To start, it would serve as a low Earth orbit (LEO) tourist destination, the next step in the development of a rapidly expanding industry that seeks to commercialize space. This facet of Starlab’s endgame depends on an inflatable 340-cubic-meter habitat developed by Lockheed Martin. As Starlab’s ideation has only just begun, it’s currently unclear what it will look like for space tourists to pay a visit to the station (or how much it will cost).

Starlab’s second purpose is to eventually replace the International Space Station, given that the ISS is set to retire by 2030 due to its $4 billion annual operating cost. The core of the outpost, called the George Washington Carver Science Park, will feature four operational departments: a biology lab, a plant habitation lab, a physical science and materials research lab, and an open workbench area, where up to four astronauts will be able to conduct research at a time. Though Starlab won’t be nearly as roomy as the ISS, NASA’s director of commercial spaceflight, Phil McAlister, says researchers “will not need anything near as big and as capable” as the ISS moving forward. Nanoracks’ website claims Starlab will incur significantly lower construction and operational costs than its predecessor, offering benefits to both taxpayers and commercial partners.

Other elements of Starlab’s construction will include a metallic docking node, a 60kW power and propulsion element, and a robotic arm intended to service cargo and payloads. It will also have a payload capacity of 22 cubic meters, equivalent to that of the ISS.

“We’re excited to be part of such an innovative and capable team—one that allows each company to leverage their core strengths,” said Lisa Callahan, Vice President and General Manager of Commercial Civil Space at Lockheed Martin in the press release. “Lockheed Martin’s extensive experience in building complex spacecraft and systems, coupled with Nanoracks’ commercial business innovation and Voyager’s financial expertise allows our team to create a customer-focused space station that will fuel our future vision. We have invested significantly in habitat technology which enables us to propose a cost-effective, mission-driven spacecraft design for Starlab.”




Biden Taps Jessica Rosenworcel as FCC Chair, Opening Path to Reinstate Net Neutrality

Following Joe Biden’s election as president, many hoped we would see a quick return to the era of enforceable net neutrality. However, that can only happen once there are changes at the Federal Communications Commission (FCC), and the White House is just now beginning that process. Biden has nominated acting FCC Chair Jessica Rosenworcel to formally take over that role. He also nominated activist Gigi Sohn to fill the open seat. Once confirmed, the new commissioner will give the FCC what it needs to tackle net neutrality again.  

The tumult at the FCC stretches back to the Obama presidency. Chairman Tom Wheeler eventually got behind full net neutrality after initially expressing an interest in setting up internet “fast lanes.” The eventual rules prevented ISPs from favoring traffic, although there were healthy carve-outs for network management. That wasn’t good enough for Ajit Pai, who became Chairman under President Trump. Pai repealed the net neutrality rules, prompting possibly the first bomb threat at a meeting of the FCC.

Pai didn’t stick around after Biden’s election, even though he had time left in his term. However, even with Rosenworcel in the big chair on a provisional basis, the commission was still deadlocked. With two Democratic votes and two Republican votes, it has been impossible to advance Biden’s telecom agenda. 


In order to make progress, the Senate needs to confirm both nominations by the end of the year. At that point, the FCC can act on net neutrality and a raft of other changes. Sohn has even expressed interest in clamping down on ISPs more aggressively with net neutrality rules. If the Senate does take up the nominations soon, you’re going to start hearing more back-and-forth about the tenets of net neutrality. If things follow the usual pattern, Democrats will warn against the possible excesses of ISPs, and Republicans will claim this is the government trying to stifle innovation. Sunrise, sunset. 

Regardless of whether net neutrality returns this year or next, it probably won’t herald any major changes. The constant back-and-forth of regulation has resulted in great uncertainty for ISPs that want to develop profitable non-neutral services. Congress could crystallize the country’s internet regulations through legislation, but Democrats probably don’t have the votes to pass anything that Biden would sign at this time, so we might be in for more FCC drama in the coming years.




Intel Releases Specs, Performance Data on Upcoming Alder Lake Core i9-12900K

Intel’s Alder Lake line of CPUs will hit store shelves on November 4, and the chip giant is sharing some additional data about specs and pricing ahead of its debut next week. We’ve written several deep dives on Alder Lake already this year, but to recap:

Alder Lake is Intel’s next-generation desktop chip. It’s built on a refined version of Intel’s 10nm node (now rebranded as Intel 7), and it’s the first hybrid x86 desktop architecture. It’s called a ‘hybrid’ architecture because it includes a mixture of larger and smaller cores. Colloquially these are known as “big” and “little” cores, though Intel’s claimed performance targets for the Efficiency cores imply that “big” and “bigger” might be a better way to describe the relationship between the two core clusters.

Features and Performance Claims

Intel is promising a 1.19x overall uplift for Alder Lake when IPC and frequency are taken into account. The overall IPC uplift at identical frequency is ~1.15x. In gaming, Intel showed a spread of results ranging from parity with chips like the Ryzen 9 5950X to results showing itself 20-30 percent ahead. This isn’t particularly surprising; companies typically present a range of game benchmarks that collectively show varying relationships between themselves and their competitors.
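As a back-of-the-envelope check (our arithmetic, not Intel’s): if the overall uplift is treated as the product of the IPC gain and a frequency gain, the implied contribution from frequency alone is only about 1.19 / 1.15 ≈ 1.035, or roughly 3.5 percent. Most of the claimed improvement, in other words, comes from the architecture rather than from clocks.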

Many of Intel’s performance claims, however, are significantly larger than 1.19x.

Alder Lake’s own leaks have contributed to user perceptions of this oddity. In some cases, leaked benchmarks have implied that Alder Lake could be much faster than the 1.19x uplift Intel has told us to expect. In other cases, we’ve seen leaks imply that AMD’s Ryzen 9 5950X can hold off the newcomer. When we asked Intel about the variation in some of its claimed improvements, the company told us that it’s measuring the execution speed of multiple workloads simultaneously in various content creation applications.

While most reviewers tend to run one application at a time, running multiple apps simultaneously isn’t necessarily unusual. A content creation workflow with a CPU-heavy and a GPU-heavy component can sometimes run both simultaneously, improving overall throughput.

That’s likely part of why Intel is claiming such huge content creation gains here. Alder Lake’s high-efficiency cores give it an extra eight threads to work with, in applications where boosting from 16 to 24 threads likely comes with its own rewards.

According to Intel, peak single-thread performance for 12th Gen Alder Lake is 28 percent above Comet Lake S, and ~1.14x above Rocket Lake S. Intel’s E-core performance claims are also significant, however, since the company is predicting that its “efficiency” cores are a full match for a Comet Lake S core at the same frequency. As we said up top — this is less “big/little” and more “big/bigger.”

This is a very interesting slide. According to Intel, the Core i9-12900K is capable of 1.5x the multi-threaded performance at 241W that the Core i9-11900K delivers at 250W. The slide is interesting because peak power consumption on the Core i9-11900K is well above 250W; the chip has been measured at nearly 300W peak power. Rocket Lake is a power-hungry architecture, so holding it to 250W improves the comparison for Alder Lake.

There’s another very interesting admission in the slide above. According to Intel, Alder Lake at 125W offers 30 percent faster MT performance than Rocket Lake at 250W. Even if we allow for some corporate fudging, that’s much better power efficiency — but we also know that Alder Lake is just 50 percent faster than Rocket Lake at 241W.

Intel’s slide indicates it needs to increase Alder Lake’s power consumption by 1.92x (125W -> 241W) in order to increase performance by 1.15x (from 1.3x Rocket Lake performance to 1.5x Rocket Lake performance). In aggregate, this graph states that Intel takes a 3.7x power penalty in order to improve performance 1.5x over baseline.
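For readers who want to check those numbers, the arithmetic works out as follows. Note that the ~65W baseline used below is inferred from the 3.7x figure (241W / 3.7 ≈ 65W) and should be treated as an assumption about where the parity point sits on Intel’s slide.

```python
# Back-of-the-envelope check of the power/performance ratios quoted above.
# The 125W and 241W points come from Intel's slide (per the article); the
# ~65W parity point is inferred from the 3.7x figure and is an assumption.
power_increase = 241 / 125     # ~1.93x more power...
perf_increase = 1.5 / 1.3      # ...buys ~1.15x more performance
penalty_vs_parity = 241 / 65   # ~3.7x the power of the (assumed) parity point

print(f"125W -> 241W power increase: {power_increase:.2f}x")
print(f"1.3x -> 1.5x perf increase:  {perf_increase:.2f}x")
print(f"Power penalty vs. parity:    {penalty_vs_parity:.1f}x")
```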

It’s not clear how AMD would perform if we charted the Ryzen 5000 series on a similar graph, but x86 CPUs don’t generally compare well to chips like Apple’s M1 Pro and M1 Max right now when it comes to performance per watt. A major question for Alder Lake is whether it can change that impression and, if so, by how much.

Oh, one other thing on power consumption. After taking fire for the vast gap between PL1 and PL2, Intel has responded to the criticism by redefining PL2 to equal PL1 on unlocked processors.

Intel is claiming that Alder Lake will also offer enhanced thermal performance courtesy of a thinner die and thinner interface material. The company announced this will be offset with a thicker heat spreader, but did not offer any additional commentary on why it had thickened the heat spreader in the first place.

Pricing

Here’s the upcoming Alder Lake product stack, with pricing included:

At the top of the stack, the 16-core / 24-thread Core i9-12900K will be clocked at 3.2GHz – 5.2GHz (P-core) and 2.4GHz – 3.9GHz (E-core), with 16 lanes of PCIe 5.0 (plus four lanes of PCIe 4.0) and support for DDR5-4800, all for $589. Chips like the Core i5-12600K (6P + 4E) will offer 10 cores and 16 threads for $289. The i5-12600K is priced to move against the Ryzen 5 5600X, which is currently selling for around $299.

Of course, all of this assumes the Core i5-12600K won’t hit market, skyrocket in price, and promptly vanish like water in the Atacama. No bets on that score.

We’ll have more details on Alder Lake — the kind that come with performance figures attached — in the not-too-distant future. For now, it’s enough to note that Intel is claiming some significant performance improvements and power efficiency gains. Leaked benchmarks point to Alder Lake being a contender, but power consumption against AMD is an unknown at this time.

And finally, Intel is claiming that Alder Lake will be a great overclocker. We’ll see on that score. It’s been years since either Intel or AMD shipped a CPU we considered a “great overclocker,” and headroom has been dropping on top-end chips for years. Either way, though, the chip’s hybrid architecture is the beginning of a new (hopefully fruitful) era in x86 design.




Verizon Partners with Amazon to Deploy Satellite Internet

Verizon has been pushing its mobile network as a home broadband option ever since it backed away from new fiber deployments. However, it’s been slow going to actually deploy wireless broadband in places where people are likely to want it. Now, Verizon has a partner in this endeavor: it will use Amazon’s upcoming Kuiper satellite internet for backhaul, boosting coverage in rural areas where bandwidth is in short supply.

Verizon started its 5G deployment with millimeter wave (mmWave), which is very fast but has a short range. It toyed with 5G home internet in these frequency ranges, and the carrier has since added mid-band frequencies, which it calls “Nationwide 5G.” Yes, branding in the era of 5G is confusing. These signals can reach farther, but they aren’t as fast as mmWave. This is only a concern for the tower connection side — to deliver internet access to remote areas wirelessly, Verizon still needs a data connection to the outside world, which is the backhaul Amazon will deliver. 

To start, Verizon and Amazon will cooperate to deploy terrestrial antennas for backhaul that are based on Project Kuiper technology. Eventually, Amazon’s orbiting constellation could slot in to deliver more robust backhaul and direct satellite connectivity. However, this hybrid approach isn’t going to happen any time soon. Amazon has only just scheduled its first batch of Kuiper launches on nine ULA Atlas V rockets, the opening installment of a constellation of more than 3,200 satellites. It aims to have half the satellites in space by 2026 and the remainder by summer 2029.

Amazon testing one of its Kuiper antennas, which is the technology that will augment Verizon’s wireless broadband offerings.

Of course, there’s an elephant in the room: SpaceX. Elon Musk’s spaceflight company has been moving quickly to deploy the Starlink satellite internet platform. Its control of the advanced Falcon 9 rocket gives SpaceX a leg up when it comes to deploying its constellation, which already includes almost 2,000 nodes. It has clearance from regulators for roughly 12,000 satellites, and tens of thousands beyond that could be approved soon.

SpaceX is already offering broadband satellite service to consumers, with speeds competitive with many wired broadband options — and it puts traditional satellite internet to shame. It’s possible the Verizon-Amazon hybrid solution could work better. Adding terrestrial 5G towers to a satellite-based system could help limit latency and expand coverage to more devices, but Starlink might have the market cornered by 2029.




Putting the New DJI Action 2 Camera Through its Paces

For the last decade the action camera market has been dominated by a traditional “point-and-shoot” form factor, with only slight variations. DJI’s new 4K Action 2 ($399-$519) presents us with a radical new approach. Its system is composed of a number of modules and uses strong magnets as its primary means of attachment. Each module is about the size of an ice cube.

DJI Action 2 By the Numbers

The main camera module is a 4K shooter capable of 120 fps. It uses a 1/1.7-inch sensor with a lens providing a 155-degree field of view. Along with the main sensor there is also a color temperature sensor for calculating white balance. The main module has a 1.76-inch OLED touchscreen covering the back, which you can use with a variety of gestures to control the camera. It weighs in at 56 grams, and leaves 21.5GB of its built-in storage open for videos, which is important since it doesn’t include a card slot.

DJI Action 2 with second module attached

The main module is waterproof down to 10 meters, and has a set of contacts, a strong magnet, and recesses for clipping on accessories. A waterproof housing with a traditional action mount is also available. Available and planned accessories include a clip that adds a 1/4-inch tripod mount, a clip that adds an action camera mount, a small tripod, and a small tripod that includes a detachable remote button. There is also a lanyard mount, a floating handle, a stereo microphone, and a headband mount.

Battery life is rated at 70 minutes for the camera module, and up to 160 minutes when used with the front-facing touchscreen, or 180 minutes when used with an add-on power module.

DJI Action 2 Shooting Modes and Features

Along with DJI’s newest Electronic Image Stabilization (EIS), RockSteady 2.0, there is also a feature they call HorizonSteady that helps keep your horizon consistent. As expected with a DJI camera, it also features an extensive set of timelapse, hyperlapse, and quick clip options. Most of these can be controlled directly from the touchscreen, but for full flexibility it is easier to use the Mimo mobile app. Here are the recording options:

  • Slow Motion: Record video at 4x (4K/120fps) and 8x (1080p/240fps).
  • Hyperlapse and Timelapse: During Hyperlapse you can switch recording speeds.
  • QuickClip: Take short 10, 15, or 30-second videos.
  • Livestream: Broadcast a livestream with an output of up to 1080p/30fps.
  • UVC: Use as a USB webcam.

For photos and videos, you need to switch to “Pro” mode to control exposure, shutter speed, and ISO. In most modes you also get a choice of the Standard FOV, which does some de-warping and loses a bit of the total image, or a Wide mode that gives you everything.

The review sample of the Action 2 came with a slick small tripod with a detachable remote. One takeaway from shooting with the stick at full extension is that while DJI’s RockSteady image stabilization does an excellent job of frame-to-frame stability, it’s — not surprisingly — not the same as having a true mechanical gimbal when it comes to freehand panning.

Timelapse Results

DJI Action 2 on extended tripod with remote

The “selfie-stick” with remote made it easy to set up the camera in a remote location and start and stop it. When using it with the Mimo app (if you’re close enough to get good connectivity), you can preview what you’ll get. Otherwise, in a bright setting it is difficult to discern your precise framing on the small screen.

DJI’s New Module System Takes Practice

When I first got the Action 2 to review I was incredibly excited by the possibilities afforded by such a small camera. However, as I shot with it I found myself having to work out how to handle, use, and charge three separate modules (the main camera, the front touchscreen, and the battery). Plugging into a computer to unload a microSD card, for example, requires confirmation on the touchscreen module, not the main screen. And each module has a separately charged battery.

I suspect most users will develop a couple of basic styles for using the Action 2 and stick with them. One quirk I found slightly annoying is that every time I wanted to reconnect to the camera from my phone it took a while and then required several clicks. I’d like to see something fast and simple.

The First-person Lanyard is a Cool Feature

DJI has taken advantage of the magnetic attachment system to create an accessory that allows you to put a lanyard under your shirt and ‘snap’ the camera onto the outside. This makes for a quick and sometimes-interesting perspective. The only complication is that it needs another magnetic clip attached to the camera, so that’s one more thing to keep track of. There is also a traditional headband accessory available, but I didn’t get a chance to test it.

Heat is a Real Issue with the Action 2

The biggest issue I had with the Action 2 is heat. Simply charging the camera’s battery made it almost too hot to touch. The same is true with recording 4K video at 60fps and 120fps. At the default temperature cutoff, it would only capture a few minutes at those settings before turning off. There is an option to increase the thermal limit, but of course that makes me a little nervous about what effect it might have on the lifetime of the components.

When I asked DJI about the heat issue, they responded by suggesting I use a lower frame rate or resolution, which is of course realistic, but doesn’t exactly solve the issue. I wouldn’t be surprised to see some clever cooling accessories for it. DJI has some preliminary data on how long before the Action 2 will shut off in each mode, but nothing final enough to publish yet.

Summary, Price and Availability

Clearly DJI is aiming the Action 2 squarely at the Hero 10 family from GoPro. Overall, I’m glad to see the form-factor innovation by DJI, and I think it will help shake up the market a bit. For anyone willing to put in the time required to learn a new system, it provides a lot of unique features, although I’d like to see some progress on the thermal front.

The DJI Action 2 is available for purchase today from DJI and authorized retail partners in several configurations. The DJI Action 2 Dual-Screen Combo retails for $519 USD and includes the DJI Action 2 Camera Unit, Front Touchscreen Module, Magnetic Lanyard, Magnetic Ball-Joint Adapter Mount, and Magnetic Adapter Mount. The DJI Action 2 Power Combo retails for $399 USD and includes DJI Action 2 Camera Unit, Power Module, Magnetic Lanyard, and Magnetic Adapter Mount. All other accessories will be sold separately.

For more information on all the new features, accessories, and capabilities, please visit https://www.dji.com/dji-action-2.

[Video credits: David Cardinal]




Tuesday, 26 October 2021

Wooden Steak Knife Reportedly 3x Sharper Than Steel

(Photo: Bo Chen/UMD)
Scientists have found a way to make wood sturdy and sharp enough to cut through steak. A team at the University of Maryland discovered that removing key polymers from wood allows them to “supercharge” the strength of the material, yielding knives that are nearly three times sharper than steel.

In the latest issue of the materials science journal Matter, the UMD scientists detail their process. First they put the natural wood through a chemical process called delignification, in which the lignin (a polymer that lends rigidity to wood and bark) is removed from the material. This makes the wood flexible and squishy, which may sound backwards at first, but is vital to the next step in the process. The scientists then densify the wood by putting it in a hot press, in which both heat and pressure are applied. The result? Hardened wood, or HW, which is 23 times harder than the starter material. 

The strength found in HW relies on the cellulose packed inside, which makes up nearly half of wood’s natural components. The cellulose offers more structural integrity than certain man-made ceramics and metals, making wood a viable option for construction and cooking. 

The process of turning natural wood into hardened wood. (Image: UMD)

From here, it’s just a matter of turning the HW into a usable product by carving the material and polishing it with mineral oil (an essential agent in making the HW water-resistant and long-lasting). The team at UMD used their fresh-pressed HW to create nails and knives, both of which stood up to their more traditional metal counterparts. As it turns out, HW nails are as strong as steel nails, but come with the added bonus of being rust-resistant. HW dinner knives are reportedly nearly three times sharper than steel dinner knives—and yes, they can be thrown in the dishwasher after a nice steak dinner.

Much of the team’s motivation for creating HW tools appears to be environmental. “Widely used hard materials, e.g. alloys and ceramics, are often nonrenewable and expensive. Their production requires high energy consumption and often leads to negative environmental impacts,” the study reads. Steel, for instance, is particularly susceptible to supply chain issues and must be forged at extremely high temperatures. Finding ways to turn bulk natural wood into HW gives scientists a sustainable, low-cost alternative to current manufacturing methods.

Plus, there’s something to be said for the elegance of a wooden knife on the dining table.




The Apple M1 Pro and M1 Max’s Power Efficiency Should Rattle Intel, AMD

Reviews of Apple’s M1 Pro and M1 Max landed yesterday. While most articles focused on the overall performance of the laptops, we’re more interested in the comparative performance between Apple’s larger M1 CPUs and the x86 CPUs it competes with from Intel and AMD. The competitive data now available suggests the scaled-up M1 remains a potent threat to Intel and AMD.

The problem for the x86 manufacturers isn’t necessarily raw performance. A number of reviews today, including one at our sister site, PCMag, show that there are various 11th Gen Intel laptops that can more-or-less keep pace with the M1 Pro and M1 Max in CPU workloads. There are also workloads like Cinebench R23, which show Intel’s top-end 11th Generation CPUs beating their MacBook Pro counterparts. Between the two, it might seem like Apple has matched — but not exceeded — what x86 machines are capable of. But look a little deeper, and the wheels start coming off that narrative.


According to Anandtech, the M1 Max draws 39.7W at the wall to deliver a Cinebench R23 score of 12,375. The Core i9-11980HK inside the MSI GE76 Raider beats that score, with a 12,830 — but it draws no less than 106.5W at the wall.

Chart and data by Anandtech

Anandtech’s tests show a clear pattern. x86 CPUs can sometimes match the performance of the M1 Max, but they need to draw far more electricity to deliver the same performance. In some floating point tests, the M1 Max offers more performance than any x86 CPU, courtesy of its mammoth on-chip bandwidth. Anandtech’s tests show that the CPU doesn’t have access to the full 400GB/s that Apple claims, but it still can tap ~200GB/s.
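Dividing the scores by the wall-power figures makes the efficiency gap concrete. Keep in mind that at-the-wall numbers include the whole laptop, not just the CPU package, so treat this as an approximation:

```python
# Rough perf-per-watt comparison using the Cinebench R23 scores and
# at-the-wall power figures quoted above. Wall power includes the whole
# laptop, so this is an approximation, not a CPU-only measurement.
m1_max = 12375 / 39.7            # ~312 points per watt
core_i9_11980hk = 12830 / 106.5  # ~120 points per watt

print(f"M1 Max:          {m1_max:.0f} pts/W")
print(f"Core i9-11980HK: {core_i9_11980hk:.0f} pts/W")
print(f"Efficiency advantage: {m1_max / core_i9_11980hk:.1f}x")  # ~2.6x
```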

So far we’ve confined ourselves mostly to a discussion of the new CPUs, but the GPU deserves a nod here too. Apple’s new graphics solution looks amazing, if one only consults synthetic tests. Factor in its real-world gaming performance and the results are a bit lackluster. Apple’s overall performance is still up substantially compared to its previous generation of products, however, which tapped relatively low-end AMD mobile GPUs. GPU-centric content creation benchmarks also showed better performance results than gaming.

All that said, the M1 Max and M1 Pro are not automatic, must-have processors for everyone. Apple’s support for gaming is still practically nonexistent, and it’s no longer as easy to run Windows on a Mac as it once was, which may matter for some users. Also, there’s enormous inertia in the PC universe and 10 percent of the PC market isn’t just going to tromp over to macOS in a year, no matter how good the M1 Max is.

Power Efficiency is Just Performance a Company Hasn’t Tapped Yet

The biggest problem for Intel and quite possibly AMD isn’t the M1 Pro / M1 Max’s raw performance. Though it may take current x86 CPUs far more power to match Apple’s newest silicon, high-end chips from both manufacturers can hang with the top-end M1. It’s also possible that AMD’s Zen 3 would compare more favorably to Apple’s power consumption than Intel does.

It’s power efficiency where Apple’s latest systems seem to leave x86 in the dust. Every CPU is more efficient at some clock speeds than at others, and as clock speeds rise, the amount of power required to buy each additional percentage point of performance grows. The high clocks Intel and AMD are forced to utilize to offer equivalent performance are not an advantage against the M1.
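The textbook reason, as general CMOS background rather than a figure from any of these reviews, is that dynamic power scales with both voltage and frequency:

\[ P_{\text{dyn}} \approx \alpha\, C\, V^{2} f \]

where \(\alpha\) is switching activity, \(C\) is switched capacitance, \(V\) is supply voltage, and \(f\) is clock frequency. Because reaching higher frequencies generally requires raising the voltage as well, power climbs much faster than performance once a core is pushed toward the top of its frequency range.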

We don’t know what the M1 family’s power consumption looks like across its entire range of potential operating frequencies, but Apple is having no problem hammering x86 on power efficiency at 3.2GHz. This implies there’s at least some headroom left in the core. Apple could use that headroom to launch a desktop chip running 15 – 25 percent faster than its current laptop processors, or it might spend its power budget scaling the chip’s core count and improving internal parallelism. Likely, it’ll be a mix of both. Either way, this comparison is going to get tougher for the x86 manufacturers once the fight moves to the desktop arena.

CPUs that offer a combination of high power efficiency and high performance tend to succeed in the consumer market. Ryzen was vastly more efficient than the Bulldozer family of products that preceded it. Intel’s Core 2 Duo was far more efficient than the Pentium 4 architectures. Both launches ushered in a new era for their respective companies.

The length of CPU design cycles means challenges like the one Apple is mounting play out in slow motion. Intel’s upcoming Alder Lake launch will provide an updated comparison point and a first look at a high-end x86 hybrid CPU. AMD also has plans to introduce V-Cache for Zen 3 and Zen 4 beyond that, in 2022. There will be responses to this launch.

But make no mistake: If the original M1 CPU was Apple’s warning shot, the M1 Pro and M1 Max are the opening salvo. A lot of factors go into whether or not a system is attractive, including ecosystem support and familiarity, but if Apple continues to deliver better performance than x86, content creators are going to notice. It might not pick up in the gaming community — serious gamers are not well-served by Apple systems at this time — but it could let Apple make inroads in other markets.

Now that we’ve seen what Apple can do in a high-performance mobile form factor, there’s no reason to doubt the company’s ability to introduce a high-end Mac desktop with CPU performance that can match x86. It may not launch such a chip for six months to a year, and the resulting system may be far too expensive to be practical for most buyers, but the M1 Pro and M1 Max prove that Apple’s silicon can scale. It may have taken a decade longer than anyone expected back in 2011, but the long-awaited battle between x86 and ARM is finally happening, one market segment and product launch at a time.




Astronomers May Have Found the First Exoplanet in Another Galaxy

Astronomers once wondered if there were other planets in the heavens, and we certainly know the answer to that one: a resounding yes. With the help of instruments like the dearly departed Kepler Space Telescope, we’ve discovered thousands of exoplanets, but most of those are within a few thousand light-years of Earth. A new discovery courtesy of NASA’s Chandra X-ray Observatory could point the way to yet another exoplanet, but this one is a bit more distant. If confirmed, it would be the very first exoplanet discovered in another galaxy. Confirmation is not likely unless you’re very patient, though. 

The planet in question, if it exists, is in a galaxy known as M51 about 28 million light-years away. You may know it as the Whirlpool Galaxy — even if you don’t know it by name, you’ve probably seen a picture of it (above), as it’s one of the prettiest spiral galaxies visible from Earth. The team, largely from the Harvard-Smithsonian Center for Astrophysics, identified a potential planet in a system within M51 dubbed M51-ULS-1. The only reason we have any idea of what’s going on in ULS-1 is that it’s so unlike our solar system.

M51-ULS-1 is an X-ray binary, which means it pairs a normal “main sequence” star with a smaller, denser companion in orbit. That companion is either a neutron star or a black hole, and as a result, the system is a powerful X-ray source. Those X-ray emissions appeared in data collected by Chandra, which can be used in a similar fashion to Kepler’s data.

With Kepler and other planet-hunting instruments, astronomers are looking for exoplanet transits. That means the exoplanet passed in front of its star from our perspective here on Earth, causing a small dip in luminance. That kind of signal would be too faint in the visual spectrum to pick up from millions of light-years away, but an X-ray binary’s signature is more intense and compact. Thanks to Chandra, the team was able to identify what looks like a transit in front of the X-ray source, which could be an exoplanet. 
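Conceptually, the search itself is simple: look for a temporary dip in an otherwise steady flux curve. Here’s a toy sketch of that idea with synthetic data and a naive threshold; real transit searches in Chandra data model noise, source variability, and instrument effects far more carefully:

```python
# Toy illustration of transit detection: flag time bins where the measured
# flux drops well below the baseline. Synthetic data only; real searches are
# far more careful about noise, source variability, and instrument effects.
import numpy as np

rng = np.random.default_rng(0)
flux = 1.0 + 0.02 * rng.standard_normal(1000)  # steady source plus noise
flux[480:510] *= 0.2                           # injected "transit" dip

baseline = np.median(flux)
dips = np.where(flux < 0.5 * baseline)[0]      # naive threshold detector

if dips.size:
    print(f"Candidate transit from bin {dips.min()} to bin {dips.max()}")
```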

With most exoplanet detections, we don’t need to wait long for confirmation. Our methods are better at identifying large exoplanets that orbit close to their stars, but M51-ULS-1 is different. Astronomers estimate that, because of the suspected orbit, they would need to watch the X-ray source for another 70 years to confirm a transit, and it’s unlikely anyone will still be looking at that point. We will hopefully have much more advanced ways of studying exoplanets by the dawn of the 22nd century. Still, this detection shows the method could be capable of definitively identifying exoplanets in other galaxies. It’s probably only a matter of time.
