Thursday, 30 April 2020

ET Deals: $450 Off Refurbished Dell XPS 13 Touchscreen Laptop, $140 Off 27-Inch UltraSharp 4K USB-C Monitor, 1TB WD Black SN750 SSD Just $200

Today you can get a refurbished 13.4-inch Dell laptop with a Core i7 processor and a $450 discount from PCMag’s online shop.

Dell XPS 13 7390 Intel Core i7-1065G7 13.4-Inch Laptop w/ 16GB DDR4 RAM and 512GB M.2 NVMe SSD — Refurbished ($1,199.99)

Dell designed this notebook to be a high-end solution for work and travel. The metal-clad notebook features a fast Intel Core i7-1065G7 quad-core processor and a 1920×1200 display. According to Dell, this system also has excellent battery life and can last for up to 21 hours on a single charge. Right now you can get one that’s refurbished from PCMag’s online shop, marked down from $1,649.99 to $1,199.99.

Dell UltraSharp U2720Q 27-Inch 4K USB-C Monitor ($579.99)

Working on a 4K monitor has some major advantages including being able to fit more on-screen at any given time. This display from Dell utilizes a 27-inch 4K panel that also supports 1.07 billion colors, making it well-suited for image editing. Right now you can get one from Dell marked down from $719.99 to $579.99.

Western Digital Black SN750 1TB NVMe PCI-E SSD ($199.99)

Upgrade your PC with one of WD’s SN750 SSDs, which will give you an extremely fast storage device with plenty of space. The drive can read data at up to 3,470MB/s, and right now you can get one from Best Buy marked down from $259.99 to $199.99.

Featured Deals

  • Dell XPS 13 (7390) Intel Core i7-1065G7 Quad-core 13.4″ 1920×1200 2-in-1 Touch Laptop (Refurb) for $1199 at PCMag Shop (list price $1649)
  • Dell U2720Q UltraSharp 27″ 4K IPS USB-C Monitor for $579.99 at Dell (list price $719.99)
  • WD Black SN750 1TB NVMe M.2 SSD for $199.99 at Best Buy (list price $259.99)
  • Dell P2719HC 27″ 1080p USB-C IPS Monitor for $304.99 at Dell (list price $379.99)
  • WD Black D10 8TB USB 3.2 External Hard Drive for $169.99 at Best Buy (list price $219.99)
  • PNY GeForce GTX 1660 Blower 6GB Graphics Card for $209.99 at Dell (list price $280.99)
  • VisionTek Radeon RX 5500 XT 8GB GDDR6 Graphics Card for $179.99 at Dell (list price $275.99)
  • Intel Core i5-9400 6-core 2.9GHz Desktop Processor for $149.99 at Best Buy (list price $189.99)
  • AMD Ryzen 7 3700X 8-Core Processor with Wraith Prism LED Cooler for $299.99 at Walmart (list price $329)
  • Logitech G903 SE LIGHTSPEED Wireless Gaming Mouse for $69.99 at Best Buy (list price $149.99)
  • Logitech G910 Orion Spectrum RGB Wired Mechanical Gaming Keyboard for $84.99 at Best Buy (list price $129.99)
  • Corsair K70 RGB MK.2 Rapidfire Wired Mechanical Gaming Keyboard with Cherry MX Speed Switches for $99.99 at Best Buy (list price $169.99)

Note: Terms and conditions apply. See the relevant retail sites for more information. For more great deals, go to our partners at TechBargains.com.

Now read:



Teracube Launches A $300 Smartphone Designed To Work For 4 Years

If you cringe at the thought of spending almost $1,100 for a new iPhone — or even $600 to $700 for a respectable Android phone — you’d probably welcome an alternative. Recently, a company called Teracube came up with one. And their plan is already winning over converts on Kickstarter.

The pitch is simple. Teracube argues that most smartphones are expensive to purchase, expensive to repair, and last an average of only 2 to 3 years anyway. On top of that, all those dead phones — up to 350,000 per day — are adding to the major glut of e-waste facing our globe, a figure already topping 50 million tons a year.

Which brings us to Teracube’s modest proposal to help out the planet and your wallet: get a Teracube smartphone for $298.99. It comes with a four-year premium care warranty, so no matter how you break your phone during those four years, Teracube will send you a replacement for just $39. With four years guaranteed, Teracube keeps millions of discarded phones out of scrap heaps — and keeps a sound, functioning phone in your pocket at a minimal price.

Unless you’re a hardcore phone snob, the Teracube should have more than enough performance to keep the average user happy. It’s powered by a MediaTek Helio P60 octa-core processor with 6GB of RAM and 128GB of storage space. There’s also a 6.2-inch Full HD display, a 12-megapixel rear camera and an 8-megapixel front-facing camera for all your picture taking, and a 3,400mAh battery capable of keeping you up and running virtually all day.

The Teracube also comes unlocked, so you can easily add it to a service plan with GSM carriers AT&T, T-Mobile, MetroPCS, or a host of other providers almost immediately.

Teracube’s gutsy swing at breaking the disposable smartphone game enticed Kickstarter contributors to kick in $125,000 toward its funding. You can join the movement now by picking up a Teracube phone with the four-year guarantee for $298.99, which is $50 off the regular retail price.

Note: Terms and conditions apply. See the relevant retail sites for more information. For more great deals, go to our partners at TechBargains.com.

Now read:



No, AMD Isn’t Building a 48-Core Ryzen Threadripper 3980X

Rumors are rocketing around the ‘net that AMD is preparing to launch a 48-core Ryzen Threadripper 3980X, based on a fake image being passed around on Twitter. It’s a superficially tempting thought, as it offers the prospect of a high-core-count CPU with perhaps slightly higher clocks than the 3990X. But there are multiple reasons to believe this screenshot is a fabrication. Said fake looks like this:

Fake 3980X screenshot.

AMD has already stated it has no plans to launch a 48-core chip, and none of the information we’ve uncovered about this screenshot suggests it has any intention of changing them. In the past, AMD has indicated that the mid-tier Threadripper parts don’t tend to sell very well; customers either go for the sweet-spot chips or the highest-end parts, but not much in-between. This explains why the company has taken the approach it has with third-generation Threadripper. This time around, there’s an entry-level 24-core part, one “sweet spot” CPU (the 32-core 3970X), and one halo part, the 3990X. The previous entry-level Threadripper configuration — 16 cores — moved into the desktop lineup as the 3950X, where it tops that product stack instead of anchoring the workstation Threadripper family.

The second reason to believe this CPU screenshot is fake is that the author forgot to change the moniker in the “Specification” field, which still reads “3990X.” This isn’t how engineering chips are badged, either. An ES chip might have a seemingly random code in place of its formal product name or a non-standard entry in another field, but you aren’t going to find a 3980X that’s accidentally been badged as a 3990X. Doesn’t happen that way.

Finally, there’s the fact that AMD currently has no reason to release a 48-core Threadripper. A 48-core version of the CPU would have the same scheduling problems the 3990X does, because Microsoft hasn’t yet fixed the Windows scheduler to support more than 64 logical processors per processor group (initial reports that Windows 10 Enterprise would outperform Windows 10 Pro did not survive additional analysis). The 3990X offers some real performance improvement over the 3970X and our sample was a great overclocker, but if you aren’t running the right kinds of applications to benefit from a 3990X, the 3970X will be a better performer.

Right now, the 3990X is a specialty halo part that really only makes sense for a small number of people with specific workload requirements. It’s a technology demonstration as much as a commercial product, and it’s not a product market we’d expect AMD to build out until Windows 10 is more friendly to these high core count configurations. Any CPU above 32C/64T will have the same Processor Group limitation as the 3990X.
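The Processor Group ceiling mentioned above is easy to quantify. Here’s a rough sketch of the group-count arithmetic only (the 64-logical-processor cap is real Windows behavior; the helper function and its name are our own illustration, not an actual Windows API):

```python
import math

# Windows caps a processor group at 64 logical processors; threads are
# confined to a single group unless an application explicitly opts in
# to multi-group scheduling.
MAX_GROUP_SIZE = 64

def processor_groups(logical_cpus: int) -> int:
    """Minimum number of processor groups a chip's logical CPUs span."""
    return math.ceil(logical_cpus / MAX_GROUP_SIZE)

# A 32C/64T 3970X fits in one group, while hypothetical 48C/96T and real
# 64C/128T parts both spill into two -- the scheduling headache above.
```

This is why every chip above 32C/64T hits the same wall: 96 or 128 logical processors both require two groups, and legacy applications only see one of them.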

None of this means AMD won’t ever release a 48-core chip, but I don’t think we’ll see the firm buffing up its consumer core counts in quite that way just at the moment. A lower-cost 48-core chip is the sort of thing I’d expect AMD to either reserve for a new Threadripper debut or as a competitive response to something Intel had built. Intel isn’t going to be launching any HEDT CPUs with that many cores in the near future either, and AMD has little reason to introduce one now.

Right now, in fact, it looks as if the big fight between 10th Gen Core and AMD’s 7nm Ryzen will be happening in the lower-end and midrange market. AMD now has 4C/8T chips in-market for $100, while Intel has new Core i3 CPUs in a similar configuration starting at $122. The gains from moving to 4C/8T from 4C/4T may not be as large as the improvement from 2C/4T to 4C/4T, but this will be the second significant thread-count upgrade the Core i3 has gotten in a relatively short period of time, courtesy of AMD’s willingness to push the envelope at every point in the desktop market.

Now Read:



New Fossils Prove Spinosaurus Was an Underwater Terror

Credit: Kumiko https://www.flickr.com/photos/kmkmks/27388394090/ CC BY-SA 2.0

We’re all familiar with Tyrannosaurus Rex, a massive theropod dinosaur from the Cretaceous period and star of several movies about dinosaurs eating people. However, there were even larger, potentially more terrifying beasts on Earth all those millions of years ago. Spinosaurus was even bigger than the T-rex, and new discoveries indicate you wouldn’t have been safe even in the water. Spinosaurus, it turns out, was an excellent swimmer thanks to its large, paddle-like tail. 

Spinosaurus was a theropod like the Tyrannosaurus — that just means it had hollow bones and three-toed limbs. The descendants of theropods most likely evolved into modern birds, but Spinosaurus was more dangerous than any bird. Adults could weigh as much as 7.5 tons and grow to more than 50 feet in length, making them among the largest theropod dinosaurs. 

Researchers first proposed that Spinosaurus was primarily an underwater predator several years ago, but the scientific community was unconvinced. Donald Henderson, a paleontologist at Canada’s Royal Tyrrell Museum, noted that Spinosaurus was probably top-heavy with its distinctive back sail and would not have been able to dive underwater. Nizar Ibrahim, lead author of the study, believed the answer would be found in fossils. Previous excavations had only uncovered a few Spinosaurus tail sections, but the team uncovered an almost full set of tail bones at a fossil site in Morocco between 2017 and 2018.

The newly reconstructed Spinosaurus was undeniably at home in the water. Rather than a tapered, whip-like tail, Spinosaurus sported a giant, paddle-like fin. Some of the fossil bones were 12 inches thick, indicating the tail would have been a powerful mode of underwater propulsion. The team speculates Spinosaurus might have spent most of its time in the water.

The team created a computer model to assess the capabilities of Spinosaurus’ tail, comparing it with modern land-dwelling dinosaurs and semi-aquatic creatures like crocodiles. Unsurprisingly, the Spinosaurus tail fin was about 2.6 times more efficient in the water than the tails of other theropods. 

Museums around the world will have to update their Spinosaurus models in the wake of this discovery, but that’s nothing new. The fossil record is incomplete, and sometimes we get details wrong when trying to reconstruct an entire animal from partial remains. The Tyrannosaurus Rex skeleton at the American Museum of Natural History in New York stood in an incorrect upright posture until 1992, when it was repositioned in the now-accepted horizontal stance. Oh, and the developers of Animal Crossing will have to update their inaccurate Spinosaurus fossils, too.

Top image credit: Kumiko/Flickr/CC BY-SA 2.0

Now read:



Overclocking Results Show We’re Hitting the Fundamental Limits of Silicon

Update (4/30/20): The formal unveiling of Intel’s 10th Generation Core i9 family is an excellent opportunity to revisit the points made in this November 2019 article. As of this writing — some five months later — Silicon Lottery is out of 9th Gen chips and waiting on Comet Lake CPUs to arrive. The handful of 7nm AMD CPUs show very similar patterns to what we identified back in November. A 3950X @ 4GHz is just $750, but an all-core 4.1GHz is $850 and a 4.15GHz chip is $999.

Now, with its 10th Gen Comet Lake chips, Intel has adopted strategies like die-lapping and adding copper to its IHS to improve thermal transfer off the core, while allowing much higher levels of power consumption. It’s not that there’s something wrong with the parts from either company; it’s that manufacturers increasingly have no additional firepower to leave on the table for enthusiasts to enjoy.

Original story below:

Silicon Lottery, a website that specializes in selling overclocked Intel and AMD parts, has some 9900KS chips available for sale. The company is offering a 9900KS verified at 5.1GHz for $749 and a 9900KS verified at 5.2GHz for $1199. What’s more interesting to us is the number of chips that qualify at each frequency. Thirty-one percent of Intel 9900KS chips can hit 5.1GHz, while just 3 percent can hit 5.2GHz. The 5.2GHz option was available earlier on 11/4 but is listed as sold-out as of this writing.

The 9900KS is an optimized variant of Intel’s 9900K. The 9900K is Intel’s current top-end CPU. Given the difficulties Intel has had moving to 10nm and the company’s need to maintain competitive standing against a newly-resurgent AMD, it’s safe to assume that Intel has optimized its 14nm++ process to within an inch of its life. The fact that Intel can ship a chip within ~4 percent of its apparent maximum clock in sufficient volume to launch it at all says good things about the company’s quality control and the state of its 14nm process line.

What I find interesting about the Silicon Lottery results is what they say (or said, as of November 2019) about the overall state of clock rates in high-performance desktop microprocessors. AMD is scarcely having an easier time of it. While new AGESA releases have improved overall clocking on 7nm chips, AMD’s engineers told us they were surprised to see clock improvements on the Ryzen 7 3000 family at all, because of the expected characteristics of the 7nm node.

AMD and Intel have continued to refine the clocking and thermal management systems they use and to squeeze more headroom out of silicon that they weren’t previously monetizing, but one of the results of this has been the gradual loss of high-end overclocking. Intel’s 10nm process is now in full production, giving us some idea of the trajectory of the node. Clocks on mobile parts have come down sharply compared with 14nm++. IPC improvements helped compensate for the loss in performance, but Intel still pushed TDPs up to 25W in some of the mobile CPU comparisons it did.

I think we can generally expect Intel to improve 10nm clocks with 10nm+ and 10nm++ when those nodes are ready. Similarly, AMD may be able to leverage TSMC’s 7nm node improvements for some small frequency gains itself. It’s even possible that both Intel and TSMC will clear away problems currently limiting them from hitting slightly higher CPU clocks. Intel’s 10nm has had severe growing pains and TSMC has never built big-core x86 processors like the Ryzen and Epyc chips it’s now shipping. I’m not trying to imply that CPU clocks have literally peaked at 5GHz and will never, ever improve. But the scope for gains past 5GHz looks limited indeed, and the 5.3GHz top frequency on Comet Lake doesn’t really change that.

Power per unit area versus throughput (that is, number of 32-bit ALU operations per unit time and unit area, in units of tera-integer operations per second; TIOPS) for CMOS and beyond-CMOS devices. The constraint of a power density not higher than 10 W/cm² is implemented, when necessary, by inserting an empty area into the optimally laid-out circuits. Caption from the original Intel paper.

The advent of machine learning, AI, and the IoT has collectively ensured that the broader computer industry will feel no pain from these shifts, but those of us who prized clock speed and single-threaded performance may have to find other aspects of computing to focus on long-term. The one architecture I’ve seen proposed as a replacement for CMOS is a spintronics approach Intel is researching. MESO — that’s the name of the new architecture — could open up new options as far as compute power density and efficiency. Both of those are critical goals in their own right, but so far, what we know about MESO suggests it would be more useful for low-power computing as opposed to pushing the high-power envelope, though it may have some utility in this respect in time. One of the frustrating things about being a high-performance computing fan these days is how few options for improving single-threaded performance seem to exist.

This might seem a bit churlish to write in 2020. After all, we’ve seen more movement in the CPU market in the past 3 years, since AMD launched Ryzen, than in the previous six. Both AMD and Intel have made major changes to their product families and introduced new CPUs with higher performance and faster clocks. Density improvements at future nodes ensure both companies will be able to introduce CPUs with more cores than previous models, should they choose to do so. Will they be able to keep cranking the clocks up? That’s a very different question. The evidence thus far is not encouraging.

Now Read:



Intel Unveils Comet Lake Desktop CPUs: Up to 5.3GHz, 10C/20T, and Some Unanswered Questions

The existence and imminent launch of Comet Lake, Intel’s first 10th Generation mainstream desktop CPU family, has been an open secret for several months. But Intel has stayed quiet about the specs and capabilities of the family even as rumors mounted. Now the company is finally talking about the CPUs it will launch later in May. There are some good reasons for Intel enthusiasts to be excited, but we’ve definitely got questions heading into the launch.

Let’s hit the high points first: As anticipated, the Comet Lake S 10th Generation Core desktop CPU family pulls out all the stops in an effort to push frequencies and core counts a little higher. The new Core i9-10900K is a 10C/20T CPU with a boost clock of up to 5.3GHz. That’s a 1.06x increase in top-line clock over the 9900K combined with a 1.25x core count improvement — not an unreasonable level of improvement over the Core i9-9900K given that both are 14nm CPUs.
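Those multipliers come straight from the quoted specs. A quick back-of-the-envelope check (figures from this article, nothing more):

```python
# Generational uplift of the Core i9-10900K over the Core i9-9900K,
# using the peak boost clocks and core counts quoted above.
boost_clock_gain = 5.3 / 5.0   # 10900K peak boost vs. 9900K peak boost
core_count_gain = 10 / 8       # 10 cores vs. 8 cores

print(f"clock: {boost_clock_gain:.2f}x, cores: {core_count_gain:.2f}x")
```

Of course, real-world scaling won’t be linear in either factor; the all-core boost and sustained power limits matter far more than the single-core peak.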


Hitting these higher frequencies, however, has required Intel to tweak a number of dials and levers. The company is now lapping its own die to reduce the amount of insulative material between the transistors themselves and the TIM. At the same time, Intel has increased the amount of copper in its IHS to improve its thermal conductivity.


Die-lapping has been discussed in overclocking circles as a method of improving thermals and possibly overclocks with the 9900K, but seeing Intel officially adopt it here illustrates how difficult it is for even Intel to keep pushing CPU clocks higher. The reason for the slightly thicker IHS is to keep z-height identical to allow for cooler re-use. The higher percentage of copper in the IHS more than offsets its increased thickness.

Other new features of the 10th Gen family include the ability to set a voltage/frequency curve in utilities like XTU, the use of Hyper-Threading across all Intel Core CPUs (i3/i5/i7/i9), formal support for DDR4-2933, integrated Wi-Fi 6 (802.11ax) support, new 2.5G Intel ethernet controllers based on Foxville, and some software optimizations in games like Total War: Three Kingdoms and Remnant: From the Ashes.

What Intel Hasn’t Said

There are several topics Intel hasn’t clarified and didn’t answer questions about during the conference call. No new details about the Z490 chipset or the status of its PCIe 4.0 support were given, even though multiple motherboard OEMs are claiming support for that standard is baked into upcoming boards. There have been rumors of a flaw in the 2.5G Ethernet controller that haven’t been clarified.

The additional pins added to LGA1200 are reportedly for power delivery, and we know the rated TDP has been bumped to 125W, but that number seems fairly meaningless in light of what we know about power consumption on modern high-end Intel CPUs. Unless you specifically configure them to draw no more than their rated TDPs, high-end chips like the 9900K draw far more than 95W while boosting under load. Sustained power draw is also much higher. Neither AMD nor Intel sticks to TDP as a measure of actual power consumption, but the rumors concerning the 10900K have implied it could draw as much as 250-280W.

Ignore the “Up to’s” in the base clock column in the image above. Intel has informed ExtremeTech that these frequencies are meant to be listed as static clocks. “Up to” only applies to the boost clock frequency. Overall, these new CPU configurations are an improvement over what Intel has had in-market with 9th Gen.

The Core i9-9900K is an 8C/16T CPU with a 3.6GHz base clock and a 5GHz peak boost clock, with an official price of $500. The Core i7-10700K is an 8C/16T CPU with a 3.8GHz base clock and a 5.1GHz boost clock, with the same 4.7GHz all-core boost as the 9900K. Price? $375.

I’m not going to speculate on how the 10700K will compare against CPUs like the 3700X or 3900X until we have silicon in-hand, but the 10700K is much better-positioned against AMD than its predecessor was. It isn’t clear how much performance will improve from the 9900K to the 10700K, but the 10700K should offer at least 100 percent of the Core i9’s performance for 75 percent of its price.

The price cuts and performance adjustments continue down the stack, to good overall effect. The bottom-end Core i3, the Core i3-10100, is a 4C/8T CPU with a 3.6GHz base clock and 4.3GHz turbo for $122. The equivalent 9th Gen CPU, the Core i3-9100, is a 4C/4T CPU at 3.6GHz/4.2GHz. The addition of HT should translate to a 1.15x – 1.2x improvement across the board.

Comet Lake and LGA1200 will definitely deliver some improvements over 9th Gen, but we want to see exactly how these chips and platforms compare before we say more. One thing we are sure of — anyone planning to play at the top of the Comet Lake stack will want a high-end CPU cooler to make certain they squeeze every last bit of performance out of the chip.

Now Read:



Toshiba Clarifies Which of Its Consumer HDDs Use Shingled Magnetic Recording

After news broke that the major hard drive companies have all been shipping SMR drives into various consumer products, some of the manufacturers involved have been clarifying which of their own HDDs are actually SMR drives instead of CMR. Toshiba is the latest company to release this information, though there are limits to its report that make it potentially less useful than we’d like.


Seagate has also deployed shingled magnetic recording to boost areal density of its drive platters, though the technology isn’t a great fit for consumer products.

As a reminder: SMR stands for Shingled Magnetic Recording and refers to the placement of tracks on the HDD platter itself. With conventional recording, a gap is left in-between each track, allowing the track to be individually read and written. With SMR, the tracks are layered directly next to each other, rather like shingles. This means that writing data to the drive requires reading and writing multiple tracks at once.
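To see why shingling penalizes writes, consider a toy model. The band size and the helper functions below are invented purely for illustration; real drive firmware (with its persistent caches and zone remapping) is far more sophisticated:

```python
# Toy model of SMR write amplification. Tracks in a shingled band overlap
# like roof shingles, so overwriting one track clobbers its neighbors:
# the drive must re-read and rewrite every later track in the band.

BAND_SIZE = 8  # tracks per shingled band (illustrative value only)

def cmr_tracks_touched(track_index: int) -> int:
    """Conventional recording: each track has a guard gap, so only
    the target track is rewritten."""
    return 1

def smr_tracks_touched(track_index: int) -> int:
    """Shingled recording: the target track plus everything shingled
    on top of it (the rest of the band) must be rewritten."""
    return BAND_SIZE - track_index

# Overwriting the first track of a band rewrites all 8 tracks -- an 8x
# write amplification in this toy model -- while reads are unaffected.
```

Drive-managed SMR hides this behind a write cache, which is why the penalty only shows up under sustained writes — exactly the workload a primary drive or NAS rebuild generates.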

Data and graph by Anandtech.

The impact on read speeds is small-to-nil, but the write speed impact for using SMR can be significant. There’s not a ton of data on how this hits consumer use-cases because, up until now, reviewers haven’t been treating SMR drives as if they were likely to wind up being used for primary hard drives. I’d be surprised if we don’t start seeing more reviews on this in short order.

In any event, here’s what Toshiba has to say. The P300 6TB HDWD260UZSVA and P300 4TB HDWD240UZSVA are both SMR desktop HDDs that Toshiba only ships to bulk OEMs — which means any prebuilt system you buy with a 4TB or a 6TB Toshiba HDD has to be checked to see if it uses one of these two models. Retail P300 drives top out at 3TB.

Unlike the desktop family, the L200 laptop line uses SMR in multiple bulk and retail drives, including:

HDWL120UZSVA (2TB, bulk)
HDWL120EZSTA (2TB, retail)
HDWL120XZSTA (2TB, retail)

The 7mm-thick 1TB drives are also affected (HDWL110UZSVA, HDWL110EZSTA, and HDWL110XZSTA). The first is a bulk drive; the latter two are retail products.

Unfortunately, because Toshiba is selling these drives in bulk, it may be difficult to make certain you aren’t buying one. ExtremeTech does not recommend using an SMR drive as primary storage in a consumer system unless you are specifically aware of its likely performance characteristics and do not mind them. The dramatically lower write performance SMR exhibits in some instances is of less concern for personal backup drives or similar applications, but hard drives are already poor solutions for storage speed compared with SSDs, and SMR drives trail their CMR (conventional magnetic recording) counterparts in several additional respects.

We are glad Toshiba came forward with this information, but we can only recommend buying a system with a Toshiba HDD if you either know exactly what you’ll be getting into or can confirm you aren’t buying an SMR drive.

Now Read:



Chiplets Do Not ‘Reinstate’ Moore’s Law

Ever since chiplets became a topic of discussion in the semiconductor industry, there’s been something of a fight over how to talk about them. It’s not unusual to see articles claiming that chiplets represent some kind of new advance that will allow us to return to an era of idealized scaling and reliable generation-over-generation performance gains.

There are two problems with this framing. First, while it’s not exactly wrong, it’s too simplistic and obscures some important details in the relationship between chiplets and Moore’s Law. Second, casting chiplets strictly in terms of Moore’s Law ignores some of the most exciting ideas for how we should use them in the future.

Chiplets Reverse a Long-Standing Trend Out of Necessity

The history of computing is the history of function integration. The very name integrated circuit recalls the long history of improving computer performance by building circuit components closer together. FPUs, CPU caches, memory controllers, GPUs, PCIe lanes, and I/O controllers are just some of the once-separate components that are now commonly integrated on-die.

Chiplets fundamentally reverse this trend by breaking once-monolithic chips into separate functional blocks based on how amenable these blocks are to further scaling. In AMD’s case, I/O functions and the chip’s DRAM channels are built on a 14nm die from GF (using 12nm design rules), while the actual chiplets containing the CPU cores and the L3 cache were scaled down on TSMC’s new node.

Prior to 7nm, we didn’t need chiplets because it was still more valuable to keep the entire chip unified than to break it into pieces and deal with the higher latency and power costs.


Epyc’s I/O die, as shown at AMD’s New Horizon event.

Do chiplets improve scaling by virtue of focusing that effort where it’s needed most? Yes.

Is it an extra step that we didn’t previously need to take? Yes.

Chiplets are both a demonstration of how good engineers are at finding new ways to improve performance and a demonstration of how continuing to improve performance requires compromises that weren’t previously necessary. Even if they allow companies to accelerate density improvements, they’re still only applying those improvements to part of what has typically been considered a CPU.

Also, keep in mind that endlessly increasing transistor density is of limited effectiveness without corresponding decreases in power consumption. Higher transistor densities also inevitably mean a greater chance of a performance-limiting hot spot on the die.

Chiplets: Beyond Moore’s Law

The most interesting feature of chiplets, in my own opinion, has nothing to do with their ability to drive future density scaling. I’m very curious to see whether firms will deploy chiplets made from different types of semiconductors within the same CPU. The integration of different materials, like III-V semiconductors, could allow chiplet-to-chiplet communication to be handled via optical interconnects in future designs, or allow a conventional chiplet with a set of standard CPU cores to be paired with, say, a spintronics-based chip built on gallium nitride.

We don’t use silicon because it’s the highest-performing transistor material. We use silicon because it’s affordable, easy to work with, and doesn’t have any enormous flaws that limit its usefulness in any particular application. Probably the best feature of chiplets is the way they could allow a company like Intel or AMD to take a smaller risk on adopting a new material for silicon engineering without betting the entire farm in the process.

Imagine a scenario where Intel or AMD wanted to introduce a chiplet-based CPU with four ultra-high-performance cores built with something like InGaAs (indium gallium arsenide), and 16 cores based on improved-but-conventional silicon. If the InGaAs project fails, the work done on the rest of the chip isn’t wasted and neither company is stuck starting from scratch on an entire CPU design.

The idea of optimizing chiplet design for different types of materials and use-cases within the same SoC is a logical extension of the trend towards specialization that created chiplets themselves. Intel has even discussed using III-V semiconductors like InGaAs before, though not since ~2015, as far as I know.

The most exciting thing about chiplets, in my opinion, isn’t that they offer a way to keep packing transistors. It’s that they may give companies more latitude to experiment with new materials and engineering processes that will accelerate performance or improve power efficiency without requiring them to deploy these technologies across an entire SoC simultaneously. Chiplets are just one example of how companies are rethinking the traditional method of building products with an eye towards improving performance through something other than smaller manufacturing nodes. The ideas of getting rid of PC motherboards or of using wafer-scale processing to build super-high-performance processors are both different applications of the same concept: radically changing our preconceived notions of what a system looks like in ways that aren’t directly tied to Moore’s Law.

Now Read:



Wednesday, 29 April 2020

Microsoft to Include More Efficient Search Indexing in May 2020 Windows 10 Update

Every Windows 10 update brings its fair share of changes, but the upcoming May 2020 version is shaping up to be more significant than most of Microsoft’s feature updates. After the update, Windows 10 will be kinder to your hard drive when keeping tabs on your files and programs, and that won’t come at the expense of search performance. It’s all thanks to a tweaked indexing system and a little common sense.

Years back, it was common for Windows tips articles and optimization guides to recommend disabling the Windows indexing service. Microsoft implemented this feature so you could pop open the search bar and get immediate results as you typed. However, that also meant the operating system needed to continuously scan your hard drive for changes. In some configurations (particularly those with slower spinning drives), that could really kill system performance. 

Today, solid-state drives (SSDs) have overtaken spinning drives for running Windows, though many people still use big, cheap spinning drives for bulk storage. And even if most of us are using SSDs, many cheap PCs still rely on spinning drives, and Windows 10 build 2004 should go easier on them.

The new indexing service is more efficient when it’s running, which will help spinning drives maintain a minimum level of performance and avoid noticeable lag. Windows 10 build 2004 also added a feature that throttles or pauses indexing activity when the user is transferring or deleting files. That could even help boost SSD performance, depending on your configuration. 
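Microsoft hasn’t published the indexer’s internal logic, so the following Python sketch is purely illustrative: it models the run/throttle/pause policy described above, with hypothetical function names and thresholds rather than anything drawn from Windows internals.

```python
# Hypothetical sketch of the indexing throttle policy described above.
# The function name and the 70 percent threshold are illustrative
# assumptions, not Windows internals.

def indexing_action(file_transfer_active: bool, disk_busy_pct: float) -> str:
    """Decide whether the indexer should run, throttle, or pause."""
    if file_transfer_active:
        return "pause"      # back off entirely during file copies/deletes
    if disk_busy_pct > 70.0:
        return "throttle"   # reduce indexing I/O under heavy disk load
    return "run"            # index at full speed when the disk is idle

print(indexing_action(True, 10.0))   # pause
print(indexing_action(False, 90.0))  # throttle
print(indexing_action(False, 5.0))   # run
```

The point of the policy is simply that user-initiated I/O always wins; the indexer only works hard when the disk would otherwise be idle.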

The search indexing changes will be great for a subset of users, but there are a ton of changes in this version. There’s a new version of the Linux subsystem, this time with a proper open-source kernel that will get updates. The search UI to go with that new indexing system will also include some quick search buttons and more relevant results. Finally, there’s DirectX 12 Ultimate with support for ray tracing. The update will, sadly, not come with a fancy new video card that can handle ray tracing. 

Microsoft has reportedly finalized the May 2020 update, so build 2004 might be the one that rolls out to users in a few weeks. As usual, the update will hit a small number of users before launching widely. If you want early access, you can join the Windows Insiders program. 

Now read:



Land Over $1,000 Tech Training Courses For Only $80 Today

Learning is a lifelong pursuit with no finish line. Whether you’re happy in your current job or actively seeking to branch out and do something different, your next step will almost undoubtedly require learning a new skill, a new discipline, or a new set of equipment to succeed.

The problem is that it isn’t always clear what you’ll need to know next. So to keep your options open, a service like the LearnNow Complete Developer and IT Pro Library can go a long way toward satisfying those education needs, no matter what it is you need to learn. Right now, students can take over 90 percent off the price of this giant course package, with unlimited lifetime access for only $79.99.

And the term library is not used loosely with LearnNow. One of the web’s premier online learning platforms, it now houses over 1,000 courses aimed at helping students master today’s most in-demand IT and software development skills.

LearnNow has spent more than 25 years training students from tech industry titans like Microsoft, Intel, AT&T, and more, and its catalog covers virtually any tech topic users ever need to learn, including AWS, Azure, Python, C#, ASP.NET Core, Linux, SQL Server, cybersecurity, and more.

Whether you learn best from watching, listening, reading, or doing, the coursework here provides learners with the tools to support their best learning style. And all classes are taught by expert instructors with years of experience in their fields, always ready to help the next generation of experts face new challenges.

Emerging developers can dive into Java or Open Source instruction, while project managers can take courses in Agile, PMP, PRINCE, and more. There’s complete Microsoft Office training, classes in using business analysis systems like BABOK or creative and web design courses covering Adobe Creative Cloud, Final Cut, photography, HTML, and other disciplines.

Unlimited lifetime access to this course collection is usually priced at $996, but with the current offer, you can access it for a fraction of that cost, just $79.99.

Note: Terms and conditions apply. See the relevant retail sites for more information. For more great deals, go to our partners at TechBargains.com.

Now read:



The Galaxy Z Flip Is a Fascinating Glimpse of the Future That Came At the Worst Time

Smartphones have been getting more expensive year after year, and all the while upgrading has become less worthwhile. Each new Galaxy or iPhone is only marginally better than the one that came before, and they’re all flat glass slabs. For all their flaws, foldable phones like the Galaxy Z Flip are trying to do something new, and that’s exciting. The Z Flip is the first foldable that ably does all the boring smartphone things while innovating on the hardware side. The technology is still far from perfect, but this is the most fun smartphone I’ve used in years. 

It’s a tough time to launch expensive new gadgets, but this phone gives me hope that foldables could be almost as transformative as the original iPhone. I apparently believe this so passionately that I talked myself into spending $1,400 on a Galaxy Z Flip, which is the first phone I’ve purchased in years. 

Returning to the Flip

I started writing about technology in 2008, which was in the very early days of the iPhone and Android. Back then, Android smartphone makers were trying absolutely anything. There were phones with flip-out secondary screens, slide-out keyboards, Sidekick-style hinges, and more. More than a decade of experience has shown smartphone makers what works, and diverging from that formula has often led to disaster. Phones today are better, but they’re also a snooze. 

After a decade of covering mobile technology, I have grown accustomed to getting review units and simply moving from one loaner to the next. I bought the Z Flip because Samsung didn’t want to send me one, and it’s just a fascinating piece of hardware. The Galaxy Z Flip stole the show from the S20 family at Samsung’s 2020 Unpacked event with its slick form factor and next-generation flexible display technology. 

There was a time when flip phones ruled the world, and going back to this form factor has been amazing after all these years. I’ve become so accustomed to pocket-busting phablets that sometimes I forget I even have the Z Flip in there. Open it up, and there’s a huge 6.7-inch OLED panel. The ultra-thin glass display makes the Z Flip feel much more sturdy — it’s almost indistinguishable from a “regular” smartphone when it’s open, and that’s what makes it so cool. The crease isn’t even very noticeable. 

Unlike the foldable Moto Razr, Samsung engineered a hinge that remains open at various angles. That lets you prop the phone up in “Flex Mode” to access apps in different ways. For example, the gallery app fills the top half of the fold with the image, and you can swipe on the bottom half to navigate. Similarly, YouTube keeps the video on the top half in Flex Mode and puts the comments and other content on the bottom. The software optimizations are admittedly minimal right now, but we’re still in the early days of foldable tech. 

I still have to switch phones frequently for reviews, but the Z Flip is increasingly the phone I want to go back to when the review is done. There are compromises, but I’m willing to put up with them. 

Folding Evolution

You might think all that praise is leading up to a recommendation that you run out and buy a $1,400 smartphone. Nope. For most people, buying a Z Flip would be a real mistake. It’s cool technology, and possibly a sign of long-overdue innovation in the smartphone market. At the same time, it’s ludicrously expensive and includes several notable drawbacks. 

Perhaps most notably, there’s a hinge that will probably be the first thing on the phone that fails. While Samsung seems to have worked out the kinks with its foldable hinges, laboratory testing can never tell us how a device will hold up in real life. Maybe the Z Flip will open like a rusty door in 12 months — we just don’t know yet. There’s also no way to make foldable phones water-resistant right now, so no IP rating here. 

Samsung also markets the Z Flip as having foldable glass, which is technically true. There’s a layer of glass in the display, but it’s not the top layer. The surface is plastic, which can scratch and dent much more easily than glass. Even your fingernail is hard enough to leave marks. The display also doesn’t work with in-display fingerprint sensors right now, and the form factor limits where you can place one. Samsung chose to integrate it with the power button on the right edge, and it’s just passable. 

Smartphones have been getting more expensive for years, and now even flat phones like the Galaxy S20 Ultra cost as much as the Z Flip. And the coronavirus pandemic has thrown the global economy into disarray. It’s going to be harder than ever to convince people to spend this much on a phone, and that alone might delay progress on folding phones. 

The Galaxy Z Flip next to the S20 Ultra.

The engineering and materials are fundamentally more expensive than those of flat phones, so you’d expect foldables to be priced higher. For now, we can expect more niche side projects like the Z Flip and Huawei Mate X. It’ll be years before the new flagship Galaxy phone is foldable rather than flat, but I think we’ll get there. In the meantime, if you’re dead set on spending $1,400 on a smartphone, I’d recommend the Z Flip over the S20 Ultra.

Now read:



Some Galaxy S20 Ultra Owners Claim Camera Glass Spontaneously Shatters

Image credit: Samsung Community Forums

Samsung made a big deal of the Galaxy S20 Ultra’s camera at the announcement earlier this year, but reviews have been tepid. The camera setup on this $1,400 phone might not live up to expectations, but perhaps Samsung will address that with software updates. One thing updates can’t fix is shattered glass, and an increasing number of S20 Ultra owners say their camera modules have cracked for no apparent reason.

The Galaxy S20 Ultra has a humongous camera module on the back with a 108MP primary sensor, 12MP ultra-wide, 48MP 4x telephoto, and a time-of-flight 3D sensor. Like other phones, the camera sensors are under a piece of scratch-resistant glass. However, even the latest versions of high-end Gorilla Glass can crack under the right conditions. 

Numerous S20 Ultra owners have taken to Samsung’s forums and Reddit to complain about mysterious damage to their camera glass. Most of the images they’ve shared show small “punctures” directly over the camera lenses. Other users complain of hairline cracks that appear seemingly out of nowhere. All the victims of this damage swear up and down they didn’t drop their phones, and many claim the damage happened spontaneously while the device was in a pocket or bag.

Even though Samsung’s S20 sales are reportedly not meeting expectations, the company still sells more phones than any other OEM. There will inevitably be some people with defects or accidental damage making noise on the internet. Admittedly, though, the shattered glass on these phones looks very unusual. I don’t think I’ve seen a phone break in this way before.

The S20 Ultra’s giant camera assembly (far right), courtesy of iFixit.

Hardened glass does get weaker as panel size increases, but manufacturers compensate for that in screens by bonding the glass to the OLED or LCD panel underneath. It’s possible that the S20 Ultra’s oversized camera module is just too big, making it easier to damage the glass simply by carrying the phone around. 

Damage to the glass makes the cameras behind it essentially useless, so most affected owners have reached out to Samsung for help. Naturally, Samsung support has given these customers the cold shoulder. The company’s warranty doesn’t cover cosmetic damage, so the only option is to pay to have the camera glass replaced. Samsung’s official repair centers have quoted customers $400 for a fix, or $100 for those with Samsung Premium Care subscriptions. At that price, it’s likely Samsung will replace the entire camera module rather than just the glass. The company has thus far refused to admit there’s a problem with the phone, and forum moderators have removed many of the complaints. It’ll take a much more widespread pattern before Samsung changes its tune.


Now read:



New Startup SiPearl Will Challenge AMD, Intel for Control of the EU HPC Market

SiPearl may be a new company, but it has massive ambitions in the HPC space. The fabless semiconductor firm has been tapped to design the first CPU built for the European Processor Initiative (EPI). The stated goal of the EPI? To “design and implement a roadmap for a new family of low-power European processors for extreme-scale computing, high-performance Big Data, and a range of emerging applications.”

SiPearl has licensed the Zeus core from ARM to use in this endeavor. Zeus is the upcoming next-gen core in the ARM Neoverse, and it’s expected to be derived from the Cortex-A77 and to use some of the same infrastructure as that core. The first generation of ARM Neoverse CPUs, codenamed Ares, was similar to the Cortex-A76 in most respects but contained a few key differences.

First, the chip was tuned to run at full clock rather than for the kind of power-saving modes that mobile SoCs are known for. Second, it contained a fully coherent L1 cache, in order to accelerate performance in virtual environments. Ares also offered a 1MB L2 cache, up from 512KB on the Cortex-A76. Finally, Ares was designed to work with a coherent mesh interconnect, rather than the cluster configuration used for ARM’s consumer parts.

While we don’t know what kinds of improvements Zeus will introduce over the Cortex-A77, we can reasonably expect they’ll be of this nature — tweaks and tuning to improve the core’s performance in server workloads. As the roadmap up above shows, SiPearl will introduce a chip based on TSMC’s 6nm node in 2021. 6nm appears to be a refined 7nm that leverages some learning from the EUV side of the business in an unspecified fashion. N6, to be clear, does not deploy EUV — it reuses the design rules of the previous generation — but TSMC apparently learned some tricks from EUV that it can bring over to the DUV side of the business.

Normally, the launch of a fabless CPU company with big ambitions wouldn’t warrant declaring war on Intel and AMD. But the fact that SiPearl is being underwritten by the European Processor Initiative, a project to develop an exascale-capable CPU using European companies and IP, means there’s more funding available than might otherwise be the case. SiPearl was awarded €6.2 million in European Union subsidies to launch itself and is preparing to raise funds to finance its processor’s development up to its expected launch in 2022.

It looks as though x86 and ARM are going to face off across the entire CPU market after all, at least in certain business segments. It may have taken a decade longer than some anticipated, but ARM is moving into every segment. Intel and AMD may have to stop slugging each other long enough to concentrate on a common rival if they want to maintain dominance in the market — and no, I’m not suggesting the two companies illegally collude to keep ARM-based hardware out of the server space.

Now Read:



RetroPie 4.6 Launches With Raspberry Pi 4 Support

For fans of retro game consoles and home computer systems, it’s been a long time coming: RetroPie 4.6 has launched, with the star feature being official support for the Raspberry Pi 4. The RetroPie team took its time on purpose with this one and is still calling the support for the Pi 4 “beta,” although it’s now available for everyone and included within the 4.6 install. RetroPie says that “there are still things to improve, but most emulators now run well.”

RetroPie 4.6 also includes a move to Raspbian Buster as a base for the images, now that Raspberry Pi Trading Ltd. no longer supports Raspbian Stretch. RetroPie said it would continue to support Stretch for “a while longer,” but it will likely stop updating binaries for Stretch before the year is out.

Credit: Michael Henzler/CC BY-SA 4.0

Other changes include improvements to the RetroPie packaging system and core RetroPie-Setup code so that it remembers the package stage. RetroPie 4.6 will also only update binaries when a new one is actually available, and it will no longer overwrite source installs during updates. RetroArch gets an update to 1.8.5 with a new notification system, support for “real CD-ROM” games with the ability to dump a disc image, an improved disk control system with the ability to label disks in .m3u files, and RetroAchievements support for the original PlayStation, Sega CD, and PC Engine CD.
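Based on the disk-labeling feature mentioned above, a multi-disc .m3u playlist can carry a friendly label for each disc. The pipe-delimited label syntax shown below is an assumption drawn from RetroArch’s documentation, so check the docs for your version; the game and file names are purely illustrative:

```
Final Fantasy VII (Disc 1).chd|Disc 1
Final Fantasy VII (Disc 2).chd|Disc 2
Final Fantasy VII (Disc 3).chd|Disc 3
```

With labels in place, the disk control menu can show “Disc 2” instead of a raw filename when you swap discs mid-game.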

Next up are changes for EmulationStation, which gets a bump here to version 2.9.1. It includes always-welcome scraper fixes for TheGamesDB.net, grid view and theme improvements, and new options to disable the system name on custom collections and to save gamelist metadata after each modification. RetroPie 4.6 also updates a slew of emulators to the latest versions, including those for the Commodore Amiga, Atari 2600, Atari 800 and 5200, and ScummVM, the awesome engine emulator for running old-school graphic adventure games from LucasArts and some other 1980s and 1990s studios.

The Raspberry Pi 4 promised to bring plenty of additional firepower to RetroPie. The popular $35 computer is capable of running not just the usual classic consoles and game systems, but even late 1990s and early 2000s powerhouses like the Sega Dreamcast (and redream is now bundled with RetroPie 4.6), as well as the PSP, Saturn, and to some extent even the PlayStation 2. The last few aren’t by any means perfect yet, but Dreamcast games have been running at around 60fps at 720p resolution for several months now, and enthusiasts are working on getting Dolphin up and running for GameCube and Wii titles.

Head over to RetroPie to download the latest 4.6 build.

Now read:



TSMC Starts Development on 2nm Process Node, but What Technologies Will It Use?

TSMC has been firing on all thrusters for the past few years, and the firm seems confident that’s going to continue. With 7nm in wide production and 5nm high-volume manufacturing on track, TSMC is looking even beyond the 3nm node and declaring that early 2nm research has now begun.

We don’t know what specific technologies TSMC will deploy at 2nm, and the company has barely acknowledged the beginning of its research, so it’s safe to say even it isn’t sure yet. But we can look at some of the broad expectations. The International Roadmap for Devices and Systems (IRDS) publishes periodic updates on the future of silicon technology, including a 2018 chapter called “More Moore” (a reference to the ongoing scaling of Moore’s Law). In it, the group mapped out the expected technological developments for future nodes in broad strokes:


Chart by the International Roadmap for Devices and Systems, “More Moore”

The IRDS expects GAA (gate-all-around) FETs and FinFETs to share the market at 3nm, with GAAFETs replacing FinFETs at 2nm. The acronym “LGAAFET” refers to a lateral gate-all-around FET, or a GAAFET in a traditional 2D processor. Vertical gate-all-around FETs would be used in yet-to-be-developed 3D transistor structures.

Surprisingly, the IRDS projects we’ll still see 193nm lithography deployed as far out as 2034. I would have expected EUV to have conquered the market by this point for all leading-edge nodes, but I haven’t found an explanation on this point in the report yet.

The IRDS is predicting the deployment of so-called “high-NA” EUV. NA, or numerical aperture, is a dimensionless number that characterizes the range of angles over which a system can accept or emit light. EUV, by its very nature, pretty much loves to do anything except be emitted, so developing optical systems that support effective EUV dosing over a larger range of angles has been a high priority. The alternative to high-NA EUV is to move immediately to multi-patterning EUV.

*collective groan from audience*

Everything people don’t like about multi-patterning at 193nm, they really don’t like about multi-patterning with EUV. The IRDS is forecasting that we’ll see high-NA systems first deployed at 2nm.
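The reason NA matters for feature size follows from the Rayleigh criterion, which relates the minimum printable feature (the critical dimension, CD) to the light source and the optics:

CD = k1 × λ / NA

where λ is the wavelength (13.5nm for EUV), k1 is a process-dependent factor, and NA = n·sin(θ) describes the optics. Going from today’s 0.33 NA EUV tools to the 0.55 NA designs ASML has discussed would shrink the minimum printable feature by a factor of roughly 0.33/0.55, or about 40 percent, without resorting to multi-patterning.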

3D stacking technology isn’t projected to change much — die-to-wafer and wafer-to-wafer will be deployed on this node as well as 3nm. The next major node shift, in 2028, will introduce a suite of new technologies.

It isn’t clear what kind of performance scaling enthusiasts should expect. According to TSMC, the 5nm node is a huge leap for density (an 80 percent improvement) but brings only small gains in power (1.2x at iso-performance) and performance (1.15x at iso-power). Those are very small gains for a major node shift, and they imply we shouldn’t expect much additional performance strictly from the node. Whether this will be the new norm or a temporary pause is still unclear.
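To put those per-node factors in perspective, here’s a quick compounding exercise using only the numbers TSMC quotes; the assumption that future nodes would deliver similar factors is purely hypothetical:

```python
# Compound the per-node gains TSMC quotes for 5nm, hypothetically
# assuming the next node delivered the same factors again.
density_gain = 1.80   # 80 percent density improvement per node
perf_gain = 1.15      # performance gain at iso-power per node

two_nodes_perf = perf_gain ** 2
two_nodes_density = density_gain ** 2

print(f"{two_nodes_perf:.3f}")     # ~1.32x performance after two such nodes
print(f"{two_nodes_density:.2f}")  # ~3.24x density after two such nodes
```

In other words, even two consecutive node shifts at this rate would deliver only about a third more performance, while density, the one metric still scaling well, would more than triple.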

Note that the IRDS estimate of 2025 for 2.1nm is based on forecasting it did in 2018. The IRDS does not claim to know the exact dates when Intel, TSMC, or Samsung will introduce a node. With 5nm launching in 2020, we might expect 3nm by 2022 and 2nm by 2024-2025, so the estimate looks reasonable.

One trend we expect to continue into the future is the way Intel and AMD are designing new capabilities to continue to improve performance now that clock speed isn’t on the table the way it used to be. Chiplets, HBM, EMIB, Foveros, and similar technologies all drive higher performance without relying on historic drivers like smaller transistors, lower supply voltage, and higher clocks. A great deal of effort is being spent to optimize material engineering and circuit placement as a means of improving performance or lowering power consumption, precisely because new nodes don’t deliver these improvements any longer without a great deal of additional work.

Now Read:



AMD Posts Substantial Q1 2020 Gains Based on Ryzen, Epyc CPU Sales

AMD announced its quarterly results yesterday, with significant improvements in year-on-year sales. Total revenue grew by 1.4x year-on-year, driven by growth in Ryzen and Epyc demand, while Compute and Graphics segment revenue grew by 1.73x year-on-year.

AMD claims to have held more than 50 percent of the premium CPU market at various e-tailer firms this year, which is at least somewhat verified by the regular reports we’ve seen coming out of companies like Mindfactory.de. Bestselling lists at Amazon and Newegg have also regularly listed AMD holding 50 percent or more of the top-selling processors.

According to CEO Lisa Su, revenue from the console side of the business was quite low, as Microsoft and Sony are now drawing down the inventory of PS4 and Xbox One hardware that they’d previously produced, rather than focusing on ramping production in Q3 for a new sales cycle.

AMD expects to start ramping production of the new console components next quarter, which means this is the closest look we’ve ever gotten at AMD’s total server revenue without console sales hitting the segment. Note, however, that this business still contains an unknown number of GPU sales to support Google Stadia and other cloud gaming/datacenter GPU use-cases. Comments made by AMD suggest that Epyc is a larger percentage of total data center revenue than Instinct is, but exact figures have not been disclosed.

According to AMD, revenue for the quarter was $348M in this segment, down 21 percent year-on-year due to the decrease in console sales (partially compensated for by increased Epyc sales). This segment reported an operating loss of $26M in Q1 2020, though AMD notes that Q1 2019 included a $60M licensing gain and that Q4 2019 had “higher revenue and lower operating expenses.” Despite the revenue hit, AMD grew its server business by double-digit percentages from Q4 2019 to Q1 2020.
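As a quick back-of-the-envelope check, the figures above also pin down the implied year-ago baseline. This sketch uses only the numbers quoted in this article:

```python
# Back out the implied Q1 2019 segment revenue from the figures above:
# $348M in Q1 2020, down 21 percent year-on-year.
q1_2020 = 348.0     # $M, as reported
yoy_decline = 0.21

implied_q1_2019 = q1_2020 / (1 - yoy_decline)
print(f"${implied_q1_2019:.0f}M")  # roughly $441M
```

That puts the console drawdown in context: the segment shed on the order of $90M in year-on-year revenue even with Epyc growing.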

A year ago about this time, I argued that AMD would improve its gross margins after the launch of 7nm rather than gutting its prices and devaluing its brand-new 7nm products. This is exactly what happened.

Overall, AMD has continued to improve its balance sheet and overall position in the market, quarter after quarter. Three years ago, we’d only recently seen evidence that the original Zen architecture would have some legs, and enthusiasts were debating whether Intel’s lead at 1080p was evidence of some critical gaming flaw yet to be discovered. Today, you can buy a $99 AMD CPU that delivers equivalent or better performance than 2017’s $350 Core i7-7700K.

If you bought a high-end X370 motherboard in 2017, you can step from a maximum of eight to 16 cores without upgrading your platform. It’s not unheard of for an older board to support higher-core-count CPUs, but core count doubling with frequency and IPC improvements is a rare upgrade path. Intel still has noted strengths in the server market and maintains a smaller leadership position in gaming than it did in 2017, but AMD has done an excellent job of executing these past three years. Ryzen’s continued success and AMD’s various financial improvements are a testament to the excellent execution of its engineering team. We don’t know how AMD has structured its royalty license with Sony and Microsoft this time out, but the launch of the Xbox Series X and PS5 will only be beneficial to AMD’s bottom line.

For full-year 2020, AMD projects revenue growth of 1.2x – 1.3x, with a gross margin of 45 percent and operating expenses at 29 percent of revenue. Relatively few companies are giving full-year projections given coronavirus uncertainties, so AMD is clearly feeling confident on that basis alone.

Now Read: