
IEEE Interview: From the Meta AR Prototype Chip to a Revolution in Analog Circuits


(XR Navigation Network, April 1, 2024) IEEE Spectrum, the flagship publication of the Institute of Electrical and Electronics Engineers (IEEE), is written for the pioneers driving change and the problem-solvers actively looking for answers, and it explores future technology trends and their impact on society and business. On Fixing the Future, IEEE Spectrum's podcast, the team talks with some of the brightest minds in tech about concrete solutions to major challenges.

If you recall, IEEE Spectrum earlier reported on how Meta's AR prototype chip achieved a big performance boost through 3D chip technology. In the latest episode of Fixing the Future, host Stephen Cass and Samuel K. Moore, IEEE Spectrum's senior editor covering semiconductors, discuss developments in the chip field, including Meta's AR chip.

A transcript of the relevant conversation follows:


Stephen Cass: Hello and welcome to Fixing the Future, IEEE Spectrum's podcast, where we look at concrete solutions to major problems. I'm your host, Stephen Cass, a senior editor at IEEE Spectrum. Today we're talking with Samuel K. Moore. Samuel, welcome to the program.

Samuel Moore: Thank you, Stephen. Very happy to be here.

Cass: You recently attended the ISSCC conference for the semiconductor research community. What exactly is that and why is it so important?

Moore: Well, aside from being a hard-to-pronounce acronym, it stands for the IEEE International Solid-State Circuits Conference. It's one of the big three semiconductor research conferences, and it's been around for over 70 years. It's really the elite conference if you're doing circuit research. The program covered roughly 200 presentations on processors, memories, radio circuits, power circuits, and brain-computer interfaces.

Cass: What are the topics that really catch your eye?

Moore: All right, I'll tell you a couple of things. First, there's a potential revolution in analog circuits in the making. There's a cool chip coming that can run AI super efficiently by mixing memory and compute resources. We also got a look at a chip for Meta's future AR glasses. And finally, there were a bunch of really cool security devices, including a self-destruct circuit.

Cass: Oh, that sounds very cool. Well, let's start with analog circuits.

Moore: "We're doing it all wrong," according to Bram Nauta, an expert from the Netherlands. Basically, Moore's Law is very useful for digital circuits, but not very useful for analog circuits. They just haven't moved forward. They've been using super-advanced processors in the computing part for four to five generations now still using the same I/O chips.

Cass: Like your smartphone: it needs converters to digitize your voice, as well as to handle radio signals and so on.

Moore: That's right. Totally. As they say, the world is analog; you have to make it digital in order to do calculations on it. Radio circuits are actually a very good example: you have an antenna, and then you have to mix in carrier signals and so on, but above all you have to amplify, and you have to amplify very linearly. Then you feed the result into your analog-to-digital converter. As Nauta pointed out, the amplifier isn't going to get any better, and it will keep consuming tens or hundreds of times more power than any digital circuit would. So his idea is to remove it. No more linear amplifiers. Instead, he suggests we invent an analog-to-digital converter that doesn't need an amplifier.
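A quick illustration of why that linear amplifier normally sits in front of the converter (this sketch is ours, not from the interview, and the numbers are invented): a microvolt-scale antenna signal is smaller than one step of a typical converter, so you either amplify it first or you build a far finer converter, which is the bet Nauta is making on Moore's Law.

    # Illustrative only: a weak antenna signal versus the step size of a 12-bit ADC.
    import numpy as np

    def quantize(signal, full_scale, bits):
        # Ideal quantizer: clip to the converter's range, round to the nearest step.
        lsb = 2 * full_scale / (2 ** bits)
        clipped = np.clip(signal, -full_scale, full_scale - lsb)
        return np.round(clipped / lsb) * lsb

    t = np.linspace(0, 1e-3, 10_000)
    antenna = 50e-6 * np.sin(2 * np.pi * 10e3 * t)   # ~50 microvolts off the antenna

    direct    = quantize(antenna, full_scale=1.0, bits=12)           # no amplifier
    amplified = quantize(antenna * 10_000, full_scale=1.0, bits=12)  # 80 dB of linear gain first

    print("ADC step size:", 2 * 1.0 / 2**12, "V")     # ~488 uV, bigger than the signal itself
    print("without the amplifier, the ADC sees nothing:", np.allclose(direct, 0))
    print("with the amplifier, the signal survives:", not np.allclose(amplified, 0))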

Cass: Well, why haven't we done this before? It sounds obvious. You don't like a component. You throw it away. So how do you make up the difference with a pure analog-to-digital converter?

Moore: Well, I can't exactly tell you how that will be accomplished, especially because he's still in the middle of working on it. But his calculations are basically correct. It's really a question of Moore's Law. It's not "What can we do now?" but "What will we be able to do in the future?"

Cass: But is there some kind of tradeoff here?

Moore: There is. Right now you have a linear amplifier that consumes milliwatts and an analog-to-digital converter that can take advantage of Moore's Law. So what he's saying is, "We're going to make the analog-to-digital converter worse. It's going to consume more energy. But if you look at the system as a whole, the whole system will consume less." Part of the problem is that the metrics for how good a linear amplifier is judge the amplifier in isolation, rather than asking, "What does the whole system consume?" If you care about the whole system, those metrics stop making sense.
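A back-of-the-envelope version of that system-level argument (the figures below are invented purely for illustration; the talk did not give numbers):

    # Hypothetical numbers, purely to show the shape of the argument.
    amp_mw       = 5.0   # conventional linear amplifier
    adc_small_mw = 1.0   # modest ADC that needs the amplifier in front of it
    adc_beefy_mw = 4.0   # "worse", hungrier ADC that digitizes the antenna signal directly

    conventional_chain  = amp_mw + adc_small_mw   # 6.0 mW
    amplifierless_chain = adc_beefy_mw            # 4.0 mW

    # Judged on its own, the beefy ADC looks worse (4 mW vs. 1 mW).
    # Judged as a system, the amplifier-less chain wins (4 mW vs. 6 mW),
    # and only the ADC side keeps improving with Moore's Law.
    print(conventional_chain, amplifierless_chain)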

Cass: It also sounds closer to the dream of pure software-defined radios.

Moore: Exactly right. Digitization can take advantage of Moore's Law, and Moore's Law is still going. It's slowing down, but it's still going. That's how things creep forward, and now it has finally reached the edge, all the way to that first amplifier. Nauta was a bit worried about this talk, because, frankly, things can get messy at a conference like this. He told me he was actually very nervous. But it attracted real interest. I mean, engineers from Apple and people from other companies came up to him and said, "Yes, this makes sense. Maybe we'll look into it."

Cass: So that addresses the bottleneck of linear-amplifier efficiency. But you mentioned another bottleneck, namely the memory wall.

Moore: Yes. The memory wall is a long-standing problem in computing. It started in high-performance computing in particular, but it now applies to almost all computing: the time and energy required to move a bit from memory to the CPU or GPU is far greater than the time and energy required to move that bit around within the CPU or GPU itself.
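The rough orders of magnitude usually quoted for this look something like the following (these are commonly cited ballpark figures from the circuits literature, not numbers from the episode):

    # Typical energy ballpark figures often cited for 45-nm-class silicon (illustrative).
    energy_pj = {
        "32-bit add, on chip":            0.1,
        "32-bit read from on-chip SRAM":  5.0,
        "32-bit read from off-chip DRAM": 640.0,
    }
    add = energy_pj["32-bit add, on chip"]
    for op, pj in energy_pj.items():
        print(f"{op:32s} ~{pj:6.1f} pJ  ({pj / add:,.0f}x the add)")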

Cass: That's why in traditional CPUs you have things like the L1 cache. You'll hear about the first-level cache, the second-level cache, the third-level cache. But you're talking about much more than just having a little piece of memory near the CPU.

Moore: Yes. That's the memory wall in general. People have been trying to solve this problem in various ways, and one of the best is to put the computation into the memory so that your bits don't have to move as far. There are a whole bunch of different ways to do it. At the conference there were probably nine talks on this topic, and we've even covered in Spectrum some really cool ways to do AI computations in memory using analog circuits.

Cass: Oh, so now we're back to analog.

Moore: I mean, coincidentally, the multiply-and-accumulate operation, which is the key to running all the matrix math of artificial intelligence, is basically just Ohm's law and Kirchhoff's law. But it's a pain in the ass. Trying to do anything in analog is.
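To unpack that a little: in a resistive crossbar, each weight is stored as a conductance, Ohm's law (I = G x V) performs the multiplications, and Kirchhoff's current law sums the currents on each column wire, which is exactly a multiply-accumulate. A toy version of that arithmetic (our illustration, not Moore's):

    # Toy crossbar multiply-accumulate: conductances as weights, voltages as inputs.
    import numpy as np

    G = np.array([[0.2, 0.5],        # conductances (weights), one column per output
                  [0.7, 0.1],
                  [0.4, 0.9]])
    V = np.array([1.0, 0.5, 0.25])   # input voltages (activations)

    # Ohm's law gives each cell's current G[i, j] * V[i]; Kirchhoff's current law
    # sums them down each column wire. That sum is the multiply-accumulate result.
    I = V @ G
    print(I)                         # [0.65, 0.775]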

Cass: Before digital computers took over, all the way back into the '70s, analog computers were actually very competitive. You could solve problems with operational amplifiers; that's why they're called operational amplifiers. You'd set up the equations and read out the results. It's basically analog computation, where the behavior of the components mimics a particular mathematical equation. And here you'd choose a particular analog computation because it matches a specific calculation used in AI.

Moore: That's right. So it's a very productive area, and people are still working hard at it. I met someone at ISSCC named Evangelos Eleftheriou. He's the chief technology officer of Axelera. The conclusion he has come to is that the analog approach isn't mature yet. So Evangelos found a digital way to do AI computation in memory. It basically relies on interleaving the computation with the cache so tightly that they become part of each other. Of course, that requires a new kind of SRAM, but he was tight-lipped about it. It also requires replacing floating-point math with integer math. Most of the companies you see in the AI world, like Nvidia, do their main computation in floating point. Now, the floating-point formats are getting smaller; they can do more and more in 8-bit floating point, but it's still floating point.

Cass: Yeah, I actually like integer math, because I've done a lot of this kind of computing. The truth is, you realize that, oh, the Forth programming language, for example, is notoriously integer-based. For many real-world problems you can find a perfectly acceptable scale factor that lets you use integers with no significant difference in precision.
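As a minimal sketch of that scale-factor idea (our example, not from the conversation): quantize both operands to 8-bit integers with one scale factor each, do all the arithmetic in integers, and rescale once at the end.

    import numpy as np

    def to_int8(x):
        scale = np.max(np.abs(x)) / 127.0              # one scale factor per tensor
        return np.round(x / scale).astype(np.int8), scale

    weights = np.array([0.12, -0.53, 0.98, -0.07])
    acts    = np.array([0.40,  0.25, -0.60, 0.10])

    qw, sw = to_int8(weights)
    qa, sa = to_int8(acts)

    int_dot = int(np.dot(qw.astype(np.int32), qa.astype(np.int32)))  # pure integer math
    print(int_dot * sw * sa)             # ~-0.679, rescaled once at the end
    print(float(np.dot(weights, acts)))  # -0.6795 in floating point, for comparison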

Moore: Right now they're targeting what's called the edge, primarily machine vision. Running a typical machine-vision benchmark, the chip is capable of doing 2,500 frames per second. So it could serve a very large number of cameras.

Cass: Even at a standard frame rate, say 20 frames per second, you're dealing with over 100 cameras at once.

Moore: Yes. And they're actually getting 353 frames per watt, which is a very good number. Performance per watt is what really drives everything here. If you're trying to put this in a car or any kind of moving vehicle, everybody's counting watts. That's the constraint. Either way, I'll be watching and keeping an eye on it. They're coming out with chips later on. Could be really cool.
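Taking the two numbers quoted here at face value, and reading the efficiency figure the usual way, as frames per second per watt, the power budget works out like this (our calculation, not Moore's):

    chip_fps     = 2500   # machine-vision frames processed per second (quoted above)
    fps_per_watt = 353    # efficiency figure quoted above, read as frames/s per watt

    power_watts = chip_fps / fps_per_watt
    print(f"~{power_watts:.1f} W to sustain {chip_fps} frames per second")   # ~7.1 W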

Cass: Speaking of which, you and I have found something interesting in 3D chip technology.

Moore: Yes, I'm a big fan of 3D chip technology. It's showing up everywhere in advanced processors. If you look at Intel's AI accelerators for supercomputers, or if you look at AMD, they're really taking advantage of being able to stack one chip on top of another. Again, Moore's Law is slowing down; we really can't count on getting as much out of it. So if you want more transistors per square millimeter, which is how you get more compute, you have to start putting one chip on top of another.

Cass: So going forward, we won't be counting transistors per square millimeter but per cubic millimeter.

Moore: You could measure it that way. Thank goodness they still look basically the same shape as a regular chip. This 3D technology is enabled by something called hybrid bonding. I'm afraid I don't even fully understand where the word "hybrid" comes from. But really, it's cold-welding a copper pad on top of one chip to a copper pad on the other. Here's how it works: imagine you're building transistors in a plane of silicon, and then you have layer after layer of interconnects on top. You have the same thing on another chip. You put them face to face, and there's a tiny gap between one copper pad and the other, but the insulation around them sticks together. Then you heat them up a little, and the copper expands, squeezes together, and bonds.

Cass: Oh, it's actually like brazing.

Moore: I'll take your word for it. But I really don't know what that is.

Cass: I could be wrong. But it's where things just flow and stick together on their own. You don't have to use a soldering iron to do the heavy lifting.

Moore: There's no solder. That's very, very critical, because it means the density of connections increases by almost an order of magnitude. We're talking about one connection every few micrometers. If I'm counting correctly, that's about 200,000 connections per square millimeter. Quite a lot, actually. It's as if it were all built on one piece of silicon that's simply been folded up: the same low energy per bit, the same low latency.
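That density figure is easy to check: with one bond pad every couple of micrometers, the count per square millimeter lands right around Moore's number (the exact pitch below is our assumption):

    pitch_um = 2.2                      # assumed pad-to-pad pitch, "every few micrometers"
    pads_per_mm = 1000 / pitch_um       # pads along one millimeter
    print(round(pads_per_mm ** 2))      # ~207,000 hybrid-bond connections per mm^2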

Cass: That's where Meta steps in.

Moore: Yes. Meta shows up regularly at this and other conferences. I noticed they spoke on a panel about what they expect from chip technology for ideal augmented-reality glasses. Anyway, what they want is 3D technology, because it lets them fit more silicon performance into a small area, small enough that it might actually fit in a device like a pair of glasses. It can also reduce the chip's power consumption, and that's important because you don't want the chip getting too hot, and you want the battery to last so you don't have to charge it all the time.

Moore: As far as I know, this is the first time Meta has shown this chip. It's a custom machine-learning chip used to run the neural networks that are absolutely necessary for augmented reality. It's a 4-millimeter-by-4-millimeter chip, and it's actually two chips bonded together.

Cass: You need these things because you need chips that can do all the computer-vision processing, making sense of what's going on in the environment. That's why machine learning is so important.

Moore: That's right. You need the AI right inside your glasses, not in the cloud or on, say, a nearby server, so that it doesn't add too much latency. Anyway, this chip is actually two chips stacked in 3D. The cool thing is that they could show the 3D stacking was worth it, because they also had a 2D version: they tested the bonded pair and also just one half on its own. In the end, the 3D stack was much better, not just twice as good.

Moore: Basically, in their testing they tracked two hands, which is obviously very important for augmented reality; the system has to know where your hands are. That was the task they tested. The 3D chip was able to track both hands while using less energy than a regular 2D chip used to track just one hand. So the 3D chip is clearly a win for Meta. We'll see what the final product looks like and whether anyone actually wants to use it, but it's clearly a technology that will help them get where they want to go.
