EDA in the Cloud: Stormy Weather

SoC design groups don’t do clouds. True, they take advantage of some of the underlying technology by running their own server farms, sometimes called internal clouds. But they don’t take advantage of the economies of scale, and the accompanying low prices, that are available by piggybacking on something like Amazon Web Services (AWS). Nor do they take advantage of the instant scalability that such a solution offers. Companies designing large SoCs can have server farms with perhaps 100,000 cores or more, but if they need half a million cores for a couple of weeks around tapeout, they simply go without, because they cannot scale out onto public clouds.
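
To make the elasticity argument concrete, here is a back-of-the-envelope sketch in Python. The cost figures are entirely hypothetical placeholders, not real AWS or datacenter prices; the point is only the shape of the comparison between owning peak capacity year-round and renting the burst for the tapeout crunch.

```python
# Back-of-the-envelope comparison of owning peak capacity vs. renting the burst.
# All cost figures are hypothetical placeholders, not real AWS or datacenter prices.

BASELINE_CORES = 100_000      # cores the design group keeps busy year-round
PEAK_CORES = 500_000          # cores needed during the tapeout crunch
BURST_WEEKS = 2               # duration of the crunch

OWN_COST_PER_CORE_YEAR = 300.0   # hypothetical: hardware, power, space, amortized
RENT_COST_PER_CORE_HOUR = 0.05   # hypothetical: public-cloud on-demand rate

extra_cores = PEAK_CORES - BASELINE_CORES
burst_hours = BURST_WEEKS * 7 * 24

# Option A: size the internal farm for the peak and carry it all year.
own_peak = PEAK_CORES * OWN_COST_PER_CORE_YEAR

# Option B: own the baseline, rent the extra cores only for the burst.
own_baseline = BASELINE_CORES * OWN_COST_PER_CORE_YEAR
rent_burst = extra_cores * burst_hours * RENT_COST_PER_CORE_HOUR

print(f"Own peak capacity all year:    ${own_peak:,.0f}")
print(f"Own baseline + rent the burst: ${own_baseline + rent_burst:,.0f}")
```

With these made-up numbers, owning the peak costs several times more than owning the baseline and renting the burst, which is exactly the economic pull toward the public cloud that design groups currently forgo.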

I think that there are two primary problems: security and data size.

System and semiconductor companies designing SoCs typically have very strict rules that critical IP does not leave the premises. Security solutions get better all the time, but nobody is quite sure how good they are. Can the NSA read the design? Alter it to plant Trojans? Can the Chinese? When even RSA, perhaps the leading security company of the last couple of decades, can get hacked, is anyone safe? When the NSA steals the keys for two billion SIM cards from Gemalto, the company that manufactures them, what are the limits?

I remember seeing a keynote by Wally Rhines of Mentor a year or so ago in which he talked about inserting Trojans into chips or IP. Perhaps the most worrying thing Wally said is that although you don’t read on the internet about Trojans being inserted into hardware, when he meets people in the right US government departments, they say it happens all the time. Wally’s assumption is that they are already doing it themselves, and that they assume the other guys are doing it too. Given what we have learned about the NSA in the last year or two, it would be more surprising if they were not. So it is not just a theoretical problem to worry about years in the future; it is already happening. As the old joke goes, you are not paranoid if they really are out to get you. In the same way, SoC design groups are not necessarily being paranoid when they are suspicious of the public cloud.

But a bigger problem is probably the data volumes. The cloud can handle large amounts of data. Netflix has all its video-streaming services implemented on AWS, but while videos may be large, they basically don’t change once they have been uploaded. If necessary, they are easy to replicate, since all the copies are identical and never change. Even so, to upload the movies in the first place, Netflix apparently FedExes disk drives to Amazon. The amounts of data involved in an advanced-node SoC design are truly staggering, and unlike a movie library, the data changes all the time.
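
A small sketch shows why moving the data is the bottleneck. The numbers here are hypothetical examples (a multi-terabyte design database and a 1 Gb/s uplink), not measurements of any real design, but the arithmetic is why shipping drives can beat the network, and why a database that changes daily is so much worse than a static video library.

```python
# How long does it take just to upload a large design database to a cloud?
# Figures are hypothetical examples, not measurements of any real design.

DESIGN_DB_TB = 20          # hypothetical size of an advanced-node design database
UPLINK_GBPS = 1            # hypothetical dedicated uplink to the cloud provider

bits_to_move = DESIGN_DB_TB * 1e12 * 8        # terabytes -> bits
seconds = bits_to_move / (UPLINK_GBPS * 1e9)  # at full, uncontended line rate

print(f"{DESIGN_DB_TB} TB over a {UPLINK_GBPS} Gb/s link: "
      f"{seconds / 3600:.1f} hours ({seconds / 86400:.1f} days)")
# And unlike a movie, the database changes daily, so the transfer never really ends.
```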

There is also a major reason that a design cannot just be moved up to the cloud and left there: nobody has a single-vendor flow. And the data volumes make it impractical to keep moving an entire design out of one vendor’s cloud and into another’s. Conceivably EDAC or someone could standardize cloud licensing in such a way that a multi-vendor cloud solution would be workable, but it hasn’t happened yet. It would also need to encompass some of the trickier issues, such as emulation. Who would own the emulators? Where would they be installed? The cloud solutions that do exist tend to be for FPGA flows, where the data involved is a lot smaller: even a large FPGA programming bitstream is tiny compared with a polygon-level layout of an SoC, and emulation is not really used in FPGA design since it is easier to just program up an array and try it.

Even if EDA is not enthusiastic about using the cloud as a computing fabric, it is nevertheless a huge opportunity for IP and verification IP (VIP), since building cloud datacenters involves a large number of standard interfaces running at very high speeds. SoCs for the cloud market will need to add their own differentiation, but the interfaces are not the place to do it. The protocols and, typically, the data rates are all fixed by the standard, so there is very little incentive for SoC design groups to invest time and money in designing their own interfaces that, by definition, will be undifferentiated me-too implementations.

I expect that eventually EDA will move into the cloud. Amazon and others are going to drive economies of scale to such high levels that it will probably not make sense for anyone to build their own computing infrastructure unless they are at Google/Facebook/Microsoft-type scale, way beyond the scale of a single semiconductor company.

Liat
