Cores and Threads: Hybrid Processors for Today’s Multitasking World

Published in Military Embedded Systems
Written by Aaron Frank

The explosive growth of processing parallelism has delivered a corresponding leap in performance and capability, but not all cores and threads are created equal. Mainstream computer users, such as the vast majority of Windows users, have no need to understand how cores and threads are used in detail: after editing a document, we hit the <SAVE> icon and all the magic happens under the hood. For designers of critical real-time processing systems, however, what happens under the hood matters. With a more detailed understanding of the latest hybrid-core processor enhancements, military embedded systems designers – whether designing for land, sea, or air use – can build more deterministic and responsive processing systems while maintaining tighter control over power consumption, resulting in SWaP [size, weight, and power] savings and longer-duration missions.

Today, it is difficult to find a processor with just a single processing core. In 2000, IBM introduced the concept of a dual-core processor with its POWER4. AMD followed in 2005 with the Opteron 800 series and Athlon 64 X2 processors, each with two processing cores. Intel achieved commercial success with dual-core processing in 2006 with its Core 2 Duo processor.

Today, almost two decades later, it is not uncommon to see data centers running tens of thousands of processors, each with 64 or more cores. In addition to multiple processing cores, many architectures also support simultaneous multithreading (Intel’s Hyper-Threading), which enables a single processing core to execute two independent instruction threads at once, mimicking a dual-core processor. Thus, a 64-core processor with two threads per core can execute 128 independent threads simultaneously. Taken to the extreme, today’s high-end graphics processing units (GPUs) can execute thousands of simultaneous operations, a capability that is fundamental to highly parallel 3D visualization and complex AI [artificial intelligence] processing tasks.
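
As a brief illustrative sketch (not from the article), the following C++ snippet queries how many hardware threads the operating system exposes to software; on a 64-core processor with two threads per core it would typically report 128. The variable name and output wording are assumptions for illustration only.

    // Minimal sketch: report the logical hardware-thread count exposed by the OS.
    // On a 64-core processor with two-way multithreading this is typically 128.
    #include <iostream>
    #include <thread>

    int main() {
        // hardware_concurrency() returns the number of hardware threads the OS
        // reports (physical cores times threads per core); it may return 0 if
        // the count cannot be determined.
        unsigned int logical_threads = std::thread::hardware_concurrency();
        std::cout << "Logical hardware threads available: "
                  << logical_threads << '\n';
        return 0;
    }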
