
Cores and threads

Tactical

Where is the beer!
I have a dumb question. I did some reading: when a provider advertises that they have 8 available CPUs, they really just list the CPU they use in their server, like a Xeon E3-1240 for example. That CPU really has 4 cores and 8 threads. Can you count threads as CPUs? I always thought the threads just gave a boost of horsepower, like a turbocharger on an engine. I was just wondering, that is all. Lol, I read a little on it and it just confuses me more and more! Hahaha, I have been without my Twinkies.
 

perennate

New Member
Verified Provider
Either way it's misleading, since they are CPU cores and not individual physical CPUs (although they nearly function as such).

Two threads per core do improve the performance of the core, since CPU operations don't have to waste as much time waiting for previous operations to complete (when a later instruction uses the previous result). And of course there's less context switching.
 

Mun

Never Forget
Could be a dual e3-1240?

The cores-versus-threads mantra is debatable. Heavily threaded applications are possibly faster, but most modern servers have more than enough threads lying around to make HT useless.

Core > Thread

Mun
 

Zach

New Member
Verified Provider
Mun said:

Could be a dual e3-1240?

The cores-versus-threads mantra is debatable. Heavily threaded applications are possibly faster, but most modern servers have more than enough threads lying around to make HT useless.

Core > Thread
There is no motherboard that supports dual LGA 1155 processors.
 

shovenose

New Member
Verified Provider
I would count an HT thread as a core. Is it a true core? No. But these CPUs are so powerful these days that the real cores are never maxed out under normal circumstances.
 

acd

New Member
HT usually adds a ~1/3 increase to overall performance. An HT pair on a core is basically two instruction pipelines with their own registers that share arithmetic units and some other resources. Those resources aren't typically all in use at the same time, so HT lets them get more utilization, at the expense of occasionally stalling a single thread that has to wait for a contended resource.

There are some security risks involved with HT, but generally the cache-line bleed exploits are REALLY HARD to pull off on a loaded system. I don't know if Linux has fixes for that problem (it's old, so it probably does).

tl;dr: they are not real cores, but they are good enough. In a virtualized environment, you can pretty much guarantee there is enough threading going on to use the HTs effectively.
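If you want a rough feel for that number on your own hardware, here's a quick sketch (my own, not a rigorous benchmark; the burn/physical_core_count/timed_run names, the workload sizes, and the /proc/cpuinfo fallback are just illustrative assumptions, and results vary a lot by CPU and load). It runs the same fixed pile of CPU-bound work once with one worker per physical core and once with one worker per logical CPU, so any HT benefit shows up as the gap between the two times.

# Rough throughput comparison: one worker per physical core vs. one per
# logical CPU. Linux-only (/proc/cpuinfo); run on an otherwise idle box.
import multiprocessing as mp
import os
import time

def burn(n):
    # A fixed chunk of CPU-bound work.
    acc = 0
    for i in range(n):
        acc += i * i
    return acc

def physical_core_count():
    # Count unique (physical id, core id) pairs from /proc/cpuinfo;
    # fall back to the logical count if those fields are missing.
    cores = set()
    phys = core = None
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("physical id"):
                phys = line.split(":", 1)[1].strip()
            elif line.startswith("core id"):
                core = line.split(":", 1)[1].strip()
            elif not line.strip():
                if phys is not None and core is not None:
                    cores.add((phys, core))
                phys = core = None
    if phys is not None and core is not None:
        cores.add((phys, core))
    return len(cores) or os.cpu_count()

def timed_run(workers, jobs=64, work=2_000_000):
    # Push the same total workload through a pool of the given size.
    start = time.time()
    with mp.Pool(workers) as pool:
        pool.map(burn, [work] * jobs)
    return time.time() - start

if __name__ == "__main__":
    physical = physical_core_count()
    logical = os.cpu_count()
    print(f"{physical} workers (physical cores): {timed_run(physical):.1f}s")
    print(f"{logical} workers (logical CPUs):   {timed_run(logical):.1f}s")

On a typical HT-enabled Xeon the second run finishes somewhat faster, but nowhere near twice as fast, which is basically the point above.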
 

earl

Active Member
Kinda makes you wonder: when you buy a VPS with one core, did you get a real core or just a thread? Not sure how that works.
 

acd

New Member
AFAIK, OpenVZ does this by limiting the number of concurrently executing processes. KVM does it by limiting the number of cores it presents to the VM (which in turn bind to a number of back-end threads). If cat /proc/cpuinfo inside the KVM guest says 2 CPUs, you have 2 concurrent threads. If the CPU supports HT, and the host has HT turned on, these are most likely threads, not cores.
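As a side note, on a bare-metal Linux box you can ask the kernel directly which logical CPUs are HT siblings of one another; inside a guest you only see whatever virtual topology the hypervisor presents, so it won't tell you what the vCPUs map to on the host. A minimal sketch (the sysfs paths are standard, the rest is just illustration):

# List which logical CPUs share a physical core, from sysfs topology.
import glob
import os

siblings = set()
for path in sorted(glob.glob(
        "/sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list")):
    with open(path) as f:
        # e.g. "0,4" means cpu0 and cpu4 are two HT threads of one core;
        # a lone "3" means cpu3 has its core to itself.
        siblings.add(f.read().strip())

print(f"{len(siblings)} physical cores behind {os.cpu_count()} logical CPUs:")
for group in sorted(siblings):
    print("  logical CPUs sharing one core:", group)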
 

earl

Active Member
@mun

Have to say BMWs age very well... Had a chance to drive a convertible 650i, just an awesome car!
 

Marc M.

Phoenix VPS
Verified Provider
HyperThreading was originally invented by Intel about ten years ago so that they could hide the high latencies and long pipeline in the Pentium 4. At over 20 stages it needed a ridiculous clock speed to perform.

Then Intel took a page from the AMD playbook and decided to shorten the pipeline, throw away the high-GHz marketing material, and start from scratch. They used the small, notebook-oriented Intel Core processor as a basis, called the desktop version Core 2 Duo, and later on, circa 2007, introduced the Core 2 Quad. Because of the short pipeline in this architecture HyperThreading would not work, so Intel decided not to use it at all.

It was reintroduced in 2008 with the release of Nehalem. Intel could use it again thanks to the slightly longer pipelines and higher IPC per core. This time around it could actually improve performance slightly in some limited scenarios: where a piece of software could not take advantage of an entire core on its own, HyperThreading replicated the core's architectural state (the registers; the ALUs stay shared) so that more small pieces of software could use the same core at the same time. This explanation is about as bare-bones as it gets.

So basically HyperThreading is a mechanism meant to keep the CPU pipeline (or the pipeline of each core) filled with instructions at all times. Sometimes there is a slight speed increase; more often than not there is a slight performance loss.

When it comes to virtualization, and servers in general, throughput is extremely important, so the pipelines are kept full anyway. A bare-metal hypervisor like Xen, for instance, will take care of scheduling its own threads and make sure that the CPU is properly partitioned between VMs and fully utilized, so HyperThreading becomes largely irrelevant.

The Bottom Line:

If after ten years we are still debating whether HyperThreading has any tangible benefits, then it clearly does not. It shows cute numbers in Windows synthetic benchmarks optimized for Intel, but that's about it. I'd say the entire HyperThreading debate is like the motor oil debate: until you've been to a motor oil bottling plant, you will always think that spending an extra $10 to $15 on a 5-quart jug of brand-name motor oil will make your engine run longer, but that's not the case. When the OEM runs out of jugs for one brand, they start bottling for another. The only difference between them is the weight (0W-20, 5W-20, 5W-30 and so on). The only time you get the real deal is when you buy it by the barrel; then you can rest assured it's coming straight out of a refinery. Sorry for the comparison, but I'm a car guy as well :)
 

H4G

New Member
Verified Provider
In a multicore processor you are essentially putting multiple microprocessors on a single die. Each of these can be considered a processor in its own right, and two or more of them are placed on the same piece of silicon to work as one unit.

Hyper-Threading is an Intel technology that is essentially a direct development of its pipelined architecture, intended to speed up data-dependent operations. If we consider a three-stage pipeline:

Cycle:    1       2        3        4        5

Instr 1:  Fetch   Decode   Execute

Instr 2:          Fetch    Decode   Execute

Instr 3:                   Fetch    Decode   Execute

Now consider the following two operations happening back to back:

A=5; A=A+6

&

A=A+1 (where the value of A is taken from the previous operation)

So, let's execute this:

Fetch(A=5; A=A+6) - Decode - Execute(A=5+6=11)

                    Fetch(A is still 5, see the problem here?) - Decode - Execute(A=5+1=6)  <- the correct answer was supposed to be 12

**this can be considered a very primitive processor :p

This is also called a data hazard or data dependency. One way of solving it was inserting NOPs between instructions that depend on data from another instruction, but this wasted processing time: the pipeline simply had no operations to do and had to wait for the data until the first operation completed.

Hyper-Threading allows one single core to appear to the program (and the OS) as two logical processors, so that instructions from two (or more) threads can be interleaved and executed concurrently on the same core. So, if we consider the same example in an HT environment:

Thread 1          Thread 2

Fetch OP1         Decode OP1 (simultaneously)

Execute OP1       Fetch OP2

Decode OP2        Execute OP2

OP1 = Operation 1 & OP2 = Operation 2

The threads are able to communicate with each other and can execute different parts of the same program simultaneously. To leverage HT (or multithreading in general) the program or application must support it.

Now imagine each core on a processor having 2 such threads, so a quad-core processor would have 8 threads. Statistically, a performance improvement of about 20% is said to be obtained from this in practice.
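To make the hazard above concrete, here is a toy simulation of that 3-stage pipeline (entirely my own sketch of the idea described in this post, not a model of real hardware; the instruction encoding and the PROGRAM/run names are made up). Operands are read in the Decode stage and results are written back in Execute, so a dependent instruction decoded right behind its producer reads a stale value unless the pipeline stalls for a cycle.

# Toy 3-stage (Fetch -> Decode -> Execute) in-order pipeline illustrating
# the read-after-write hazard from the example above. Each instruction is
# (dest, src, imm) meaning dest = src + imm, or dest = imm when src is None.
PROGRAM = [
    ("A", None, 5),   # A = 5
    ("A", "A", 6),    # A = A + 6   -> 11
    ("A", "A", 1),    # A = A + 1   -> 12 if the hazard is handled
]

def run(program, stall_on_hazard):
    regs = {"A": 0}
    fetched = None    # instruction sitting in the decode stage
    decoded = None    # (dest, src_value, imm) sitting in the execute stage
    pc = 0
    cycles = 0

    while pc < len(program) or fetched or decoded:
        cycles += 1
        executing = decoded   # instruction that will execute this cycle
        decoded = None

        # Decode stage: read source operands from the register file. Without
        # a stall this read happens while the producer is still executing,
        # so it can pick up a stale value (the RAW hazard).
        if fetched is not None:
            dest, src, imm = fetched
            hazard = (stall_on_hazard and executing is not None
                      and src is not None and executing[0] == src)
            if not hazard:
                src_val = None if src is None else regs[src]
                decoded = (dest, src_val, imm)
                fetched = None
            # else: hold the instruction for a cycle (insert a bubble/NOP).

        # Execute stage: compute and write the result back.
        if executing is not None:
            dest, src_val, imm = executing
            regs[dest] = imm if src_val is None else src_val + imm

        # Fetch stage: bring in the next instruction if decode is free.
        if fetched is None and pc < len(program):
            fetched = program[pc]
            pc += 1

    return regs["A"], cycles

if __name__ == "__main__":
    for stall in (False, True):
        value, cycles = run(PROGRAM, stall_on_hazard=stall)
        label = "stall on hazard" if stall else "naive pipeline "
        print(f"{label}: A = {value:2d} after {cycles} cycles")

The naive version produces the wrong A = 6 from the example above; stalling (the NOP the post mentions) costs a couple of extra cycles but produces the correct 12. The extra HT thread gives the core something else to execute during bubbles like that instead of letting the pipeline sit idle.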
 

concerto49

New Member
Verified Provider
It all depends on how you look at it: marketing or technical. AMD advertises 4 modules / 8 cores, meaning roughly the same as 4 cores / 8 threads from Intel. Different tech under the hood, but a similar outcome. Intel duplicates some resources, but the cache is shared. AMD shares a floating-point unit within each module but has individual integer cores, etc.

AMD shoots itself in the foot in a lot of places that charge per core (licensing), whereas in Intel land at least you can turn the extra threads off.
 