
32 bit and 64 bit explained

Posted: Fri Sep 02, 2011 10:19 am
by PraveenAlexis
Very often we find ourselves thinking:

Will this 32 bit software run on my 64 bit operating system?

or

Will this 64 bit software run on my computer?

Here's a short tutorial which attempts to answer these questions and helps us understand the concepts of 64 bit and 32 bit hardware, operating systems and applications.

32 bit systems have been part of mainstream computing for more than a decade, since the time of the 80386. Therefore, most of the software and operating system code written during this period has been 32 bit.

32 bit systems can address up to 4 GB of memory at a time. Some modern applications require more memory than this to complete their tasks. This, along with progress in chip fabrication technology, led to the development of 64 bit processors for mainstream computing.

So here comes the problem: much of the software available today is still 32 bit, but processors have migrated to 64 bit. Operating systems are slowly catching up, and eventually the applications will catch up too. But for now, we have to cope with all combinations of 32 and 64 bits in hardware, operating systems and applications.

You can consider these three factors to be three layers with the processor as the lowest layer and the application as the highest layer as shown below:
[Image: diagram of the three layers, with the application on top, the operating system in the middle and the processor at the bottom]
To run a 64 bit application, you need support from all lower levels (64 bit OS and 64 bit processor).

To run a 64 bit OS, you need support from its lower level (a 64 bit processor).

A 32 bit OS will run on a 32 or 64 bit processor without any problems.

Similarly, a 32 bit application will run on any combination of OS and processor (except the combination of a 32 bit processor and a 64 bit OS, which is not possible). This is usually accomplished through emulation, which is an operating system feature present in all major operating systems.

Device drivers run alongside the operating system rather than on top of it. Emulation is done at the operating system level and is only available to the layer above it: the application. Therefore, it is not possible to install a 32 bit device driver on a 64 bit operating system.
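
As a side note, on 64-bit Windows this emulation layer is called WOW64, and a 32-bit program can ask whether it is running under it. Here is a minimal C sketch of my own (Windows-specific; it uses the documented IsWow64Process API, which only exists on newer versions of Windows, so on very old systems you would normally look it up with GetProcAddress first):

Code: Select all

#include <windows.h>
#include <stdio.h>

int main(void)
{
    BOOL isWow64 = FALSE;

    /* IsWow64Process reports TRUE when a 32-bit process is running
       on a 64-bit Windows, i.e. under the WOW64 layer. */
    if (IsWow64Process(GetCurrentProcess(), &isWow64))
    {
        if (isWow64)
            printf("32-bit program running on a 64-bit Windows.\n");
        else
            printf("No WOW64: program and OS have the same bitness.\n");
    }
    else
    {
        printf("IsWow64Process failed.\n");
    }
    return 0;
}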

Answers to common questions

Will a 64 bit CPU run a standard (32-bit) program on a 64-bit version of an OS?
Yes it will. 64 bit systems are backward compatible with their 32 bit counterparts.

Will a 64-bit OS run a standard application on a 64 bit processor?
-Again, it will. This is because of backward compatibility.

Can I run W2K and WXP on a 64 bit CPU, and use old software?
-Yes, a 32 bit OS (W2K or WXP) will run on a 64 bit processor. You should also be able to run "old software" on a 64 bit OS.

However, before I close, let me also note that a 64 bit program will often contain bits of 32 bit code. Similarly, 32 bit software (usually very old programs) can contain some 16 bit code. Please be aware that 16 bit code will NOT run on a 64 bit OS. This is one reason some 32 bit programs do not work on 64 bit OSes.

Re: 32 bit and 64 bit explained

Posted: Fri Sep 02, 2011 4:57 pm
by Neo
I use both. However, my guess is that 32-bit is enough for more than 90% of applications, so perhaps 99% of all users are satisfied with 32-bit. On 64-bit hardware, the CPU has a set of 64-bit instructions. For example, if there is 64-bit support in the hardware, double precision calculations (64-bit according to the IEEE 754 floating point standard) can be executed without using 2 x 32-bit registers. Double precision is mostly used for graphical applications such as 3D modelling tools, games, etc., and advanced accounting applications such as forecasting. So if you have a game with nice graphics, I'm sure it is faster if you play it on a 64-bit PC.
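
To make the double precision point concrete, here is a minimal C sketch of my own (not taken from any real application) that performs a 64-bit floating point calculation; on a 64-bit build the compiler can keep these values in full-width registers:

Code: Select all

#include <stdio.h>

int main(void)
{
    /* double is IEEE 754 double precision (64-bit) on both 32-bit
       and 64-bit builds; what changes is how the CPU holds and
       moves the value around. */
    double principal = 1000.0;
    double rate = 0.05;              /* 5% growth per year */
    int year;

    for (year = 1; year <= 10; year++)
        principal *= (1.0 + rate);   /* simple forecasting loop */

    printf("sizeof(double) = %u bits\n", (unsigned)(sizeof(double) * 8));
    printf("value after 10 years = %.2f\n", principal);
    return 0;
}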

As clearly stated, a 64-bit OS needs to be installed to get the advantage of 64-bit hardware for most applications. The reason is that applications call OS functions rather than using ASM directly. That means that if you detect that you have a 64-bit CPU while your OS is a 32-bit one, you can still use the 64-bit instructions from your application if you know ASM (assembly language). If you are interested in knowing more about this, have a look at the AMD64 Architecture Programmer's Manual.

Without further complicating things: for novice users (such as Windows users :laughing: ), to run 64-bit applications, a 64-bit OS and 64-bit hardware are needed. It is possible to run a 32-bit application on a 32-bit or 64-bit OS. It is not possible to run a 64-bit application on a 32-bit OS even if you have 64-bit hardware. To install a 64-bit OS, you need 64-bit hardware. I think it is clearer stated this way.

Re: 32 bit and 64 bit explained

Posted: Fri Sep 02, 2011 5:17 pm
by PraveenAlexis
Without further complicating things: for novice users (such as Windows users), to run 64-bit applications, a 64-bit OS and 64-bit hardware are needed. It is possible to run a 32-bit application on a 32-bit or 64-bit OS. It is not possible to run a 64-bit application on a 32-bit OS even if you have 64-bit hardware. To install a 64-bit OS, you need 64-bit hardware. I think it is clearer stated this way.
You're not using Windows?? :shock: Then what?? :o
I know my processor supports 64 bit, and I'm going to install another 2 GB of RAM ;) so there will be 4 GB of RAM in total :) Because 32 bit uses only about 3 GB of RAM at once, I need a 64 bit OS :) And I'm playing games too ;) but not the really advanced ones.

Re: 32 bit and 64 bit explained

Posted: Mon Sep 05, 2011 5:12 pm
by SemiconductorCat
There are many more differences in 64 bit than just the long double.
For example, how function calls are made: the calling convention has been changed.
On 64 bit Windows, APIs will no longer be defined as '__stdcall';
they will be '__fastcall'.
http://www.tortall.net/projects/yasm/ma ... ption.html

So do not go and define __stdcall when you declare an external function in a DLL.
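
To make that concrete, here is a minimal sketch of how a header can avoid hard-coding the calling convention so that it builds cleanly for both Win32 and x64. The CALLCONV macro and the function are purely illustrative, not part of any real SDK:

Code: Select all

#include <stdio.h>

/* _WIN64 is predefined by the compiler for 64-bit Windows targets.
   CALLCONV is an illustrative macro of my own, not a real SDK name. */
#if defined(_WIN64)
  /* x64 Windows has a single calling convention, so the explicit
     keyword is unnecessary (and is ignored by the compiler anyway). */
  #define CALLCONV
#elif defined(_WIN32)
  /* 32-bit Windows: the usual Win32 API convention is __stdcall. */
  #define CALLCONV __stdcall
#else
  /* Non-Windows compilers don't know __stdcall at all. */
  #define CALLCONV
#endif

/* A hypothetical function that a DLL might export. */
int CALLCONV MyExportedFunction(int value)
{
    return value * 2;
}

int main(void)
{
    printf("MyExportedFunction(21) = %d\n", MyExportedFunction(21));
    return 0;
}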


Also, if you don't use the 'long' prefix in front of your data type, it won't change.
Such as:
long int
long long
long double
long float
...etc

But pointers will change to 64-bit, and that's a big gap for sure, because when an application displays a pointer it is normally converted to an int or a long. In that case there will be problems with old source bases.
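
A minimal sketch of the safer way to handle this (assuming a C99-style compiler with stdint.h; the variable names are just illustrative): print pointers with %p, and store them in uintptr_t rather than int or long, so the code keeps working when pointers grow to 64 bits.

Code: Select all

#include <stdio.h>
#include <stdint.h>     /* uintptr_t */

int main(void)
{
    int value = 42;
    int *ptr = &value;

    /* Risky on 64-bit: casting a pointer to int truncates it when
       pointers are 64 bits wide and int is only 32 bits.
       int bad = (int)ptr; */

    /* Portable: %p prints a pointer at whatever width the platform uses. */
    printf("address via %%p       : %p\n", (void *)ptr);

    /* uintptr_t is an unsigned integer type wide enough to hold a pointer. */
    uintptr_t as_integer = (uintptr_t)ptr;
    printf("address as an integer : %llu\n", (unsigned long long)as_integer);

    return 0;
}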

And when you are programming in ANSI C/C++, I highly suggest not using the long prefix, because these types are not in the C89 standard [the ANSI C standard]; your compiler will support them, but it definitely breaks platform independence. However, C99 [and C++0x] compilers support these newer types. But I'm sure you will have to wait at least another 5 years for C++0x to become an industry standard.

Linux programmers have experience with more than one platform, since they use ANSI C as a standard and do not break it. If they do meet an exception where it is really necessary to break it, they isolate that part of the code with the preprocessor. But for Windows programmers this will be a new experience, working with the new data types. Win32 and Win64 code bases are really compatible at the source code level; the exception is pointers (there is a difference when displaying and storing a pointer). x64 Windows uses the LLP64 data model and x64 Linux uses LP64 [only long is 64 bit on Linux], so from a C programmer's viewpoint, the only thing that really changed is the pointer.
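
A minimal sketch of my own that makes the data models visible: build and run it as a 32-bit program, as a 64-bit Windows (LLP64) program and as a 64-bit Linux (LP64) program, and compare the sizes it prints.

Code: Select all

#include <stdio.h>

int main(void)
{
    /* Typical results:
       ILP32 (32-bit)         : int=4, long=4, long long=8, pointer=4
       LLP64 (64-bit Windows) : int=4, long=4, long long=8, pointer=8
       LP64  (64-bit Linux)   : int=4, long=8, long long=8, pointer=8 */
    printf("int       : %u bytes\n", (unsigned)sizeof(int));
    printf("long      : %u bytes\n", (unsigned)sizeof(long));
    printf("long long : %u bytes\n", (unsigned)sizeof(long long));
    printf("pointer   : %u bytes\n", (unsigned)sizeof(void *));
    return 0;
}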

If your source base already follows the ANSI standard, then you probably don't need to fear at all; very little, or perhaps nothing at all, has to be changed.

Re: 32 bit and 64 bit explained

Posted: Tue Sep 06, 2011 10:16 am
by Neo
Hi Sandun,
There are a few points that I don't agree with. Where did you find that information? Whenever you copy/paste something, make sure to add the source, so we can also verify the information. Not all information on the net is correct. So we will have to argue the points logically until we are clear.
__fastcall and __stdcall
This is just an OS specific thing (I didn't see Praveen mention Windows). Linux is expanding its reach not only with 64-bit OSs but also with cloud computing technologies. So for a question asked in general terms, Windows is just one specific example (obviously Windows is a closed source OS which doesn't leave any space for independence). It is not the general C behaviour on 64-bit Linux or Macs. Also, I didn't see what you said in the given link. Can you quote the part about stdcall and fastcall from that, please?

I see 64-bit as a CPU architecture. AMD invented the 64-bit x86 architecture and licensed it to Intel. The funny thing is that Intel invented the 8086 architecture and licensed it to AMD, so both companies rely on each other for licenses :) So when we refer to the 64-bit processing architecture, it is generally referred to as AMD64. To understand the 64-bit architecture from a developer's perspective, I have put an AMD64 link in my first post. Have a look at that (especially the AMD64 instruction set).

The 'data bus' is used to move data around inside your computer. In a 32-bit computer, the width (or size) of the data bus is 32 bits. A 64-bit bus is twice as wide, so the system can move twice as much data around. Being able to process more data means a faster system, but only for specific things. Normal office productivity and web surfing will show no advantage at all, whereas graphics processing and scientific calculations will go much faster.

Unless you use a 64-bit capable compiler, you will not be able to compile your code for a 64-bit platform and CPU. If you use 64-bit hardware, a 64-bit OS and a 32-bit C compiler to compile your code, your application will still just be a 32-bit one.
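
A minimal sketch of that point (the predefined macros below are the common ones for MSVC and GCC/Clang on x86; treat the exact list as an assumption for your compiler): the same source file reports differently depending on whether the compiler produced 32-bit or 64-bit code.

Code: Select all

#include <stdio.h>

int main(void)
{
#if defined(_WIN64) || defined(__x86_64__)
    printf("This binary was compiled as 64-bit code.\n");
#else
    printf("This binary was compiled as 32-bit code,\n"
           "even if the OS and the CPU are 64-bit.\n");
#endif
    printf("pointer size: %u bytes\n", (unsigned)sizeof(void *));
    return 0;
}
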
There are many more differences in 64 bit than just the long double.
I know, and you haven't read my post clearly. Notice the following statement.
For example, if there is 64-bit support in the hardware, double precision calculations (64-bit according to the IEEE 754 floating point standard) can be executed without using 2 x 32-bit registers
So double precision is a good example to illustrate the difference. Also, do you know that floating point costs the CPU far more than integer calculations? So improving floating point means a lot to overall system performance in environments that involve a lot of floating point, such as 3D games, financial forecasting, etc.

long double, long float
What are these? I haven't seen them in my life. I only know the IEEE standard for floating point numbers (IEEE 754). It's a language independent standard used in both software and hardware (in floating point ALUs). According to it, float is called single precision (32-bit) and double is called double precision (64-bit). Two components called the exponent and the mantissa are involved in the representation, and if you don't know about them, they are worth a look. So there is nothing called 'long float' or 'long double' in the standard. Can you provide me a source?
long int
I don't think this is a well-known data type either. 'int' is the 32-bit (on some platforms, 16-bit) integer data type, and as prefixes we can use 'unsigned' and 'signed'.
long long
This is a common 64-bit integer data type on most platforms. But on Windows (since you were talking about Windows before), __int64 is used in Visual C++.
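
For completeness, a minimal sketch (assuming a compiler that ships stdint.h, as C99 compilers and recent Visual C++ versions do) of the portable way to get a 64-bit integer without caring whether the platform spells it long long or __int64:

Code: Select all

#include <stdio.h>
#include <stdint.h>   /* int64_t maps to long long or __int64 as needed */

int main(void)
{
    int64_t big = 4294967296;   /* 2^32: does not fit in a 32-bit int */

    /* Cast to long long so %lld is correct even where int64_t is __int64. */
    printf("big         = %lld\n", (long long)big);
    printf("sizeof(big) = %u bytes\n", (unsigned)sizeof(big));
    return 0;
}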

Just a small confusion in your standards terminology: there isn't anything called ANSI C/C++. It's ANSI C (in short, C89). When it comes to standards, C and C++ are treated separately. None of the standards intersect or have any kind of inter-relationship.
Have a look at History of C/C++ Standards.

Future of C:
"C1X" is the unofficial name of the planned new standard for the C programming language. It is intended to replace the existing C standard, informally known as C99. After C89 (ANSI C), ISO has started to give standards for C.

Future of C++:
The current standard extending C++ with new features was ratified and published by ISO in September 2011 as ISO/IEC 14882:2011 (informally known as C++11).

So when it comes to standards, C and C++ need to be addressed separately, and there isn't anything called ANSI C++. The only ANSI C standard is C89 (published in 1989). Later on, all C and C++ standards were issued by ISO.
Win32 and Win64 code bases are really compatible at the source code level; the exception is pointers
What if a user uses AMD64 ASM code within C code via the 'asm' keyword? On Win32, it will crash. So in my opinion, all the slight differences that are specific to each OS/hardware, including pointers, should be taken into consideration.

My intention was not to disprove everything you said but to present the right information (based on facts - I'm not going to use my 20+ years of C/C++ programming experience ;) ) with logical arguments. If you are grabbing information from other sites, please don't forget to mention the source. Otherwise you'll not be joining the discussion from here, which is unfair :lol:

Re: 32 bit and 64 bit explained

Posted: Fri Sep 09, 2011 6:41 am
by SemiconductorCat
Also, I didn't see what you said in the given link. Can you quote the part about stdcall and fastcall from that, please?
I think you already understand the difference between __stdcall and __fastcall. They are two different calling conventions.

64 bit architectures use __fastcall. As long as you don't hack into the header files of the SDK, it won't be a problem. Yes, calling conventions are not specific to a computer hardware architecture; Linux in 32 bit also uses __fastcall. So you are correct in that; yes, it's specific to the platform.
What I was talking about is the "source code" compatibility between older 32-bit source bases and 64-bit source bases.
In a 32-bit computer, the width (or size) of the data bus is 32 bits.
No, saying the data bus is 32 bits doesn't have much meaning. CISC computing allows you to read data from memory that is not aligned to 32-bit boundaries. In modern computing the data bus refers to the micro-architecture data bus, and with technologies like hyper-threading, deep data pipelines and caches, even inside the micro-architecture you can't clearly say it's 32-bit. You can argue that the registers are 32 bits; yes, that's a good argument. And if you talk about the physical memory address bus, it's clearly wider than 32 bits. Technologies like DDR (double data rate) exceeded that 32-bit limit a long time ago, even in PIII times. (I can't remember precisely when DDR3 started to dominate the market.)

The difference is actually in the address bus, and it is more virtual than physical: in programming we treat pointers as 32-bit when we program 32-bit machines, and pointers become 64-bit when we program 64-bit machines. Even though addresses are 32-bit or 64-bit, that does not mean the physical address bus has to be 32 or 64 bits wide. In 32-bit computing there is something called PAE (Physical Address Extension); normally the memory management routines hide everything from us, and even in kernel space we can nicely have 32-bit raw pointers. So yes, that means the address bus can be wider than 32 bits even on 32-bit machines. Theoretically, with PAE the 4 GB limit can be extended to a 64 GB limit.


You are right about that 'long long' / 'long double' thing. And I didn't copy it from anywhere, but for the links you asked for,
I just googled, and here is the first link:
http://en.wikipedia.org/wiki/Long_double
Yes, you are right, it was not covered under ANSI C, so it is discouraged a thousand times over to write something like:

Code: Select all

long double d = 0.0;
And it says that it clearly violates the IEEE 754 standard. However, Microsoft, even in its 32-bit SDK/compilers, violates that standard. Microsoft engineers say, "Do favor floating point calculations instead of memory reads"; this is also one of the main design goal decisions of both the OpenGL and DirectX implementations. Which means floating point calculations are less heavy than cache misses!


I know, and you haven't read my post clearly. Notice the following statement.
I do read your statements; what I tried to say is that 64-bit double precision was already there, because an FPU has been built into 486DX and above microprocessors. It may apply to those extended IEEE 754 formats, but not to current source bases.
Also, do you know that floating point costs the CPU far more than integer calculations?
Yes, but it is not always a fact. As I already said, do favor calculations instead of lots of memory reads. That was right for early implementations; today you need to weigh it against memory traffic, as I said above.
If not, why have optimization tools like Intel Parallel Studio implemented this as an optimization policy: "reduce memory traffic over floating point macro optimizations", where macro optimization is often not suitable at the source level because it adds more complexity to the source code? Don't ask me how Parallel Studio does this; I simply don't have enough brain to understand it. :{
What if a user uses AMD64 ASM code within C code via the 'asm' keyword? On Win32, it will crash. So in my opinion, all the slight differences that are specific to each OS/hardware, including pointers, should be taken into consideration.
Are you asking how we should implement it? Well, use preprocessor macros.
The 32-bit inline assembly segment won't be injected into the object code if the 64-bit target is detected. Anyway, using a shell code method you can still write it in "machine code" rather than assembly code. But the fact is, I wasn't talking about assembly code here; inline assembly is not part of ANSI C, so there is no point in talking about inline assembly. I'm talking about an ANSI C compatible source base.
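
A minimal sketch of the preprocessor-macro approach described above (the guard macros are the usual compiler-defined ones; the inline assembly itself is only illustrative and, as noted, not part of ANSI C):

Code: Select all

#include <stdio.h>

/* Use inline assembly only where we know the compiler and target;
   otherwise fall back to plain C so the source base still builds. */
static int get_answer(void)
{
#if defined(__GNUC__) && defined(__x86_64__)
    /* 64-bit GCC/Clang: GNU-style inline assembly. */
    int result;
    __asm__ ("movl $42, %0" : "=r" (result));
    return result;
#elif defined(_MSC_VER) && defined(_M_IX86)
    /* 32-bit Visual C++: __asm blocks (not supported by the x64 compiler). */
    int result;
    __asm {
        mov eax, 42
        mov result, eax
    }
    return result;
#else
    /* Portable ANSI C fallback. */
    return 42;
#endif
}

int main(void)
{
    printf("answer = %d\n", get_answer());
    return 0;
}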

My intention was not to disprove everything you said but to present the right information (based on facts - I'm not going to use my 20+ years of C/C++ programming experience ;) ) with logical arguments. If you are grabbing information from other sites, please don't forget to mention the source. Otherwise you'll not be joining the discussion from here, which is unfair :lol:
You are correct. I'm still a university student who failed in my job. But facts are facts.
Don't hesitate to quote my post and say "Hey, that point is wrong", because I want to learn.
But I didn't copy and paste here; there is no point in duplicating information over the net. And the facts that I mention here weren't invented by me. By the way, unfairness is everywhere in the universe; however, none of the above are inventions of mine.
If you want links, just Google them. I could just put "http://google.com/" in general [because Google keeps a roughly 4 hour old cached version of most sites].

Okay, anyway, you said that precision will be faster when we use 64-bit; can you give me an example source code, quoted from an open source code base or something of your own? I have reviewed the fractal generator source base, and they are not even using long double. But a performance difference does exist on 64-bit when we take proper benchmarks; maybe it's because when we can access more than the 4 GB limit, the memory manager pages less. What's your argument on that?

Re: 32 bit and 64 bit explained

Posted: Fri Sep 09, 2011 3:21 pm
by Neo
Here are my answers to selected parts of your post :).

64 bit architectures use __fastcall.
You are correct. x64 doesn't support __stdcall and __cdecl as x86 does.
http://msdn.microsoft.com/en-us/library/ms235286.aspx
http://msdn.microsoft.com/en-us/magazine/cc300794.aspx
In a 32-bit computer, the width (or size) of the data bus is 32 bits.
No
A 64-bit architecture generally refers to a system with a 64-bit microprocessor, 64-bit addresses and a 64-bit data bus. (There are some rare processors with a wider word size and narrower address or data buses, and vice versa; in general, we don't refer to those.)

Refer to this article:
How Microprocessors Work.

Since you seem quite confused by the technical terms, I recommend you read the book Computer Architecture: A Quantitative Approach by Hennessy & Patterson. This is the standard university reference for learning about processor architectures, memory architectures, caching methods, etc. I think you will have that subject in BIT as well (but I'm not sure about the reference book in your case). You will have to learn these things from the fundamentals of how they work. When you refer to DDR, cache, etc., you need to know exactly how they really work and what advantage they have over the alternatives.
You are right about that 'long long' / 'long double' thing. And I didn't copy it from anywhere, but for the links you asked for,
I just googled, and here is the first link:
http://en.wikipedia.org/wiki/Long_double
Yes, you are right, it was not covered under ANSI C, so it is discouraged a thousand times over to write something like:

Code: Select all

long double d = 0.0;
And it says that it clearly violates the IEEE 754 standard. However, Microsoft, even in its 32-bit SDK/compilers, violates that standard.
I have read the link you supplied in detail. Thanks to you, I learned a few newly standardised things. There is a long double (80-bit or 128-bit) specification in circulation, and the new standard IEEE 754-2008 specifies a 128-bit quadruple precision format called binary128. The more bits you have, the more precision you get. Okay, so I was actually wrong; it is just that I have never had to use more precision than 64-bit. I learned IEEE 754 (under Compiler Theory) in 2002 and this new standard wasn't available at that time. I never knew something beyond it even existed. Thank you very much for that.
Microsoft engineers say, "Do favor floating point calculations instead of memory reads"; this is also one of the main design goal decisions of both the OpenGL and DirectX implementations. Which means floating point calculations are less heavy than cache misses!
Bill Gates is one of the best engineers in floating point. In a discussion he had with Steve Jobs, Steve mentioned that Bill helped him make floating point for Apple in the late 70s.

However, the argument is off topic. I was referring to floating point calculations (in the ALU). I haven't referred to memory reads in connection with floating point data. In terms of memory/cache, what matters is how much data needs to be fetched; it has nothing to do with the organisation of the data.
I know, and you haven't read my post clearly. Notice the following statement.
I do read your statements; what I tried to say is that 64-bit double precision was already there, because an FPU has been built into 486DX and above microprocessors. It may apply to those extended IEEE 754 formats, but not to current source bases.
I have used a 386SX with a 387 FPU co-processor :) . FPUs are not actually built into modern CPUs as a separate unit; rather, modern CPUs have the x87 instruction set built in. What I wanted to explain was the ease of fetching 64-bit data from memory (over a 64-bit data bus) for use in double-precision (64-bit) floating point, as an example. Since you didn't know of the existence of the 64-bit data bus as an essential component of a 64-bit architecture, the issue was clear ;)
Also, do you know that floating point costs the CPU far more than integer calculations?
Yes, but it is not always a fact. As I already said, do favor calculations instead of lots of memory reads.
Again, the same thing: it has no connection with what I discussed here. Obviously, reducing cache misses is one of the main principles of program design. If you refer to the book I mentioned above, there is a whole chapter on caching mechanisms.
What if a user uses AMD64 ASM code within C code via the 'asm' keyword? On Win32, it will crash. So in my opinion, all the slight differences that are specific to each OS/hardware, including pointers, should be taken into consideration.
Are you asking how we should implement it? Well, use preprocessor macros.
The 32-bit inline assembly segment won't be injected into the object code if the 64-bit target is detected. Anyway, using a shell code method you can still write it in "machine code" rather than assembly code. But the fact is, I wasn't talking about assembly code here; inline assembly is not part of ANSI C, so there is no point in talking about inline assembly. I'm talking about an ANSI C compatible source base.
You said all code bases are compatible, but my argument was that not all of them are; there can be exceptions, like the use of asm. I have been using it since 1994 with Turbo C. It was very useful at that time, but nowadays it is not much used.

However, ANSI C is something that you don't use, for sure. If you use a C++ compiler such as Microsoft Visual C++, you are talking about ISO C++, and ANSI C (C89) compilers aren't used in the modern world. I doubt that you have even seen an ANSI C compiler in your life :). So there is no use in talking about ANSI C for 64-bit.
I'm still a university student who failed in my job
Don't discourage yourself. I think the problem with you is that you miss theoretical knowledge and only concentrate on practical applications. However, theoretical knowledge is essential for advanced work. At the end of BIT, I'm sure you will be ready to face the challenge. In my case, learning is fun; I don't miss even a small thing. In my opinion, learning a single point clearly is better than learning thousands of things without clear understanding. When one point is clearly understood, we can easily set it aside and switch to learning the next, and so on.

You are becoming an asset to ROBOT.LK already. I would really like to welcome you to the support team, but I need to see more accurate answers from you. So please do much more searching, understand it clearly, and then answer the post. I don't like to see 50% answers, or answers where you state you need to ask a friend, etc. Understand it yourself, and then write the answer; otherwise, just skip it.