- Data Type
- A data type defines the storage requirements, handling, and behavior of variables and function parameters.
When you create a variable, the compiler needs to know how much memory to allocate for it and how its data should be handled in arithmetic and logical operations. There are four fundamental data types: two handle integer data, while the other two handle floating-point data.
Type | Description | Size (bits)
---|---|---
char | Single character | 8
int | Integer | 16
float | Single-precision floating-point number | 32
double | Double-precision floating-point number | 64
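
As a quick illustration, here is a minimal sketch declaring one variable of each fundamental type (the variable names and initial values are arbitrary, chosen just for this example):

```c
#include <stdio.h>

int main(void)
{
    char letter = 'A';       /* single character, stored as a small integer code */
    int count = -42;         /* integer */
    float ratio = 3.14f;     /* single-precision floating-point number */
    double distance = 1.5e9; /* double-precision floating-point number */

    /* float arguments are promoted to double by printf, so %f works for both */
    printf("%c %d %f %f\n", letter, count, ratio, distance);
    return 0;
}
```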
The size of these types is not standardized, though the sizes presented here are very common. The int data type varies the most from one compiler to another, since it is typically sized to match the width of the ALU (Arithmetic Logic Unit) / data memory word. So on a 16-bit microcontroller an int is 16 bits, while on a 32-bit microcontroller an int is 32 bits. This rule of thumb frequently breaks down in the 8-bit world, however: many 8-bit compilers define int as 16 bits, while some define it as 8 bits. So, before you start writing code, read your compiler's user manual to find out how big an int is.
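
Besides reading the manual, you can ask the compiler directly: the sizeof operator reports the size of any type in bytes. A minimal sketch follows (the casts to unsigned avoid the C99 %zu format specifier, which some older embedded compilers lack):

```c
#include <stdio.h>

int main(void)
{
    /* sizeof yields the size in bytes (units of char) */
    printf("char:   %u byte(s)\n", (unsigned)sizeof(char));
    printf("int:    %u byte(s)\n", (unsigned)sizeof(int));
    printf("float:  %u byte(s)\n", (unsigned)sizeof(float));
    printf("double: %u byte(s)\n", (unsigned)sizeof(double));
    return 0;
}
```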
Even char, which in the past was almost always implemented as 8 bits to accommodate the 7-bit ASCII character encoding, can be 16 bits on some compilers that support Unicode encoding.
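
If you need the width of char in bits rather than bytes, standard C provides the CHAR_BIT macro in the limits.h header:

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* CHAR_BIT is the number of bits in a char (8 on most platforms) */
    printf("char is %d bits wide\n", CHAR_BIT);
    return 0;
}
```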
There are two more data types, void and enum, which will be discussed later in the class due to their special applications.