Introduction to Programming/Variables
A "program" is a set of instructions stored in a computer's memory for the computer to execute. In the course of execution, a program usually creates new information, and nearly every program must also store information supplied after the program was written.
Such information may include user input, custom settings, or calculated values. Because this information can vary, the term variable describes the element of a program that stores it. For example, a name variable could contain "Jack", "Jill", or any other value.
You may already be familiar with variables from mathematics, where they represent unknown or unspecified values in an equation. For instance, in the equation y = m*x + b, each letter is a variable: m is the slope of the line, b is the y-intercept, x is any legal value in the function's domain, and y is the corresponding value in the function's range. In computer science, however, the value of a variable is never "unknown"; it is explicitly defined at every moment, although it may change in ways that the program's author cannot predict. It is important to remember that, beyond the shared name, there is very little relationship between a mathematical variable and a computer program variable.
In computer science, a variable is nothing more and nothing less than a named location in memory where data can be stored. In some languages, any kind of data can be stored in a variable while, in others, each variable is given a well defined type by the programmer and only data of the correct type may (or at least should) be stored in that location.
The location referred to by a variable typically does not contain useful data until the variable has been initialized. The process of setting aside a piece of memory and assigning it a name is called creating, allocating, or declaring a variable. In some languages, variables are automatically initialized (given a starting value) when they are declared. Most languages allow variables to be initialized when they are declared but do not require it. The use of an uninitialized variable is a common source of programming errors: the data stored in memory at the location the variable names is "garbage", leftover bits that can have unpredictable effects on the running program.
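How declaration and initialization interact differs by language. In Python, used for the sketches in this article, a variable is created the moment it is first assigned, so it can never silently hold garbage; reading a name that was never assigned raises an error instead. A minimal sketch (the names count, total, and undeclared are illustrative):

```python
count = 10        # creating and initializing in one step
total = 0         # initialized to a known starting value
total = total + count

# Reading a name that was never assigned raises an error rather than
# returning garbage bits, unlike an uninitialized variable in C:
try:
    print(undeclared)   # this name was never given a value
except NameError:
    caught_unassigned = True
```

In C-family languages, by contrast, a declaration may precede initialization, and it is the programmer's job to assign a value before the first read.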
In a program, you may see a line of code such as this one:
area = pi * r * r;
It may look like a math equation, but it is more than that: it is an instruction to the computer to fetch the values from two memory locations (each location has a numeric address, but programming languages let us refer to them by the names pi and r), multiply those values together, and then store the result in the memory location called area. It is very convenient that this code can take the familiar form of an equation, but we cannot let its appearance fool us: it does not define a mathematical function; rather, it specifies a strict set of instructions that the computer will follow when running this line of code.
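The line above can be run almost verbatim. Here it is sketched in Python, using the standard library's math.pi for the value of pi; the radius r = 2.0 is an arbitrary example value:

```python
import math

pi = math.pi   # a value for pi; many languages provide such a constant
r = 2.0        # the radius: any value the program computed or read in

# Fetch the values stored at pi and r, multiply them,
# and store the result in the location named area:
area = pi * r * r
```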
Similarly, most languages allow variable assignments such as the following:
X = X + 1
Looking at this assignment as a mathematical equation, it makes absolutely no sense—no value is equal to itself plus one. But we're not talking about mathematics here. While it may look like an equation, this is a variable assignment that states "retrieve the value of X, add one, and store the result back in X." So if the value of X is 0 before it reaches this line, it will become 1 afterward.
Many languages provide compound assignment operators for such operations, e.g.:
X = 8 // X is now 8
X += 1 // X is now 9
X -= 3 // X is now 6
X /= 2 // X is now 3
X *= 4 // X is now 12
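The same sequence runs in Python with one caveat: Python's `/` performs true division, so `/=` turns the value into a floating point number (the lowercase name x is used here by convention):

```python
x = 8     # x is now 8
x += 1    # x is now 9
x -= 3    # x is now 6
x /= 2    # x is now 3.0 (true division yields a float in Python)
x *= 4    # x is now 12.0
```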
Most languages have a variety of primitive types such as integers, floating point numbers, single characters and Boolean variables. Many languages also support a variety of aggregate types (e.g., arrays). The exact size and format of available types may vary based upon operating system, hardware architecture, programming language and/or the specific compiler used.
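This platform dependence can be made concrete. Python's standard struct module reports how many bytes common C-style primitive types occupy on the machine running the program; the typical sizes noted in the comments are common on modern desktop platforms but are not guaranteed:

```python
import struct

# Number of bytes occupied by some primitive C types on this platform:
sizes = {
    "char":   struct.calcsize("c"),   # always 1 byte
    "int":    struct.calcsize("i"),   # typically 4 bytes
    "long":   struct.calcsize("l"),   # 4 or 8 bytes, platform dependent
    "float":  struct.calcsize("f"),   # typically 4 bytes
    "double": struct.calcsize("d"),   # typically 8 bytes
}
```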
One of the important things to take away from this reading is the concept of static typing. Many widely used languages are statically typed: each variable's type is fixed when the variable is declared.
Despite the tremendous variety, the statically-typed languages generally follow some well-defined patterns:
- Compilers are quite picky about assigning valid values to each variable.
- There exist methods of converting between types.
- There are ways to define custom types.
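The last two patterns, converting between types and defining custom types, exist in dynamically typed languages as well. A Python sketch of both (the Point class is an illustrative custom type, not from the text):

```python
# Converting between types is explicit:
text = "42"
number = int(text)       # str -> int
back = str(number + 1)   # int -> str

# A custom (user-defined) type:
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

origin = Point(0, 0)
```

In a statically typed language the same conversions exist, but the compiler checks them before the program ever runs, which is why such compilers are "picky" about assignments.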
Reading Assignment: Byte (Wikipedia article)
With the possible exception of Boolean variables and user-defined types, the byte (eight bits) is the smallest unit of information used in modern programming languages. A bit is represented by a 0 or a 1, so a byte is a sequence of eight 0's and 1's, which yields 2^8, or 256, distinct combinations.
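The count of 256 combinations can be checked directly, either by raising 2 to the number of bits or by shifting the bit pattern 1 left by eight places:

```python
bits_per_byte = 8
combinations = 2 ** bits_per_byte   # 256
same_count = 1 << bits_per_byte     # bit-shifting gives the same answer

# An unsigned byte can therefore hold any value from 0 through 255:
smallest = 0
largest = combinations - 1          # 255
```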
Application to Variables
System memory must be allocated to store variable values; both the compiler and the author of a program must work together to determine how much memory should be allocated for each variable, and this memory is allocated in a sequence of consecutive bytes in memory.
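A sketch of this consecutive-byte layout, using Python's struct.pack to place fields back to back the way a compiler lays out a simple record (the "<" prefix fixes standard sizes with no padding, so the byte counts add up exactly; real compilers may insert padding for alignment):

```python
import struct

# One 1-byte integer, one 2-byte integer, one 4-byte integer, consecutively:
packed = struct.pack("<bhi", 1, 2, 3)
total_bytes = len(packed)   # 1 + 2 + 4 = 7 consecutive bytes
```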
Variable Data Types
Below are common usages of the byte in programming languages for data types of variables.
Each of these types has slightly different names on different platforms and in different programming languages. The size, specified in bytes, is the same regardless of the environment. (See Data Types for more detail.)
- one-byte (8-bit) value: 0 to 255 (unsigned) or –128 to +127 (signed)
- two-byte (16-bit) value: 0 to 65535 (unsigned) or –32768 to +32767 (signed)
- four-byte (32-bit) value: 0 to 4,294,967,295 (unsigned) or –2,147,483,648 to +2,147,483,647 (signed)
- eight-byte (64-bit) value: 0 to 18,446,744,073,709,551,615 (unsigned) or –9,223,372,036,854,775,808 to +9,223,372,036,854,775,807 (signed)
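Every range in the list follows from the number of bits: an n-bit unsigned value runs from 0 to 2^n − 1, and an n-bit signed (two's-complement) value runs from −2^(n−1) to 2^(n−1) − 1. A quick check of a few entries:

```python
def unsigned_range(n_bytes):
    """Smallest and largest value an unsigned n-byte integer can hold."""
    bits = n_bytes * 8
    return 0, 2 ** bits - 1

def signed_range(n_bytes):
    """Smallest and largest value a signed (two's-complement) n-byte integer can hold."""
    bits = n_bytes * 8
    return -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
```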
In many languages, a single byte is used to store a single text character. Such variables are often given the type char, shorthand for "character". Other languages, such as Java, use two bytes to store a character.
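The correspondence between a character and its one-byte numeric code can be seen with Python's ord and chr. Python strings are Unicode rather than raw chars, but encoding makes the storage sizes visible: an ASCII character occupies one byte in UTF-8, while the two-byte UTF-16 code unit matches what a Java char stores:

```python
code = ord("A")      # the numeric value stored for 'A' (65 in ASCII)
letter = chr(code)   # and back from number to character

ascii_size = len("A".encode("utf-8"))      # 1 byte, like a C char
utf16_size = len("A".encode("utf-16-be"))  # 2 bytes, like a Java char
```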