# Python Concepts/Numbers


# Objective

• Learn about Python integers.
• Learn about non-decimal integers.
• Learn about Python floats.
• Learn about the precision of floats.
• Learn about Boolean algebra. (Booleans are a subclass of integers.[1])
• Learn about complex numbers in Python.
• Learn how to convert numbers into different basic data types.

# Lesson

## Data Types

This is the first of several lessons on the data types used by Python. Computer programs can process instructions that work with many different kinds of data, and the instructions need to be very precise. If you add a word to this sentence, 'add' means something very different from when you add 2 and 3. A computer language has to have a set of rules defining what operations can be applied to different kinds of data, and for this to work, there also has to be a set of rules defining exactly what data can be used with each operation. For example, if you want to calculate grossProfit = salesIncome - costs, a program has to know that these quantities are variables containing numbers rather than just strings of letters. They must have a numerical data type rather than string data type.

If you are not clear about the meaning in computer science of variables and of data types, it may help to brush up on the lesson Introduction_to_Programming/Variables.

### Two useful Built-in Functions

#### class type(object)

With one argument, return the type of an object.

>>> type(6)
<class 'int'>
>>>
>>> type(6.4)
<class 'float'>
>>>
>>> type('6.4')
<class 'str'>
>>>
>>> type(b'6.4')
<class 'bytes'>
>>>
>>> type(['6.4'])
<class 'list'>
>>>


#### isinstance(object, classinfo)

Return True if the object argument is an instance of the classinfo argument. classinfo may be a single type or a tuple of types; anything else raises a TypeError.

>>> isinstance(6,int)
True
>>>
>>> isinstance(6,str)
False
>>>
>>> isinstance('6',str)
True
>>>
>>> isinstance('6',(int,float,bytes))
False
>>>
>>> isinstance('6',(int,float,bytes,str))
True
>>>
>>> isinstance({},dict)
True
>>>
>>> isinstance({3,4,5},set)
True
>>>
>>> isinstance(b'',str)
False
>>>
>>> isinstance(b'',bytes)
True
>>>


# Python Integers

## Introduction to integers

Python has several data types to represent numbers. This lesson introduces two of them: integers and floating point numbers, or 'floats'. We'll discuss floats later in the lesson. An integer, commonly abbreviated to int, is a whole number (positive, negative, or zero). So 7, 0, -11, 2, and 5 are integers. 3.14159, 0.0001, 11.11111, and even 2.0 are not integers; in Python they are floats. To confirm this, we can use the isinstance built-in function to test whether a number is an integer.

>>> isinstance(7, int)
True
>>> isinstance(0, int)
True
>>> isinstance(-11, int)
True
>>> isinstance(2, int)
True
>>> isinstance(5, int)
True
>>> isinstance(3.14159, int)
False
>>> isinstance(0.0001, int)
False
>>> isinstance(11.11111, int)
False
>>> isinstance(2.0, int)
False


A decimal integer contains one or more digits "0" ... "9". Underscores may be used to improve readability. With one exception described in floats below, leading zeros in a non-zero decimal number are not allowed.

We can perform simple mathematical operations with integers, like addition (+), subtraction (-), multiplication (*), and division (/). Here are some examples using simple math.

>>> 2+2
4
>>> 4-2
2
>>> 6+1
7
>>> 6+7-3
10
>>> 2*2
4
>>> 2*2*2
8
>>> -2
-2
>>> 8/2
4.0
>>> 4*4/2
8.0
>>> 4-4*2
-4
>>> 2-4
-2
>>> 10+10/2
15.0


You should have noticed three things in the above examples. First, all mathematical operations follow an order of operations, called precedence; multiplication and division are performed before addition and subtraction, which is why 10+10/2 evaluated to 15.0 rather than 10.0. Second, division with / always produces a float. Lastly, putting a minus sign (-) in front of a number makes it negative.
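To sketch the precedence rule, parentheses can force the addition to happen first:

```python
# Default precedence: division binds tighter than addition.
print(10 + 10 / 2)    # 15.0
# Parentheses override precedence: the addition is performed first.
print((10 + 10) / 2)  # 10.0
```

Parentheses never hurt; when in doubt, add them.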

You can do more mathematical operations than the previously demonstrated ones. Floor division, performed with two forward slashes (//), divides and rounds the result down, giving an integer result.

>>> 4 // 2
2
>>> 1 // 8
0
>>> 5 // 5
1
>>> 100 // 5
20
>>> 4 // 3
1


Now, that may save us trouble, but what if we want to get just the remainder of a division? We can perform a modulo operation to get the remainder. To perform a modulo, use a percent sign (%).

>>> 5 % 4
1
>>> 1 % 4
1
>>> 4 % 4
0
>>> 2 % 4
2
>>> 2 % 1
0
>>> 20 % 2
0
>>> 20 % 3
2
>>> -20 % 3
1
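One everyday use of the modulo operator is a parity test; this small sketch checks whether numbers are even or odd:

```python
# n % 2 is 0 for even numbers and 1 for odd numbers (for any integer n).
for n in (4, 7, -20):
    if n % 2 == 0:
        print(n, 'is even')
    else:
        print(n, 'is odd')
```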


The divmod() built-in function returns both quotient and remainder:

>>> divmod(7,3)
(2, 1)
>>> (q,r) = divmod(7,3)
>>> q; r
2
1


You can also find the power of a number by using two asterisk symbols (**).

>>> 4 ** 2
16
>>> 4 ** 4
256
>>> 1 ** 11278923689
1
>>> 2 ** 4
16
>>> 10 ** 2
100
>>> 1024 ** 2
1048576
>>> 10 ** 6
1000000
>>> 25 ** (-1/2)
0.2
>>> 4 * - 3 ** 2
-36
>>> 4 * (- 3) ** 2
36
>>> 8 / 4 ** 2
0.5


The exponentiation operator (**) has higher precedence than *, /, and unary -.

If unsure of precedence, you can always use parentheses to force the desired result:

>>> (4 * (- 3)) ** 2 ; 4 * ((- 3) ** 2)
144
36
>>>


There is no limit for the length of integer literals apart from what can be stored in available memory.
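A quick sketch of this: Python computes huge integer results exactly, limited only by memory.

```python
# 2 ** 300 is far beyond what fixed-size machine integers can hold,
# yet Python computes it exactly.
big = 2 ** 300
print(big)
print(len(str(big)))    # 91 decimal digits
```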

## Non-decimal Integers

Almost everyone is familiar with base-10 numbers. While base 10 is useful for everyday tasks, it isn't ideal in the computer world. Three other numeral systems are commonly used in computer science: binary, octal, and hexadecimal. We'll cover Python's use of these lightly in this section. The binary system is essential because all information is represented in binary form in computer hardware. Octal and hexadecimal are convenient for condensing binary numbers to a form that is more easily read by humans, while (unlike decimal) being simple to translate to or from binary. If you have difficulty with this part of the lesson, it may help to brush up on the lesson Numeral_systems in the course Introduction_to_Computers.

Most people have heard of binary, and it is often associated with computers. Actually, modern binary made its way into the world long before electricity was widely in use. The binary system is base 2, which means that only two digits are used: 0 and 1. So 1 + 1 = 10₂, unlike the decimal 1 + 1 = 2₁₀. To use binary numbers in Python, prepend 0B or 0b to the number.[2]

>>> 0B11
3
>>> 0B1 + 0B1
2
>>> 0B11 + 0B1
4
>>> 0B10001 + 0B1
18
>>> 0B10001 - 0B1
16
>>> bin(2345)
'0b100100101001'
>>> 0b_111_0101_0011
1875


The value returned by bin() is a string. The underscore (_) may be used to make numbers more readable.
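int() with an explicit base converts such strings back to integers; a small sketch of the round trip:

```python
# bin() produces a string; int(s, 2) converts a binary string back to an int.
s = bin(2345)
print(s)                         # 0b100100101001
print(int(s, 2))                 # 2345 -- the '0b' prefix is accepted with base 2
print(int('111_0101_0011', 2))   # 1875 -- underscores are accepted here too
```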

Note: In computers, a binary digit of information is called a bit.

The octal numeral system is rarely used anymore, having been superseded by hexadecimal. The octal system made sense decades ago when hardware was expensive, because one base-8 digit fits into three bits perfectly. Though one octal digit fits three bits, octal digits do not fit evenly into a standard byte, which is 8 bits. Since the octal numeral system is base 8, you can only use the digits "0"..."7". To use octal numbers in Python, prepend 0o or 0O to the beginning of the number.[3] You may find it easier to use a lowercase o, since an uppercase O can be confused with a zero.

>>> 0o3
3
>>> 0o12
10
>>> 0o12 + 0o10
18
>>> 0o12 - 0o03
7
>>> 0o100
64
>>> 0o777
511
>>> 0o777 - 0o111
438
>>> oct(1_234_987)
'0o4554053'
>>> 0o_1234_9876
File "<stdin>", line 1
0o_1234_9876
^
SyntaxError: invalid token
>>> 0o_1234_0765
2736629


The hexadecimal numeral system is widely used when working with computers, because one hexadecimal digit fits into a nibble (4 bits). Since a standard byte is 8 bits, two nibbles fit perfectly into a byte, which is why the octal system is rather obsolete. Hexadecimal has 16 digits, which consist of "0"..."9" and "A"..."F" or "a"..."f". "Letters as numbers?", you may say. Indeed, it may be tricky working with letters as numbers, but once you get comfortable with them, they are easy to use. To use hexadecimal numbers in Python, prepend 0x or 0X to the beginning of the number.[4] A lowercase x is suggested, since it is easier to distinguish from the digits and uppercase letters.

>>> 0xF
15
>>> 0xF0
240
>>> 0xFF - 0xF
240
>>> 0xF + 0xA
25
>>> 0x2 + 0x2
4
>>> 0x12 - 0xA
8
>>> 0xFF / 0xF
17.0
>>> 0xF * 0xF
225
>>> hex(1_234_987)
'0x12d82b'
>>> 0x_12_D82B
1234987


Note: You do not have to use just uppercase letters when working with hexadecimal, you can also use lowercase letters if you find it easier.

This topic has only been touched on lightly here and will probably not be needed until later, more advanced lessons. If you feel a need to learn this, or you want to be proficient at it, the course Introduction to Computers has a lesson called Numeral systems that covers these numeral systems in more depth.
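As a brief sketch of moving between the numeral systems above, the built-ins bin(), oct(), hex() and int() (with a base argument), plus format(), cover the common conversions:

```python
n = 255
# int -> string in another base (with prefix):
print(bin(n), oct(n), hex(n))                              # 0b11111111 0o377 0xff
# string in another base -> int:
print(int('11111111', 2), int('377', 8), int('ff', 16))    # 255 255 255
# format() gives the digits without a prefix:
print(format(n, 'b'), format(n, 'o'), format(n, 'x'), format(n, 'X'))  # 11111111 377 ff FF
```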

### Bitwise Operators

All integers may be tested or modified by the bitwise operators: & (and), | (or), ^ (exclusive or), << (shift left), >> (shift right) and ~ (invert). However, it makes good sense to confine our description of these operators to non-decimal integers, particularly binary and hexadecimal.

These operators are called 'bitwise' because they operate on individual bits within the integer.

1. The & operator produces a true output when both corresponding bits are true:

>>> bin (0b1010101 & 0b1111)
'0b101'
>>> bin (0b1010101 & 0b111000)
'0b10000'
>>> hex (0xFF00FF & 0xFF00)
'0x0'


In the first example both input operands

0b1010101
0b   1111
      ^ ^

have the marked bits set and the result is '0b101'.

2. The | operator produces a true output when at least one of the corresponding bits is true:

>>> bin (0b1010101 | 0b1110)
'0b1011111'
>>> bin (0b1010101 | 0b1100)
'0b1011101'
>>> hex (0xFF00FF | 0x3F0)
'0xff03ff'


In the first example both input operands

0b1010101
0b   1110
  ^ ^^^^^

have the marked bits set in at least one of the operands and the result is '0b1011111'.

3. The ^ operator produces a true output when exactly one of the corresponding bits is true:

>>> bin (0b1010101 ^ 0b1110)
'0b1011011'
>>> bin (0b1010101 ^ 0b1100)
'0b1011001'
>>> hex (0xFF00FF ^ 0x3F0)
'0xff030f'


In the first example both input operands

0b1010101
0b   1110
  ^ ^^ ^^

have the marked bits set in exactly one of the operands and the result is '0b1011011'.

4. The << operator shifts the operand left by the number of bits specified:

>>> bin(0b10101 << 2)
'0b1010100'
>>> bin(0b10101 << 5)
'0b1010100000'
>>> hex(0xFF00FF << 8)
'0xff00ff00'
>>> (0xFF00FF << 8) == (0xFF00FF * 2**8)
True


In the first example the output is the input shifted left 2 bits:

0b  10101
0b1010100
       ^^

The output is the input with two 0's at the right hand end.

5. The >> operator shifts the operand right by the number of bits specified:

>>> bin(0b10101 >> 2)
'0b101'
>>> bin(0b10101 >> 5)
'0b0'
>>> hex(0xFF00FF >> 8)
'0xff00'
>>> (0xFF00FF >> 8) == (0xFF00FF // 2**8)
True


In the first example the output is the input shifted right 2 bits:

0b10101
0b  101

The rightmost two bits of the input are lost forever. If you wish to preserve the 2 rightmost bits of the input, before shifting execute:
twoBits = operand & 0x3
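A minimal sketch of that save-and-restore idea (the variable names are illustrative):

```python
operand = 0b10101
twoBits = operand & 0x3             # save the rightmost two bits: 0b01
shifted = operand >> 2              # 0b101 -- the two low bits are gone
restored = (shifted << 2) | twoBits # put the saved bits back
print(bin(twoBits), bin(shifted), bin(restored))    # 0b1 0b101 0b10101
```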


The bitwise operators above perform as expected on all integers of (almost) unlimited length:

>>> hex( ( 0x1234_FEDC << 120 ) | ( 0x_CDE_90AB << 60 ) )
'0x1234fedc00000000cde90ab000000000000000'
>>> hex( ( 0x1234_FEDC << 200 ) ^ ( 0x_CDE_90AB << 207 ) )
'0x67d7cab5c00000000000000000000000000000000000000000000000000'


6. The behavior of the invert (~) operator shows that negative numbers are treated as their 2's complement value:

>>> a = 0b1100101100101 ; bin(~a)
'-0b1100101100110'


For a true 1's complement bitwise invert here is one way to do it:

>>> a = 0b1100101100101 ; b = a ^ (  (1 << a.bit_length()) - 1  ); bin(b)
'0b11010011010'
>>> c = a + b; bin(c)
'0b1111111111111'   # to test the operation, all bits of c should be set.
>>> (c+1) == ( 1 << (c.bit_length()) )
True                # they are.


And another way to do it:

from decimal import *
a = 0b11100100011001110001010111    # a is int
b = bin(a)                          # b is string
print ('a =', b)

formerPrecision = getcontext().prec
getcontext().prec = a.bit_length()
d = Decimal.logical_invert( Decimal( b[2:] ) )    # d is Decimal object.
getcontext().prec = formerPrecision

print ('d =', d)
e = int(str(d),2)                   # e is int
print ('e =', bin(e))

( (a + e) == ( ( 1 << a.bit_length() ) - 1 ) ) and print ('successful inversion')


When you execute the above code, you see the following results:

a = 0b11100100011001110001010111
d =      11011100110001110101000
e =    0b11011100110001110101000
successful inversion


Decimal.logical_invert() performs a 1's complement inversion.

# Python Floats

## Introduction to floats

Although integers are great for many situations, they have a serious limitation: integers are whole numbers, so they cannot represent quantities with fractional parts. A real number is a value that represents a quantity along a continuous line[5], which means it can have a fractional part. 4.5, 1.25, and 0.75 are all real numbers. In Python, real numbers are represented (approximately) by floats. To test whether a number is a float, we can use the isinstance built-in function.

>>> isinstance(4.5, float)
True
>>> isinstance(1.25, float)
True
>>> isinstance(0.75, float)
True
>>> isinstance(3.14159, float)
True
>>> isinstance(2.71828, float)
True
>>> isinstance(1.0, float)
True
>>> isinstance(271828, float)
False
>>> isinstance(0, float)
False
>>> isinstance(0.0, float)
True


As a general rule of thumb, floats have a decimal point and integers do not have a decimal point. So even though 4 and 4.0 are the same number, 4 is an integer while 4.0 is a float.
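The int() and float() built-ins convert between the two types; note, as a caution, that int() truncates toward zero rather than rounding:

```python
print(float(4))     # 4.0 -- int to float
print(int(4.0))     # 4   -- float to int
print(int(4.9))     # 4   -- int() truncates toward zero, it does not round
print(int(-4.9))    # -4
print(round(4.9))   # 5   -- use round() when rounding is wanted
```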

The basic arithmetic operations used for integers will also work for floats. (Bitwise operators will not work with floats.)

>>> 4.0 + 2.0
6.0
>>> -1.0 + 4.5
3.5
>>> 1.75 - 1.5
0.25
>>> 4.13 - 1.1
3.03
>>> 4.5 // 1.0
4.0
>>> 4.5 / 1.0
4.5
>>> 4.5 % 1.0
0.5
>>> 7.75 * 0.25
1.9375
>>> 0.5 * 0.5
0.25
>>> 1.5 ** 2.0
2.25


## Some technical information about 'floats'

A floating point literal can be either pointfloat or exponentfloat.

A pointfloat contains a decimal point (".") and at least one digit ("0"..."9"), for example:

34.45 ; 34. ; .45 ; 0. ; -.00 ; -33. ;

An exponentfloat contains an exponent, which is defined as: exponent ::= ("e" | "E") ["+" | "-"] decinteger.

("e" | "E") means that "e" or "E" is required.

["+" | "-"] means that "+" or "-" is optional.

decinteger means decimal integer.

These are examples of exponents: e9 ; e-0 ; e+1 ; E2 ; E-3 ; E+4 ;

The exponent is interpreted as follows:

.5e2 = .5 × 10^2 = 50.0
-3E1 = -3.0 × 10^1 = -30.0
.003e-5 = .003 × 10^-5 = 3e-08
3e0 = 3.0 × 10^0 = 3.0
0090.5e-02 = 90.5 × 10^-2 = 0.905
0E0 = 0.0 × 10^0 = 0.0

An exponent float can be either:

decinteger exponent, for example: 0e0 ; -3e1 ; 15E-6 ; or

pointfloat exponent, for example: .5E+2 ; -3.00e-5 ; 123_456.75E-5 ;

The separate parts of a floating point number are shown by: 1.2345 = 12345e-4, where 12345 is the significand, 10 is the base, and -4 is the exponent.[6]

The significand may be called mantissa or coefficient. The base may be called radix. The exponent may be called characteristic or scale.

Within a floating point literal, white space is not permitted. An underscore ("_") may be used to improve readability. Integer and exponent parts are always interpreted using radix 10. Within the context of floating point literals, a "decinteger" may begin with a "0". Numeric literals do not include a sign; a phrase like -1 is actually an expression composed of the unary operator - and the literal 1.
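A short sketch evaluating some of the literal forms described above:

```python
# pointfloats and exponentfloats from the text, evaluated:
print(34., .45, 0e0, -3e1, 15E-6)    # 34.0 0.45 0.0 -30.0 1.5e-05
print(.5E+2, 123_456.75E-5)          # 50.0 1.2345675
# A literal carries no sign; -3e1 is unary minus applied to the literal 3e1.
print(type(-3e1))                    # <class 'float'>
```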

### sys.float_info

Object sys.float_info contains information about floats:

>>> import sys
>>> print ( '\n'.join(str(sys.float_info).split(', ')) )
sys.float_info(max=1.7976931348623157e+308 # maximum representable finite float
max_exp=1024
max_10_exp=308
min=2.2250738585072014e-308 # minimum positive normalized float
min_exp=-1021
min_10_exp=-307
dig=15 # maximum number of decimal digits that can be faithfully represented in a float
mant_dig=53 # float precision: the number of base-radix digits in the significand of a float
epsilon=2.220446049250313e-16
radix=2 # radix of exponent representation
rounds=1)
>>>


Information about some of the above values follows:

#### sys.float_info.mant_dig

>>> sys.float_info.mant_dig
53
>>>
>>> sys.float_info[7]
53
>>>
>>> I1 = (1<<53) - 1 ; I1 ; hex(I1) ; I1.bit_length()
9007199254740991
'0x1fffffffffffff'
53
>>> float(I1-1) ; float(I1-1) == I1-1
9007199254740990.0
True
>>> float(I1) ; float(I1) == I1
9007199254740991.0
True
>>> float(I1+1) ; float(I1+1) == I1+1
9007199254740992.0
True
>>> float(I1+2) ; float(I1+2) == I1+2
9007199254740992.0 # Loss of precision occurs here.
False
>>>
>>> I2 = I1 - 10**11 ; I2 ; hex(I2) ; I2.bit_length() ; float(I2) == I2 ; len(str(I2))
9007099254740991
'0x1fffe8b78917ff'
53
True # I2 can be accurately represented as a float.
16
>>> I3 = I1 + 10**11 ; I3 ; hex(I3) ; I3.bit_length() ; float(I3) == I3 ; len(str(I3))
9007299254740991
'0x2000174876e7ff'
54 # Too many bits.
False # I3 can not be accurately represented as a float.
16
>>>


#### sys.float_info.dig

>>> len(str(I1))
16
>>>
>>> sys.float_info.dig
15
>>> sys.float_info[6]
15
>>>


As shown above some (but not all) decimal numbers of 16 digits can be accurately represented as a float. Hence 15 as the limit in sys.float_info.dig.

#### sys.float_info.max

>>> sys.float_info.max
1.7976931348623157e+308
>>>
>>> sys.float_info[0]
1.7976931348623157e+308
>>>
>>> 1.7976931348623157e+305
1.7976931348623156e+305
>>> 1.7976931348623157e+306
1.7976931348623156e+306
>>> 1.7976931348623157e+307
1.7976931348623158e+307
>>> 1.7976931348623157e+308
1.7976931348623157e+308
>>> 1.7976931348623157e+309
inf
>>>


#### sys.float_info.min

>>> sys.float_info.min
2.2250738585072014e-308
>>> sys.float_info[3]
2.2250738585072014e-308
>>>
>>> 2.2250738585072014e-306
2.2250738585072014e-306
>>> 2.2250738585072014e-307
2.2250738585072014e-307
>>> 2.2250738585072014e-308
2.2250738585072014e-308
>>> 2.2250738585072014e-309
2.225073858507203e-309 # Loss of precision.
>>> 2.2250738585072014e-310
2.2250738585072e-310
>>> 2.2250738585072014e-311
2.225073858507e-311
>>>


## The Precision of Floats

Before you start calculating with floats you should understand that the precision of floats has limits, due to Python and the architecture of a computer. Some examples of errors due to finite precision are displayed below.

>>> 1.13 - 1.1
0.029999999999999805
>>> 0.001 / 11.11
9.000900090009002e-05
>>> 1 + .0000000000000001
1.0
>>> -5.5 % 3.2
0.9000000000000004
>>> float(1_234_567_890_123_456)
1234567890123456.0
>>> float(12_345_678_901_234_567)
1.2345678901234568e+16


In the first example, 1.13 - 1.1 = 0.03, although Python comes to the conclusion that the answer is 0.029999999999999805. This happens because the computer stores floats in binary, and most decimal fractions (such as 1.13 and 1.1) have no exact binary representation, so the difference loses a little of its precision. The exact error depends on the operands: 2.13 - 1.1 = 1.0299999999999998 while 3.13 - 1.1 = 2.03.

In the second example, 0.001 / 11.11 = 9.000900090009002e-05, where e-05 means ten to the power of negative five. The answer could also be 9.000900090009001e-05, depending on how the quotient is rounded and how many significant digits can be stored.

In the third example, the sum of the addends 1 + .0000000000000001 = 1.0, although we know that it really is 1.0000000000000001. The second addend is lost because it is too small to affect the nearest representable float: 1.0000000000000001 rounds back to 1.0. Although this might not matter for everyday situations, it may be important for uses such as scientific and engineering computation.

The fourth example gives the correct result if rewritten:

>>> ((-5.5*10 ) % (3.2*10)) / 10.0
0.9


When working with Python floats, we need to be aware that there will probably be a margin of error.
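Given that margin of error, equality tests on computed floats are fragile; one common remedy (a sketch) is to compare with a tolerance using math.isclose(), or to round before comparing:

```python
import math

a = 1.13 - 1.1
print(a)                       # 0.029999999999999805
print(a == 0.03)               # False -- exact comparison fails
print(math.isclose(a, 0.03))   # True  -- default relative tolerance is 1e-09
print(round(a, 10) == 0.03)    # True  -- rounding away the error also works
```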

## Decimal fixed point and floating point arithmetic for extreme precision

The Python "Decimal" module provides support for fast correctly-rounded decimal floating point arithmetic. The module offers several advantages over the float datatype, including:

• Decimal numbers can be represented exactly.
• The decimal module has a user alterable precision (defaulting to 28 places) which can be as large as needed for a given problem.
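A minimal sketch of the first advantage, using the classic 0.1 + 0.2 example:

```python
from decimal import Decimal

# Binary floats cannot represent 0.1, 0.2 or 0.3 exactly...
print(0.1 + 0.2)                           # 0.30000000000000004
# ...but Decimal, constructed from strings, represents them exactly.
print(Decimal('0.1') + Decimal('0.2'))     # 0.3
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))   # True
```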

The usual start to using decimals is importing the module, viewing the current context with getcontext() and, if necessary, setting new values for precision, rounding, or enabled traps:

>>> from decimal import *

>>> getcontext()
Context(prec=28, rounding=ROUND_HALF_EVEN, Emin=-999999, Emax=999999, capitals=1, clamp=0, flags=[Inexact, FloatOperation, Rounded], traps=[InvalidOperation, DivisionByZero, Overflow])

>>> setcontext(ExtendedContext)
>>> getcontext()
Context(prec=9, rounding=ROUND_HALF_EVEN, Emin=-999999, Emax=999999, capitals=1, clamp=0, flags=[], traps=[])

>>> setcontext(BasicContext)
>>> getcontext()
Context(prec=9, rounding=ROUND_HALF_UP, Emin=-999999, Emax=999999, capitals=1, clamp=0, flags=[], traps=[Clamped, InvalidOperation, DivisionByZero, Overflow, Underflow])

>>> c = getcontext()
>>> c.flags[Inexact] = True
>>> c.flags[FloatOperation] = True
>>> c.flags[Rounded] = True
>>> getcontext()
Context(prec=9, rounding=ROUND_HALF_UP, Emin=-999999, Emax=999999, capitals=1, clamp=0, flags=[Inexact, FloatOperation, Rounded], traps=[Clamped, InvalidOperation, DivisionByZero, Overflow, Underflow])

>>> getcontext().prec = 75    # set desired precision
>>> getcontext()
Context(prec=75, rounding=ROUND_HALF_UP, Emin=-999999, Emax=999999, capitals=1, clamp=0, flags=[Inexact, FloatOperation, Rounded], traps=[Clamped, InvalidOperation, DivisionByZero, Overflow, Underflow])


We are now ready to use the decimal module.

>>> Decimal(3.14)    # Input to decimal() is float.
Decimal('3.140000000000000124344978758017532527446746826171875') # Exact value of float 3.14.

>>> Decimal('3.14')    # Input to decimal() is string.
Decimal('3.14')        # Exact value of 3.14 in decimal floating point arithmetic.


(√2)²

>>> (2 ** 0.5)**2
2.0000000000000004     # Result of binary floating point operation. We expect 2.

>>> (Decimal('2') ** Decimal('0.5')) ** Decimal('2')
Decimal('1.99999999999999999999999999999999999999999999999999999999999999999999999999')
# Result of decimal floating point operation with string input. We expect 2.


(2.12345678^(1/2.345))^2.345

>>> (2.12345678 ** (1/2.345)) ** 2.345
2.1234567800000006      # Result of floating point operation. We expect 2.12345678.

>>> (Decimal('2.12345678') ** (Decimal('1')/Decimal('2.345'))) ** Decimal('2.345')
Decimal('2.12345677999999999999999999999999999999999999999999999999999999999999999999')
# Result of decimal floating point operation with string input . We expect 2.12345678.

>>> getcontext().rounding=ROUND_UP
>>> (Decimal('2.12345678') ** (Decimal('1')/Decimal('2.345'))) ** Decimal('2.345')
Decimal('2.12345678000000000000000000000000000000000000000000000000000000000000000003')
# Result of decimal floating point operation with string input . We expect 2.12345678.


Some mathematical functions are also available to Decimal:

>>> getcontext().prec = 30

>>> Decimal(2).sqrt()
Decimal('1.41421356237309504880168872421')

>>> (Decimal(2).sqrt())**2
Decimal('2.00000000000000000000000000001')  # We expect 2.

>>> Decimal(1).exp()
Decimal('2.71828182845904523536028747135')   # Value of 'e', base of natural logs.

>>> Decimal(  Decimal(1).exp()  ).ln()
Decimal('0.999999999999999999999999999999')   # We expect 1.


## Lack of precision in the real world

(included for philosophical interest)

>>> a = 899_999_999_999_999.1 ; a - (a - .1)
0.125
>>> 1.13 - 1.1
0.029999999999999805


Simple tests indicate that the error inherent in floating point operations is about 1 part in 10^16.

This raises the question "How much precision do we need?"

For decades high school students calculated sines and cosines to 4 decimal places by referring to printed look-up tables. Before computers, engineers used slide rules to make calculations accurate to about 1 part in 1000 for most purposes, and the Brooklyn Bridge is still in regular use.

With accuracy of 1 part in 10^16, engineers can send a rocket to Pluto and miss by 1 cm.

If your calculations produce a result of 1.0 × 10^-14 and you were expecting 0, will you be satisfied with your work? If your calculations were in meters, probably yes. If your calculations were in nanometers (10^-9 of a meter), probably no.

Knowing that lack of precision is inherent in floating point operations, you may have to include possibly substantial amounts of code to make allowances for it.

## Extreme Precision

(included for historical interest)

If you must have a result correct to 50 places of decimals, Python's integer math comes to the rescue. Suppose your calculation is:

123456.789 / 4567.87654 = 123456.78900 / 4567.87654 = 12345678900 / 456787654.

For 50 significant digits after the decimal point your calculation becomes:

123456.789 / 4567.87654 = (12345678900 × 10^51) / (456787654 × 10^51) = ( (12345678900 × 10^51) / 456787654 ) / 10^51.

>>> dividend = 12345678900
>>> divisor = 456787654
>>>
>>> (quotient, remainder) = divmod(dividend*(10**51), divisor) ; quotient;remainder
27027172892899596625262555804540198890751981663672547
231665262
>>> if remainder >= ((divisor + (divisor & 1)) >> 1) : quotient += 1
...
>>> quotient
27027172892899596625262555804540198890751981663672548
>>>


The correct result = 27027172892899596625262555804540198890751981663672548 × 10^-51, but note:

>>> quotient*(10**(-51))
27.027172892899596 # Lack of precision.
>>>


Put the decimal point in the correct position within a string to preserve precision:

>>> str(quotient)[0:-51] + '.' + str(quotient)[-51:]
'27.027172892899596625262555804540198890751981663672548'
>>>


Format the result:

>>> s1 = str(quotient)[-51:] ; s1
'027172892899596625262555804540198890751981663672548'
>>> L2 = [ s1[p:p+5] for p in range(0,51,5) ] ; L2
['02717', '28928', '99596', '62526', '25558', '04540', '19889', '07519', '81663', '67254', '8']
>>> decimal = '_'.join(L2) ; decimal
'02717_28928_99596_62526_25558_04540_19889_07519_81663_67254_8'
>>> str(quotient)[0:-51] + '.' + decimal
'27.02717_28928_99596_62526_25558_04540_19889_07519_81663_67254_8'
# The result formatted for clarity and accurate to 50 places of decimals.
>>>


Both strings '27.027172892899596625262555804540198890751981663672548' and '27.02717_28928_99596_62526_25558_04540_19889_07519_81663_67254_8' are acceptable as input to Python's Decimal module.
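A quick sketch confirming this (the Decimal constructor ignores underscores in numeric strings, as Python 3.6+ does in numeric literals):

```python
from decimal import Decimal

d1 = Decimal('27.027172892899596625262555804540198890751981663672548')
d2 = Decimal('27.02717_28928_99596_62526_25558_04540_19889_07519_81663_67254_8')
print(d1 == d2)    # True
```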

## Lack of precision and what to do about it

Lack of precision in floating point operations quickly becomes apparent:

sum = 0
increment = 0.000_000_000_1

for count in range(1,1000) :
    sum += increment
    print ('count= {}, sum = {}'.format(count,sum))
    if sum != count/10_000_000_000 : break

count= 1, sum = 1e-10
count= 2, sum = 2e-10
count= 3, sum = 3e-10
count= 4, sum = 4e-10
count= 5, sum = 5e-10
count= 6, sum = 6e-10
count= 7, sum = 7e-10
count= 8, sum = 7.999999999999999e-10


The problem seems to be that floating point numbers are contained in 53 bits, limiting the number of significant digits in the decimal number displayed to 15 or 16. But this is not really the problem. If the standard limits the number of significant digits displayed to 15 or 16, so be it. The real problem is that underlying calculations are also performed in 53 bits.

>>> (0.000_000_000_1).hex()
'0x1.b7cdfd9d7bdbbp-34'
>>> h1 = '0x1b7cdfd9d7bdbbp-86' # increment with standard precision.
>>> float.fromhex(h1)
1e-10
>>>


### Precision greater than standard

Rewrite the above code so that the value increment has precision greater than standard.

increment = 1/10^10 = x/16^26;  x = 16^26/10^10

>>> x,r = divmod (16**26,10**10) ;x;r
2028240960365167042394
7251286016
>>> x += (r >= (10**10)/2);x
2028240960365167042395
>>> h1 = hex(x)[2:].upper();h1
'6DF37F675EF6EADF5B'
>>> increment = '0x' + h1 + 'p-104' ; increment
'0x6DF37F675EF6EADF5Bp-104' # Current value of increment.
>>> int(increment[:-5],16).bit_length()
71 # Greater precision than standard by 18 bits.
>>> float.fromhex(increment)
1e-10
>>>


Exact value of increment:

>>> from decimal import *
>>> Decimal(x) / Decimal(16**26)
Decimal('1.0000_0000_0000_0000_0000_01355220626007433600102690740628157139990861423939350061118602752685546875E-10')
>>> # 22 significant digits.

sum = '0x0p0'

for count in range(1,1000) :
    hex_val = sum.partition('p')[0]
    sum = hex( eval(hex_val) + x ) + 'p-104'
    f1 = float.fromhex(sum)
    print (
        'count = {}, sum = {}, sum as float = {}'.format(count, sum, f1)
    )
    if f1 != count/10_000_000_000 : exit(99)

count = 1, sum = 0x6df37f675ef6eadf5bp-104, sum as float = 1e-10
count = 2, sum = 0xdbe6fecebdedd5beb6p-104, sum as float = 2e-10
count = 3, sum = 0x149da7e361ce4c09e11p-104, sum as float = 3e-10
...........................
count = 997, sum = 0x1ac354f2d94d7a0b7dd67p-104, sum as float = 9.97e-08
count = 998, sum = 0x1aca342acfc3697a2bcc2p-104, sum as float = 9.98e-08
count = 999, sum = 0x1ad11362c63958e8d9c1dp-104, sum as float = 9.99e-08


Consider the last line above: count = 999, sum = 0x1ad11362c63958e8d9c1dp-104, sum as float = 9.99e-08.

The most accurate hex representation of the value 9.99e-08 at this precision is in fact '0x1ad11362c63958e8d9b0ap-104', different from the above value by '0x113p-104'. If counting continues, the drift increases but, as a fraction of sum, it remains fairly constant, apparently enabling accurate counting up to and including the theoretical limit of floats (15 decimal digits).

                                      drift
                                       vvv
count = 999, sum = 0x1ad11362c63958e8d9c1dp-104, sum as float = 9.99e-08
                   0x1AD11362C63958E8D9B0Ap-104   most accurate hex representation of sum as float
                     ^^^^^^^^^^^^^^^^^^
                     18 hex digits = 69 bits

                                        drift
                                        vvvvv
count = 99999, sum = 0xa7c53e539be0252c255b85p-104, sum as float = 9.9999e-06
                     0xA7C53E539BE0252C24F026p-104   most accurate hex representation of sum as float
                       ^^^^^^^^^^^^^^^^^
                       17 hex digits = 68 bits

                                               drift
                                              vvvvvvvv
count = 9999999999, sum = 0xffffffff920c8098a1aceb2ca5p-104, sum as float = 0.9999999999
                          0xFFFFFFFF920C8098A1091520A5p-104   most accurate hex representation of sum as float
                            ^^^^^^^^^^^^^^^^^^
                            18 hex digits = 72 bits

                                                       drift
15 decimal digits                                  vvvvvvvvvvvvv
count = 999999999999999, sum = 0x1869fffffffff920c81929f8524a0a5p-104, sum as float = 99999.9999999999
                               0x1869FFFFFFFFF920C8098A1091520A5p-104   most accurate hex representation of sum as float
                                 ^^^^^^^^^^^^^^^^^^
                                 18 hex digits = 69 bits


While floating point operations implemented in software might not depend on conversion to and from hex strings, the above illustrates the accuracy that could be obtained if floating point software of selectable precision were to replace now antiquated floating point hardware.

Python's decimal module allows floating point calculations of (almost) unlimited precision, but importing a special module to perform a calculation like 1.13 - 1.1 seems onerous.

>>> 1.13 - 1.1
0.029999999999999805
>>>


In a programming language as magnificent as Python, the above result is intolerable.

### Python's Decimal module

With a few simple changes the above counting loop takes full advantage of Python's Decimal module, and possible loss of precision becomes irrelevant.

from decimal import *

def D(v1) : return Decimal(str(v1))

sum = 0
increment = D(0.000_000_000_1)

for count in range(1,1000) :
    sum += increment
    print ('count = {}, sum = {}'.format(count,sum))
    if sum != count * increment :
        exit (99)

exit (0)

count = 1, sum = 1E-10
count = 2, sum = 2E-10
count = 3, sum = 3E-10
.................
count = 997, sum = 9.97E-8
count = 998, sum = 9.98E-8
count = 999, sum = 9.99E-8


A float is displayed with 'e', a Decimal object with 'E'.

>>> 9.99E-8
9.99e-08
>>> Decimal(str(9.99e-8))
Decimal('9.99E-8')
>>>


### Reset the float

#### Using formatted string

sum = 0
increment = 0.000_000_000_1

for count in range(1,1000) :
    sum += increment
    s1 = '{0:.10f}'.format(sum)
    sum = float(s1)
    print ('count= {}, sum = {}'.format(count,sum))
    if sum != count / 10_000_000_000 :
        exit (99)

exit (0)

count= 1, sum = 1e-10
count= 2, sum = 2e-10
count= 3, sum = 3e-10
.................
count= 997, sum = 9.97e-08
count= 998, sum = 9.98e-08
count= 999, sum = 9.99e-08
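The same reset can be done with the built-in round(), which, like the format above, rounds the binary value to a fixed number of decimal places. A sketch of the idea (variable names are ours):

```python
total = 0.0
increment = 0.000_000_000_1

for count in range(1, 1000):
    total += increment
    total = round(total, 10)  # snap the sum back to 10 decimal places

print(total)  # 9.99e-08
```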


#### Using Decimal precision

Python floats (IEEE 754 double precision) reliably carry about 15 significant decimal digits.

from decimal import *
getcontext().prec = 15

sum = 0
increment = 0.000_000_000_1

for count in range(1,1000) :
    print ( 'count =', ('  '+str(count))[-3:], end='    ' )
    sum += increment
    d1 = Decimal(str(sum))
    print ( 'd1 = sum =', ('                      ' + str(d1))[-21:], end='    ' )
    d1 += 0 # This forces d1 to conform to 15 digits of precision.
    print ( 'd1 =', ('                      ' + str(d1))[-20:], end='    ' )
    sum = float(d1)
    print ('sum =', sum)
    if sum != count / 10_000_000_000 :
        print ('   ', sum, count, increment, count*increment)
        exit (99)

exit (0)

count =   1    d1 = sum =                 1E-10    d1 =                1E-10    sum = 1e-10
count =   2    d1 = sum =                 2E-10    d1 =                2E-10    sum = 2e-10
count =   3    d1 = sum =                 3E-10    d1 =                3E-10    sum = 3e-10
count =   4    d1 = sum =                 4E-10    d1 =                4E-10    sum = 4e-10
count =   5    d1 = sum =                 5E-10    d1 =                5E-10    sum = 5e-10
count =   6    d1 = sum =                 6E-10    d1 =                6E-10    sum = 6e-10
count =   7    d1 = sum =                 7E-10    d1 =                7E-10    sum = 7e-10
count =   8    d1 = sum = 7.999999999999999E-10    d1 = 8.00000000000000E-10    sum = 8e-10
count =   9    d1 = sum =                 9E-10    d1 =                9E-10    sum = 9e-10
count =  10    d1 = sum =                  1E-9    d1 =                 1E-9    sum = 1e-09
count =  11    d1 = sum = 1.1000000000000001E-9    d1 =  1.10000000000000E-9    sum = 1.1e-09
count =  12    d1 = sum =                1.2E-9    d1 =               1.2E-9    sum = 1.2e-09
count =  13    d1 = sum =                1.3E-9    d1 =               1.3E-9    sum = 1.3e-09
count =  14    d1 = sum = 1.4000000000000001E-9    d1 =  1.40000000000000E-9    sum = 1.4e-09
count =  15    d1 = sum =                1.5E-9    d1 =               1.5E-9    sum = 1.5e-09
count =  16    d1 = sum =                1.6E-9    d1 =               1.6E-9    sum = 1.6e-09
..............................
count = 296    d1 = sum =               2.96E-8    d1 =              2.96E-8    sum = 2.96e-08
count = 297    d1 = sum =               2.97E-8    d1 =              2.97E-8    sum = 2.97e-08
count = 298    d1 = sum = 2.9800000000000002E-8    d1 =  2.98000000000000E-8    sum = 2.98e-08
count = 299    d1 = sum = 2.9899999999999996E-8    d1 =  2.99000000000000E-8    sum = 2.99e-08
count = 300    d1 = sum = 3.0000000000000004E-8    d1 =  3.00000000000000E-8    sum = 3e-08
count = 301    d1 = sum =               3.01E-8    d1 =              3.01E-8    sum = 3.01e-08
count = 302    d1 = sum =               3.02E-8    d1 =              3.02E-8    sum = 3.02e-08
..............................
count = 997    d1 = sum =               9.97E-8    d1 =              9.97E-8    sum = 9.97e-08
count = 998    d1 = sum =               9.98E-8    d1 =              9.98E-8    sum = 9.98e-08
count = 999    d1 = sum =  9.989999999999999E-8    d1 =  9.99000000000000E-8    sum = 9.99e-08


The last line:

count = 999${\displaystyle \ \ \ \ }$ d1 = sum = ${\displaystyle \ \ \underbrace {9.9899\_9999\_9999\_999} _{\text{16 significant digits}}}$E-8${\displaystyle \ \ \ \ }$ d1 = ${\displaystyle \ \ \underbrace {9.9900\_0000\_0000\_00} _{\text{15 significant digits}}}$E-8${\displaystyle \ \ \ \ }$ sum = 9.99e-08

The value 9.9900_0000_0000_00 means that this is the most accurate value for d1 that can fit in 15 significant digits.

# The Boolean

In Python, as in most languages, a Boolean can be either True or False. A Boolean is a special data type and is a subclass of int.[7] Since a Boolean holds exactly one of two states at a time, it is ideal for expressing two-way distinctions we deal with in real life, for example: on or off, hot or cold, light or darkness, etc. A Boolean expression takes a statement, like 1 == 1 or 1 == 0, and evaluates it to a Boolean: True for the former, False for the latter. We can use the bool() function to check the Boolean value of an object, which is False for zero of any numeric type and for empty objects (empty strings, lists and so on), and True for anything else.

>>> 1 == 1
True
>>> 1 == 0
False
>>> bool(0)
False
>>> bool(1)
True
>>> bool(10001219830)
True
>>> bool(-1908)
True
>>> bool("Hello!")
True
>>> bool("")
False
>>> bool("              ")
True
>>> bool(None)
False
>>> bool(0.000000000000000000000000000000000)
False
>>> bool("0.000000000000000000000000000000000")
True
>>> bool(0.0)
False
>>> bool([])
False
>>> bool([1, 2, 3])
True
>>> bool()
False
>>> bool(True)
True
>>> bool(False)
False
>>> bool(1==1)
True
>>> bool(1==0)
False


Note: True and False are both case-sensitive, which means that you must type them exactly as shown, otherwise you'll get a syntax error.

You can also use three operators to combine or alter Boolean values[8]: or, and, not. An or expression is True so long as at least one of its operands is True. An and expression requires all of its operands to be True for it to be True. The not operator reverses a Boolean, so not True is False and not False is True. Here are some examples:

>>> not False
True
>>> not True
False
>>> True and True
True
>>> True and False
False
>>> True or False
True
>>> False or False
False
>>> not(False or False)
True
>>> not(False and False)
True
>>> not(False and True)
True


All of the possible combinations are:

True and True: True
True and False: False
False and True: False
False and False: False

True or True: True
True or False: True
False or True: True
False or False: False

(not(True and True)) == ((not True) or (not True)): True
(not(True and False)) == ((not True) or (not False)): True
(not(False and True)) == ((not False) or (not True)): True
(not(False and False)) == ((not False) or (not False)): True

(not(True or True)) == ((not True) and (not True)): True
(not(True or False)) == ((not True) and (not False)): True
(not(False or True)) == ((not False) and (not True)): True
(not(False or False)) == ((not False) and (not False)): True


The above negated statements reflect "De Morgan's laws." For example, the statement

(not(True and True)) == ((not True) or (not True)): True


is an instance of the first law: ${\displaystyle \lnot (A\land B)\equiv (\lnot A)\lor (\lnot B).}$
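Both laws can be checked exhaustively in a few lines:

```python
from itertools import product

# not(A and B) == (not A) or (not B);  not(A or B) == (not A) and (not B)
for A, B in product((True, False), repeat=2):
    assert (not (A and B)) == ((not A) or (not B))
    assert (not (A or B)) == ((not A) and (not B))

print('De Morgan\'s laws hold for all four combinations')
```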

## A simple way to choose one of two possible values:

>>> L1 = [1,2,0,3,0,5]


Produce list L2, a copy of L1, except that each value 0 in L1 has been replaced by 0xFF:

>>> L2 = []
>>>
>>> for p in L1 :
...     L2 += ([p], [0xFF])[p == 0]
...
>>> L2
[1, 2, 255, 3, 255, 5]
>>>
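A more idiomatic way to make the same choice is a conditional expression inside a list comprehension:

```python
L1 = [1, 2, 0, 3, 0, 5]

# value-if-true if condition else value-if-false
L2 = [0xFF if p == 0 else p for p in L1]
print(L2)  # [1, 2, 255, 3, 255, 5]
```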


## Expressions containing multiple booleans

Consider the expression:

A and B or C


Does this mean

(A and B) or C


Or does it mean

A and (B or C)


It might be tempting to say that there is no difference, but look closely:

for A in True, False :
    for B in True, False :
        for C in True, False :
            b1 = (A and B) or C
            b2 = A and (B or C)
            if b1 != b2 :
                print (
'''
for A = {}, B = {}, C = {}
(A and B) or C = {}
A and (B or C) = {}
'''.format(A, B, C, b1, b2)
                )

for A = False, B = True, C = True
(A and B) or C = True
A and (B or C) = False

for A = False, B = False, C = True
(A and B) or C = True
A and (B or C) = False


Add another boolean to the expression:

A and B or C and D


and the number of different possibilities is at least 96.

You can see that the complexity of these expressions quickly becomes unmanageable.

The essence of this section: Keep your expressions simple and use parentheses as necessary to ensure that your code is interpreted exactly as you expect.
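For the record, Python does define a precedence here: and binds more tightly than or, so A and B or C is evaluated as (A and B) or C. The loop below confirms this for all eight combinations:

```python
from itertools import product

for A, B, C in product((True, False), repeat=3):
    # The unparenthesized expression matches the (A and B) or C reading.
    assert (A and B or C) == ((A and B) or C)
```

Relying on readers to remember this precedence is still unwise; the parentheses cost nothing.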

# Complex Numbers

A complex number is represented as a+bi where a and b are real numbers, like 7 or 12, and i is the imaginary unit, where i² = -1. In the computer field, and in the world of Python, i is denoted as j (a convention borrowed from electrical engineering, where i denotes current), so we write a+bj. It should also be noted that a and b are both treated as floats. This subject is covered only briefly here; later lessons treat it in more depth.

>>> 1+2j
(1+2j)
>>> -1+5.5j
(-1+5.5j)
>>> 0+5.5j
5.5j
>>> 2j
2j
>>> 1+0j
(1+0j)
>>> complex(3,-2)
(3-2j)


Note also that j cannot be used on its own without a coefficient. If you try to use j on its own, Python will look for a variable named j and use the value of that variable, or raise a NameError if no such variable exists. So the imaginary unit must always be written as 1j.

>>> a = 5 + 3j
>>> a - j
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'j' is not defined
>>> a - 1j
(5+2j)
>>> j = -3j
>>> a - j
(5+6j)
>>> a - 1j
(5+2j)


The last result illustrates that even when the variable j has a numerical value, 1j (where the coefficient 1 can be any number) is always interpreted as an imaginary literal, never as a reference to the variable j.

The usual mathematical operations can be performed on complex numbers:

>>> (1+3j)+(2-5j)
(3-2j)
>>> (1+3j)-(2-5j)
(-1+8j)
>>> (1+3j)*(2-5j)
(17+1j)
>>> a = complex(3,-5) ; b = 1 ; b += 2j ; a ; b
(3-5j)
(1+2j)
>>> a + b ; a - b
(4-3j)
(2-7j)
>>> a * b ; a / b
(13+1j)
(-1.4-2.2j)
>>> a + 4 ; b - 2j ; a * 3.1 ; b / 2
(7-5j)
(1+0j)
(9.3-15.5j)
(0.5+1j)
>>> b ; b /= 5 ; b
(1+2j)
(0.2+0.4j)
>>> a = complex(3,-5j) ; a
(8-0j)


Look closely at the last example. It does not produce an error, but is it what you want?

Note: the imaginary number, j, isn't case-sensitive, so you can use j or J.

You can extract the real number and the imaginary number by using .real and .imag respectively.

>>> (1+2j).real
1.0
>>> (1+2j).imag
2.0
>>> var = 5+3j
>>> var.real
5.0
>>> var.imag
3.0

Note: You'll get weird results if you apply .real or .imag to a complex literal without parentheses, because the attribute access binds only to the imaginary part of the expression.
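For example, without parentheses the attribute access binds only to the literal 2j, not to the whole sum:

```python
# .imag applies to 2j alone, so 2j.imag == 2.0 and the result is 1 + 2.0:
print(1 + 2j.imag)    # 3.0
# With parentheses the whole complex number is the operand:
print((1 + 2j).imag)  # 2.0
```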

## cmath — Mathematical functions for complex numbers

### Introduction

Figure 1: Components of complex number Z.

Origin at point ${\displaystyle (0,0)}$.
${\displaystyle Z.real}$ parallel to ${\displaystyle X}$ axis.
${\displaystyle Z.imag}$ parallel to ${\displaystyle Y}$ axis.
${\displaystyle r=abs(Z)}$
${\displaystyle Z=r(\cos \phi +1j*\sin \phi )=Z.real+1j*Z.imag}$

A Python complex number Z is stored internally using rectangular or Cartesian coordinates. It is completely determined by its real part Z.real and its imaginary part Z.imag.

See Figure 1. In Cartesian Geometry of 2 dimensions the real part of complex number Z is parallel to the ${\displaystyle X}$ axis and the imaginary part is parallel to the ${\displaystyle Y}$ axis. The modulus of Z is the line from origin to Z with length r.

In mathematical notation the number Z is written as ${\displaystyle a+bi}$ where ${\displaystyle i={\sqrt {-1}}.}$ Within Python Z is written as (a + bj).

Note: Because the expression 'bj' could be the name of a variable, and to avoid confusion, it might be better to express a complex number within Python as (a + b*1j).

The values a,b are the rectangular coordinates of complex number (a + bj).

>>> Z = complex(3.6, 2.7) ; Z
(3.6+2.7j)
>>> Z = 3.6 + 2.7j ; Z
(3.6+2.7j)
>>> Z = 3.6 + 2.7J ; Z # 'J' upper case.
(3.6+2.7j)
>>> Z.real ; Z.imag
3.6
2.7
>>>
>>> Z.real + 1j*Z.imag
(3.6+2.7j)
>>> Z.real + 1j*Z.imag == Z
True
>>>


The absolute value of a complex number is its modulus, the length r of the line from the origin to Z:

${\displaystyle r={\sqrt {Z.real^{2}+Z.imag^{2}}}}$

>>> abs(Z)
4.5
>>> (Z.real**2 + Z.imag**2)**(1/2)
4.5
>>>


Some useful constants:

>>> import cmath
>>> ε = cmath.e ;  ε # Greek epsilon, base of natural logarithms.
2.718281828459045
>>> π = cmath.pi ;  π # Greek pi.
3.141592653589793
>>> τ = cmath.tau ; τ # Greek tau.
6.283185307179586
>>> τ == 2*π
True


#### Addition of complex numbers

To add two complex numbers in rectangular format, simply add the real parts, then the imaginary parts:

>>> import cmath
>>>
>>> cn1 = 1+3j ; cn1
(1+3j)
>>> cn2 = 2-5j ; cn2
(2-5j)
>>> cn1 + cn2 == cn1.real + cn2.real + 1j*(cn1.imag + cn2.imag)
True
>>>


### Polar coordinates

Polar coordinates provide an alternative way to represent a complex number. In polar coordinates, a complex number Z is defined by the modulus r and the phase angle φ (phi). The modulus r is abs(Z) as above, while the phase φ is the counterclockwise angle, measured in radians, from the positive x-axis to the line segment that joins the origin to Z.

${\displaystyle \tan \phi ={\frac {Z.imag}{Z.real}}.}$

>>> tan_phi = Z.imag/Z.real ; tan_phi
0.75
>>> φ = cmath.atan(tan_phi).real ; φ
0.6435011087932844 # φ in radians
>>> φ * (180/π)
36.86989764584402 # φ in degrees.
>>>


Class method cmath.phase(Z) returns the phase:

>>> φ1 = cmath.phase(Z) ; φ1
0.6435011087932844
>>> φ1 == φ
True
>>>


From figure 1:

${\displaystyle \cos \phi ={\frac {Z.real}{r}};\ Z.real=r\cos \phi .}$

${\displaystyle \sin \phi ={\frac {Z.imag}{r}};\ Z.imag=r\sin \phi .}$

If polar coordinates are known:

${\displaystyle Z=r\cos \phi +1j*r\sin \phi =r(\cos \phi +1j*\sin \phi ).}$

>>> Z ; r ; φ
(3.6+2.7j)
4.5
0.6435011087932844
>>>
>>> cosφ = Z.real / r ; cosφ
0.8
>>> sinφ = Z.imag / r ; sinφ
0.6
>>>
>>> Z1 = r*( cosφ + 1j*sinφ  ) ; Z1
(3.6+2.7j)
>>>
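The standard library packages this reconstruction as cmath.rect(r, φ), the inverse of cmath.polar():

```python
import cmath

Z = 3.6 + 2.7j
r, phi = cmath.polar(Z)   # (modulus, phase)
Z1 = cmath.rect(r, phi)   # back to rectangular coordinates

# The round trip may leave a rounding error of a few ulps, so compare
# with isclose rather than ==.
assert cmath.isclose(Z, Z1)
```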


#### De Moivre's formula

The format containing polar coordinates is useful because:

${\displaystyle Z^{n}=r^{n}(\cos(n\phi )+i\sin(n\phi ))}$

##### ${\displaystyle Z^{2}}$

${\displaystyle Z^{2}=r^{2}(\cos(2\phi )+i\sin(2\phi ))}$

###### Proof

${\displaystyle Z=r(\cos \phi +i\sin \phi )}$

${\displaystyle Z^{2}=(r(\cos \phi +i\sin \phi ))^{2}}$

${\displaystyle Z^{2}=r^{2}(\cos \phi +i\sin \phi )^{2}}$

${\displaystyle Z^{2}=r^{2}(\cos ^{2}\phi +2i\sin \phi \cos \phi -\sin ^{2}\phi )}$

${\displaystyle Z^{2}=r^{2}(\cos ^{2}\phi -\sin ^{2}\phi +2i\sin \phi \cos \phi )}$

${\displaystyle Z^{2}=r^{2}(\cos(2\phi )+i\sin(2\phi ))}$

###### An example
>>> sin2φ = cmath.sin(2*φ).real ; sin2φ
0.96
>>> cos2φ = cmath.cos(2*φ).real ; cos2φ
0.28
>>> Z
(3.6+2.7j)
>>> Z**2
(5.67+19.44j)
>>> r*r*(cos2φ + 1J*sin2φ)
(5.67+19.44j)
>>>

###### ${\displaystyle Z}$ and ${\displaystyle Z^{2}}$ on polar diagram
Figure 2: Z and Z^2.

${\displaystyle abs(Z_{1})=abs(Z_{2})=r_{1}=r_{2}}$.
Phase of ${\displaystyle Z_{1}=\phi }$. Phase of ${\displaystyle Z_{2}=180+\phi }$.
${\displaystyle Z_{2}=-Z_{1}}$.
${\displaystyle abs(Z)=r=r_{1}^{2}}$. Phase of ${\displaystyle Z=2\phi }$.
${\displaystyle Z=Z_{1}^{2}=Z_{2}^{2}}$.

In Figure 2 ${\displaystyle Z_{1}=3+1j*{\frac {5}{4}}}$.

${\displaystyle Z_{2}=-3+1j*(-{\frac {5}{4}})}$ ${\displaystyle =-3-1j*{\frac {5}{4}}}$ ${\displaystyle =-(3+1j*{\frac {5}{4}})=-Z_{1}.}$

${\displaystyle r_{1}}$ = length ${\displaystyle OZ_{1}}$ = abs${\displaystyle (Z_{1})=r_{2}}$ = length ${\displaystyle OZ_{2}}$ = abs${\displaystyle (Z_{2})={\frac {13}{4}}.}$

Angle ${\displaystyle \phi }$ is the phase of ${\displaystyle Z_{1}.\ \cos \phi ={\frac {12}{13}}.\ \sin \phi ={\frac {5}{13}}}$.

${\displaystyle Z_{1}=r_{1}*(\cos \phi +1j*\sin \phi )={\frac {13}{4}}*({\frac {12}{13}}+1j*{\frac {5}{13}})=3+1j*{\frac {5}{4}}.}$

${\displaystyle r}$ = length ${\displaystyle OZ}$ = abs${\displaystyle (Z)=r_{1}^{2}={\frac {169}{16}}.}$

Phase of ${\displaystyle Z=2\phi .}$ Therefore: ${\displaystyle Z=Z_{1}^{2}=Z_{2}^{2}.}$

${\displaystyle \sin 2\phi =2\sin \phi \cos \phi =2({\frac {5}{13}})({\frac {12}{13}})={\frac {120}{169}}.}$

${\displaystyle \cos 2\phi =\cos ^{2}\phi -\sin ^{2}\phi =({\frac {12}{13}})({\frac {12}{13}})-({\frac {5}{13}})({\frac {5}{13}})={\frac {144-25}{169}}={\frac {119}{169}}.}$

${\displaystyle Z=r*(\cos 2\phi +1j*\sin 2\phi )={\frac {169}{16}}({\frac {119}{169}}+1j*{\frac {120}{169}})={\frac {119}{16}}+1j*{\frac {120}{16}}=7{\frac {7}{16}}+1j*7{\frac {1}{2}}.}$

>>> Z1 = 3+1j*(5/4) ; Z1 ; abs(Z1) ;  abs(Z1) == 13/4
(3+1.25j)
3.25
True
>>> Z = Z1*Z1 ; Z ; Z.real == 7+(7/16)
(7.4375+7.5j)
True
>>>

##### ${\displaystyle {\sqrt {Z}}}$

${\displaystyle {\sqrt {Z}}={\sqrt {r}}(\cos(\phi /2)+i\sin(\phi /2))}$

>>> sinφ_2 = cmath.sin(φ/2).real ; sinφ_2
0.31622776601683794
>>> cosφ_2 = cmath.cos(φ/2).real ; cosφ_2
0.9486832980505138
>>> Z
(3.6+2.7j)
>>> cmath.sqrt(Z)
(2.0124611797498106+0.670820393249937j)
>>> (r**0.5)*(cosφ_2 + 1J*sinφ_2)
(2.0124611797498106+0.6708203932499369j)
>>>

###### ${\displaystyle {\sqrt {1}}}$

${\displaystyle \cos(0)+1j*\sin(0)=1+1j*0=1}$

${\displaystyle {\sqrt {1}}={\sqrt {\cos(0)+1j*\sin(0)}}}$ ${\displaystyle =\cos(0/2)+1j*\sin(0/2)}$ ${\displaystyle =\cos(0)+1j*\sin(0)=1}$

Trigonometric functions are cyclical:

${\displaystyle \cos(360)+1j*\sin(360)=1+1j*0=1}$

${\displaystyle {\sqrt {1}}={\sqrt {\cos(360)+1j*\sin(360)}}}$ ${\displaystyle =\cos(360/2)+1j*\sin(360/2)}$ ${\displaystyle =\cos(180)+1j*\sin(180)=-1+1j*0=-1}$

The two square roots of 1 are 1 and -1.

>>> 1**2 ; (-1)**2
1
1
>>>

###### ${\displaystyle {\sqrt {-1}}}$

${\displaystyle \cos(180)+1j*\sin(180)=-1+1j*0=-1}$

${\displaystyle {\sqrt {-1}}={\sqrt {\cos(180)+1j*\sin(180)}}}$ ${\displaystyle =\cos(180/2)+1j*\sin(180/2)}$ ${\displaystyle =\cos(90)+1j*\sin(90)=0+1j*1=1j}$

Trigonometric functions are cyclical:

${\displaystyle \cos(180+360)+1j*\sin(180+360)=-1+1j*0=-1}$

${\displaystyle {\sqrt {-1}}={\sqrt {\cos(180+360)+1j*\sin(180+360)}}}$ ${\displaystyle =\cos(90+180)+1j*\sin(90+180)}$ ${\displaystyle =\cos(270)+1j*\sin(270)=0+1j*(-1)=-1j}$

The two square roots of -1 are 1j and -1j.

>>> (1j)**2 ; (-1j)**2
(-1+0j)
(-1+0j)
>>>

##### Cube roots of 1 simplified

${\displaystyle \cos(0)+1j*\sin(0)=1+1j*0=1}$

${\displaystyle {\sqrt[{3}]{1}}=(\cos(0)+1j*\sin(0))^{(1/3)}}$ ${\displaystyle =\cos(0/3)+1j*\sin(0/3)}$ ${\displaystyle =\cos(0)+1j*\sin(0)=1}$

Trigonometric functions are cyclical:

${\displaystyle \cos(360)+1j*\sin(360)=1+1j*0=1}$

${\displaystyle {\sqrt[{3}]{1}}=(\cos(360)+1j*\sin(360))^{(1/3)}}$ ${\displaystyle =\cos(360/3)+1j*\sin(360/3)}$ ${\displaystyle =\cos(120)+1j*\sin(120)=-\cos(60)+1j*\sin(60)}$ ${\displaystyle =-{\frac {1}{2}}+1j{\frac {\sqrt {3}}{2}}={\frac {-1+1j*{\sqrt {3}}}{2}}}$

Proof: ${\displaystyle r_{2}=-1(\cos(60)-1J*\sin(60))=-1(\cos(-60)+1J*\sin(-60)).}$ ${\displaystyle r_{2}^{3}=(-1)^{3}(\cos(-180)+1J*\sin(-180))=-1(-1+1J*0)=1.}$

${\displaystyle \cos(720)+1j*\sin(720)=1+1j*0=1}$

${\displaystyle {\sqrt[{3}]{1}}=(\cos(720)+1j*\sin(720))^{(1/3)}}$ ${\displaystyle =\cos(720/3)+1j*\sin(720/3)}$ ${\displaystyle =\cos(240)+1j*\sin(240)=-\cos(60)-1j*\sin(60)}$ ${\displaystyle =-{\frac {1}{2}}-1j{\frac {\sqrt {3}}{2}}={\frac {-1-1j*{\sqrt {3}}}{2}}}$

Proof: ${\displaystyle r_{3}=-1(\cos(60)+1J*\sin(60))}$. ${\displaystyle r_{3}^{3}=(-1)^{3}(\cos(180)+1J*\sin(180))=-1(-1+1J*0)=1.}$

The three cube roots of 1 are : ${\displaystyle 1,-\cos 60^{\circ }\pm 1j*\sin 60^{\circ }}$ or ${\displaystyle 1,\ {\frac {-1+1j*{\sqrt {3}}}{2}},\ {\frac {-1-1j*{\sqrt {3}}}{2}}.}$

>>> r1 = 1 ; v1 = r1**3 ; v1
1
>>> r2 = ( -1 + 1j * (3**0.5)) / 2 ; v2 = r2**3 ; v2
(0.9999999999999998+1.1102230246251565e-16j)
>>> r3 = ( -1 - 1j * (3**0.5)) / 2 ; v3 = r3**3 ; v3
(0.9999999999999998-1.1102230246251565e-16j)
>>>
>>> [ cmath.isclose(v,1,abs_tol=1e-15) for v in (v1,v2,v3) ]
[True, True, True]
>>>


#### Multiplication of complex numbers

To multiply two complex numbers in polar format, multiply the moduli and add the phases.

>>> cn1 = 3+4j ; cn1
(3+4j)
>>> r1,φ1 = cmath.polar(cn1) ; r1 ; φ1
5.0
0.9272952180016122 # radians
>>>
>>> cn2 = -4+3j ; cn2
(-4+3j)
>>> r2,φ2 = cmath.polar(cn2) ; r2 ; φ2
5.0
2.498091544796509 # radians
>>>
>>> v1 = cn1*cn2 ; v1
(-24-7j)
>>> v2 = 25*( cmath.cos(φ1 + φ2) + 1j*cmath.sin(φ1 + φ2) ) ; v2 # r1 * r2 = 25
(-24-7.000000000000002j)
>>>
>>> cmath.isclose(v1, v2, abs_tol=1e-15)
True
>>>

##### Proof

${\displaystyle (\cos A+1j*\sin A)*(\cos B+1j*\sin B)}$ ${\displaystyle =\cos A\cos B+1j*\cos A\sin B+1j*\sin A\cos B+j^{2}\sin A\sin B}$ ${\displaystyle =\cos A\cos B-\sin A\sin B+1j*(\cos A\sin B+\sin A\cos B)}$ ${\displaystyle =\cos(A+B)+1j*\sin(A+B)}$
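The identity in the proof can be spot-checked numerically for arbitrary angles:

```python
import cmath

A, B = 0.7, 1.9  # arbitrary sample angles in radians
lhs = (cmath.cos(A) + 1j * cmath.sin(A)) * (cmath.cos(B) + 1j * cmath.sin(B))
rhs = cmath.cos(A + B) + 1j * cmath.sin(A + B)
assert cmath.isclose(lhs, rhs)
```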

### Classification functions

#### cmath.isclose(a, b, *, rel_tol=1e-09, abs_tol=0.0)

Return True if the values a and b are close to each other and False otherwise.

Whether or not two values are considered close is determined according to given absolute and relative tolerances.

>>> v1; cmath.polar(v1)
(87283949+87283949j)
(123438144.45328155, 0.7853981633974483)
>>> v2; cmath.polar(v2)
(87283949+87283950j)
(123438145.16038834, 0.7853981691258783)
>>> cmath.isclose(v1,v2)
False
>>>
>>> cmath.isclose(v1,v2, rel_tol=8e-9)
False
>>> cmath.isclose(v1,v2, rel_tol=9e-9)
True
>>>
>>> cmath.isclose(v1,v2, abs_tol=1)
True
>>> cmath.isclose(v1,v2, abs_tol=.5)
False
>>>


### Power and logarithmic functions

#### cmath.sqrt(x)

Return the square root of x. The result is the principal square root, the one with nonnegative real part.

>>> cmath.sqrt(-1)
1j
>>>
>>> cmath.sqrt(7+24j)
(4+3j)
>>> -cmath.sqrt(7+24j)
(-4-3j) # sqrt has both positive and negative values.
>>> (-cmath.sqrt(7+24j))**2
(7+24j)
>>>
>>> cmath.sqrt(7+(7/16) + 1j*7.5)
(3+1.25j)
>>>


#### cmath.exp(x)

Return the exponential value e**x.

>>> cmath.exp(1)
(2.718281828459045+0j) # Value of e, base of natural logarithms.
>>>


Euler's formula: ${\displaystyle e^{i\theta }=\cos \theta +1j*\sin \theta }$

When ${\displaystyle \theta }$ has the value ${\displaystyle {\frac {\pi }{3}}}$ or ${\displaystyle 60}$ degrees:

>>> π = cmath.pi ; π
3.141592653589793
>>>
>>> cmath.exp(1j*π/3) # π/3 = 60 degrees.
(0.5+0.8660254037844386j)
>>>
>>> cmath.cos(π/3)
(0.5-0j)
>>> cmath.sin(π/3)
(0.8660254037844386+0j)
>>>
>>> cmath.exp(1j*π/3) == cmath.cos(π/3) + 1j*cmath.sin(π/3)
True
>>>


The case when ${\displaystyle \theta =\pi }$:

>>> cmath.exp(1j*π)
(-1+1.2246467991473532e-16j)
>>>


Apart from a rounding error of about 1.2e-16 in the imaginary part, this result is Euler's famous Identity: ${\displaystyle e^{i\pi }=-1}$

##### When x is complex:

According to the rules of exponents, the expression cmath.exp(a+bj) is equivalent to:

${\displaystyle e^{(a+bj)}=e^{a}e^{bj}=e^{a}(\cos b+1j*\sin b)}$

${\displaystyle e^{(1+1j*\pi )}=e^{1}e^{1j*\pi }=e(-1)}$

>>> e = cmath.exp(1) ; e
(2.718281828459045+0j)
>>>
>>> b = cmath.exp(1j*π) ; b
(-1+1.2246467991473532e-16j)
>>> cmath.isclose(b.imag,0,abs_tol=1e-15)
True
>>> b=complex(b.real,0);b
(-1+0j)
>>>
>>> c = cmath.exp(1+1j*π) ; c
(-2.718281828459045+3.328935140402784e-16j)
>>> cmath.isclose(c.imag,0,abs_tol=1e-15)
True
>>> c=complex(c.real,0);c
(-2.718281828459045+0j)
>>>
>>> c == e*b == e*(-1)
True
>>>


# Number Conversions

## Introduction

Since some situations call for an integer rather than a float (or vice versa), you'll need to be able to convert numbers from one type to another. Luckily, it's very easy to perform a conversion. To convert a value to an integer, use the int() function.

>>> int(1.5)
1
>>> int(10.0)
10
>>> int(True)
1
>>> int(False)
0
>>> int('0xFF', base=16) ; int('0xF1F0', 16) ; int('0b110100111', 0) ; int('11100100011',2)
255
61936
423
1827


You can even convert strings, which you'll learn about later.

>>> int("100")
100


To convert a data type to a float, use the float() function. Like the integer, you can convert strings to floats.

>>> float(102)
102.0
>>> float(932)
932.0
>>> float(True)
1.0
>>> float(False)
0.0
>>> float("101.42")
101.42
>>> float("4")
4.0


Note: You cannot use any of the above conversions on a complex number; they raise a TypeError. You can work around this by using .real and .imag
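For example, converting the components of a complex number instead of the number itself:

```python
z = 3.7 - 2j

# int(z) would raise TypeError; convert a component instead.
print(int(z.real))    # 3
print(float(z.imag))  # -2.0
```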

You can also use the bool() function to convert a data type to a Boolean.

>>> bool(1)
True
>>> bool(0)
False
>>> bool(0.0)
False
>>> bool(0.01)
True
>>> bool(14)
True
>>> bool(14+3j)
True
>>> bool(3j)
True
>>> bool(0j)
False
>>> bool("")
False
>>> bool("Hello")
True
>>> bool("True")
True
>>> bool("False")
True


Note that bool("False") is True. Unlike int() and float(), bool() does not parse the string's contents when it converts a string; it only checks whether the string is empty.

Converting a data type to a complex is a little more tricky, but still easy. All you need to do is use the function complex(), which takes two parameters, one of which is optional. The first parameter is the real part, which is required, and the second parameter is the imaginary part, which is optional.

>>> complex(True)
(1+0j)
>>> complex(False)
0j
>>> complex(3, 1)
(3+1j)
>>> complex(1, 22/7)
(1+3.142857142857143j)
>>> complex(0, 1.5)
1.5j
>>> complex(7, 8)
(7+8j)
>>> complex("1")
(1+0j)
>>> complex("1+4j")
(1+4j)
>>> complex("9.75j")
9.75j
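One caveat worth knowing: the string passed to complex() must not contain whitespace around the + or - sign, or a ValueError is raised:

```python
# complex() parses "1+4j" but rejects "1 + 4j".
assert complex("1+4j") == 1 + 4j
try:
    complex("1 + 4j")
except ValueError:
    print("malformed string rejected")
```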


## Converting integers, decimal to non-decimal

This conversion is from int to str representing int:

>>> a = 12345678901234567890
>>> b = bin(a) ; b
'0b1010101101010100101010011000110011101011000111110000101011010010'
>>> h = hex(a) ; h
'0xab54a98ceb1f0ad2'
>>> o = oct(a) ; o
'0o1255245230635307605322'
>>>
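The same conversions are available through format specifiers, which omit the 0b/0x/0o prefixes:

```python
a = 12345678901234567890

print(format(a, 'b'))  # binary digits without the '0b' prefix
print(format(a, 'x'))  # ab54a98ceb1f0ad2
print(format(a, 'o'))  # 1255245230635307605322
```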


## Converting integers, non-decimal to decimal

This conversion is from str representing int to int:

>>> a;b;h;o
12345678901234567890
'0b1010101101010100101010011000110011101011000111110000101011010010'
'0xab54a98ceb1f0ad2'
'0o1255245230635307605322'
>>>
>>> int(b,base=0) == int(b,base=2) == int(b,0) == int(b,2) == a # Base 0 or correct base is required.
True
>>> int(h,16) == a
True
>>> int(o,8) == a
True
>>>
>>> int ('ab54a98ceb1f0ad2', 16) == a # When base 16 is supplied, the prefix '0x' is not necessary.
True
>>>
>>> eval(b) == a # Function eval(...) provides simple conversion from str to base type.
True
>>> eval(h) == a
True
>>> eval(o) == a
True
>>>
>>> int('12345678901234567890',0) == int('12345678901234567890',base=0) == a
True
>>> int('12345678901234567890',10) == int('12345678901234567890',base=10) == a
True
>>> eval('12345678901234567890') == int('12345678901234567890') == a
True
>>>


## Interfacing with Python's Decimal module

>>> from decimal import *
>>> float1 = 3.14159
>>> dec1 = Decimal(float1) ; dec1
Decimal('3.14158999999999988261834005243144929409027099609375')
>>> str(dec1)
'3.14158999999999988261834005243144929409027099609375'
>>>
>>> float2 = eval(str(dec1)) ; float2
3.14159
>>> isinstance(float2, float)
True
>>> float2 == float1
True
>>>
>>> float2 = float(dec1) ; float2
3.14159
>>> isinstance(float2, float)
True
>>> float2 == float1
True
>>>


## Converting int to bytes

Method int.to_bytes(length, byteorder, *, signed=False) returns a bytes object representing an integer where:

length (in bytes) must be sufficient to contain the int: at least (int.bit_length() + 7) // 8,
byteorder can be 'big', 'little' or sys.byteorder,
signed must be True if the int is negative.


For example:

>>> int1 = 0x1205
>>> bytes1 = int1.to_bytes(2, byteorder='big') ; bytes1
b'\x12\x05' # A bytes object containing int1.
>>> isinstance(bytes1, bytes)
True
>>>
>>> int2 = 0xe205
>>> bytes2 = int2.to_bytes(2, byteorder='big', signed=True) ; bytes2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OverflowError: int too big to convert
>>>
>>> bytes2 = int2.to_bytes(3, byteorder='big', signed=True) ; bytes2
b'\x00\xe2\x05'
>>>
>>> bytes2 = int2.to_bytes(2, byteorder='big') ; bytes2
b'\xe2\x05'
>>>
>>> int3 = -7675
>>> bytes3 = int3.to_bytes(2, byteorder='big') ; bytes3
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OverflowError: can't convert negative int to unsigned
>>>
>>> bytes3 = int3.to_bytes(2, byteorder='big', signed=True) ; bytes3
b'\xe2\x05'
>>>
>>> bytes2 == bytes3
True
>>>


The bytes object b'\xe2\x05' can represent 0xe205 or -7675 depending on whether it's interpreted as signed or unsigned.

To preserve the original int, let length = ((int.bit_length() + 7) // 8) + 1 if necessary and use signed=True.

>>> hex(int2); hex(int3)
'0xe205'
'-0x1dfb'
>>> bytes2 = int2.to_bytes(3, byteorder='big', signed=True) ; bytes2
b'\x00\xe2\x05' # Most significant bit (value=0) preserves sign (+).
>>> bytes3 = int3.to_bytes(2, byteorder='big', signed=True) ; bytes3
b'\xe2\x05' # Most significant bit (value=1) preserves sign (-).
>>>


## Converting bytes to int

A bytes object is an immutable sequence with every member ${\displaystyle x}$ an int satisfying 0xFF ${\displaystyle >=x>=0.}$

The classmethod int.from_bytes(bytes, byteorder, *, signed=False) may be used to convert from bytes object to int.

The value returned is an int represented by the given bytes object or any sequence convertible to bytes object:

>>> hex(int.from_bytes(b'\xcd\x34', byteorder='little'))
'0x34cd'
>>> hex(int.from_bytes(b'\xcd\x34', byteorder='big'))
'0xcd34'
>>> hex(int.from_bytes(b'\xcd\x34', byteorder='little', signed=True))
'0x34cd'
>>> hex(int.from_bytes(b'\xcd\x34', byteorder='big', signed=True))
'-0x32cc'
>>> hex(int.from_bytes([0xCD,0x34], byteorder='big')) # Input is list convertible to bytes.
'0xcd34'
>>> hex(int.from_bytes((0xCD,0x34), byteorder='big')) # Input is tuple convertible to bytes.
'0xcd34'
>>> hex(int.from_bytes({0xCD,0x34}, byteorder='big'))
'0x34cd' # Ordering of set is unpredictable.
>>> hex(int.from_bytes(bytes([0xCD,0x34]), byteorder='big')) # Input is bytes object.
'0xcd34'
>>> hex(int.from_bytes(bytearray([0xCD,0x34]), byteorder='big')) # Input is bytearray.
'0xcd34'
>>>


### Complete conversion

Complete conversion means conversion from int to bytes to int, or from bytes to int to bytes. When converting int/bytes/int, it is reasonable to expect that the final int should equal the original int. If you keep byteorder consistent and signed=True, you will produce consistent results:

Positive number with msb (most significant bit) clear:

>>> int1 = 0x1205
>>> bytes1 = int1.to_bytes(2, byteorder='big', signed=True) ; bytes1
b'\x12\x05'
>>> int1a = int.from_bytes(bytes1, byteorder='big', signed=True) ; hex(int1a)
'0x1205'
>>> int1==int1a
True


Negative number with msb clear:

>>> int1 = -0x1205
>>> bytes1 = int1.to_bytes(2, byteorder='big', signed=True) ; bytes1
b'\xed\xfb'
>>> int1a = int.from_bytes(bytes1, byteorder='big', signed=True) ; hex(int1a)
'-0x1205'
>>> int1==int1a
True


Positive number with msb set:

>>> int1 = 0xF205
>>> bytes1 = int1.to_bytes(2, byteorder='big', signed=True) ; bytes1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OverflowError: int too big to convert
>>> bytes1 = int1.to_bytes(3, byteorder='big', signed=True) ; bytes1
b'\x00\xf2\x05'
>>> int1a = int.from_bytes(bytes1, byteorder='big', signed=True) ; hex(int1a)
'0xf205'
>>> int1==int1a
True


Negative number with msb set:

>>> int1 = -0xF305
>>> bytes1 = int1.to_bytes(2, byteorder='big', signed=True) ; bytes1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OverflowError: int too big to convert
>>> bytes1 = int1.to_bytes(3, byteorder='big', signed=True) ; bytes1
b'\xff\x0c\xfb'
>>> int1a = int.from_bytes(bytes1, byteorder='big', signed=True) ; hex(int1a)
'-0xf305'
>>> int1==int1a
True
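The pattern above can be wrapped in a small helper (the function name is ours) that always reserves room for the sign bit:

```python
def int_roundtrip(n: int) -> int:
    # One extra byte guarantees the sign bit survives for any value.
    length = (n.bit_length() + 7) // 8 + 1
    b = n.to_bytes(length, byteorder='big', signed=True)
    return int.from_bytes(b, byteorder='big', signed=True)

# The four cases from the text, plus zero.
for n in (0x1205, -0x1205, 0xF205, -0xF305, 0):
    assert int_roundtrip(n) == n
```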


## floats

Two methods support conversion to and from hexadecimal strings. Because Python’s floats are stored internally as binary numbers, converting a float to or from a decimal string usually involves a small rounding error. In contrast, hexadecimal strings allow exact representation and specification of floating-point numbers.
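The method float.hex() produces such a string, and the class method float.fromhex() recovers the value exactly:

```python
x = 0.1
s = x.hex()
print(s)  # '0x1.999999999999ap-4'
assert float.fromhex(s) == x  # the round trip is exact
```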

### from hex

A hexadecimal float, as represented by a hexadecimal string, can take the same forms as a decimal float: an integer, a point float, or an exponent float.

A point float contains a hexadecimal point and at least one hex digit. It may also contain an exponent, written as the letter p followed by a signed decimal integer expressing a power of 2.

An exponent float contains at least one hex digit followed by an exponent.

Class method float.fromhex(s) returns the float represented by hexadecimal string s. The string s may have leading and trailing whitespace.

>>> float.fromhex('  ABC ') ; 0xABC
2748.0
2748
>>> float.fromhex('  0xABCp6 ') ; 0xABC *(2**6) # Within the string '  0xABCp6 ' the prefix '0x' is optional.
175872.0
175872
>>> float.fromhex('  ABCp-5 ') ; 0xABC *(2**(-5))
85.875
85.875
>>>


#### point float

Consider f1 = float.fromhex('0x3.a7d4').

f1 = ${\displaystyle (3+{\frac {0xA}{16}}+{\frac {0x7}{16^{2}}}+{\frac {0xD}{16^{3}}}+{\frac {0x4}{16^{4}}})}$

To simplify the conversion put the hex string in the format of exponent float without decimal point:

f1 = ${\displaystyle (3+{\frac {0xA}{16}}+{\frac {0x7}{16^{2}}}+{\frac {0xD}{16^{3}}}+{\frac {0x4}{16^{4}}})(16^{4})(16^{-4})}$

f1 = ${\displaystyle (3(16^{4})+{\frac {0xA(16^{4})}{16}}+{\frac {0x7(16^{4})}{16^{2}}}+{\frac {0xD(16^{4})}{16^{3}}}+{\frac {0x4(16^{4})}{16^{4}}})(16^{-4})}$

f1 = ${\displaystyle (0x30000+0xA000+0x700+0xD0+0x4)(16^{-4})}$

f1 = ${\displaystyle (0x3A7D4)(16^{-4})=0x3A7D4(2^{-16})}$

>>> float.fromhex('0x3.a7d4') ; float.fromhex('0x3a7d4p-16') ; 0x3A7D4 * (16**(-4))
3.65557861328125
3.65557861328125
3.65557861328125
>>>
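The digit-by-digit expansion above can be replayed in code; `hexpoint_to_float` is a hypothetical helper that handles only unsigned point floats without an exponent:

```python
def hexpoint_to_float(s: str) -> float:
    # Evaluate a hex string like '3.a7d4' by summing digit * 16**(-position).
    int_part, _, frac_part = s.partition('.')
    value = float(int(int_part, 16)) if int_part else 0.0
    for i, digit in enumerate(frac_part, start=1):
        value += int(digit, 16) / 16**i
    return value

print(hexpoint_to_float('3.a7d4'))   # 3.65557861328125
```

Each term is an exact binary fraction, so the sum agrees exactly with float.fromhex('0x3.a7d4').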


#### point float with exponent

Consider f1 = float.fromhex('0x3.a7p10').

f1 = ${\displaystyle (3+{\frac {0xA}{16}}+{\frac {0x7}{16^{2}}})*(2^{10})}$

To simplify the conversion put the hex string in the format of exponent float without decimal point:

>>> float.fromhex('0x3.a7p10') ; float.fromhex('0x3a7p2') ; 0x3A7 * (2 ** 2)
3740.0
3740.0
3740
>>>


#### 1/3

Consider f1 = float.fromhex('0x0.55555555555555555555')

f1 = ${\displaystyle (0+{\frac {5}{16}}+{\frac {5}{16^{2}}}+{\frac {5}{16^{3}}}+...+{\frac {5}{16^{20}}})}$

To simplify calculation of f1:

f1 = ${\displaystyle 0x\underbrace {55555555555555555555} _{\text{20 hex digits}}(16^{-20})=0x55555555555555555555(2^{-80})}$

>>> fives = '0x' + '5'*20 ; fives
'0x55555555555555555555'
>>>
>>> v1 = '0x0.'+fives[2:] ; v1
'0x0.55555555555555555555' # Point float.
>>>
>>> v2 = fives + 'p-80' ; v2
'0x55555555555555555555p-80' # Exponent float without point.
>>>
>>> v3 = fives[:-18] + '.' + fives[-18:] + 'p-8' ; v3
'0x55.555555555555555555p-8' # Point float with exponent.
>>>
>>> float.fromhex(v1) ; float.fromhex(v2) ; float.fromhex(v3) ; eval(fives)*(2**(-80))
0.3333333333333333
0.3333333333333333
0.3333333333333333
0.3333333333333333
>>>
>>> v4 = '0x' + hex( eval(fives) << 3 )[2:].upper() + 'p-83' ; v4
'0x2AAAAAAAAAAAAAAAAAAA8p-83' # Exponent float with significand shifted left 3 bits.
>>> float.fromhex(v4)
0.3333333333333333
>>>


Exact value of hex string representing ${\displaystyle {\frac {1}{3}}}$ = '0x55555555555555555555p-80' = ${\displaystyle {\frac {0x55555555555555555555}{2^{80}}}.}$

>>> from decimal import *
>>> getcontext().prec = 100
>>>
>>> d1 = Decimal( eval(fives) ) / Decimal(2**80) ; d1
Decimal('0.33333333333333333333333305760646248232410837619710264334571547806262969970703125')
>>> float(d1)
0.3333333333333333
>>> len(str(d1))
82 # Well within precision of 100.
>>>


#### More room for error with greater precision

Consider the hex representation of float 0.03.

>>> (0.03).hex()
'0x1.eb851eb851eb8p-6'
>>>


See what happens when there is an error of only 1 in the rightmost hex digit:

>>> float.fromhex('0x1.eb851eb851eb8p-6')
0.03
>>> float.fromhex('0x1.eb851eb851eb7p-6')
0.029999999999999995
>>> float.fromhex('0x1.eb851eb851eb9p-6')
0.030000000000000002
>>>


With limited precision, calculations have to be perfect because there is no room for error. See what happens when there is greater precision:

>>> float.fromhex('0x1eb851eb851eb77fffffp-82')
0.029999999999999995
>>> float.fromhex('0x1eb851eb851eb7800000p-82')
0.03
>>> float.fromhex('0x1eb851eb851eb8800000p-82')
0.03
>>> float.fromhex('0x1eb851eb851eb8800001p-82')
0.030000000000000002
>>>
>>> eval('0x1eb851eb851eb8800000') - eval('0x1eb851eb851eb77fffff')
16777217 # A range of more than 16,000,000 values that all convert to 0.03.
>>>
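The interval's endpoints can be checked directly; everything in the closed interval between them rounds to the same float (a verification sketch, with the endpoint values taken from the transcript above):

```python
lo = 0x1eb851eb851eb7800000   # smallest 20-hex-digit value rounding to 0.03
hi = 0x1eb851eb851eb8800000   # largest such value (the tie rounds to even)
assert float.fromhex(hex(lo) + 'p-82') == 0.03
assert float.fromhex(hex(hi) + 'p-82') == 0.03
print(hi - lo + 1)   # 16777217 values in the closed interval
```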


Less-than-perfect floating point calculations can be tolerated when the underlying calculations carry greater precision. If floating point software with greater precision were used, the underlying hex value could be reset whenever it is obvious that it should reflect the exact value of the displayed float.

>>> h1 = '0x1eb851eb851eb7812345p-82'
>>> float.fromhex(h1)
0.03
>>> h1 = '0x1eb851eb851eb851eb85p-82' # Most accurate hex representation of float 0.03 with this precision.
>>> float.fromhex(h1)
0.03
>>>


### to hex

Instance method float.hex() returns a representation of a floating-point number as a hexadecimal string.

>>> (24.567).hex()
'0x1.89126e978d4fep+4' # 13 hexadecimal digits after the hexadecimal point.
>>>
>>> float.fromhex('0x189126e978d4fep-48')
24.567
>>>


The significand in standard form: ${\displaystyle 0x1\underbrace {89126e978d4fe} _{\text{13 hex digits}}}$

Number of bits in significand = 1 + 13*4 = 53. Recall that sys.float_info.mant_dig = 53.

#### In standard form

Conversion to a hex string with 13 hexadecimal places, as above:

${\displaystyle {\frac {24567}{1000}}={\frac {x}{16^{13}}};}$ ${\displaystyle x={\frac {24567(16^{13})}{1000}}}$

>>> x,r = divmod( 24567*(16**13), 1000  ) ; x; r
110639932045610975
232
>>> h1 = '0x' + hex(x)[2:].upper() ; h1
'0x189126E978D4FDF'
>>> v1 = h1 + 'p-52' ; v1
'0x189126E978D4FDFp-52'
>>> float.fromhex(v1)
24.567
>>> x.bit_length()
57
>>>


Truncate and round x so that the result fits in 53 bits:

>>> x1 = (x >> 4) + ((x & 0xF) >= 8) ; x1 ; hex(x1)
6914995752850686
'0x189126e978d4fe'
>>> x1.bit_length()
53
>>> h1a = '0x' + hex(x1)[2:].upper() ; h1a
'0x189126E978D4FE'
>>>
>>> v1a = h1a + 'p-48' ; v1a
'0x189126E978D4FEp-48' # Exponent float.
>>> v1b = h1a[:-12] + '.' + h1a[-12:] ; v1b
'0x18.9126E978D4FE' # Point float
>>> v1c = h1a[:-13] + '.' + h1a[-13:] + 'p+4' ; v1c
'0x1.89126E978D4FEp+4' # Standard format, point float with exponent.
>>>
>>> float.fromhex(v1a) ; float.fromhex(v1b) ; float.fromhex(v1c)
24.567
24.567
24.567
>>> v1c.lower() == (24.567).hex()
True
>>>
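The divmod-and-round steps above can be generalized into a short sketch; `to_hex_string` is a hypothetical helper, and it leans on float.fromhex() to perform the final 53-bit rounding, so hex_digits must be large enough that the scaled value carries more than 53 significant bits (true here because 24.567 > 16):

```python
def to_hex_string(num: int, den: int, hex_digits: int = 13) -> str:
    # Scale num/den by 16**hex_digits, round the remainder half-up,
    # and build an exponent-float string for float.fromhex().
    x, r = divmod(num * 16**hex_digits, den)
    x += (2 * r >= den)          # round the last retained hex digit
    return hex(x) + 'p-' + str(4 * hex_digits)

v = to_hex_string(24567, 1000)
assert float.fromhex(v) == 24.567
```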


#### float.as_integer_ratio()

Instance method float.as_integer_ratio() returns a pair of integers, with a positive denominator, whose ratio is exactly equal to the value actually stored for the float (the same value shown by its hex representation).

>>> a,b = (1.13 - 1.1).as_integer_ratio(); a; b; a/b
67553994410557
2251799813685248
0.029999999999999805
>>>


${\displaystyle {\frac {a}{b}}={\frac {67553994410557}{2251799813685248}}={\frac {0x3d70a3d70a3d}{2^{51}}}}$ = '0x3d70a3d70a3dp-51'.

>>> float.fromhex('0x3d70a3d70a3dp-51')
0.029999999999999805
>>>


#### Conversion with more precision than standard form

To convert 24.567 to hex with greater precision than standard form:

Get the power of 2 in standard form:

>>> a,b = (24.567).as_integer_ratio()
>>> power_of_2 = len(bin(b))-3
>>> b == 2**power_of_2
True
>>>


Decide what precision you want; for example, 4 more hex digits (16 more bits):

>>> power_of_2 += 16
>>> power_of_2
63
>>>


Get the value of float 24.567 as the exact ratio of two integers:

>>> a1,b1 = Decimal(str(24.567)).as_integer_ratio();a1;b1
24567
1000
>>>


${\displaystyle {\frac {24567}{1000}}={\frac {x}{2^{63}}};\ x={\frac {24567(2^{63})}{1000}}}$

>>> x,r = divmod( 24567*(2**63),1000  );x;r
226590580829411277275
136
>>> h1 = hex(x).upper().replace('X','x') + 'p-63' ; h1
'0xC489374BC6A7EF9DBp-63'
>>> float.fromhex(h1)
24.567
>>>


Exact value of 24.567:

>>> d1 = Decimal(24.567);d1;float(d1)
Decimal('24.56700000000000017053025658242404460906982421875') # 17 significant digits.
24.567
>>>


Exact value of 24.567 with greater precision (h1):

>>> d2 = Decimal( eval(h1[:-4]) ) / Decimal(2**63);d2;float(d2)
Decimal('24.56699999999999999985254850454197139697498641908168792724609375') # 21 significant digits.
24.567
>>>


Compare the differences:

>>> diff1 = Decimal('24.567') - d1
>>> diff2 = Decimal('24.567') - d2
>>> diff1;diff2
Decimal('-1.7053025658242404460906982421875E-16')
Decimal('1.4745149545802860302501358091831207275390625E-20')
>>>


#### Exact conversion

The only floating point numbers that can be written exactly as finite hex strings are those that, when expressed as a fraction in lowest terms, have a denominator that is an integer power of 2.

>>> (1+71/(2**67)).hex()
'0x1.0000000000000p+0'
>>>


To convert 1+71/(2**67) to hex exactly:

${\displaystyle 1+{\frac {71}{2^{67}}}}$ ${\displaystyle =1+{\frac {142}{2^{68}}}}$ ${\displaystyle =1+{\frac {0x8E}{16^{17}}}}$ ${\displaystyle =0x1\_\underbrace {0000\_0000\_0000\_0008\_E} _{\text{17 hex digits}}(16^{-17})}$ ${\displaystyle =0x1.0000000000000008E}$ ${\displaystyle =0x10000000000000008E(2^{-68})}$

>>> h1 = '0x1' + '0'*15 + '8E' ; h1
'0x10000000000000008E'
>>> v1 = h1+'p-68' ; v1
'0x10000000000000008Ep-68' # exact hex value of 1 + 71/(2**67)
>>> float.fromhex(v1)
1.0 # lack of precision.
>>>
>>> getcontext().prec = 150
>>> d1 = Decimal(1) + Decimal(71)/Decimal(2**67) ; d1
Decimal('1.0000000000000000004811147140404425925908071803860366344451904296875') # exact decimal value of 1 + 71/(2**67)
>>>
>>> len(str(d1))
69
>>> d2 = Decimal(eval(h1)) / Decimal(2**68) ; d2
Decimal('1.0000000000000000004811147140404425925908071803860366344451904296875') # exact decimal value of '0x10000000000000008Ep-68'
>>> d1 == d2
True
>>>
>>> float(d1)
1.0
>>>
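The power-of-two criterion can be tested mechanically; `has_finite_hex_expansion` is a hypothetical name for this sketch:

```python
from math import gcd

def has_finite_hex_expansion(num: int, den: int) -> bool:
    # In lowest terms, a fraction terminates in base 16 (or base 2)
    # exactly when the denominator is a power of two.
    den //= gcd(num, den)
    return den & (den - 1) == 0

print(has_finite_hex_expansion(71, 2**67))   # True
print(has_finite_hex_expansion(3, 100))      # False: 0.03 repeats in hex
```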


### Floating point calculation of 1.13 - 1.1

>>> (1.1).hex()
'0x1.199999999999ap+0'
>>> v1 = '0x1199999999999Ap-52' ; float.fromhex(v1)
1.1
>>>
>>> (1.13).hex()
'0x1.2147ae147ae14p+0'
>>> v2 = '0x12147AE147AE14p-52' ; float.fromhex(v2)
1.13
>>>


Difference = 1.13 - 1.1 = v2 - v1 = '0x12147AE147AE14p-52' - '0x1199999999999Ap-52' = (0x12147AE147AE14 - 0x1199999999999A)${\displaystyle 2^{-52}}$ = 0x7AE147AE147A${\displaystyle (2^{-52})}$ = '0x7AE147AE147Ap-52'

>>> float.fromhex('0x7AE147AE147Ap-52')
0.029999999999999805
>>> 1.13-1.1
0.029999999999999805
>>>
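The subtraction above can be replayed on the integer significands directly (a check sketch; the constants are the significands read off the hex strings, both scaled by 2**-52):

```python
a = 0x12147AE147AE14   # significand of 1.13
b = 0x1199999999999A   # significand of 1.1
assert a * 2.0**-52 == 1.13
assert b * 2.0**-52 == 1.1
print(hex(a - b))                             # 0x7ae147ae147a
assert (a - b) * 2.0**-52 == 1.13 - 1.1       # the same inexact 0.0299...
```

Because both floats share the exponent 2**0, the subtraction of significands is exact; the error comes entirely from the stored values of 1.13 and 1.1.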


Exact value of v1:

>>> d1 = Decimal(eval(v1[:-4])) / Decimal(2**52) ; d1 ; d1 == Decimal(1.1)
Decimal('1.100000000000000088817841970012523233890533447265625') # > 1.1
True
>>>


Exact value of v2:

>>> d2 = Decimal(eval(v2[:-4])) / Decimal(2**52) ; d2 ; d2 == Decimal(1.13)
Decimal('1.12999999999999989341858963598497211933135986328125') # < 1.13
True
>>>


Exact value of difference:

>>> d1a = d2-d1 ; d1a
Decimal('0.029999999999999804600747665972448885440826416015625')
>>>
>>> d1a == Decimal(1.13 - 1.1)
True
>>> float(d1a)
0.029999999999999805
>>>


#### Why the error appears to be so great

>>> error2 = d2-Decimal('1.13'); error2
Decimal('-1.0658141036401502788066864013671875E-16') # Negative number.
>>>
>>> error1 = d1-Decimal('1.1'); error1
Decimal('8.8817841970012523233890533447265625E-17') # Positive number.
>>>
>>> total_error = error2-error1 ; total_error
Decimal('-1.95399252334027551114559173583984375E-16') # Relatively large negative number.
>>>
>>> total_error + ( Decimal('1.13')-Decimal('1.1') - Decimal(1.13-1.1) )
Decimal('0E-51')
>>>


#### An observation

While the value 0.03 terminates in decimal, its hex representation seems to contain a repeating group of digits. Repeat the sequence in hex and see what happens:

>>> diff1 = '0x7AE14_7AE14_7AE14_7AE14_7AE14p-104'
>>> float.fromhex( diff1.replace('_', '') )
0.03
>>>


Exact decimal value of diff1:

>>> diff2 = Decimal(eval(diff1[:-5])) / Decimal(2**104) ; diff2
Decimal('0.029999999999999999999999999999976334172843369645837648143041516413109803806946729309856891632080078125')
>>> float(diff2)
0.03
>>>
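The repetition is no accident: generating the hex digits of 3/100 by long division in base 16 shows the block '7ae14' recurring (a quick sketch):

```python
# Hex digits of 3/100, produced by long division in base 16.
num, den = 3, 100
digits = []
for _ in range(12):
    num = num * 16
    d, num = divmod(num, den)         # next hex digit and new remainder
    digits.append('0123456789abcdef'[d])
print(''.join(digits))   # '07ae147ae147' -- the block '7ae14' repeats
```

The remainder returns to 48 every five digits, so the expansion repeats forever; 100 is not a power of two.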


Should the point in the hex representation be called a "heximal" point?

#### User-written floating point software for an accurate result

>>> (1.13).hex()
'0x1.2147ae147ae14p+0' # 13 hexadecimal digits after the hexadecimal point.
>>> (1.1).hex()
'0x1.199999999999ap+0' # 13 hexadecimal digits after the hexadecimal point.
>>>


By inspection, produce hex values for 1.13 and 1.1 accurate to 14 hexadecimal digits after the hexadecimal point.

>>> v1 = '0x12_147ae_147ae_148_p-56' # The true expansion is '0x1.2_147ae_147ae_147ae_..._p+0'; v1 is accurate to 14 hexadecimal digits after the hexadecimal point.
>>> float.fromhex(v1.replace('_',''))
1.13
>>>
>>> v2 = '0x1_19999_99999_999a_p-56' # Accurate to 14 hexadecimal digits after the hexadecimal point.
>>> float.fromhex(v2.replace('_',''))
1.1
>>>


Slightly more precision in the underlying calculations produces an accurate difference:

>>> diff = hex(eval(v1[:-5]) - eval(v2[:-5])) + 'p-56' ; diff
'0x7ae147ae147aep-56'
>>> float.fromhex(diff)
0.03
>>>


Exact value of diff:

>>> from decimal import *
>>> getcontext().prec = 100
>>>
>>> d1 = Decimal( eval(diff[:-4]) ) / Decimal(2**56) ; d1
Decimal('0.0299999999999999988897769753748434595763683319091796875')
>>> float(d1)
0.03
>>>
>>> getcontext().prec = 15 # Maximum precision of floats.
>>> d1 += 0 ; d1
Decimal('0.0300000000000000') # Accurate to 15 digits of precision.
>>> float(d1)
0.03
>>>


# Miscellaneous Topics

## Plus zero and minus zero

The concept of plus and minus zero does not apply to the integers:

>>> +0 ; -0
0
0
>>>

Floats retain the distinction:

>>> +0. ; -0. ; +0. == -0. == 0
0.0
-0.0
True
>>>

As do complex numbers:

>>> complex(-0., -0.)
(-0-0j)
>>> complex(-0., -0.).real
-0.0
>>> complex(-0., -0.).imag
-0.0
>>>
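Because -0.0 compares equal to 0.0, the == operator cannot detect the sign of a zero; math.copysign can:

```python
import math

assert -0.0 == 0.0                          # the comparison hides the sign
assert math.copysign(1.0, -0.0) == -1.0     # copysign exposes it
assert math.copysign(1.0, 0.0) == 1.0
```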

### Examples of plus and minus zero

>>> '{0:0.2f}'.format(.0000)
'0.00'
>>> '{0:0.2f}'.format(.0003)
'0.00'
>>> '{0:0.2f}'.format(-.0000)
'-0.00'
>>> '{0:0.2f}'.format(-.0003)
'-0.00'
>>>

A small non-zero positive number was displayed as '0.00'; a small non-zero negative number was displayed as '-0.00'.

According to values under "Cube roots of 1 simplified," one of the cube roots of unity is ${\displaystyle r_{3}={\frac {-1-1j*{\sqrt {3}}}{2}}.}$

We expect that ${\displaystyle r_{3}^{3}}$ should equal ${\displaystyle 1.}$ However:

>>> r3 = ( -1 - 1j * (3**0.5)) / 2 ; v3 = r3**3 ; v3
(0.9999999999999998-1.1102230246251565e-16j)
>>>

The best accuracy which we can expect from floats is 15 significant digits:

>>> '{0:0.15f}'.format( v3.real )
'1.000000000000000'
>>> '{0:0.15f}'.format( v3.imag )
'-0.000000000000000'
>>>

The respective values '1.000000000000000', '-0.000000000000000' are the most accurate that can be displayed with 15 places of decimals.

v3.imag, a small non-zero negative number, was displayed as '-0.000000000000000'.

>>> float( '-0.000000000000000' )
-0.0
>>>

The conversion from string to float preserves the distinction of minus zero, indicating that the original value was probably a small non-zero negative number.

## Precision and formatted decimals

Within this section the expression "decimal precision" means precision as implemented by Python's decimal module.

Precision means the number of significant digits used to display a value, counted from (and including) the first non-zero digit. Some examples using decimal precision:

>>> from decimal import *
>>> getcontext().prec = 4
>>>
>>> Decimal('0.0034567')
Decimal('0.0034567') # Precision not enforced here.
>>> Decimal('0.0034567') + 0
Decimal('0.003457') # '+ 0' forces result to conform to precision of 4.
>>> Decimal('0.003456432') + 0
Decimal('0.003456')
>>> Decimal('3456432') + 0
Decimal('3.456E+6')
>>> Decimal('3456789') + 0
Decimal('3.457E+6')
>>>
>>> Decimal('0.00300055') + 0
Decimal('0.003001')
>>> Decimal('0.00300033') + 0
Decimal('0.003000') # Trailing zeroes are retained to conform to precision of 4.
>>>
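The '+ 0' trick used above is equivalent to applying unary plus, which also applies the context precision (a small check, assuming prec is still 4):

```python
from decimal import Decimal, getcontext

getcontext().prec = 4
assert +Decimal('0.0034567') == Decimal('0.003457')   # unary plus applies the context
assert Decimal('0.0034567') + 0 == +Decimal('0.0034567')
```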

Note how the following values are rounded. More about rounding in the next section.

>>> from decimal import *
>>> getcontext().prec = 4
>>>
>>> Decimal('3456500') + 0
Decimal('3.456E+6') # Rounded down.
>>> Decimal('3457500') + 0
Decimal('3.458E+6') # Rounded up.
>>>

Python's string method .format can be used to display a value accurate to a given number of decimal positions.

>>> '{0:.4f}'.format(0.003456)
'0.0035' # Accurate to four places of decimals.
>>> '{0:.4f}'.format(0.003446)
'0.0034' # Accurate to four places of decimals.
>>>

Default rounding provided by Python's string method .format is the same as that for decimal precision:

>>>
>>> '{0:.4f}'.format(0.00345)
'0.0034' # Rounded down.
>>> '{0:.4f}'.format(0.00355)
'0.0036' # Rounded up.
>>>

## Rounding of close but inexact values

Even elementary calculations lead to increasing complexity in the practical execution of a task involving numbers.

It is fairly easy to imagine one fifth of one inch shown mathematically as ${\displaystyle {\frac {1}{5}}}$ inch.

What if you had to cut from a piece of steel rod a smaller piece with an exact length of ${\displaystyle {\frac {1}{5}}}$ inch?

Most common measuring instruments have lengths expressed in inches and sixteenths of an inch. Then ${\displaystyle {\frac {1}{5}}}$ inch becomes ${\displaystyle {\frac {3.2}{16}}}$ inch.

With a micrometer accurate to ${\displaystyle {\frac {1}{1000}}}$ inch, ${\displaystyle {\frac {200}{1000}}}$ inch is ${\displaystyle {\frac {1}{5}}}$ inch exactly.

What if you had to produce a piece of steel rod with length ${\displaystyle {\frac {1}{7}}}$ feet exactly?

${\displaystyle {\frac {1}{7}}}$ feet = ${\displaystyle {\frac {12}{7}}}$ inches = ${\displaystyle 1.714285714285...}$ inches = ${\displaystyle 1{\frac {11.428571428571...}{16}}}$ inches.

Both the measuring tape and micrometer seem inadequate for the task. You devise a system dependent on similar triangles and parallel lines. Success. The value ${\displaystyle {\frac {1}{7}}}$ can be produced geometrically.

What if you had to produce a square with an exact area of ${\displaystyle 2}$ square inches? The side of the square would be ${\displaystyle {\sqrt {2}}}$ inches exactly, but how do you measure ${\displaystyle {\sqrt {2}}}$ inches exactly? In practical terms your task becomes feasible if some tolerance in the area of the finished product is allowed, for example ${\displaystyle 2\pm 0.0002}$ square inches. Then a "square" with adjacent sides of 1.4142 and 1.4143 inches has an area within the specified tolerance.

### DRIP for example

Even simple calculations based on exact decimals quickly lead to impractical numbers. Suppose you have a DRIP (Dividend ReInvestment Plan) with any of the big well-known corporations. Most DRIPs permit you to purchase fractions of a share of stock. Shares of an attractive stock are currently trading for ${\displaystyle \$38}$ and you invest ${\displaystyle \$100}$, buying ${\displaystyle {\frac {100}{38}}}$ shares of stock.

On your statement you don't see a credit of ${\displaystyle 2{\frac {24}{38}}}$ shares of stock. The custodian of the plan may show your holding accurate to four places of decimals: ${\displaystyle 2.6315}$ shares.

The fraction ${\displaystyle {\frac {24}{38}}}$ is actually ${\displaystyle 0.63157894736...}$, but your credit is ${\displaystyle 2.6315}$ shares; this is called 'rounding down.'

The corporation then issues a dividend of ${\displaystyle \$0.37}$ per share when the stock is trading for ${\displaystyle \$37.26}$ per share, and you reinvest the dividend. If the custodian reinvests the dividend for you at a ${\displaystyle 5\%}$ discount to market price, your credit is ${\displaystyle {\frac {0.37*2.6315}{37.26*0.95}}=0.02750670960...}$ shares, shown on your statement as ${\displaystyle 0.0275}$ shares. This gives you a total holding of ${\displaystyle 2.6315+0.0275=2.659}$ shares, probably shown on your statement as ${\displaystyle 2.6590}$, with a current market value of ${\displaystyle 2.659*37.26=\$99.07434}$, shown on your statement as ${\displaystyle \$99.07}$.

The result of your mathematical calculations will probably be a close approximation of the exact value after allowing for tolerable errors based on precision and rounding of intermediate results.

### Default rounding

When you import Python's decimal module, the context is initialized with 'default' values.

>>> from decimal import *
>>> getcontext()
Context(prec=28, rounding=ROUND_HALF_EVEN, Emin=-999999, Emax=999999, capitals=1, clamp=0, flags=[], traps=[InvalidOperation, DivisionByZero, Overflow])
>>> setcontext(DefaultContext)
>>> getcontext()
Context(prec=28, rounding=ROUND_HALF_EVEN, Emin=-999999, Emax=999999, capitals=1, clamp=0, flags=[], traps=[InvalidOperation, DivisionByZero, Overflow]) # Same as above.
>>>


The default rounding method is ROUND_HALF_EVEN. This means that, if your result is exactly half-way between two representable values, it is rounded so that the last retained digit is even. For simplicity set precision to 4:

>>> getcontext().prec = 4
>>> getcontext()
Context(prec=4, rounding=ROUND_HALF_EVEN, Emin=-999999, Emax=999999, capitals=1, clamp=0, flags=[], traps=[InvalidOperation, DivisionByZero, Overflow])
>>>
>>> Decimal('0.012344') + 0
Decimal('0.01234')
>>> Decimal('0.012345') + 0
Decimal('0.01234') # Rounded down to nearest even number.
>>> Decimal('0.045674') + 0
Decimal('0.04567')
>>> Decimal('0.045675') + 0
Decimal('0.04568')  # Rounded up to nearest even number.
>>>
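Python's built-in round() uses the same round-half-to-even rule; exact binary halves make the tie visible:

```python
# Ties go to the nearest even integer, just as with ROUND_HALF_EVEN.
assert round(0.5) == 0
assert round(1.5) == 2
assert round(2.5) == 2
assert round(3.5) == 4
```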




A disadvantage of this method of rounding is that an examination of the result does not indicate what the original value was:

>>> Decimal('0.045675') + 0 == Decimal('0.045685') + 0
True
>>>


### Other rounding modes

#### ROUND_HALF_UP

Rounding mode ROUND_HALF_UP is illustrated as follows:

>>> getcontext().rounding=ROUND_HALF_UP
>>> getcontext()
Context(prec=4, rounding=ROUND_HALF_UP, Emin=-999999, Emax=999999, capitals=1, clamp=0, flags=[], traps=[InvalidOperation, DivisionByZero, Overflow])
>>>
>>> Decimal('0.012345') + 0
Decimal('0.01235')
>>> Decimal('0.012355') + 0
Decimal('0.01236')
>>> Decimal('0.012365') + 0
Decimal('0.01237')
>>> Decimal('0.012375') + 0
Decimal('0.01238')


Same logic for negative numbers:

>>> Decimal('-0.012345') + 0
Decimal('-0.01235')
>>> Decimal('-0.012355') + 0
Decimal('-0.01236')
>>> Decimal('-0.012365') + 0
Decimal('-0.01237')
>>> Decimal('-0.012375') + 0
Decimal('-0.01238')
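The decimal module offers several other modes; Context.plus() applies a context to a value, so the same half-way number can be compared across modes (a survey sketch):

```python
from decimal import (Decimal, Context, ROUND_HALF_EVEN, ROUND_HALF_UP,
                     ROUND_DOWN, ROUND_CEILING, ROUND_FLOOR)

d = Decimal('0.012345')
for mode in (ROUND_HALF_EVEN, ROUND_HALF_UP, ROUND_DOWN, ROUND_CEILING, ROUND_FLOOR):
    ctx = Context(prec=4, rounding=mode)      # Context.plus applies prec and rounding
    print(mode, ctx.plus(d))
# e.g. ROUND_HALF_EVEN -> 0.01234, ROUND_HALF_UP -> 0.01235, ROUND_DOWN -> 0.01234
```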


#### ROUND_DOWN

The numbers in the example under DRIP above are derived using Python's .quantize() method with rounding set to ROUND_DOWN.

setcontext(DefaultContext)

number_of_initial_shares = 100/38

print ('number_of_initial_shares =', number_of_initial_shares)

number_of_initial_shares = Decimal(number_of_initial_shares).quantize(Decimal('.0001'), rounding=ROUND_DOWN)

print (
'''After rounding down
number_of_initial_shares =''', number_of_initial_shares
)

shares_added = 0.37*float(number_of_initial_shares) / (37.26*0.95)

print ('''
shares_added =''',  shares_added)

shares_added = Decimal(shares_added).quantize(Decimal('.0001'), rounding=ROUND_DOWN)

print (
'''After rounding down
shares_added =''', shares_added
)

total_shares = shares_added + number_of_initial_shares

value = total_shares * Decimal('37.26')

print ('''
value =''', value)

value = value.quantize(Decimal('.01'), rounding=ROUND_DOWN)

print (
'''After rounding down
value =''', '$'+str(value) )

Output:

number_of_initial_shares = 2.6315789473684212
After rounding down
number_of_initial_shares = 2.6315

shares_added = 0.02750670960815888
After rounding down
shares_added = 0.0275

value = 99.074340
After rounding down
value = $99.07

When using method .quantize(....), ensure that getcontext().prec is adequate:

>>> getcontext().prec = 6
>>> Decimal('0.0000123456').quantize(Decimal('1e-6'))
Decimal('0.000012')
>>> Decimal('123.0000123456').quantize(Decimal('1e-6'))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
decimal.InvalidOperation: [<class 'decimal.InvalidOperation'>]
>>> getcontext().prec = 9
>>> Decimal('123.0000123456').quantize(Decimal('1e-6'))
Decimal('123.000012') # Desired result has precision of 9.
>>>
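A temporarily raised precision is best applied with localcontext(), so it does not leak into later calculations (a sketch):

```python
from decimal import Decimal, localcontext, ROUND_DOWN

with localcontext() as ctx:
    ctx.prec = 9          # enough digits for the quantized result
    result = Decimal('123.0000123456').quantize(Decimal('1e-6'), rounding=ROUND_DOWN)
print(result)             # 123.000012
```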


# Assignments

1. Experiment with the python interpreter using integers, floats, Booleans, and complex numbers, noting errors, for example:

>>> + 44 ; eval(' - 33 ') ; eval(' - 0003.e-002 ') ; eval(' + 0003.00000E-002 ') ; eval(' + 0003.12E0073 ')
>>> + 4 + 1j*3 ; complex(2,-3) ; complex("2+3j") ; eval("2+3J") ; complex('-3',7j)
>>> bool(6) ; bool(0) ; bool('0') ; '123'+0 ; bool('123') + 1 ; 1+3 == 4 ; 2**2 == -3

2. Think critically about integers and floats. When should you use integers? When should you use floats? A loss of significance for a float value should be expected when working with long or insignificant numbers. When would this become a problem?

3. Under "Using formatted string" above the test for correct summation is:

if sum != count / 10_000_000_000 :

This could be written as:

if sum != count * increment :

However:

>>> 5*(1e-10) ; 6*(1e-10) ; 7*(1e-10)
5e-10
6e-10
7.000000000000001e-10
>>>
>>> 7 / (1e10)
7e-10
>>>

How would you write the line if sum != count * increment : to ensure an accurate test?

4. Using techniques similar to those contained in section "Cube roots of 1 simplified" calculate all five of the fifth roots of unity.

5. Using the "Proof" under "Multiplication of complex numbers" show that: ${\displaystyle {\frac {\cos A+1j*\sin A}{\cos B+1j*\sin B}}=\cos(A-B)+1j*\sin(A-B)}$

6. One of the cube roots of unity is ${\displaystyle r_{2}={\frac {-1+1j*{\sqrt {3}}}{2}}.}$ For greater precision than is available with floating point arithmetic, use Python's decimal module to calculate ${\displaystyle r_{2}^{3}.}$

