10. Floating Point, Separate Assembly and Objects

Part of 22C:60, Computer Organization Notes
by Douglas W. Jones
THE UNIVERSITY OF IOWA Department of Computer Science

Classes and Objects

In the previous chapter, we discussed several classes of number representations, along with the implementation of operations ranging from addition and subtraction to multiplication and division. In an object-oriented programming language such as C++ or Java, we can identify each of these classes or data types with a specific linguistic construct, the class definition. Each class definition completely encapsulates all of the details of the representation of all objects of that class, so that outside that class definition, operations on objects of that class are carried out only by the methods defined as part of the class definition.

Assembly and machine language programmers cannot completely hide the details of the data types with which they work. Any time an object is loaded into registers, its size is exposed, and any time specific machine instructions are used to manipulate an object, these instructions must be used with full knowledge of their effect on the representation of the object as well as on the abstract value connected to that representation.

Nonetheless, an assembly language programmer on a machine such as the Hawk can go a considerable distance toward isolating the users of objects from the details of their representations! We can do this in several steps: first, isolating the code used to implement operations from the code that uses those operations; second, providing clean interface specifications for the objects; and finally, allowing for polymorphic classes, that is, mixtures of different but compatible representations.

Separate Assembly

Suppose you have a set of subroutines, for example, multiply and divide, that you want to break off from a large program. Since the mid-1950s, there have been assembly languages that allowed such routines to be separately assembled before use in other programs. We have already been using such tools here, specifically, in the form of the Hawk Monitor, a separately assembled block of code that includes input, output, limited arithmetic support, and handlers for traps.

There are at least two good reasons to separate any program into multiple source files, no matter what programming language is being used. First, smaller source files are frequently easier to edit. Real-world application programs are frequently huge, with total sizes measured in thousands to millions of lines of code. Second, by separating an application into multiple separate pieces, we can isolate program components that have already been tested from those currently undergoing development. Some files may contain standard components that are used in many applications, while other files are unique to one application.

In assembly language programs, and to some extent, in C and C++ programs, there is another reason to separate programs into multiple source files. In these languages, some identifiers are defined locally within one source file, while other identifiers are global to all source files. In C and C++, for example, static functions and static global variables are local to the source file in which they are defined, while other functions and globals are global across all source files that make up an application. In the SMAL assembly language, all identifiers are, by default, purely local to one source file, but they may be made global using explicit INT and EXT declarations.
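
For example, in C, these rules can be sketched as follows (the file and function names here are purely illustrative):

Local and global identifiers in C, a sketch
/* helpers.c */
static int calls = 0;           /* local: visible only within helpers.c */

static int twice( int x )       /* local: a file-private helper function */
{
        return x + x;
}

int quadruple( int x )          /* global: callable from any other source file */
{
        calls = calls + 1;
        return twice( twice( x ) );
}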

If we wanted to put our own integer multiply and divide routines in a separate source file from the program or programs that used them, we might structure the file like this:

Outline of a separately assembled source file for the Hawk
        TITLE   intmuldiv.a, integer multiply and divide
        INT     INTMUL
        INT     INTDIV

        SUBTITLE multiply

INTMUL:         ; link through R1
                ; on entry, R3 = multiplier
                ;           R4 = multiplicand
                ; ... other interface specifications

        ... Code for multiply

        SUBTITLE divide

INTDIV:         ; link through R1
                ; on entry, R3 = low 32 bits of dividend
                ;           R4 = high 32 bits of dividend
                ; ... other interface specifications

        ...  Code for divide

        END

This source file, named intmuldiv.a, contains no main program and has no starting address. Some aspects of the file are merely examples of good programming practice. For example, the title of the file begins with the file name, so that assembly listings are always self-descriptive, and the remainder of the title briefly describes the contents. The subtitle given before each routine adds additional documentation. Most decent assemblers provide similar documentation support, and where they do not, careful use of comments can achieve the same effect. Listings created by the SMAL assembler will always contain a table of contents constructed from the subtitles in the source file.

We have adopted a naming convention here that is worth noting. Instead of calling our multiplication and division routines MULTIPLY and DIVIDE, we have named them INTMUL and INTDIV. There could be many different multiply and divide routines in a large program, some for integers, some for long integers, some for floating point and perhaps some for complex numbers. If we were programming in a language like C++ or Java, we would distinguish between these using overloading rules or using object prefixes such as a.multiply(), where this means "the multiply routine belonging to the class of the variable a." In assembly language, we can't easily do either of these, but we can prefix the name of the subroutine with the name (or at least, with an abbreviated name) of the class to which it belongs.
Object oriented versus conventional calls
Object oriented
    x = a.multiply( b )
    y = c.multiply( d )
Conventional
    x = integer_multiply( a, b )
    y = floating_multiply( c, d )

The one feature of the SMAL file given above that is specific to separate assembly is the use of INT directives to declare the identifiers INTMUL and INTDIV. These directives declare to the assembler that these internally defined identifiers in this file are to be exported for use in other assembly source files. Up to this point, the scope rules of our assembly language have appeared to be trivial: All identifiers declared by labels or definitions anywhere in a source file may be used anywhere in that file. Now, we add to this the rule that identifiers declared with the INT directive may be available elsewhere.

The user of a separately compiled collection of routines typically needs to write a short list of declarations in order to access each of those routines. While there is nothing in most assembly languages that requires that these declarations be gathered together, we will do so. Specifically, we will gather all the declarations needed to use the contents of intmuldiv.a into a file called intmuldiv.h, and we will ask users of our multiply and divide routines to use the USE directive to insert intmuldiv.h into their source files, instead of copying it directly. Files with the suffix .h are usually called header files, or they are described as interface specification files. This is exactly the approach that C and C++ programmers have long used to solve the same problem.

The header file for users of a separately assembled source file
        ; intmuldiv.h -- interface specification for intmuldiv.a

        ALIGN   4
        EXT     INTMUL  ; integer multiply
PINTMUL:W       INTMUL  ; link through R1
                        ; on entry, R3 = multiplier
                        ;           R4 = multiplicand
                        ; ... other interface specifications

        EXT     INTDIV  ; integer divide
PINTDIV:W       INTDIV  ; link through R1
                        ; on entry, R3 = low 32 bits of dividend
                        ;           R4 = high 32 bits of dividend
                        ; ... other interface specifications

Here, again, we have decorated our source file with comments that are not, strictly speaking, necessary. We have given our file a title, but we have not used the TITLE directive because we don't usually want the header file to be listed in an assembly listing, and we have also included comments giving the complete interface definition for each routine listed in our header file.

The key components of our header file are the EXT declarations for INTMUL and INTDIV. These declarations tell the assembler that these symbols are defined externally in a different source file, so there must be no local definition in the source file that includes this declaration. With these declarations in place, the assembler will set things up so that the linker can bind the definition exported from one object file to the uses made of those definitions in another. If no definition is exported by any object file, the linker will report an error.

The declarations of PINTMUL and PINTDIV as labels on words containing the values of INTMUL and INTDIV make up for a shortcoming of the Hawk linker. The Hawk linker can fill in the value of a word from an external symbol, but external symbols cannot be used for PC-relative indexed addressing on the Hawk. So, we must call externally defined subroutines using these pointers. Here is a fragment of an assembly source file that uses these integer multiply and divide routines:

Using separately compiled subroutines
        TITLE   main.a
; ... any necessary header comments

        USE     "monitor.h"
        USE     "intmuldiv.h"

        ... whatever code is needed

        LOAD    R1,PINTMUL
        JSRS    R1,R1           ; call intmul

        ... whatever code is needed

In general, the code for all methods associated with an abstract class will go in the same source file. This file ends up serving as the class implementation file, while the header file holds the interface definition for the class.

Sometimes, it is necessary for a group of subroutines to share variables with each other. This is done using class variables in programming languages like Java, or using static variables in C. The SMAL assembly language provides COMMON blocks to serve this purpose. Suppose, for example, we wanted to keep statistics on the number of times INTMUL and INTDIV were called during the execution of a program. We could do this as follows:

Use of a COMMON block for static variables
        TITLE   intmuldiv.a, integer multiply and divide
        INT     INTMUL
        INT     INTDIV

        COMMON  INTSTATS,STATSIZE
MULCNT  =       0       ; count of calls to INTMUL
DIVCNT  =       4       ; count of calls to INTDIV
STATSIZE=       8
PSTATS: W       INTSTATS

        SUBTITLE multiply

INTMUL:         ; ... interface specifications

        ... Code for multiply

        LOAD    R5,PSTATS       ; prepare to access intstats fields
        LOAD    R6,R5,MULCNT
        ADDSI   R6,1
        STORE   R6,R5,MULCNT    ; mulcnt = mulcnt+1

        ... More code for multiply

Common blocks in the SMAL assembly language have global names, so we called our block holding statistics INTSTATS, using the same prefix we used for the other internal definitions we are exporting to the larger world. The size and contents of this common block, however, are private to this source file, so we can omit the prefix and use purely local names.

The COMMON directive in SMAL requires the programmer to declare the size of the block of memory required. Here, we have used a pattern for the declaration of the size and structure of the common block that is very similar to the pattern we used for activation records, so we have given each field of the COMMON block a symbolic name, and we have also given the block size a name.

Later in the code, when it comes time to reference the common block, we have loaded a register with the address of the entire block and then referenced the fields of the common block in exactly the same way that we referenced the fields of the activation record of a subroutine. The only difference is that we have not used R2. Local variables go in the activation record, while static or class variables go in the common block!
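
For comparison, the analogous structure in C uses file-scope static variables; here is a minimal sketch (the file name and the trivial multiply body are ours):

Static variables in C, analogous to the COMMON block
/* intmuldiv.c */
static int mulcnt = 0;          /* like MULCNT: shared by the routines in this file */
static int divcnt = 0;          /* like DIVCNT */

int intmul( int multiplier, int multiplicand )
{
        mulcnt = mulcnt + 1;                    /* count this call */
        return multiplier * multiplicand;       /* stands in for the real algorithm */
}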

Exercises

a) Rewrite the recursive FIBONACCI routine from Chapter 6 of the notes so that it uses a common block to count the number of calls to FIBONACCI (recursive calls as well as calls from outside).

b) Write header files for the DSPNUM and FIBONACCI routines from Chapter 6, assuming that they were each assembled from a separate source file.

c) What code must be added to DSPNUM and FIBONACCI from Chapter 6 in order to allow them to be assembled from separate source files?

d) Strictly speaking, header files are never needed (not in C or C++, and not in SMAL). Rewrite the code above for a user of a set of separately compiled subroutines so that it does not use intmuldiv.h but instead has, directly as part of the code, the bare minimum content that was formerly in intmuldiv.h and is minimally sufficient to allow use of the multiply and divide routines in intmuldiv.a.

Linkage

Once we have created the source files for a program that includes multiple separately assembled components, we must assemble them into one executable program. This is the function of the linker. In the above section, we assumed that we had a main program, main.a that called subroutines in intmuldiv.a and in the Hawk monitor. Each time a programmer changes intmuldiv.a the programmer should re-assemble it to produce a new version of intmuldiv.o, and each time main.a is changed, it should be re-assembled to make a new version of main.o. To test the program, however, we need to combine these object files to make an executable file. We do this using the linker, which combines main.o with intmuldiv.o and produces link.o, the executable file.

The path from source code to executable object code
    source code:      main.a               intmuldiv.a
    assembled by:     smal main.a          smal intmuldiv.a
    object code:      main.o               intmuldiv.o          monitor.o
    linked by:        link main.o intmuldiv.o
    executable code:  link.o

The linker does a number of jobs. It combines the two object files main.o and intmuldiv.o, but it also adds in monitor.o automatically, so that the user program can call on monitor routines. In addition, it takes all of the common blocks mentioned in any of the separate object files and appends them end to end in main memory. In allocating space for all of the code files and common blocks, the linker takes care to make each file or block begin at a memory address that is divisible by 4, so that aligned data within the code file or common block will be physically aligned in memory.

Exercises

e) Assuming that dspnum.a and fibonacci.a are the source files for separately assembled versions of the subroutines given in chapter 6, and that you have a main program main.a, give the sequence of commands you would issue in order to assemble these pieces, link them, and make them ready to execute as a file called link.o.

An example, Floating Point Arithmetic

To illustrate these ideas, consider the problem of supporting floating-point arithmetic on a machine where there is no hardware support for floating point. The Hawk architecture includes opcodes reserved for use by a floating-point coprocessor, but we will ignore those instructions here and assume that we have a low-end Hawk machine that does not support floating-point operations in hardware.

First, of course, we need to look at the details of floating-point numbers. In decimal, floating-point numbers are sometimes referred to as being expressed in scientific notation. For example, consider 6.02214199 × 10²³, Avogadro's number, which you may recognize from elementary chemistry. This has the mantissa 6.02214199 and the exponent 23, to the base 10.

When we write a number in scientific notation, we always write it in normalized form. The normalization rule is that we always express the mantissa in a form with one digit before the point and the remaining significant digits after the point. So, for example, we write 6.02 × 10²³ and not 60.2 × 10²² or 0.602 × 10²⁴, even though all three of these are mathematically equal. An alternative way of expressing this normalization rule is that, except for the special case of zero, the minimum value of the mantissa is 1.0, and the mantissa is always less than 10.

Binary floating-point number systems are generally structured identically, with an exponent expressed in binary and a mantissa expressed in binary, but the normalization rules that are used vary considerably, as do the choices for representation of the sign of the exponent and mantissa.

Before we discuss the formats used to implement floating-point operations in hardware, we will examine a software implementation, using floating-point numbers as example objects.

The interface specification for a class of objects should list all operations applicable to objects of that class, the methods. The implementation of the class must then give the details of the object representation and the specific algorithms used to implement each method. It is good practice to add documentation to the interface specification, so that it serves as a manual for the class as well as a formal interface.

For our floating-point class, the set of operations is fairly obvious. We want operators that add, subtract, multiply and divide floating-point numbers, and we also want operators that return the integer part of a number, and that convert integers to floating-point form. We probably want other operations, but we will forgo those for now.

In most object-oriented programming languages, a strong effort is made to avoid copying objects from place to place. Instead, objects sit in memory and object handles are used to refer to them. The handle for an object is actually just a pointer to that object, that is, a word holding the address of the object. Therefore, our floating point operators will take, as parameters, the addresses of their operands, not the values.

Finally, the interface specification for a class must indicate how to allocate storage for an element of that class. The only thing a user of the object needs to know is the size of the object, not the internal details of its representation. The following interface specification for our Hawk floating point package assumes that each floating point number is stored in two words of memory, an exponent and a mantissa of one word each.
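
In C terms, this interface amounts to an opaque type of known size plus a set of functions that take handles. A rough sketch, with hypothetical names and assuming 4-byte ints, might read:

The same kind of interface, sketched in C
typedef struct { int body[2]; } flt;        /* FLOATSIZE bytes; layout private */

void fltfloat( flt *dst, int i );           /* like FLOAT:  integer to floating */
int  fltint( flt *src );                    /* like FLTINT: floating to integer */
void fltadd( flt *dst, flt *a, flt *b );    /* like FLTADD: *dst = *a + *b */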

Interface Specification for a Hawk floating-point package
        TITLE float.h, interface specification for float.a

FLOATSIZE = 8   ; size of a floating point number, in bytes

                ; for all calling sequences here:
                ;   R1 = return address
                ;   R2 = pointer to activation record
                ;   R3-7 = parameters and temporaries
                ;   R8-15 = guaranteed to be saved and restored

                ; functions that return floating values use:
                ;   R3 = pointer to place to put return value
                ;       the caller must pass this pointer!

        ALIGN   4
        EXT     FLOAT   ; convert integer to floating
PFLOAT: W       FLOAT   ; on entry, R3 = pointer to floating result
                        ;           R4 = integer to convert

        EXT     FLTINT  ; convert floating to integer
PFLTINT:W       FLTINT  ; on entry, R3 = pointer to floating value
                        ; on exit,  R3 = integer return value

Interface Specification, continued
        EXT     FLTCPY  ; copy a floating point number
PFLTCPY:W       FLTCPY  ; on entry, R3 = pointer to floating result
                        ;           R4 = pointer to floating operand

        EXT     FLTTST  ; test sign and zeroness of floating number
PFLTTST:W       FLTTST  ; on entry, R3 = pointer to floating value
                        ; on exit,  R3 = integer -1, 0 or 1

        EXT     FLTADD  ; add floating-point numbers
PFLTADD:W       FLTADD  ; on entry, R3 = pointer to floating result
                        ;           R4 = pointer to addend
                        ;           R5 = pointer to augend

        EXT     FLTSUB  ; subtract floating-point numbers
PFLTSUB:W       FLTSUB  ; on entry, R3 = pointer to floating result
                        ;           R4 = pointer to subtrahend
                        ;           R5 = pointer to minuend

        EXT     FLTNEG  ; negate a floating-point number
PFLTNEG:W       FLTNEG  ; on entry, R3 = pointer to floating result
                        ;           R4 = pointer to operand

        EXT     FLTMUL  ; multiply floating-point numbers
PFLTMUL:W       FLTMUL  ; on entry, R3 = pointer to floating result
                        ;           R4 = pointer to multiplicand
                        ;           R5 = pointer to multiplier

        EXT     FLTDIV  ; divide floating-point numbers
PFLTDIV:W       FLTDIV  ; on entry, R3 = pointer to floating result
                        ;           R4 = pointer to dividend
                        ;           R5 = pointer to divisor

Exercises

f) Write a main program that uses a common block of size FLOATSIZE to hold each floating point variable it needs in the computation of the floating point representation of 0.1, computed by converting 1 and 10 to floating point and then dividing 1.0 by 10.0. This should call FLOAT several times, and then FLTDIV.

g) Write a separately compilable subroutine called SQUARE that takes two pointers to floating point numbers as parameters and returns the square of the second number in the first. Don't forget to write an appropriate interface specification, and comment everything appropriately, including an indication of the file names that should be used.

A floating point representation

It is easy to suggest that a floating point number can be represented as a pair of words, one holding the exponent and another holding the mantissa, but this is not enough detail. Which word is which? We need to specify the interpretation of the bits of each of these words. What is the range of exponent values? How do we represent the sign of the exponent? How is the mantissa normalized? How do we represent non-normalized values such as zero?

On a computer that supports two's complement integers, it makes sense to represent the exponent and mantissa as two's complement values. We can represent zero using a mantissa of zero and the smallest legal exponent. The more difficult question is, where is the point in our two's complement mantissa? We could put the point anywhere and make it work, but the two obvious choices are to use an integer mantissa or to put the point immediately to the right of the sign bit. In the latter case, we will normalize the mantissa so that the bit immediately to the right of the point is always a one. The following examples illustrate this number format.

A floating-point number representation
exponent  00000000000000000000000000000000 +0.5 × 2⁰ = 0.5
mantissa  01000000000000000000000000000000
exponent  00000000000000000000000000000001 +0.5 × 2¹ = 1.0
mantissa  01000000000000000000000000000000
exponent  00000000000000000000000000000001 -0.5 × 2¹ = -1.0
mantissa  11000000000000000000000000000000
exponent  00000000000000000000000000000001 +0.75 × 2¹ = 1.5
mantissa  01100000000000000000000000000000
exponent  00000000000000000000000000000001 -0.75 × 2¹ = -1.5
mantissa  10100000000000000000000000000000
exponent  11111111111111111111111111111111 +0.5 × 2⁻¹ = 0.25
mantissa  01000000000000000000000000000000
exponent  11111111111111111111111111111101 +0.5 × 2⁻³ = 0.0625
mantissa  01000000000000000000000000000000
exponent  11111111111111111111111111111101 ~8/10 × 2⁻³ = 0.1...
mantissa  01100110011001100110011001100110
exponent  11111111111111111111111111111101 ~-8/10 × 2⁻³ = -0.1...
mantissa  10011001100110011001100110011010
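
These examples can be checked mechanically. Since the mantissa is a two's complement fraction with 31 places after the point, it is an integer count in units of 2⁻³¹, and the standard C library function ldexp decodes an (exponent, mantissa) pair directly (a sketch; the function name decode is ours):

Decoding the representation in C, for checking
#include <math.h>

double decode( int exponent, int mantissa )
{
        /* mantissa counts units of 2^-31, so scale by 2^(exponent - 31) */
        return ldexp( (double)mantissa, exponent - 31 );
}

For example, decode( 1, 0x40000000 ) is 2³⁰ × 2⁻³⁰ = 1.0, matching the second entry in the table.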

Exercises

h) In this number system, what is the largest possible positive value (in binary!)?

i) In this number system, what is the smallest possible positive nonzero normalized value?

j) In this number system, how is 10.0₁₀ represented?

k) In this number system, how is 100.0₁₀ represented?

Normalizing a floating point number

Many operations on floating point numbers produce results that are unnormalized, and these must be normalized before performing additional operations on them. If this is not done, there will be a loss of precision in the results; classical scientific notation is always presented in normalized form for the same reason. To normalize a floating point number, we must distinguish some special cases: First, is the number zero? Zero cannot be normalized! Second, is the number negative? Because we have opted to represent our mantissa in two's complement form, negative numbers are slightly more difficult to normalize; this is why many hardware floating-point systems use signed magnitude for their floating point numbers.

The normalize subroutine is not part of the public interface to our floating point package; rather, it is a private component, used as the final step of just about every floating point operation. Therefore, we can write it with the assumption that operands are passed in registers instead of using pointers to memory locations. We will code this here using registers 3 and 4 to hold the exponent and mantissa, respectively, both on entrance and on exit:

Normalizing a floating-point number
        SUBTITLE normalize

NORMALIZE:      ; normalize floating point number
                ; link through R1
                ; R3 = exponent on entry and exit
                ; R4 = mantissa on entry and exit
                ; no other registers used
        TESTR   R4
        BZR     NRMNZ           ; if (mantissa == 0) {

        LIL     R3,#800000
        SL      R3,8            ;   exponent = 0x80000000;
        JUMPS   R1              ;   return;

NRMNZ:  BNS     NRMNEG          ; } else if (mantissa > 0) {
NRMPLP:                         ;   while
        BITTST  R4,30
        BCS     NRMPRT          ;     ((mantissa & 0x40000000) == 0) {
        SL      R4,1            ;     mantissa = mantissa << 1;
        ADDSI   R3,-1           ;     exponent = exponent - 1;
        BR      NRMPLP          ;   }
NRMPRT:
        JUMPS   R1              ;   return;

NRMNEG:                         ; } else { /* mantissa < 0 */
        ADDSI   R4,-1           ;   mantissa = mantissa - 1;
                                ;   /* mantissa now in one's complement form */
NRMNLP:                         ;   while
        BITTST  R4,30
        BCR     NRMNRT          ;     ((mantissa & 0x40000000) != 0) {
        SL      R4,1            ;     mantissa = mantissa << 1;
        ADDSI   R3,-1           ;     exponent = exponent - 1;
        BR      NRMNLP          ;   }
NRMNRT:
        ADDSI   R4,1            ;   mantissa = mantissa + 1;
                                ;   /* mantissa now in two's complement form */
        JUMPS   R1              ;   return;
                                ; }

There are two tricks in this code worth mentioning. First, this code uses the BITTST instruction to test bit 30 of the mantissa. This instruction moves the indicated bit to the C condition code; in fact, the assembler converts this instruction to either a left or a right shift to move the indicated bit into the carry bit while discarding the shifted result using R0. In C, C++ or Java, in contrast, inspection of one bit of a word is most easily expressed by anding that word with a constant with just that bit set.

The second trick involves normalizing negative numbers. In the example values presented above, note that the representation of -0.5 has bit 30 set to 1, while -0.75 has it set to zero. By subtracting or adding one in the least significant bit of each negative value, we can convert back and forth between one's complement and two's complement, allowing us to take advantage of the fact that bit 30 of the one's complement representation of normalized mantissas is always zero.

Exercises

l) The above code does not detect underflow! If it decrements the exponent below the smallest legal value, it produces the highest legal value. Rewrite the code to make it produce a value of zero whenever decrementing the exponent would underflow.

Floating to Integer and Integer to Floating Conversion

Conversion from integer to floating point is remarkably simple! All that needs to be done is to adjust the exponent field to 31 and set the mantissa field to the desired integer, and then normalize the result. This is because the fixed point fractions we are using to represent the mantissa can be viewed as integer counts in units of 2⁻³¹. As a result, our code simply moves the data into place for a call to normalize and then stores the results in the indicated memory location.

Integer to Floating Conversion on the Hawk
; format of a floating point number stored in memory
EXPONENT  = 0
MANTISSA  = 4
FLOATSIZE = 8

        SUBTITLE integer to floating conversion

FLOAT:                  ; on entry, R3 = pointer to floating result
                        ;           R4 = integer to convert
        MOVE    R5,R1           ; R5 = return address
        MOVE    R6,R3           ; R6 = pointer to floating result
        LIS     R3,31           ; exponent = 31; /* R3-4 is now floating */
        JSR     R1,NORMALIZE    ; normalize( R3-4 );
        STORES  R3,R6           ; result->exponent = exponent;
        STORE   R4,R6,MANTISSA  ; result->mantissa = mantissa;
        JUMPS   R5              ; return; /* uses saved return address! */

Conversion of floating-point numbers to integer is a bit more complex, but only because we have no pre-written denormalize routine that will set the exponent field to 31. Instead, we need to write this ourselves! Where the normalize routine shifted the mantissa left and decremented the exponent until the number was normalized, the floating to integer conversion routine will have to shift the mantissa right and increment the exponent until the exponent has the value 31.

This leaves open the question of what happens if the initial value of the exponent was greater than 31. The answer is, in that case, the integer part of the number is too large to represent in 32 bits! In this case, we could raise an exception, if we had a decent exception handling model, or, lacking that, we could set the overflow condition code, allowing the calling program to test to see if the conversion was legal or not. Here, we will do neither, leaving this problem as an exercise for the reader.

Floating to Integer Conversion on the Hawk
        SUBTITLE floating to integer conversion

FLTINT:                 ; on entry, R3 = pointer to floating value
                        ; on exit   R3 = integer result
        LOADS   R4,R3           ; R4 = argument->exponent
        LOAD    R3,R3,MANTISSA  ; R3 = argument->mantissa
FINTLP:                         ; while
        CMPI    R4,31
        BGE     FINTLX          ;   (exponent < 31) {
        SR      R3,1            ;   mantissa = mantissa >> 1
        ADDSI   R4,1            ;   exponent = exponent + 1;
        BR      FINTLP          ; }
FINTLX:
        ; unchecked error condition: exponent > 31 implies overflow
        JUMPS   R1              ; return denormalized mantissa

Exercises

m) The above code for floating to integer conversion truncates the result in an odd way for negative numbers. If the floating point input value is -1.5, what integer does it return? Why?

n) The above code for floating to integer conversion truncates the result in an odd way for negative numbers. Fix the code so that it truncates the way a naive programmer would expect.

o) The above code for floating to integer conversion truncates, but sometimes, it is desirable to have a version that rounds a number to the nearest integer. Binary numbers can be rounded by adding one in the most significant digit that will be discarded, that is, in the 0.5's place. Write code for FLTROUND that does this.

p) The above code for floating to integer conversion could do thousands of right shifts for numbers with very negative exponents! This is an utter waste. Modify the code so that it automatically recognizes these extreme cases and returns a value of zero whenever more than 32 shifts would be required.

Floating Point Addition

We are now ready to explore the implementation of some of the floating point operations. These follow quite naturally from the standard rules for working with numbers in scientific notation. Consider the problem of adding 9.92 × 10³ to 9.25 × 10¹. We begin by denormalizing the numbers so that they have the same exponents; this allows us to add the mantissas, after which we renormalize the result and round it to the appropriate number of decimal places:

Adding in scientific notation
given          9.92 × 10³  +  9.25 × 10¹
denormalized   9.92 × 10³  +  0.0925 × 10³
rearranged     (9.92 + 0.0925) × 10³
added          10.0125 × 10³
normalized     1.00125 × 10⁴
rounded        1.00 × 10⁴

The final rounding step is one many students forget, particularly in this era of scientific calculators. For numbers given in scientific notation, we have the convention that the number of digits given is an indication of the precision of the measurements from which the numbers were taken. As a result, if two numbers are given in scientific notation and then added or subtracted, the result should not be expressed to greater precision than the least precise of the operands! When throwing away the less significant digits of the result, we always round in order to minimize the loss of information and the introduction of systematic error that would result from truncation.

An important question arises here: Which number do we denormalize prior to adding? The answer is, we never want to lose the most significant digits of the sum, so we always increase the smaller of the two exponents while shifting the corresponding mantissa to the right.

In addition, we are seriously concerned with preventing a carry out of the high digit of the result; this caused no problem with pencil and paper, but if we do this in software, we must be prepared to recover from overflow in the sum! This problem is solved in the following floating point add subroutine for the Hawk:

Adding two floating point numbers on the Hawk
        SUBTITLE floating add

; activation record format
RA      =       0       ; return address
R8SAVE  =       4       ; place to save R8

FLTADD:         ; on entry, R3 = pointer to floating sum
                ;           R4 = pointer to addend
                ;           R5 = pointer to augend
        STORES  R1,R2           ; save return address
        STORE   R8,R2,R8SAVE    ; save R8
        MOVE    R7,R3           ; R7 = saved pointer to sum
        LOADS   R3,R4           ; R3 = addend.exponent
        LOAD    R4,R4,MANTISSA  ; R4 = addend.mantissa
        LOAD    R6,R5,MANTISSA  ; R6 = augend.mantissa
        LOADS   R5,R5           ; R5 = augend.exponent
        CMP     R3,R5
        BLE     FADDEI          ; if (addend.exponent > augend.exponent) {
        MOVE    R8,R3
        MOVE    R3,R5
        MOVE    R5,R8           ;   exchange exponents
        MOVE    R8,R4
        MOVE    R4,R6
        MOVE    R6,R8           ;   exchange mantissas
FADDEI:                         ; }
                                ; assert (addend.exponent <= augend.exponent)
FADDDL:                         ; while
        CMP     R3,R5
        BGE     FADDDX          ;   (addend.exponent < augend.exponent) {
        ADDSI   R3,1            ;   increment addend.exponent
        SR      R4,1            ;   shift addend.mantissa
        BR      FADDDL
FADDDX:                         ; }
                                ; assert (addend.exponent = augend.exponent)
        ADD     R4,R6           ; add mantissas
        BOR     FADDNO          ; if (overflow) { /* we need one more bit */
        ADDSI   R3,1            ;   increment result.exponent
        SR      R4,1            ;   shift result.mantissa
        SUB     R0,R0,R0        ;   set carry bit in order to ...
        ADJUST  R4,CMSB         ;   flip sign bit of result (overflow repaired!)
FADDNO:                         ; }
        JSR     R1,NORMALIZE    ; normalize( result )
        STORES  R3,R7           ; save result.exponent
        STORE   R4,R7,MANTISSA  ; save result.mantissa
        LOAD    R8,R2,R8SAVE    ; restore R8
        LOADS   R1,R2           ; restore return address
        JUMPS   R1              ; return!

Most of this code follows simply from the logic of adding that we demonstrated with the addition of two numbers using scientific notation. There are two or three places, however, worthy of note.

First, about 1/3 of the way down, this code exchanges the two numbers; this involves exchanging two pairs of registers. There are many ways to do this; the approach used here is the simplest to understand, setting the value in one of the registers aside, moving the other register, and then moving the set-aside value into its final resting place. This takes three move instructions and a spare register. There are other ways to do this that are just as fast but do not require a spare register, but these are harder to understand. The most famous, using the exclusive or operator, is a=a⊕b;b=a⊕b;a=a⊕b.
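
In C, the two exchange methods look like this (t is the spare variable):

Two ways to exchange a and b
t = a;  a = b;  b = t;          /* three moves, using a spare */

a = a ^ b;                      /* exclusive-or exchange: no spare needed, */
b = a ^ b;                      /* but fails if a and b are the same variable */
a = a ^ b;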

Because this routine completely uses registers 1 to 7 and it both calls the normalize routine and needs an extra register for the exchange discussed above, it needs to use its activation record; here, we have constructed an activation record with two fields, one for saving register 1 to allow the call to NORMALIZE, and one for saving register 8, freeing it for local use. While FLTADD uses its activation record, NORMALIZE does not. Therefore, this code does not need to adjust the stack pointer, register 2, before or after the call to normalize.

Finally, there is the issue of dealing with overflow during addition. Here, we take advantage of the fact that, when the sign bit of the sum is wrong, it is correct if interpreted merely as the most significant bit of the magnitude, with an invisible sign bit to the left of it. Therefore, we can do a signed right shift to make space for the new sign bit (incrementing the exponent to compensate for this) and then complement the sign by adding one to it. We add one to the sign bit using a somewhat clumsy trick involving the ADJUST instruction.
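
The same repair can be sketched in C. Here the overflow test is the classic sign test, unsigned arithmetic makes the wraparound on addition well defined, and we assume an arithmetic right shift of signed values, as on the Hawk; this is a sketch of the idea, not the package's code:

Repairing overflow in the mantissa sum, sketched in C
#include <stdint.h>

uint32_t addmantissa( uint32_t a, uint32_t b, int *exponent )
{
        uint32_t sum = a + b;                        /* wraps around on overflow */
        if ((int32_t)((a ^ sum) & (b ^ sum)) < 0) {  /* operands agree in sign, sum differs */
                *exponent = *exponent + 1;
                sum = (uint32_t)((int32_t)sum >> 1)  /* signed shift makes room */
                    ^ 0x80000000u;                   /* flip the bogus sign bit */
        }
        return sum;
}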

Exercises

q) The floating point add code given here is rather stupid about shifting. It could right-shift the lesser of the two addends thousands of times, yet a shift of more than 32 bits is never needed. Fix this!

r) Fix this code so that the denormalize step rounds the lesser of the two addends by adding one to the least significant bit just prior to the final right shift operation.

Floating Point Multiplication

Given a working integer multiply operator as a starting point, floating point multiplication is actually somewhat simpler than floating point addition. This simplicity is equally apparent in the algorithm for multiplying numbers in scientific notation: Add the exponents, multiply the mantissas and normalize the result, as illustrated below:
Multiplication in scientific notation
given        1.02 × 10³  ×  9.85 × 10¹
rearranged   (1.02 × 9.85) × 10³⁺¹
multiplied   10.047 × 10⁴
normalized   1.0047 × 10⁵
rounded      1.00 × 10⁵

Unlike addition, we did not have to denormalize anything before the actual operation. The one important issue we face that was not present with addition or subtraction is a matter of precision. Multiplying two 32-bit mantissas gives a 64-bit result. We will assume that we have a signed multiply routine that delivers this result, with the following calling sequence:
A signed multiply interface specification
MULTIPLYS:              ; link through R1
                        ; on entry, R3 = multiplier
                        ;           R4 = multiplicand
                        ; on exit,  R3 = product, low bits
                        ;           R4 = product, high bits
                        ; destroys R5, R6
                        ; uses no other registers

If the multiplier and multiplicand had 31 places after the point in each, then the 64-bit product has 62 places after the point. If the multiplier and multiplicand are normalized to have a minimum absolute value of 0.5, the product will have a minimum absolute value of 0.25. Therefore, normalizing the mantissa will involve shifting at least one bit left, and sometimes two bits left. Ideally, we should use 64-bit shifts for this normalize step in order to avoid loss of precision in this process, so we cannot use the normalize code we used with addition, subtraction and conversion from binary to floating point.
Multiplying two floating point numbers on the Hawk
        SUBTITLE floating multiply

; activation record format
RA      =       0       ; return address
PRODUCT =       4       ; pointer to floating product

FLTMUL:         ; on entry, R3 = pointer to floating product
                ;           R4 = pointer to multiplier
                ;           R5 = pointer to multiplicand
        STORES  R1,R2           ; save return address
        STORE   R3,R2,PRODUCT   ; save pointer to product
        LOADS   R6,R4           ; R6 = multiplier.exponent
        LOADS   R7,R5           ; R7 = multiplicand.exponent
        ADD     R7,R6,R7        ; R7 = product.exponent
        LOAD    R3,R4,MANTISSA  ; R3 = multiplier.mantissa
        LOAD    R4,R5,MANTISSA  ; R4 = multiplicand.mantissa
        LOAD    R1,PMULTIPLYS
        JSRS    R1,R1           ; R3-4 = product.mantissa
                                ; assert (R3-4 has 2 bits left of the point)
        SL      R3,1
        ADDC    R4,R4           ; shift product.mantissa 1 place
                                ; assert (R3-4 has 1 bit left of the point)
        BNS     FMULN           ; if (product.mantissa > 0) {
        BITTST  R4,30
        BCS     FMULOK          ;   if (product.mantissa not normalized) {
        SL      R3,1
        ADDC    R4,R4           ;     shift product.mantissa 1 place
        ADDSI   R7,-1           ;     decrement product.exponent
        BR      FMULOK          ;   }
FMULN:                          ; } else { negative mantissa
        ADDSI   R3,-1
        BCS     FMULNC
        ADDSI   R4,-1           ;   decrement product.mantissa
FMULNC:                         ;   mantissa is now in one's complement form
        BITTST  R4,30
        BCR     FMULNOK         ;   if (product.mantissa not normalized) {
        SL      R3,1
        ADDC    R4,R4           ;     shift product.mantissa 1 place
        ADDSI   R7,-1           ;     decrement product.exponent
FMULNOK:                        ;   }
        ADDSI   R3,1
        ADDC    R4,R0           ;   increment product.mantissa
FMULOK:                         ; } mantissa now normalized
        LOAD    R5,R2,PRODUCT
        STORES  R7,R5           ; store product.exponent
        STORE   R4,R5,MANTISSA  ; store product.mantissa
        LOADS   R1,R2           ; restore return address
        JUMPS   R1              ; return

Most of the above code is involved with normalizing the result! This version of normalization is special in two ways. First, it involves 64-bit shifting, and second, because we know that the numbers coming in were normalized, we know that we never have to shift more than 1 place for normalization purposes. Multiplying two normalized numbers in the range from 0.5 to 1.0 simply cannot produce a product smaller than 0.25, and normalizing this requires only a one-place shift.
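
The same normalization can be sketched in C using a 64-bit intermediate. This sketch keeps the one's complement test from the code above, shifts the two's complement product directly, and assumes arithmetic right shifts of signed values; it is an illustration, not the package's code:

Normalizing the 64-bit product, sketched in C
#include <stdint.h>

int32_t mulmantissa( int32_t a, int32_t b, int *exponent )
{
        int64_t p = (int64_t)a * (int64_t)b;   /* 62 places after the point */
        int64_t t;

        p = p * 2;                             /* first shift: 63 places after the point */
        t = (p < 0) ? p - 1 : p;               /* one's complement form for negatives */
        if (((t >> 62) & 1) == ((t >> 63) & 1)) {  /* bit below the sign matches it: not normalized */
                p = p * 2;                     /* at most one more shift is ever needed */
                *exponent = *exponent - 1;
        }
        return (int32_t)(p >> 32);             /* keep the high 32 bits */
}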

There are some oversights in this code! What if the product is zero? Our normalization rule states that a product of zero ought to have a particular exponent, the most negative possible value. Furthermore, there is no test at all for overflow or underflow, that is, no test for the possibility that adding the exponents might produce a value outside the legal range of exponents.

Exercises

s) Fix this floating point multiply code so that it detects underflow and overflow in adding exponents and correctly returns zero on underflow and when the exponent is too large, locks the exponent at its maximum value.

t) Fix this floating point multiply code so it correctly normalizes products with the value zero.

u) Write code for a floating point divide routine.

Other Operations, IEEE Format

Obviously, we need multiply and divide routines, but we need other operations as well. Because we have committed ourselves to an object-oriented model, we are not allowing the user of our floating point numbers to peer into their representations. Therefore, we must provide tools for comparing numbers, for testing the sign of numbers, for testing for zero, and for other operations that might otherwise appear to be trivial to a user with access to the number representation.

Another issue we face is the import and export of floating point numbers. We need tools to convert numbers to and from textual form, but we also must be prepared to exchange numbers in binary form with other computers. While there are still many computers that support eccentric floating point representations, there is one extremely common representation, the IEEE standard floating point system. This standard was established by the Institute of Electrical and Electronics Engineers, and is now widely supported by the floating point hardware of many computers. The standard includes both 32 and 64-bit floating point numbers, but for this discussion, we will ignore the latter and focus on conversion between our eccentric floating point representation and IEEE 32-bit floating point numbers.

The IEEE standard single-precision floating-point number representation
|31 |30 . . . . 23|22 . . . . . . . . . . . . . 00|
| S |  exponent   |            mantissa           |

In the IEEE floating point formats, the most significant bit holds the sign of the mantissa, and the mantissa is stored in signed magnitude form. The magnitude of the mantissa of a 32-bit floating-point number is stored in the least significant 23 bits, while the exponent is stored in the 8 remaining bits. IEEE double-precision numbers differ from the above in two ways. First, they have a 64-bit representation, and second, they have an 11-bit exponent instead of an 8-bit exponent.
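
In C, the three fields of a single-precision bit pattern w can be separated with shifts and masks (a sketch):

Extracting the fields of an IEEE single-precision number
#include <stdint.h>

void ieeefields( uint32_t w, uint32_t *sign, uint32_t *exponent, uint32_t *fraction )
{
        *sign     = (w >> 31) & 1;          /*  1 bit            */
        *exponent = (w >> 23) & 0xFF;       /*  8 bits           */
        *fraction = w & 0x007FFFFF;         /* 23 low-order bits */
}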

The IEEE format has some rules that may be a bit puzzling: First, under normal circumstances, the mantissa is normalized so that its minimum value is 1.0 and it is always less than 2.0. Thus, the number usually has its point immediately to the right of the most significant bit of its mantissa, and the most significant bit is always one! If some bit of a number is always 1, there is no need to store it; we can just assume its value. Therefore, in the IEEE format, the most significant bit is not stored, it is simply assumed to be one, with the 23 bits of the mantissa representing only the places to the right of the point.

The second odd feature of the IEEE format is that the exponent is given as a biased signed integer with the eccentric bias of 127, and the normal range of exponents excludes both the smallest and largest values, 00000000₂ and 11111111₂. An exponent of zero is reserved for a mantissa of zero or for unnormalized (extraordinarily small) values, while an exponent of all ones is reserved for infinity (with a mantissa of zero) and for values that the IEEE calls NaNs, where NaN stands for not a number.

Because of the use of the odd bias 127 for exponents, an exponent of one is represented as 10000000₂, zero is 01111111₂, and negative one is 01111110₂. The following table shows IEEE floating-point numbers, given in binary, along with their interpretations.

Example IEEE single-precision floating-point numbers
Infinity and NaN
0 11111111 00000000000000000000000 = Infinity
1 11111111 00000000000000000000000 = -Infinity
0 11111111 00000000000010000000000 = NaN
1 11111111 00000010001111101000110 = NaN
Normalized numbers
0 10000000 00000000000000000000000 = +1.0 × 2¹ × 1.00₂ = 2
0 01111110 00000000000000000000000 = +1.0 × 2⁻¹ × 1.00₂ = 0.5
0 01111111 10000000000000000000000 = +1.0 × 2⁰ × 1.10₂ = 1.5
0 01111111 11000000000000000000000 = +1.0 × 2⁰ × 1.11₂ = 1.75
1 01111111 11000000000000000000000 = -1.0 × 2⁰ × 1.11₂ = -1.75
Unnormalized numbers
0 00000001 00000000000000000000000 = +1.0 × 2⁻¹²⁶ × 1.00₂ = 2⁻¹²⁶
0 00000000 10000000000000000000000 = +1.0 × 2⁻¹²⁶ × 0.10₂ = 2⁻¹²⁷
0 00000000 01000000000000000000000 = +1.0 × 2⁻¹²⁶ × 0.01₂ = 2⁻¹²⁸
0 00000000 00000000000000000000000 = +1.0 × 2⁻¹²⁶ × 0.00₂ = 0
1 00000000 00000000000000000000000 = -1.0 × 2⁻¹²⁶ × 0.00₂ = 0

In writing a routine to convert from our eccentric format to IEEE format, we must consider several issues: First, there is the matter of the range of values! Our numbers, with a 32-bit exponent field, have an extraordinarily large range. Second, we must worry about converting the exponent and mantissa to the appropriate form, and finally, we must pack these together.

The following code for packing a number into IEEE format is actually considerably simplified! It completely ignores the possibility that the value might be a NaN, not a number.

Packing an IEEE Floating point value in C
unsigned int ieeepack( int exponent, int mantissa )
{
        unsigned int sign = 0;

        /* first split off the sign */
        if (mantissa < 0) {
                mantissa = -mantissa;
                sign = 0x80000000;
        }
        /* put the mantissa in IEEE normalized form */
        mantissa = mantissa >> 7;

        /* convert the exponent and pack the fields */
        if (exponent > 128) { /* convert overflow to infinity */
                mantissa = 0;
                exponent = 0x7F800000;
        } else if (exponent < -125) { /* convert underflow to zero */
                mantissa = 0;
                exponent = 0;
        } else { /* conversion is possible */
                mantissa = mantissa & 0x007FFFFF;
                exponent = (exponent + 126) << 23;
        }
        return sign | exponent | mantissa;
}

There is one significant complexity in this code: The advertised bias of the IEEE format is 127, yet we used a bias of 126 above! This is because we also subtracted one from the original exponent to account for the fact that our numbers were normalized in the range 0.5 to 1.0, while IEEE numbers are normalized in the range 1.0 to 2.0. This is also why we compared with 128 and -125 instead of 127 and -126 when checking for the maximum and minimum legal exponents in the IEEE format.
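
As a quick check of the bias, 1.0 in our format is the pair (1, 0x40000000), that is, 0.5 × 2¹, and ieeepack should turn it into 0x3F800000, the IEEE encoding of 1.0. A hypothetical test driver (assuming ieeepack above is compiled into the same program):

Checking ieeepack against a known value
#include <stdio.h>

int main( void )
{
        /* 1.0 in our format: mantissa 0.5, exponent 1 */
        printf( "%08x\n", ieeepack( 1, 0x40000000 ) );  /* expect 3f800000 */
        return 0;
}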

In the above code, we have omitted one significant detail! We have simply forced all underflows to zero when we ought to have allowed numbers that underflow by only a small amount to be stored in denormalized form.

Conversion from IEEE format to our eccentric Hawk format is comparatively easy because both our exponent and mantissa fields are larger than those in the single-precision IEEE format, allowing us to do these conversions with no loss of precision. This conversion is presented in Hawk assembly language here, ignoring the possibility that the value might be a NaN or infinity.
Hawk code to unpack an IEEE-format floating-point number
        SUBTITLE unpack an IEEE-format floating point number

FLTIEEE:        ; on entry, R3 points to the return floating value
                ;           R4 is the number in IEEE format.
                ; R5 is used as a temporary
        MOVE    R5,R4           ; R5 = exponent
        SL      R5,1            ; throw away the bit left of the exponent
        SR      R5,12
        SR      R5,12           ; pull the exponent field all the way right
        ADDI    R5,R5,-126      ; unbias the exponent
        STORES  R5,R3           ; save converted exponent
        MOVE    R5,R4           ; R5 = mantissa
        SL      R5,9            ; push mantissa all the way left
        SR      R5,1            ; then pull it back for the missing one bit
        SUB     R0,R0,R0        ; set carry
        ADJUST  R5,CMSB         ; and use it to put missing one into mantissa
        TESTR   R4
        BNR     FIEEEPOS        ; if (number < 0) {
        NEG     R5,R5           ;   negate mantissa
FIEEEPOS:                       ; }
        STORE   R5,R3,MANTISSA  ; save converted mantissa
        JUMPS   R1              ; return

This code makes extensive use of shifting to clear fields within the number. Thus, instead of writing n&0xFFFFFF00, we write (n>>8)<<8. This trick is useful on many machines where loading a large constant is significantly slower than a shift instruction. By doing this, we avoid both loading a long constant into a register and the need to reserve a register to hold it. We used a related trick to set the implicit one bit, using a subtract instruction to set the carry bit and then adding this bit into the number using an adjust instruction.
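
A one-line comparison of the two forms, for an unsigned n:

Clearing a field by shifting versus masking
unsigned int a = (n >> 8) << 8;         /* clears the low 8 bits, no long constant */
unsigned int b = n & 0xFFFFFF00;        /* the same result, using a 32-bit mask */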

Exercises

v) What is 10.0₁₀ in IEEE single-precision format?

w) What is the representation of the smallest nonzero positive value in IEEE single-precision format?

Conversion to Decimal

A well-designed floating point package will include a complete set of tools for conversion to and from decimal textual representations. Our purpose here, however, is to use the conversion problem to illustrate the use of our floating point package, so we will write our conversion code as user-level code, making no use of any details of the floating point abstraction that are not described in the header file for the package.

First, consider the problem of printing a floating point number using only the operations we have defined, ignoring the complexity of assembly language and focusing on the algorithm. We can begin by taking the integer part of the number and printing that, followed by a point, but the question is, how do we continue from there, printing the digits after the point?

One approach to printing the fractional part is as follows. After printing the integer part of the value of the number, convert that integer value back to floating and subtract it from the number, leaving just the fractional part, then multiply that by ten to bring one decimal digit worth of the value up above the point. Print that digit, and then repeat this process for each following digit. This is not particularly efficient, since it keeps converting back and forth between floating and integer representations, but it works. The resulting algorithm is given here in C:

C code to print a floating point number
void fltprint( float num, int places )
{
        int inum; /* the integer part */

        if (num < 0) {  /* make it positive and print the sign */
                num = -num;
                dspch( '-' );
        }

        /* first put out integer part */
        inum = fltint( num );
        dspnum( inum );
        dspch( '.' );

        /* second put out digits of the fractional part */
        for (; places > 0; places--) {
                num = (num - (float)inum) * 10.0;
                inum = fltint( num );
                dspch( inum + '0' );
        }
}

We face a few problems here, and it is best to tackle these incrementally. First, in order to allow code to be written with no knowledge of the structure of floating point numbers, we must pass pointers to numbers, not the numbers themselves, because passing the numbers themselves would require that the assembly language programmer know how many registers it takes to hold each number. Second, we have used arithmetic operators above that involve calls to routines in the floating point package. We will tackle these problems at the high level before trying to deal with them in assembly language.

Lower level C code to print a floating point number
void fltprint( float *pnum, int places )
{
        float num;  /* a copy of the number */
        float tmp;  /* a temporary floating point number */
        float ten;  /* a constant floating value */
        int inum;   /* the integer part */

        float( &ten, 10 );

        if (flttst( pnum ) < 0) {  /* make it positive, print the sign */
                fltneg( &num, pnum );
                dspch( '-' );
        } else {
                fltcpy( &num, pnum );
        }

        /* first put out integer part */
        inum = fltint( &num );
        dspnum( inum );
        dspch( '.' );

        /* second put out digits of the fractional part */
        while (places > 0) {
                float( &tmp, inum );
                fltsub( &num, &num, &tmp );
                fltmul( &num, &num, &ten );
                inum = fltint( &num );
                dspch( inum + '0' );
                places = places - 1;
        }
}

The above code shows some of the problems we forced on ourselves by insisting on having no knowledge of the representation of floating point numbers when we write our print routine. Where a C or Java programmer would write 10.0, relying on the compiler to translate this into floating point representation and put it in memory, we have been forced to use the integer constant 10 and then call the float() routine to convert it to its internal representation. This is a common consequence of strict object-oriented encapsulation, although loose encapsulation schemes can get around it, for example, by exporting compile-time or assembly-time macros that process constants into their internal representation.
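
As a purely hypothetical sketch of such a loose scheme, a package implemented in C might export a macro that lays down the words of a constant at translation time. The layout assumed below, one exponent word followed by one mantissa word holding a binary fraction with the point just after the sign bit, is an illustration only, not a description of our package:

A hypothetical constant macro under loose encapsulation
/* layout assumed for this sketch only */
typedef struct myfloat {
        int exponent;   /* power of two */
        int mantissa;   /* fraction, binary point after the sign bit */
} myfloat;

/* the "exported macro": the compiler builds the representation */
#define FLTCONST( exp, mant )  { (exp), (mant) }

static const myfloat ten = FLTCONST( 4, 0x50000000 );  /* 0.625 * 2**4 = 10.0 */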

The next problem we face is that, at the time we write this code, we are denying ourselves access to knowledge of the size of the representation of floating point numbers. Therefore, unlike all of our previous examples, we cannot allocate space in our activation records using a known size. Our solution to this problem rests on two elements.

First, we will rely on the fact that the interface definition for the floating point package, float.h, provides us with the size of a floating point number in the constant FLOATSIZE; in fact, we have adopted the general convention that, for each object, record or structure, we always define a symbol giving its size.

Second, we can use the assembler itself to sum the sizes of the fields of the activation record instead of adding them manually, as we have in all of our previous examples. We begin with an activation record size of zero and then define each field in terms of the previous activation record size, before adding the size of that field to compute the new activation record size. We could, of course, have defined all of the easy fields first using the old method, but to be consistent, we have defined all of the fields this way in the following:

Building an activation record for FLTPRINT
        TITLE   fltprint.a -- floating print routine
        USE     "float.h"
        INT     FLTPRINT

; activation record format
ARSIZE  =       0       ; initial size of activation record

RA      =       ARSIZE          ; return address
ARSIZE  =       ARSIZE + 4      ; size of return address

NUM     =       ARSIZE          ; copy of the floating point number
ARSIZE  =       ARSIZE + FLOATSIZE

TMP     =       ARSIZE          ; a temporary floating point number
ARSIZE  =       ARSIZE + FLOATSIZE

TEN     =       ARSIZE          ; the constant ten
ARSIZE  =       ARSIZE + FLOATSIZE

R8SAVE  =       ARSIZE          ; save area for register 8
ARSIZE  =       ARSIZE + 4

R9SAVE  =       ARSIZE          ; save area for register 9
ARSIZE  =       ARSIZE + 4

In the above, had we allowed ourselves to use knowledge about the size of a floating point number, we could have defined NUM=4, TMP=12 and TEN=20, but then, any change in the floating point package would have required us to rewrite this code. Consider, for example, the problem of rewriting the code to allow for a new version of the floating point package that used 3 words per number, one for the exponent and two for a 64-bit mantissa.

The local variables for saving registers 8 and 9 were allocated so that the integer variables in our code can use these registers over and over again instead of being loaded and stored in order to survive each call to a routine in the floating point package. Of course, if those routines need registers 8 and 9, they will be saved and restored anyway, but we leave that to them.

The following code contains one significant optimization. With all of the subroutine calls, we could have incremented and decremented the stack pointer many times. Instead, we increment it just once at the start of the print routine and decrement it just once at the end; in between, we always subtract ARSIZE from every displacement into the activation record in order to correct for this.

The body of the floating print routine, part 1
FLTPRINT:       ; on entry: R3 = pointer to floating point number to print
                ;           R4 = number of places to print after the point
        STORES  R1,R2
        STORE   R8,R2,R8SAVE
        STORE   R9,R2,R9SAVE    ; saved return address, R8, R9
        MOVE    R8,R3           ; R8 = pointer to number
        MOVE    R9,R4           ; R9 = places

        ADDI    R2,R2,ARSIZE    ; from here on, R2 points to end of AR

        LEA     R3,R2,TEN-ARSIZE
        LIS     R4,10
        LOAD    R1,PFLOAT
        JSRS    R1,R1           ; float( &ten, 10 );

The body of floating print, part 2
        MOVE    R3,R8
        LOAD    R1,PFLTTST
        JSRS    R1,R1
        TESTR   R3
        BNR     FPRNNEG         ; if (flttst( pnum ) < 0) {

        LEA     R3,R2,NUM-ARSIZE
        MOVE    R4,R8
        LOAD    R1,PFLTNEG
        JSRS    R1,R1           ;   fltneg( &num, pnum );

        LIS     R3,'-'
        LOAD    R1,PDSPCH
        JSRS    R1,R1           ;   dspch( '-' );

        BR      FPRABS
FPRNNEG:                        ; } else {
        LEA     R3,R2,NUM-ARSIZE
        MOVE    R4,R8
        LOAD    R1,PFLTCPY
        JSRS    R1,R1           ;   fltcpy( &num, pnum );
                                ; }
FPRABS:                         ; /* first put out the integer part */
        LEA     R3,R2,NUM-ARSIZE
        LOAD    R1,PFLTINT
        JSRS    R1,R1
        MOVE    R8,R3           ; R8 = inum = fltint( &num );

        LOAD    R1,PDSPNUM
        JSRS    R1,R1           ; dspnum( inum );

        LIS     R3,'.'
        LOAD    R1,PDSPCH
        JSRS    R1,R1           ; dspch( '.' );
FPRLP:
        TESTR   R9
        BLE     FPRLX           ; while (places > 0) {

        LEA     R3,R2,TMP-ARSIZE
        MOVE    R4,R8
        LOAD    R1,PFLOAT
        JSRS    R1,R1           ;   float( &tmp, inum );

        LEA     R3,R2,NUM-ARSIZE
        MOVE    R4,R3
        LEA     R5,R2,TMP-ARSIZE
        LOAD    R1,PFLTSUB
        JSRS    R1,R1           ;   fltsub( &num, &num, &tmp );

        LEA     R3,R2,NUM-ARSIZE
        MOVE    R4,R3
        LEA     R5,R2,TEN-ARSIZE
        LOAD    R1,PFLTMUL
        JSRS    R1,R1           ;   fltmul( &num, &num, &ten );

The body of floating print, part 3
        LEA     R3,R2,NUM-ARSIZE
        LOAD    R1,PFLTINT
        JSRS    R1,R1
        MOVE    R8,R3           ;   R8 = inum = fltint( &num );

        ADDI    R3,R3,'0'
        LOAD    R1,PDSPCH
        JSRS    R1,R1           ;   dspch( inum + '0' );
        ADDSI   R9,-1           ;   places = places - 1;
        BR      FPRLP
FPRLX:                          ; }
        ADDI    R2,R2,-ARSIZE
        LOAD    R8,R2,R8SAVE
        LOAD    R9,R2,R9SAVE
        LOADS   R1,R2           ; restore return address, R8, R9
        JUMPS   R1              ; return

Exercises

x) Write a floating print routine that produces its output in scientific notation, for example, using the format 6.02E23, where the E stands for "times ten to the power". To do this, you will first have to do a decimal normalize, counting the number of times you must multiply or divide by ten to bring the mantissa into the range from 1 to just under 10; then print the mantissa (using the floating print routine we just discussed), and finally print the exponent.

A Final Note on Object Representation

At this point, it should appear that an object, say a floating point number, is represented by a sequence of memory locations holding the variables that compose the representation of that object, and that the methods of a class are simply subroutines that are called with the address of the object as their first parameter. This simple view is an oversimplification that we will address in upcoming chapters, but it is basically correct.
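
In C terms, this simple view can be sketched as follows; the field layout and the names are assumptions chosen for illustration, not the actual layout used by our package:

The simple view of objects, sketched in C
/* an object is just a span of memory holding its representation */
typedef struct myfloat {
        int exponent;
        int mantissa;
} myfloat;

/* a method is just a subroutine taking the object's address */
void fltcpy( myfloat *dst, myfloat *src )  /* dst = src */
{
        *dst = *src;
}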

What is oversimplified about this view? Several things. First, this view only works when the method being called can be determined statically. This is certainly true if our program has only one representation for each class of objects, but it is not always true if we allow a class hierarchy, where there are classes and subclasses. Class hierarchies do not introduce problems when the subclasses merely add new operations, new representation fields and new behaviors to an existing class, but if two different subclasses implement the same interface specification in different ways, we have a problem.

Whenever there are multiple implementations of the same interface specification for some class, we say that we have a polymorphic class. For example, if we have one class, floating_point, with subclasses single_precision, double_precision and perhaps rational or even arbitrary_precision, our basic floating point class is polymorphic.

When object-oriented languages that allow polymorphic classes are translated to assembly language, each object of each class that implements the same interface must begin with an indication of its type. Without this, we would have no way, when operating on a polymorphic object, to determine which implementation of the operation was appropriate. There are several ways to do this; some give faster access to the methods of objects of polymorphic classes, while others give more compact representations for such objects.

The fastest way to access methods of objects that are instances of polymorphic classes is to include pointers to the applicable methods directly in each instance of each object. For objects implemented using our floating point representation, we might do this as follows:

A fast representation for a polymorphic floating point number
; format of a single-precision Hawk floating point number stored in memory

; all compatible floating point numbers begin as follows
PFLOAT  =       0
PFLTINT =       4
PFLTCPY =       8
PFLTTST =       12
PFLTADD =       16
PFLTSUB =       20
PFLTNEG =       24
PFLTMUL =       28
PFLTDIV =       32

; fields specific to single-precision Hawk numbers
EXPONENT =      36
MANTISSA =      40

FLOATSIZE =     44

Initializing an object using this representation is rather cumbersome, since all of these pointers must be filled in, but once it is initialized, access to any method of the object is quite fast:
Fast access to a method of a polymorphic floating point number
; assume R3 points to an initialized floating point value as given above
LOAD    R1,R3,PFLTINT   ; get pointer to the FLTINT method
JSRS    R1,R1           ; call the method
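
To make both the cost of initialization and the speed of the call concrete, here is a hedged C sketch of this bulky representation; the names and the single method shown are stand-ins for the full set of nine method pointers:

A C sketch of the fast but bulky representation
#include <stdio.h>

typedef struct polyfloat {
        int (*fltint)( struct polyfloat *p );  /* one pointer per method */
        /* ... the other eight method pointers would go here ... */
        int exponent;                          /* representation fields */
        int mantissa;
} polyfloat;

int single_fltint( polyfloat *p )  /* a stand-in method body */
{
        return 0;  /* a real version would convert the fields above */
}

int main()
{
        polyfloat x;
        x.fltint = single_fltint;          /* cumbersome: one store per method */
        printf( "%d\n", x.fltint( &x ) );  /* fast: load one pointer, then call */
        return 0;
}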

Of course, for objects as simple as floating point numbers, where each object has a large number of applicable methods, this approach greatly bloats the size of each object. To avoid this, we can store the pointers to the methods in a compact table elsewhere, for example, in the same memory area that holds the code for the class. This makes sense because the table of method pointers is constant.

It is common to refer to this table of method pointers as the class descriptor, and to refer to the pointer to the class descriptor stored in each object as the tag field of that object, since it is the field that identifies the class to which the object belongs. Most implementations of object-oriented programming languages use this approach, storing the tag field in the first word of the object. This leads to the following code for a call to a method of a polymorphic class:
Access to a method of an efficiently represented polymorphic floating point number
; assume R3 points to an initialized floating point value
LOAD    R1,R3,PMETHODS  ; get pointer to method list of object's class
LOAD    R1,R1,PFLTINT   ; get pointer to the FLTINT method
JSRS    R1,R1           ; call the method
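
The same two-level arrangement can be sketched in C; again, the names and the shortened method list are assumptions made for illustration:

A C sketch of the tagged representation
#include <stdio.h>

struct taggedfloat;                  /* forward declaration */

typedef struct classdesc {           /* the class descriptor: one per class */
        int (*fltint)( struct taggedfloat *p );
        /* ... the remaining method pointers ... */
} classdesc;

typedef struct taggedfloat {         /* each object holds just a tag word */
        const classdesc *tag;        /* pointer to this object's class */
        int exponent;
        int mantissa;
} taggedfloat;

int single_fltint( struct taggedfloat *p )  /* a stand-in method body */
{
        return 0;
}

const classdesc single_class = { single_fltint };

int main()
{
        taggedfloat x;
        x.tag = &single_class;                  /* initialization: one store */
        printf( "%d\n", x.tag->fltint( &x ) ); /* two loads, then the call */
        return 0;
}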

This two-level approach is the one used in most modern object-oriented languages such as C++. What this means is that a call to a method of a polymorphic class typically costs one, or at most two, more instructions than a call to a method where there is only one possible subroutine. Good compilers avoid this more expensive mechanism whenever the method can be uniquely determined at compile time, and use it only where polymorphism is actually present.

Languages like Java also do this, but they add a further layer of overhead by including a large number of default attributes in every class. Java objects all know the names of their classes, for example, so each object's class descriptor holds a pointer to the string constant naming that class. This information is, of course, of great value during debugging, but it adds significantly to the memory requirements of large programs.