Memory management: Stack Model Vs. Heap Model

# Objectives

This lecture initiates our discussion of memory usage. Memory is one of the central shared resources in a multiprocessing RTE. During the discussion we introduce the memory models used in the Java and C++ programming languages, and the corresponding services that a Java/C++ programmer can get from the underlying RTE (the JVM or the operating system).

# Different Perspectives of Memory Services

We can look at memory operations at different levels:

• Hardware level: operations like READ and WRITE.
• Operating System level: sharing the physical memory among different processes, allocating memory to processes, etc.
• Programming Language: how programming language constructs relate to memory operations at runtime and at compile time.

We start with a short background on memory technologies, continue with an overview of hardware-level aspects, and finally focus on the programming language aspect.

# Background: Memory Technologies

Computer systems are built around different types of memory technologies:

• On-chip storage - registers and cache storage - very fast storage which generally resides on the same chip as the main processor.
• Random Access Memory - a family of technologies used to implement the computer's main memory (the RAM).
• Persistent online storage - hard disks or solid-state disks, based on magnetic or electronic (flash) technologies.
• Persistent offline storage - tapes, DVDs and CDs, which are magnetic or optical offline storage technologies.

There is a natural memory hierarchy used in computer systems: faster memory is available in smaller quantities, slower memory is available in larger quantities. The faster the memory is (access-wise), the more it costs.

The following properties characterize the memory system of a computer:

1. Memory space: how much main memory the processor can address (the maximum potential size)
2. Word size: words are the native units of data the CPU operates on. Usually, internal registers can store exactly one word, and reads/writes from/to main memory are at least word-sized.

These 2 parameters correspond to 2 hardware parameters:

1. Width of the address bus: memory is viewed as an array of words. Each word has an index (0, 1, …, up to the size of the memory space). The index of a word in the main memory is called its address. For example, a 32-bit address bus allows the processor to address 2^32 distinct words.
2. Width of the data bus: data is read and written between registers in the processor and locations in main memory. Each such operation is performed on a single word at a time. The size of the word is determined by the number of physical connections (lines) between the processor and the memory system.

# Hardware Perspective

At the lowest level, the processor views the main memory in a very simple manner: as a fast storage medium. The main memory is used to store all the data a process needs for its computation (e.g., parameters, variables, objects, etc.). To perform any operation on a piece of data (word-sized), the following cycle must be followed:

1. Load the data from memory into internal processor registers.
2. Apply the operation on the registers and put the result in another register.
3. Write the result from the register into the relevant memory location.

The primitive operations that the processor uses with relation to memory are:
• Read (Address), (Register): copy the content of the word stored at (Address) into the (Register) of the processor. Registers are referenced by their number (0, 1, …, up to the number of registers on the processor).
• Write (Register), (Address): copy the content of the (Register) to the memory cell at (Address).

For example, consider the following Java code:

int i = 10;
int j = 20;

i = i + j;

The compiler generates the relevant code for the CPU to execute. The generated code, expressed as an abstract set of processor instructions, would look like the following; by convention, we write [var] to denote the address in memory at which a variable is stored.

;; Initialize the memory cells for i and j
WRITE $10, [i]		;; 10 is a constant value
WRITE $20, [j]		;; 20 is a constant value

;; Execute the sum
READ [i], R1		;; copy the content of i in register 1 (R1)
READ [j], R2		;; copy the content of j in register 2 (R2)
ADD R1, R2, R3		;; execute the operation: R1 + R2 and store the result in R3
WRITE R3,[i]		;; store the result in the memory location of i


# Programming language perspective

We start with a discussion of how Java constructs relate to memory. We then compare the Java approach with the C++ approach and stress the differences.

#### Which explicit/implicit memory-related services have we already used in Java?

Explicit

• Object allocation - allocating objects using new
• Initialization - constructor invocation or variable initialization
• Copying - copying data from one variable to another

Implicit

• Object deletion - done by the garbage collector
• Variable allocation - allocation of local (primitive) variables
• Variable deallocation - deallocation of local variables when leaving their scope
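
For example, the following snippet exercises most of these services (the class Point and its method getX() are hypothetical, used only for illustration):

Point p = new Point(1, 2);  // explicit: object allocation using new, initialization by the constructor
int x = p.getX();           // explicit: copying data into the local variable x
Point q = p;                // explicit: copying the reference held by p into q
// implicit: the local variables p, q and x are allocated when their scope is entered
// and deallocated when it is left; the Point object itself is deleted later by the
// garbage collector, once it is no longer reachable.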

## Memory Access

In high level languages, there is usually only one way to access memory: through a variable. A variable in Java is characterized by:

1. A name
2. A type
3. A scope: the program region in which the name is recognized and refers to the same variable

Variables are bound to values at runtime. A value is characterized by:

1. A type
2. A position in memory, which is the value's address
3. A size - how much memory the value requires

When a variable is bound to a value, the type of the value must be compatible with the type of the variable. Types are compatible when:

• They are identical
• The type of the value is more specific in the inheritance hierarchy than the type of the variable.
• The type of the value implements the type of the variable (when the type of the variable is an interface).
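
For example, using standard Java types, each of the following bindings is legal because the type of the value is compatible with the type of the variable:

String s = "hello";    // identical types: a String value bound to a String variable
Object o = s;          // String is more specific than Object in the inheritance hierarchy
CharSequence cs = s;   // String implements the CharSequence interface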

# Stack model vs. Heap model

Modern programming languages employ two different memory models to satisfy different needs, namely the stack and the heap. But first we need to understand how procedures relate to the memory model.

## Procedures

High level languages usually employ the concept of procedures (or functions). A program is composed of several procedures. Each procedure is a small piece of code, performing a simple, well-defined operation. Procedures may call other procedures or themselves. Each procedure may receive parameters, define local variables and return a value. Note that even object oriented programs are, under the hood, procedural. When you call a member method of some object, you basically call the method with the object as one of the arguments. Indeed, in some languages, you are required to pass the object as the first argument!
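
To illustrate, consider the following sketch of a hypothetical Counter class: the instance method inc() and the static procedure inc(Counter c) perform exactly the same operation; the only difference is that in the first form the object is passed implicitly (as this) rather than as an explicit first argument:

class Counter {
    private int value;

    void inc() {                 // the object is passed implicitly, as 'this'
        this.value++;
    }

    static void inc(Counter c) { // the same operation, with the object as an explicit argument
        c.value++;
    }
}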

## Stack

Procedure calls can be viewed in a stack-like manner: each procedure activation requires a frame on a stack. Each time a procedure is called, a new activation frame is generated on the stack. Each activation frame holds the following information:

• The parameters passed to the procedure
• Local primitive variables declared in the procedure
• The return address - where to return to when the procedure ends

Activation frame generation: who actually creates the activation frames? The procedure itself. The compiler generates the appropriate code and inserts it as the preamble of each procedure.

Variable bindings: binding of variables is usually very simple, as access to variables must be really fast. The compiler hard-codes the location of each variable in the procedure's code, usually using a simple offset from the start of the activation frame.

For example, consider the following code:
public static void foo(int a, boolean b) {
    int var_a;
    int var_b;
    ...
}

Each time the function is invoked, an appropriate activation frame is pushed onto the stack, holding the return address, the parameters a and b, and the local variables var_a and var_b.

When an activation frame is created, the binding between local variables and their location on the stack is made. When a procedure terminates, its associated activation frame is popped from the stack, and (figuratively) discarded.

In Java, only the variables located on the top-most activation frame may be accessed (parameters and local variables). However, as we will see, this is not the case in C++ (and we are not referring to pointers!).

### Stack Implementation

To realize the concept of procedures and activation frames, modern computer systems inherently support the stack data structure: for each execution element (process, thread, etc.) there exists an associated memory region called the stack. The operating system is responsible for allocating enough space for the stack of each process (or thread), and the process (or thread) manages its stack on its own, implicitly, using code generated by the compiler. The stack is defined by a single CPU register which points to the top of the stack (initialized by the operating system). This register may be modified directly by the process (or thread), by assigning different values to it, or indirectly, by pushing and popping words to/from the stack using special machine code instructions.

## Heap

Sometimes, programs need to store information which is relevant across function calls, or is just too big to fit on the stack. We would like to be able to specify that we want a block of memory of a given size to store some information. We usually do not care where the memory comes from, we are just interested in getting a block of it for our use. As a result, we abstract this service as a heap, where blocks of memory are heaped in a pile, and we can get to the block we need if we remember where we left it.

Before we go into the details of how heaps are usually implemented by an RTE, let us see where we have already made use of the heap in Java. In Java, there is a clear distinction between what is stored on the stack and what is stored on the heap: all local variables of methods are stored on the stack, and all objects are stored on the heap.

Consider the following code snippet:

int i = 5;
Object o = new Object();

Here, both i and o are stored on the stack. Only the Object allocated by new is stored on the heap. The section of the stack represented by the variable i contains the binary representation of 5 (00...00101). So, what is stored in o's section?

We need to make a clear distinction between the Object, which might be very big, with enough space to hold the entire state of the Object, and the variable o, which holds the location of the Object on the heap! More specifically, the section associated with o on the stack holds a reference (a number) to the starting location of the block of memory that new allocated to hold the instance created by new Object().

This distinction between objects and primitives in Java explains why primitives are always passed to methods by value, while for objects it is the reference that gets copied, so the caller and the callee end up referring to the same object on the heap.
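
The following sketch demonstrates the difference (the class Box and its field val are hypothetical): reassigning a parameter never affects the caller, but mutating the object a reference points to does, because both variables refer to the same block on the heap:

class Box { int val; }          // val is initialized to 0 by default

static void update(int n, Box b) {
    n = 7;                      // changes only the copy on update's activation frame
    b.val = 7;                  // follows the reference and modifies the object on the heap
    b = new Box();              // rebinds the local variable b; the caller's reference is unaffected
}

...
int i = 1;
Box box = new Box();
update(i, box);
// here i is still 1, while box.val is now 7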

### Heap Implementation

The heap is implemented with the collaboration of the operating system and the RTE (even programs executing directly on top of the operating system usually employ a wrapper around the coarse control the operating system provides, and this wrapper in turn defines the RTE).
Man: man is shorthand for manual (as in a guide or reference). man is the standard documentation system on any Unix system, and contains the documentation (help files) for all the standard utilities (try running man command, where command is any shell command you have previously used), system calls and libraries installed on the system. The man pages are arranged in sections, numbered consecutively, where section 1 is for utilities, section 2 for system calls, section 3 for library calls, etc. The convention when mentioning a utility/system call/library call is to mention within brackets in which section of the man it resides. For example, brk(2) tells us that brk is a system call. Look it up using man 2 brk. More on man: man man.
The OS (physically) allocates a large block of memory to each process, as the process starts, to serve as its heap. This block is shared by all the threads in the same process, and may be enlarged during the lifetime of the process by using special system calls (e.g., brk(2)). Smaller blocks are then (logically) allocated from it to hold specific data items (e.g., objects). Allocating smaller blocks from the heap, keeping track of which parts of the heap are in use at any given time, and requesting the OS to enlarge the heap are the responsibility of the RTE and are usually done transparently.
Note that managing the heap is done by the RTE while managing the stack is done with code instructions inserted by the compiler.

### Memory deallocation in Java

As we have already mentioned, blocks of memory are allocated in Java only when creating new Objects. Each time we call the new operator, a block of memory is allocated, large enough to hold the Object we are instantiating.

However, we have not yet mentioned how memory in Java is freed. What happens to an object once we have no further use for it? For example, suppose we allocated an array inside some method and then left the method without leaking a reference to the array outside of the method's scope. Should the array be kept in memory indefinitely? No. This would exhaust our memory rapidly. We would like to reuse the memory used by the array, i.e., to tell the RTE that the memory can be recycled. In Java, this is done automatically for us, by an active entity called the garbage collector.
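
As a small sketch, consider a method that allocates an array only for a local computation:

static int sumOfSquares(int n) {
    int[] squares = new int[n];     // a block for the array is allocated on the heap
    for (int k = 0; k < n; k++) {
        squares[k] = k * k;
    }
    int sum = 0;
    for (int s : squares) {
        sum += s;
    }
    return sum;                     // the variable squares disappears with the activation frame;
                                    // the array on the heap becomes unreachable, i.e., garbage
}

Once sumOfSquares returns, no variable on any stack refers to the array, so its memory may be reclaimed by the garbage collector at some later point.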

Once in a while (or when the user asks explicitly by calling System.gc()), the garbage collector kicks into action and recycles, or frees, all garbage objects. An object is considered garbage if it cannot be reached, directly or indirectly, from a variable on the stack (or from a register). The algorithm of the garbage collector essentially works like this:

1. Compose a set of all Objects referenced from the stack and from registers.
2. For each Object in the set, add all Objects it references.
3. Repeat step (2) until the set remains unchanged (in essence, compute the transitive closure of the "a references b" relation).
4. Free all objects which are not in the set.

It should be fairly obvious why this algorithm is correct; it is less obvious that it incurs a heavy penalty in terms of efficiency (which we will not discuss further).
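
For instance, the following snippet shows why the transitive closure computed in steps (2)-(3) matters (the class Node is hypothetical):

class Node { Node next; }

Node head = new Node();     // reachable directly from the stack
head.next = new Node();     // reachable only through head (added to the set in step 2)
head = null;                // now neither Node is reachable; both are freed in step 4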

## Stack and Heap in C++

While the general abstractions of stack and heap are similar in C++ and Java, there are fundamental differences in how they are managed. Generally speaking, C++ allows the programmer more control over memory than Java. However, such a level of control comes at a price (or lines of code to write, if you wish), and requires that the programmer completely understand the environment in which her program will execute in order to avoid the (very large) pitfalls.

### Stack

The stack in C++ is managed in the same way the stack in Java is managed, with one important difference: objects in C++ may be allocated on the stack. This means that they can be passed by value to functions! Moreover, when an activation frame is popped, objects are not just discarded the way primitive types are: before destroying them, the C++ RTE ensures that a special function of each object is called, namely, the destructor. A destructor is a function written by the programmer which is in charge of cleaning up after the object: releasing the object's state and any resources the object might have acquired.

We later discuss three important methods of objects in C++: The destructor, copy constructor and assignment operator.

#### Objects on the stack and pass by value

Consider this example:

void doSomething(Foo foo2)
{
    foo2.changeMe(...);
}
...
Foo foo1;
doSomething(foo1);

During runtime, the following takes place:

1. A place to hold an object of class Foo is reserved on the stack and is bound to the variable foo1.
2. The default constructor is called.
3. An activation frame for doSomething is created, with enough space to hold the return address and an object of class Foo.
4. A new object of class Foo is instantiated on the stack, using the copy constructor of class Foo with foo1 as the argument.
5. The method changeMe() of the object foo2 is called.
6. The destructor of foo2 is called.
7. The doSomething frame is popped from the stack and the program continues from the return address.

It follows that changes made to foo2 are not reflected in foo1.

Now consider another example:

Foo doSomething(Foo foo2)
{
    foo2.changeMe(...);
    return foo2;
}
...
Foo foo1;
Foo foo3 = doSomething(foo1);

During runtime, the following takes place:

1. A place to hold an object of class Foo is reserved on the stack and is bound to the variable foo1.
2. A place to hold an object of class Foo is reserved on the stack and is bound to the variable foo3.
3. The default constructor is called for foo1.
4. An activation frame for doSomething is created, with enough space to hold the return address and an object of class Foo.
5. A new object of class Foo is instantiated on the stack, using the copy constructor of class Foo with foo1 as the argument.
6. The method changeMe() of the object foo2 is called.
7. foo2 is copied by the copy constructor of class Foo to the location of foo3.
8. The destructor of foo2 is called.
9. The doSomething frame is popped from the stack and the program continues from the return address.

As before, changes made to foo2 are not reflected in foo1, but the main issue is that doSomething implicitly employs three objects of type Foo, instantiating, copying and destructing them!

### Heap

Just as everything in C++ can be allocated on the stack, so can everything be allocated on the heap, even primitive types. For example, we can allocate an int or an object of class Foo on the heap:

int *i = new int(8);
Foo *foo = new Foo(42);

#### New and Delete Operators

Allocating on the heap is achieved by using the new operator. Note that new is an operator, not a function (a distinction we will explain in the following lectures); it allocates space on the heap and initializes that space as we requested, by calling a constructor.

Please notice that the returned value of new a_type(vars) is always of type a_type *, which is called a pointer to a_type (but more on pointers in the next lecture).

The memory is initialized using the constructor we requested (which must exist, otherwise our code will not compile).

In contrast to Java, memory allocated on the heap in C++ is never garbage collected, and it is the responsibility of the programmer to free the memory. For example, to free the memory we allocated earlier we will use the following code:

delete i;
delete foo;

Each time we delete an object, its destructor is called.

### Allocating and Deleting Arrays in C++

Arrays in C++ are allocated by using a special operator, new [], which takes the size of the array as a parameter. For example, to allocate an array of Foo objects, we would use the following code:

Foo *foo_array = new Foo[100];

Pay attention that the type returned by the new[] operator is, again, Foo *. Again, pointers, which we will discuss in the next lecture. But this is not the important point. The important point is that we have allocated a block of memory on the heap, large enough to hold 100 Foo objects. Each object is then initialized using the default constructor of class Foo (note that arrays of primitive types and of pointers, in contrast, are not initialized this way: their elements are left uninitialized unless value-initialization is explicitly requested).

But how can we release the memory? By calling the delete[] operator:

delete[] foo_array;

This, in turn, calls the destructor of each object in the array, and then frees the memory.

Note that new and new[] are not the same operator, and the same holds for delete and delete[]. You must take care to call delete[] on memory allocated with new[], and delete on memory allocated with new.