Assembly Code Level Optimization Techniques

Optimizing at the code level is not as effective as higher-level optimizations. Memory allocation and algorithmic optimizations should be tried before attempting code-level optimization. Code-level optimization is performed based on the assembly instructions generated for one line of source code or one function, and needs to be done with the help of an assembler and a disassembler.

The output of the compiler (object file and executable) conforms to a standard format so that it can be loaded and run by the operating system. This format is called ELF (Executable and Linkable Format), and the output of all compilers, including FORTRAN, C, and C++, conforms to it. It defines how an executable file is organized so that the loader knows where to look for code, data, etc.

Analyzing assembly instructions is tricky and requires knowledge of the assembly language. To get a basic understanding of how C source code is transformed into assembly, you can use the objdump tool.

For example, consider the following program (fact.c)

#include <stdio.h>
int factorial(int number)
{
    if (number >12) return(1);
    int fact =1;
    int i =1;
    for (; i<=number; i++)
    {
        fact = fact * i;
    }
    return fact;
}
int main()
{
   printf("%d\n", factorial(12));
   return 0;
}

Compile the above code with debugging enabled (-g option):

$gcc -g -c fact.c
The objdump -S option displays the source code and its corresponding assembly instructions together. This helps us understand how the compiler transforms C code into assembly.

$objdump -S fact.o
fact.o:     file format elf32-i386
Disassembly of section .text:
00000000 <factorial>:
#include <stdio.h>
int factorial(int number)
{
   0:   55                      push   %ebp
   1:   89 e5                   mov    %esp,%ebp
   3:   83 ec 14                sub    $0x14,%esp
    if (number >12) return(1);
   6:   83 7d 08 0c             cmpl   $0xc,0x8(%ebp)
   a:   7e 09                   jle    15 <factorial+0x15>
   c:   c7 45 ec 01 00 00 00    movl   $0x1,0xffffffec(%ebp)
  13:   eb 2c                   jmp    41 <factorial+0x41>
    int fact =1;
  15:   c7 45 f8 01 00 00 00    movl   $0x1,0xfffffff8(%ebp)
    int i =1;
  1c:   c7 45 fc 01 00 00 00    movl   $0x1,0xfffffffc(%ebp)
    for (; i<=number; i++)
  23:   eb 0e                   jmp    33 <factorial+0x33>
    {
        fact = fact * i;
  25:   8b 45 f8                mov    0xfffffff8(%ebp),%eax
  28:   0f af 45 fc             imul   0xfffffffc(%ebp),%eax
  2c:   89 45 f8                mov    %eax,0xfffffff8(%ebp)
  2f:   83 45 fc 01             addl   $0x1,0xfffffffc(%ebp)
  33:   8b 45 fc                mov    0xfffffffc(%ebp),%eax
  36:   3b 45 08                cmp    0x8(%ebp),%eax
  39:   7e ea                   jle    25 <factorial+0x25>
    }
    return fact;
  3b:   8b 45 f8                mov    0xfffffff8(%ebp),%eax
  3e:   89 45 ec                mov    %eax,0xffffffec(%ebp)
  41:   8b 45 ec                mov    0xffffffec(%ebp),%eax
}
  44:   c9                      leave
  45:   c3                      ret

00000046 <main>:
int main()
{
  46:   8d 4c 24 04             lea    0x4(%esp),%ecx
  4a:   83 e4 f0                and    $0xfffffff0,%esp
  4d:   ff 71 fc                pushl  0xfffffffc(%ecx)
  50:   55                      push   %ebp
  51:   89 e5                   mov    %esp,%ebp
  53:   51                      push   %ecx
  54:   83 ec 14                sub    $0x14,%esp
   printf("%d\n", factorial(12));
  57:   c7 04 24 0c 00 00 00    movl   $0xc,(%esp)
  5e:   e8 fc ff ff ff          call   5f <main+0x19>
  63:   89 44 24 04             mov    %eax,0x4(%esp)
  67:   c7 04 24 00 00 00 00    movl   $0x0,(%esp)
  6e:   e8 fc ff ff ff          call   6f <main+0x29>
   return 0;
  73:   b8 00 00 00 00          mov    $0x0,%eax
}
  78:   83 c4 14                add    $0x14,%esp
  7b:   59                      pop    %ecx
  7c:   5d                      pop    %ebp
  7d:   8d 61 fc                lea    0xfffffffc(%ecx),%esp
  80:   c3                      ret

Even if you do not know assembly language, you can still use this technique to improve the code: count the total number of assembly instructions produced for a function before and after your code changes.

The compiler (gcc) also provides the -S option to stop processing after producing an assembly file.

Consider the following program t1.c

#include<stdlib.h>
typedef struct element
{
    int              val;
    struct element  *next;
} Element;
Element *head;

void insert(int no)
{
    Element *newNode = (Element *) malloc( sizeof( Element ) );
    newNode->val = no;
    newNode->next = NULL;
    if (NULL == head) { head = newNode; return ;}
    Element *iter = head;
    while (NULL != iter->next) iter = iter->next;
    iter->next = newNode;
    return;
}
int main()
{
    insert(10);
    return 0;
}
Generate the assembly file for the above program as follows

$gcc -O2 -S t1.c
gcc will produce an assembly output file whose name is the same as that of the original C file, with a .s suffix (t1.s in this example).

The output file contains the following

        .file   "t1.c"
        .text
        .p2align 4,,15
.globl insert
        .type   insert, @function
insert:
        pushl   %ebp
        movl    %esp, %ebp
        subl    $8, %esp
        movl    $8, (%esp)
        call    malloc
        movl    head, %edx
        testl   %edx, %edx
        movl    %eax, %ecx
        movl    8(%ebp), %eax
        movl    $0, 4(%ecx)
        movl    %eax, (%ecx)
        jne     .L8
        jmp     .L11
        .p2align 4,,7
.L5:
        movl    %eax, %edx
.L8:
        movl    4(%edx), %eax
        testl   %eax, %eax
        jne     .L5
        movl    %ecx, 4(%edx)
        leave
        .p2align 4,,2
        ret
.L11:
        movl    %ecx, head
        leave
        ret
        .size   insert, .-insert
        .p2align 4,,15
.globl main
        .type   main, @function
main:
        leal    4(%esp), %ecx
        andl    $-16, %esp
        pushl   -4(%ecx)
        pushl   %ebp
        movl    %esp, %ebp
        pushl   %ecx
        subl    $4, %esp
….
The insert function has approximately 25 instructions with three branches.

If the program needs only a simple linked list and does not depend on the insertion order, we can modify the program as follows (t2.c):

#include<stdio.h>
#include<stdlib.h>
typedef struct element
{
    int              val;
    struct element  *next;
} Element;
Element *head;
void insert(int no)
{
    Element *newNode = (Element *) malloc( sizeof( Element ) );
    newNode->val = no;
    newNode->next = head;
    head = newNode;
    return;
}
int main()
{
    insert(10);
    return 0;
}
Generate the assembly file as before:

$gcc -O2 -S t2.c
t2.s contains

        .file   "t2.c"
        .text
        .p2align 4,,15
.globl insert
        .type   insert, @function
insert:
        pushl   %ebp
        movl    %esp, %ebp
        subl    $8, %esp
        movl    $8, (%esp)
        call    malloc
        movl    8(%ebp), %edx
        movl    %edx, (%eax)
        movl    head, %edx
        movl    %edx, 4(%eax)
        movl    %eax, head
        leave
        ret
        .size   insert, .-insert
        .p2align 4,,15
.globl main
        .type   main, @function
main:
        leal    4(%esp), %ecx
        andl    $-16, %esp
        pushl   -4(%ecx)
        pushl   %ebp
        movl    %esp, %ebp

Now the insert() function contains only 12 instructions; we have optimized the code to run faster without changing its behavior. Note that this change cannot be made if main() assumes that linked-list retrieval is in FIFO order.

Code Level Optimization Techniques

Some of the techniques mentioned below ("arithmetic operators", "jump table to replace if...else if", "faster for loops") make the code hard to debug, non-portable, and hard to maintain. Attempt these techniques only when there is no other way to optimize.

Use pass by reference for user defined type

Passing parameters by value to a function results in the complete parameter being copied onto the stack. For user-defined types, pass-by-value should be replaced by a const reference (or, in C, a pointer to const) to avoid this data copy. Returning big objects by value has the same performance issue; instead, return them through OUT parameters of the function.

Pre-compute quantities to speed up run-time calculations

Avoid computing constant quantities inside loops. Though most compilers perform this optimization, it is still better to handle it in our code rather than relying on the compiler.

for (int i=0; i <100; i++) {
    offset = offset + strlen(str);
}
should be replaced with

int size = strlen(str);
for (int i=0; i <100; i++) {
    offset = offset + size;
}

Avoid system calls in time critical section of the code 

System calls are costly because they involve switching from user mode to kernel mode. When two processes need to communicate, using shared memory is better than using message queues or pipes, as shared memory does not incur any system calls to send and receive data.

Lazy / Minimal Evaluation 

Lazy or delayed evaluation is the technique of delaying a computation until its result is actually needed on that code path.

For example, when expressions are combined using the logical AND or OR operator, the expression is evaluated from left to right, and evaluation stops as soon as the result is known. We can save CPU cycles by placing the sub-expressions in the right order.

Consider the code example

if (strcmp(str, "SYSTEM") && flag == 1)
{
   //…
}

In the above code segment, the strcmp() function takes a lot of time, and in cases where flag != 1 it still performs the comparison, wasting CPU cycles. The efficient code is:

if (flag == 1 && strcmp(str, "SYSTEM"))
{
   //…
}

The same applies to the logical OR operator when flag == 1. In general, for expressions using operator &&, make the condition that is most likely to be false the leftmost condition. For expressions using operator ||, make the condition that is most likely to be true the leftmost condition.

Frequently executed cases should come first in a switch or if...else if statement so that the number of comparisons is reduced for the common case.

The following switch statement is inefficient if the integer data type is used most of the time but is placed in the last case, after the string type, while the seldom-used double type is placed in the first case.

switch(dataType) {
  case typeDouble: { doDoubleAction(); break; }
  case typeDate: { doDateAction(); break; }
  case typeShort: { doShortAction(); break; }
  case typeString: { doStringAction(); break; }
  case typeTimeStamp: { doTimeStampAction(); break; }
  case typeInt: { doIntAction(); break; }
  default: { doDefaultAction(); break; }
}
This can be made more efficient by keeping the frequently used data types first, followed by the least used and seldom-used data types.

switch(dataType) {
  case typeInt: {…; break; }
  case typeString: {…; break; }
  case typeShort: {…; break; }
  case typeDouble: {…; break; }
  case typeTimeStamp: {…; break; }
  case typeDate: {…; break; }
  default: {…; break; }
}
A much more elegant way is to implement a jump table using function pointers. Consider the following code:

typedef void (*functs)();
functs JumpTable[] = { doIntAction, doStringAction, doShortAction /* etc */ };

Place your function pointers in the same order as the DataType enum values. The above JumpTable assumes that the DataType enum is defined as follows:

enum DataType {
    typeInt = 0,
    typeString,
    typeShort,
    /* other types */
};

To call the appropriate implementation, just use the following statement:

JumpTable[dataType]();

Now the compare operations are replaced with an array indexing operation, which is much faster. Moreover, the time taken is nearly the same for every data type, unlike with the if...else if and switch() constructs.

Minimize local variables

If the number of local variables in a function is small, the compiler will be able to fit them into registers, avoiding accesses to memory (the stack). If no local variables need to be saved on the stack, the compiler also need not set up and restore the frame pointer.

Reduce number of parameters

Function calls with a large number of parameters can be expensive due to the parameter pushes onto the stack on each call. This also applies when a struct is passed by value.

Declare local functions as static

If a function is declared static and is small enough, the compiler may inline it without having to maintain an external copy.

Avoid using global variables in performance critical code

Global variables are never allocated to registers. A global variable can be changed indirectly through a pointer or by a function call; hence, the compiler cannot cache its value in a register, resulting in extra loads and stores whenever globals are used.

Pointer chains

Pointer chains are frequently used to access information in structures. For example, a common code sequence is:

 typedef struct { int x, y, color; } Point;
 typedef struct { Point *pos; int something; } Pixel;

 void initPixel(Pixel *p)
 {
     p->pos->x = 0;
     p->pos->y = 0;
     p->pos->color = 0;
 }

However, this code must reload p->pos for each assignment. A better version would cache p->pos in a local variable:

 void initPixel(Pixel *p)
 {
     Point *pos = p->pos;
     pos->x = 0;
     pos->y = 0;
     pos->color = 0;
 }

Another way is to embed the Point structure directly in the Pixel structure, avoiding the pointer completely. Some compilers perform this pointer-caching optimization by default.

Replace frequent read/write with mmap

If the application makes a lot of read()/write() calls, you should seriously consider converting it to use mmap(). Memory-mapped I/O does not incur a data copy between kernel and user space, and it avoids the kernel-mode switch needed to execute each read() or write() system call. After writing all your data into the mapped memory area, you can call msync() (or fsync() on the underlying descriptor) once to flush the data to the disk file.

Place elements together, which are accessed together

When you declare a structure, make sure that the most frequently accessed elements are declared at the beginning.

For example, if the page element is accessed many times, hasSpace a little less, and freeNode rarely, then declare the structure as follows:

struct Node {
    int page;
    int hasSpace;
    int freeNode;
    /* other fields */
};

This improves performance because on an L1 or L2 cache miss, the hardware does not fetch only the required bytes; it brings one full cache line of data into the cache. When page is cached, hasSpace is also cached if both fall within the same cache line. Cache line sizes are typically 64 bytes (128 bytes on some processors).

Arithmetic operators

Replace multiplication and division by powers of two with shift and add operations:

X * 2 can be replaced with X + X or X << 1

X * N can be replaced with X << I, where N is 2^I

X / N can be replaced with X >> I, where N is 2^I (safe only for unsigned or non-negative values; for negative signed values, right shift rounds differently from division)

Faster for() loops 

for( i=0; i<10; i++){ … } 

If the code inside the loop does not care about the order of the loop counter, we can do this instead:

for( i=10; i--; ) { … }

This runs faster because it is quicker to process i-- as the test condition: "Is i non-zero? If so, decrement it and continue." For the original code, the processor has to "compare i with 10; is it smaller? If so, increment i and continue", which needs a separate comparison against a constant, whereas testing against zero comes free with the decrement.

Prefer int over char and short

If you have an integer value that fits in a byte, you should still consider using an int to hold it. When you use a char in arithmetic, the compiler first promotes the value to int, performs the operation, and then converts the result back to char. Doing so increases memory usage slightly but decreases CPU cycles.

Advise OS on how it should manage memory pages

Using the madvise() system call, we can tell the operating system how swapping and prefetching should be handled for a memory range. If you are accessing data sequentially, the OS can prefetch pages and have them ready before the program references them. If the program accesses data at some memory location frequently, that memory can be locked in physical memory using the mlock() system call, avoiding page swaps to disk.

Thread Design Techniques

A) Pipeline
The task is divided into multiple subtasks, and each thread is responsible for one subtask. The job is passed from one thread to the next until it reaches its final state.

B) Master/Servant
The master thread accepts the work and delegates it to its servants. Servants are created on the fly by the master when work arrives; after finishing the job, the servants go away. For the next piece of work, the master creates new servants.

C) Thread Pool
The master thread accepts the work and delegates it to a predefined set of servants, called a thread pool. This is usually accomplished with a queue in between: the master adds requests to the queue, and the threads in the pool repeatedly remove requests from the queue and handle them.

Porting Unix Socket code to Windows

Windows sockets and UNIX-style Berkeley sockets provide a very similar interface, which eases the porting effort. Apart from the regular network calls such as socket(), bind(), listen(), accept(), etc., two API calls are important in Windows sockets:

WSAStartup() -> should be called before calling any other Winsock API

WSACleanup() -> should be called when the program exits

After a socket connection is established, data can be transferred using send() and recv(), which work on both platforms; the UNIX read() and write() calls do not work on Windows sockets. Sockets must be closed using the closesocket() function in Windows instead of close() as on UNIX.

Header File:

winsock2.h

Library:

Ws2_32.lib

Alternatively, you can add the following line to your source file:

#pragma comment(lib, "Ws2_32.lib")

More information on this topic can be found on MSDN.