ABSTRACT

The point of high-performance computing (HPC) is to allow individual nodes to work together to solve a problem larger than any single computer can easily solve. These nodes must be able to communicate and pass information to one another over a variety of computer networks. As long as the nodes are connected to the same network, tasks can still be solved even when the computers are not located in the same place. HPC increases computing performance by applying parallel processing to run complicated tasks more efficiently. A high-performance computer, commonly known as a supercomputer, is one outcome of this approach; however, a supercomputer is a single machine with tens of thousands of processors and is too expensive for every organization to own. Comparable performance can be partially achieved without owning a supercomputer by using multiple computers working together through parallel computing, as mentioned above. Parallel computing is the use of many processors to solve complex computational problems or tasks. The task is initially divided into small parts, each completed by one of the processors to obtain results faster, and the results are transferred to the receiver once all subtasks are completed. The processors are allowed to exchange information with one another for better performance. Parallel computers are classified into several classes based on the level of parallelism they support.
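The divide-compute-gather pattern described above can be sketched in a few lines. The following is a minimal illustration, not anything specified in the text: the task (summing a large list of numbers), the chunking scheme, and the worker count are all assumptions chosen for the example, using Python's standard multiprocessing module.

```python
# Sketch of parallel task decomposition: the task is divided into small
# parts, each part is completed by a separate worker process, and the
# partial results are gathered and combined. The task itself (a large
# summation) is an illustrative assumption, not drawn from the text.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker completes its subtask independently.
    return sum(chunk)

def parallel_sum(data, n_workers=4):
    # Divide the task into roughly equal parts, one per worker.
    size = max(1, len(data) // n_workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Distribute the subtasks, then combine the partial results.
    with Pool(n_workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    total = parallel_sum(list(range(1_000_000)))
    print(total)
```

On a real HPC cluster the workers would be separate nodes exchanging messages over the network (e.g. via MPI) rather than processes on one machine, but the decomposition pattern is the same.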
Parallel computing is often of interest to small and medium-sized businesses, as organizations want to achieve supercomputer performance; technically this is hardly achievable, since a supercomputer is a system that operates at nearly the highest current operational rate, with most performing more than a petaflop (a thousand trillion floating-point operations) per second. HPC technology focuses on developing parallel processing algorithms and systems by incorporating both administrative and parallel computational techniques.