
Stochastic Modeling and Optimization of Stragglers

Abstract:
The MapReduce framework is widely used to parallelize batch jobs because it exploits a high degree of multi-tasking to process them. However, it has been observed that as the number of servers increases, the map phase can take much longer than expected. This paper shows analytically that the stochastic behavior of the servers has a negative effect on the completion time of a MapReduce job, and that continuously increasing the number of servers without accurate scheduling can degrade the overall performance. We analytically model the map phase in terms of hardware, system, and application parameters to capture the effect of stragglers on performance. Mean sojourn time (MST), the time needed to sync the completed tasks at a reducer, is introduced as a performance metric and formulated mathematically. We then investigate the stochastically optimal task scheduling, which leads to an equilibrium property in a datacenter with different types of servers. Our experimental results show the performance of different types of schedulers targeting MapReduce applications. We also show that, in the case of mixed deterministic and stochastic schedulers, there is an optimal scheduler that always achieves the lowest MST.
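
As a rough numerical illustration of the straggler effect described above (a sketch of our own, not the paper's model): if each of n map tasks finishes after an i.i.d. Exp(mu) service time, the reducer must wait for the slowest task, whose expected value is H_n/mu (the n-th harmonic number divided by the service rate). The Python snippet below, which assumes this simplified i.i.d. exponential setting, shows how the expected sync time keeps growing as servers are added even though no individual task gets slower.

import numpy as np

def expected_sync_time(n, mu=1.0):
    """Expected value of the maximum of n i.i.d. Exp(mu) task times: H_n / mu."""
    return sum(1.0 / k for k in range(1, n + 1)) / mu

for n in (1, 10, 100, 1000):
    analytic = expected_sync_time(n)
    simulated = np.random.exponential(scale=1.0, size=(5000, n)).max(axis=1).mean()
    print(f"n={n:5d}  E[max] analytic={analytic:6.3f}  simulated={simulated:6.3f}")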

 

Stochastic modeling and optimization of stragglers in mapreduce framework

@phdthesis{farhat2015stochastic,
  title={Stochastic modeling and optimization of stragglers in mapreduce framework},
  author={Farhat, Farshid},
  year={2015},
  school={The Pennsylvania State University}
}

 

Stochastic modeling and optimization of stragglers

@article{farhat2016stochastic,
  title={Stochastic modeling and optimization of stragglers},
  author={Farhat, Farshid and Tootaghaj, Diman and He, Yuxiong and Sivasubramaniam, Anand and Kandemir, Mahmut and Das, Chita},
  journal={IEEE Transactions on Cloud Computing},
  year={2016},
  publisher={IEEE}
}

Node Architecture and Cloud Workload Characteristics Analysis

Abstract
The combined impact of node architecture and workload characteristics on off-chip network traffic, together with a performance/cost analysis, has not previously been investigated in the context of emerging cloud applications. Motivated by this observation, this paper performs a thorough characterization of twelve cloud workloads using a full-system datacenter simulation infrastructure. We first study the inherent network characteristics of emerging cloud applications, including message inter-arrival times, packet sizes, inter-node communication overhead, self-similarity, and traffic volume. We then study the effect of hardware architectural metrics on network traffic. Our experimental analysis reveals that (1) the message arrival-time and packet-size distributions vary across different cloud applications, (2) the inter-arrival times exhibit a large amount of self-similarity as the number of nodes increases, (3) the node architecture can play a significant role in shaping the overall network traffic, and (4) the applications we study can be broadly divided into those that perform better in a scale-out versus a scale-up configuration at the node level, and into two traffic categories, namely those with long-duration, low-burst flows and those with short-duration, high-burst flows. Using the results of (3) and (4), the paper discusses the performance/cost trade-offs of scale-out and scale-up approaches and proposes an analytical model that can be used to predict the communication and computation demand of different configurations. It is shown that the difference in performance per dollar between two different node architectures (with the same number of cores system-wide) can be as high as 154 percent, which underscores the need for accurate characterization of cloud applications before precious cloud resources are wasted by allocating the wrong architecture. The results of this study can be used for system modeling, capacity planning, and managing heterogeneous resources in large-scale system designs.
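
The self-similarity finding in (2) is commonly quantified with the Hurst exponent: values near 0.5 indicate short-range dependence, while values approaching 1 indicate strong self-similarity. The sketch below estimates the exponent with classical rescaled-range (R/S) analysis; the synthetic inter-arrival trace and the window sizes are illustrative assumptions, not the paper's measurement pipeline.

import numpy as np

def hurst_rs(series, window_sizes=(16, 32, 64, 128, 256)):
    """Estimate the Hurst exponent of a 1-D series via rescaled-range (R/S) analysis."""
    series = np.asarray(series, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_values = []
        for start in range(0, len(series) - n + 1, n):
            chunk = series[start:start + n]
            cum = np.cumsum(chunk - chunk.mean())
            r = cum.max() - cum.min()   # range of cumulative deviations
            s = chunk.std()             # standard deviation of the chunk
            if s > 0:
                rs_values.append(r / s)
        if rs_values:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_values)))
    slope, _ = np.polyfit(log_n, log_rs, 1)  # slope of log(R/S) vs log(n) is H
    return slope

# Illustrative trace: i.i.d. exponential inter-arrivals should yield H near 0.5;
# a self-similar trace would push the estimate toward 1.
trace = np.random.exponential(scale=1.0, size=4096)
print("Estimated Hurst exponent:", round(hurst_rs(trace), 3))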

Full Text > Combined Impact of Node Architecture and Cloud Workload Characteristics

More info > Diman Zad Tootaghaj's Publications

Optimal Placement in Network On-Chip

Abstract:
Parallel programming is emerging fast, and intensive applications need more resources, so there is a huge demand for on-chip multiprocessors. Accessing the L1 caches beside the cores is the fastest option after registers, but the size of these private caches cannot grow because of design, cost, and technology limits. Split I-caches and D-caches are therefore used together with a shared LLC (last-level cache). For a unified shared LLC the bus interface is not scalable, so a distributed shared LLC (DSLLC) appears to be a better choice. Most papers assume that a DSLLC slice is placed beside each core in the on-chip network; however, we show that this design ignores the effect of traffic congestion in the on-chip network. Our work instead focuses on the optimal placement of cores, DSLLC slices, and even memory controllers to minimize the expected latency, based on the traffic load, in a mesh on-chip network with a fixed number of cores and a fixed total cache capacity. We develop an analytical model to derive the intended cost function and then optimize the mean delay of on-chip network communication. The work is to be verified using traffic patterns run on the CSIM simulator.
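
To make the placement problem concrete, here is a minimal sketch (a toy formulation of our own, not the paper's cost model or the CSIM setup): tiles on a small mesh exchange traffic given by a hypothetical traffic matrix, the expected latency of a placement is the traffic-weighted Manhattan hop count, and an exhaustive search finds the placement that minimizes it. For realistic mesh sizes this is a quadratic assignment problem, so heuristics would replace the brute-force search.

import itertools

# Hypothetical traffic matrix (packets/cycle) among 4 tiles:
# two cores (0, 1), one DSLLC bank (2), one memory controller (3).
TRAFFIC = [
    [0, 1, 8, 2],
    [1, 0, 8, 2],
    [8, 8, 0, 5],
    [2, 2, 5, 0],
]
MESH = [(0, 0), (0, 1), (1, 0), (1, 1)]  # 2x2 mesh slot coordinates

def expected_latency(placement):
    """Traffic-weighted Manhattan hop count for a tile -> mesh-slot placement."""
    total = 0.0
    for a in range(len(TRAFFIC)):
        for b in range(len(TRAFFIC)):
            (xa, ya), (xb, yb) = MESH[placement[a]], MESH[placement[b]]
            total += TRAFFIC[a][b] * (abs(xa - xb) + abs(ya - yb))
    return total

best = min(itertools.permutations(range(len(MESH))), key=expected_latency)
print("best placement (tile -> slot):", best, " cost:", expected_latency(best))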

Full text @ OPCCMCNOC

Towards Stochastically Optimizing Data Computing Flows

Abstract:
With the rapid growth in the amount of unstructured data produced by memory-intensive applications, large-scale data analytics has recently attracted increasing interest. Processing, managing, and analyzing this huge amount of data poses several challenges in the cloud and datacenter computing domain. In particular, conventional frameworks for distributed data analytics are based on the assumption of homogeneity and a non-stochastic distribution of the different data-processing nodes. This paper examines the fundamental factors limiting the scaling of big data computation. It is shown that as the number of series and parallel computing servers increases, the tail (mean and variance) of the job execution time increases. We first propose a model to predict the response time of highly distributed processing tasks and then propose a new, practical computational algorithm to optimize the response time.
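
A minimal Monte Carlo sketch of this scaling argument (assuming i.i.d. exponential task times, which is our simplification rather than the paper's model): a job consists of s stages in series, each stage fanning out to p parallel servers and finishing only when its slowest server does. Both the mean and the variance of the job execution time grow as s and p grow.

import numpy as np

def job_times(series_stages, parallel_servers, runs=5000, mean_task=1.0):
    """Completion time of sequential stages, each the max of `parallel_servers` Exp tasks."""
    stage_max = np.random.exponential(
        mean_task, size=(runs, series_stages, parallel_servers)).max(axis=2)
    return stage_max.sum(axis=1)

for s, p in [(1, 1), (1, 16), (4, 16), (4, 256)]:
    t = job_times(s, p)
    print(f"series={s:2d} parallel={p:4d}  mean={t.mean():6.2f}  var={t.var():6.2f}")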

 

Modeling and Optimization of Straggling Mappers

ABSTRACT
The MapReduce framework is widely used to parallelize batch jobs because it exploits a high degree of multi-tasking to process them. However, it has been observed that as the number of mappers increases, the map phase can take much longer than expected. This paper shows analytically that the stochastic behavior of mapper nodes has a negative effect on the completion time of a MapReduce job, and that continuously increasing the number of mappers without accurate scheduling can degrade the overall performance. We analytically capture the effect of stragglers (delayed mappers) on performance. Based on an observed delayed exponential distribution (DED) of the mappers' response times, we then model the map phase in terms of hardware, system, and application parameters. Mean sojourn time (MST), the time needed to sync the completed map tasks at one reducer, is mathematically formulated. We then optimize MST by finding the task inter-arrival time to each mapper node. The optimal mapping problem leads to an equilibrium property investigated for different types of inter-arrival and service time distributions in a heterogeneous datacenter (i.e., a datacenter with different types of nodes). Our experimental results show the performance and important parameters of different types of schedulers targeting MapReduce applications. We also show that, in the case of mixed deterministic and stochastic schedulers, there is an optimal scheduler that always achieves the lowest MST.
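
The response-time model named above, a delayed exponential distribution, can be sampled as a fixed startup delay plus an exponential service tail. The sketch below (with illustrative parameter values, not those measured in the paper) draws mapper response times from such a DED and reports an MST proxy at the reducer: the average, over many runs, of the time at which the slowest mapper finishes.

import numpy as np

def sample_ded(delay, rate, size):
    """Delayed exponential: constant startup delay plus an Exp(rate) tail."""
    return delay + np.random.exponential(scale=1.0 / rate, size=size)

def mean_sojourn_time(num_mappers, delay=0.5, rate=2.0, runs=20000):
    """MST proxy: expected time until the slowest of `num_mappers` DED mappers finishes."""
    times = sample_ded(delay, rate, size=(runs, num_mappers))
    return times.max(axis=1).mean()

for m in (4, 16, 64, 256):
    print(f"mappers={m:4d}  estimated MST={mean_sojourn_time(m):.3f}")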

[Tech Report] [Master Thesis] [IEEE Trans]

Last version > MapReduce_Performance_Optimization

Game-theoretic model to mitigate packet dropping

Abstract:
Routing performance is severely degraded when misbehaving nodes drop packets instead of properly forwarding them. In this paper, we propose a Game-Theoretic Adaptive Multipath Routing (GTAMR) protocol to detect and punish selfish or malicious nodes that try to drop information packets during the routing phase, and to defend against collaborative attacks in which nodes try to disrupt communication or save their own power. Our proposed algorithm outranks previous schemes because it is resilient against attacks in which more than one node coordinates its misbehavior, and it can be used in networks whose wireless nodes use directional antennas. We then propose a game-theoretic strategy, ERTFT, for nodes to promote cooperation. In comparison with other proposed TFT-like strategies, ours is resilient to systematic errors in the detection of selfish nodes and does not lead to unending death spirals.
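
To illustrate why error resilience matters for TFT-like strategies (a toy iterated forwarding game of our own, not the GTAMR protocol or the exact ERTFT rules): with noisy misbehavior detection, a single false detection under plain tit-for-tat echoes back and forth as retaliation and pulls the average cooperation rate down toward one half, while a generous variant that occasionally forgives keeps cooperation high.

import random

def play(strategy, rounds=10000, error=0.05, seed=0):
    """Two nodes forward packets for each other; `error` is the false-detection rate."""
    rng = random.Random(seed)
    last_seen = [True, True]  # what each node believes the other did last round
    cooperation = 0
    for _ in range(rounds):
        actions = [strategy(last_seen[0], rng), strategy(last_seen[1], rng)]
        cooperation += sum(actions)
        # each node observes the other's action, flipped with probability `error`
        last_seen = [a if rng.random() > error else not a for a in actions[::-1]]
    return cooperation / (2 * rounds)

tft      = lambda opponent_cooperated, rng: opponent_cooperated
generous = lambda opponent_cooperated, rng: opponent_cooperated or rng.random() < 0.1

print("plain TFT cooperation rate:   ", round(play(tft), 3))
print("generous TFT cooperation rate:", round(play(generous), 3))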

Website > Game-Theoretic Network Simulator (GTNS)

Full text > Game-theoretic approach to mitigate packet dropping in wireless ad-hoc networks

Code > GTNS

 

Risk of attack coefficient effect on availability of adhoc networks

Abstract:
Security techniques are designed to achieve certain objectives. One of the most important objectives that all security mechanisms try to achieve is availability, which ensures that network services are available to the various entities in the network when required. However, there has been no well-defined parameter for measuring this objective in a network. In this paper we consider availability as a security parameter in ad-hoc networks, although the parameter can be used in other networks as well. We also present the connectivity coefficient of nodes in a network, which shows how important a node is in the network and how much damage is caused if that node is compromised.
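
One way to turn the idea of a connectivity coefficient into a number (a sketch of our own, not necessarily the paper's exact definition): measure the fraction of connected node pairs that is lost when a given node is compromised and removed from the topology. Nodes that bridge clusters score highest.

from collections import deque

def connected_pairs(adj, removed=None):
    """Count unordered pairs of surviving nodes that can still reach each other (BFS)."""
    removed = removed or set()
    seen, pairs = set(), 0
    for source in adj:
        if source in seen or source in removed:
            continue
        component, queue = set(), deque([source])
        while queue:
            v = queue.popleft()
            if v in component or v in removed:
                continue
            component.add(v)
            queue.extend(n for n in adj[v] if n not in removed and n not in component)
        seen |= component
        pairs += len(component) * (len(component) - 1) // 2
    return pairs

def connectivity_coefficient(adj, node):
    """Fraction of reachable pairs lost when `node` is compromised."""
    return 1 - connected_pairs(adj, removed={node}) / connected_pairs(adj)

# Hypothetical ad-hoc topology: node 2 is a cut vertex bridging two clusters.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
for v in adj:
    print(f"node {v}: connectivity coefficient = {connectivity_coefficient(adj, v):.2f}")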

Risk of attack coefficient effect on availability of adhoc networks