Search for: benchmarking (0.01 seconds, 190 records total)
, M.Sc. Thesis Sharif University of Technology ; Bakhshi, Ali (Supervisor)
Abstract
Experimental methods have been widely adopted to verify numerical and analytical developments in structural and earthquake engineering. For instance, numerical simulations in structural health monitoring, structural control, and energy-dissipation systems need to be verified against experimental results. Consequently, research organizations spend substantial budgets on assessing and updating these experimental methods. Many of these experimental tests are conducted on benchmark structures with specific technical characteristics, and these properties should not change after each experiment. Therefore, the tests on these...
Control of the Activated Sludge System Using Neural Network Model Predictive Control
, M.Sc. Thesis Sharif University of Technology ; Shaygan Salek, Jalaloddin (Supervisor)
Abstract
Activated sludge systems are widespread biological wastewater treatment systems with very complex, nonlinear dynamics spanning a wide range of time constants and, as a consequence, are difficult to model and control. On the other hand, using neural networks as function approximators provides a reliable tool for modeling complex dynamic systems such as activated sludge. In this study, a multi-input multi-output neural network model predictive controller (NNMPC) is developed and tested based on the basic control strategy of a benchmark simulation model (BSM1) suggested by the European Co-operation in the field of Scientific and Technical Research (COST) Actions 682/624. The controller...
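The excerpt does not include the controller details, but the receding-horizon idea behind an NN-based MPC can be sketched. The one-step model nn_model, the horizon, and the setpoint below are illustrative placeholders, not the thesis's trained networks or BSM1 variables.

```python
# Minimal receding-horizon NN-MPC sketch (illustrative; not the thesis's BSM1 setup).
import numpy as np
from scipy.optimize import minimize

def nn_model(x, u):
    """Stand-in for a trained neural-network one-step predictor x_{k+1} = f(x_k, u_k)."""
    return np.tanh(0.9 * x + 0.5 * u)            # placeholder dynamics

def mpc_step(x0, setpoint, horizon=5, u_prev=0.0):
    """Choose the first control move of the sequence that minimises the tracking cost."""
    def cost(u_seq):
        x, J = x0, 0.0
        for k in range(horizon):
            x = nn_model(x, u_seq[k])
            J += (x - setpoint) ** 2 + 0.01 * u_seq[k] ** 2   # tracking + control effort
        return J
    res = minimize(cost, np.full(horizon, u_prev), bounds=[(-1, 1)] * horizon)
    return res.x[0]                               # apply only the first move (receding horizon)

x = 0.0
for _ in range(20):                               # closed-loop simulation
    u = mpc_step(x, setpoint=0.6)
    x = nn_model(x, u)
print(f"final state: {x:.3f}")
```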
New Generation of On-purpose Attacks for Evaluating Digital Image Watermarking Methods by Preserving the Image Quality
, Ph.D. Dissertation Sharif University of Technology ; Jamzad, Mansour (Supervisor)
Abstract
Up to now, compared with the comprehensive research on developing robust watermarking algorithms, far less attention has been devoted to benchmarks tailored to assess watermark robustness. In addition, almost all state-of-the-art benchmarks only integrate a number of common image processing operations, such as geometrical transformations, to remove watermarks. However, the quality of the processed image is often too degraded to permit further commercial exploitation. Moreover, to the best of our knowledge, these tools do not take the statistical properties of the images and watermarks into account when designing attacks. In spite of the significant...
Improving CPU-GPU System Performance Through Dynamic Management of LLC and NoC
, M.Sc. Thesis Sharif University of Technology ; Sarbazi Azad, Hamid (Supervisor)
Abstract
CPU-GPU Heterogeneous System Architectures (HSAs) play an important role in today's computing systems. Because of fast-growing technology and the need for high-performance computing, HSAs are widely used platforms. Integrating a multi-core Central Processing Unit (CPU) with a many-core Graphics Processing Unit (GPU) on the same die combines the features of both processors and provides better performance. The ability of HSAs to deliver high computing throughput has led to the widespread use of these systems. Besides their high performance, HSAs also present challenges. These challenges are caused by the use of two processors with different behaviors and requirements on the same die....
Single-Cell RNA-seq Dropout Imputation and Noise Reduction by Machine Learning
, M.Sc. Thesis Sharif University of Technology ; Soleymani Baghshah, Mahdieh (Supervisor) ; Sharifi Zarchi, Ali (Supervisor) ; Goodarzi, Hani (Co-Supervisor)
Abstract
Single-cell RNA sequencing (scRNA-seq) technologies have empowered us to study gene expression at single-cell resolution. These technologies are based on barcoding single cells and sequencing their transcriptomes using next-generation sequencing. Achieving this single-cell resolution is especially important when the target population is complex or heterogeneous, which is the case for most biological samples, including tissue samples and tumor biopsies. Single-cell technologies suffer from high amounts of noise and missing values, generally known as dropouts. This complexity can affect a number of key downstream analyses such as differential expression analysis,...
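The excerpt does not describe the imputation model itself; as a generic illustration of borrowing information from similar cells to fill dropout zeros, here is a simple k-nearest-neighbour averaging sketch on synthetic counts (not the thesis's method).

```python
# Toy dropout imputation by averaging over similar cells (not the thesis's model).
import numpy as np

rng = np.random.default_rng(0)
expr = rng.poisson(3.0, size=(100, 50)).astype(float)    # cells x genes (synthetic)
dropout = rng.random(expr.shape) < 0.3                   # simulate technical zeros
observed = np.where(dropout, 0.0, expr)

def knn_impute(X, k=10):
    """Replace zeros in each cell by the mean of its k most correlated cells."""
    imputed = X.copy()
    corr = np.corrcoef(X)                                # cell-cell similarity
    for i in range(X.shape[0]):
        neighbours = np.argsort(corr[i])[::-1][1:k + 1]  # skip the cell itself
        zeros = X[i] == 0
        imputed[i, zeros] = X[neighbours][:, zeros].mean(axis=0)
    return imputed

filled = knn_impute(observed)
print("zeros before:", int((observed == 0).sum()), "after:", int((filled == 0).sum()))
```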
Integration of clustering analysis and reward/penalty mechanisms for regulating service reliability in distribution systems
, Article IET Generation, Transmission and Distribution ; Vol. 5, issue. 11 , 2011 , p. 1192-1200 ; ISSN: 17518687 ; Fotuhi-Firuzabad, M ; Billinton, R ; Sharif University of Technology
Abstract
This study proposes an approach for improving service reliability in the distribution network by establishing competition among electric distribution utilities. The idea behind this approach is to categorise the utilities and compare the performance of each utility located in one cluster with the other members of the same cluster. The reward/penalty mechanism (RPM), as a quality-regulating instrument, is designed for each cluster and used to penalise utilities that perform worse than the benchmark and to reward those that perform better than the benchmark. Based on the RPM, utilities located in one cluster compete to make more profit by serving customers in better...
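As a rough illustration of the described flow (cluster the utilities, benchmark each one against its own cluster, then reward or penalise deviations), the sketch below uses k-means on a synthetic reliability index; the index, cluster count, and incentive rate are assumptions, not the paper's RPM design.

```python
# Cluster utilities, then reward/penalise each against its cluster benchmark
# (synthetic data and incentive rate are illustrative, not the paper's RPM).
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(1)
customers = rng.integers(10_000, 200_000, size=12)        # utility size
saidi = rng.uniform(1.0, 8.0, size=12)                     # outage hours/customer/year

features = np.column_stack([customers / customers.max(), saidi / saidi.max()])
_, labels = kmeans2(features, 3, minit="points", seed=2)

rate = 50_000.0                                            # currency units per hour of deviation (assumed)
for c in np.unique(labels):
    members = np.flatnonzero(labels == c)
    benchmark = saidi[members].mean()                      # cluster benchmark performance
    for i in members:
        incentive = rate * (benchmark - saidi[i])          # >0 reward, <0 penalty
        print(f"utility {i:2d} cluster {c} SAIDI {saidi[i]:.2f} "
              f"benchmark {benchmark:.2f} incentive {incentive:+,.0f}")
```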
An optimal hardware implementation for active learning method based on memristor crossbar structures
, Article IEEE Systems Journal ; Vol. 8, issue. 4 , 2014 , pp. 1190-1199 ; ISSN: 19328184 ; Shouraki, S. B ; Haghighat, B ; Sharif University of Technology
Abstract
This paper presents a new inference algorithm for the active learning method (ALM). ALM is a pattern-based algorithm for soft computing that uses the ink drop spread (IDS) algorithm as its main engine for feature extraction. In this paper, a fuzzy number is extracted from each IDS plane rather than from the narrow path and the spread, as in previous approaches. This leads to a significant reduction in the hardware required to implement the inference part of the algorithm and enables real-time computation in the implemented hardware. A modified version of the memristor crossbar structure is used to solve the problem of varying ink-drop shapes reported in previous studies. In order to...
Connectedness of users-items networks and recommender systems
, Article Applied Mathematics and Computation ; Vol. 243 , 2014 , Pages 578-584 ; ISSN: 00963003 ; Jalili, M ; Sharif University of Technology
Abstract
Recommender systems have become an important topic in network science. Collaborative filtering and its variants are the most widely used approaches for building recommender systems and have received great attention in both academia and industry. In this paper, we study the relationship between recommender systems and the connectivity of the users-items bipartite network, which results in a novel recommendation algorithm. In our method, recommended items are selected based on the eigenvector corresponding to the algebraic connectivity of the graph, i.e., the second smallest eigenvalue of the Laplacian matrix. Since recommending an item to a user is equivalent to adding a new link to the users-items bipartite...
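The recommendation signal named in the abstract, the eigenvector associated with the algebraic connectivity (the Fiedler vector) of the users-items bipartite graph, can be computed as sketched below on a toy rating matrix; the final scoring rule is an illustrative reading, not necessarily the paper's exact algorithm.

```python
# Fiedler vector of a users-items bipartite graph (toy data; scoring rule is illustrative).
import numpy as np

R = np.array([[1, 1, 0, 0],        # users x items interaction matrix
              [1, 0, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 0, 1]], dtype=float)

n_users, n_items = R.shape
# Adjacency of the bipartite graph: user vertices first, then item vertices.
A = np.zeros((n_users + n_items, n_users + n_items))
A[:n_users, n_users:] = R
A[n_users:, :n_users] = R.T
L = np.diag(A.sum(axis=1)) - A                      # graph Laplacian

eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]                             # eigenvector of the 2nd smallest eigenvalue
item_scores = fiedler[n_users:]

user = 0
unseen = np.flatnonzero(R[user] == 0)
# Illustration: rank unseen items by closeness to the user's Fiedler coordinate.
ranked = unseen[np.argsort(np.abs(item_scores[unseen] - fiedler[user]))]
print("recommendations for user 0:", ranked.tolist())
```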
Nonparametric frontier analysis models for efficiency evaluation in insurance industry: A case study of Iranian insurance market
, Article Neural Computing and Applications ; Vol. 24, issue. 5 , April , 2014 , pp. 1153-1161 ; ISSN: 09410643 ; Barati, B ; Majazi Dalfard, V ; Hatami-Shirkouhi, L ; Sharif University of Technology
Abstract
Performance evaluation and efficiency analysis are considered to be among the critical responsibilities of management. This paper investigates and assesses the efficiency and performance of Iranian insurance companies through nonparametric frontier analysis (FA) models. Two well-known nonparametric FA models, data envelopment analysis (DEA) and free disposal hull, are utilized to separate the efficient companies from the inefficient ones, and two well-known super-efficiency analysis models are utilized to rank the efficient units. For further analysis, critical inputs are also identified for inefficient companies using DEA sensitivity analysis, which is a powerful...
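DEA itself reduces to a standard linear program; a minimal input-oriented CCR sketch with scipy's linprog is shown below on made-up insurer inputs and outputs (the study's actual data and the free-disposal-hull and super-efficiency models are not reproduced).

```python
# Input-oriented CCR DEA efficiency via linear programming (placeholder data).
import numpy as np
from scipy.optimize import linprog

X = np.array([[120,  80, 150,  95,  60],      # inputs  (e.g. operating cost) per company
              [ 30,  25,  40,  20,  15]])     # inputs  (e.g. staff)
Y = np.array([[200, 160, 210, 180,  90],      # outputs (e.g. premiums written)
              [ 50,  45,  55,  60,  20]])     # outputs (e.g. claims settled)
m, n = X.shape
s = Y.shape[0]

def ccr_efficiency(o):
    """min theta s.t. X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o],  lam >= 0."""
    c = np.concatenate(([1.0], np.zeros(n)))                   # minimise theta
    A_in = np.hstack([-X[:, [o]], X])                          # X @ lam - theta * x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])                  # -Y @ lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, o]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for o in range(n):
    print(f"company {o}: efficiency = {ccr_efficiency(o):.3f}")
```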
Towards dark silicon era in FPGAs using complementary hard logic design
, Article Conference Digest - 24th International Conference on Field Programmable Logic and Applications, FPL 2014 ; Sept , 2014 , pp. 1 - 6 ; ISBN: 9783000446450 ; Khaleghi, B ; Ebrahimi, Z ; Asadi, H ; Tahoori, M. B ; Sharif University of Technology
Abstract
While transistor density continues to grow exponentially in Field-Programmable Gate Arrays (FPGAs), the increased leakage current of CMOS transistors acts as a power wall for the aggressive integration of transistors in a single die. One recent trend to alleviate the power wall in FPGAs is to turn off inactive regions of the silicon die, referred to as dark silicon. This paper presents a reconfigurable architecture to enable effective fine-grained power gating of unused Logic Blocks (LBs) in FPGAs. In the proposed architecture, the traditional soft logic is replaced with Mega Cells (MCs), each consisting of a set of complementary Generic Reconfigurable Hard Logic (GRHL) and a conventional...
A multi-objective harmony search algorithm to optimize multi-server location-allocation problem in congested systems
, Article Computers and Industrial Engineering ; Vol. 72, Issue. 1 , 2014 , pp. 187-197 ; ISSN: 03608352 ; Rahmati, SH. A ; Pasandideh, S. H. R ; Niaki, S. T. A ; Sharif University of Technology
Abstract
A novel bi-objective multi-server location-allocation (LA) model is developed in this paper, in which the facilities are modeled as an M/M/m queuing system. Further, capacity and budget limitations are imposed to make the LA problem more realistic. The two objective functions are (1) minimizing the aggregate waiting times and (2) minimizing the maximum idle time of all facilities. Since the proposed model is NP-hard, a meta-heuristic called the multi-objective harmony search algorithm (MOHA) is developed to solve it. In this algorithm, a new representation scheme that satisfies most of the model constraints is proposed. Since there is no benchmark available in the literature to validate...
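For context, the M/M/m waiting time that such objectives build on follows the classic Erlang C formula, sketched below with made-up arrival and service rates (the paper's full bi-objective model and MOHA are not reproduced).

```python
# Expected waiting time in an M/M/m queue via the Erlang C formula (illustrative rates).
from math import factorial

def mmm_wait(lam, mu, m):
    """Mean time in queue W_q for arrival rate lam, service rate mu, and m servers."""
    a = lam / mu                        # offered load
    rho = a / m
    if rho >= 1:
        raise ValueError("unstable queue: lam >= m * mu")
    num = a ** m / factorial(m) / (1 - rho)
    den = sum(a ** k / factorial(k) for k in range(m)) + num
    p_wait = num / den                  # Erlang C: probability an arrival must wait
    return p_wait / (m * mu - lam)

# Example: 10 customers/hour, each server handles 4/hour, 3 servers.
print(f"W_q = {mmm_wait(lam=10, mu=4, m=3):.3f} hours")
```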
A comparative study of different approaches for finding the upper boundary points in stochastic-flow networks
, Article International Journal of Enterprise Information Systems ; Volume 10, Issue 3 , 1 July , 2014 , Pages 13-20 ; ISSN: 15481115 ; Nasseri, S. H ; Forghani Elahabad, M ; Ebrahimnejad, A ; Sharif University of Technology
Abstract
An information system network (ISN) can be modeled as a stochastic-flow network (SFN). Several algorithms evaluate the reliability of an SFN in terms of minimal cuts (MCs). The existing algorithms commonly first find all the upper boundary points (called d-MCs) in an SFN and then determine the reliability of the network using approaches such as the inclusion-exclusion method, the sum of disjoint products, etc. However, most of these algorithms have been compared only via complexity results or through one or two benchmark networks. Thus, comparing them on random test problems is desirable. Here, the authors first present a simple improved algorithm. Then, by generating a...
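The inclusion-exclusion step mentioned in the abstract can be illustrated for the simpler binary-state case, where the network fails exactly when every edge of some minimal cut fails; the cuts and failure probabilities below are invented, and the multi-state d-MC setting of the paper is more involved.

```python
# Unreliability from minimal cuts via inclusion-exclusion (binary-state illustration;
# the paper's d-MC setting is multi-state and the cuts/probabilities here are invented).
from itertools import combinations

q = {"a": 0.05, "b": 0.10, "c": 0.08, "d": 0.02}     # edge failure probabilities
min_cuts = [{"a", "b"}, {"c", "d"}, {"a", "d"}]      # minimal cut sets

def unreliability(cuts, q):
    total = 0.0
    for r in range(1, len(cuts) + 1):
        for subset in combinations(cuts, r):
            edges = set().union(*subset)             # event: all these edges fail
            prob = 1.0
            for e in edges:
                prob *= q[e]
            total += (-1) ** (r + 1) * prob          # inclusion-exclusion sign
    return total

print(f"network reliability = {1 - unreliability(min_cuts, q):.6f}")
```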
A new similarity measure for intensity-based image registration
, Article Proceedings of the 4th International Conference on Computer and Knowledge Engineering, ICCKE 2014 ; 18 December , 2014 , Pages 227-232 ; ISBN: 9781479954865 ; Aghajani, K ; Manzuri Shalmani, M. T ; Sharif University of Technology
Abstract
Defining a suitable similarity measure is a crucial step in (medical) image registration tasks. A common problem with frequently used intensity-based image registration algorithms is that they assume the intensities of different pixels are independent of each other, which can lead to low registration performance, especially in the presence of spatially varying intensity distortions, because they ignore the complex interactions between pixel intensities. Motivated by this problem, in this paper we present a novel similarity measure that takes into account the nonstationarity of pixel intensities and complex spatially varying intensity distortions in mono-modal settings. Experimental results on...
Comparative analysis of the boundary transfer method with other near-wall treatments based on the k-ε turbulence model
, Article European Journal of Mechanics, B/Fluids ; Vol. 44, issue , 2014 , pp. 22-31 ; ISSN: 09977546 ; Basirat Tabrizi, H ; Farhadpour, F. A ; Sharif University of Technology
Abstract
Accurate description of wall-bounded turbulent flows requires a fine grid near walls to fully resolve the boundary layers. We consider a locally simplified transport model that uses an assumed near-wall viscosity profile to project the wall boundary conditions via the boundary transfer method; the related coefficients are obtained numerically. By choosing a near-wall viscosity profile, we derive an analytic wall function, which significantly reduces the CPU cost. The performance of this wall function is compared with other near-wall treatments proposed in the literature for two frequently used benchmark cases: near-equilibrium channel flow and flow over a backward-facing step with separation and...
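The analytic wall function derived here is not given in the excerpt; for context, the classical two-layer log-law treatment that k-ε near-wall models are typically compared against looks like the sketch below (textbook constants, not the paper's formulation).

```python
# Classical two-layer log-law wall treatment used as a baseline near-wall model
# (this is the textbook wall function, not the analytic one derived in the paper).
import math

KAPPA, E = 0.4187, 9.793        # von Karman constant and log-law roughness parameter
Y_PLUS_SWITCH = 11.225          # intersection of the linear and log profiles

def u_plus(y_plus):
    """Dimensionless velocity u+ as a function of dimensionless wall distance y+."""
    if y_plus < Y_PLUS_SWITCH:
        return y_plus                                 # viscous sublayer: u+ = y+
    return math.log(E * y_plus) / KAPPA               # log-law region

for yp in (1, 5, 11.225, 30, 100, 300):
    print(f"y+ = {yp:7.3f}  ->  u+ = {u_plus(yp):6.3f}")
```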
Subsurface characterization with localized ensemble Kalman filter employing adaptive thresholding
, Article Advances in Water Resources ; Vol. 69, issue , 2014 , p. 181-196 ; Pishvaie, M. R ; Boozarjomehry, R. B ; Sharif University of Technology
Abstract
The ensemble Kalman filter (EnKF), a Monte Carlo sequential data assimilation method, has emerged as a promising tool for subsurface media characterization during the past decade. Due to the high computational cost of large ensembles, EnKF is limited to small ensemble sets in practice. This results in spurious correlations in the covariance structure, leading to incorrect updates or even divergence of the updated realizations. In this paper, a universal/adaptive thresholding method is presented to remove or mitigate the spurious-correlation problem in the forecast covariance matrix. This method is then extended to regularize the Kalman gain directly. Four different thresholding functions have been considered...
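The analysis step that the thresholding acts on is the standard EnKF update; the sketch below applies a plain hard threshold to the sample forecast covariance of a toy linear problem (the threshold value and setup are assumptions, not the paper's universal/adaptive scheme).

```python
# EnKF analysis step with a hard threshold on the forecast covariance
# (toy linear setup; the paper's adaptive/universal thresholding is not reproduced).
import numpy as np

rng = np.random.default_rng(0)
n_state, n_obs, n_ens = 20, 5, 15

H = np.eye(n_obs, n_state)                       # observe the first n_obs states
R = 0.1 * np.eye(n_obs)                          # observation-error covariance
X_f = rng.normal(size=(n_state, n_ens))          # forecast ensemble (columns = members)
d = rng.normal(size=n_obs)                       # observations

def enkf_update(X_f, d, H, R, tau=0.2):
    A = X_f - X_f.mean(axis=1, keepdims=True)             # ensemble anomalies
    P_f = A @ A.T / (X_f.shape[1] - 1)                     # sample forecast covariance
    P_f[np.abs(P_f) < tau] = 0.0                           # threshold spurious correlations
    K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)       # Kalman gain
    D = d[:, None] + rng.normal(scale=R[0, 0] ** 0.5,      # perturbed observations
                                size=(H.shape[0], X_f.shape[1]))
    return X_f + K @ (D - H @ X_f)                         # analysis ensemble

X_a = enkf_update(X_f, d, H, R)
print("analysis mean (first 5 states):", np.round(X_a.mean(axis=1)[:5], 3))
```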
Discrete optimum design of truss structures by an improved firefly algorithm
, Article Advances in Structural Engineering ; Vol. 17, issue. 10 , 2014 , p. 1517-1530 ; Makiabadi, M ; Sarcheshmehpour, M ; Sharif University of Technology
Abstract
This paper presents an improved firefly algorithm (FA) for fast optimization of truss structures with discrete variables. The enhanced accelerated firefly algorithm (AFA) is a simple but very effective modification of FA. In order to investigate the performance and robustness of the proposed algorithm, several benchmark structural optimization problems are solved and the results are compared with FA and other algorithms. The results show that in some test cases, AFA not only finds lighter structures than other algorithms but also converges faster. In the remaining test cases, the optimal solutions are found with considerably less computational effort. The study also shows that the proposed AFA...
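The baseline firefly move that the AFA builds on is well documented; a minimal continuous-variable FA on a simple test function is sketched below (the discrete truss encoding and the acceleration modification are not reproduced).

```python
# Minimal standard firefly algorithm on a continuous test function
# (the paper's accelerated, discrete-variable variant for trusses is not reproduced).
import numpy as np

rng = np.random.default_rng(3)

def sphere(x):                        # benchmark objective: minimise the sum of squares
    return np.sum(x ** 2)

def firefly(obj, dim=5, n=20, iters=200, alpha=0.2, beta0=1.0, gamma=1.0):
    X = rng.uniform(-5, 5, size=(n, dim))
    F = np.array([obj(x) for x in X])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if F[j] < F[i]:                                   # firefly j is brighter
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)            # attractiveness
                    X[i] += beta * (X[j] - X[i]) + alpha * rng.normal(size=dim)
                    F[i] = obj(X[i])
        alpha *= 0.98                                             # cool the random walk
    best = np.argmin(F)
    return X[best], F[best]

x_best, f_best = firefly(sphere)
print(f"best objective: {f_best:.4f}")
```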
Efficient and concurrent reliable realization of the secure cryptographic SHA-3 algorithm
, Article IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems ; Vol. 33, issue. 7 , July , 2014 , p. 1105-1109 ; 0278-0070 ; Mozaffari-Kermani, M ; Reyhani-Masoleh, A ; Sharif University of Technology
Abstract
The secure hash algorithm SHA-3 was selected in 2012 and will be used to provide security to any application that requires hashing, pseudo-random number generation, and integrity checking. The algorithm was selected based on various benchmarks such as security, performance, and complexity. In this paper, in order to provide reliable architectures for this algorithm, an efficient concurrent error detection scheme for the selected SHA-3 algorithm, i.e., Keccak, is proposed. To the best of our knowledge, effective countermeasures for potential reliability issues in the hardware implementations of this algorithm have not been presented to date. In proposing the error detection...
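The proposed scheme targets hardware realisations of Keccak; purely as a software analogue of the underlying recompute-and-compare idea, Python's built-in SHA-3 can be used as below (this illustrates the spirit of concurrent error detection, not the paper's low-overhead design).

```python
# Software analogue of detect-by-recomputation for SHA-3 (the paper's scheme is a
# low-overhead hardware design for Keccak; this only illustrates the general idea).
import hashlib

def sha3_with_check(data: bytes) -> bytes:
    digest = hashlib.sha3_256(data).digest()
    check = hashlib.sha3_256(data).digest()      # redundant recomputation
    if digest != check:                          # a transient fault corrupted one run
        raise RuntimeError("SHA-3 computation mismatch detected")
    return digest

print(sha3_with_check(b"benchmarking").hex())
```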
Energy-aware scheduling algorithm for precedence-constrained parallel tasks of network-intensive applications in a distributed homogeneous environment
, Article Proceedings of the 3rd International Conference on Computer and Knowledge Engineering, ICCKE 2013 ; 2013 , Pages 368-375 ; 9781479920921 (ISBN) ; Rajabi, A ; Goudarzi, M ; Sharif University of Technology
2013
Abstract
A wide range of scheduling algorithms used in data centers have traditionally concentrated on enhancing performance metrics. Recently, with the rapid growth of data centers in both size and number, power consumption has become a major challenge for both industry and society. At the software level, energy-aware task scheduling is an effective technique for power reduction in data centers. However, most of the currently proposed energy-aware scheduling approaches pay attention only to the computation cost; in other words, they ignore the energy consumed by the network equipment, namely the communication cost. In this paper, the problem of scheduling...
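The point of the excerpt is that communication energy must be counted alongside computation energy; the sketch below evaluates that combined cost for a given task-to-machine assignment on a small task graph (the graph, power figures, and assignment are invented).

```python
# Energy of a DAG schedule counting both computation and network (communication) energy
# (task graph, power figures, and the assignment are invented for illustration).
comp_time = {"A": 4, "B": 3, "C": 5, "D": 2}                 # seconds on any node (homogeneous)
edges = {("A", "B"): 10, ("A", "C"): 20, ("B", "D"): 5, ("C", "D"): 5}  # MB transferred
assignment = {"A": 0, "B": 0, "C": 1, "D": 1}                # task -> machine

P_CPU = 80.0          # W while computing (assumed)
P_NET = 20.0          # W while a remote transfer is in progress (assumed)
BANDWIDTH = 10.0      # MB/s between machines (assumed)

def schedule_energy(comp_time, edges, assignment):
    e_comp = sum(P_CPU * t for t in comp_time.values())
    e_comm = 0.0
    for (u, v), mb in edges.items():
        if assignment[u] != assignment[v]:                   # only remote transfers cost energy
            e_comm += P_NET * (mb / BANDWIDTH)
    return e_comp, e_comm

e_comp, e_comm = schedule_energy(comp_time, edges, assignment)
print(f"computation: {e_comp:.0f} J, communication: {e_comm:.0f} J, total: {e_comp + e_comm:.0f} J")
```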
Clustering and outlier detection using isoperimetric number of trees
, Article Pattern Recognition ; Volume 46, Issue 12 , December , 2013 , Pages 3371-3382 ; 00313203 (ISSN) ; Javadi, R ; Shariat Razavi, S. B ; Sharif University of Technology
2013
Abstract
We propose a graph-based data clustering algorithm based on exact clustering of a minimum spanning tree with respect to a minimum isoperimetry criterion. We show that our basic clustering algorithm runs in O(n log n) time and, with post-processing, in almost O(n log n) (average case) and O(n^2) (worst case) time, where n is the size of the data set. It is also shown that our generalized graph model, which allows the use of potentials at vertices, can be used to extract an extra piece of information related to anomalous data patterns and outliers. In this regard, we propose an algorithm that extracts outliers in parallel with data clustering. We also provide a comparative performance analysis of...
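The exact isoperimetric partitioning is the paper's contribution and is not reproduced; the sketch below only shows the common MST-based clustering skeleton (build the minimum spanning tree, cut its heaviest edges) on synthetic points.

```python
# MST-based clustering skeleton: build the minimum spanning tree and cut its heaviest
# edges (the paper's exact isoperimetric criterion is not reproduced here).
import numpy as np
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(4)
points = np.vstack([rng.normal(0, 0.3, (30, 2)),
                    rng.normal(3, 0.3, (30, 2)),
                    rng.normal((0, 3), 0.3, (30, 2))])

D = squareform(pdist(points))                       # pairwise distances
mst = minimum_spanning_tree(D).toarray()            # dense MST edge weights

k = 3
cut = mst.copy()
for _ in range(k - 1):                              # remove the k-1 heaviest MST edges
    i, j = np.unravel_index(np.argmax(cut), cut.shape)
    cut[i, j] = 0.0

n_clusters, labels = connected_components(cut, directed=False)
print("clusters found:", n_clusters, "sizes:", np.bincount(labels))
```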
FARHAD: A Fault-Tolerant Power-Aware Hybrid Adder for add intensive applications
, Article Proceedings of the International Conference on Application-Specific Systems, Architectures and Processors ; 2013 , Pages 153-159 ; 10636862 (ISSN) ; 9781479904921 (ISBN) ; Baniasadi, A ; Asadi, H ; Sharif University of Technology
2013
Abstract
This paper introduces an alternative Fault-Tolerant Power-Aware Hybrid Adder (or simply FARHAD). FARHAD is a highly power-efficient protection solution against errors in applications with a large number of additions. Similar to earlier studies, FARHAD relies on performing add operations twice to detect errors. Unlike previous studies, FARHAD uses an aggressive adder to produce the initial outcome and a low-power adder to generate the second outcome, referred to as the checker. FARHAD uses checkpointing, a feature already available in high-performance processors, to recover from errors. FARHAD achieves the high energy efficiency of time-redundant solutions and the high performance of...
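As a behavioural toy of the described flow (an aggressive adder produces the result, a low-power checker recomputes it, and a checkpoint allows re-execution on mismatch), consider the sketch below; the fault injection is artificial and the real design is a hardware adder pair.

```python
# Behavioural toy of the duplicate-and-check flow described for FARHAD
# (fault injection and recovery loop are artificial; the real design is hardware).
import random

def aggressive_add(a, b, fault_rate=0.05):
    s = a + b
    if random.random() < fault_rate:         # pretend a timing/soft error flipped a bit
        s ^= 1 << random.randrange(8)
    return s

def checker_add(a, b):                        # slower, low-power adder assumed fault-free here
    return a + b

def reliable_add(a, b):
    checkpoint = (a, b)                       # state to roll back to on error
    while True:
        result = aggressive_add(*checkpoint)
        if result == checker_add(*checkpoint):
            return result                     # outputs agree: commit
        # mismatch: discard the result and re-execute from the checkpoint

random.seed(0)
print(sum(reliable_add(i, i + 1) for i in range(1000)))   # always the fault-free total
```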