Randomized algorithms for comparison-based search
, Article Advances in Neural Information Processing Systems 24: 25th Annual Conference on Neural Information Processing Systems 2011, NIPS 2011, 12 December 2011 through 14 December 2011 ; December , 2011 ; 9781618395993 (ISBN) ; Diggavi, S ; Delgosha, P ; Mohajer, S ; Sharif University of Technology
Abstract
This paper addresses the problem of finding the nearest neighbor (or one of the R-nearest neighbors) of a query object q in a database of n objects, when we can only use a comparison oracle. The comparison oracle, given two reference objects and a query object, returns the reference object most similar to the query object. The main problem we study is how to search the database for the nearest neighbor (NN) of a query while minimizing the number of questions. The difficulty of this problem depends on properties of the underlying database. We show the importance of a characterization, the combinatorial disorder D, which defines approximate triangle inequalities on ranks. We present a lower bound of Ω(Dlog...
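As a point of reference (this is not the paper's algorithm, which exploits the disorder constant D to ask far fewer questions), nearest-neighbor search with only a comparison oracle can always be done as a linear tournament over the database; the toy oracle below is illustrative:

```python
def nearest_neighbor(query, database, oracle):
    """Naive comparison-oracle NN search via a linear tournament.

    `oracle(q, a, b)` returns whichever of a, b is more similar to q.
    Uses n - 1 oracle calls; a scheme aware of the disorder D can do
    much better, which is the point of the abstract above.
    """
    best = database[0]
    for candidate in database[1:]:
        best = oracle(query, best, candidate)
    return best

# Toy oracle on the real line: similarity = negative distance.
oracle = lambda q, a, b: a if abs(q - a) <= abs(q - b) else b
print(nearest_neighbor(5.2, [1.0, 4.0, 9.0, 5.0], oracle))  # -> 5.0
```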
Budgeted experiment design for causal structure learning
, Article 35th International Conference on Machine Learning, ICML 2018, 10 July 2018 through 15 July 2018 ; Volume 4 , 2018 , Pages 2788-2801 ; 9781510867963 (ISBN) ; Salehkaleybar, S ; Kiyavash, N ; Bareinboim, E ; Sharif University of Technology
International Machine Learning Society (IMLS)
2018
Abstract
We study the problem of causal structure learning when the experimenter is limited to performing at most k non-adaptive experiments of size 1. We formulate the problem of finding the best intervention target set as an optimization problem, which aims to maximize the average number of edges whose directions are resolved. We prove that the corresponding objective function is submodular and that a greedy algorithm suffices to achieve a (1 - 1/e)-approximation of the optimal value. We further present an accelerated variant of the greedy algorithm, which can lead to orders-of-magnitude performance speedup. We validate our proposed approach on synthetic and real graphs. The results show that compared to the...
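The greedy step for a monotone submodular objective can be sketched generically; the coverage-style objective below is a hypothetical stand-in for the paper's count of edges whose directions are resolved:

```python
def greedy_max(candidates, k, f):
    """Greedy maximization of a monotone submodular set function f.
    Picks k elements, each with the largest marginal gain; this is the
    classic (1 - 1/e)-approximation scheme referenced in the abstract."""
    S = set()
    for _ in range(k):
        best = max((c for c in candidates if c not in S),
                   key=lambda c: f(S | {c}) - f(S), default=None)
        if best is None:
            break
        S.add(best)
    return S

# Stand-in objective: edges incident to a chosen node count as "resolved".
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
resolved = lambda S: sum(1 for u, v in edges if u in S or v in S)
print(sorted(greedy_max(range(4), 2, resolved)))  # -> [0, 2]
```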
Thresholded smoothed-ℓ0 (SL0) dictionary learning for sparse representations
, Article 2009 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2009, Taipei, 19 April 2009 through 24 April 2009 ; 2009 , Pages 1825-1828 ; 15206149 (ISSN); 9781424423545 (ISBN) ; Babaie Zadeh, M ; Institute of Electrical and Electronics Engineers; Signal Processing Society ; Sharif University of Technology
2009
Abstract
In this paper, we suggest using a modified version of the Smoothed-ℓ0 (SL0) algorithm in the sparse representation step of iterative dictionary learning algorithms. In addition, we use a steepest-descent update for a non-unit-column-norm dictionary instead of a unit-column-norm dictionary. Moreover, to make the dictionary learning task more blind, we estimate the average number of active atoms in the sparse representation of the training signals, whereas previous algorithms assumed it is known in advance. Our simulation results show the advantages of our method over K-SVD in terms of complexity and performance. ©2009 IEEE
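The estimate of the average number of active atoms can be illustrated by hard-thresholding a sparse-coefficient matrix; the helper name and threshold below are illustrative, not the paper's exact procedure:

```python
import numpy as np

def avg_active_atoms(coeffs, threshold):
    """Estimate the average number of active atoms per training signal
    by hard-thresholding the coefficient matrix (atoms x signals).
    Illustrative sketch of the 'blind' estimate mentioned above."""
    active = np.abs(coeffs) > threshold
    return active.sum(axis=0).mean()

C = np.array([[0.9, 0.0, 0.05],
              [0.0, 1.2, 0.8],
              [0.02, 0.0, 0.5]])
print(avg_active_atoms(C, 0.1))  # mean of [1, 1, 2] = 4/3
```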
Centrality-based group formation in group recommender systems
, Article 26th International World Wide Web Conference, WWW 2017 Companion, 3 April 2017 through 7 April 2017 ; 2019 , Pages 1187-1196 ; 9781450349147 (ISBN) ; Khalili, S ; Elahe Ghalebi, K ; Grosu, R ; Mojde Morshedi, S ; Movaghar, A ; Sharif University of Technology
International World Wide Web Conferences Steering Committee
2019
Abstract
Recommender systems have become an attractive field within the recent decade because they facilitate users' selection process under limited time. Conventional recommender systems have proposed numerous methods focusing on recommendations to individual users. Recently, due to a significant increase in the number of users, studies in this field have shifted to properly identifying groups of people with similar preferences and providing a list of recommendations to each group. Offering a recommendation list to each individual incurs significant computational cost and is therefore often inefficient. So far, most of the studies impose four restrictive assumptions: (1) limited number of...
Lazy instruction scheduling: Keeping performance, reducing power
, Article ISLPED'08: 13th ACM/IEEE International Symposium on Low Power Electronics and Design, Bangalore, 11 August 2008 through 13 August 2008 ; 2008 , Pages 375-380 ; 15334678 (ISSN); 9781605581095 (ISBN) ; Taghizadeh, M ; Jahangir, A. H ; Sharif University of Technology
2008
Abstract
An important approach to reducing power dissipation is reducing the number of instructions executed by the processor. To achieve this goal, this paper introduces a novel instruction scheduling algorithm that executes an instruction only when its result is required by another instruction. In this manner, it not only avoids executing useless instructions but also reduces the number of instructions executed after a mispredicted branch. The cost of the extra hardware is 161 bytes for a 128-entry instruction window. Measurements using the SPEC CPU 2000 benchmarks show that the average number of executed instructions is reduced by 13.5% while the average IPC is not affected. Copyright 2008 ACM
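The "execute only when the result is required" idea is, at heart, demand-driven evaluation. A software sketch (the hardware mechanism in the paper is of course quite different; names below are illustrative):

```python
def lazy_execute(instrs, needed):
    """Demand-driven sketch of lazy scheduling: an instruction runs only
    when its result is (transitively) required by a needed result.
    instrs: dest -> (op, source_names); needed: results actually used."""
    values, executed = {}, []

    def eval_reg(r):
        if r in values:            # already computed: reuse the result
            return values[r]
        op, srcs = instrs[r]
        values[r] = op(*(eval_reg(s) for s in srcs))
        executed.append(r)
        return values[r]

    for r in needed:
        eval_reg(r)
    return values, executed

instrs = {
    "a": (lambda: 2, ()),
    "b": (lambda: 3, ()),
    "c": (lambda x, y: x + y, ("a", "b")),
    "d": (lambda x: x * 10, ("a",)),   # dead if only "c" is needed
}
values, executed = lazy_execute(instrs, ["c"])
print(values["c"], executed)  # -> 5 ['a', 'b', 'c']; "d" never runs
```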
Estimating the mixing matrix in underdetermined Sparse Component Analysis (SCA) using consecutive independent component analysis (ICA)
, Article 16th European Signal Processing Conference, EUSIPCO 2008, Lausanne, 25 August 2008 through 29 August 2008 ; 2008 ; 22195491 (ISSN) ; Pad, P ; Babaie Zadeh, M ; Jutten, C ; Sharif University of Technology
2008
Abstract
One of the major problems in underdetermined Sparse Component Analysis (SCA) is the appropriate estimation of the mixing matrix, A, in the linear model x(t) = As(t), especially where more than one source is active at each instant of time (this is called the 'multiple dominant problem'). Most previous algorithms were restricted to the single dominant problem, in which it is assumed that at each instant there is at most one dominant component. Moreover, because of their high computational load, all present methods for the multiple dominant problem are practical only for small-scale cases (by 'small scale' we mean that the average number of active sources at each instant, k, is less than 5). In this...
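A minimal sketch of the underdetermined model x(t) = As(t), with on average k active sources per instant (dimensions and the Bernoulli activity pattern below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_mixtures(m, n, T, k):
    """Generate the underdetermined SCA model x(t) = A s(t):
    n sources, m < n mixtures, and on average k active sources
    per instant (each source is active with probability k / n)."""
    A = rng.standard_normal((m, n))                       # mixing matrix
    S = rng.standard_normal((n, T)) * (rng.random((n, T)) < k / n)
    return A, S, A @ S                                    # X = A S

A, S, X = sparse_mixtures(m=4, n=8, T=1000, k=2)
print(X.shape)  # -> (4, 1000)
```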
Fuzzy classification by multi-layer averaging: An application in speech recognition
, Article 3rd International Conference on Informatics in Control, Automation and Robotics, ICINCO 2006, Setubal, 1 August 2006 through 5 August 2006 ; Volume SPSMC , 2006 , Pages 122-126 ; 9728865619 (ISBN); 9789728865610 (ISBN) ; Shouraki, S. B ; Halavati, R ; Sharif University of Technology
2006
Abstract
This paper introduces a simple, fast, space-efficient linear method for a general pattern recognition problem. The presented algorithm can find the closest match for a given sample among a number of samples that have already been introduced to the system. The use of averaging and fuzzy numbers in this method suggests that it may be a noise-resistant recognition process. As a test bed, a problem of recognizing spoken words is posed to this algorithm. The test data contain clean and noisy samples, and the results have been compared to those of a widely used speech recognition method, HMM.
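A loose stand-in for the averaging idea (not the paper's multi-layer fuzzy scheme): keep one averaged template per class and classify a sample by its closest template. All names and data below are illustrative:

```python
import numpy as np

def fit_templates(samples, labels):
    """One averaged template per class; a simple averaging classifier
    in the spirit of (but far simpler than) the method above."""
    return {c: np.mean([s for s, l in zip(samples, labels) if l == c], axis=0)
            for c in set(labels)}

def classify(x, templates):
    """Return the class whose averaged template is closest to x."""
    return min(templates, key=lambda c: np.linalg.norm(x - templates[c]))

T = fit_templates([np.array([0., 0.]), np.array([2., 0.]),
                   np.array([10., 10.]), np.array([12., 10.])],
                  ["low", "low", "high", "high"])
print(classify(np.array([1., 1.]), T))  # -> low
```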
A new approach for sparse decomposition and sparse source separation
, Article 14th European Signal Processing Conference, EUSIPCO 2006, Florence, 4 September 2006 through 8 September 2006 ; 2006 ; 22195491 (ISSN) ; Babaie Zadeh, M ; Jutten, C ; Sharif University of Technology
2006
Abstract
We introduce a new approach for sparse decomposition, based on a geometrical interpretation of sparsity. By sparse decomposition we mean finding sufficiently sparse solutions of underdetermined linear systems of equations. This will be discussed in the context of Blind Source Separation (BSS). Our problem is then underdetermined BSS, where there are fewer mixtures than sources. The proposed algorithm is based on minimizing a family of quadratic forms, each measuring the distance of the solution set of the system to one of the coordinate subspaces (i.e., coordinate axes, planes, etc.). The performance of the method is then compared to the minimal 1-norm solution, obtained using the linear...
Robust register caching: An energy-efficient circuit-level technique to combat soft errors in embedded processors
, Article IEEE Transactions on Device and Materials Reliability ; Volume 10, Issue 2 , February , 2010 , Pages 208-221 ; 15304388 (ISSN) ; Namazi, A ; Miremadi, S. G ; Sharif University of Technology
2010
Abstract
This paper presents a cost-efficient technique to jointly use circuit- and architecture-level techniques to protect an embedded processor's register file against soft errors. The basic idea behind the proposed technique is robust register caching (RRC), which creates a cache of the most vulnerable registers within the register file in a small and highly robust cache memory built from circuit-level single-event-upset-protected memory cells. To guarantee that the most vulnerable registers are always stored in the robust register cache, the average number of read operations during a register's lifetime is used as a metric to guide the cache replacement policy. A register is vulnerable to soft...
An energy efficient circuit level technique to protect register file from MBUs and SETs in embedded processors
, Article Proceedings of the International Conference on Dependable Systems and Networks, 29 June 2009 through 2 July 2009, Lisbon ; 2009 , Pages 195-204 ; 9781424444212 (ISBN) ; Namazi, A ; Miremadi, S.G ; Sharif University of Technology
2009
Abstract
This paper presents a circuit-level soft-error-tolerant technique, called RRC (Robust Register Caching), for the register file of embedded processors. The basic idea behind RRC is to effectively cache the most vulnerable registers in a small, highly robust register cache built from circuit-level SEU- and SET-protected memory cells. To decide which cache entry should be replaced, the average number of read operations during a register's ACE time is used as the criterion. In fact, the victim cache entry is the one with the maximum read count. To minimize the power overhead of the RRC, the clock gating technique is efficiently exploited for the main register file, resulting in...
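The replacement policy as stated reduces to a one-line rule; the dictionary of hypothetical register names and read counts below is purely illustrative:

```python
def choose_victim(cache):
    """RRC replacement sketch: evict the cached register with the
    maximum read count, per the policy described in the abstract.
    cache: register name -> reads observed during its ACE time."""
    return max(cache, key=cache.get)

cache = {"r1": 3, "r7": 9, "r12": 1}
print(choose_victim(cache))  # -> r7
```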