Search for: learning
0.009 seconds
Total 1673 records

    Erratum: LiFi grid: A machine learning approach to user-centric design (Applied Optics (2020) 59 (8895-8901) DOI: 10.1364/AO.396804)

    , Article Applied Optics ; Volume 59, Issue 31 , 2020 , Pages 9755- Pashazanoosi, M ; Nezamalhosseini, S. A ; Salehi, J. A ; Sharif University of Technology
    OSA - The Optical Society  2020
    Abstract
    This publisher’s note amends the author listing and affiliation section in Appl. Opt. 59, 8895 (2020). © 2020 Optical Society of America  

    Cloud computing based technologies, applications and structure in U-learning

    , Article Proceedings 2012 17th IEEE International Conference on Wireless, Mobile and Ubiquitous Technology in Education, WMUTE 2012, 27 March 2012 through 30 March 2012 ; March , 2012 , Pages 196-198 ; 9780769546629 (ISBN) Ghazizadeh, A ; Manouchehry, M ; Sharif University of Technology
    2012
    Abstract
    This article focuses on the characteristics, technologies, and applications of cloud computing in mobile and electronic learning, and analyzes the features of this concept. We first clarify the meaning of cloud computing and its features, and then propose different models for using cloud computing in various learning environments, including web-based learning, mobile video learning, and observational learning  

    An attribute learning method for zero-shot recognition

    , Article 2017 25th Iranian Conference on Electrical Engineering, ICEE 2017, 2 May 2017 through 4 May 2017 ; 2017 , Pages 2235-2240 ; 9781509059638 (ISBN) Yazdanian, R ; Shojaee, S. M ; Soleymani Baghshah, M ; Sharif University of Technology
    Abstract
    Recently, the problem of integrating side information about classes has emerged in learning settings such as zero-shot learning. Although using multiple sources of information about the input space has been investigated in the last decade, and many multi-view and multi-modal learning methods have already been introduced, attribute learning for classes (the output space) is a newer problem that has received attention only in the last few years. In this paper, we propose an attribute learning method that can use different sources of class descriptions to find new attributes that are better suited to serve as class signatures. Experimental results show that the learned attributes by the proposed... 

    An iterative stochastic algorithm based on distributed learning automata for finding the stochastic shortest path in stochastic graphs

    , Article Journal of Supercomputing ; 2019 ; 09208542 (ISSN) Beigy, H ; Meybodi, M. R ; Sharif University of Technology
    Springer  2019
    Abstract
    In this paper, we study the problem of finding the shortest path in stochastic graphs and propose an iterative algorithm for solving it. The algorithm is based on distributed learning automata (DLA), and its objective is to use a DLA to find the path from a given source node to a given destination node whose weight is minimal in the expected sense. At each stage of the algorithm, the DLA specifies which edges need to be sampled. We show that the algorithm finds the shortest path with minimum expected weight in stochastic graphs with a probability that can be made arbitrarily close to unity. We compare the given algorithm with some distributed learning automata-based... 
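
    A rough illustrative sketch of this kind of learning-automata-based path sampling (not the authors' exact DLA algorithm; the toy graph, edge-weight distributions, and learning rate below are assumptions):

    # At each node a learning automaton keeps a probability vector over outgoing
    # edges, samples a path, observes its stochastic length, and reinforces the
    # sampled edges with a linear reward-inaction (L_R-I) rule when the path
    # beats the best running average seen so far.
    import random

    graph = {"s": ["a", "b"], "a": ["t"], "b": ["t"], "t": []}          # node -> successors
    edge_mean = {("s", "a"): 2.0, ("s", "b"): 1.0, ("a", "t"): 1.0, ("b", "t"): 3.0}
    prob = {u: [1.0 / len(vs)] * len(vs) for u, vs in graph.items() if vs}
    alpha, best_avg, lengths = 0.05, float("inf"), []

    def sample_weight(u, v):
        return max(0.0, random.gauss(edge_mean[(u, v)], 0.5))          # stochastic edge weight

    for _ in range(5000):
        node, path, length = "s", [], 0.0
        while node != "t":
            i = random.choices(range(len(graph[node])), weights=prob[node])[0]
            nxt = graph[node][i]
            length += sample_weight(node, nxt)
            path.append((node, i))
            node = nxt
        lengths.append(length)
        best_avg = min(best_avg, sum(lengths) / len(lengths))
        if length <= best_avg:                  # reward: shift probability toward the sampled edges
            for u, i in path:
                p = prob[u]
                p[:] = [(1 - alpha) * x for x in p]
                p[i] += alpha
    print({u: [round(x, 2) for x in p] for u, p in prob.items()})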

    Open synchronous cellular learning automata

    , Article Advances in Complex Systems ; Volume 10, Issue 4 , 2007 , Pages 527-556 ; 02195259 (ISSN) Beigy, H ; Meybodi, M. R ; Sharif University of Technology
    World Scientific Publishing Co. Pte Ltd  2007
    Abstract
    Cellular learning automata is a combination of learning automata and cellular automata. This model is superior to cellular automata because of its ability to learn, and is also superior to a single learning automaton because it is a collection of learning automata that can interact with each other. Some applications, such as image processing, call for a type of cellular learning automata in which the action of each cell at the next stage of its evolution depends not only on the local environment (the actions of its neighbors) but also on external environments. We call such cellular learning automata open cellular learning automata. In this paper, we introduce open cellular learning... 
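
    A rough sketch of the general idea of a cellular learning automaton driven by both a local and an external environment (not the paper's formal model; the binary actions, neighborhood rule, and external signal below are assumptions):

    # Each cell on a ring holds a learning automaton over two actions; a cell is
    # rewarded (L_R-I update) only when its action agrees with both its
    # neighbours' actions (local environment) and an external signal.
    import random

    N, alpha, steps = 10, 0.1, 200
    probs = [[0.5, 0.5] for _ in range(N)]          # per-cell action probabilities

    def external_signal(cell):
        return 1                                    # assumed external environment favours action 1

    for _ in range(steps):
        actions = [random.choices([0, 1], weights=p)[0] for p in probs]
        for i, p in enumerate(probs):
            left, right = actions[(i - 1) % N], actions[(i + 1) % N]
            local_ok = actions[i] == (1 if left + right >= 1 else 0)   # assumed local rule
            external_ok = actions[i] == external_signal(i)
            if local_ok and external_ok:            # reward: linear reward-inaction update
                a = actions[i]
                p[a] += alpha * (1 - p[a])
                p[1 - a] -= alpha * p[1 - a]
    print([round(p[1], 2) for p in probs])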

    A mathematical framework for cellular learning automata

    , Article Advances in Complex Systems ; Volume 7, Issue 3-4 , 2004 , Pages 295-319 ; 02195259 (ISSN) Beigy, H ; Meybodi, M. R ; Sharif University of Technology
    2004
    Abstract
    Cellular learning automata, which is a combination of cellular automata and learning automata, is a recently introduced model. This model is superior to cellular automata because of its ability to learn, and is also superior to a single learning automaton because it is a collection of learning automata that can interact with each other. The basic idea of cellular learning automata, which is a subclass of stochastic cellular learning automata, is to use the learning automata to adjust the state transition probabilities of stochastic cellular automata. In this paper, we first provide a mathematical framework for cellular learning automata and then study its convergence behavior. It is... 

    A Study of Frameworks Leading to Organizational Learning in High Schools of Mofid Educational Center in the last Decade

    , M.Sc. Thesis Sharif University of Technology Afsah Noodeh, Tooraj (Author) ; Mashayekhi, Alinaghi (Supervisor)
    Abstract
    The purpose of this research is to determine and measure the mechanisms that increase the pace, scope, and intensity of organizational learning at schools. Four levels are defined for the learning concept at a high school: beginner, struggling, developed, and learning. As its learning capacity increases, the school moves up to the next level. The features of organizational learning differ at each of these levels. At the beginner level, learning takes place little by little and at a very low pace, and its effect is observable as professional transformation of the organization's members. At higher levels, it occurs at a higher rate and enjoys a wider scope and... 

    Modular framework kinematic and fuzzy reward reinforcement learning analysis of a radially symmetric six-legged robot

    , Article Life Science Journal ; Volume 10, Issue SUPPL 8 , 2013 , Pages 120-129 ; 10978135 (ISSN) Shahriari, M ; Osguie, K. G ; Khayyat, A. A. A ; Sharif University of Technology
    2013
    Abstract
    Hexapod robots give us the ability to study walking robots without facing problems such as stability in many respects. A hexapod has a great deal of flexibility in movement even if one of its legs malfunctions. Radially symmetric (hexagonal) hexapods have more flexibility in movement than rectangular leg arrangements. Because of their symmetry, they can move in any direction in a time-efficient way. The inverse kinematics problem of this kind of hexapod is solved through a modular mobile view, considering six degrees of freedom for the trunk. Then typical tripod and wave gaits are analyzed and simulated through the presented formulation. In a Reinforcement Learning algorithm for walking it is important how to make... 

    Expertise finding in bibliographic network: Topic dominance learning approach

    , Article IEEE Transactions on Cybernetics ; Vol. 44, issue. 12 , 2014 , pp. 2646-2657 ; ISSN: 21682267 Neshati, M ; Hashemi, S. H ; Beigy, H ; Sharif University of Technology
    Abstract
    The expert finding problem in bibliographic networks has received increased interest in recent years. This problem concerns finding relevant researchers for a given topic. Motivated by the observation that rarely do all coauthors contribute to a paper equally, in this paper we propose two discriminative methods for identifying the leading authors contributing to a scientific publication. Specifically, we cast the problem of expert finding in a bibliographic network as finding the leading experts in a research group, which is easier to solve. We identify three feature groups that can discriminate relevant experts from other authors of a document. Experimental results on a real dataset, and a synthetic one... 

    Expertness framework in multi-agent systems and its application in credit assignment problem

    , Article Intelligent Data Analysis ; Vol. 18, issue. 3 , 2014 , p. 511-528 Rahaie, Z ; Beigy, H ; Sharif University of Technology
    Abstract
    One of the challenging problems in artificial intelligence is credit assignment, which simply means distributing credit among a group, such as a group of agents. We attempt to address this problem with the aid of the reinforcement learning paradigm. In this paper, an expertness framework is defined and applied to the multi-agent credit assignment problem. In the expertness framework, the critic agent, which is responsible for distributing credit among agents, is equipped with learning capability; the proposed credit assignment solution is based on the critic learning to assign a proportion of the credit to each agent, and this proportion should be learned by reinforcement... 

    Asynchronous cellular learning automata

    , Article Automatica ; Volume 44, Issue 5 , 2008 , Pages 1350-1357 ; 00051098 (ISSN) Beigy, H ; Meybodi, M. R ; Sharif University of Technology
    2008
    Abstract
    Cellular learning automata is a combination of cellular automata and learning automata. The synchronous version of cellular learning automata, in which all learning automata in different cells are activated synchronously, has found many applications. Some applications require a type of cellular learning automata in which the learning automata in different cells are activated asynchronously (asynchronous cellular learning automata). In this paper, we introduce asynchronous cellular learning automata and study its steady-state behavior. An application of this new model to cellular networks is then presented. © 2008  

    Technological learning in large firms: mechanism and processes

    , Article Interactive Learning Environments ; 2021 ; 10494820 (ISSN) Ghazinoory, S ; Mohajeri, A ; Kiamehr, M ; Danaeefard, H ; Sharif University of Technology
    Routledge  2021
    Abstract
    The prerequisite of developing countries’ economic growth is to move along the technological development trajectory through technological learning, and large firms, as hubs of technological knowledge, play an important role in this transition. In this paper, we try to bridge two main taxonomies in the field of technological development, one referring to taxonomies of firms and the other to technological learning processes. We identify technological learning processes in several post-catch-up large firms through content analysis and then, by employing a survey approach, explore the technological learning processes of Iranian large firms. The results indicate that... 

    Robust attitude control of an agile aircraft using improved Q-Learning

    , Article Actuators ; Volume 11, Issue 12 , 2022 ; 20760825 (ISSN) Zahmatkesh, M ; Emami, S. A ; Banazadeh, A ; Castaldi, P ; Sharif University of Technology
    MDPI  2022
    Abstract
    Attitude control of a novel regional truss-braced wing (TBW) aircraft with low stability characteristics is addressed in this paper using Reinforcement Learning (RL). In recent years, RL has been increasingly employed in challenging applications, particularly autonomous flight control. However, a significant predicament confronting discrete RL algorithms is the dimension limitation of the state-action table and the difficulty of defining the elements of the RL environment. To address these issues, in this paper a detailed mathematical model of the mentioned aircraft is first developed to shape an RL environment. Subsequently, Q-learning, the most prevalent discrete RL algorithm, will be... 
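
    A minimal sketch of the tabular Q-learning update the abstract refers to (the aircraft model, the state/action discretization, and the paper's specific improvements are not reproduced; the action set below is an illustrative assumption):

    # Generic epsilon-greedy tabular Q-learning: the agent picks an action,
    # observes a reward and next state, and updates the state-action table.
    import random
    from collections import defaultdict

    Q = defaultdict(float)                      # (state, action) -> value
    alpha, gamma, eps = 0.1, 0.99, 0.1
    actions = [-1, 0, 1]                        # assumed discrete control actions

    def choose(state):
        if random.random() < eps:               # explore
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])   # exploit

    def update(state, action, reward, next_state):
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])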

    A new method for discovering subgoals and constructing options in reinforcement learning

    , Article Proceedings of the 5th Indian International Conference on Artificial Intelligence, IICAI 2011 ; 2011 , Pages 441-450 ; 9780972741286 (ISBN) Davoodabadi, M ; Beigy, H ; SIT; Saint Mary's University; EKLaT Research; Infobright ; Sharif University of Technology
    Abstract
    In this paper, the problem of automatically discovering subtasks and hierarchies in reinforcement learning is considered. We present a novel method that allows an agent to autonomously discover subgoals and create a hierarchy from actions. Our method identifies subgoals by partitioning local state transition graphs. Options constructed for reaching these subgoals are added to the action choices and used to accelerate the Q-Learning algorithm. Experimental results show significant performance improvements, especially in the initial learning phase  
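
    A minimal sketch of how subgoals found from a state-transition graph can be wrapped as options and added to the action choices (articulation points are used here as an illustrative stand-in for the paper's partitioning criterion; networkx and the toy transitions are assumptions):

    # Build a local state-transition graph from recent experience, take its
    # cut (articulation) states as candidate subgoals, and wrap a policy that
    # reaches such a subgoal as an option alongside the primitive actions.
    import networkx as nx

    transitions = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (2, 6), (6, 3)]
    G = nx.Graph(transitions)                       # local state-transition graph
    subgoals = list(nx.articulation_points(G))      # states that separate regions of the graph

    class Option:
        def __init__(self, subgoal, policy):
            self.subgoal = subgoal                  # termination: reaching the subgoal
            self.policy = policy                    # policy toward the subgoal (learned separately)

        def terminates(self, state):
            return state == self.subgoal

    primitive_actions = ["left", "right", "up", "down"]
    options = [Option(g, policy=None) for g in subgoals]
    action_choices = primitive_actions + options    # options extend the Q-learning action set
    print("candidate subgoal states:", subgoals)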

    Evaluation and optimization of distributed machine learning techniques for internet of things

    , Article IEEE Transactions on Computers ; 2021 ; 00189340 (ISSN) Gao, Y ; Kim, M ; Thapa, C ; Abuadbba, S ; Zhang, Z ; Camtepe, S ; Kim, H ; Nepal, S ; Sharif University of Technology
    IEEE Computer Society  2021
    Abstract
    Federated learning (FL) and split learning (SL) are state-of-the-art distributed machine learning techniques that enable machine learning without accessing raw data on clients or end devices. However, their comparative training performance under real-world, resource-restricted Internet of Things (IoT) device settings, e.g., Raspberry Pi, remains barely studied and, to our knowledge, has not yet been evaluated and compared, leaving practitioners without a convenient reference. This work first provides empirical comparisons of FL and SL in real-world IoT settings regarding learning performance and on-device execution overhead. Our analyses demonstrate that the learning performance of SL is... 
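
    A minimal sketch of the federated-learning side of this comparison, i.e. FedAvg-style averaging of client models at a server (the split-learning setup, the IoT benchmarks, and the models used in the paper are not reproduced; the shapes, client count, and stubbed local step are assumptions):

    # Each client updates a copy of the global weights on its own private data;
    # the server averages the returned client models to form the next global model.
    import numpy as np

    def local_update(weights, data, lr=0.01):
        grad = np.random.randn(*weights.shape) * 0.01     # stand-in for a real local gradient
        return weights - lr * grad

    def fedavg_round(global_weights, client_datasets):
        client_weights = [local_update(global_weights.copy(), d) for d in client_datasets]
        return np.mean(client_weights, axis=0)            # server-side model averaging

    global_w = np.zeros(10)
    clients = [None] * 5                                  # five clients with private data (stubbed)
    for _ in range(3):                                    # three communication rounds
        global_w = fedavg_round(global_w, clients)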

    Active learning of causal structures with deep reinforcement learning

    , Article Neural Networks ; Volume 154 , 2022 , Pages 22-30 ; 08936080 (ISSN) Amirinezhad, A ; Salehkaleybar, S ; Hashemi, M ; Sharif University of Technology
    Elsevier Ltd  2022
    Abstract
    We study the problem of experiment design to learn causal structures from interventional data. We consider an active learning setting in which the experimenter decides to intervene on one of the variables in the system in each step and uses the results of the intervention to recover further causal relationships among the variables. The goal is to fully identify the causal structures with minimum number of interventions. We present the first deep reinforcement learning based solution for the problem of experiment design. In the proposed method, we embed input graphs to vectors using a graph neural network and feed them to another neural network which outputs a variable for performing... 

    Active learning from positive and unlabeled data

    , Article Proceedings - IEEE International Conference on Data Mining, ICDM, 11 December 2011 through 11 December 2011 ; December , 2011 , Pages 244-250 ; 15504786 (ISSN) ; 9780769544090 (ISBN) Ghasemi, A ; Rabiee, H. R ; Fadaee, M ; Manzuri, M. T ; Rohban, M. H ; Sharif University of Technology
    2011
    Abstract
    In recent years, active learning has evolved into a popular paradigm for utilizing user feedback to improve the accuracy of learning algorithms. Active learning works by selecting the most informative sample among unlabeled data and querying the label of that point from the user. Many different methods, such as uncertainty sampling and minimum risk sampling, have been utilized to select the most informative sample in active learning. Although many active learning algorithms have been proposed so far, most of them work with binary or multi-class classification problems and therefore cannot be applied to problems in which only samples from one class as well as a set of unlabeled data are... 
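
    A minimal sketch of the generic uncertainty-sampling loop the abstract describes (not the paper's positive/unlabeled method; scikit-learn and the synthetic data are assumptions):

    # Train on the labeled pool, pick the unlabeled sample the model is least
    # sure about, ask the user for its label, and repeat.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
    unlabeled = [i for i in range(len(X)) if i not in labeled]

    for _ in range(20):                              # query 20 labels from the user
        clf = LogisticRegression().fit(X[labeled], y[labeled])
        margin = np.abs(clf.predict_proba(X[unlabeled])[:, 1] - 0.5)
        pick = unlabeled[int(np.argmin(margin))]     # most uncertain unlabeled sample
        labeled.append(pick)                         # the user supplies its label
        unlabeled.remove(pick)
    print("accuracy on all data:", clf.score(X, y))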

    A cooperative learning method based on cellular learning automata and its application in optimization problems

    , Article Journal of Computational Science ; Volume 11 , November , 2015 , Pages 279–288 ; 18777503 (ISSN) Mozafari, M ; Shiri, M. E ; Beigy, H ; Sharif University of Technology
    Elsevier  2015
    Abstract
    In this paper, a novel reinforcement learning method inspired by the way humans learn from others is presented. This method is developed based on cellular learning automata, featuring a modular design and cooperation techniques. The modular design brings flexibility, reusability, and applicability to a wide range of problems. This paper focuses on analyzing the sensitivity of the method's parameters and its applicability to optimization problems. Results of the experiments show that the new method outperforms similar ones because it employs a knowledge-sharing technique, reasonable exploration logic, and learning rules based on the action trajectory  

    Towards a bounded-rationality model of multi-agent social learning in games

    , Article 2010 10th International Conference on Intelligent Systems Design and Applications, ISDA'10, Cairo, 29 November 2010 through 1 December 2010 ; 2010 , Pages 142-148 ; 9781424481354 (ISBN) Hemmati, M ; Sadati, N ; Nili, M ; Sharif University of Technology
    2010
    Abstract
    This paper deals with the problem of multi-agent learning in a population of players engaged in a repeated normal-form game. Assuming boundedly rational agents, we propose a model of social learning based on trial and error, called "social reinforcement learning". This extension of the well-known Q-learning algorithm allows players within a population to communicate and share their experiences with each other. To illustrate the effectiveness of the proposed learning algorithm, a number of simulations on the benchmark game "Battle of the Sexes" have been carried out. Results show that adding communication to the classical form of Q-learning significantly improves convergence speed towards...