Search for: world-wide-web
Total 48 records

    Trust inference in web-based social networks using resistive networks

    , Article Proceedings- 3rd International Conference on Internet and Web Applications and Services, ICIW 2008, Athens, 8 June 2008 through 13 June 2008 ; 2008 , Pages 233-238 ; 9780769531632 (ISBN) Taherian, M ; Amini, M ; Jalili, R ; Sharif University of Technology
    2008
    Abstract
    With the immense growth of Web-Based Social Networks (WBSNs), the role of trust in connecting people through WBSNs is becoming more important than ever. In other words, since the probability of malicious behavior in WBSNs is increasing, it is necessary to evaluate the reliability of a person before communicating with him or her. Hence, it is desirable to find out how much one person should trust another in a network. The approach to answering this question is usually called trust inference. In this paper, we propose a new trust inference algorithm (called RN-Trust) based on the resistive networks concept. The algorithm, in addition to being simple, resolves some problems of previously... 
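    As a rough illustration of the resistive-network idea, one can map each trust value onto a resistance, add resistances along a chain of recommendations, and combine independent paths like parallel resistors. The -log mapping and the combination rules below are our own illustrative assumptions, not necessarily the exact formulas used by RN-Trust:

```python
import math

def trust_to_resistance(t):
    # Map trust t in (0, 1] to a resistance; higher trust => lower resistance.
    # The -log mapping is an illustrative assumption, not the paper's formula.
    return -math.log(t)

def series(resistances):
    # Resistances along a single trust path add up.
    return sum(resistances)

def parallel(path_resistances):
    # Independent paths combine like parallel resistors.
    return 1.0 / sum(1.0 / r for r in path_resistances)

def inferred_trust(path_trusts):
    # path_trusts: list of paths, each a list of edge trust values.
    paths = [series([trust_to_resistance(t) for t in p]) for p in path_trusts]
    return math.exp(-parallel(paths))

# Two paths from Alice to Carol: a two-hop chain and a direct link.
print(round(inferred_trust([[0.9, 0.8], [0.7]]), 3))  # → 0.843
```

    Note that the parallel combination yields a trust above either path alone (0.72 and 0.7), matching the intuition that corroborating paths increase confidence.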

    Highlighting CAPTCHA

    , Article 2008 Conference on Human System Interaction, HSI 2008, Krakow, 25 May 2008 through 27 May 2008 ; 2008 , Pages 247-250 ; 1424415438 (ISBN); 9781424415434 (ISBN) Shirali Shahreza, M ; Sharif University of Technology
    2008
    Abstract
    There are many sites specially designed for mobile phones. In cases such as registering on websites, some hackers write programs to make automatic false enrolments which waste the resources of the website. Therefore, it is necessary to distinguish between human users and computer programs. Such systems are known as CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart). CAPTCHA methods are mainly based on the weak points of OCR (Optical Character Recognition) systems, but using them is difficult on devices such as PDAs (Personal Digital Assistants) or mobile phones that lack a full keyboard. Therefore, Non-OCR-Based CAPTCHA methods have been proposed which do not need... 

    Web page clustering using harmony search optimization

    , Article IEEE Canadian Conference on Electrical and Computer Engineering, CCECE 2008, Niagara Falls, ON, 4 May 2008 through 7 May 2008 ; 2008 , Pages 1601-1604 ; 08407789 (ISSN) ; 9781424416431 (ISBN) Forsati, R ; Mahdavi, M ; Kangavari, M ; Safarkhani, B ; Sharif University of Technology
    2008
    Abstract
    Clustering has become an increasingly important task in modern application domains. Targeting useful and relevant information on the World Wide Web is a topical and highly complicated research area. Clustering techniques have been applied to categorize documents on the web and to extract knowledge from it. In this paper we propose novel clustering algorithms based on the Harmony Search (HS) optimization method for web document clustering. By modeling clustering as an optimization problem, we first propose a pure HS-based clustering algorithm that finds near-globally-optimal clusters within a reasonable time. Then we hybridize K-means and harmony clustering to achieve better... 
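    To illustrate how clustering can be cast as a Harmony Search problem, the sketch below evolves cluster-assignment vectors for one-dimensional points, minimizing within-cluster squared error. The encoding and the parameter values (harmony memory size, HMCR, PAR) are illustrative assumptions, not the paper's exact setup:

```python
import random

def sse(points, assign, k):
    # Within-cluster sum of squared errors for an assignment vector.
    total = 0.0
    for c in range(k):
        members = [p for p, a in zip(points, assign) if a == c]
        if members:
            mu = sum(members) / len(members)
            total += sum((p - mu) ** 2 for p in members)
    return total

def hs_cluster(points, k, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=0):
    rng = random.Random(seed)
    n = len(points)
    # Harmony memory: candidate assignment vectors with their SSE scores.
    memory = [[rng.randrange(k) for _ in range(n)] for _ in range(hms)]
    scores = [sse(points, h, k) for h in memory]
    for _ in range(iters):
        new = []
        for i in range(n):
            if rng.random() < hmcr:
                # Memory consideration: reuse a stored value for position i.
                val = memory[rng.randrange(hms)][i]
                if rng.random() < par:
                    # Pitch adjustment: shift to a neighbouring cluster label.
                    val = (val + rng.choice([-1, 1])) % k
            else:
                # Random consideration.
                val = rng.randrange(k)
            new.append(val)
        s = sse(points, new, k)
        # Replace the worst harmony if the improvised one is better.
        worst = max(range(hms), key=lambda j: scores[j])
        if s < scores[worst]:
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda j: scores[j])
    return memory[best], scores[best]

assign, score = hs_cluster([1.0, 1.2, 0.9, 8.0, 8.3, 7.9], k=2)
```

    On this toy data the search separates the two obvious groups; the hybridization with K-means mentioned in the abstract would refine such assignments further.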

    An architecture for context-aware semantic Web services

    , Article IEEE International Conference on Web Services, ICWS 2008, Beijing, 23 September 2008 through 26 September 2008 ; 2008 , Pages 779-780 ; 9780769533100 (ISBN) Keivanloo, I ; Abolhassani, H ; Technical Committee on Services Computing ; Sharif University of Technology
    2008
    Abstract
    Context awareness in Web services is gaining momentum. Since it is not a trivial task, it still lacks a general solution. In this paper, we introduce a novel approach for context-aware Semantic Web services which is applicable to any environment. It is built on the composition of context-provider Web services and other context-aware Semantic Web services. In addition, an extended version of the Semantic Web Service Ontology Language is introduced, in order to make it possible to find appropriate context-aware Semantic Web services based on the available context information. To make it applicable to any environment, the solution does not hold any... 

    Using social annotations for search results clustering

    , Article 13th International Computer Society of Iran Computer Conference on Advances in Computer Science and Engineering, CSICC 2008, Kish Island, 9 March 2008 through 11 March 2008 ; Volume 6 CCIS , 2008 , Pages 976-980 ; 18650929 (ISSN); 3540899847 (ISBN); 9783540899846 (ISBN) Aliakbary, S ; Khayyamian, M ; Abolhassani, H ; Sharif University of Technology
    2008
    Abstract
    Clustering search results helps the user to get an overview of the returned results and to focus on the desired clusters. Most search result clustering methods use the title, URL and snippets returned by a search engine as the source of information for creating the clusters. In this paper we propose a new method for search results clustering (SRC) which uses social annotations as the main source of information about web pages. Social annotations are high-level descriptions of web pages and, as the experiments show, clustering based on social annotations yields good clusters with informative labels. © 2008 Springer-Verlag  
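    The core idea, treating a page's social annotations as its description, can be sketched as a bag-of-tags representation compared by cosine similarity. The greedy single-pass clustering below is a simplification chosen for illustration, not the paper's algorithm:

```python
import math
from collections import Counter

def tag_vector(tags):
    # Bag-of-tags representation of a page's social annotations.
    return Counter(tags)

def cosine(u, v):
    # Cosine similarity between two tag-count vectors.
    dot = sum(u[t] * v.get(t, 0) for t in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def cluster(pages, threshold=0.5):
    # Greedy single-pass clustering: join a page to the first cluster whose
    # representative tag vector is similar enough, else start a new cluster.
    clusters = []  # list of (representative_vector, [page_ids])
    for pid, tags in pages.items():
        v = tag_vector(tags)
        for rep, members in clusters:
            if cosine(v, rep) >= threshold:
                members.append(pid)
                rep.update(v)  # fold this page's tags into the representative
                break
        else:
            clusters.append((v, [pid]))
    return [members for _, members in clusters]

pages = {
    "p1": ["python", "tutorial", "programming"],
    "p2": ["python", "programming", "code"],
    "p3": ["recipe", "cooking", "food"],
    "p4": ["cooking", "food", "kitchen"],
}
print(cluster(pages))  # → [['p1', 'p2'], ['p3', 'p4']]
```

    The cluster representatives' most common tags also suggest the informative labels the abstract mentions.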

    Identifying child users: Is it possible?

    , Article SICE Annual Conference 2008 - International Conference on Instrumentation, Control and Information Technology, Tokyo, 20 August 2008 through 22 August 2008 ; October , 2008 , Pages 3241-3244 ; 9784907764296 (ISBN) Shirali Shahreza, S ; Shirali Shahreza, M ; IEEE ; Sharif University of Technology
    2008
    Abstract
    There are many websites on the Internet which are intended for adult users, and we want to restrict access to them to adults only. Adult content filtering programs on client computers are the currently available solution. But if we can identify child users on the server side, we can protect children better. In this paper, we try to answer the question of whether it is possible to identify child users. © 2008 SICE  

    RIAL: Redundancy reducing inlining algorithm to map XML DTD to relations

    , Article 2008 International Conference on Computational Intelligence for Modelling Control and Automation, CIMCA 2008, Vienna, 10 December 2008 through 12 December 2008 ; July , 2008 , Pages 25-30 ; 9780769535142 (ISBN) Rafsanjani, A. J ; Mirian Hosseinabadi, S. H ; Sharif University of Technology
    2008
    Abstract
    XML has emerged as a common standard for data exchange over the World Wide Web. One way to manage XML data is to use the power of relational databases for storing and querying it, so the hierarchical XML data must be mapped into a flat relational structure. In this paper we propose an algorithm which maps a DTD to a relational schema and, besides content and structure, preserves the functional dependencies during the mapping process in order to produce relations with less redundancy. This is done by categorizing functional dependencies and introducing four rules to be applied to the relations created by the hybrid inlining algorithm according to each category. These rules will reduce... 

    CAPTCHA systems for disabled people

    , Article 2008 IEEE 4th International Conference on Intelligent Computer Communication and Processing, ICCP 2008, Cluj-Napoca, 28 August 2008 through 30 August 2008 ; October , 2008 , Pages 319-322 ; 9781424426737 (ISBN) Shirali Shahreza, M ; Shirali Shahreza, S ; Sharif University of Technology
    2008
    Abstract
    Nowadays, Internet users come from different ages and groups. Disabled people also use the Internet, and some websites are created especially for them. Many Internet sites offer services for human users, but unfortunately some computer programs are designed to abuse these services. To solve this problem, systems named CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) have been introduced to distinguish between human users and computer programs. CAPTCHA methods are mainly based on the weaknesses of OCR systems, but using them is undesirable for human users, especially for disabled people. Therefore some CAPTCHA methods are designed... 

    Challenges in using peer-to-peer structures in order to design a large-scale web search engine

    , Article 13th International Computer Society of Iran Computer Conference on Advances in Computer Science and Engineering, CSICC 2008, Kish Island, 9 March 2008 through 11 March 2008 ; Volume 6 CCIS , 2008 , Pages 461-468 ; 18650929 (ISSN); 3540899847 (ISBN); 9783540899846 (ISBN) Mousavi, H ; Movaghar, A ; Sharif University of Technology
    2008
    Abstract
    One distributed solution for scaling Web Search Engines (WSEs) may be peer-to-peer (P2P) structures. P2P structures are successfully used in many systems at lower cost than ordinary distributed solutions. However, whether they can also be beneficial for large-scale WSEs is still a controversial subject. In this paper, we introduce the challenges in using P2P structures to design a large-scale WSE. Considering different types of P2P systems, we introduce possible P2P models for this purpose. Using quantitative evaluation, we compare these models from different aspects to find out which one is best suited to constructing a large-scale WSE. Our studies indicate that... 

    CAPTCHA for children

    , Article 2008 IEEE International Conference on System of Systems Engineering, SoSE 2008, Monterey, CA, 2 June 2008 through 4 June 2008 ; 2008 ; 9781424421732 (ISBN) Shirali Shahreza, S ; Shirali Shahreza, M ; Sharif University of Technology
    2008
    Abstract
    On some websites it is necessary to distinguish between human users and computer programs; the tests used for this purpose are known as CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart). CAPTCHA methods are mainly based on the weak points of OCR systems, and using them is undesirable for human users. In this paper a method is presented for distinguishing between human users and computer programs based on choosing an object shown on the screen. In this method some objects are chosen randomly and pictures of these objects are downloaded from the Internet. After applying some effects such as rotation, all of the pictures are shown on the screen. Then we ask the... 

    Clustering search engine log for query recommendation

    , Article 13th International Computer Society of Iran Computer Conference on Advances in Computer Science and Engineering, CSICC 2008, Kish Island, 9 March 2008 through 11 March 2008 ; Volume 6 CCIS , 2008 , Pages 380-387 ; 18650929 (ISSN); 3540899847 (ISBN); 9783540899846 (ISBN) Hosseini, M ; Abolhassani, H ; Sharif University of Technology
    2008
    Abstract
    As web content grows, the importance of search engines becomes more critical, while at the same time user satisfaction decreases. Query recommendation is a new approach to improving search results on the web. In this paper we present a method to help search engine users attain the information they require. Such a facility can be provided by offering queries associated with the queries submitted by users, in order to direct them toward their target. First, all previous queries contained in a query log are clustered so that semantically similar queries are detected. Then all queries that are similar to the user's query are ranked according to a relevance criterion. The method... 

    A fast community based algorithm for generating web crawler seeds set

    , Article WEBIST 2008 - 4th International Conference on Web Information Systems and Technologies, Funchal, Madeira, 4 May 2008 through 7 May 2008 ; Volume 2 , 2008 , Pages 98-105 ; 9789898111265 (ISBN) Daneshpajouh, S ; Nasiri, M. M ; Ghodsi, M ; Sharif University of Technology
    2008
    Abstract
    In this paper, we present a new and fast algorithm for generating the seed set for web crawlers. A typical crawler normally starts from a fixed set such as DMOZ links, and then continues crawling from the URLs found in these web pages. Crawlers are supposed to download more good pages in fewer iterations; crawled pages are good if they have high PageRank and come from different communities. We present a new algorithm with O(n) running time for generating a crawler's seed set based on the HITS algorithm. Starting from a seed set generated by our algorithm, a crawler can download qualified web pages from different communities in fewer iterations  
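    The HITS scoring that such seed selection builds on can be sketched as follows. This reproduces only the standard hub/authority iteration and a top-authority pick, not the paper's O(n) community-aware seed generation:

```python
def hits(links, iters=50):
    # links: dict mapping each page to the list of pages it links to.
    pages = set(links) | {q for outs in links.values() for q in outs}
    auth = {p: 1.0 for p in pages}
    hub = {p: 1.0 for p in pages}
    for _ in range(iters):
        # Authority score: sum of hub scores of pages linking in.
        new_auth = {p: 0.0 for p in pages}
        for p, outs in links.items():
            for q in outs:
                new_auth[q] += hub[p]
        # Hub score: sum of authority scores of pages linked to.
        new_hub = {p: sum(new_auth[q] for q in links.get(p, ())) for p in pages}
        # Normalise so the scores stay bounded across iterations.
        na = sum(v * v for v in new_auth.values()) ** 0.5 or 1.0
        nh = sum(v * v for v in new_hub.values()) ** 0.5 or 1.0
        auth = {p: v / na for p, v in new_auth.items()}
        hub = {p: v / nh for p, v in new_hub.items()}
    return auth, hub

def seed_set(links, k=2):
    # Naive seed choice: the k highest-authority pages.
    auth, _ = hits(links)
    return sorted(auth, key=auth.get, reverse=True)[:k]

links = {"a": ["popular"], "b": ["popular"], "c": ["popular"], "d": ["niche"]}
print(seed_set(links, 1))  # → ['popular']
```

    A community-aware variant would additionally spread the k picks across different link communities rather than taking the global top-k.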

    Encouraging persons with hearing problem to learn sign language by Internet websites

    , Article 8th IEEE International Conference on Advanced Learning Technologies, ICALT 2008, Santander, 1 July 2008 through 5 July 2008 ; 2008 , Pages 1036-1037 ; 9780769531670 (ISBN) Shirali Shahreza, M ; Shirali Shahreza, S ; Sharif University of Technology
    2008
    Abstract
    Nowadays, Internet users come from different ages and groups, and disabled people are one group of Internet users. Some websites are created especially for them. One group of disabled people are deaf persons, who have a special language known as sign language. Here we present a method to encourage them, especially children, to learn sign language. In this method, when a deaf person wants to enter a website created for deaf persons, a word is shown as a movie in sign language. The user should recognize the word and select it from a list. If the user understands sign language and recognizes the word, he/she can enter the website. This project has been... 

    Failure recovery of composite semantic web services using subgraph replacement

    , Article International Conference on Computer and Communication Engineering 2008, ICCCE08: Global Links for Human Development, Kuala Lumpur, 13 May 2008 through 15 May 2008 ; 2008 , Pages 489-493 ; 9781424416929 (ISBN) Saboohi, H ; Amini, A ; Abolhassani, H ; Sharif University of Technology
    2008
    Abstract
    Web services extend the functionality of the current web toward a service-oriented architecture. The nascent semantic web is capable of automating activities by annotating documents and services with shared ontological semantics. Although a vast number of diverse web services have been created since the technology's inception, web services are not a panacea for software development and the field is still in its infancy. A middle agent (broker) simplifies the interaction of service providers and service requesters, especially when no atomic web service can fulfill the user's need. The broker composes a desired value-added service and orchestrates the execution of the bundled sub-processes. It is inevitable that several... 

    A geographical question answering system

    , Article 3rd International Conference on Web Information Systems and Technologies, Webist 2007, Barcelona, 3 March 2007 through 6 March 2007 ; Volume WIA , 2007 , Pages 308-314 Behrangi, E ; Ghasemzadeh, H ; Sheykh Esmaili, K ; Minaei Bidgoli, B ; Sharif University of Technology
    2007
    Abstract
    Question Answering systems are one of the hot topics in the context of information retrieval. In this paper, we develop an open-domain Question Answering system for spatial queries. We use Google to gather raw data from the Web; over a few iterations the density of potential answers is increased, and finally, based on a couple of evaluators, the best answers are selected and returned to the user. Our proposed algorithm uses fuzzy methods to be more precise. Experiments have been designed to evaluate the performance of our algorithm, and the results are very promising. We also describe how this algorithm can be applied to other types of questions  

    Collage CAPTCHA

    , Article 2007 9th International Symposium on Signal Processing and its Applications, ISSPA 2007, Sharjah, 12 February 2007 through 15 February 2007 ; 2007 ; 1424407796 (ISBN); 9781424407798 (ISBN) Shirali Shahreza, M ; Shirali Shahreza, S ; Sharif University of Technology
    2007
    Abstract
    Nowadays, many daily human activities such as education, commerce and conversation are carried out through the Internet. In cases such as registering on websites, some hackers write programs to make automatic false enrolments which waste the resources of the website and may even stop the entire website from working. Therefore, it is necessary to tell human users apart from computer programs; the tests used for this purpose are known as CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart). CAPTCHA methods are mainly based on the weak points of OCR (Optical Character Recognition) systems, but using them is undesirable for human users. In this paper a method is presented for... 

    Localized CAPTCHA for illiterate people

    , Article 2007 International Conference on Intelligent and Advanced Systems, ICIAS 2007, Kuala Lumpur, 25 November 2007 through 28 November 2007 ; 2007 , Pages 675-679 ; 1424413559 (ISBN); 9781424413553 (ISBN) Shirali Shahreza, M. H ; Shirali Shahreza, M ; Sharif University of Technology
    2007
    Abstract
    Nowadays, many daily human activities such as education, commerce and conversation are carried out through the Internet. In cases such as registering on websites, some hackers write programs to make automatic false enrolments which waste the resources of the website and may even stop the entire website from working. Therefore, it is necessary to tell human users apart from computer programs; the tests used for this purpose are known as CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart). CAPTCHA methods are mainly based on the weak points of OCR (Optical Character Recognition) systems, but using them is undesirable for human users. Therefore, Non-OCR-Based CAPTCHA methods are... 

    Coincidence based mapping extraction with genetic algorithms

    , Article 3rd International Conference on Web Information Systems and Technologies, Webist 2007, Barcelona, 3 March 2007 through 6 March 2007 ; Volume WIA , 2007 , Pages 176-183 Qazvinian, V ; Abolhassani, H ; Haeri, S. H ; Sharif University of Technology
    2007
    Abstract
    Ontology alignment is an answer to the problem of handling heterogeneous information across different domains. After applying some similarity measures, one obtains a set of similarity values; the final goal is to extract mappings from them. Our contribution is a new genetic algorithm (GA) based extraction method. The GA employs a structure-based weighting model, named the "coincidence based model", as its fitness function. In the first part of the paper, some preliminaries and notation are given and the coincidence based weighting is introduced. The second part discusses the details of the devised GA, with evaluation results for a sample dataset  
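    To illustrate GA-based mapping extraction, the sketch below encodes candidate mappings as a bit string and evolves selections under a simple similarity-sum fitness with a one-to-one-constraint penalty. This stand-in fitness and the candidate list are our own assumptions, not the paper's coincidence-based model:

```python
import random

# Hypothetical candidate mappings (source concept, target concept, similarity).
CANDIDATES = [
    ("Car", "Automobile", 0.9),
    ("Car", "Vehicle", 0.6),
    ("Driver", "Operator", 0.7),
    ("Wheel", "Vehicle", 0.3),
]

def fitness(bits):
    # Reward total similarity of the selected mappings; apply a flat penalty
    # if the one-to-one constraint is violated (placeholder for the paper's
    # coincidence-based weighting).
    chosen = [c for b, c in zip(bits, CANDIDATES) if b]
    score = sum(sim for _, _, sim in chosen)
    srcs = [s for s, _, _ in chosen]
    tgts = [t for _, t, _ in chosen]
    if len(set(srcs)) < len(srcs) or len(set(tgts)) < len(tgts):
        score -= 1.0
    return score

def extract(generations=60, pop_size=20, seed=1):
    rng = random.Random(seed)
    n = len(CANDIDATES)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]             # one-point crossover
            if rng.random() < 0.2:                # occasional mutation
                i = rng.randrange(n)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    best = max(pop, key=fitness)
    return [c[:2] for b, c in zip(best, CANDIDATES) if b]
```

    On this tiny candidate set the GA settles on the conflict-free selection with the highest total similarity; the real method replaces the toy fitness with the coincidence-based structural weighting.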

    CAPTCHA for blind people

    , Article ISSPIT 2007 - 2007 IEEE International Symposium on Signal Processing and Information Technology, Cairo, 15 December 2007 through 18 December 2007 ; 2007 , Pages 995-998 ; 9781424418350 (ISBN) Shirali Shahreza, M ; Shirali Shahreza, S ; Sharif University of Technology
    2007
    Abstract
    Nowadays, Internet users come from different ages and groups. Disabled people also use the Internet, and some websites are created especially for them. Many Internet sites offer services for human users, but unfortunately some computer programs are designed to abuse these services. As a result, systems named CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) have been introduced to tell human users and computer software apart. In this paper, a new CAPTCHA method is introduced which can be used by blind people. In this method a simple mathematical problem is created according to predefined patterns and converted to speech using a... 

    Kavosh: An intelligent neuro-fuzzy search engine

    , Article 7th International Conference on Intelligent Systems Design and Applications, ISDA'07, Rio de Janeiro, 22 October 2007 through 24 October 2007 ; November , 2007 , Pages 597-602 ; 0769529763 (ISBN); 9780769529769 (ISBN) Milani Fard, A ; Ghaemi, R ; Akbarzadeh-T., M. R ; Akbari, H ; Sharif University of Technology
    2007
    Abstract
    In this paper we propose a neuro-fuzzy architecture for Web content taxonomy using a hybrid of Adaptive Resonance Theory (ART) neural networks and fuzzy logic concepts. The search engine, called Kavosh, is equipped with unsupervised neural networks for dynamic data clustering. The model was designed for retrieving images without metadata and for estimating the resemblance of multimedia documents; however, in this work only the text mining method is implemented. Results show noticeable average precision and recall over search results. © 2007 IEEE