A way of key management in cloud storage based on trusted computing
Yang, Xin1, 2, 3; Shen, Qingni1, 2, 3; Yang, Yahui1; Qing, Sihan1, 4
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6985 LNCS, p 135-145, 2011, Network and Parallel Computing - 8th IFIP International Conference, NPC 2011, Proceedings;
ISSN:03029743,E-ISSN:16113349;ISBN-13:9783642244025;
DOI: 10.1007/978-3-642-24403-2_11; Conference: 8th IFIP International Conference on Network and Parallel Computing, NPC 2011, October 21, 2011 - October 23, 2011;
Publisher: Springer Verlag
Author affiliation: 1 School of Software and Microelectronics, Peking University, Beijing, China2 MoE Key Lab. of Network and Software Assurance, Peking University, Beijing, China3 Network and Information Security Lab., Institute of Software, Peking University, Beijing, China4 Institute of Software, Chinese Academy of Sciences, Beijing 100086, China
Abstract: Cloud security has gained increasing emphasis in the research community, with much of the focus concentrated on how to secure the operating system and virtual machine on which a cloud system runs. We take an alternative perspective and consider the problem of building a secure cloud storage service on top of a public cloud infrastructure where the service provider is not completely trusted by the customer, which makes it necessary to store only ciphertext in the public cloud. We describe an architecture based on the Trusted Platform Module and the client of the cloud storage system that helps manage the symmetric keys used for encrypting data in the public cloud and the asymmetric keys used for encrypting the symmetric keys. The key management mechanism covers how to store, back up, and share keys. Based on HDFS (Hadoop Distributed File System), we put this key management scheme into practice, survey the benefits that such an infrastructure provides to cloud users and providers, and measure the time cost it incurs. © 2011 IFIP International Federation for Information Processing. (10 refs.)Main Heading: Cloud computingControlled terms: Cryptography - Parallel architectures - SurveysUncontrolled terms: Asymmetric key - backup - cipher text - Key management - public cloud - Symmetric keysClassification Code: 723 Computer Software, Data Handling and Applications - 722.4 Digital Computers and Systems - 722 Computer Systems and Equipment - 718 Telephone Systems and Related Technologies; Line Communications - 717 Optical Communication - 716 Telecommunication; Radar, Radio and Television - 405.3 Surveying
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
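The record above describes the envelope pattern: a per-file symmetric key encrypts the data, and a TPM-protected master key wraps the symmetric key. A minimal sketch of that key hierarchy follows; the SHA-256-based XOR stream is a toy stand-in for real ciphers (it is NOT secure), and the TPM sealing step is only simulated by an in-memory master key:

```python
import hashlib, os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with a SHA-256 counter keystream.
    Illustrative only -- NOT cryptographically secure."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

# Envelope pattern: a random symmetric data key encrypts the file;
# a master key (in practice sealed inside the TPM) wraps the data key.
master_key = os.urandom(32)          # stands in for a TPM-protected key
data_key = os.urandom(32)            # per-file symmetric key
plaintext = b"file stored in the public cloud"

ciphertext = keystream_xor(data_key, plaintext)
wrapped_key = keystream_xor(master_key, data_key)   # stored with the ciphertext

# Recovery: unwrap the data key with the master key, then decrypt.
recovered = keystream_xor(keystream_xor(master_key, wrapped_key), ciphertext)
```

Only ciphertext and the wrapped key ever leave the client, which matches the record's premise of an untrusted public cloud.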
Gateway-oriented password-authenticated key exchange protocol in the standard model
Wei, Fushan1, 2; Zhang, Zhenfeng2; Ma, Chuangui1
Source: Journal of Systems and Software, 2011;
ISSN: 01641212; DOI: 10.1016/j.jss.2011.09.061 Article in Press
Author affiliation: 1 Department of Information Research, Zhengzhou Information Science and Technology Institute, Zhengzhou 450002, China2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: A gateway-oriented password-based authenticated key exchange (GPAKE) is a 3-party protocol, which allows a client and a gateway to establish a common session key with the help of an authentication server. GPAKE protocols are suitable for mobile communication environments such as GSM (Global System for Mobile Communications) and 3GPP (The Third Generation Partnership Project). To date, most of the published protocols for GPAKE have been proven secure in the random oracle model. In this paper, we present the first provably-secure GPAKE protocol in the standard model. It is based on the 2-party password-authenticated key exchange protocol of Jiang and Gong. The protocol is secure under the DDH assumption (without random oracles). Furthermore, it can resist undetectable on-line dictionary attacks. Compared with previous solutions, our protocol achieves stronger security with similar efficiency. © 2011 Elsevier Inc. All rights reserved.Main Heading: Gateways (computer networks)Controlled terms: Authentication - Computer crime - Global system for mobile communicationsUncontrolled terms: Authenticated key exchange - Authentication servers - DDH assumptions - Dictionary attack - Mobile communications - Password-authenticated key exchange - Random Oracle model - Session key - The standard model - Third generation - Without random oraclesClassification Code: 716 Telecommunication; Radar, Radio and Television - 717 Optical Communication - 718 Telephone Systems and Related Technologies; Line Communications - 723 Computer Software, Data Handling and Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Design and implement of integrity checking schema under cloud storage model
Fu, Yan-Yan1; Zhang, Min1; Feng, Deng-Guo1
Source: Tongxin Xuebao/Journal on Communications, v 32, n 9 A, p 8-15, September 2011; Language: Chinese;
ISSN: 1000436X;
Publisher: Editorial Board of Journal on Communications
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, Beijing 100080, China
Abstract: In the cloud storage model, the user needs to confirm the status of a file while it is kept on an untrusted remote storage server, and can then decide whether to restore the data or use it for other purposes. By pre-randomly sampling blocks from a file and signing them, the scheme provides users with credible certification credentials. When the user initiates a verification, the storage server regenerates a new signature according to the same rules; by comparing the signatures, the user can verify whether the file is complete. Analysis shows that with this random sampling method the user can detect file corruption with high probability. Although a single verification may not be conclusive, the user can send multiple challenges for better credibility. Moreover, the time required for verification depends not on the file size but on the desired credibility of the verification. Experimental results show that this scheme works better under the cloud storage model than other signature-based schemes and has very high credibility. (17 refs.)Main Heading: Model checkingControlled terms: Communication - TechnologyUncontrolled terms: File corruption - File sizes - High probability - Integrity - Integrity checking - Random sampling - Random sampling method - Remote storage - Storage model - Storage servers - User needClassification Code: 716 Telecommunication; Radar, Radio and Television - 723.1 Computer Programming - 901 Engineering Profession
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
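The challenge-response idea in the record above can be sketched with plain hashes instead of signatures. All sizes, names, and the nonce scheme are illustrative assumptions, not the paper's actual parameters:

```python
import hashlib, random

BLOCK = 16  # bytes per sampled block (toy size; real parameters would differ)

def sample_digest(data: bytes, positions, nonce: bytes) -> str:
    """Digest over randomly sampled blocks; a fresh nonce per challenge
    keeps the server from replaying an old answer."""
    h = hashlib.sha256(nonce)
    for p in positions:
        h.update(data[p:p + BLOCK])
    return h.hexdigest()

original = bytes(range(256)) * 8                      # the uploaded file
rng = random.Random(42)                               # challenger's randomness
positions = sorted(rng.sample(range(0, len(original), BLOCK), 5))
nonce = b"challenge-0001"

expected = sample_digest(original, positions, nonce)  # user's credential
served = sample_digest(original, positions, nonce)    # server's recomputation
# A corruption is caught only if a sampled block covers it, so repeating the
# challenge with fresh positions drives the detection probability up --
# exactly the "multiple challenges" point made in the abstract.
```

Note that verification cost grows with the number of sampled blocks, not with the file size, matching the abstract's claim.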
Modeling aspect-oriented software architecture based on ACME
Rong, Mei1; Liu, Changlin2; Zhang, Guangquan2, 3
Source: ICCSE 2011 - 6th International Conference on Computer Science and Education, Final Program and Proceedings, p 1159-1164, 2011, ICCSE 2011 - 6th International Conference on Computer Science and Education, Final Program and Proceedings;
ISBN-13: 9781424497188;
DOI: 10.1109/ICCSE.2011.6028839; Article number: 6028839; Conference: 6th International Conference on Computer Science and Education, ICCSE 2011, August 3, 2011 - August 5, 2011;
Publisher: IEEE Computer Society
Author affiliation: 1 Shenzhen Tourism College, Jinan University, Shenzhen, 518053, China2 School of Computer Science and Technology, Soochow University, Suzhou 215006, China3 State Key Lab. of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: Crosscutting phenomena have been found at the architectural level. Using aspects, AOP effectively solves the code tangling problem produced by crosscutting at the code level. This paper presents an approach to aspect-oriented software architecture design by introducing the concept of aspects into the software architecture design process. We describe the regular architectural structure with the basic design elements of ACME, and represent the architectural aspects with the proposed extension of ACME. The example of an Online Bookstore System is used to illustrate our proposal. © 2011 IEEE. (18 refs.)Main Heading: Software architectureControlled terms: Aspect oriented programming - Computer science - Computer software - Design - Education computing - Network architecture - Software designUncontrolled terms: ACME - Architectural levels - Architectural structure - Aspect-oriented - Aspect-oriented software - Design elements - Online bookstore - software architecture designClassification Code: 408 Structural Design - 723 Computer Software, Data Handling and Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
An efficient mutual authentication and key agreement protocol preserving user anonymity in mobile networks
Xu, Jing1; Zhu, Wen-Tao2; Feng, Deng-Guo1
Source: Computer Communications, v 34, n 3, p 319-325, March 15, 2011; ISSN: 01403664; DOI: 10.1016/j.comcom.2010.04.041;
Publisher: Elsevier
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 State Key Laboratory of Information Security, Graduate University, Chinese Academy of Sciences, Beijing 100049, China
Abstract: We address the problem of mutual authentication and key agreement with user anonymity for mobile networks. Recently, Lee et al. proposed such a scheme, which is claimed to be a slight modification of, but a security enhancement on Zhu et al.'s scheme based on the smart card. In this paper, however, we reveal that both schemes still suffer from certain weaknesses which have been previously overlooked, and thus are far from the desired security. We then propose a new protocol which is immune to various known types of attacks. Analysis shows that, while achieving identity anonymity, key agreement fairness, and user friendliness, our scheme is still cost-efficient for a general mobile node. © 2010 Elsevier B.V. All rights reserved. (17 refs.)Main Heading: Wireless networksControlled terms: Authentication - Network protocols - Network security - Smart cardsUncontrolled terms: Anonymity - Cost-efficient - Key agreement - Mobile networks - Mobile nodes - Mutual authentication - New protocol - Roaming services - Security enhancements - User anonymity - User friendliness - Wireless securityClassification Code: 722.4 Digital Computers and Systems - 723 Computer Software, Data Handling and Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Cryptanalysis of an (hierarchical) identity based parallel key-insulated encryption scheme
Wang, Xu An1; Weng, Jian2, 3, 4; Yang, Xiaoyuan1; Zhang, Minqing1
Source: Journal of Systems and Software, v 84, n 2, p 219-225, February 2011; ISSN: 01641212; DOI: 10.1016/j.jss.2010.09.021;
Publisher: Elsevier Inc.
Author affiliation: 1 Key Laboratory of Information and Network Security, Engineering College of Chinese Armed Police Force, Xi'an 710086, China2 State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China3 Department of Computer Science, Jinan University, Guangzhou 510632, China4 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100080, China
Abstract: Recently, Ren and Gu (2010) proposed an identity-based parallel key-insulated encryption (IBPKIE) scheme, and further extended it to a hierarchical identity-based parallel key-insulated encryption (HIBPKIE) scheme. They claimed that both schemes are secure against adaptive chosen-ciphertext attacks without random oracles. In this paper, however, by giving concrete attacks, we show that their schemes are not even secure against chosen-plaintext attacks. © 2010 Elsevier Inc. All rights reserved. (11 refs.)Main Heading: CryptographyControlled terms: Security of dataUncontrolled terms: Chosen ciphertext attack - Chosen-plaintext attack - Encryption schemes - Identity Based Encryption - Identity-based - Parallel key-insulation - Without random oraclesClassification Code: 716 Telecommunication; Radar, Radio and Television - 717 Optical Communication - 718 Telephone Systems and Related Technologies; Line Communications - 723 Computer Software, Data Handling and Applications - 723.2 Data Processing and Image Processing
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
The weight enumerator of a class of cyclic codes
Ma, Changli1; Zeng, Liwei1; Liu, Yang1; Feng, Dengguo2; Ding, Cunsheng3, 4
Source: IEEE Transactions on Information Theory, v 57, n 1, p 397-402, January 2011; ISSN: 00189448; DOI: 10.1109/TIT.2010.2090272; Article number: 5673757;
Publisher: Institute of Electrical and Electronics Engineers Inc.
Author affiliation: 1 College of Mathematics and Information Science, Hebei Normal University, Shi Jia Zhuang 050016, China2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing, 100080, China3 College of Mathematics and Information Science, Hebei Normal University, Shi Jia Zhuang, Hebei Province, China4 Hong Kong University of Science and Technology, Hong Kong, Hong Kong
Abstract: Cyclic codes with two zeros and their dual codes have been a subject of study for many years. However, their weight distributions are known only for a few cases. In this paper, the weight distributions of the duals of the cyclic codes with two zeros are settled for a few cases. The weight distributions of punctured versions of these codes are also determined for several special cases. © 2006 IEEE. (11 refs.)Uncontrolled terms: Cyclic code - Dual codes - Linear codes - Weight distributions - Weight enumerator
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
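The weight distributions settled in the record above concern cyclic codes with two zeros; as a much simpler illustration of what a weight enumerator is, one can brute-force the distribution of the binary cyclic [7,4] Hamming code (a toy case, not the family studied in the paper):

```python
from itertools import product

def poly_mul_gf2(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists (low degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

g = [1, 1, 0, 1]  # g(x) = 1 + x + x^3, generator of the cyclic [7,4] Hamming code

weights = {}
for msg in product([0, 1], repeat=4):        # all 16 message polynomials
    codeword = poly_mul_gf2(list(msg), g)    # degree <= 6, so no modular reduction needed
    w = sum(codeword)
    weights[w] = weights.get(w, 0) + 1

# Weight enumerator of the [7,4] Hamming code: 1 + 7z^3 + 7z^4 + z^7
```

For the larger codes in the paper this enumeration is infeasible, which is exactly why closed-form weight distributions are of interest.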
Structural property analysis of a kind of Petri net synthesis
Xia, Chuanliang1, 2; Liu, Z.; Sun, P.
Source: Advanced Materials Research, v 255-260, p 1989-1993, 2011, Advances in Civil Engineering;ISSN:10226680;ISBN-13:9783037851395;
DOI:10.4028/www.scientific.net/AMR.255-260.1989; Conference: 2011 International Conference on Civil Engineering and Building Materials, CEBM 2011, July 29, 2011 - July 31, 2011; Sponsor: Kunming University of Science and Technology; International Association for Scientific and High Technology;
Publisher: Trans Tech Publications
Author affiliation: 1 School of Computer Science and Technology, Shandong Jianzhu University, Jinan, China2 State Key Laboratory of Computer Science, Institute of Software, Academy of Sciences, Beijing, China
Abstract: Petri net synthesis can avoid the state exploration problem by guaranteeing correctness while incrementally expanding the net. This paper proposes conditions, imposed on the synthesis of nets sharing a kind of subnet, under which the following structural properties are preserved: repetitiveness, consistency, structural boundedness, conservativeness, structural liveness, P-invariants and T-invariants. © (2011) Trans Tech Publications, Switzerland. (9 refs.)Main Heading: Petri netsControlled terms: Building materials - Civil engineering - Construction equipmentUncontrolled terms: Boundedness - Liveness - P-invariants - Property analysis - Repetitiveness - State exploration - T-invariantClassification Code: 415 Metals, Plastics, Wood and Other Structural Materials - 414 Masonry Materials - 413 Insulating Materials - 921.4 Combinatorial Mathematics, Includes Graph Theory, Set Theory - 412 Concrete - 409 Civil Engineering, General - 405.1 Construction Equipment - 411 Bituminous Materials
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Comments on "monomial bent functions"
Sun, Guanghong1; Wu, Chuankun2
Source: IEEE Transactions on Information Theory, v 57, n 6, p 4014-4015, June 2011; ISSN: 00189448; DOI: 10.1109/TIT.2011.2136950; Article number: 5773025;
Publisher: Institute of Electrical and Electronics Engineers Inc.
Author affiliation: 1 College of Sciences, Hohai University, Nanjing, 210098, China2 State Key Lab of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing, China (2 refs.)
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Efficient secret sharing schemes
Lv, Chunli1, 2; Jia, Xiaoqi3; Lin, Jingqiang1; Jing, Jiwu1; Tian, Lijun2; Sun, Mingli2
Source: Communications in Computer and Information Science, v 186 CCIS, p 114-121, 2011, Secure and Trust Computing, Data Management, and Applications - 8th FTRA International Conference, STA 2011, Proceedings; ISSN: 18650929; ISBN-13: 9783642223389; DOI: 10.1007/978-3-642-22339-6_14; Conference: 8th FTRA International Conference on Secure and Trust Computing, Data Management, and Application, STA 2011, June 28, 2011 - June 30, 2011;
Publisher: Springer Verlag
Author affiliation: 1 State Key Laboratory of Information Security, Graduate University, Chinese Academy of Sciences, China2 College of Information and Electrical Engineering, China Agricultural University, China3 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, China
Abstract: We propose a new XOR-based (k,n) threshold secret sharing scheme (SSS), where the secret is a binary string and only XOR operations are used to make shares and recover the secret. Moreover, it is easy to extend our scheme to a multi-secret sharing scheme. When k is close to n, the computation costs are much lower than those of existing XOR-based schemes in both the distribution and recovery phases. In our scheme, using more shares (≥ k) accelerates recovery. © 2011 Springer-Verlag. (17 refs.)Main Heading: RecoveryControlled terms: Information managementUncontrolled terms: Binary string - Computation costs - Multi-secret sharing scheme - Recovery speed - Secret sharing schemes - XOR operationClassification Code: 531 Metallurgy and Metallography - 903.2 Information Dissemination
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
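The record above concerns a general XOR-based (k,n) threshold scheme; as a minimal sketch of the underlying idea, here is only the trivial (n,n) special case, where all n shares are required (the paper's (k,n) construction is considerably more involved):

```python
import os

def make_shares(secret: bytes, n: int):
    """Split secret into n XOR shares; all n are needed to recover ((n,n) case)."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:                         # last share = secret XOR all others
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def recover(shares):
    """XOR of all shares reproduces the secret."""
    acc = bytes(len(shares[0]))
    for s in shares:
        acc = bytes(a ^ b for a, b in zip(acc, s))
    return acc

secret = b"binary string secret"
shares = make_shares(secret, 4)
recovered = recover(shares)
```

Any n-1 shares are uniformly random and so reveal nothing about the secret, which is why only XOR is needed for both distribution and recovery, matching the abstract's efficiency claim.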
Impossible differential cryptanalysis of SPN ciphers
Li, R.1; Sun, B.1; Li, C.1, 2
Source: IET Information Security, v 5, n 2, p 111-120, June 2011; ISSN: 17518709, E-ISSN: 17518717; DOI: 10.1049/iet-ifs.2010.0174;
Publisher: Institution of Engineering and Technology
Author affiliation: 1 National University of Defense Technology, Department of Mathematics and System Science, Science College, Changsha, 410073, China2 Chinese Academy of Sciences, State Key Laboratory of Information Security, Institute of Software, Beijing, 100190, China
Abstract: Impossible differential cryptanalysis is a very popular tool for analysing the security of modern block ciphers, and the core of such attacks is the existence of impossible differentials. Currently, most methods for finding impossible differentials are based on the miss-in-the-middle technique and are very ad hoc. In this study, the authors concentrate on substitution-permutation network (SPN) ciphers whose diffusion layer is defined by a linear transformation P. Based on the theory of linear algebra, the authors propose several criteria on P and its inverse P-1 to characterise the existence of 3/4-round impossible differentials. The authors further discuss the possibility of extending these methods to analyse 5/6-round impossible differentials. Using these criteria, impossible differentials for reduced-round Rijndael are found that are consistent with the ones found before. New 4-round impossible differentials are discovered for the block cipher ARIA. Many 4-round impossible differentials are detected for the first time for a kind of SPN cipher that employs a 32×32 binary matrix proposed at ICISC 2006 as its diffusion layer. It is concluded that the linear transformation should be carefully designed in order to protect the cipher against impossible differential cryptanalysis. © 2011 The Institution of Engineering and Technology. (20 refs.)Main Heading: CryptographyControlled terms: Linear algebra - Linear equations - Linear transformations - Lyapunov methods - Mathematical transformationsUncontrolled terms: Binary matrix - Block ciphers - Differential cryptanalysis - Diffusion layers - Permutation network - RijndaelClassification Code: 716 Telecommunication; Radar, Radio and Television - 717 Optical Communication - 718 Telephone Systems and Related Technologies; Line Communications - 723 Computer Software, Data Handling and Applications - 921 Mathematics
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
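The criteria in the record above are stated as linear algebra over GF(2) on the binary matrix of the diffusion layer P. A basic building block for checking such criteria is rank computation over GF(2); a sketch with rows encoded as integer bitmasks follows (the example matrix is an arbitrary invertible 4×4 circulant, not one from the paper):

```python
def gf2_rank(rows):
    """Rank over GF(2) by Gaussian elimination; each row is an int bitmask."""
    rank = 0
    rows = list(rows)
    for col in reversed(range(max(rows).bit_length())):
        pivot = next((r for r in rows if (r >> col) & 1), None)
        if pivot is None:
            continue                     # no pivot in this column
        rows.remove(pivot)
        rows = [r ^ pivot if (r >> col) & 1 else r for r in rows]
        rank += 1
    return rank

# An invertible diffusion layer must have full rank, e.g. this 4x4 binary matrix:
P = [0b1110, 0b0111, 0b1011, 0b1101]
full_rank = gf2_rank(P) == len(P)
```

The paper's actual criteria additionally examine submatrices of P and P^{-1}, for which the same elimination routine applies.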
An environment-adaptive role-based access control model
Wu, Xinsong1, 2; He, Yeping1; Zhou, Zhouyi1, 2; Liang, Hongliang1
Source: Jisuanji Yanjiu yu Fazhan/Computer Research and Development, v 48, n 6, p 983-990, June 2011; Language: Chinese; ISSN: 10001239;
Publisher: Science Press
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 Graduate School of Chinese Academy of Sciences, Beijing 100049, China
Abstract: Large-scale network-based applications, such as infectious disease reporting systems, require that the access control policy can change in response to environment alternation. However, existing access control models are inflexible and cannot adapt to environment alternation because they lack mechanisms to capture such changes and to adjust the access control policy. In this paper, we analyze the access control requirements of an infectious disease reporting system and, based on this analysis, extract the general access control requirements of large-scale network-based applications. By extending the RBAC model, we design the components of an environment-adaptive role-based access control model called EA-RBAC and give its formal definition. Compared with traditional RBAC models, EA-RBAC adds event triggers, event-based equivalent state transitions, environment roles, and virtual domain mechanisms. Through event triggers and equivalent state transitions, the system can perceive environment alternation and change state accordingly. Through environment roles and virtual domains, the system can dynamically adjust environment roles and user authorization based on the current state. The EA-RBAC model can enforce flexible access control policies for large-scale network-based applications while maintaining security. As an example, this paper also gives an applicability analysis of the EA-RBAC model in an infectious disease reporting system.
(12 refs.)Main Heading: Access controlControlled terms: Adaptive control systems - Disease control - Security systems - Virtual realityUncontrolled terms: Access control models - Access control policies - Applicability analysis - Control requirements - Domain mechanism - Environment role - Environment-adaptive - Equivalent state - Event-based - Formal definition - Infectious disease - Network-based - On currents - RBAC - RBAC model - Reporting systems - Role-based access control model - State-based - Virtual domainClassification Code: 461.7 Health Care - 723 Computer Software, Data Handling and Applications - 731.1 Control Systems - 914.1 Accidents and Accident Prevention
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
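The event -> state -> permission chain described in the record above can be sketched in a few lines. All role, event, and permission names here are hypothetical, and only the core mechanism (environment-triggered state transitions changing authorization) is modeled:

```python
# Environment alternation: events move the system between equivalent states.
TRANSITIONS = {("normal", "outbreak_reported"): "epidemic",
               ("epidemic", "all_clear"): "normal"}

# Authorization depends on the role *and* the current environment state.
PERMISSIONS = {("doctor", "normal"):   {"read_case"},
               ("doctor", "epidemic"): {"read_case", "report_case"}}

class EaRbac:
    def __init__(self):
        self.state = "normal"

    def on_event(self, event):
        # Event trigger: transit state if a transition is defined, else stay.
        self.state = TRANSITIONS.get((self.state, event), self.state)

    def check(self, role, operation):
        return operation in PERMISSIONS.get((role, self.state), set())

monitor = EaRbac()
denied_before = monitor.check("doctor", "report_case")   # not allowed initially
monitor.on_event("outbreak_reported")
granted_after = monitor.check("doctor", "report_case")   # allowed after transition
```

The environment-role and virtual-domain mechanisms of EA-RBAC would sit on top of this state machine; they are omitted here.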
Program analysis: From qualitative analysis to quantitative analysis (NIER track)
Liu, Sheng1, 2; Zhang, Jian1
Source: Proceedings - International Conference on Software Engineering, p 956-959, 2011, ICSE 2011 - 33rd International Conference on Software Engineering, Proceedings of the Conference;ISSN:02705257;ISBN-13:9781450304450;DOI:10.1145/1985793.1985957; Conference: 33rd International Conference on Software Engineering, ICSE 2011, May 21, 2011 - May 28, 2011; Sponsor: Assoc. Comput. Mach., Spec. Interest Group Softw.; Eng. (ACM SIGSOFT); IEEE Computer Society; Technical Council on Software Engineering (TCSE);
Publisher: IEEE Computer Society
Author affiliation: 1 State Key Lab. of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China2 Graduate University, Chinese Academy of Sciences, China
Abstract: We propose to combine symbolic execution with volume computation to compute the exact execution frequency of program paths and branches. Given a path, we use symbolic execution to obtain the path condition which is a set of constraints; then we use volume computation to obtain the size of the solution space for the constraints. With such a methodology and supporting tools, we can decide which paths in a program are executed more often than the others. We can also generate certain test cases that are related to the execution frequency, e.g., those covering cold paths. © 2011 ACM. (10 refs.)Main Heading: Quality controlControlled terms: Software engineeringUncontrolled terms: execution probability - Path condition - Program analysis - Qualitative analysis - Solution space - Supporting tool - symbolic execution - Test caseClassification Code: 723.1 Computer Programming - 913.3 Quality Assurance and Control
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
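The methodology in the record above pairs a symbolic path condition with a volume (solution-space size) computation. Over a small finite input domain the volume step reduces to counting satisfying assignments; the path condition and domain below are invented for illustration:

```python
from itertools import product

# Toy path condition for the branch `if x + y > 10` with inputs x, y in [0, 7];
# the branch frequency is the fraction of the input space satisfying it.
DOMAIN = range(8)

def branch_frequency(condition):
    """Exact execution frequency by exhaustive solution counting."""
    hits = sum(1 for x, y in product(DOMAIN, DOMAIN) if condition(x, y))
    return hits / len(DOMAIN) ** 2

freq_then = branch_frequency(lambda x, y: x + y > 10)    # a "cold" branch
freq_else = branch_frequency(lambda x, y: x + y <= 10)   # the "hot" branch
```

Real volume computation avoids this enumeration and scales to large domains, but the quantity computed is the same: the measure of the path condition's solution space.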
Efficient ciphertext policy attribute-based encryption with constant-size ciphertext and constant computation-cost
Chen, Cheng1; Zhang, Zhenfeng1; Feng, Dengguo1
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6980 LNCS, p 84-101, 2011, Provable Security - 5th International Conference, ProvSec 2011, Proceedings; ISSN: 03029743,E-ISSN: 16113349;ISBN-13: 9783642243158;DOI: 10.1007/978-3-642-24316-5_8; Conference: 5th International Conference on Provable Security, ProvSec 2011, October 16, 2011 - October 18, 2011; Sponsor: The National Natural Science Foundation of China (NSFC); Xidian Univ., Key Lab. Comput. Networks; Inf. Secur., Minist. Educ.;
Publisher: Springer Verlag
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing, 100190, China
Abstract: Attribute-based encryption provides good solutions to the problem of anonymous access control by specifying access policies among private keys or ciphertexts over encrypted data. In ciphertext-policy attribute-based encryption (CP-ABE), each user is associated with a set of attributes, and data is encrypted with access structures on attributes. A user is able to decrypt a ciphertext if and only if his attributes satisfy the ciphertext access structure. CP-ABE is very appealing since the ciphertext and data access policies are integrated together in a natural and effective way. Most current CP-ABE schemes incur large ciphertext sizes and computation costs in the encryption and decryption operations which depend at least linearly on the number of attributes involved in the access policy. In this paper, we present two new CP-ABE schemes, which have both constant-size ciphertext and constant computation costs for a non-monotone AND-gate access policy. The first scheme can be proven CPA-secure in the standard model under the decision n-BDHE assumption, and the second scheme can be proven CCA-secure in the standard model under the decision n-BDHE assumption and the existence of collision-resistant hash functions. Our schemes can also be extended to the decentralized multi-authority setting. © 2011 Springer-Verlag. (29 refs.)Main Heading: Access controlControlled terms: Efficiency - Hash functionsUncontrolled terms: Access policies - Access structure - Chosen ciphertext attack - Chosen ciphertext security - Ciphertexts - Collision-resistant hash functions - Computation costs - Data access - Encrypted data - Encryption and decryption - Plaintext - Private key - Standard modelClassification Code: 723 Computer Software, Data Handling and Applications - 913.1 Production Engineering
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
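The access-structure logic behind the record above (a non-monotone AND gate over attributes) is simple to state even though the pairing-based encryption is not. A sketch of just the policy-satisfaction check, with invented attribute names:

```python
def satisfies_and_gate(user_attrs, positive, negative):
    """Non-monotone AND gate: every positive attribute is required,
    every negated attribute is forbidden."""
    return positive <= user_attrs and not (negative & user_attrs)

# A hypothetical policy: doctor AND cardiology AND (NOT intern).
policy_pos = {"doctor", "cardiology"}
policy_neg = {"intern"}

ok = satisfies_and_gate({"doctor", "cardiology", "staff"}, policy_pos, policy_neg)
```

In CP-ABE this boolean check is enforced cryptographically: decryption succeeds exactly when the user's key attributes satisfy the gate, with no trusted reference monitor in the loop.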
Detection approach for covert channel based on concurrency conflict interval time
Wang, Yongji1, 2; Wu, Jingzheng1, 3; Ding, Liping1; Zeng, Haitao4
Source: Jisuanji Yanjiu yu Fazhan/Computer Research and Development, v 48, n 8, p 1542-1553, August 2011; Language: Chinese; ISSN: 10001239;
Publisher: Science Press
Author affiliation: 1 National Engineering Research Center for Fundamental Software, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China3 Graduate University of Chinese Academy of Sciences, Beijing 100049, China4 China Mobile Research Institute, Beijing 100053, China
Abstract: Concurrency conflicts may give rise to data conflict covert channels in multi-level secure systems. Existing covert channel detection methods have the following flaws: 1) they analyze conflict records at a single point, so invaders can evade detection; and 2) they rely on a single indicator, which produces false positives and false negatives. We present a detection method based on conflict interval time, called CTIBDA, that solves these problems: 1) analyzing the conflict records by subject and object prevents intruders from dispersing their activity; and 2) both the distribution and the sequence of intervals between transaction conflicts are used as indicators. Experimental results show that this approach reduces false positives and false negatives and increases accuracy. CTIBDA is suitable for online implementation and can be applied generally to concurrency conflict covert channels in other scenarios. (33 refs.)Main Heading: Concurrency controlUncontrolled terms: Concurrency conflict - Covert channels - Covert timing channels - Data conflict covert channel - Detection approach - Detection methods - False negatives - False positive - Interval time - Multi-level - Online implementation - Secure system - Single pointClassification Code: 723.3 Database Systems
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
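One way to see why conflict interval times are a useful indicator, as in the record above: a modulated covert channel produces near-periodic conflicts, while benign contention is bursty. A minimal single-indicator sketch using the coefficient of variation (a simplification of the paper's distribution-plus-sequence analysis; all timestamps are invented):

```python
import statistics

def interval_regularity(timestamps):
    """Coefficient of variation of inter-conflict intervals; values near
    zero suggest a modulated (covert) conflict pattern."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.pstdev(gaps) / statistics.mean(gaps)

covert = [0.0, 1.0, 2.0, 3.01, 4.0, 5.0]    # near-periodic conflicts
benign = [0.0, 0.2, 2.9, 3.0, 7.5, 7.6]     # bursty, irregular conflicts

covert_score = interval_regularity(covert)
benign_score = interval_regularity(benign)
```

CTIBDA additionally groups records by subject and object and examines the interval sequence, which is what defeats intruders who spread their signaling across principals.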
Implementing service collaboration based on decentralized mediation
Qiao, Xiaoqiang1; Wei, Jun1
Source: Proceedings - International Conference on Quality Software, p 228-235, 2011, Proceedings - 11th International Conference on Quality Software, QSIC 2011; ISSN: 15506002; ISBN-13: 9780769544687; DOI: 10.1109/QSIC.2011.18; Article number: 6004331; Conference: 11th International Conference on Quality Software, QSIC 2011, July 13, 2011 - July 14, 2011; Sponsor: Computer Science School of the Universidad Complutense de Madrid; Madrid Convention Bureau of the Madrid City Council;
Publisher: IEEE Computer Society
Author affiliation: 1 Technology Center of Software Engineering, Institute of Software, Chinese Academy of Sciences, Beijing, China
Abstract: Service collaboration allows the realization of more complicated business logic by using existing services. As Web services are generally designed by different organizations, there will be certain mismatches that make them not fit together. Mediation mechanism plays an important role in service collaboration, which guarantees the seamless interaction without changing the internal implementation of services. This paper proposes a comprehensive approach of decentralized mediation framework for multiple services collaboration across organizational boundaries. We also present a novel technique for mediation existence checking and mediator synthesis based on interaction paths, which not only reduces the complexity of mediator synthesis but also provides parallel sub-processes for multiple interactive parts to ensure the parallelism degree of the mediator. © 2011 IEEE. (20 refs.)Main Heading: Web servicesUncontrolled terms: Business logic - decentralized mediation - Interaction Path - mediation-based compatibility - Multiple services - Novel techniques - Organizational boundaries - service collaborationClassification Code: 723 Computer Software, Data Handling and Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
A practical covert channel identification approach in source code based on directed information flow graph
Wu, JingZheng1, 3; Ding, Liping1; Wang, Yongji1, 2; Han, Wei1, 3
Source: Proceedings - 2011 5th International Conference on Secure Software Integration and Reliability Improvement, SSIRI 2011, p 98-107, 2011, Proceedings - 2011 5th International Conference on Secure Software Integration and Reliability Improvement, SSIRI 2011; ISBN-13: 9780769544533; DOI: 10.1109/SSIRI.2011.17; Article number: 5992008; Conference: 2011 5th International Conference on Secure Software Integration and Reliability Improvement, SSIRI 2011, June 27, 2011 - June 29, 2011; Sponsor: Korea Software Engineering Society;
Publisher: IEEE Computer Society
Author affiliation: 1 National Engineering Research Center for Fundamental Software, Institute of Software, China2 State Key Laboratory of Computer Science, Institute of Software, China3 Graduate School, Chinese Academy of Science, Beijing, China
Abstract: Covert channel analysis is an important requirement when building secure information systems, and identification is its most difficult task. Although some approaches have been presented, they are either experimental or constrained to particular systems. This paper presents a practical approach based on a directed information flow graph, taking advantage of source code analysis. The approach divides the whole system into several independent modules and analyzes them respectively. All the shared variables and their caller functions are found in the source code and modeled into directed information flow graphs. When the information flow branches are visible and modifiable to the external interface, a potential covert channel exists. The contributions of this paper are as follows: a modularized analysis scheme is proposed, which reduces the workload of identification; a directed information flow graph algorithm is presented and used to model the covert channels; and more than 30 covert channels have been identified in the Linux kernel source code using this scheme, with a typical channel scenario constructed. © 2011 IEEE. (34 refs.)Main Heading: Reliability analysisControlled terms: Algorithms - Building codes - Computer operating systems - Computer programming languages - Graphic methods - Software reliabilityUncontrolled terms: Alias analysis - Covert channels - Directed information - Modularized - Source code analysisClassification Code: 921 Mathematics - 913.3 Quality Assurance and Control - 913 Production Planning and Control; Manufacturing - 723 Computer Software, Data Handling and Applications - 722 Computer Systems and Equipment - 403 Urban and Regional Planning and Development - 402 Buildings and Towers
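The graph-based identification idea in the abstract above can be illustrated with a small, hypothetical sketch. All function and variable names here are invented for illustration; the paper's actual algorithm analyzes real kernel source and models read/write accesses to shared variables as a directed information flow graph, flagging a potential covert channel when externally reachable functions can both modify and observe the same shared state.

```python
# Minimal sketch (assumed model, not the paper's implementation): shared
# variables and their caller functions form a directed information flow graph;
# writes are edges function -> variable, reads are edges variable -> function.
from collections import defaultdict

class FlowGraph:
    def __init__(self):
        self.writers = defaultdict(set)   # variable -> functions that write it
        self.readers = defaultdict(set)   # variable -> functions that read it

    def add_write(self, func, var):
        self.writers[var].add(func)

    def add_read(self, func, var):
        self.readers[var].add(func)

    def potential_covert_channels(self, external_funcs):
        """A shared variable is a candidate channel when an externally
        reachable function can modify it and a different externally
        reachable function can observe it."""
        channels = []
        for var in set(self.writers) | set(self.readers):
            senders = self.writers[var] & external_funcs
            receivers = self.readers[var] & external_funcs
            if senders and receivers - senders:
                channels.append((var, senders, receivers - senders))
        return channels

g = FlowGraph()
g.add_write("sys_open", "inode_table_free")   # a sender modifies shared state
g.add_read("sys_stat", "inode_table_free")    # a receiver observes it
g.add_read("kswapd", "inode_table_free")      # internal, not externally reachable
print(g.potential_covert_channels({"sys_open", "sys_stat"}))
```

In the paper's terms, the external-interface check corresponds to the "visible and modifiable" condition on information flow branches.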
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
An efficient PBA protocol based on elliptic curves
Chu, Xiaobo1; Qin, Yu1; Feng, Dengguo1
Source: 2011 IEEE 3rd International Conference on Communication Software and Networks, ICCSN 2011, p 415-420, 2011, 2011 IEEE 3rd International Conference on Communication Software and Networks, ICCSN 2011; ISBN-13: 9781612844855; DOI: 10.1109/ICCSN.2011.6013624; Article number: 6013624; Conference: 2011 IEEE 3rd International Conference on Communication Software and Networks, ICCSN 2011, May 27, 2011 - May 29, 2011;
Publisher: IEEE Computer Society
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Science, Beijing, China
Abstract: Remote attestation is one of the main topics in the trusted computing research area. It has great significance in attesting the trustworthiness of a terminal platform and establishing remote trust relationships in distributed computing environments. Property-based attestation (PBA for short) is an emerging method in which the binary integrity value is replaced with a secure property as the content to attest. PBA has drawn great attention for several advantages, including better scalability, better usability and better protection of configuration privacy. Unfortunately, current PBA protocols suffer from low performance and high implementation cost. In these protocols, a secure chip with only limited computation capacity is arranged to execute too many computations. This unreasonable design not only makes the secure chip a performance bottleneck but also increases the secure chip's production cost. In this paper, we propose an efficient PBA protocol based on elliptic curve cryptography. Compared with existing schemes, our protocol greatly enhances performance at very limited cost. The basic ideas behind this improvement are (1) transforming computations on a large finite field executed by the secure chip into computations in a small group of elliptic curve points, and (2) adopting batch proof skills and asymmetric pairings. Our protocol is proved secure in the random oracle model. © 2011 IEEE.
(24 refs.)Main Heading: GeometryControlled terms: Communication - Distributed computer systemsUncontrolled terms: Computation capacity - Distributed computing environment - Elliptic curve - Elliptic curve cryptography - Finite fields - Implementation cost - Production cost - Property-based attestation - Random Oracle model - Remote attestation - Small groups - Trust relationship - trusted computing - Trusted platform moduleClassification Code: 716 Telecommunication; Radar, Radio and Television - 722.4 Digital Computers and Systems - 921 Mathematics
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Provable secure authentication protocol with anonymity for roaming service in global mobility networks
Zhou, Tao1, 3; Xu, Jing2
Source: Computer Networks, v 55, n 1, p 205-213, January 7, 2011; ISSN: 13891286; DOI: 10.1016/j.comnet.2010.08.008;
Publisher: Elsevier
Author affiliation: 1 State Key Laboratory of Information Security, Graduate University, Chinese Academy of Sciences, Beijing 100190, China2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China3 National Engineering Research Center of Information Security, Beijing 100190, China
Abstract: User authentication is an important security mechanism for recognizing legal roaming users. The emerging global mobility network, however, imposes new requirements on the design of authentication schemes, such as user anonymity, which traditional schemes overlooked, owing to its dynamic nature and vulnerable-to-attack structure. In this paper, we propose an efficient wireless authentication protocol with user anonymity for roaming service. We also introduce a formal security model suitable for roaming service in global mobility networks and show that the proposed protocol is provably secure in this model. To the best of our knowledge, this paper offers the first formal study of anonymous authentication schemes for roaming service in global mobility networks. In addition, we point out some practical attacks on Chang et al.'s authentication scheme with user anonymity for roaming environments. © 2010 Elsevier B.V. All rights reserved. (16 refs.)Main Heading: AuthenticationControlled terms: Network protocols - Network securityUncontrolled terms: Anonymity - Anonymous authentication - Authentication protocols - Authentication scheme - Dynamic nature - Formal security models - Global mobility network - Provable security - Roaming services - Security mechanism - User anonymity - User authentication - Wireless roamingClassification Code: 723 Computer Software, Data Handling and Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Holographic algorithms by Fibonacci gates
Cai, Jin-Yi1; Lu, Pinyan2; Xia, Mingji3
Source: Linear Algebra and Its Applications, 2011; ISSN: 00243795; DOI: 10.1016/j.laa.2011.02.032; Article in Press
Author affiliation: 1 Computer Sciences Department, University of Wisconsin - Madison, 1210 West Dayton Street, Madison, WI 53706, U.S.A.2 Microsoft Research Asia, #999, Zi Xing Road, Min Hang District, Shanghai, 200241, P.R. China3 Institute of Software, Chinese Academy of Sciences, #4 South Fourth Street, Zhong Guan Cun, Beijing 100190, P.R. China
Abstract: We introduce Fibonacci gates as a polynomial time computable primitive, and develop a theory of holographic algorithms based on these gates. The Fibonacci gates play the role of matchgates in Valiant's theory (Valiant (2008) [19]). They give rise to polynomial time computable counting problems on general graphs, while matchgates work mainly over planar graphs. We develop a signature theory and characterize all realizable signatures for Fibonacci gates. For bases of arbitrary dimensions we prove a basis collapse theorem. We apply this theory to give new polynomial time algorithms for certain counting problems. We also use this framework to prove that some slight variations of these counting problems are #P-hard. Holographic algorithms with Fibonacci gates prove to be useful as a general tool for classification results of counting problems (dichotomy theorems (Cai et al. (2009) [7])). © 2011 Elsevier Inc. All rights reserved.Main Heading: AlgorithmsControlled terms: Polynomial approximationUncontrolled terms: Arbitrary dimension - Classification results - Counting problems - Dichotomy theorem - General graph - General tools - Matchgates - Planar graph - Polynomial-time - Polynomial-time algorithmsClassification Code: 723 Computer Software, Data Handling and Applications - 921 Mathematics - 921.6 Numerical Methods
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Local search with edge weighting and configuration checking heuristics for minimum vertex cover
Cai, Shaowei1; Su, Kaile2, 3; Sattar, Abdul2, 4
Source: Artificial Intelligence, v 175, n 9-10, p 1672-1696, June 2011; ISSN: 00043702; DOI: 10.1016/j.artint.2011.03.003;
Publisher: Elsevier
Author affiliation: 1 Key Laboratory of High Confidence Software Technologies, Peking University, Ministry of Education, Beijing, China2 Institute for Integrated and Intelligent Systems, Griffith University, Brisbane, Australia3 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China4 ATOMIC Project, Queensland Research Lab, NICTA, Australia
Abstract: The Minimum Vertex Cover (MVC) problem is a well-known combinatorial optimization problem of great importance in theory and applications. In recent years, local search has been shown to be an effective and promising approach to solve hard problems, such as MVC. In this paper, we introduce two new local search algorithms for MVC, called EWLS (Edge Weighting Local Search) and EWCC (Edge Weighting Configuration Checking). The first algorithm, EWLS, is an iterated local search algorithm that works with a partial vertex cover, and utilizes an edge weighting scheme which updates edge weights when getting stuck in local optima. Nevertheless, EWLS has an instance-dependent parameter. Further, we propose a strategy called Configuration Checking for handling the cycling problem in local search. This is used in designing a more efficient algorithm that has no instance-dependent parameters, which is referred to as EWCC. Unlike previous vertex-based heuristics, the configuration checking strategy considers the induced subgraph configurations when selecting a vertex to add into the current candidate solution. A detailed experimental study is carried out using the well-known DIMACS and BHOSLIB benchmarks. The experimental results show that EWLS and EWCC are largely competitive on DIMACS benchmarks, where they outperform other current best heuristic algorithms on most hard instances, and dominate on the hard random BHOSLIB benchmarks. Moreover, EWCC makes a significant improvement over EWLS, while both EWLS and EWCC set a new record on a twenty-year challenge instance. Further, EWCC performs quite well even on structured instances in comparison to the best exact algorithm we know. We also study the run-time behavior of EWLS and EWCC, which reveals interesting properties of both algorithms. © 2011 Elsevier B.V. All rights reserved.
(61 refs.)Main Heading: Problem solvingControlled terms: Combinatorial optimization - Heuristic algorithms - Learning algorithmsUncontrolled terms: Candidate solution - Combinatorial optimization problems - Configuration checking - Edge weighting - Edge weights - Efficient algorithm - Exact algorithms - Experimental studies - Hard instances - Hard problems - Induced subgraphs - Iterated local search - Local optima - Local search - Local search algorithm - Minimum vertex cover - Runtimes - Vertex cover - Weighting schemeClassification Code: 723 Computer Software, Data Handling and Applications - 723.1 Computer Programming - 921 Mathematics
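The edge-weighting idea summarized above can be conveyed with a toy sketch. This is not the authors' EWLS/EWCC implementation: the swap rule, step budget, and random choices below are simplified assumptions, and the configuration checking strategy is omitted entirely. What it keeps is the core mechanism of searching with a fixed-size candidate cover and making uncovered edges heavier at each step so the search escapes local optima.

```python
# Toy edge-weighting local search for vertex cover (illustrative assumptions).
import random

def ew_local_search(vertices, edges, k, steps=5000, seed=1):
    """Search for a vertex cover of size k; return it, or None on failure."""
    rng = random.Random(seed)
    weight = {e: 1 for e in edges}                 # dynamic edge weights
    cover = set(rng.sample(sorted(vertices), k))   # candidate of fixed size k
    for _ in range(steps):
        uncovered = [e for e in edges if e[0] not in cover and e[1] not in cover]
        if not uncovered:
            return cover                           # a size-k cover was found
        # move toward covering the heaviest uncovered edge ...
        u, v = max(uncovered, key=lambda e: weight[e])
        cover.add(u if rng.random() < 0.5 else v)
        # ... then drop a vertex to keep the cover size at k
        cover.remove(rng.choice(sorted(cover)))
        # edge weighting: edges that stay uncovered grow heavier over time,
        # which drives the search out of local optima
        for e in uncovered:
            weight[e] += 1
    return None

# The 5-cycle has minimum vertex cover size 3
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
cover = ew_local_search(set(range(5)), edges, 3)
print(cover)
```

Wrapping such a search in an outer loop that decreases k until the search fails is the usual way to turn a k-fixed local search into a minimization procedure.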
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Extending fuzzy soft sets with fuzzy description logics
Jiang, Yuncheng1, 2; Tang, Yong1; Chen, Qimai1; Liu, Hai1; Tang, Jianchao1
Source: Knowledge-Based Systems, v 24, n 7, p 1096-1107, October 2011; ISSN: 09507051; DOI: 10.1016/j.knosys.2011.05.003;
Publisher: Elsevier
Author affiliation: 1 School of Computer Science, South China Normal University, Guangzhou 510631, China2 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: Molodtsov initiated the concept of soft set theory, which can be used as a generic mathematical tool for dealing with uncertainty. However, it has been pointed out that classical soft sets are not appropriate to deal with imprecise and fuzzy parameters. In order to handle these types of problem parameters, some fuzzy extensions of soft set theory are presented, yielding fuzzy soft set theory. Fuzzy description logics (DLs) are a family of logics which allow the representation of and the reasoning within structured knowledge affected by vagueness. In this paper we extend fuzzy soft sets with fuzzy DLs, i.e., present an extended fuzzy soft set theory by using the concepts of fuzzy DLs to act as the parameters of fuzzy soft sets. We define some operations for the extended fuzzy soft sets. Moreover, we prove that certain De Morgan's laws hold in the extended fuzzy soft set theory with respect to these operations. In fact, the extended fuzzy soft set theory based on fuzzy DLs presented in this paper is a fuzzy extension of the extended soft set theory based on DLs. © 2011 Elsevier B.V. All rights reserved. (61 refs.)Main Heading: Fuzzy set theoryControlled terms: Data description - Formal languages - Knowledge representationUncontrolled terms: Description logic - Fuzzy description logic - Fuzzy soft sets - Fuzzy terminology - Soft setsClassification Code: 723 Computer Software, Data Handling and Applications - 921.4 Combinatorial Mathematics, Includes Graph Theory, Set Theory
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Linear approximations of addition modulo 2^n - 1
Zhou, Chunfang1, 2; Feng, Xiutao1; Wu, Chuankun1
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6733 LNCS, p 359-377, 2011, Fast Software Encryption - 18th International Workshop, FSE 2011, Revised Selected Papers; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642217012;
DOI: 10.1007/978-3-642-21702-9_21; Conference: 18th International Workshop on Fast Software Encryption, FSE 2011, February 13, 2011 - February 16, 2011;
Publisher: Springer Verlag
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing, 100190, China2 Graduate University, Chinese Academy of Science, Beijing, 100049, China
Abstract: Addition modulo 2^31 - 1 is a basic arithmetic operation in the stream cipher ZUC. For evaluating ZUC's resistance against linear cryptanalysis, it is necessary to study properties of linear approximations of the addition modulo 2^31 - 1. In this paper we discuss linear approximations of the addition of k inputs modulo 2^n - 1 for n ≥ 2. As a result, an explicit expression of the correlations of linear approximations of the addition modulo 2^n - 1 is given when k = 2, and an iterative expression when k > 2. For a class of special linear approximations with all masks being equal to 1, we further discuss the limit of their correlations when n goes to infinity. It is shown that when k is even, the limit is equal to zero, and when k is odd, the limit is bounded by a constant depending on k. © 2011 Springer-Verlag. (21 refs.)Main Heading: CryptographyControlled terms: Security of dataUncontrolled terms: Addition modulo 2 - Arithmetic operations - Explicit expressions - Linear approximations - Linear cryptanalysis - Modular addition - Modulo 2 - Stream CiphersClassification Code: 716 Telecommunication; Radar, Radio and Television - 717 Optical Communication - 718 Telephone Systems and Related Technologies; Line Communications - 723 Computer Software, Data Handling and Applications - 723.2 Data Processing and Image Processing
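For small n, the correlations discussed in the abstract above can be checked by brute force. The sketch below is illustrative only (it is not the paper's closed-form or iterative expressions): it estimates the correlation of a linear approximation u·z ⊕ v·x ⊕ w·y of two-input addition z = (x + y) mod (2^n - 1) by exhaustive enumeration.

```python
# Brute-force correlation of a linear approximation of addition mod 2^n - 1.
# cor(u, v, w) = average of (-1)^(u.z ^ v.x ^ w.y) over all inputs x, y,
# where z = (x + y) mod (2^n - 1) and "." is the bitwise inner product.

def parity(x):
    """Parity of the number of set bits in x."""
    return bin(x).count("1") & 1

def correlation(n, u, v, w):
    m = 2 ** n - 1
    total = 0
    for x in range(m):
        for y in range(m):
            z = (x + y) % m
            total += (-1) ** (parity(u & z) ^ parity(v & x) ^ parity(w & y))
    return total / (m * m)

# The special class from the abstract: all masks equal to 1 (here k = 2 inputs,
# an even k, so the correlation should shrink toward 0 as n grows).
for n in (2, 3, 4):
    print(n, correlation(n, 1, 1, 1))
```

Enumerating 2^{2n} pairs is only feasible for toy n, which is exactly why the paper derives explicit and iterative expressions instead.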
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Acquiring key privacy from data privacy
Zhang, Rui1, 2
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6584 LNCS, p 359-372, 2011, Information Security and Cryptology - 6th International Conference, Inscrypt 2010, Revised Selected Papers; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642215179;
DOI: 10.1007/978-3-642-21518-6_25; Conference: 6th China International Conference on Information Security and Cryptology, Inscrypt 2010, October 20, 2010 - October 24, 2010; Sponsor: State Key Laboratory of Information Security; Chinese Academy of Sciences; Chinese Association for Cryptologic Research;
Publisher: Springer Verlag
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing, China2 Research Center for Information Security (RCIS), National Institute of Advanced Industrial Science and Technology (AIST), Japan
Abstract: A primary functionality of public key encryption schemes is data privacy, while in many cases key privacy (aka. anonymity of public keys) may also be important. Traditionally, one has to design/prove them separately, because data privacy and key privacy were shown to be independent of each other [5,40]. Existing constructions of anonymous public key encryption usually take either of the following two approaches: (1) directly construct it from certain number-theoretic assumptions; (2) find a suitable anonymous encryption scheme with key privacy yet without chosen ciphertext security, then use some dedicated transforms to upgrade it to one with both key privacy and chosen ciphertext security. While the first approach is intricate and a bit mysterious, the second is not necessarily a real solution to the problem, namely, how to acquire key privacy. In this paper, we show how to build anonymous encryption schemes from a class of key encapsulation mechanisms with only weak data privacy, in the random oracle model. Instantiating our generic construction, we obtain many interesting anonymous public key encryption schemes. We note that some underlying schemes are based on gap assumptions or bilinear pairings, and were previously well known not to be anonymous. © 2011 Springer-Verlag. (40 refs.)Main Heading: Data privacyControlled terms: Public key cryptographyUncontrolled terms: anonymity - Bilinear pairing - Chosen ciphertext security - Encryption schemes - Generic construction - Key encapsulation mechanisms - key privacy - Public keys - Public-key encryption - Public-key encryption scheme - Random Oracle model - Real solutionsClassification Code: 716 Telecommunication; Radar, Radio and Television - 717 Optical Communication - 718 Telephone Systems and Related Technologies; Line Communications - 723 Computer Software, Data Handling and Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Approach for physically-based animation of tree branches impacting by raindrops
Yang, Meng1, 3; Wu, En-Hua1, 2
Source: Ruan Jian Xue Bao/Journal of Software, v 22, n 8, p 1934-1947, August 2011; Language: Chinese; ISSN: 10009825; DOI: 10.3724/SP.J.1001.2011.04022;
Publisher: Chinese Academy of Sciences
Author affiliation: 1 State Key Laboratory of Computer Science, Institute of Software, The Chinese Academy of Sciences, Beijing 100190, China2 Department of Computer and Information Science, Faculty of Science and Technology, University of Macau, Macao, China3 Graduate University, The Chinese Academy of Sciences, Beijing 100049, China
Abstract: This paper presents an approach to animating realistic interactions between tree branches and raindrops, based on physical theory. Tree branches and petioles are represented by a model named ETPSM (extended three-prism spring model) built from three-prism structures, which can be flexibly controlled by four kinds of spring systems. The branches and leaves are animated through the bidirectional transfer of kinetic energy between raindrops and the branch system. The interactions between them are simulated by an efficient technique specially designed for liquid motion on non-rigid objects with hydrophilic surfaces. When the branches are struck by a raindrop, they vibrate and twist; the raindrops flow along the leaf veins, merge into larger drops, or hang on the leaf apex; and after a raindrop falls off a leaf, the branch rebounds and vibrates for a while before finally coming to rest. Experimental results show that this approach can efficiently and realistically simulate these interactions between tree branches and raindrops; meanwhile, it can easily simulate the elastic deformation of rods under external forces such as rotation and vibration. © 2011 ISCAS. (23 refs.)Main Heading: DropsControlled terms: Animation - Deformation - Plant extracts - PrismsUncontrolled terms: Branch animation - Branch system - External force - Hydrophilic surfaces - Liquid motion - Non-rigid objects - Physical theory - Physically-based animation - Physically-based deformation - Raindrop animation - Spring model - Spring system - Tree branchesClassification Code: 421 Strength of Building Materials; Mechanical Properties - 422 Strength of Building Materials; Test Equipment and Methods - 443.1 Atmospheric Properties - 461.9 Biology - 723.5 Computer Applications - 741.3 Optical Devices and Systems
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Virtual monotonic counters using trusted platform module
Li, Hao1, 2; Qin, Yu1, 2; Feng, Dengguo1, 2
Source: Jisuanji Yanjiu yu Fazhan/Computer Research and Development, v 48, n 3, p 415-422, March 2011; Language: Chinese; ISSN: 10001239;
Publisher: Science Press
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Acad. of Sci., Beijing 100190, China2 National Engineering Research Center of Information Security, Beijing 100190, China
Abstract: Any secure storage system needs to address at least three security issues: confidentiality, integrity and freshness. Of these, freshness is the most challenging. Traditional software-based solutions, however, reside on the storage device itself, such as a hard disk, and hence cannot solve the problem: an attacker can replay the whole disk using an "out-of-date" image of it. Thus, the only solution is to employ some form of irreversible state change. In this paper, we analyze the problem of replay attacks upon storage, and propose a TPM-based solution that builds virtual counters to defend against replay attacks. In this solution, we build a virtual counter manager (VCM) upon three TPM mechanisms: TPM counters, transport sessions and protection of private keys; with the VCM we can then create and manage many trusted virtual counters. Furthermore, an algorithm for checking malicious operations of the VCM is presented to ensure its trustworthiness. Hence, the security of our solution depends only on the tamper-resistant module, the TPM. Finally, the performance of our solution is analyzed, and two changes are proposed to improve it so that the anti-replay solution remains feasible. (14 refs.)Main Heading: Problem solvingControlled terms: Hard disk storage - Security of data - Thermoelectric power - Virtual storageUncontrolled terms: Monotonic counters - Replay attack - TPM - Transport session - Trusted computingClassification Code: 701.1 Electricity: Basic Concepts and Phenomena - 722.1 Data Storage, Equipment and Techniques - 723.2 Data Processing and Image Processing - 921 Mathematics
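The virtual-counter idea above can be sketched in a heavily simplified form. The assumptions are loud: the TPM's irreversible counter is modeled by a plain Python class, certificates are bare SHA-256 hashes rather than TPM-signed structures, and the manager's in-memory bookkeeping stands in for TPM-protected state. The point the sketch preserves is that every virtual increment also ticks the hardware counter, so a replayed ("out-of-date") state carries a stale hardware value and fails verification.

```python
# Toy virtual monotonic counters (illustrative stand-in, not the paper's VCM).
import hashlib

class HardwareCounter:
    """Stand-in for the TPM's irreversible monotonic counter."""
    def __init__(self):
        self._value = 0
    def increment(self):
        self._value += 1
        return self._value

class VirtualCounterManager:
    """Many virtual counters backed by one hardware counter; each certificate
    binds a virtual value to the hardware tick at which it was produced."""
    def __init__(self, hw):
        self.hw = hw
        self.values = {}   # counter name -> current virtual value
        self.ticks = {}    # counter name -> hardware tick of latest increment

    def increment(self, name):
        tick = self.hw.increment()          # irreversible global state change
        value = self.values.get(name, 0) + 1
        self.values[name] = value
        self.ticks[name] = tick
        tag = hashlib.sha256(f"{name}:{value}:{tick}".encode()).hexdigest()
        return value, tick, tag             # "certificate" for this state

    def verify(self, name, value, tick, tag):
        ok = tag == hashlib.sha256(f"{name}:{value}:{tick}".encode()).hexdigest()
        # a replayed certificate carries a stale hardware tick
        return ok and tick == self.ticks.get(name)

vcm = VirtualCounterManager(HardwareCounter())
old = vcm.increment("file-A")          # state 1
new = vcm.increment("file-A")          # state 2 supersedes it
print(vcm.verify("file-A", *new))      # True: current state
print(vcm.verify("file-A", *old))      # False: replay of an old image
```

In the real design the binding and the bookkeeping must themselves be protected by the TPM (counters, transport sessions, private keys); this sketch only shows why an irreversible tick defeats whole-disk replay.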
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Characterizing failure-causing parameter interactions by adaptive testing
Zhang, Zhiqiang1, 2; Zhang, Jian1
Source: 2011 International Symposium on Software Testing and Analysis, ISSTA 2011 - Proceedings, p 331-341, 2011, 2011 International Symposium on Software Testing and Analysis, ISSTA 2011 - Proceedings; ISBN-13: 9781450305624; DOI: 10.1145/2001420.2001460; Conference: 20th International Symposium on Software Testing and Analysis, ISSTA 2011, July 17, 2011 - July 21, 2011; Sponsor: Assoc. Comput. Mach., Spec. Interest Group; Softw. Eng. (ACM SIGSOFT);
Publisher: Association for Computing Machinery
Author affiliation: 1 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, China2 Graduate University, Chinese Academy of Sciences, China
Abstract: Combinatorial testing is a widely used black-box testing technique, which is used to detect failures caused by parameter interactions (we call them faulty interactions). Traditional combinatorial testing techniques provide fault detection, but most of them provide only weak fault diagnosis. In this paper, we propose a new fault characterization method called faulty interaction characterization (FIC) and its binary search alternative FIC-BS to locate one failure-causing interaction in a single failing test case. In addition, we provide a tradeoff strategy for locating multiple faulty interactions in one test case. Our methods are based on adaptive black-box testing, in which test cases are generated based on the outcomes of previous tests. For locating a t-way faulty interaction, the number of test cases used is at most k (for FIC) or t(⌈log2 k⌉+1)+1 (for FIC-BS), where k is the number of parameters. Simulation experiments show that our method needs a smaller number of adaptive test cases than most existing methods for locating randomly generated faulty interactions, while having equal or stronger ability to locate them. © 2011 ACM. (18 refs.)Main Heading: Software testingControlled terms: Computer software selection and evaluation - Fault detection - TestingUncontrolled terms: Adaptive test case - Adaptive testing - Binary search - Black-box testing - Combinatorial testing - Fault characterization - faulty interaction - Group testing - Parameter interactions - Simulation experiments - Test caseClassification Code: 422 Strength of Building Materials; Test Equipment and Methods - 422.2 Strength of Building Materials : Test Methods - 723 Computer Software, Data Handling and Applications - 723.5 Computer Applications
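The single-interaction case described above admits a compact sketch. Starting from one failing test case, mutate one parameter at a time: if the mutated test passes, that parameter belongs to the failure-causing interaction, which matches the at-most-k adaptive tests bound for FIC. Everything concrete below (the oracle, parameter names, alternative values) is hypothetical, and this sketch ignores the FIC-BS binary-search variant and the multiple-interaction tradeoff strategy.

```python
# Simplified FIC-style localization of one faulty interaction (illustrative).

def fic(failing_test, alternatives, fails):
    """failing_test: dict param -> value of a known failing test case.
    alternatives: param -> some other schedulable value for that param.
    fails: test oracle, True if a test case fails.
    Returns the suspected failure-causing interaction (param -> value)."""
    suspects = {}
    for p in failing_test:
        mutated = dict(failing_test, **{p: alternatives[p]})
        if not fails(mutated):            # flipping p masked the failure,
            suspects[p] = failing_test[p] # so p is part of the interaction
    return suspects

# Hypothetical system under test: fails exactly when os=win and mode=fast.
fails = lambda t: t["os"] == "win" and t["mode"] == "fast"
failing = {"os": "win", "mode": "fast", "log": "on"}
alts = {"os": "linux", "mode": "safe", "log": "off"}
print(fic(failing, alts, fails))   # expect {'os': 'win', 'mode': 'fast'}
```

This one-flip scheme is sound only when a single faulty interaction is present and the alternative values do not trigger a different failure, which is why the paper develops the more careful FIC and FIC-BS procedures.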
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Understanding, manipulating and searching hand-drawn concept maps
Jiang, Yingying1; Tian, Feng1; Zhang, Xiaolong2; Dai, Guozhong3; Wang, Hongan3
Source: ACM Transactions on Intelligent Systems and Technology, v 3, n 1, October 2011; ISSN: 21576904, E-ISSN: 21576912; DOI: 10.1145/2036264.2036275; Article number: 11;
Publisher: Association for Computing Machinery
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, Beijing, China2 College of Information Sciences and Technology, Pennsylvania State University, PA 16802, United States3 State Key Lab of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China
Abstract: Concept maps are an important tool to organize, represent, and share knowledge. Building a concept map involves creating text-based concepts and specifying their relationships with line-based links. Current concept map tools usually impose specific task structures for text and link construction, and may increase the cognitive burden of generating and interacting with concept maps. While pen-based devices (e.g., tablet PCs) offer users the freedom to draw concept maps more naturally with a pen or stylus, the support for hand-drawn concept map creation and manipulation is still limited, largely due to the lack of methods to recognize the components and structures of hand-drawn concept maps. This article proposes a method to understand hand-drawn concept maps. Our algorithm can extract node blocks, or concept blocks, and link blocks of a hand-drawn concept map by combining dynamic programming and graph partitioning, recognize the text content of each concept node, and build a concept-map structure by relating concepts and links. We also design an algorithm for concept map retrieval based on hand-drawn queries. With our algorithms, we introduce structure-based intelligent manipulation techniques and ink-based retrieval techniques to support the management and modification of hand-drawn concept maps. Results from our evaluation study show that our method achieves high structure-recognition accuracy in real time, and that the intelligent manipulation and retrieval techniques have good usability. © 2011 ACM.
(38 refs.)Main Heading: Character recognitionControlled terms: Algorithms - Dynamic programming - Intelligent robots - Personal computersUncontrolled terms: Concept maps - Evaluation study - Graph Partitioning - Intelligent manipulation - Pen-based - Real time - Recognition - Retrieval - Retrieval techniques - Share knowledge - Specific tasks - Structure recognition - Structure-based - Tablet PCs - Text contentClassification Code: 722.4 Digital Computers and Systems - 723 Computer Software, Data Handling and Applications - 731.6 Robot Applications - 921.5 Optimization Techniques
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Ergodic theory over F2[[T]]
Lin, Dongdai1; Shi, Tao1, 2; Yang, Zifeng3
Source: Finite Fields and their Applications, 2011; ISSN: 10715797, E-ISSN: 10902465; DOI: 10.1016/j.ffa.2011.11.001; Article in Press
Author affiliation: 1 The State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, PR China2 Graduate University of Chinese Academy of Sciences, Beijing 100049, PR China3 Department of Mathematics, The Capital Normal University, Beijing 100048, PR China
Abstract: In cryptography and coding theory, it is important to study pseudo-random sequences and ergodic transformations. We already have the ergodic 1-Lipschitz theory over Z2 established by V. Anashin and others. In this paper we present an ergodic theory over F2[[T]] and some ideas which might be very useful in applications. © 2011 Elsevier Inc. All rights reserved.Main Heading: Codes (symbols)Controlled terms: Computer programmingUncontrolled terms: Coding Theory - Ergodic theory - Ergodics - Pseudorandom sequencesClassification Code: 723.1 Computer Programming - 723.2 Data Processing and Image Processing
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Research on modeling and rendering of realistic imaging for object in spatial space
Liu, Qing-Wu1, 2; Zheng, Chang-Wen1; Zhang, Yi1, 2
Source: Tien Tzu Hsueh Pao/Acta Electronica Sinica, v 39, n 10, p 2307-2310+2316, October 2011; Language: Chinese; ISSN: 03722112;
Publisher: Chinese Institute of Electronics
Author affiliation: 1 National Key Laboratory of Integrated Information System Technology, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 Graduate University of Chinese Academy of Science, Beijing 100190, China
Abstract: A new microfacet-based BRDF model was developed for modeling the optical reflection of spatial objects, based on the principles of geometric optics and the technique of ray tracing. The model concisely captures the contributions of microfacet normal direction, reflectivity, and the occlusion relations among microfacets to the reflection characteristics of a spatial object. Both diffuse reflection and mirror reflection are employed by the model in order to calculate the overall luminance characteristic and the local highlight characteristic of a spatial object. Validation showed that the BRDF microfacet model curve is essentially consistent with the measured data curve. Rendering experiments further demonstrated that the BRDF microfacet model can simulate the visual effect and calculate the optical reflection characteristic of a spatial object accurately. (14 refs.)Main Heading: Distribution functionsControlled terms: Luminance - ReflectionUncontrolled terms: Bidirectional reflectance distribution functions - Diffuse reflection - Measurement data - Microfacets - Model curve - New model - Normal direction - Optical geometry - Optical reflection - Reflection characteristics - Spatial objects - Spatial spacesClassification Code: 711 Electromagnetic Waves - 741.1 Light/Optics - 922.1 Probability Theory
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Two applications of an incomplete additive character sum to estimating nonlinearity of Boolean functions
Du, Yusong1, 2; Zhang, Fangguo1, 3
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 7043 LNCS, p 190-201, 2011, Information and Communications Security - 13th International Conference, ICICS 2011, Proceedings; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642252426;
DOI: 10.1007/978-3-642-25243-3_16; Conference: 13th International Conference on Information and Communications Security, ICICS 2011, November 23, 2011 - November 26, 2011; Sponsor: National Natural Science Foundation of China (NNSFC); The Microsoft Corporation; Beijing Tip Technology Corporation; Trusted Computing Group (TCG);
Publisher: Springer Verlag
Author affiliation: 1 School of Information Science and Technology, Sun Yat-sen University, Guangzhou 510006, China2 Key Lab. of Network Security and Cryptology, Fujian Normal University, Fuzhou 350007, China3 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing, China
Abstract: In recent years, several classes of Boolean functions with good cryptographic properties have been constructed by using univariate (or bivariate) polynomial representation of Boolean functions over finite fields. The estimation of an incomplete additive character sum plays an important role in analyzing the nonlinearity of these functions. In this paper, we consider replacing this character sum with another incomplete additive character sum, whose estimation was first given by A. Winterhof in 1999. Based on Winterhof's estimation, we modify two of these functions and obtain better nonlinearity bounds for them. © 2011 Springer-Verlag. (13 refs.)Main Heading: Boolean functionsControlled terms: Algebra - Estimation - Security of dataUncontrolled terms: Additive characters - Algebraic degrees - Algebraic immunity - Bivariate - Functions over finite fields - incomplete additive character sum - Non-Linearity - Polynomial representations - UnivariateClassification Code: 723.2 Data Processing and Image Processing - 921 Mathematics - 921.1 Algebra
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Trusted subjects configuration based on TE model in MLS systems
Li, Shangjie1, 2; He, Yeping1
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6802 LNCS, p 98-107, 2011, Trusted Systems - Second International Conference, INTRUST 2010, Revised Selected Papers; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642252822;
DOI: 10.1007/978-3-642-25283-9_7; Conference: 2nd International Conference on Trusted Systems, INTRUST 2010, December 13, 2010 - December 15, 2010; Sponsor: Beijing University of Technology; ONETS Wireless and Internet Security Company; Singapore Management University; Adm. Comm. Zhongguangcun Haidian Sci. Park;
Publisher: Springer Verlag
Author affiliation: 1 National Engineering Research Center for Fundamental Software, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 Graduate School, Chinese Academy of Sciences, Beijing 100049, China
Abstract: Trusted subjects are inevitably part of multi-level security systems and trusted networks. They can introduce security risk into a system, because they do not comply with the *-property of the Bell-LaPadula model. When developing and deploying secure operating systems, it is important to determine which of the hundreds or thousands of applications are trusted subjects, and what their security requirements are. In this paper, an approach based on information flow and risk analysis is proposed to address these issues. A type enforcement specification is used as the basis for information flow analysis, from which the trusted subjects and their security requirements, namely security label range and security assurance level, are derived. © 2011 Springer-Verlag. (16 refs.)Main Heading: Risk assessmentControlled terms: Access control - Network security - Risk analysisUncontrolled terms: Information flow analysis - Information flows - Multi-level security - Risk-based - Secure operating system - Security assurance - Security requirements - Security risks - Trusted network - Trusted Subject - Type enforcementClassification Code: 922.1 Probability Theory - 922 Statistical Methods - 914.1 Accidents and Accident Prevention - 914 Safety Engineering - 912 Industrial Engineering and Management - 911 Cost and Value Engineering; Industrial Economics - 723 Computer Software, Data Handling and Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Data dissemination scheme based on adaptive density of nodes in vehicular ad-hoc networks
Yang, Wei-Dong1; Liu, Ji-Zhao1; Liu, Yan2; Deng, Miao-Lei1; Zhou, Xin-Yun3
Source: Ruan Jian Xue Bao/Journal of Software, v 22, n SUPPL. 1, p 83-92, October 2011; Language: Chinese; ISSN: 10009825;
Publisher: Chinese Academy of Sciences
Author affiliation: 1 College of Information Science and Engineering, He'nan University of Technology, Zhengzhou 450001, China2 School of Software and Microelectronics, Peking University, Beijing 102600, China3 State Key Laboratory of Information Security, Institute of Software, The Chinese Academy of Sciences, Beijing 100190, China
Abstract: The high mobility of vehicles causes frequent changes in network topology, which poses a huge challenge for data dissemination in VANETs. Even though existing flood-based routing protocols provide high reliability, they cannot achieve a good trade-off among delivery ratio, delay, and the number of redundant message copies. A data dissemination scheme based on adaptive node density for VANETs is proposed. Nodes can rapidly learn the geographical distribution of "hotspot" regions via the proposed distributed algorithm. A hop-count limit function is established based on the Euclidean distance to the nearest "hotspot" region and the density of nodes. When making forwarding decisions, nodes dynamically set an upper bound on the message hop count to avoid unnecessary redundant message copies in the "hotspot" region, so the number of redundant message copies in the network is effectively reduced. The simulation results show that the delivery ratio and delay of this scheme are close to those of the epidemic routing protocol, while the number of message copies is reduced by 37.5%. © Copyright 2011, Editorial Department of Journal of Software. All rights reserved.
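The abstract does not give the hop-count limit function itself; the following is only an illustrative sketch of the stated idea (the bound grows with Euclidean distance to the nearest "hotspot" region and shrinks as local node density rises). All names and constants (`base_hops`, `range_m`, `density_ref`) are hypothetical, not from the paper:

```python
import math

def hop_count_limit(pos, hotspots, local_density,
                    base_hops=3, range_m=250.0, density_ref=10.0):
    """Hypothetical dynamic hop-count upper bound: more hops are allowed
    when the nearest 'hotspot' region is far away, and fewer when local
    node density is high (dense areas need less redundant forwarding)."""
    d = min(math.dist(pos, h) for h in hotspots)   # distance to nearest hotspot
    hops_to_reach = math.ceil(d / range_m)          # rough hops to cover distance d
    density_factor = max(1.0, local_density / density_ref)
    return base_hops + math.ceil(hops_to_reach / density_factor)
```

With these sample constants, a node 500 m from the only hotspot gets a tighter bound when it observes twice the reference density than when the neighborhood is sparse.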
(15 refs.)Main Heading: Mobile ad hoc networksControlled terms: Ad hoc networks - Geographical distribution - Geographical regions - Intelligent vehicle highway systems - Routing protocols - Telecommunication networksUncontrolled terms: Data dissemination - Delivery ratio - Distributed algorithm - Epidemic routing - Euclidean distance - High mobility - High reliability - Hop count - Hot spot - Node density - Upper Bound - Vehicular ad hoc networks - Vehicular ad-hoc networkClassification Code: 902.1 Engineering Graphics - 901.4 Impact of Technology on Society - 723.5 Computer Applications - 723 Computer Software, Data Handling and Applications - 718 Telephone Systems and Related Technologies; Line Communications - 717 Optical Communication - 716 Telecommunication; Radar, Radio and Television
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Set-theoretic foundation of parametric polymorphism and subtyping
Castagna, Giuseppe1; Xu, Zhiwu1, 2
Source: Proceedings of the ACM SIGPLAN International Conference on Functional Programming, ICFP, p 94-106, 2011, ICFP'11 - Proceedings of the 2011 ACM SIGPLAN International Conference on Functional Programming; ISBN-13: 9781450308656; DOI: 10.1145/2034773.2034788; Conference: 16th ACM SIGPLAN International Conference on Functional Programming, ICFP'11, September 19, 2011 - September 21, 2011; Sponsor: ACM SIGPLAN;
Publisher: Association for Computing Machinery
Author affiliation: 1 CNRS, Laboratoire Preuves, Programmes et Systèmes, Univ. Paris Diderot, Sorbonne Paris Cité, Paris, France2 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Science, Beijing, China
Abstract: We define and study parametric polymorphism for a type system with recursive, product, union, intersection, negation, and function types. We first recall why the definition of such a system was considered hard, if not impossible, and then present the main ideas at the basis of our solution. In particular, we introduce the notion of "convexity" on which our solution is built, and discuss its connections with parametricity as defined by Reynolds, on whose study our work sheds new light. Copyright © 2011 ACM. (24 refs.)Main Heading: Functional programmingControlled terms: Computer programming languages - Recursive functionsUncontrolled terms: Parametric polymorphism - Parametricity - Reynolds - Subtypings - Type systems - TypesClassification Code: 721.1 Computer Theory, Includes Formal Logic, Automata Theory, Switching Theory, Programming Theory - 723.1.1 Computer Programming Languages
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Mobile agent-based secure task partitioning and allocation algorithm for cloud & client computing
Xu, Xiao-Long1, 2; Cheng, Chun-Ling1; Xiong, Jing-Yi1; Wang, Ru-Chuan1
Source: Beijing Ligong Daxue Xuebao/Transaction of Beijing Institute of Technology, v 31, n 8, p 922-926, August 2011; Language: Chinese; ISSN: 10010645;
Publisher: Beijing Institute of Technology
Author affiliation: 1 College of Computer, Nanjing University of Posts and Telecommunications, Nanjing, Jiangsu 210003, China2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Science, Beijing 100190, China
Abstract: In order to protect the privacy of tasks in the cloud & client computing environment and to prevent malicious nodes or competitors from prying into the internal logic and objectives of a task, a mobile-Agent-based secure task partitioning and allocation algorithm for cloud & client computing is proposed. The new algorithm considers the cloud computing cluster server nodes and user terminal nodes together, divides a task into a number of appropriate sub-tasks, and uses mobile Agents to carry the code and data of the sub-tasks to suitable nodes in accordance with the corresponding task allocation. Results from a developed prototype system show that, under the protection of the algorithm, a malicious terminal node that inspects the code and data of the sub-task assigned to it, or even colludes in attacking the system, still cannot learn the overall workflow and final objective of the task. (12 refs.)Main Heading: Cloud computingControlled terms: Cluster computing - Clustering algorithms - Computer systems - Mobile agentsUncontrolled terms: Agent based - Allocation algorithm - Computing clusters - Computing environments - Final objective - Malicious nodes - Prototype system - Security - Subtasks - Task allocation - Task partitioning - Terminal nodes - User terminalsClassification Code: 721 Computer Circuits and Logic Elements - 722 Computer Systems and Equipment - 722.4 Digital Computers and Systems - 723 Computer Software, Data Handling and Applications - 723.5 Computer Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
A new result on the distinctness of primitive sequences over Z/(pq) modulo 2
Zheng, Qun-Xiong1; Qi, Wen-Feng1, 2
Source: Finite Fields and their Applications, v 17, n 3, p 254-274, May 2011; ISSN: 10715797, E-ISSN: 10902465; DOI: 10.1016/j.ffa.2010.12.004;
Publisher: Academic Press Inc.
Author affiliation: 1 Department of Applied Mathematics, Zhengzhou Information Science and Technology Institute, Zhengzhou, China2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing, China
Abstract: Let Z/(pq) be the integer residue ring modulo pq with odd prime numbers p and q. This paper studies the distinctness problem of modulo 2 reductions of two primitive sequences over Z/(pq), which has been studied by H.J. Chen and W.F. Qi in 2009. First, it is shown that almost every element in Z/(pq) occurs in a primitive sequence of order n>2 over Z/(pq). Then based on this element distribution property of primitive sequences over Z/(pq), previous results are greatly improved and the set of primitive sequences over Z/(pq) that are known to be distinct modulo 2 is further enlarged. © 2010 Elsevier Inc. All rights reserved. (19 refs.)Main Heading: PolynomialsUncontrolled terms: Integer residue ring - Linear recurring sequences - Modular reduction - Primitive polynomials - Primitive sequencesClassification Code: 921.1 Algebra
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
OMNeT++ and mixim-based protocol simulator for satellite network
Li, Xiangqun1; Wang, Lu1; Liu, Lixiang1; Hu, Xiaohui1; Xu, Fanjiang1; Chen, Jing2
Source: IEEE Aerospace Conference Proceedings, 2011, 2011 Aerospace Conference, AERO 2011; ISSN: 1095323X; ISBN-13: 9781424473502; DOI: 10.1109/AERO.2011.5747347; Article number: 5747347; Conference: 2011 IEEE Aerospace Conference, AERO 2011, March 5, 2011 - March 12, 2011;
Publisher: IEEE Computer Society
Author affiliation: 1 National Key Laboratory of Integrated Information System Technology, Institute of Software, Chinese Academy of Science, Beijing 100190, China2 Information Center, Ministry of Science and Technology of the People's Republic of China, Beijing 100862, China
Abstract: In this paper, we present a novel satellite network simulation platform (SNSP) based on OMNeT++ and MiXiM that helps researchers develop better protocols for satellite networks. The platform accepts multi-formatted satellite orbit data as input, and inter-satellite links (ISL) are created according to satellite positions and specified connection rules; the satellite network topology is then built dynamically. Two kinds of wireless channels, microwave channels and laser channels, are provided to simulate signal transmission in space. A fundamental three-layer satellite protocol stack is provided as a default on which new satellite communication protocols can be built. Meanwhile, the performance of a satellite communication protocol can easily be observed from the friendly graphical user interface. With its modularized architecture, SNSP can be extended flexibly according to user requirements. In addition, an example is provided to illustrate how to develop a protocol on our platform. © 2011 IEEE. (17 refs.)Main Heading: Satellite linksControlled terms: Communication - Computer simulation - Electric network topology - Graphical user interfaces - Satellite communication systems - SatellitesUncontrolled terms: Inter-satellite link - Modularized architecture - New satellites - OMNET++ - Performance analysis - Protocol simulator - Protocol stack - Satellite communications - Satellite network - Satellite orbit - Signal transmission - Three-layer - User requirements - Wireless channelClassification Code: 655.2 Satellites - 703.1 Electric Networks - 716 Telecommunication; Radar, Radio and Television - 722.2 Computer Peripheral Equipment - 723.5 Computer Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
How to characterize side-channel leakages more accurately?
Liu, Jiye1, 2; Zhou, Yongbin1; Han, Yang1, 2; Li, Jiantang1, 2; Yang, Shuguo1, 2; Feng, Dengguo1
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6672 LNCS, p 196-207, 2011, Information Security Practice and Experience - 7th International Conference, ISPEC 2011, Proceedings; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642210303;
DOI: 10.1007/978-3-642-21031-0_15; Conference: 7th International Conference on Information Security Practice and Experience, ISPEC 2011, May 30, 2011 - June 1, 2011;
Publisher: Springer Verlag
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, P.O. Box 8718, Beijing 100190, China2 Graduate University, Chinese Academy of Sciences, 19A Yuquan Lu, Beijing 100049, China
Abstract: The effectiveness of side-channel attacks strongly depends on the extent to which the underlying leakage model characterizes the physical leakages of cryptographic implementations, and on how fully the subsequent distinguisher exploits these leakages. Motivated by this, we propose a compact yet efficient approach to characterizing side-channel leakages more accurately, called the Bitwise Weighted Characterization (BWC) approach. We use power analysis attacks as illustrative examples and construct two new BWC-based side-channel distinguishers, namely BWC-DPA and BWC-CPA. We present a comparative study of several distinguishers applied to both simulated power traces and real power measurements from an AES microcontroller prototype implementation to demonstrate the validity and the effectiveness of the proposed methods. For example, the number of traces required to perform a successful BWC-CPA (resp. BWC-DPA) is only 66% (resp. 49%) of that of CPA (resp. DPA). Our results firmly validate the power and accuracy of the proposed side-channel leakage characterization approach. © 2011 Springer-Verlag Berlin Heidelberg. (14 refs.)Main Heading: Security of dataControlled terms: Characterization - Security systemsUncontrolled terms: Bitwise Weighted Characterization - Distinguishers - Leakage model - Power Analysis Attack - Side-channel analysisClassification Code: 723.2 Data Processing and Image Processing - 914.1 Accidents and Accident Prevention - 951 Materials Science
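For context on the baseline that the 66% figure above is measured against, a textbook CPA distinguisher can be sketched as follows. This is standard correlation power analysis, not the paper's BWC variant, and the synthetic trace setup in the usage example is entirely ours:

```python
import numpy as np

def cpa_distinguisher(traces, plaintexts, sbox, hw):
    """Textbook CPA: for each key-byte guess, correlate the Hamming
    weight of sbox[pt ^ guess] with every trace sample; the guess
    giving the largest absolute Pearson correlation wins."""
    t = traces - traces.mean(axis=0)        # center each sample column
    t_norm = np.sqrt((t * t).sum(axis=0))
    best_guess, best_rho = 0, -1.0
    for guess in range(256):
        model = np.array([hw[sbox[p ^ guess]] for p in plaintexts], float)
        m = model - model.mean()
        # correlation of the leakage model with every sample index at once
        rho = np.abs(m @ t) / (np.sqrt(m @ m) * t_norm)
        if rho.max() > best_rho:
            best_guess, best_rho = guess, rho.max()
    return best_guess
```

On simulated traces whose first sample leaks the Hamming weight of the S-box output plus Gaussian noise, the correct key byte produces a correlation near 1 and is recovered reliably.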
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Exploring an adaptive architecture for service discovery over MANETs
Jin, Beihong1; Zhang, Fusang1; Weng, Haibin1
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6905 LNCS, p 237-251, 2011, Ubiquitous Intelligence and Computing - 8th International Conference, UIC 2011, Proceedings; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642236402;
DOI: 10.1007/978-3-642-23641-9_21; Conference: 8th International Conference on Ubiquitous Intelligence and Computing, UIC 2011, September 2, 2011 - September 4, 2011;
Publisher: Springer Verlag
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, Beijing, 100190, China
Abstract: The goal of a service discovery system (SDS) is to discover qualified services, when they exist, at low cost. However, the dynamics and diversity of MANETs increase the complexity of achieving this goal. This paper develops an SDS over MANETs named SCN4M-H. To enhance system quality, SCN4M-H combines two architecture styles and provides two working modes: a basic mode and a volunteer mode. In the basic mode, nodes in SCN4M-H work together as peer partners, mapping and discovering services in a P2P style; in the volunteer mode, nodes that declare themselves volunteers play the role of servers and are responsible for handling service discovery requests targeted at nodes within specified regions. Depending on their own states as well as their neighbors' states, nodes in SCN4M-H can switch automatically from one mode to another. Moreover, the two working modes can coexist in SCN4M-H at the same time, which enables a service discovery request to be handled in a locally optimal way. Some system properties are revealed and extensive experiments are conducted. Experimental data indicate that SCN4M-H adapts well to various dynamic scenarios and shows satisfying software quality in terms of discovery success rate and corresponding costs. © 2011 Springer-Verlag. (12 refs.)Main Heading: Computer software selection and evaluationControlled terms: Ubiquitous computingUncontrolled terms: Adaptive architecture - Architecture styles - Experimental data - Low costs - MANET - Service discovery - software quality - System property - System quality - Working modeClassification Code: 723 Computer Software, Data Handling and Applications - 723.5 Computer Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Collusion attack on a self-healing key distribution with revocation in wireless sensor networks
Bao, Kehua1; Zhang, Zhenfeng1
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6513 LNCS, p 221-233, 2011, Information Security Applications - 11th International Workshop, WISA 2010, Revised Selected Papers; ISSN: 03029743, E-ISSN: 16113349; ISBN-10: 3642179541, ISBN-13: 9783642179549; DOI: 10.1007/978-3-642-17955-6_16; Conference: 11th International Workshop on Information Security Applications, WISA 2010, August 24, 2010 - August 26, 2010; Sponsor: Ministry of Public Administration and Security (MoPAS); Korea Communications Commission (KCC);
Publisher: Springer Verlag
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing, 100190, China
Abstract: A self-healing key distribution enables non-revoked users to recover lost session keys on their own from the received broadcast messages and their private information. It decreases the load on the group manager and is very suitable for unreliable wireless sensor networks. In 2008, Du and He [5] proposed a self-healing key distribution with revocation in wireless sensor networks which was claimed to resist the collusion attack. In this paper, we show that scheme 2 in [5] is not secure against the collusion attack: a newly joined user colluding with a revoked user can recover group session keys that are not supposed to be known to them. We then improve the scheme, and analysis shows that the modified scheme is able to resist the collusion attack. Moreover, the modified scheme has the properties of constant storage, long life-span, forward secrecy and backward secrecy. © 2011 Springer-Verlag. (18 refs.)Main Heading: Wireless sensor networksControlled terms: Security of data - Sensor networksUncontrolled terms: Backward secrecy - Broadcast messages - Collusion attack - constant storage - Forward secrecy - Group managers - Key distribution - Long life - Modified scheme - Private information - self-healing - Session key - Wireless sensorClassification Code: 723.2 Data Processing and Image Processing - 732 Control Devices
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Fine-grained mandatory query access control model and its efficient realization for spatial vector data
Zhang, Yan1, 2; Chen, Chi1, 2; Feng, Deng-Guo1, 2
Source: Ruan Jian Xue Bao/Journal of Software, v 22, n 8, p 1872-1883, August 2011; Language: Chinese; ISSN: 10009825; DOI: 10.3724/SP.J.1001.2011.03868;
Publisher: Chinese Academy of Sciences
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, The Chinese Academy of Sciences, Beijing 100190, China2 National Engineering Research Center of Information Security, Beijing 100190, China
Abstract: To protect spatial vector data, which is often irregular in shape and distributed across multiple sensitive areas, the traditional mandatory access control model is extended and explained in this paper. This paper also proposes a fine-grained spatial mandatory query access control model, SV_MAC (spatial vector data mandatory access control model). In addition, an AR+ spatial index tree technique is proposed, which combines the search of spatial data and access control policies to efficiently enforce the SV_MAC model in the course of spatial vector data searching. Experimental results show that the AR+ tree can not only provide fine-grained security protection for sensitive spatial vector data, but also guarantee a good user experience for GIS (geographic information system) applications. © 2011 ISCAS. (11 refs.)Main Heading: Access controlControlled terms: Plant extracts - Security systems - Trees (mathematics) - VectorsUncontrolled terms: Access control models - Authorization model - Mandatory access control - Spatial vector data security - Tree indicesClassification Code: 461.9 Biology - 723 Computer Software, Data Handling and Applications - 914.1 Accidents and Accident Prevention - 921.1 Algebra - 921.4 Combinatorial Mathematics, Includes Graph Theory, Set Theory
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
A hypervolume based approach for minimal visual coverage shortest path
Li, Jie1; Zheng, Changwen2; Hu, Xiaohui2
Source: 2011 IEEE Congress of Evolutionary Computation, CEC 2011, p 1777-1784, 2011, 2011 IEEE Congress of Evolutionary Computation, CEC 2011; ISBN-13: 9781424478347; DOI: 10.1109/CEC.2011.5949830; Article number: 5949830; Conference: 2011 IEEE Congress of Evolutionary Computation, CEC 2011, June 5, 2011 - June 8, 2011;
Publisher: IEEE Computer Society
Author affiliation: 1 Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, China2 National Key Laboratory of Integrated Information System Technology, Institute of Software, Chinese Academy of Sciences, Beijing, China
Abstract: In this paper, the minimal visual coverage shortest path problem in raster terrain is studied, and a hypervolume-contribution-based multiobjective evolutionary approach is proposed. The main feature of the presented method is that all individuals in the population are periodically replaced by non-dominated candidates selected from the archive based on hypervolume contribution, in addition to well-designed evolutionary operators and popular techniques such as dominance relations and archiving. Our algorithm obtains a well-distributed Pareto set approximation efficiently, and is superior to implementations based on the frameworks of NSGA-II and SMS-EMOA with respect to the hypervolume. © 2011 IEEE. (12 refs.)Main Heading: Graph theoryControlled terms: Approximation algorithms - Mathematical operatorsUncontrolled terms: Evolutionary approach - Evolutionary operators - Hypervolume - Multi objective - NSGA-II - Pareto set - Shortest path - Visual coveragesClassification Code: 921 Mathematics - 921.4 Combinatorial Mathematics, Includes Graph Theory, Set Theory
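The hypervolume indicator used for the comparison against NSGA-II and SMS-EMOA is, for two objectives, simply the area dominated by the front relative to a reference point. A minimal sketch of that indicator (minimization assumed; the function name is ours, not the paper's):

```python
def hypervolume_2d(points, ref):
    """Hypervolume (dominated area) of a set of 2-D points under
    minimization, measured against reference point `ref`."""
    # keep only non-dominated points, scanned in order of the first objective
    front = []
    for x, y in sorted(set(points)):
        if not front or y < front[-1][1]:
            front.append((x, y))
    # sum the rectangular strips between consecutive front points and ref
    hv, prev_y = 0.0, ref[1]
    for x, y in front:
        hv += (ref[0] - x) * (prev_y - y)
        prev_y = y
    return hv
```

For the front {(1,3), (2,2), (3,1)} with reference point (4,4), the dominated area is 3 + 2 + 1 = 6; dominated points such as (2.5, 2.5) contribute nothing.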
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Visualizing inference process of a rule engine
Shi, Jian1; Qiao, Ying1; Wang, Hongan1
Source: ACM International Conference Proceeding Series, 2011, VINCI 2011 - The 4th Visual Information Communication - International Symposium; ISBN-13: 9781450308755; DOI: 10.1145/2016656.2016666; Conference: 4th Visual Information Communication - International Symposium, VINCI 2011, August 4, 2011 - August 5, 2011; Sponsor: ACM SIGCHI China; China Computer Federation;
Publisher: Association for Computing Machinery
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, Beijing, China
Abstract: In this paper, we introduce an approach to visualizing the inference process in a rule engine, Drools, which employs Rete as its pattern matching algorithm. As a software visualization work, our approach focuses on both the static structure of the Rete network and the dynamic behavior of the inference process. Since logic programming is distinct from other traditional programming paradigms, our approach also differs from traditional program/algorithm visualization methods. In this paper, we first introduce the target we choose to visualize, and then provide a description of the problem and our visualization approach. Finally, with an implementation and an interesting case, sudoku solving, we show that the visualization work is helpful for understanding not only the Rete algorithm, but also the rules used in the inference. In addition, our work supports debugging, tracing and analyzing the rule engine, which is useful for finding errors and for optimization. © 2011 ACM. (17 refs.)Main Heading: Inference enginesControlled terms: Algorithms - Java programming language - Logic programming - Pattern matching - Visual communication - VisualizationUncontrolled terms: Dynamic behaviors - Inference process - inference visualization - Pattern matching algorithms - Programming paradigms - Rete - Rete algorithm - Rete network - Rule engine - software visualization - Static structures - Visualization method - Work supportClassification Code: 717.1 Optical Communication Systems - 723 Computer Software, Data Handling and Applications - 902.1 Engineering Graphics - 921 Mathematics
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Efficient algorithm for extreme maximal biclique mining in cognitive frequency decision making
Zhong-Ji, Fan1; Ming-Xue, Liao1; Xiao-Xin, He1; Xiao-Hui, Hu1; Xin, Zhou1
Source: 2011 IEEE 3rd International Conference on Communication Software and Networks, ICCSN 2011, p 25-30, 2011, 2011 IEEE 3rd International Conference on Communication Software and Networks, ICCSN 2011; ISBN-13: 9781612844855; DOI: 10.1109/ICCSN.2011.6013538; Article number: 6013538; Conference: 2011 IEEE 3rd International Conference on Communication Software and Networks, ICCSN 2011, May 27, 2011 - May 29, 2011;
Publisher: IEEE Computer Society
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, CAS, Beijing, China
Abstract: Cognitive radio is attracting growing interest in wireless communication research. A cognitive radio ad hoc network may take a master-slave, tree-based structure in some special applications. For a master node with limited communication capability, slave nodes usually use the same frequency to access a subnet managed by the master. Each slave node can acquire many frequencies for communication through a local spectrum sensing process. However, there may be no common set of frequencies available to every slave node. In this case, we should delete some slave nodes and keep as many other nodes in the subnet as possible. By mapping the node set and the frequency set to the two parts of a bipartite graph, the problem can be turned into a special case of searching for maximal bicliques. Based on the well-known LCM (Linear time Closed itemset Miner) algorithm, which enumerates frequent item sets (maximal bicliques), and using a new technique based on dynamic thresholds, we solve this problem in real time, meeting the requirements of most cases in our application. We also improve the LCM algorithm, achieving much better performance by pruning more rows and columns of vertices in the bipartite graph and by mining more heuristic information about which vertices make others unclosed. © 2011 IEEE.
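The maximal biclique formulation can be made concrete with a brute-force stand-in for the LCM-based mining step: every maximal biclique of the node-frequency bipartite graph has a closed frequency side, i.e. a set that equals the intersection of the frequency sets of all nodes containing it. This exponential sketch is for illustration only and is not the paper's algorithm:

```python
def maximal_bicliques(node_freqs):
    """Enumerate maximal bicliques of the bipartite graph given as
    {node: set_of_frequencies}. The frequency sides of maximal
    bicliques are exactly the closed intersections of node sets."""
    sets = [frozenset(fs) for fs in node_freqs.values()]
    # close the candidate frequency sides under pairwise intersection
    cands, stack = set(), list(sets)
    while stack:
        f = stack.pop()
        if f and f not in cands:
            cands.add(f)
            stack.extend(f & s for s in sets)
    result = []
    for f in cands:
        nodes = frozenset(n for n, fs in node_freqs.items() if f <= frozenset(fs))
        closure = frozenset.intersection(*(frozenset(node_freqs[n]) for n in nodes))
        if closure == f:            # closed set => maximal biclique
            result.append((nodes, f))
    return result
```

For the application described above, the biclique with the largest node side identifies the most slave nodes that can be kept in the subnet on a common frequency.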
(15 refs.)Main Heading: Graph theoryControlled terms: Ad hoc networks - Algorithms - Communication - Radio - Wireless telecommunication systemsUncontrolled terms: Biclique - Bipartite graphs - Cognitive radio - Dynamic threshold - dynamic thresholds - Efficient algorithm - frequency decision - Heuristic information - Item sets - Itemset - Limited communication - Linear time - Local spectrum - Master-slave - maximal bicliques - Real time - Special applications - Tree-based structures - Wireless communicationsClassification Code: 716 Telecommunication; Radar, Radio and Television - 716.3 Radio Systems and Equipment - 723 Computer Software, Data Handling and Applications - 921 Mathematics - 921.4 Combinatorial Mathematics, Includes Graph Theory, Set Theory
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
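The frequency-decision problem this abstract describes can be illustrated with a brute-force toy, not the paper's LCM-based miner: each slave node reports its locally sensed frequencies, and we keep the largest set of nodes that can share a single common frequency. All node names and frequency values below are made up for illustration.

```python
def best_subnet(sensed):
    """sensed: dict mapping slave node -> set of locally sensed frequencies.
    Returns (frequency, kept_nodes) maximizing the number of kept nodes,
    for the simple case where the survivors need one common frequency."""
    all_freqs = set().union(*sensed.values())
    best = (None, set())
    for f in sorted(all_freqs):
        # Nodes that would survive if the subnet settled on frequency f.
        kept = {n for n, freqs in sensed.items() if f in freqs}
        if len(kept) > len(best[1]):
            best = (f, kept)
    return best

# Hypothetical sensing results for four slave nodes.
sensed = {"s1": {1, 2}, "s2": {2, 3}, "s3": {2}, "s4": {3}}
freq, kept = best_subnet(sensed)
# Frequency 2 keeps three of the four slave nodes.
```

The paper's setting is harder (enumerating extreme maximal bicliques in real time), but this shows the bipartite node/frequency structure the abstract maps the problem onto.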
Fault propagation pattern based relevant faulty ciphertexts filtering towards DFA on AES
Wang, Na1; Zhou, Yongbin1
Source: 2011 IEEE 3rd International Conference on Communication Software and Networks, ICCSN 2011, p 208-214, 2011, 2011 IEEE 3rd International Conference on Communication Software and Networks, ICCSN 2011; ISBN-13: 9781612844855; DOI: 10.1109/ICCSN.2011.6014707; Article number: 6014707; Conference: 2011 IEEE 3rd International Conference on Communication Software and Networks, ICCSN 2011, May 27, 2011 - May 29, 2011;
Publisher: IEEE Computer Society
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, P. O. Box 8718, Beijing, China
Abstract: Basically, Differential Fault Analysis (DFA) against ciphers consists of two stages: fault induction and fault exploitation. The success rate of the latter strongly depends upon the availability of faulty ciphertexts of the required type, which typically assumes that adversaries have precise control over the location, timing and/or even the type of the fault induced. In view of this, there is a technical gap between these two stages. In this paper, by exploiting the fault propagation pattern inherent in block ciphers, we propose an algorithmic method to narrow this gap, i.e., to relax the underlying assumption. Our method provides a faulty-ciphertext filtering process between the two basic stages of DFA. This additional process equips adversaries with the capability to deterministically decide whether a faulty ciphertext is relevant to a specific DFA, before or without performing the DFA itself. We take AES as a concrete case study and conduct both theoretical analysis and simulation experiments; the results strongly demonstrate the validity and power of the proposed filtering method. © 2011 IEEE. (20 refs.)Main Heading: CommunicationControlled terms: FiltrationUncontrolled terms: AES - Algorithmic methods - Analysis and simulation - Block ciphers - Ciphertexts - Differential Fault Analysis - Fault propagation - Filtering method - Filtering process - Precise control - Two stageClassification Code: 716 Telecommunication; Radar, Radio and Television - 802.3 Chemical Operations
On enumeration of polynomial equivalence classes and their application to MPKC
Lin, Dongdai1; Faugère, Jean-Charles2, 3; Perret, Ludovic2, 3; Wang, Tianze1, 4
Source: Finite Fields and their Applications, 2011
; ISSN: 10715797, E-ISSN: 10902465; DOI: 10.1016/j.ffa.2011.09.001 Article in Press
Author affiliation: 1 SKLOIS, Institute of Software, Chinese Academy of Sciences, Haidian District, Beijing 100190, China2 Paris-Rocquencourt Center, SALSA Project, UPMC Univ Paris 06, UMR 7606, LIP6, F-75005, Paris, France3 CNRS, UMR 7606, LIP6, F-75005, Paris, France4 Graduate University of Chinese Academy of Sciences, Beijing 100149, China
Abstract: The Isomorphism of Polynomials (IP) is one of the most fundamental problems in multivariate public key cryptography (MPKC). In this paper, we introduce a new framework to study the counting problem associated with IP; namely, we present tools from finite geometry that allow us to investigate it. Precisely, we focus on enumerating or estimating the number of isomorphism equivalence classes of homogeneous quadratic polynomial systems. These problems are equivalent to finding the size of the key space of a multivariate cryptosystem and the total number of different multivariate cryptographic schemes, respectively, which might impact the security and the potential capability of MPKC. We also consider their applications in the analysis of a specific multivariate public key cryptosystem. Our results not only answer how many cryptographic schemes can be derived from monomials and how big the key space is for a fixed scheme, but also show that a great many HFE cryptosystems are equivalent to a Matsumoto-Imai scheme. © 2011 Elsevier Inc. All rights reserved.Main Heading: Equivalence classesControlled terms: Polynomials - Public key cryptography - Set theoryUncontrolled terms: Counting problems - Cryptographic schemes - Finite geometry - Fundamental problem - Isomorphism of polynomials - Key space - Multivariate public key cryptosystem - Polynomial equivalence - Potential capability - Quadratic polynomialClassification Code: 716 Telecommunication; Radar, Radio and Television - 717 Optical Communication - 718 Telephone Systems and Related Technologies; Line Communications - 723 Computer Software, Data Handling and Applications - 921 Mathematics
An algorithm of real-time rendering earth with atmosphere from the viewpoint near ground
Zhang, Liqiang1; Li, Chao1; Zheng, Changwen2; Hu, Xiaohui2; Lv, Pin2
Source: Proceedings - 2011 IEEE International Conference on Computer Science and Automation Engineering, CSAE 2011, v 3, p 1-5, 2011, Proceedings - 2011 IEEE International Conference on Computer Science and Automation Engineering, CSAE 2011; ISBN-13: 9781424487257; DOI: 10.1109/CSAE.2011.5952622; Article number: 5952622; Conference: 2011 IEEE International Conference on Computer Science and Automation Engineering, CSAE 2011, June 10, 2011 - June 12, 2011; Sponsor: IEEE Beijing Section; Pudong New Area Association for Computer; Pudong New Area Science and Technology Development Fund; Tongji University; Xiamen University;
Publisher: IEEE Computer Society
Author affiliation: 1 National Key Laboratory of Integrated Information System Technology, Graduate University, CAS, Beijing, China2 National Key Laboratory of Integrated Information System Technology, Institute of Software, CAS, Beijing, China
Abstract: A new algorithm is proposed to render the earth with atmospheric effects, in which the light intensity is analyzed and divided into three parts that can be calculated individually. Satellite remote sensing data are inverted to obtain a ground bidirectional reflectance model. Textures of surface reflectance, optical depth, emissivity and temperature are generated in a data-processing phase and applied in a GPU-based real-time rendering system, so that the light interaction between ground and atmosphere is captured. The method is shown to render the scene more realistically and immersively, and exhibits high efficiency and flexibility for complex applications. © 2011 IEEE. (12 refs.)Main Heading: Earth atmosphereControlled terms: Algorithms - Computer science - Data handling - Reflection - Remote sensingUncontrolled terms: Atmospheric effects - Bidirectional reflectance - bidirectional reflectance model - Complex applications - GPU - High efficiency - Immersive - Light intensity - Optical depth - real-time - Real-time rendering - Satellite remote sensing data - Surface reflectanceClassification Code: 731.1 Control Systems - 723.2 Data Processing and Image Processing - 723 Computer Software, Data Handling and Applications - 921 Mathematics - 722 Computer Systems and Equipment - 711 Electromagnetic Waves - 443.1 Atmospheric Properties - 721 Computer Circuits and Logic Elements
Releasing control policy for semiconductor wafer fabrication based on fuzzy Petri nets-reasoning
Cao, Zheng-Cai1, 2; Zhao, Hui-Dan1; Wang, Yong-Ji2
Source: Tien Tzu Hsueh Pao/Acta Electronica Sinica, v 39, n 7, p 1545-1550, July 2011; Language: Chinese
; ISSN: 03722112;
Publisher: Chinese Institute of Electronics
Author affiliation: 1 College of Information Science and Technology, Beijing University of Chemical Technology, Beijing 100029, China2 State Key Laboratory of Computer Science, Institute of Software, Chinese Acad. of Sci., Beijing 100080, China
Abstract: Because of the large number of stochastic and uncertain factors in semiconductor wafer fabrication (SWF), all kinds of disruptions occur that make a formerly optimal scheduling scheme lose its superiority. Considering the advantages of the fuzzy Petri net (FPN) in knowledge expression and logical reasoning, a releasing control policy for SWF based on FPN is proposed. According to the releasing FPN-reasoning model and the real-time data collected from the line, the control method helps select a feasible releasing action adapted to different conditions. With this releasing strategy, the throughput of wafer fabrication can be maximized and the system optimized to a large extent. Finally, the effectiveness of the proposed method is confirmed by simulation. (7 refs.)Main Heading: Silicon wafersControlled terms: Fabrication - Fuzzy systems - Optimization - Petri netsUncontrolled terms: Control methods - Control policy - Fuzzy Petri nets - Knowledge expression - Logical reasoning - Optimal scheduling - Real-time data - Reasoning models - Releasing strategy - Semiconductor wafer fabrication - Uncertain factors - Wafer fabricationsClassification Code: 712.1.1 Single Element Semiconducting Materials - 913.4 Manufacturing - 921.4 Combinatorial Mathematics, Includes Graph Theory, Set Theory - 921.5 Optimization Techniques
Model construction and priority synthesis for simple interaction systems
Cheng, Chih-Hong1; Bensalem, Saddek2; Jobstmann, Barbara2; Yan, Rongjie3; Knoll, Alois1; Ruess, Harald4
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6617 LNCS, p 466-471, 2011, NASA Formal Methods - Third International Symposium, NFM 2011, Proceedings
; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642203978; DOI: 10.1007/978-3-642-20398-5_34; Conference: 3rd NASA Formal Methods Symposium, NFM 2011, April 18, 2011 - April 20, 2011;
Publisher: Springer Verlag
Author affiliation: 1 Department of Informatics, Technischen Universität München, Germany2 Verimag Laboratory, Grenoble, France3 State Key Laboratory of Computer Science, Institute of Software, CAS, China4 Fortiss GmbH, Munich, Germany
Abstract: VissBIP is a software tool for visualizing and automatically orchestrating component-based systems consisting of a set of components and their possible interactions. The graphical interface of VissBIP allows the user to interactively construct BIP models [3], from which executable code (C/C++) is generated. The main contribution of VissBIP is an analysis and synthesis engine for orchestrating components. Given a set of BIP components together with their possible interactions and a safety property, the VissBIP synthesis engine restricts the set of possible interactions in order to rule out unsafe states. The synthesis engine of VissBIP is based on automata-based (game-theoretic) notions. It checks if the system satisfies a given safety property. If the check fails, the tool automatically generates additional constraints on the interactions that ensure the desired property. The generated constraints define priorities between interactions and are therefore well-suited for conflict resolution between components. © 2011 Springer-Verlag. (6 refs.)Main Heading: Formal methodsControlled terms: Game theory - NASAUncontrolled terms: Analysis and synthesis - Component based systems - Conflict Resolution - Executable codes - Graphical interface - Interaction systems - Model construction - Safety property - Software toolClassification Code: 655 Spacecraft - 656 Space Flight - 723.1 Computer Programming - 922.1 Probability Theory
A kind of analysis method of off-line TTP fair non-repudiation protocol
Liu, Dongmei1, 3; Qing, Sihan1, 2; Ma, Hengtai4; Li, Shuren5
Source: Jisuanji Yanjiu yu Fazhan/Computer Research and Development, v 48, n 4, p 656-665, April 2011; Language: Chinese
; ISSN: 10001239;
Publisher: Science Press
Author affiliation: 1 National Engineering Research Center for Fundamental Software, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 School of Software and Microelectronics, Peking University, Beijing 102600, China3 Graduate University of Chinese Academy of Sciences, Beijing 100049, China4 National Key Laboratory of Integrated Information System Technology, Chinese Academy of Science, Beijing 100190, China5 Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China
Abstract: Off-line TTP fair non-repudiation protocols have been studied widely, but compared with on-line TTP fair non-repudiation protocols they are rarely analyzed. Off-line TTP fair non-repudiation protocols are often composed of several subprotocols, which are defined as a protocol cluster. In this paper, an analysis method for off-line TTP fair non-repudiation protocols is proposed, with three main points. Firstly, according to the cluster property of off-line TTP fair non-repudiation protocols, protocols are instantiated; through instantiation, the non-repudiation and effectiveness of a protocol can be analyzed within each single instance. Secondly, because communication is asynchronous, sending and receiving actions cannot exactly reflect the true events of a protocol; by refining the actions of participants, a protocol can be represented as the participants' action sequence, which can then be used to analyze violations of execution. Thirdly, a time determiner is introduced to express and verify the timeliness property of a protocol. Finally, two off-line TTP fair non-repudiation protocols are analyzed: the ZG off-line TTP protocol, composed of two subprotocols, and the CCD off-line TTP protocol, composed of three subprotocols. The analysis verifies that the ZG off-line TTP protocol does not meet timeliness and that the CCD off-line TTP protocol does not meet fairness. (13 refs.)Uncontrolled terms: Fairness - Non repudiation - Non-repudiation protocol - Off-line TTP - Timeliness
Efficient identity-based authenticated key agreement protocol in the standard model
Gao, Zhi-Gang1; Feng, Deng-Guo1
Source: Ruan Jian Xue Bao/Journal of Software, v 22, n 5, p 1031-1040, May 2011; Language: Chinese
; ISSN: 10009825; DOI: 10.3724/SP.J.1001.2011.03828;
Publisher: Chinese Academy of Sciences
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Acad. of Sci., Beijing 100190, China
Abstract: This paper proposes an efficient identity-based authenticated key agreement protocol based on Waters' identity-based encryption scheme and gives a detailed security analysis with provable-security techniques in the standard model. The protocol is more efficient than other similar protocols, provides known-key security and forward secrecy, and resists key-compromise impersonation and unknown key-share attacks. Moreover, the protocol is extended to satisfy the requirement that the session key be escrowable by the Private Key Generation (PKG) center, and is given a key confirmation property with a secure message authentication code algorithm. © 2011 ISCAS. (15 refs.)Main Heading: AuthenticationControlled terms: Public key cryptography - StandardsUncontrolled terms: Authenticated key agreement protocols - Forward secrecy - Identity Based Encryption - Identity-based - Key agreement - Key confirmation - Key-compromise impersonation - Private key - Provable security - Secure messages - Security analysis - Session key - Standard model - The standard model - Unknown key-share attackClassification Code: 723 Computer Software, Data Handling and Applications - 902.2 Codes and Standards
Automatic mining of change set size information from repository for precise productivity estimation
Huang, Hui1, 2; Yang, Qiusong1; Xiao, Junchao1; Zhai, Jian1
Source: Proceedings - International Conference on Software Engineering, p 72-80, 2011, ICSSP'11 - Proceedings of the 2011 International Conference on Software and Systems Process, Co-located with ICSE 2011
; ISSN: 02705257; ISBN-13: 9781450307307; DOI: 10.1145/1987875.1987889; Conference: 2011 International Conference on Software and Systems Process, ICSSP 2011, Co-located with ICSE 2011, May 21, 2011 - May 22, 2011; Sponsor: International Software Process Association (ISPA);
Publisher: IEEE Computer Society
Author affiliation: 1 Laboratory for Internet Software Technologies, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 Graduate University, Chinese Academy of Sciences, Beijing 100049, China
Abstract: Productivity is a crucial concern for most software organizations. It can help project managers make project plans, supervise project progress, and measure project members' performance. Thus it has been widely measured and analyzed by both industry and researchers. But in actual software project management, the project data entered by developers may be incomplete and imprecise. In particular, it is very hard for developers to give the precise work-product scale of each task, so the productivity calculated from those data is also imprecise. To solve this problem, this paper presents a method for precise productivity estimation. The method calculates the work-product scale of each task from change-set size information by rebuilding the relationships between tasks and SVN commits, and then calculates productivity. An experimental study has been conducted based on Qone, an integrated project management system developed by the Institute of Software, Chinese Academy of Sciences (ISCAS), which has been used in more than 200 software companies in China. © 2011 ACM. (39 refs.)Main Heading: ProductivityControlled terms: Project management - Software engineeringUncontrolled terms: Chinese Academy of Sciences - Experimental studies - Integrated systems - Productivity estimation - Project data - Project managers - Project plans - Software company - Software organization - software productivity - Software project - Software project management - Work productsClassification Code: 723.1 Computer Programming - 912.2 Management - 913.1 Production Engineering
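The estimation idea in this abstract can be sketched as follows. The record shapes, field names, and the author/time-window matching rule are assumptions for illustration, not Qone's actual data model or the paper's exact mining procedure:

```python
from datetime import datetime

def productivity(tasks, commits):
    """tasks: dicts with 'id', 'assignee', 'start', 'end', 'effort_hours'.
    commits: dicts with 'author', 'time', 'lines_changed'.
    Attribute each commit's change-set size to a task by matching the
    commit author and time against the task's assignee and time span,
    then return task id -> lines changed per hour of effort."""
    result = {}
    for t in tasks:
        size = sum(c["lines_changed"] for c in commits
                   if c["author"] == t["assignee"]
                   and t["start"] <= c["time"] <= t["end"])
        result[t["id"]] = size / t["effort_hours"]
    return result

tasks = [{"id": "T1", "assignee": "alice",
          "start": datetime(2011, 5, 1), "end": datetime(2011, 5, 7),
          "effort_hours": 10.0}]
commits = [
    {"author": "alice", "time": datetime(2011, 5, 3), "lines_changed": 120},
    {"author": "bob", "time": datetime(2011, 5, 3), "lines_changed": 999},
]
rates = productivity(tasks, commits)
# T1: 120 changed lines over 10 hours -> 12.0 lines/hour
```

The point of the paper's method is that the change-set size comes from the repository rather than from developer-reported scale, which is what makes the estimate precise.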
Physically-based tree animation and leaf deformation using CUDA in real-time
Yang, Meng1, 3; Huang, M.-C.1, 3; Wu, En-Hua1, 2
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6758, p 27-39, 2011, Transactions on Edutainment VI
; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642226380; DOI: 10.1007/978-3-642-22639-7_4;
Publisher: Springer Verlag
Author affiliation: 1 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 Department of Computer and Information Science, Faculty of Science and Technology, Uni. of Macau, Macau, China3 Graduate University of Chinese Academy of Sciences, Beijing 100049, China
Abstract: This paper presents a novel physically-based parallel approach that animates tree motion in real time, with leaf deformation accelerated on a CUDA-based platform. Physically-based tree animation can hardly achieve real-time performance due to complicated geometry and expensive calculation, so three main measures are taken to overcome this problem. Firstly, we briefly introduce a method of physically-based tree motion, a hierarchical matrix structure model driven by external forces such as wind; then we analyze the model on a parallel platform in detail; finally, all the tree data structures are redefined as arrays suitable for parallel implementation on the GPU. In addition, leaf deformation with a double-layer structure, caused by internal forces, is also mapped from CPU to GPU using a similar parallel mechanism. Experimental results show that many species of trees can be animated realistically and naturally in real time; meanwhile, leaf deformation can be plausibly simulated and performance is improved by up to ten times. © 2011 Springer-Verlag Berlin Heidelberg. (14 refs.)Main Heading: Trees (mathematics)Controlled terms: Animation - Data structures - Deformation - Mechanisms - Plant extractsUncontrolled terms: Complicated geometry - CUDA - Double layer structure - External force - Hierarchical matrices - Internal forces - leaf deformation - Parallel implementations - Parallel mechanisms - Parallel platforms - Physically-based simulation - Real time - Real time performance - Tree data structuresClassification Code: 921.4 Combinatorial Mathematics, Includes Graph Theory, Set Theory - 723.5 Computer Applications - 723.3 Database Systems - 601.3 Mechanisms - 461.9 Biology - 422 Strength of Building Materials; Test Equipment and Methods - 421 Strength of Building Materials; Mechanical Properties
A generative approach to searching algorithmic programs development
Shi, Haihe1, 2, 3; Xue, Jinyun1, 2
Source: Proceedings - 5th International Conference on Theoretical Aspects of Software Engineering, TASE 2011, p 76-81, 2011, Proceedings - 5th International Conference on Theoretical Aspects of Software Engineering, TASE 2011; ISBN-13: 9780769545066; DOI: 10.1109/TASE.2011.34; Article number: 6042064; Conference: 5th International Conference on Theoretical Aspects of Software Engineering, TASE 2011, August 29, 2011 - August 31, 2011; Sponsor: IEEE CS; IFIP;
Publisher: IEEE Computer Society
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 Provincial Key Lab. for High-Performance Computing Technology, Jiangxi Normal University, Nanchang 330022, China3 Graduate University, Chinese Academy of Sciences, Beijing 100049, China
Abstract: Using a highly configurable, semi-automatic approach to algorithmic program development can improve correctness and productivity. This paper explores a way to use generative techniques to produce algorithmic programs for searching problems. Based on the PAR method and PAR platform, we formally develop a generic type component and algorithm components, design a formal algorithm generative model that expresses an invariant behavior in terms of variant behaviors, and then automatically generate a variety of specialized searching algorithmic programs by replacing the generic identifiers with a few concrete operations. Through the super framework and underlying components, the reliability and productivity of domain-specific algorithms are dramatically improved. © 2011 IEEE. (11 refs.)Main Heading: AlgorithmsControlled terms: Productivity - Software engineeringUncontrolled terms: Configurable - Domain specific - Generative model - generative techniques - generic domain component - Generic types - PAR method - searching algorithm - Semi-automatics - Underlying componentsClassification Code: 723 Computer Software, Data Handling and Applications - 723.1 Computer Programming - 913.1 Production Engineering - 921 Mathematics
Geometrically invariant image watermarking using SVR correction in NSCT domain
Yang, Hong-Ying1; Wang, Xiang-Yang1, 2, 3; Chen, Li-Li1
Source: Computers and Electrical Engineering, v 37, n 5, p 695-713, September 2011
; ISSN: 00457906; DOI: 10.1016/j.compeleceng.2011.07.002;
Publisher: Elsevier Ltd
Author affiliation: 1 School of Computer and Information Technology, Liaoning Normal University, Dalian 116029, China2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China3 Network and Data Security Key Laboratory of Sichuan Province, Chengdu 611731, China
Abstract: Based on support vector regression (SVR) correction of geometric distortions, we propose in this paper a robust image watermarking algorithm in the nonsubsampled contourlet transform (NSCT) domain with good visual quality and reasonable resistance to geometric attacks. Firstly, the NSCT is performed on the original host image, and the corresponding low-pass subband is selected for embedding the watermark. Then, the selected low-pass subband is divided into small blocks. Finally, the digital watermark is embedded into the host image by modulating the NSCT coefficients in the small blocks. In the watermark detection procedure, the SVR correction of geometrical distortions is utilized. Experimental results show that the proposed image watermark is invisible and robust against common image processing and some geometrical attacks. © 2010 Elsevier Ltd. All rights reserved. (27 refs.)Main Heading: WatermarkingControlled terms: Digital watermarking - Image processing - Mathematical transformationsUncontrolled terms: Digital water-marks - Embedding watermarks - Geometric attacks - Geometric distortion - Geometrical attacks - Geometrical distortion - Host images - Image Watermarking - Image watermarking algorithm - Low-pass - Nonsubsampled contourlet transforms - Sub-bands - Support vector regressions - Visual qualitiesClassification Code: 723.2 Data Processing and Image Processing - 811.1.1 Papermaking Processes - 921.3 Mathematical Transformations
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics): Preface
Qing, Sihan1; Susilo, Willy2; Wang, Guilin2
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 7043 LNCS, p VI, 2011, Information and Communications Security - 13th International Conference, ICICS 2011, Proceedings
; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642252426; Conference: 13th International Conference on Information and Communications Security, ICICS 2011, November 23, 2011 - November 26, 2011; Sponsor: National Natural Science Foundation of China (NNSFC); The Microsoft Corporation; Beijing Tip Technology Corporation; Trusted Computing Group (TCG);
Publisher: Springer Verlag
Author affiliation: 1 Chinese Academy of Sciences, Institute of Software, Beijing 100190, China2 University of Wollongong, School of Computer Science and Software Engineering, Northfields Avenue, Wollongong, NSW 2522, Australia
P2P incentive mechanism based on electronic coupons combined with global reputation values
Xu, Xiao-Long1, 2; Xiong, Jing-Yi1; Yang, Geng3; Li, Ling-Juan1
Source: Beijing Ligong Daxue Xuebao/Transaction of Beijing Institute of Technology, v 31, n 10, p 1236-1241, October 2011; Language: Chinese
; ISSN: 10010645;
Publisher: Beijing Institute of Technology
Author affiliation: 1 College of Computer, Nanjing University of Posts and Telecommunications, Nanjing, Jiangsu 210003, China2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China3 Institute of Computer Technology, Nanjing University of Posts and Telecommunications, Nanjing, Jiangsu 210003, China
Abstract: In order to remedy the deficiencies of free-riding, the tragedy of the commons, fake files, collusion and non-cooperation in peer-to-peer (P2P) networks, a new P2P incentive mechanism based on electronic coupons combined with a novel global reputation evaluation algorithm is proposed, derived from an analysis of peer types and a set of incentive principles, to regulate the behavior of peers. The mechanism utilizes hash chains and PayWord technology to design an electronic coupon suitable for distributed payment in P2P networks, and introduces a punishment factor into the peer reputation evaluation algorithm. To verify the feasibility, performance and security of the mechanism, a series of experiments was implemented in a simulation environment. The results of the experiments and analysis show that the mechanism can effectively curb malicious nodes and stimulate peers to offer their resources and cooperation actively and honestly. Moreover, it can promote P2P networks toward harmonious and systematic computing environments able to support task collaboration and resource sharing. (15 refs.)Main Heading: Peer to peer networksControlled terms: Algorithms - Distributed computer systems - ExperimentsUncontrolled terms: Computing environments - Electronic coupons - Evaluation algorithm - Experiment and analysis - Free-riding - Hash chains - Incentive mechanism - Malicious nodes - P2P network - PayWord - Peer to peer - Peer-to-peer computing - Reputation values - Resource sharing - Simulation environment - Tragedy of the commonsClassification Code: 722 Computer Systems and Equipment - 722.4 Digital Computers and Systems - 723 Computer Software, Data Handling and Applications - 901.3 Engineering Research - 921 Mathematics
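The PayWord-style hash chain mentioned in this abstract can be sketched minimally as follows. This is a toy illustration of the chain mechanics only, not the paper's full coupon scheme (no signatures, broker, reputation values, or punishment factor):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def make_chain(seed: bytes, n: int):
    """Build the chain w_n, ..., w_0 where w_{i-1} = h(w_i).
    Returns [w_0, w_1, ..., w_n]; w_0 is the commitment (chain root)
    the payer hands to the vendor/broker up front."""
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    chain.reverse()  # chain[0] = w_0 (root), chain[n] = seed = w_n
    return chain

def verify(prev: bytes, revealed: bytes) -> bool:
    """Vendor check: the newly revealed coupon hashes to the last
    accepted value, so each payment extends the committed chain."""
    return h(revealed) == prev

chain = make_chain(b"secret-seed", 3)
root = chain[0]
assert verify(root, chain[1])       # first coupon spends against the root
assert verify(chain[1], chain[2])   # each later coupon chains to the previous
assert not verify(root, chain[2])   # skipping a link fails this per-link check
```

Because hashing is one-way, only the payer (who knows the seed) can produce the next preimage, while anyone holding the root can verify it cheaply.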
Identity-based key distribution for mobile Ad Hoc networks
Lv, Xixiang1; Li, Hui1; Wang, Baocang1, 2
Source: Frontiers of Computer Science in China, v 5, n 4, p 442-447, December 2011
; ISSN: 16737350, E-ISSN: 16737466; DOI: 10.1007/s11704-011-0197-5;
Publisher: Higher Education Press
Author affiliation: 1 State Key Laboratory of Integrated Service Networks, Xidian University, Xi'an 710071, China2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100049, China
Abstract: An identity-based cryptosystem can make a special contribution to building key distribution and management architectures in resource-constrained mobile ad hoc networks since it does not suffer from certificate management problems. In this paper, based on a lightweight cryptosystem, elliptic curve cryptography (ECC), we propose an identity-based distributed key-distribution protocol for mobile ad hoc networks. In this protocol, using secret sharing, we build a virtual private key generator which calculates one part of a user's secret key and sends it to the user via public channels, while the other part of the secret key is generated by the user. So the secret key of the user is generated collaboratively by the virtual authority and the user; each holds half of the secret information about the user's secret key. Thus there is no secret-key distribution problem. In addition, the user's secret key is known only to the user itself, so there is no key escrow. © 2011 Higher Education Press and Springer-Verlag Berlin Heidelberg. (12 refs.)Main Heading: Mobile ad hoc networksControlled terms: Access control - Ad hoc networks - CryptographyUncontrolled terms: Certificate management - Elliptic curve cryptography - Identity based cryptography - Identity-based - Identity-based cryptosystem - Key distribution - Key escrow - Key-distribution protocols - Management architectures - Private key generators - Resource-constrained - Secret information - Secret key - Secret key distribution - Secret sharingClassification Code: 723 Computer Software, Data Handling and Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
A comparative study of TF*IDF, LSI and multi-words for text classification
Zhang, Wen1; Yoshida, Taketoshi2; Tang, Xijin3
Source: Expert Systems with Applications, v 38, n 3, p 2758-2765, March 2011
; ISSN: 09574174; DOI: 10.1016/j.eswa.2010.08.066;
Publisher: Elsevier Ltd
Author affiliation: 1 Laboratory for Internet Software Technologies, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 School of Knowledge Science, Japan Advanced Institute of Science and Technology, 1-1 Ashahidai, Nomi, Ishikawa 923-1292, Japan3 Institute of Systems Science, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China
Abstract: One of the main themes in text mining is text representation, which is fundamental and indispensable for text-based intelligent information processing. Generally, text representation includes two tasks: indexing and weighting. This paper comparatively studies TF*IDF, LSI and multi-words for text representation. We used a Chinese and an English document collection to evaluate the three methods in information retrieval and text categorization. Experimental results demonstrate that in text categorization, LSI has better performance than the other methods in both document collections. Also, LSI produced the best performance in retrieving English documents. This outcome shows that LSI has both favorable semantic and statistical quality, and differs from the claim that LSI cannot produce discriminative power for indexing. © 2010 Elsevier Ltd. All rights reserved. (40 refs.)Main Heading: Text processingControlled terms: Data mining - Indexing (of information) - Information retrieval - Natural language processing systemsUncontrolled terms: LSI - Multi-word - Text categorization - Text classification - Text representation - TFIDFClassification Code: 723.2 Data Processing and Image Processing - 723.5 Computer Applications - 903.1 Information Sources and Analysis - 903.3 Information Retrieval and Use
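The TF*IDF weighting compared in the study can be sketched in a few lines. Note that the exact term-frequency normalization and idf form vary across implementations, so this is one common variant, not necessarily the one the paper used.

```python
import math
from collections import Counter

def tfidf(docs):
    """docs: list of token lists. Returns one {term: weight} dict per document."""
    n = len(docs)
    df = Counter()                          # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        # weight = normalized term frequency times log inverse document frequency
        weights.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return weights
```

A term occurring in every document gets idf = log(1) = 0, so only discriminative terms receive positive weight.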
Simulation of solid burning phenomenon in real-time
Zhu, Jian1; Bao, Kai2; Chang, Yuanzhang1; Liu, Youquan3; Wu, Enhua1, 2
Source: Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics, v 23, n 1, p 11-20, January 2011; Language: Chinese
; ISSN: 10039775;
Publisher: Institute of Computing Technology
Author affiliation: 1 Department of Computer and Information Science, University of Macau, Macao, China2 State Key Laboratory of Computer Science, Institute of Software Chinese Academy of Sciences, Beijing 100190, China3 Department of Computer Science, School of Information Engineering, Chang'an University, Xi'an 710064, China
Abstract: In this paper, we present a real-time combustion model to simulate fire phenomena with solid object decomposition process involved. To achieve a good tradeoff between performance and visual appearance, a hybrid structure of grids is employed in the model. Heat transfer is modeled in a compact framework, so the turbulent fire and the burning solid are well coupled. With an improved burning surface update scheme and a novel surface texture projection algorithm for its visualization, convincing results are produced. To increase the efficiency of our model, a few acceleration techniques are proposed as well. As a result, realistic solid burning phenomenon is well simulated in real time. (32 refs.)Main Heading: FiresControlled terms: VisualizationUncontrolled terms: CUDA - Marching cube - Moving grid - Object decomposition - Physically basedClassification Code: 902.1 Engineering Graphics - 914.2 Fires and Fire Protection
Automatic verification of optimization algorithms: A case study of a quadratic assignment problem solver
Merkel, Robert1; Wang, Daoming2; Lin, Huimin2; Chen, Tsong Yueh3
Source: International Journal of Software Engineering and Knowledge Engineering, v 21, n 2, p 289-307, March 2011
; ISSN: 02181940; DOI: 10.1142/S021819401100527X;
Publisher: World Scientific Publishing Co. Pte. Ltd
Author affiliation: 1 Faculty of Information Technology, Monash University, Clayton, VIC 3800, Australia2 Institute of Software, Chinese Academy of Sciences, 4 South Fourth Street, Zhong Guan Cun, Beijing 10090, China3 Centre for Software Analysis and Testing, Swinburne University of Technology, John Street, Hawthorn 3122, Australia
Abstract: Metamorphic testing is a technique for the verification of software output without a complete testing oracle. Mathematical optimization, implemented in software, is a problem for which verification can often be challenging. In this paper, we apply metamorphic testing to one such optimization problem, the quadratic assignment problem (QAP). From simple observations of the properties of the QAP, we describe how to derive a number of metamorphic relations useful for verifying the correctness of a QAP solver. We then compare the effectiveness of these metamorphic relations, in "killing" mutant versions of an exact QAP solver, to a simulated oracle. We show that metamorphic testing can be as effective as the simulated oracle for killing mutants. We examine the relative effectiveness of different metamorphic relations, both singly and in combination, and conclude that combining metamorphic relations can be significantly more effective than using a single relation. © 2011 World Scientific Publishing Company. (20 refs.)Main Heading: Software testingControlled terms: Computer software selection and evaluation - Optimization - VerificationUncontrolled terms: Automatic verification - Mathematical optimizations - metamorphic relation - Metamorphic testing - Optimization algorithms - Optimization problems - quadratic assignment problem - Quadratic assignment problems - testing oracle - Testing oraclesClassification Code: 723 Computer Software, Data Handling and Applications - 723.5 Computer Applications - 921.5 Optimization Techniques
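One metamorphic relation of the kind the authors derive can be stated directly: relabelling the locations (permuting the distance matrix's rows and columns consistently) must leave the optimal QAP cost unchanged, since every assignment is mapped to an equally costly one. A brute-force sketch with hypothetical helper names, workable only for tiny instances:

```python
from itertools import permutations

def qap_cost(flow, dist, assign):
    """Cost of assigning facility i to location assign[i]."""
    n = len(flow)
    return sum(flow[i][j] * dist[assign[i]][assign[j]]
               for i in range(n) for j in range(n))

def solve_qap(flow, dist):
    """Exact solver by exhaustive search over all assignments."""
    n = len(flow)
    return min(qap_cost(flow, dist, p) for p in permutations(range(n)))

def permute(m, p):
    """Apply permutation p simultaneously to the rows and columns of matrix m."""
    n = len(m)
    return [[m[p[i]][p[j]] for j in range(n)] for i in range(n)]
```

A mutant solver that violates this invariance on some input is "killed" by the relation, without ever knowing the true optimum.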
On the construction of single cycle T-functions
Yang, Xiao1, 2; Wu, Chuan-Kun1; Wang, Ya-Xiang3
Source: Tongxin Xuebao/Journal on Communications, v 32, n 5, p 162-168, May 2011; Language: Chinese
; ISSN: 1000436X;
Publisher: Editorial Board of Journal on Communications
Author affiliation: 1 The State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 Graduate University of the Chinese Academy of Sciences, Beijing 100049, China3 Computer Application Technology Institute of China North Industries Group Corporation, Beijing 100089, China
Abstract: Two classes of single cycle T-functions on a single word are presented, which can be efficiently implemented in software and produce stable sequences with high linear complexity. Moreover, based on research into the relationship between parameters and single cycle T-functions, a method is given to construct new single cycle T-functions from known single cycle T-functions and even parameters. (14 refs.)Uncontrolled terms: Linear complexity - Single cycle - Software environments - T-functions
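For intuition, the best-known single cycle T-function is the Klimov-Shamir mapping x → x + (x² OR 5) mod 2ⁿ, which visits every n-bit word exactly once. The two new classes constructed in the paper are not reproduced here; this is only the standard example of the property.

```python
def klimov_shamir(x, n):
    """One step of the Klimov-Shamir single cycle T-function x -> x + (x^2 OR 5) mod 2^n."""
    mask = (1 << n) - 1
    return (x + ((x * x) | 5)) & mask

def cycle_length(n):
    """Iterate from 0 until we return to 0; a single cycle covers all 2^n words."""
    x, steps = 0, 0
    while True:
        x = klimov_shamir(x, n)
        steps += 1
        if x == 0:
            return steps
```

Because each bit of the output depends only on equal-or-lower bits of the input, the map is a T-function, and the single cycle makes it usable as a long-period state-update function.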
Static scheduling of Synchronous Data Flow onto multiprocessors for embedded DSP systems
Liu, Guoxin1; He, Yeping1; Guo, Liang1; Qi, Fang1
Source: Proceedings - 3rd International Conference on Measuring Technology and Mechatronics Automation, ICMTMA 2011, v 3, p 338-341, 2011, Proceedings - 3rd International Conference on Measuring Technology and Mechatronics Automation, ICMTMA 2011; ISBN-13: 9780769542966; DOI: 10.1109/ICMTMA.2011.655; Article number: 5721492; Conference: 3rd International Conference on Measuring Technology and Mechatronics Automation, ICMTMA 2011, January 6, 2011 - January 7, 2011; Sponsor: IEEE Instrumentation and Measurement Society; Shanghai University of Engineering Science; City University of Hongkong; Changsha University of Science and Technology; Hunan University of Science and Technology;
Publisher: IEEE Computer Society
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, Beijing, 100190, China
Abstract: A method of scheduling onto multiprocessors for embedded DSP system applications is proposed. This method, based on SDF (Synchronous Data Flow), performs all of the scheduling at compile time by means of periodic schedules. It uses a Hierarchical Priority Scheduling algorithm, which first schedules the module of highest priority, to solve the problem of static scheduling of SDF onto multiprocessors. Compared to other algorithms, it has better time and space complexity because the conversion from SDF to an APG (Acyclic Precedence Graph) is unnecessary. Experimental results prove the validity of the proposed method. © 2011 IEEE. (10 refs.)Main Heading: Computer aided software engineeringControlled terms: Data transfer - Embedded software - Embedded systems - Mechatronics - Multiprocessing systems - Response time (computer systems) - Scheduling algorithmsUncontrolled terms: Compile time - Embedded DSP - Multiprocessors - Other algorithms - Periodic schedule - Precedence graph - Static scheduling - Synchronous Dataflow - Time and spaceClassification Code: 608 Mechanical Engineering, General - 722.4 Digital Computers and Systems - 723 Computer Software, Data Handling and Applications
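The priority-first idea can be approximated by a flat list scheduler: repeatedly assign the highest-priority ready task to the earliest-free processor, respecting precedence. This sketch works on a plain task graph rather than the paper's hierarchical SDF modules, and all names are illustrative.

```python
import heapq

def list_schedule(durations, preds, priority, m):
    """Static list scheduling at compile time: at each step assign the ready task
    with highest priority to the earliest-free processor.
    Returns {task: (processor, start_time)}."""
    indeg = {t: len(p) for t, p in preds.items()}
    finish = {}                          # task -> finish time
    free = [(0, p) for p in range(m)]    # (time processor becomes free, proc id)
    heapq.heapify(free)
    ready = [t for t, d in indeg.items() if d == 0]
    schedule = {}
    while ready:
        ready.sort(key=priority, reverse=True)
        t = ready.pop(0)
        proc_free, proc = heapq.heappop(free)
        # a task may start only after its processor is free and all preds finished
        start = max(proc_free, max((finish[p] for p in preds[t]), default=0))
        finish[t] = start + durations[t]
        schedule[t] = (proc, start)
        heapq.heappush(free, (finish[t], proc))
        for s, ps in preds.items():      # release successors whose preds are all done
            if t in ps:
                indeg[s] -= 1
                if indeg[s] == 0:
                    ready.append(s)
    return schedule
```

Since the whole schedule is computed before run time, the embedded target only replays it, which is the point of static SDF scheduling.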
Efficient rendering of single-pass order-independent transparency via CUDA renderer
Huang, Meng-Cheng1, 4; Liu, Fang1, 3; Liu, Xue-Hui1; Wu, En-Hua1, 2
Source: Ruan Jian Xue Bao/Journal of Software, v 22, n 8, p 1927-1933, August 2011; Language: Chinese
; ISSN: 10009825; DOI: 10.3724/SP.J.1001.2011.03932;
Publisher: Chinese Academy of Sciences
Author affiliation: 1 State Key Laboratory of Computer Science, Institute of Software, The Chinese Academy of Sciences, Beijing 100190, China2 Department of Computer and Information Science, Faculty of Science and Technology, University of Macau, Macao, China3 Supercomputing Center, Computer Network Information Center, The Chinese Academy of Sciences, Beijing 100190, China4 Graduate University, The Chinese Academy of Sciences, Beijing 100049, China
Abstract: This paper presents a highly efficient algorithm for order-independent transparency via compute unified device architecture (CUDA) in a single geometry pass. The study designs a CUDA renderer system to rasterize the scene by the scan-line algorithm, generating multiple fragments for each pixel. Meanwhile, a fixed-size array is allocated per pixel in GPU (graphics processing unit) global memory for storage. Next, this paper describes two schemes to capture and sort the fragments per pixel via the atomic operations in CUDA. The first scheme stores the depth values of the fragments into an array of the corresponding pixel and sorts them on the fly using the atomicMin operation in CUDA. A following CUDA kernel then blends the fragments per pixel in depth order. The second scheme captures the fragments in rasterization order using the atomicInc operation in CUDA. During post-processing, the fragments in each pixel's array are sorted in depth order before blending. Experimental results show that this algorithm achieves a significant improvement over classical depth peeling, producing faithful results. © 2011 ISCAS. (16 refs.)Main Heading: TransparencyControlled terms: Algorithms - Blending - Computational geometry - Computer graphics - Computer graphics equipment - Pixels - Program processorsUncontrolled terms: Atomic operation - Compute unified device architectures - Depth peeling - Graphics Processing Unit - Order-independent transparencyClassification Code: 722.2 Computer Peripheral Equipment - 723 Computer Software, Data Handling and Applications - 741.1 Light/Optics - 802.3 Chemical Operations - 921 Mathematics - 921.4 Combinatorial Mathematics, Includes Graph Theory, Set Theory
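After capture, the per-pixel stage reduces to sorting fragments by depth and applying the "over" operator back to front. A CPU-side sketch of that final blending step (the CUDA capture schemes themselves are not modelled here):

```python
def blend_pixel(fragments, background):
    """fragments: list of (depth, (r, g, b), alpha) captured for one pixel.
    Sort far-to-near and apply the 'over' operator back to front, as a
    blending kernel would after the depth sort."""
    color = background
    for depth, (r, g, b), a in sorted(fragments, key=lambda f: f[0], reverse=True):
        # over operator: src * alpha + dst * (1 - alpha)
        color = tuple(a * c + (1 - a) * d for c, d in zip((r, g, b), color))
    return color
```

Because the result depends only on depth order, not submission order, the renderer is order-independent regardless of which capture scheme filled the array.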
Characterizations of locally testable linear- and affine-invariant families
Li, Angsheng1; Pan, Yicheng1
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6842 LNCS, p 467-478, 2011, Computing and Combinatorics - 17th Annual International Conference, COCOON 2011, Proceedings
; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642226847; DOI: 10.1007/978-3-642-22685-4_41; Conference: 17th Annual International Computing and Combinatorics Conference, COCOON 2011, August 14, 2011 - August 16, 2011;
Publisher: Springer Verlag
Author affiliation: 1 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, China
Abstract: The linear- or affine-invariance is the property of a function family that is closed under linear- or affine-transformations on the domain, and closed under linear combinations of functions, respectively. Both the linear- and affine-invariant families of functions are generalizations of many symmetric families, for instance, the low degree polynomials. Kaufman and Sudan [21] started the study of algebraic property testing by introducing the notions of "constraint" and "characterization" to characterize the locally testable affine- and linear-invariant families of functions over finite fields of constant size. In this article, it is shown that, for any finite field 𝔽 of size q and characteristic p, and its arbitrary extension field 𝕂 of size Q, if an affine-invariant family ℱ ⊆ {𝕂^n → 𝔽} has a k-local constraint, then it is k′-locally testable for k′ = k^(2Q/p)Q^(2Q/p+4); and that if a linear-invariant family ℱ ⊆ {𝕂^n → 𝔽} has a k-local characterization, then it is k′-locally testable for k′ = 2k^(2Q/p)Q^(4(Q/p+1)). Consequently, for any prime field 𝔽 of size q and any positive integer k, we have that for any affine-invariant family ℱ over the field 𝔽, the four notions of "the constraint", "the characterization", "the formal characterization" and "the local testability" are equivalent modulo a poly(k,q) of the corresponding localities; and that for any linear-invariant family, the notions of "the characterization", "the formal characterization" and "the local testability" are equivalent modulo a poly(k,q) of the corresponding localities. The results significantly improve, and are in contrast to, the characterizations in [21], which have locality exponential in Q, even if the field 𝕂 is prime. In the research above, a missing result is a characterization of linear-invariant function families by the more natural notion of constraint.
For this, we show that a single strong local constraint is sufficient to characterize the local testability of a linear-invariant Boolean function family, and that for any finite field 𝔽 of size q greater than 2, there exists a linear-invariant function family ℱ over 𝔽 such that it has a strong 2-local constraint, but is not q^(d/(q-1)-1)-locally testable. The proof for this result provides an appealing approach towards more negative results in the theme of characterization of locally testable algebraic properties, which is rare, and of course, significant. © 2011 Springer-Verlag. (27 refs.)Main Heading: Linear transformationsControlled terms: Algebra - Boolean functions - Characterization - Codes (symbols) - Combinatorial mathematics - Finite element methodUncontrolled terms: Algebraic properties - Error correcting codes - Extension field - Finite fields - Functions over finite fields - Linear combinations - Local constraints - Locally testable codes - Low degree - Positive integers - Prime field - Property tests - TestabilityClassification Code: 723.2 Data Processing and Image Processing - 921 Mathematics - 951 Materials Science
Research and implementation on web services integration of automatic identification system
Yuan, Yuqian1; Hu, Xiaohui2; Yang, Jie3
Source: Proceedings - PACCS 2011: 2011 3rd Pacific-Asia Conference on Circuits, Communications and System, 2011, Proceedings - PACCS 2011: 2011 3rd Pacific-Asia Conference on Circuits, Communications and System; ISBN-13: 9781457708565; DOI: 10.1109/PACCS.2011.5990236; Article number: 5990236; Conference: 2011 3rd Pacific-Asia Conference on Circuits, Communications and System, PACCS 2011, July 17, 2011 - July 18, 2011; Sponsor: Wuhan University of Technology;
Publisher: IEEE Computer Society
Author affiliation: 1 School of Automation Science and Electrical Engineering, Beijing University of Aeronautics and Astronautics, Beijing, China2 Institute of Software Chinese Academy of Sciences, Beijing, China3 Electrical Engineering Department of Information, Shi Jiazhuang University, Shi Jiazhuang,Hebei, China
Abstract: To solve the problem of Web services integration, a Web services-based integration environment of an automatic identification system was proposed and implemented, which included encapsulation of Web services with specific function based on the original function module, service integration achieved through an enterprise service bus as well as a message transmission mechanism, and implementation of the management client. It follows and supports international open specifications, such as WSDL (Web Services Description Language) and SOAP (Simple Object Access Protocol), and provides a good solution for Web services integration. © 2011 IEEE. (5 refs.)Main Heading: Web servicesControlled terms: Automation - Industry - Information services - Internet protocols - Service oriented architecture (SOA) - Telecommunication servicesUncontrolled terms: Automatic identification system - Enterprise service bus - Function module - Integration - Message transmissions - Service integration - Service Oriented - Simple object access protocols - Web Services Description Language - Web services integrationClassification Code: 913 Production Planning and Control; Manufacturing - 912 Industrial Engineering and Management - 911 Cost and Value Engineering; Industrial Economics - 903.4 Information Services - 732 Control Devices - 731 Automatic Control Principles and Applications - 723 Computer Software, Data Handling and Applications - 722.4 Digital Computers and Systems - 718 Telephone Systems and Related Technologies; Line Communications - 717 Optical Communication - 716 Telecommunication; Radar, Radio and Television
Color image segmentation using automatic pixel classification with support vector machine
Wang, Xiang-Yang1, 2; Wang, Qin-Yan1; Yang, Hong-Ying1; Bu, Juan1
Source: Neurocomputing, v 74, n 18, p 3898-3911, November 2011
; ISSN: 09252312, E-ISSN: 18728286; DOI: 10.1016/j.neucom.2011.08.004;
Publisher: Elsevier
Author affiliation: 1 School of Computer and Information Technology, Liaoning Normal University, Dalian 116029, China2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: Automatic segmentation of images is a very challenging fundamental task in computer vision and one of the most crucial steps toward image understanding. In this paper, we present a color image segmentation method using automatic pixel classification with a support vector machine (SVM). First, the pixel-level color feature is extracted in consideration of human visual sensitivity for color pattern variations, and the image pixel's texture feature is represented via a steerable filter. Both the pixel-level color feature and the texture feature are used as input to the SVM model (classifier). Then, the SVM model (classifier) is trained by using fuzzy c-means clustering (FCM) with the extracted pixel-level features. Finally, the color image is segmented with the trained SVM model (classifier). This method not only takes full advantage of the local information of the color image, but also of the ability of the SVM classifier. Experimental evidence shows that the proposed method achieves effective segmentation results and computational behavior, decreasing the time and increasing the quality of color image segmentation compared with the state-of-the-art segmentation methods recently proposed in the literature. © 2011 Elsevier B.V. 
(28 refs.)Main Heading: Image segmentationControlled terms: Behavioral research - Color - Computer vision - Fuzzy clustering - Fuzzy systems - Pixels - Sensitivity analysis - Support vector machines - TexturesUncontrolled terms: Automatic segmentations - Color features - Color image segmentation - Color images - Color pattern - Complexity measures - Experimental evidence - Fuzzy C mean - Fuzzy C means clustering - Human visual - Image pixels - Local information - Pixel classification - Segmentation methods - Segmentation results - Steerable filters - Support vector - SVM classifiers - SVM model - Texture featuresClassification Code: 961 Systems Science - 933 Solid State Physics - 921 Mathematics - 971 Social Sciences - 741.2 Vision - 723.5 Computer Applications - 723 Computer Software, Data Handling and Applications - 741.1 Light/Optics
New key-sieving algorithm in impossible differential attacks on AES
Dong, Xiao-Li1; Hu, Yu-Pu1; Chen, Jie1, 2
Source: Dianzi Keji Daxue Xuebao/Journal of the University of Electronic Science and Technology of China, v 40, n 3, p 396-400, May 2011; Language: Chinese
; ISSN: 10010548; DOI: 10.3969/j.issn.1001-0548.2011.03.014;
Publisher: Univ. of Electronic Science and Technology of China
Author affiliation: 1 Key Laboratory of Computer Networks and Information Security of Ministry of Education, Xidian University, Xi'an 710071, China2 Institute of Software, Chinese Academy of Sciences, Beijing 100049, China
Abstract: A new key-sieving algorithm used in impossible differential attacks on the advanced encryption standard (AES) is proposed. In the new algorithm, a table look-up technique is first applied to eliminate some wrong keys, and then a divide-and-conquer technique is adopted to sieve the rest. It is shown that the new algorithm gains some advantage over previously published key-sieving algorithms with respect to time complexity when proper independent variables are chosen in the time-complexity function. Moreover, we improve the impossible differential attacks on AES proposed at INDOCRYPT 2008 by means of the new algorithm; the curves of time complexity are drawn and the best points are obtained. The memory accesses of attacks on 7-round AES-128, 7-round AES-192, 7-round AES-256, and 8-round AES-256 are reduced to 2^116.35, 2^116.54, 2^116.35, and 2^228.21 from 2^117.2, 2^118.8, 2^118.8, and 2^229.7, respectively, while the data complexity remains unchanged. (15 refs.)Main Heading: CryptographyControlled terms: Algorithms - Data privacyUncontrolled terms: Advanced Encryption Standard - Block ciphers - Cryptanalysis - Impossible differential - Key sieving - Time complexityClassification Code: 716 Telecommunication; Radar, Radio and Television - 717 Optical Communication - 718 Telephone Systems and Related Technologies; Line Communications - 723 Computer Software, Data Handling and Applications - 921 Mathematics
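The sieving principle itself — discard every key candidate under which some ciphertext pair yields the impossible difference — can be shown on a toy 4-bit cipher consisting of a key XOR followed by one S-box layer. This illustrates only the principle; the table look-up and divide-and-conquer machinery of the paper's algorithm is not modelled.

```python
# PRESENT's 4-bit S-box, used here only as a convenient public bijection
SBOX = [12, 5, 6, 11, 9, 0, 10, 13, 3, 14, 15, 8, 4, 7, 1, 2]

def sieve_keys(pairs, impossible_diff):
    """Keep only the 4-bit keys for which no ciphertext pair (c1, c2) produces
    the impossible difference after the key-XOR + S-box step of the toy cipher."""
    return [k for k in range(16)
            if all(SBOX[c1 ^ k] ^ SBOX[c2 ^ k] != impossible_diff
                   for c1, c2 in pairs)]
```

Each pair eliminates a fraction of the key space; gathering enough pairs leaves only the true key, and the cost of this loop is what key-sieving algorithms compete to reduce.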
Research on simulation for influence of earth and atmosphere reflected light on detect ability of space optical sensor
Zhang, Li-Qiang1, 2; Su, Kang3; Zheng, Chang-Wen1; Wang, Hai-Bo1; Wu, Jia-Ze1, 2
Source: Yuhang Xuebao/Journal of Astronautics, v 32, n 2, p 233-241, February 2011; Language: Chinese
; ISSN: 10001328; DOI: 10.3873/j.issn.1000-1328.2011.02.001;
Publisher: China Spaceflight Society
Author affiliation: 1 National Key Laboratory of Integrated Information System Technology, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 Graduate University of Chinese Academy of Sciences, Beijing 100049, China3 Beijing Electro-Mechanical Engineering Institute, Beijing 100074, China
Abstract: Some errors are introduced into existing sensor radiance models due to ignoring the anisotropy of the earth surface. Aiming at this problem, a novel calculation model for sensor irradiance contribution, including effects of earth surface and atmosphere reflected light, is proposed. This model takes three factors into account: earth surface reflection, atmospheric scattering and radiative transfer effects. Based on the model, a heuristic Monte Carlo predominant light sampling method is presented. Given parameters of sensor and spatial geometry, the influence of earth surface and atmosphere reflected light on a space optical sensor can be calculated efficiently. (18 refs.)Main Heading: Earth atmosphereControlled terms: Heuristic methods - Monte Carlo methods - Optical sensors - Radiative transferUncontrolled terms: Atmospheric scattering - Calculation models - Earth surface - Earth surface reflection - MONTE CARLO - Monte Carlo method - Radiance models - Radiative transfer effects - Reflected light - Sampling method - Space optical sensor - Spatial geometryClassification Code: 443.1 Atmospheric Properties - 701 Electricity and Magnetism - 801 Chemistry - 921 Mathematics - 922.2 Mathematical Statistics
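The Monte Carlo machinery underneath such a sampling method can be illustrated with a uniform-hemisphere irradiance estimator: sample directions, weight each radiance sample by cosθ, and divide by the sampling pdf. The paper's heuristic predominant-light importance sampling is not reproduced; the names here are illustrative.

```python
import math, random

def hemisphere_sample(rng):
    """Uniform direction on the unit hemisphere (z >= 0); by Archimedes' theorem,
    z uniform in [0, 1) gives uniform area on the hemisphere."""
    z = rng.random()
    phi = 2 * math.pi * rng.random()
    r = math.sqrt(1 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def mc_irradiance(radiance, n, seed=1):
    """Monte Carlo estimate of E = integral of L(w) cos(theta) dw over the
    hemisphere, with uniform sampling (pdf = 1 / (2 pi))."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        d = hemisphere_sample(rng)
        total += radiance(d) * d[2]      # cos(theta) is the z component
    return total * 2 * math.pi / n       # divide each sample by the pdf
```

For constant radiance L = 1 the exact answer is π; importance sampling toward the predominant light, as in the paper, reduces the variance of exactly this kind of estimator.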
Cheating immune visual cryptography scheme
Liu, F.1, 2; Wu, C.1; Lin, X.3
Source: IET Information Security, v 5, n 1, p 51-59, March 2011
; ISSN: 17518709, E-ISSN: 17518717; DOI: 10.1049/iet-ifs.2008.0064;
Publisher: Institution of Engineering and Technology
Author affiliation: 1 Chinese Academy of Sciences, State Key Laboratory of Information Security, Institute of Software, Beijing 100190, China2 Graduate School, Chinese Academy of Sciences, Beijing 100190, China3 Ocean University of China, Computer Science and Technology, Qingdao 266100, China
Abstract: Most cheating immune visual cryptography schemes (CIVCS) are based on a traditional visual cryptography scheme (VCS) and are designed to avoid cheating when the secret image of the original VCS is to be recovered. However, all the known CIVCS have some drawbacks. Most usual drawbacks include the following: the scheme needs an online trusted authority, or it requires additional shares for the purpose of verification, or it has to sacrifice the properties by means of pixel expansion and contrast reduction of the original VCS or it can only be based on such VCS with specific access structures. In this study, the authors propose a new CIVCS that can be based on any VCS, including those with a general access structure, and show that their CIVCS can avoid all the above drawbacks. Moreover, their CIVCS does not care about whether the underlying operation is OR or XOR. © 2011 The Institution of Engineering and Technology. (9 refs.)Main Heading: CryptographyUncontrolled terms: Access structure - General access structure - Pixel expansion - Secret images - Trusted authorities - Visual cryptography schemesClassification Code: 716 Telecommunication; Radar, Radio and Television - 717 Optical Communication - 718 Telephone Systems and Related Technologies; Line Communications - 723 Computer Software, Data Handling and Applications
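The underlying (2,2) VCS on which such schemes build can be sketched directly: each secret pixel expands to a pair of subpixels, identical across the two shares for white and complementary for black, so stacking (bitwise OR) reveals the secret while either share alone is uniformly random. This is the plain base scheme only; the cheating-immune construction itself is not reproduced.

```python
import random

PATTERNS = [(0, 1), (1, 0)]   # the two subpixel pairs; 1 = black subpixel

def make_shares(secret_bits):
    """(2,2) VCS with pixel expansion 2: white pixel (0) -> identical pairs,
    black pixel (1) -> complementary pairs."""
    s1, s2 = [], []
    for bit in secret_bits:
        p = random.choice(PATTERNS)
        s1.extend(p)
        s2.extend(p if bit == 0 else (1 - p[0], 1 - p[1]))
    return s1, s2

def stack(s1, s2):
    """Physically overlaying transparencies corresponds to bitwise OR."""
    return [a | b for a, b in zip(s1, s2)]
```

A white pixel stacks to one black subpixel out of two, a black pixel to two, which is the contrast the human eye exploits; contrast reduction and pixel expansion are exactly the properties the proposed CIVCS avoids sacrificing.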
Exploiting FM radio data system for adaptive clock calibration in sensor networks
Li, Liqun1, 2, 3; Xing, Guoliang3; Sun, Limin1; Huangfu, Wei1; Zhou, Ruogu3; Zhu, Hongsong1
Source: MobiSys'11 - Compilation Proceedings of the 9th International Conference on Mobile Systems, Applications and Services and Co-located Workshops, p 169-181, 2011, MobiSys'11 - Compilation Proceedings of the 9th International Conference on Mobile Systems, Applications, and Services and Co-located Workshops; ISBN-13: 9781450306430; DOI: 10.1145/1999995.2000012; Conference: 9th International Conference on Mobile Systems, Applications, and Services, MobiSys'11 and Co-located Workshops, June 28, 2011 - July 1, 2011; Sponsor: ACM SIGMOBILE; USENIX Association;
Publisher: Association for Computing Machinery
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, China2 Graduate University, Chinese Academy of Sciences, China3 Department of Computer Science and Engineering, Michigan State University, United States
Abstract: Clock synchronization is critical for Wireless Sensor Networks (WSNs) due to the need for inter-node coordination and collaborative information processing. Although many message passing protocols can achieve satisfactory clock synchronization accuracy, they incur prohibitively high overhead when the network scales to more than tens of nodes. An alternative approach is to take advantage of the global time reference induced by existing infrastructures such as GPS, timekeeping radio stations, or the power grid. However, high power consumption and geographic constraints prevent them from being widely adopted in WSNs. In this paper, we propose ROCS, a new clock synchronization approach exploiting the Radio Data System (RDS) of FM radios. First, we design a new hardware FM receiver that can extract a periodic pulse from FM broadcasts, referred to as the RDS clock. We then conduct a large-scale measurement study of the RDS clock in our lab for a period of six days and on a vehicle driving through a metropolitan area of over 40 km2. Our results show that the RDS clock is highly stable and hence is a viable means to calibrate the clocks of large-scale city-wide sensor networks. To reduce the high power consumption of the FM receiver, ROCS intelligently predicts the time error due to drift, and adaptively calibrates the native clock via the RDS clock. We implement ROCS in TinyOS on our hardware FM receiver and a TelosB-compatible WSN platform. Our extensive experiments using a 12-node testbed and our driving measurement traces show that ROCS achieves accurate and precise clock synchronization with low power consumption. © 2011 ACM. 
(22 refs.)Main Heading: Mobile radio systemsControlled terms: Data processing - Frequency modulation - Mechanical clocks - Message passing - Radio broadcasting - Radio receivers - Radio stations - Sensor nodes - Sensors - Synchronization - Wireless networksUncontrolled terms: Alternative approach - Clock calibration - Clock Synchronization - Collaborative information - FM receiver - Global time reference - High power consumption - Large-scale measurement - Low-power consumption - Metropolitan area - Network scale - Periodic pulse - Power grids - radio data system - Radio data systems - Time synchronization - Wireless sensorClassification Code: 943.3 Special Purpose Instruments - 801 Chemistry - 723.2 Data Processing and Image Processing - 961 Systems Science - 718 Telephone Systems and Related Technologies; Line Communications - 716.3 Radio Systems and Equipment - 716 Telecommunication; Radar, Radio and Television - 717 Optical Communication
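The calibration step can be illustrated with a plain linear drift model: fit the skew and offset of the native clock against reference pulse timestamps by least squares, then map later native readings back to reference time. ROCS's adaptive error prediction is more involved than this; the function names here are illustrative.

```python
def fit_drift(ref_times, native_times):
    """Least-squares fit of native = offset + skew * ref; returns (skew, offset)."""
    n = len(ref_times)
    mr = sum(ref_times) / n
    mn = sum(native_times) / n
    cov = sum((r - mr) * (t - mn) for r, t in zip(ref_times, native_times))
    var = sum((r - mr) ** 2 for r in ref_times)
    skew = cov / var
    offset = mn - skew * mr
    return skew, offset

def calibrate(native_now, skew, offset):
    """Map a native clock reading back onto the reference timescale."""
    return (native_now - offset) / skew
```

Once the skew is known, the node can sleep through many reference pulses and still correct its clock in software, which is how drift prediction saves receiver power.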
A flexible official document system based on authority management
Wu, Suyan1; Li, Wenbo2
Source: BMEI 2011 - Proceedings 2011 International Conference on Business Management and Electronic Information, v 3, p 529-532, 2011, BMEI 2011 - Proceedings 2011 International Conference on Business Management and Electronic Information; ISBN-13: 9781612841069; DOI: 10.1109/ICBMEI.2011.5917856; Article number: 5917856; Conference: 2011 International Conference on Business Management and Electronic Information, BMEI 2011, May 13, 2011 - May 15, 2011; Sponsor: Guangdong University of Business Studies; IEEE Beijing Section; IEEE Wuhan Section; South China University of Technology; Zhongnan University of Economics and Law; Engineering Information Institute;
Publisher: IEEE Computer Society
Author affiliation: 1 Beijing Municipal Institute of Science and Technology Information, Beijing, China2 Institute of Software, Chinese Academy of Sciences, Beijing, China
Abstract: Because the process of official document transmission is uncertain, this paper presents a flexible workflow approach. In this approach, the process of official document transmission is divided into several modules, and the next execution module is selected dynamically during the processing of each document. Compared with other dynamic workflow methods, this method realizes flexible workflow management during system execution. The OA system designed with this idea is open, portable, and universal, with high flexibility. Its application to practical office work shows that it is well suited to such use, and it can solve the problem of migrating workflow instances from the old route to the new one in traditional dynamic workflow systems. © 2011 IEEE. (5 refs.)Main Heading: Information managementControlled terms: Office automation - Work simplificationUncontrolled terms: Dynamic workflow - Flexible workflows - High flexibility - office document - System-based - Workflow managementsClassification Code: 912.1 Industrial Engineering - 912.2 Management
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Reasoning and change management in modular fuzzy ontologies
Jiang, Yuncheng1, 2; Tang, Yong1; Chen, Qimai1; Wang, Ju3
Source: Expert Systems with Applications, v 38, n 11, p 13975-13986, October 2011
; ISSN: 09574174; DOI: 10.1016/j.eswa.2011.04.205;
Publisher: Elsevier Ltd
Author affiliation: 1 School of Computer Science, South China Normal University, Guangzhou 510631, China2 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China3 School of Computer Science and Information Technology, Guangxi Normal University, Guilin 541004, China
Abstract: The growing emphasis on complexity concerns for ontologies has attracted significant interest from both the research and practitioner communities in modularization techniques as a way to decrease the complexity of managing huge ontologies. On the other hand, it has been widely pointed out that classical ontologies are not appropriate to deal with imprecise and vague knowledge, which is inherent to several real world domains. In order to handle these types of knowledge, some fuzzy extensions of classical ontologies have been presented, yielding fuzzy ontologies. In this paper, we integrate modular ontologies with fuzzy ontologies, i.e., the notion of modular fuzzy ontologies is presented. Furthermore, we present an infrastructure for the representation of and reasoning with modular fuzzy ontologies based on distributed fuzzy description logics. © 2011 Elsevier Ltd. All rights reserved. (58 refs.)Main Heading: OntologyControlled terms: Data description - Formal languages - Modular constructionUncontrolled terms: Description logic - Distributed description logic - Fuzzy description logic - Fuzzy ontology - Modular ontologiesClassification Code: 405.2 Construction Methods - 723 Computer Software, Data Handling and Applications - 903 Information Science
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Loiss: A byte-oriented stream cipher
Feng, Dengguo1; Feng, Xiutao1; Zhang, Wentao1; Fan, Xiubin1; Wu, Chuankun1
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6639 LNCS, p 109-125, 2011, Coding and Cryptology - Third International Workshop, IWCC 2011, Proceedings
; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642209000; DOI: 10.1007/978-3-642-20901-7_7; Conference: 3rd International Workshop on Coding and Cryptology, IWCC 2011, May 30, 2011 - June 3, 2011; Sponsor: Qingdao University; Nanyang Technological University;
Publisher: Springer Verlag
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing, 100190, China
Abstract: This paper presents a byte-oriented stream cipher - Loiss, which takes a 128-bit initial key and a 128-bit initial vector as inputs, and outputs a keystream in bytes. The algorithm is based on a linear feedback shift register, and uses a structure called BOMM in the filter generator, which has good properties for resisting algebraic attacks, linear distinguishing attacks and fast correlation attacks. In order for the BOMM to be balanced, the S-boxes in the BOMM must be orthomorphic permutations. To further improve the capability of resisting those attacks, the S-boxes in the BOMM must also possess some good cryptographic properties, for example, high algebraic immunity, high nonlinearity, and so on. However, current research on orthomorphic permutations pays little attention to their cryptographic properties, and we believe that the proposal of Loiss will enrich the application of orthomorphic permutations in cryptography, and also motivate research on a variety of cryptographic properties of orthomorphic permutations. © 2011 Springer-Verlag Berlin Heidelberg. (24 refs.)Main Heading: CryptographyControlled terms: Algebra - Shift registersUncontrolled terms: Algebraic attack - Algebraic immunity - BOMM - Distinguishing attacks - Fast correlation attacks - Filter generators - High nonlinearity - Initial vectors - Keystream - Linear feedback shift registers - Loiss - orthomorphic permutation - S-boxes - Stream CiphersClassification Code: 716 Telecommunication; Radar, Radio and Television - 717 Optical Communication - 718 Telephone Systems and Related Technologies; Line Communications - 721.3 Computer Circuits - 723 Computer Software, Data Handling and Applications - 921.1 Algebra
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
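The general structure named in the abstract, an LFSR driving a byte-oriented keystream, can be sketched as follows. This is emphatically not the Loiss specification: the register size, taps, and output rule here are a textbook toy chosen only to illustrate the shape of such a generator.

```python
# Toy byte-oriented keystream generator built on a linear feedback shift
# register (LFSR). NOT the Loiss cipher: Loiss adds a BOMM filter structure
# with orthomorphic S-boxes; this sketch shows only the bare LFSR backbone.

def lfsr_keystream(state, taps, nbytes):
    """Galois-style 16-bit LFSR emitting one low byte per 8 shifts."""
    out = []
    for _ in range(nbytes):
        for _ in range(8):
            lsb = state & 1
            state >>= 1
            if lsb:
                state ^= taps  # fold the feedback polynomial back in
        out.append(state & 0xFF)
    return bytes(out)

# Same seed and taps always reproduce the same keystream (it is deterministic);
# in a real cipher the seed would be derived from the key and initial vector.
ks = lfsr_keystream(state=0xACE1, taps=0xB400, nbytes=4)
```

A bare LFSR like this is linear and trivially breakable; the whole point of a filter structure such as BOMM is to destroy that linearity while keeping byte-oriented output cheap.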
Stable cohesion metrics for evolving ontologies
Ma, Yinglong1; Wu, Haijiang1; Ma, Xinyu1; Jin, Beihong2; Huang, Tao2; Wei, Jun2
Source: Journal of Software Maintenance and Evolution, v 23, n 5, p 343-359, August 2011
; ISSN: 1532060X, E-ISSN: 15320618; DOI: 10.1002/smr.509;
Publisher: John Wiley and Sons Ltd
Author affiliation: 1 School of Control and Computer Engineering, North China Electric Power University, Beijing 102206, China2 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: With the rapid development of semantic-driven applications, assessing the quality of ontologies has received more attention. Measuring and assessing the quality of ontologies can help ontology engineers to control project management and reduce the risk of project failures. However, most of the existing ontology metrics for measuring and assessing the quality of ontologies are defined based on ontology structure, and neglect the stability of ontology measurement. In this paper, we concentrate on stable ontology measurement by using semantically derived ontology metrics. We propose four ontology cohesion metrics, which fully consider the implicitly expressed semantic information and are defined based on ontological semantics rather than ontology structure. Before measuring and assessing an ontology, we apply a pre-processing step to the ontology to enable stable measurement. The proposed ontology cohesion metrics are theoretically validated by the validation criteria of object-oriented software. The experimental results show that we can successfully collect more semantic knowledge from the testing ontologies for stable ontology measurement by using the proposed ontology cohesion metrics. The ontology cohesion metrics proposed in this paper can be reasonably used as a cogent complement to existing ontology metrics. © 2010 John Wiley & Sons, Ltd. (32 refs.)Main Heading: OntologyControlled terms: Adhesion - Project management - SemanticsUncontrolled terms: Cohesion metrics - Object oriented software - Pre-processing - Risk of projects - Semantic information - Semantic knowledge - stable metrics - Validation criteriaClassification Code: 723 Computer Software, Data Handling and Applications - 801 Chemistry - 903 Information Science - 903.2 Information Dissemination - 912.2 Management - 951 Materials Science
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
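As a point of contrast with the paper's semantics-based metrics, a naive structure-based cohesion ratio over an ontology's class graph might look like the sketch below. This is an invented baseline for illustration, not one of the four proposed metrics: it counts relations present against possible class pairs, which is exactly the kind of structure-only measurement the abstract argues is unstable.

```python
# Invented structure-based baseline: cohesion as the fraction of possible
# class pairs that are directly related. Semantics-derived metrics would
# instead count relations entailed after reasoning, not just those asserted.

def cohesion(classes, relations):
    """Ratio of distinct asserted class-pair relations to possible pairs."""
    n = len(classes)
    possible = n * (n - 1) / 2
    # Deduplicate and ignore self-relations; direction is disregarded.
    linked = {frozenset(r) for r in relations if len(set(r)) == 2}
    return len(linked) / possible if possible else 0.0

c = cohesion(["A", "B", "C", "D"], [("A", "B"), ("B", "C"), ("A", "B")])
```

Because an entailed-but-unasserted relation (e.g. one implied by transitivity) changes this score when it is materialized, a crisp ratio like this shifts under logically equivalent rewrites of the ontology, which is the instability the paper's pre-processing step is designed to remove.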
Syntax and behavior semantics analysis of network protocol of malware
Ying, Ling-Yun1, 2, 3; Yang, Yi1; Feng, Deng-Guo1, 2; Su, Pu-Rui1
Source: Ruan Jian Xue Bao/Journal of Software, v 22, n 7, p 1676-1689, July 2011; Language: Chinese
; ISSN: 10009825; DOI: 10.3724/SP.J.1001.2011.03858;
Publisher: Chinese Academy of Sciences
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, The Chinese Academy of Sciences, Beijing 100190, China2 State Key Laboratory of Information Security, Graduate University, The Chinese Academy of Sciences, Beijing 100049, China3 National Engineering Research Center for Information Security, Beijing 100190, China
Abstract: Network protocol reverse analysis is an important aspect of malware analysis. There are many different network protocols, and every protocol contains different types of fields that result in various malware behaviors. Without the protocol syntax and field semantics, analyzers cannot understand how malware interacts with the outside network. This paper presents a syntax and behavior semantics analysis method for network protocols. By monitoring the way malware parses network data and uses different fields in a virtual execution environment, this method can identify protocol fields, extract protocol syntax, and correlate each syntax element with malware behaviors. This paper designs and implements the prototype Prama (protocol reverse analyzer for malware analysis). Experimental results show that this method can correctly infer protocol syntax and tag fields with meaningful malware behaviors. © Copyright 2011, Institute of Software, the Chinese Academy of Sciences. (28 refs.)Main Heading: Network securityControlled terms: Computer crime - Dynamic analysis - Network protocols - Semantics - SyntacticsUncontrolled terms: Malware analysis - Malwares - Network data - Protocol field - Reverse analysis - Semantics analysis - Virtual executionClassification Code: 422.2 Strength of Building Materials : Test Methods - 723 Computer Software, Data Handling and Applications - 903.2 Information Dissemination
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Extending logic programs with description logic expressions for the semantic web
Shen, Yi-Dong1; Wang, Kewen2
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 7031 LNCS, n PART 1, p 633-648, 2011, The Semantic Web, ISWC 2011 - 10th International Semantic Web Conference, Proceedings
; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642250729; DOI: 10.1007/978-3-642-25073-6_40; Conference: 10th International Semantic Web Conference, ISWC 2011, October 23, 2011 - October 27, 2011; Sponsor: AI Journal; Elsevier; Fluid Operations; OASIS; THESEUS;
Publisher: Springer Verlag
Author affiliation: 1 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 School of Computing and Information Technology, Griffith University, Brisbane, QLD 4111, Australia
Abstract: Recently much attention has been directed to extending logic programming with description logic (DL) expressions, so that logic programs have access to DL knowledge bases and thus are able to reason with ontologies in the Semantic Web. In this paper, we propose a new extension of logic programs with DL expressions, called normal DL logic programs. In a normal DL logic program, arbitrary DL expressions are allowed to appear in rule bodies, and atomic DL expressions (i.e., atomic concepts and atomic roles) are allowed in rule heads. We extend the key condition of well-supportedness for normal logic programs under the standard answer set semantics to normal DL logic programs and define an answer set semantics for DL logic programs which satisfies the extended well-supportedness condition. We show that the answer set semantics for normal DL logic programs is decidable if the underlying description logic is decidable (e.g. SHOIN or SROIQ). © 2011 Springer-Verlag. (20 refs.)Main Heading: Computability and decidabilityControlled terms: Atoms - Data description - Formal languages - Logic programming - Ontology - Semantic Web - Semantics - User interfacesUncontrolled terms: Answer set semantics - Atomic role - Description logic - Knowledge basis - Logic programsClassification Code: 931.3 Atomic and Molecular Physics - 903.2 Information Dissemination - 903 Information Science - 723 Computer Software, Data Handling and Applications - 722.2 Computer Peripheral Equipment - 721.2 Logic Elements - 721.1 Computer Theory, Includes Formal Logic, Automata Theory, Switching Theory, Programming Theory
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Optimizing grid construction in linear complexity
Li, Jing1; Wang, Wen-Cheng1
Source: Ruan Jian Xue Bao/Journal of Software, v 22, n 10, p 2488-2496, October 2011; Language: Chinese
; ISSN: 10009825; DOI: 10.3724/SP.J.1001.2011.03927;
Publisher: Chinese Academy of Sciences
Author affiliation: 1 State Key Laboratory of Computer Science, Institute of Software, The Chinese Academy of Sciences, Beijing 100190, China
Abstract: The uniform grid is one of the important spatial organization structures, widely used in many applications such as ray tracing, collision detection, path planning and so on. Its simplicity to compute makes it very suitable for processing dynamic scenes. Because its construction time, space requirement and application efficiency are closely related to the grid resolution, optimizing grid construction has always been an important research topic. Hence, a new optimization method for grid construction is proposed, which ensures that both the construction time and the space requirement are of complexity O(N), where N is the number of primitives in the scene. In related applications, such as ray tracing, it can achieve high acceleration efficiency, comparable with the best acceleration structures to date, such as the kd-tree, while reducing the construction time dramatically. This is confirmed by the experimental results. © 2011 ISCAS. (24 refs.)Main Heading: OptimizationControlled terms: Ray tracingUncontrolled terms: Acceleration structures - Collision detection - Construction time - Dynamic scenes - Grid - Grid resolution - High acceleration - Kd-tree - Large scale scene - Linear complexity - Optimization method - Space requirements - Spatial organization - Uniform gridsClassification Code: 741.1 Light/Optics - 921.5 Optimization Techniques
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
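The O(N) bound claimed in the abstract is achievable with the classic counting-sort layout for grids, sketched below for 2D point primitives (our simplification; the paper deals with general scene primitives and the choice of grid resolution): one pass counts primitives per cell, a prefix sum lays out one flat array, and a second pass scatters primitive indices into place.

```python
# O(N) uniform-grid construction via counting sort: count, prefix-sum,
# scatter. Point primitives in 2D only, for illustration; real scenes bin
# triangles or other primitives by their overlapped cells.

def build_grid(points, res, lo, hi):
    size = [(hi[i] - lo[i]) / res for i in range(2)]  # cell extent per axis
    def cell(p):
        ix = min(int((p[0] - lo[0]) / size[0]), res - 1)
        iy = min(int((p[1] - lo[1]) / size[1]), res - 1)
        return iy * res + ix
    counts = [0] * (res * res)            # pass 1: primitives per cell
    for p in points:
        counts[cell(p)] += 1
    starts, acc = [], 0                   # prefix sum: start offset per cell
    for c in counts:
        starts.append(acc)
        acc += c
    items = [None] * len(points)          # pass 2: scatter indices
    cursor = list(starts)
    for i, p in enumerate(points):
        k = cell(p)
        items[cursor[k]] = i
        cursor[k] += 1
    return starts, counts, items

pts = [(0.1, 0.1), (0.9, 0.9), (0.15, 0.12)]
starts, counts, items = build_grid(pts, res=2, lo=(0, 0), hi=(1, 1))
```

Both passes and the prefix sum touch each primitive or cell a constant number of times, and the flat `items` array avoids per-cell dynamic lists, which is where both the O(N) time and O(N) space come from.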
On-line cache strategy reconfiguration for elastic caching platform: A machine learning approach
Qin, Xiulei1, 3; Zhang, Wenbo1; Wang, Wei1; Wei, Jun1, 2; Zhong, Hua1; Huang, Tao1, 2
Source: Proceedings - International Computer Software and Applications Conference, p 523-534, 2011, Proceedings - 35th Annual IEEE International Computer Software and Applications Conference, COMPSAC 2011
; ISSN: 07303157; ISBN-13: 9780769544397; DOI: 10.1109/COMPSAC.2011.73; Article number: 6032392; Conference: 35th Annual IEEE International Computer Software and Applications Conference, COMPSAC 2011, July 18, 2011 - July 21, 2011; Sponsor: IEEE; IEEE Computer Society;
Publisher: IEEE Computer Society
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, China2 State Key Laboratory of Computer Science, China3 Graduate University of Chinese Academy of Sciences, Beijing, China
Abstract: Cloud computing provides scalability and high availability for web applications using such techniques as distributed caching and clustering. As a database offloading strategy, elastic caching platforms (ECPs) are introduced to speed up performance or handle application state management with fault tolerance. Several cache strategies for ECPs have been proposed, namely the replicated, partitioned, and near strategies. We first evaluate the impact of the three cache strategies using the TPC-W benchmark and find that no single cache strategy is suitable for all conditions; the selection of the best strategy depends on workload patterns, cluster size and the number of concurrent users. This raises the question of when and how the cache strategy should be reconfigured as the conditions vary, which has received comparatively little attention. In this paper, we present a machine learning based approach to solving this problem. The key features of the approach are off-line training coupled with on-line system monitoring and a robust synchronization process after triggering a reconfiguration, while the performance model is periodically updated. More explicitly, first a rule set used to identify which cache strategy is optimal under the current condition is trained with the system statistics and performance results. We then introduce a framework to switch the cache strategy on-line as the workload varies and keep its overhead to acceptable levels. Finally, we illustrate the advantages of this approach by carrying out a set of experiments. © 2011 IEEE.
(37 refs.)Main Heading: Computer applicationsControlled terms: Cloud computing - Fault tolerance - Learning systems - Scalability - User interfacesUncontrolled terms: Cache strategy - Cluster sizes - Distributed caching - Elastic caching platform - High availability - Key feature - Machine-learning - Off-line training - Performance Model - Reconfiguration - Robust synchronization - Rule set - Selection of the best - Speed-ups - State management - System monitoring - System statistics - TPC-W benchmark - WEB application - Workload patternsClassification Code: 718 Telephone Systems and Related Technologies; Line Communications - 722.2 Computer Peripheral Equipment - 722.4 Digital Computers and Systems - 723 Computer Software, Data Handling and Applications - 723.5 Computer Applications - 961 Systems Science
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
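The trained rule set the abstract describes can be pictured as a toy hand-written analogue. The thresholds and feature names below are invented for illustration; the paper learns such rules offline from benchmark data rather than writing them by hand.

```python
# Toy analogue of a learned strategy-selection rule set: monitored system
# statistics map to one of the three ECP cache strategies. Thresholds and
# feature names are invented; real rules are trained from benchmark runs.

def pick_strategy(read_ratio, cluster_size, concurrent_users):
    if read_ratio > 0.9 and cluster_size <= 4:
        return "replicated"   # reads dominate and replication stays cheap
    if concurrent_users > 1000 or cluster_size > 8:
        return "partitioned"  # spread a large hot set across the cluster
    return "near"             # local copies backed by a partitioned tier

s = pick_strategy(read_ratio=0.95, cluster_size=2, concurrent_users=100)
```

On-line reconfiguration then amounts to re-evaluating such rules against fresh monitoring data and, when the answer changes, triggering the synchronized strategy switch the paper describes.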
Analysis of telephone call detail records based on fuzzy decision tree
Ding, Liping1; Gu, Jian2; Wang, Yongji1; Wu, Jingzheng1
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 56 LNICST, p 301-311, 2011, Forensics in Telecommunications, Information, and Multimedia - Third International ICST Conference, e-Forensics 2010, Revised Selected Papers
; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642236013; Conference: 3rd International ICST Conference on Forensic Applications and Techniques in Telecommunications, Information and Multimedia, E-Forensics 2010, November 11, 2010 - November 12, 2010; Sponsor: ICST;
Publisher: Springer Verlag
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 Key Lab. of Information Network Security of Ministry of Public Security, Third Research Institute of Ministry of Public Security, Shanghai, 200031, China
Abstract: Digital evidence can be obtained from computers and various kinds of digital devices, such as telephones, mp3/mp4 players, printers, cameras, etc. Telephone Call Detail Records (CDRs) are one important source of digital evidence that can identify suspects and their partners. Law enforcement authorities may intercept and record specific conversations with a court order, and CDRs can be obtained from telephone service providers. However, the CDRs of a suspect for a period of time are often fairly large in volume. Obtaining useful information and making appropriate decisions automatically from such a large amount of CDRs becomes more and more difficult. Current analysis tools are designed to present only numerical results rather than help us make useful decisions. In this paper, an algorithm based on fuzzy decision tree (FDT) for analyzing CDRs is proposed. We conducted an experimental evaluation to verify the proposed algorithm and the result is very promising. © 2011 ICST Institute for Computer Science, Social Informatics and Telecommunications Engineering. (16 refs.)Main Heading: TelephoneControlled terms: Algorithms - Computer crime - Decision trees - Digital devices - Forestry - Telephone sets - Telephone systems - Trees (mathematics)Uncontrolled terms: Court orders - Current analysis - Digital evidence - Enforcement authorities - Experimental evaluation - Forensics - fuzzy decision tree - Fuzzy decision trees - Numerical results - telephone call records - Telephone calls - Telephone-service providersClassification Code: 718.1 Telephone Systems and Equipment - 721 Computer Circuits and Logic Elements - 723 Computer Software, Data Handling and Applications - 821.0 Woodlands and Forestry - 921 Mathematics - 921.4 Combinatorial Mathematics, Includes Graph Theory, Set Theory
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
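To make the fuzzy-decision-tree idea concrete, the sketch below shows one path of such a tree. The membership shapes, attribute names, and thresholds are invented for illustration and are not from the paper: crisp call attributes are fuzzified into membership degrees, and a leaf's support is the product (a t-norm) of memberships along the path, so a record can partially satisfy several leaves instead of falling into exactly one.

```python
# One illustrative path of a fuzzy decision tree over call detail records:
# fuzzify crisp attributes, then combine memberships along the path with a
# product t-norm. Attribute names and membership shapes are invented.

def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def suspicious_degree(duration_s, calls_per_day):
    short = tri(duration_s, 0, 10, 60)          # "very short call"
    frequent = tri(calls_per_day, 20, 80, 200)  # "many calls per day"
    return short * frequent                      # support of one tree leaf

d = suspicious_degree(duration_s=8, calls_per_day=80)
```

Unlike a crisp tree, nothing here flips abruptly at a threshold: an 11-second call still contributes almost as much support as a 10-second one, which is what makes the output usable for ranking records by interest.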
CoolMag: A tangible interaction tool to customize instruments for children in music education
Zhang, Cheng1, 2; Shen, Li1, 2; Wang, Danli1; Tian, Feng1; Wang, Hongan1
Source: UbiComp'11 - Proceedings of the 2011 ACM Conference on Ubiquitous Computing, p 581-582, 2011, UbiComp'11 - Proceedings of the 2011 ACM Conference on Ubiquitous Computing; ISBN-13: 9781450309103; DOI: 10.1145/2030112.2030223; Conference: 13th International Conference on Ubiquitous Computing, UbiComp'11 and the Co-located Workshops, September 17, 2011 - September 21, 2011; Sponsor: ACM SIGCHI; ACM SIGMOBILE; ACM SIGSPATIAL;
Publisher: Association for Computing Machinery
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, Beijing, China2 Graduate University, Chinese Academy of Sciences, Beijing, China
Abstract: In this paper, we describe CoolMag, a tangible interaction tool that enables children to create different instruments collaboratively in music education. With CoolMag, children can learn the basic playing methods of different instruments. It also has the potential to inspire children's creativity, because children can adopt objects from daily life (broom, cup, pen, etc.) as the carrier of their novel instruments, whose appearance may differ from the traditional one. © 2011 Authors. (5 refs.)Main Heading: EducationControlled terms: Human computer interaction - Instruments - Ubiquitous computing - User interfacesUncontrolled terms: children - Creativity support - Music education - playing instrument - Tangible user interfacesClassification Code: 944 Moisture, Pressure and Temperature, and Radiation Measuring Instruments - 943 Mechanical and Miscellaneous Measuring Instruments - 942 Electric and Electronic Measuring Instruments - 941 Acoustical and Optical Measuring Instruments - 901.2 Education - 722.3 Data Communication, Equipment and Techniques - 722.2 Computer Peripheral Equipment
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Monocular tracking of human motion with local prior models
Wang, Wenzhong1, 2; Wang, Zhaoqi1; Deng, Xiaoming3; Xia, Shihong1; Qiu, Xianjie1
Source: Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics, v 23, n 9, p 1545-1552, September 2011; Language: Chinese
; ISSN: 10039775;
Publisher: Institute of Computing Technology
Author affiliation: 1 Advanced Computing Research Center, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China2 Graduate School of Chinese Academy of Sciences, Beijing 100049, China3 Laboratory of Human-Computer Interaction and Intelligent Information Processing, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: A novel approach to modelling the dynamics of human motion is presented. The proposed method utilizes locally learnt prior models to encode the time-varying characteristics of human motion. The local priors consist of the probability density of poses and the dynamical process of motions. For each input image, the proposed method automatically learns the parameters of these models from a set of training examples that closely match the query. The experimental results show that the proposed method outperforms those with global motion models. (18 refs.)Main Heading: Image segmentationControlled terms: Probability density functionUncontrolled terms: Deformable models - Dynamical process - Global motion - Human motions - Input image - Local prior model - Motion tracking - Probability densities - Time-varying characteristics - Tracking of human motion - Training exampleClassification Code: 741.1 Light/Optics - 921 Mathematics
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Analysis and application of Petri subnet reduction
Xia, Chuanliang1, 2
Source: Journal of Computers, v 6, n 8, p 1662-1669, 2011; ISSN: 1796203X; DOI: 10.4304/jcp.6.8.1662-1669;
Publisher: Academy Publisher
Author affiliation: 1 School of Computer Science and Technology, Shandong Jianzhu University, Jinan, China2 State Key Laboratory of Computer Science, Institute of Software, Academy of Sciences, Beijing, China
Abstract: We motivate and study the subnet reduction of Petri nets. Subnet reduction can avoid the state exploration problem while guaranteeing correctness in the Petri net. For systems specified in Petri nets, this paper proposes two subnet reduction methods. One major advantage of these reduction methods is that the resultant ordinary Petri net is guaranteed to be live, bounded and reversible. A group of sufficient conditions, or sufficient and necessary conditions, for liveness preservation, boundedness preservation and reversibility preservation is proposed. A flexible manufacturing system has been verified. These results are useful for studying the static and dynamic properties of Petri nets and analyzing properties of large complex systems. © 2011 ACADEMY PUBLISHER. (21 refs.)Main Heading: Petri netsControlled terms: Flexible manufacturing systemsUncontrolled terms: Boundedness - Liveness - Ordinary Petri net - Property analysis - Reduction method - State exploration - Static and dynamic - Sufficient and necessary condition - Sufficient conditions - System verificationsClassification Code: 913.4.1 Flexible Manufacturing Systems - 921.4 Combinatorial Mathematics, Includes Graph Theory, Set Theory
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
A computational proof of complexity of some restricted counting problems
Cai, Jin-Yi1; Lu, Pinyan2; Xia, Mingji3
Source: Theoretical Computer Science, v 412, n 23, p 2468-2485, May 20, 2011; ISSN: 03043975; DOI: 10.1016/j.tcs.2010.10.039;
Publisher: Elsevier
Author affiliation: 1 Computer Sciences Department, University of Wisconsin-Madison, Madison, WI 53706, United States2 Microsoft Research Asia, Beijing, 100190, China3 State Key Laboratory of Computer Science, Institute of Software, CAS, Beijing 100190, China
Abstract: We explore a computational approach to proving the intractability of certain counting problems. These problems can be described in various ways, and they include concrete problems such as counting the number of vertex covers or independent sets for 3-regular graphs. The high level principle of our approach is algebraic, which provides sufficient conditions for interpolation to succeed. Another algebraic component is holographic reductions. We then analyze in detail polynomial maps on R² induced by some combinatorial constructions. These maps define sufficiently complicated dynamics of R² that we can only analyze them computationally. In this paper we use both numerical computation (as intuitive guidance) and symbolic computation (as proof theoretic verification) to derive that a certain collection of combinatorial constructions, in myriad combinations, fulfills the algebraic requirements of proving #P-hardness. The final result is a dichotomy theorem for a class of counting problems. This includes a class of generic holant problems with an arbitrary real valued edge signature over (2,3)-regular undirected graphs. In particular, it includes all partition functions with 0-1 vertex assignments and an arbitrary real valued edge function over all 3-regular undirected graphs. © 2010 Elsevier B.V. All rights reserved. (25 refs.)Main Heading: Graph theoryControlled terms: AlgebraUncontrolled terms: Computational approach - Counting problems - Dichotomy theorem - Edge function - Holant problem - Holographic reduction - Independent set - Numerical computations - Partition functions - Regular graphs - Sufficient conditions - Symbolic computation - Undirected graph - Vertex coverClassification Code: 921.1 Algebra - 921.4 Combinatorial Mathematics, Includes Graph Theory, Set Theory
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
An easy and fast method for dominant texture extraction
Hua, Miao1, 2; Chen, Xin1, 2; Wang, Wencheng1
Source: Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics, v 23, n 1, p 46-53, January 2011; Language: Chinese
; ISSN: 10039775;
Publisher: Institute of Computing Technology
Author affiliation: 1 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 Graduate University of Chinese Academy of Sciences, Beijing 100049, China
Abstract: Existing methods for extracting dominant textures are always complicated and very expensive; e.g., more than 10 minutes are often required to handle an image sample. This paper proposes an easy and fast method, which can extract dominant textures from an image sample in less than 1 second, with results comparable to those of existing methods. The new method is based on the key observation that dominant textures often take up a larger portion of the image sample, and their color features appear frequently. Thus, with multiscale histograms, we may find the color features with high frequencies and identify their corresponding pixels as belonging to the dominant textures. We adopt the HIS color model, re-sample the image at several resolutions, and perform the investigation on each of them; the pixels repeatedly identified as belonging to dominant textures are finally taken to form the dominant textures. (9 refs.)Main Heading: ExtractionControlled terms: Color - Graphic methods - Image processing - Pixels - Statistical methods - TexturesUncontrolled terms: Dominant textures - Extract - High efficiency - Histogram - The HIS color modelClassification Code: 933 Solid State Physics - 922.2 Mathematical Statistics - 902.1 Engineering Graphics - 802.3 Chemical Operations - 741.1 Light/Optics - 741 Light, Optics and Optical Devices - 723.5 Computer Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
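The key observation in the abstract, that dominant-texture colors appear with high frequency, reduces at a single scale to the sketch below. Bin width and threshold are illustrative choices of ours; the paper additionally repeats the investigation across several resolutions and combines the results.

```python
# Single-scale sketch of histogram-based dominant-texture selection: bin
# pixel colors, call a bin "dominant" when its frequency exceeds a fraction
# of the image, and keep the pixels that fall in dominant bins.
from collections import Counter

def dominant_pixels(pixels, bin_width=32, min_fraction=0.2):
    bins = Counter(tuple(c // bin_width for c in p) for p in pixels)
    cutoff = min_fraction * len(pixels)
    keep = {b for b, n in bins.items() if n >= cutoff}
    return [p for p in pixels if tuple(c // bin_width for c in p) in keep]

# Eight near-identical dark pixels dominate; the lone red pixel is dropped.
img = [(10, 10, 10)] * 8 + [(200, 50, 50)] * 1 + [(12, 9, 11)] * 1
dom = dominant_pixels(img)
```

Coarse bins are what let slightly different colors of the same texture, like (10, 10, 10) and (12, 9, 11) here, count toward one frequent bin; running this at several resolutions and intersecting the outcomes is the multiscale refinement the abstract describes.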
Non-parametric statistical fault localization
Zhang, Zhenyu1; Chan, W.K.2; Tse, T.H.3; Yu, Y.T.2; Hu, Peifeng4
Source: Journal of Systems and Software, v 84, n 6, p 885-905, June 2011
; ISSN: 01641212; DOI: 10.1016/j.jss.2010.12.048;
Publisher: Elsevier Inc.
Author affiliation: 1 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China2 Department of Computer Science, City University of Hong Kong, Tat Chee Avenue, Kowloon Tong, Hong Kong3 Department of Computer Science, University of Hong Kong, Pokfulam, Hong Kong4 China Merchants Bank, Central, Hong Kong, Hong Kong
Abstract: Fault localization is a major activity in program debugging. To automate this time-consuming task, many existing fault-localization techniques compare passed executions and failed executions, and suggest suspicious program elements, such as predicates or statements, to facilitate the identification of faults. To do that, these techniques propose statistical models and use hypothesis testing methods to test the similarity or dissimilarity of proposed program features between passed and failed executions. Furthermore, when applying their models, these techniques presume that the feature spectra come from populations with specific distributions. The accuracy of using a model to describe feature spectra is related to and may be affected by the underlying distribution of the feature spectra, and the use of a (sound) model on inapplicable circumstances to describe real-life feature spectra may lower the effectiveness of these fault-localization techniques. In this paper, we make use of hypothesis testing methods as the core concept in developing a predicate-based fault-localization framework. We report a controlled experiment to compare, within our framework, the efficacy, scalability, and efficiency of applying three categories of hypothesis testing methods, namely, standard non-parametric hypothesis testing methods, standard parametric hypothesis testing methods, and debugging-specific parametric testing methods. We also conduct a case study to compare the effectiveness of the winner of these three categories with the effectiveness of 33 existing statement-level fault-localization techniques. The experimental results show that the use of non-parametric hypothesis testing methods in our proposed predicate-based fault-localization model is the most promising. © 2011 Elsevier Inc. All rights reserved. 
(78 refs.)Main Heading: Tracking (position)Controlled terms: Program debuggingUncontrolled terms: Controlled experiment - Fault localization - Hypothesis testing - Localization models - Localization technique - Non-parametric - Nonparametric methods - Parametric method - Parametric testing - Specific distribution - Statistical models - Time-consuming tasks - Underlying distributionClassification Code: 716.2 Radar Systems and Equipment - 723.1 Computer Programming
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
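The non-parametric hypothesis testing the abstract above finds most promising can be illustrated with a brute-force Mann-Whitney U statistic comparing a predicate's evaluation spectra in passed versus failed runs. The function name and the toy spectra below are illustrative assumptions, not the paper's framework.

```python
def mann_whitney_u(xs, ys):
    """Brute-force Mann-Whitney U statistic: the number of (x, y) pairs
    with x > y, counting ties as 1/2. Fine for small samples; a large U
    (relative to len(xs) * len(ys)) means xs tends to exceed ys."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Toy predicate spectra: fraction of executions in which a predicate
# evaluated true, per failed run vs per passed run (made-up numbers).
failed = [0.9, 0.8, 1.0, 0.7]
passed = [0.1, 0.3, 0.2, 0.4]
suspiciousness = mann_whitney_u(failed, passed) / (len(failed) * len(passed))
```

Predicates whose spectra differ most between failed and passed runs (normalized U near 1 or 0) would be ranked as most suspicious.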
SecGuard: Secure and practical integrity protection model for operating systems
Zhai, Ennan1, 2; Shen, Qingni1, 3, 4; Wang, Yonggang3, 4; Yang, Tao3, 4; Ding, Liping2; Qing, Sihan1, 2
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6612 LNCS, p 370-375, 2011, Web Technologies and Applications - 13th Asia-Pacific Web Conference, APWeb 2011, Proceedings
; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642202902; DOI: 10.1007/978-3-642-20291-9_38; Conference: 13th Asia-Pacific Conference on Web Technology, APWeb 2011, April 18, 2011 - April 20, 2011;
Publisher: Springer Verlag
Author affiliation: 1 School of Software and Microelectronics, Peking University, China; 2 Institute of Software, Chinese Academy of Sciences, China; 3 MoE Key Lab of Network and Software Assurance, Peking University, China; 4 Network and Information Security Lab, Institute of Software, Peking University, China
Abstract: Host compromise is a serious security problem for operating systems. Most previous solutions based on integrity protection models are difficult to use; on the other hand, usable integrity protection models can only provide limited protection. This paper presents SecGuard, a secure and practical integrity protection model. To ensure the security of systems, SecGuard provides provable guarantees for operating systems to defend against three categories of threats: network-based threats, IPC communication threats, and contaminative file threats. To ensure practicability, SecGuard introduces several novel techniques. For example, SecGuard leverages existing discretionary access control information to initialize integrity labels for subjects and objects in the system. We developed a prototype of SecGuard based on the Linux Security Modules (LSM) framework, and evaluated the security and practicability of SecGuard. © 2011 Springer-Verlag Berlin Heidelberg. (10 refs.)Main Heading: Access controlControlled terms: Computer operating systems - Network securityUncontrolled terms: Discretionary access control - Integrity protection - Linux security modules - Network-based - Novel techniques - Operating systems - Prototype system - Security problemsClassification Code: 723 Computer Software, Data Handling and Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Local search with edge weighting and configuration checking heuristics for minimum vertex cover
Cai, Shaowei1; Su, Kaile2, 3; Sattar, Abdul2, 4
Source: Artificial Intelligence, v 175, n 9-10, p 1672-1696, 2011
; ISSN: 00043702; DOI: 10.1016/j.artint.2011.03.003; Article in Press
Author affiliation: 1 Key laboratory of High Confidence Software Technologies (Peking University), Ministry of Education, Beijing, China; 2 Institute for Integrated and Intelligent Systems, Griffith University, Brisbane, Australia; 3 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China; 4 ATOMIC Project, Queensland Research Lab, NICTA, Australia
Abstract: The Minimum Vertex Cover (MVC) problem is a well-known combinatorial optimization problem of great importance in theory and applications. In recent years, local search has been shown to be an effective and promising approach to solve hard problems, such as MVC. In this paper, we introduce two new local search algorithms for MVC, called EWLS (Edge Weighting Local Search) and EWCC (Edge Weighting Configuration Checking). The first algorithm, EWLS, is an iterated local search algorithm that works with a partial vertex cover, and utilizes an edge weighting scheme which updates edge weights when getting stuck in local optima. Nevertheless, EWLS has an instance-dependent parameter. Further, we propose a strategy called Configuration Checking for handling the cycling problem in local search. This is used in designing a more efficient algorithm that has no instance-dependent parameters, which is referred to as EWCC. Unlike previous vertex-based heuristics, the configuration checking strategy considers the induced subgraph configurations when selecting a vertex to add into the current candidate solution. A detailed experimental study is carried out using the well-known DIMACS and BHOSLIB benchmarks. The experimental results show that EWLS and EWCC are largely competitive on DIMACS benchmarks, where they outperform other current best heuristic algorithms on most hard instances, and dominate on the hard random BHOSLIB benchmarks. Moreover, EWCC makes a significant improvement over EWLS, while both EWLS and EWCC set a new record on a twenty-year challenge instance. Further, EWCC performs quite well even on structured instances in comparison to the best exact algorithm we know of. We also study the run-time behavior of EWLS and EWCC, which reveals interesting properties of both algorithms.
© 2011 Elsevier B.V.Main Heading: Heuristic algorithmsControlled terms: Combinatorial optimization - Learning algorithms - Problem solvingUncontrolled terms: Combinatorial optimization problems - Edge weights - Efficient algorithm - Exact algorithms - Experimental studies - Hard instances - Hard problems - Induced subgraphs - Iterated local search - Local optima - Local search - Local search algorithm - Minimum vertex cover - Runtimes - Vertex cover - Weighting schemeClassification Code: 723 Computer Software, Data Handling and Applications - 723.1 Computer Programming - 921 Mathematics
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
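The edge-weighting idea in the EWLS/EWCC abstract above can be sketched as a toy local search: edges left uncovered have their weights bumped, which pushes the search away from local optima. Everything beyond that core idea (the swap rule, parameter values, function names) is a simplified illustrative assumption, not the paper's algorithms.

```python
import random

def is_cover(edges, cover):
    """True iff every edge has at least one endpoint in `cover`."""
    return all(u in cover or v in cover for (u, v) in edges)

def ew_local_search(n, edges, k, max_steps=20000, seed=1):
    """Toy edge-weighting local search for a vertex cover of size k.
    Returns a cover (set of vertices) or None if none was found."""
    rng = random.Random(seed)
    weight = {e: 1 for e in edges}
    cover = set(rng.sample(range(n), k))
    for _ in range(max_steps):
        uncovered = [e for e in edges if e[0] not in cover and e[1] not in cover]
        if not uncovered:
            return cover
        # cost(v): total weight of edges covered only by v (lost if v leaves)
        def cost(v):
            return sum(w for e, w in weight.items()
                       if v in e and not ((set(e) - {v}) & cover))
        cover.remove(min(cover, key=cost))
        u, v = rng.choice(uncovered)          # focus on a random uncovered edge
        cover.add(rng.choice((u, v)))
        for e in uncovered:                   # escape local optima
            weight[e] += 1
    return None
```

EWCC's configuration checking would additionally forbid re-adding a vertex whose neighborhood has not changed since it was removed; that bookkeeping is omitted here for brevity.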
Rotation-symmetric attack on filter generators
Yang, Xiao1, 2; Wu, Chuan-Kun1
Source: Tien Tzu Hsueh Pao/Acta Electronica Sinica, v 39, n 3, p 494-499, March 2011; Language: Chinese
; ISSN: 03722112;
Publisher: Chinese Institute of Electronics
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Acad. of Sci., Beijing 100190, China; 2 Graduate University of the Chinese Acad. of Sci., Beijing 100049, China
Abstract: The security of filter generators is provided by the filter function. To resist algebraic attacks, functions with maximum algebraic immunity have been used as filter functions. We find that the existing algebraic immune functions have a strong rotation-symmetry property, and we present a rotation-symmetric attack on such filter functions. We also discuss the rotation-symmetric property of filter functions and its influence on the rotation-symmetric attack. After surveying the vulnerability of algebraic immune functions to the rotation-symmetric attack, we give a new criterion for choosing filter functions. (15 refs.)Main Heading: RotationControlled terms: AlgebraUncontrolled terms: Algebraic attack - Algebraic immunity - Filter function - Filter generators - Immune function - Rotation symmetry - Rotation-symmetric functionClassification Code: 601.1 Mechanical Devices - 921.1 Algebra
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
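The structural property the abstract above exploits, a Boolean filter function invariant under cyclic rotation of its input bits, is easy to test directly. The sketch below checks it by brute force over the truth table; function names and the example functions are illustrative assumptions.

```python
def rotate(x, n):
    """Cyclically rotate the n-bit input vector x (as an integer) right by one."""
    return ((x >> 1) | ((x & 1) << (n - 1))) & ((1 << n) - 1)

def is_rotation_symmetric(truth_table, n):
    """Check whether an n-variable Boolean function, given as a truth table
    of length 2**n indexed by the integer encoding of the input bits, is
    invariant under cyclic rotation of its inputs."""
    return all(truth_table[x] == truth_table[rotate(x, n)]
               for x in range(1 << n))

# The 3-bit majority function is rotation symmetric (it depends only on
# the Hamming weight of its input); a single-bit projection is not.
majority3 = [bin(x).count("1") >= 2 for x in range(8)]
```

Because rotation symmetry partitions inputs into small orbits, an attacker only needs the function's value on one representative per orbit, which is the leverage behind a rotation-symmetric attack.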
A short signature scheme from the RSA family
Yu, Ping1; Xue, Rui1
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6531 LNCS, p 307-318, 2011, Information Security - 13th International Conference, ISC 2010, Revised Selected Papers
; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642181771; DOI: 10.1007/978-3-642-18178-8_27; Conference: 13th Information Security Conference, ISC 2010, October 25, 2010 - October 28, 2010; Sponsor: Center for Cryptology and Information Security (CCIS); Cent. Secur. Assur. IT (C-SAIT) Florida State Univ.; Charles E. Schmidt Coll. Sci. Florida Atlantic Univ.; Datamaxx Group;
Publisher: Springer Verlag
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: We propose a short signature scheme based on complexity assumptions related to the RSA modulus. More specifically, the new scheme is secure in the standard model under the strong RSA subgroup assumption. Most short signature schemes are based on either the discrete logarithm problem (or its variants) or problems from bilinear mappings. So far we are not aware of any signature scheme in the RSA family that can produce a signature shorter than the RSA modulus (in a typical setting, an RSA modulus is 1024 bits). The new scheme can produce a 420-bit signature, much shorter than the RSA modulus. In addition, the new scheme is very efficient: it needs only one modular exponentiation with a 200-bit exponent to produce a signature. In comparison, most RSA-type signature schemes need at least one modular exponentiation with a 1024-bit exponent, whose cost is more than five times that of the new scheme. © 2011 Springer-Verlag. (19 refs.)Main Heading: Security of dataControlled terms: Algebra - AuthenticationUncontrolled terms: Bilinear mapping - Complexity assumptions - Digital Signature - Discrete logarithm problems - Exponentiations - RSA moduli - Short signatures - Signature Scheme - Strong RSA assumption - The standard modelClassification Code: 723 Computer Software, Data Handling and Applications - 723.2 Data Processing and Image Processing - 921.1 Algebra
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
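The "more than five times" cost comparison in the abstract above follows from square-and-multiply exponentiation costing one modular operation per exponent bit (plus one per set bit). A quick operation count makes the ratio concrete; the function name is an illustrative assumption.

```python
def modexp_op_count(exponent):
    """Operations in left-to-right binary (square-and-multiply) modular
    exponentiation: one squaring per bit after the leading one, plus one
    multiplication per remaining set bit. Cost therefore scales with the
    exponent's bit-length."""
    squarings = exponent.bit_length() - 1
    multiplies = bin(exponent).count("1") - 1
    return squarings + multiplies

# Worst-case (all-ones) exponents: a 1024-bit exponent costs roughly
# 1024/200 = 5.12x a 200-bit exponent, matching the abstract's estimate.
ratio = modexp_op_count((1 << 1024) - 1) / modexp_op_count((1 << 200) - 1)
```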
Ubiquitous human-computer interaction in cloud manufacturing
Ma, Cui-Xia1; Ren, Lei2, 3; Teng, Dong-Xing1; Wang, Hong-An1; Dai, Guo-Zhong1
Source: Jisuanji Jicheng Zhizao Xitong/Computer Integrated Manufacturing Systems, CIMS, v 17, n 3, p 504-510, March 2011; Language: Chinese
; ISSN: 10065911;
Publisher: CIMS
Author affiliation: 1 Intelligence Engineering Lab., Institute of Software, Chinese Academy of Sciences, Beijing 100190, China; 2 School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China; 3 Engineering Research Center of Complex Product Advanced Manufacturing Systems, Ministry of Education, Beihang University, Beijing 100191, China
Abstract: To enable user interfaces to satisfy various individualized requirements in the Cloud Manufacturing (CMfg) environment, the characteristics of such interfaces were first analyzed: ubiquitous, natural, intelligent and mobile, virtualized and loosely coupled, individualized and customizable across the entire product lifecycle. Next-generation Human-Computer Interaction (HCI) technologies were then reviewed, including reality-based HCI, natural user interfaces, pen-based user interfaces, and context perception. Further, a research framework for ubiquitous HCI technology in CMfg was presented and its key technologies were analyzed, including dynamic requirement configuration of virtual resources in the user interface, ubiquitous interface customization, natural interaction oriented to ubiquitous equipment, visual analysis oriented to ubiquitous information, and a pen-based operation platform. Finally, the future of HCI technology research in CMfg was discussed. (16 refs.)Main Heading: Cloud computingControlled terms: Human computer interaction - Knowledge management - Life cycle - Manufacture - Technology - Ubiquitous computing - User interfaces - VisualizationUncontrolled terms: Cloud service - Human-computer - Key technologies - Natural interactions - Pen based user interfaces - Pen-based - Product-life-cycle - Technology research - Ubiquitous information - Ubiquitous interfaces - Virtual resource - Virtualizations - Visual analysisClassification Code: 722 Computer Systems and Equipment - 723.5 Computer Applications - 901 Engineering Profession - 902.1 Engineering Graphics - 913.1 Production Engineering - 913.4 Manufacturing
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
An instruction-level software simulation approach to resistance evaluation of cryptographic implementations against power analysis attacks
Li, Jiantang1; Zhou, Yongbin1; Liu, Jiye1; Zhang, Hailong1
Source: Proceedings - 2011 IEEE International Conference on Computer Science and Automation Engineering, CSAE 2011, v 2, p 680-686, 2011, Proceedings - 2011 IEEE International Conference on Computer Science and Automation Engineering, CSAE 2011; ISBN-13: 9781424487257; DOI: 10.1109/CSAE.2011.5952597; Article number: 5952597; Conference: 2011 IEEE International Conference on Computer Science and Automation Engineering, CSAE 2011, June 10, 2011 - June 12, 2011; Sponsor: IEEE Beijing Section; Pudong New Area Association for Computer; Pudong New Area Science and Technology Development Fund; Tongji University; Xiamen University;
Publisher: IEEE Computer Society
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, P. O. Box 8718, Beijing, China
Abstract: Power analysis attack, one of the most important side-channel cryptanalysis techniques, poses serious threats to the physical security of cryptographic implementations. In order to assess the physical security of cryptographic implementations, especially during design phases, some fundamental supporting tools appear to be highly helpful. Additionally, such tools are also necessary for performing fair comparisons among various power analysis attacks and different countermeasures. Motivated by this, we propose an instruction-level power consumption software simulation approach, aiming to analyze and assess the resistance of cryptographic implementations against power analysis attacks. A prototype system, called IMScale, is developed to validate the correctness and feasibility of our approach. Using IMScale, we carried out multiple DPA attacks against both an unprotected AES implementation and a masked AES implementation. The results of our experiments firmly validate the correctness and feasibility of our instruction-level power consumption software simulation approach, and are completely consistent with known results. © 2011 IEEE. (32 refs.)Main Heading: Computer softwareControlled terms: Computer science - Computer simulation - CryptographyUncontrolled terms: Cryptographic implementation - Evaluation - Instruction-level - Physical security - Power analysis attackClassification Code: 723 Computer Software, Data Handling and Applications - 723.5 Computer Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
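The kind of instruction-level simulation the abstract above describes can be sketched in miniature: model each power sample as the Hamming weight of an intermediate value plus noise, then run a correlation DPA over the simulated traces. This is a toy stand-in under stated assumptions; IMScale itself is not public, and a real AES target would leak S-box outputs over full traces, not a single XOR.

```python
import random

def hw(x):
    """Hamming weight: the usual instruction-level power-model proxy."""
    return bin(x).count("1")

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / ((vx * vy) ** 0.5 or 1.0)

def simulate_traces(key, n_traces, noise, rng):
    """Simulated leakage: each sample is hw(plaintext XOR key) + Gaussian noise."""
    pts = [rng.randrange(256) for _ in range(n_traces)]
    samples = [hw(p ^ key) + rng.gauss(0, noise) for p in pts]
    return pts, samples

def dpa_recover(pts, samples):
    """Correlation DPA over one key byte: pick the guess whose predicted
    Hamming weights correlate best with the measured samples."""
    return max(range(256),
               key=lambda k: pearson([hw(p ^ k) for p in pts], samples))

rng = random.Random(7)
pts, samples = simulate_traces(0xA7, 800, 0.25, rng)
```

With this leakage model the correct key guess correlates near 1.0 while the best wrong guess tops out around 0.75, so 800 traces recover the key comfortably.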
Guaranteed seamless transmission technique for NGEO satellite networks
Lu, Wang1; Lixiang, Liu2; Xiaohui, Hu2
Source: Proceedings - 2011 IEEE International Conference on Computer Science and Automation Engineering, CSAE 2011, v 3, p 617-621, 2011, Proceedings - 2011 IEEE International Conference on Computer Science and Automation Engineering, CSAE 2011; ISBN-13: 9781424487257; DOI: 10.1109/CSAE.2011.5952753; Article number: 5952753; Conference: 2011 IEEE International Conference on Computer Science and Automation Engineering, CSAE 2011, June 10, 2011 - June 12, 2011; Sponsor: IEEE Beijing Section; Pudong New Area Association for Computer; Pudong New Area Science and Technology Development Fund; Tongji University; Xiamen University;
Publisher: IEEE Computer Society
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, Graduate University O, Beijing, China; 2 Institute of Software, Chinese Academy of Sciences, Beijing, China
Abstract: Non-geostationary (NGEO) satellite communication systems are able to provide global communication with reasonable latency and low terminal power requirements. However, highly dynamic topology, large delays, and error-prone links are facts of life in satellite networks. This paper proposes a novel Guaranteed Seamless Transmission Technique (GST), a hop-by-hop scheme enhanced with an end-to-end scheme and associated with a link algorithm, which updates the link load explicitly and sends it back to the sources that use the link. We analyze GST theoretically by adopting a simple fluid model. The good performance of GST, in terms of bandwidth utilization, effective transmission ratio, and fairness, is verified via a set of simulations. © 2011 IEEE. (13 refs.)Main Heading: Satellite communication systemsControlled terms: Communication systems - Computer science - Geostationary satellitesUncontrolled terms: Band-width utilization - Effective transmission - Error prones - Global communication - Hop by hop - Large delays - Link Loads - NGEO Satellite Networks - Power requirement - Satellite network - Seamless - Seamless transmissions - Simple fluids - Topological dynamicsClassification Code: 655.2 Satellites - 716 Telecommunication; Radar, Radio and Television - 721 Computer Circuits and Logic Elements - 722 Computer Systems and Equipment - 723 Computer Software, Data Handling and Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Erratum to "An adjustable approach to intuitionistic fuzzy soft sets based decision making" [Applied Mathematical Modelling 35 (2011) 824-836]
Jiang, Yuncheng1, 2; Tang, Yong1; Chen, Qimai1
Source: Applied Mathematical Modelling, v 35, n 5, p 2584, May 2011
; ISSN: 0307904X; DOI: 10.1016/j.apm.2010.12.009;
Publisher: Elsevier Inc.
Author affiliation: 1 School of Computer Science, South China Normal University, Guangzhou 510631, China; 2 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
A parallel well-balanced finite volume method for shallow water equations with topography on the cubed-sphere
Yang, Chao1; Cai, Xiao-Chuan2
Source: Journal of Computational and Applied Mathematics, v 235, n 18, p 5357-5366, July 15, 2011; ISSN: 03770427; DOI: 10.1016/j.cam.2011.01.016;
Publisher: Elsevier
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, Beijing 100190, China; 2 Department of Computer Science, University of Colorado at Boulder, Boulder, CO 80309, United States
Abstract: A finite volume scheme for the global shallow water model on the cubed-sphere mesh is proposed and studied in this paper. The new cell-centered scheme is based on Osher's Riemann solver together with a high-order spatial reconstruction. On each patch interface of the cubed-sphere only one layer of ghost cells is needed in the scheme and the numerical flux is calculated symmetrically across the interface to ensure the numerical conservation of total mass. The discretization of the topographic term in the equation is properly modified in a well-balanced manner to suppress spurious oscillations when the bottom topography is non-smooth. Numerical results for several test cases including a steady-state nonlinear geostrophic flow and a zonal flow over an isolated mountain are provided to show the flexibility of the scheme. Some parallel implementation details as well as some performance results on a parallel supercomputer with more than one thousand processor cores are also provided. © 2011 Elsevier B.V. All rights reserved. (26 refs.)Main Heading: SpheresControlled terms: Equations of motion - Finite volume method - Supercomputers - TopographyUncontrolled terms: Cubed-sphere - Exact-C-property - Parallel scalability - Shallow water equations - Well- balanced schemesClassification Code: 631 Fluid Flow - 631.1 Fluid Flow, General - 722.4 Digital Computers and Systems - 921.2 Calculus - 951 Materials Science
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
ListOPT: Learning to optimize for XML ranking
Gao, Ning1; Deng, Zhi-Hong1, 2; Yu, Hang1; Jiang, Jia-Jian1
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6635 LNAI, n PART 2, p 482-492, 2011, Advances in Knowledge Discovery and Data Mining - 15th Pacific-Asia Conference, PAKDD 2011, Proceedings; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642208461; DOI: 10.1007/978-3-642-20847-8_40; Conference: 15th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2011, May 24, 2011 - May 27, 2011;
Publisher: Springer Verlag
Author affiliation: 1 Key Laboratory of Machine Perception (Ministry of Education), School of Electronic Engineering and Computer Science, Peking University, China; 2 State Key Lab. of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: Many machine learning classification technologies, such as boosting, support vector machines, and neural networks, have been applied to the ranking problem in information retrieval. However, since the purpose of these learning-to-rank methods is to directly acquire sorted results based on the features of documents, they are unable to combine and utilize existing ranking methods proven to be effective, such as BM25 and PageRank. To address this limitation, we study learning-to-optimize: constructing a learning model or method for optimizing the free parameters of existing ranking functions. This paper proposes a listwise learning-to-optimize process, ListOPT, and introduces three alternative differentiable query-level loss functions. The experimental results on the XML dataset of Wikipedia English show that these approaches can be successfully applied to tuning the parameters used in the existing, highly cited ranking function BM25. Furthermore, we found that the formulas with optimized parameters indeed improve effectiveness compared with the original ones. © 2011 Springer-Verlag. (19 refs.)Main Heading: OptimizationControlled terms: Adaptive boosting - Data mining - Information retrieval - Neural networks - XMLUncontrolled terms: BM25 - Data sets - Free parameters - Learning models - learning-to-optimize - Loss functions - Machine learning classification - Optimized parameter - PageRank - ranking - Ranking functions - Ranking methods - Ranking problems - WikipediaClassification Code: 723 Computer Software, Data Handling and Applications - 921.5 Optimization Techniques
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
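The free parameters the ListOPT abstract above talks about tuning are BM25's k1 and b. A minimal scorer makes them explicit; the smoothed log(1 + ...) IDF variant and the defaults below are standard textbook choices, assumed here rather than taken from the paper.

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.2, b=0.75):
    """Score each doc (a list of tokens) against the query with BM25.
    k1 (term-frequency saturation) and b (length normalization) are the
    free parameters a learning-to-optimize method would tune."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter(t for d in docs for t in set(d))  # document frequencies
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if tf[t] == 0:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))  # smoothed IDF
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores
```

A listwise optimizer would evaluate a query-level loss over the ranking these scores induce and adjust k1 and b by gradient descent, which is possible because the scoring formula is differentiable in both parameters.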
Congestion control in networks with mixed IP and P2P traffic
Shi, Zhiqiang1; Ionescu, Dan2; Zhang, Dongli2
Source: Proceedings - International Conference on Computer Communications and Networks, ICCCN, 2011, 2011 20th International Conference on Computer Communications and Networks, ICCCN 2011 - Proceedings
; ISSN: 10952055; ISBN-13: 9781457706387; DOI: 10.1109/ICCCN.2011.6006061; Article number: 6006061; Conference: 2011 20th International Conference on Computer Communications and Networks, ICCCN 2011, July 31, 2011 - August 4, 2011; Sponsor: IEEE Communications Society; U.S. National Science Foundation (NSF); Qualcomm; Microsoft Research; EiC;
Publisher: Institute of Electrical and Electronics Engineers Inc.
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, Beijing, China; 2 SITE, University of Ottawa, Ottawa, ON, Canada
Abstract: Services such as multimedia, VoIP, video-conferencing, social networking and others impose new requirements on providers and constraints on network designers. Fair queueing algorithms like CSFQ or Stochastic Fair BLUE have been used to improve the quality of packet transmission. Such mechanisms usually supervise bandwidth consumption per flow and become helpless in the presence of P2P traffic. In quest for high-quality transmission, multimedia applications are designed to use more and more P2P paradigms. Although P2P traffic is also exposed to congestion, few works address congestion control in mixed traditional IP (for short, IP traffic) and P2P traffic. In this paper, we propose a flow model for the mixture of the two and present a principle and a method for congestion control based on per-subscriber flow control. An architecture based on Token-Based Traffic Control for P2P applications is introduced. The token resource consumed by each subscriber is counted, and controls for both core and edge routers are generated for IP and P2P traffic. The traffic is measured at core routers and the measurement data is conveyed to edge routers, which label the Token-Level on incoming packets according to the congestion index and police the total input tokens of each P2P subscriber. Simulation results and an analysis of the performance impact of this approach on some P2P experiments are given. © 2011 IEEE.
(16 refs.)Main Heading: Traffic congestionControlled terms: Distributed computer systems - Internet telephony - Multimedia services - Peer to peer networks - Queueing networks - Video conferencingUncontrolled terms: CSFQ - Flow Fairness - P2P and multimedia - Subscriber Fairness - TBTCClassification Code: 723 Computer Software, Data Handling and Applications - 722.4 Digital Computers and Systems - 722 Computer Systems and Equipment - 718 Telephone Systems and Related Technologies; Line Communications - 717 Optical Communication - 716 Telecommunication; Radar, Radio and Television - 432.4 Highway Traffic Control
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
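The per-subscriber token accounting in the abstract above is in the spirit of token-bucket policing, which can be sketched minimally as follows. The class name and rate/burst numbers are illustrative assumptions; the paper's Token-Based Traffic Control additionally labels packets with a Token-Level based on a congestion index, which is omitted here.

```python
class TokenBucket:
    """Minimal per-subscriber token bucket: tokens refill at `rate` per
    second up to `burst`; a packet is admitted only if it can pay its cost."""

    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def allow(self, now, cost=1.0):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

An edge router would keep one such bucket per subscriber; packets that cannot pay are policed (dropped or demoted) regardless of which flow within the subscriber's traffic they belong to.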
Guessing specific variables in algebraic attacks on Bivium
Li, Xin1, 2; Lin, Dong-Dai1
Source: Tien Tzu Hsueh Pao/Acta Electronica Sinica, v 39, n 8, p 1727-1732, August 2011; Language: Chinese
; ISSN: 03722112;
Publisher: Chinese Institute of Electronics
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Acad. of Sci., Beijing 100190, China; 2 Graduate School of the Chinese Acad. of Sci., Beijing 100190, China
Abstract: Solving an equation system is a very important step in an algebraic attack. After a cryptosystem has been transformed into equations, we often need to employ a guess-and-determine algorithm to estimate the computational complexity of the attack. In this paper, we introduce a model to estimate the average time for solving subsystems more accurately, and propose some criteria, based on static and dynamic weights, for selecting specific guessed variables to speed up solving. For computing Gröbner bases, we use several variable orders, such as AB, S, and S-rev. Meanwhile, we introduce the concept of conflicting equations, and show its importance for correct analysis and for narrowing the guessing space. Finally, we estimate the time needed to attack Bivium. Experiments show that, in the worst case, guessing 60 variables in the Evy3 position with the DM-rev variable order gives the optimal result, about 2^39.16 seconds. (17 refs.)Main Heading: EstimationControlled terms: Algebra - Algorithms - Computational complexityUncontrolled terms: Algebraic attack - Bivium - Conflicting equations - Dynamic weight - Equation systems - Equations solving - Optimal results - Worst caseClassification Code: 721.1 Computer Theory, Includes Formal Logic, Automata Theory, Switching Theory, Programming Theory - 921 Mathematics - 921.1 Algebra
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
PolyE+CTR: A Swiss-Army-Knife mode for block ciphers
Zhang, Liting1; Wu, Wenling1; Wang, Peng2
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6980 LNCS, p 266-280, 2011, Provable Security - 5th International Conference, ProvSec 2011, Proceedings
; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642243158; DOI: 10.1007/978-3-642-24316-5_19; Conference: 5th International Conference on Provable Security, ProvSec 2011, October 16, 2011 - October 18, 2011; Sponsor: The National Natural Science Foundation of China (NSFC); Xidian Univ., Key Lab. Comput. Networks; Inf. Secur., Minist. Educ.;
Publisher: Springer Verlag
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China; 2 Graduate University of Chinese Academy of Sciences, Beijing 100049, China
Abstract: In this paper, we propose a new kind of mode of operation for block ciphers. With a single key, such a mode can protect data for privacy, for authenticity, or for both, so we call it a Swiss-Army-Knife (SAK) mode. The purpose of an SAK mode is to increase the diversity of security services available for a single key, so that we can provide different protections for data with different security requirements without rekeying the underlying block cipher. As an example, we propose PolyE+CTR, an SAK mode that combines an authentication mode, PolyE, and a nonce-based encryption mode, CTR, in the authentication-and-encryption method. PolyE+CTR is provably secure with high efficiency. © 2011 Springer-Verlag. (24 refs.)Main Heading: CryptographyControlled terms: Authentication - Lyapunov methodsUncontrolled terms: Block ciphers - Encryption mode - Mode of operations - Provable Security - Provably secure - Re-keying - Security requirements - Security servicesClassification Code: 723 Computer Software, Data Handling and Applications - 921 Mathematics - 961 Systems Science
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
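The two ingredients named in the abstract above, a nonce-based CTR encryption mode and a polynomial-evaluation ("PolyE"-style) authentication hash, can be sketched in toy form. This is an illustrative assumption, not the paper's construction: the real mode would derive both components from a block cipher such as AES under one key, whereas here SHA-256 stands in as the PRF and the parameters are made up.

```python
import hashlib

def ctr_keystream(key, nonce, nbytes):
    """Toy CTR keystream: block i = SHA-256(key || nonce || counter_i)."""
    out, ctr = b"", 0
    while len(out) < nbytes:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:nbytes]

def ctr_xor(key, nonce, data):
    """CTR encryption/decryption: XOR with the keystream (its own inverse)."""
    ks = ctr_keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def poly_mac(key_r, data, p=2**127 - 1):
    """Toy polynomial-evaluation hash over GF(p), the idea behind PolyE:
    hash = sum(block_i * r**i) mod p, over 8-byte message blocks."""
    h, rpow = 0, key_r % p
    for i in range(0, len(data), 8):
        block = int.from_bytes(data[i:i + 8], "big") + 1  # +1: no zero-block ambiguity
        h = (h + block * rpow) % p
        rpow = (rpow * key_r) % p
    return h
```

Running the two under one master key, with domain separation deciding which service is active, is the "single key, multiple services" point of an SAK mode.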
Towards a reliable spam-proof tagging system
Zhai, Ennan1; Ding, Liping1; Qing, Sihan1, 2
Source: Proceedings - 2011 5th International Conference on Secure Software Integration and Reliability Improvement, SSIRI 2011, p 174-181, 2011, Proceedings - 2011 5th International Conference on Secure Software Integration and Reliability Improvement, SSIRI 2011; ISBN-13: 9780769544533; DOI: 10.1109/SSIRI.2011.30; Article number: 5992016; Conference: 2011 5th International Conference on Secure Software Integration and Reliability Improvement, SSIRI 2011, June 27, 2011 - June 29, 2011; Sponsor: Korea Software Engineering Society;
Publisher: IEEE Computer Society
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, China; 2 School of Software and Microelectronics, Peking University, China
Abstract: Tagging systems are particularly vulnerable to tag spam. Although some previous efforts aim to address this problem with detection-based or demotion-based approaches, tricky attacks launched by attackers who can exploit vulnerabilities of spam-resistant mechanisms are still able to invalidate those efforts. It is therefore challenging to resist tricky spam attacks in tagging systems. This paper proposes a novel spam-proof tagging system which can provide high-quality tag search results even under tricky attacks, based on four key insights: a demotion-based strategy, reputation, altruistic users, and social networking. Specifically, our system upgrades/degrades the ranks of correct/incorrect content items in search results by introducing personalized user reliability degrees and responsible users, thus preventing clients from picking unwanted content. Experimental results show that our system can effectively defend against tricky tag spam attacks and works better than current prevalent tag search models. © 2011 IEEE. (30 refs.)Main Heading: User interfacesControlled terms: Internet - Reliability - Social networking (online) - Supervisory personnel - ThesauriUncontrolled terms: High quality - Reliability degree - Reputation - Search models - Search results - Spam-resistance - System upgrade - Tag spam - Tagging systemsClassification Code: 421 Strength of Building Materials; Mechanical Properties - 722.2 Computer Peripheral Equipment - 723 Computer Software, Data Handling and Applications - 903.3 Information Retrieval and Use - 912.4 Personnel
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
A comparative evaluation of cache strategies for elastic caching platforms
Qin, Xiulei1, 3; Zhang, Wenbo1; Wang, Wei1; Wei, Jun1, 2; Zhong, Hua1; Huang, Tao1, 2
Source: Proceedings - International Conference on Quality Software, p 166-175, 2011, Proceedings - 11th International Conference on Quality Software, QSIC 2011
; ISSN: 15506002; ISBN-13: 9780769544687; DOI: 10.1109/QSIC.2011.14; Article number: 6004324; Conference: 11th International Conference on Quality Software, QSIC 2011, July 13, 2011 - July 14, 2011; Sponsor: Computer Science School of the Universidad Complutense de Madrid; Madrid Convention Bureau of the Madrid City Council;
Publisher: IEEE Computer Society
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, Beijing, China2 State Key Laboratory of Computer Science, Chinese Academy of Sciences, Beijing, China3 Graduate University, Chinese Academy of Sciences, Beijing, China
Abstract: With the rapid development of cloud computing, traditional TP applications are evolving into Extreme Transaction Processing (XTP) applications, which are characterized by exceptionally demanding performance, scalability, availability, security, manageability and dependability requirements. Elastic caching platforms (ECPs) are introduced to help meet these requirements. Three popular cache strategies for ECPs have been proposed, namely the replicated strategy, the partitioned strategy and the near strategy. According to our investigations, many ECPs support multiple cache strategies. In this paper, we evaluate the impact of the three cache strategies using the TPC-W benchmark. To the best of our knowledge, this paper is the first evaluation of distributed cache strategies for ECPs. The main contribution of this work is a set of guidelines that could help system administrators decide effectively which cache strategy would perform better under different conditions. Our work shows that the selection of the best cache strategy is related to workload patterns, cluster size and the number of concurrent users. We also find that four important metrics (number of "get" operations, message throughput, get/put ratio, and cache hit rate) could be used to help characterize the current condition. © 2011 IEEE. (37 refs.)Main Heading: Cloud computingControlled terms: Availability - ScalabilityUncontrolled terms: Cache hit rates - cache strategy - Cluster sizes - Comparative evaluations - Distributed cache - Elastic caching platform - Help systems - Rapid development - Selection of the best - TPC-W benchmark - Transaction processing - Workload patternsClassification Code: 718 Telephone Systems and Related Technologies; Line Communications - 722.4 Digital Computers and Systems - 723 Computer Software, Data Handling and Applications - 913.5 Maintenance - 961 Systems Science
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
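The guideline in the Qin et al. abstract above rests on the read/write trade-off between the replicated and partitioned strategies. As a rough illustration (a toy sketch, not code from the paper; all class and method names here are hypothetical), the two extremes can be contrasted in Python:

```python
import hashlib

class PartitionedCache:
    """Each key lives on exactly one node (hash partitioning): capacity
    scales with cluster size, but a get for a remotely owned key costs a hop."""
    def __init__(self, nodes):
        self.nodes = {n: {} for n in nodes}
        self.ring = sorted(nodes)

    def _owner(self, key):
        digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return self.ring[digest % len(self.ring)]

    def put(self, key, value):
        self.nodes[self._owner(key)][key] = value

    def get(self, key):
        return self.nodes[self._owner(key)].get(key)


class ReplicatedCache:
    """Every node holds a full copy: every get is local, but every put is
    broadcast to all nodes, so write-heavy workloads scale poorly."""
    def __init__(self, nodes):
        self.nodes = {n: {} for n in nodes}

    def put(self, key, value):
        for store in self.nodes.values():  # broadcast the write
            store[key] = value

    def get(self, key, local_node):
        return self.nodes[local_node].get(key)  # always served locally
```

A high get/put ratio favors the replicated layout (every read is local), while write-heavy workloads or large datasets favor partitioning, which is consistent with the paper's observation that workload pattern and cluster size drive the choice.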
DRiVeR: Diagnosing runtime property violations based on dependency rules
Liu, Yanbin1, 2; Yang, Ye1; Yang, Qiusong1; Li, Mingshu1
Source: 2011 5th International Conference on Secure Software Integration and Reliability Improvement - Companion, SSIRI-C 2011, p 194-201, 2011, 2011 5th International Conference on Secure Software Integration and Reliability Improvement - Companion, SSIRI-C 2011; ISBN-13: 9780769544540; DOI: 10.1109/SSIRI-C.2011.38; Article number: 6004476; Conference: 2011 5th International Conference on Secure Software Integration and Reliability Improvement - Companion, SSIRI-C 2011, June 27, 2011 - June 29, 2011;
Publisher: IEEE Computer Society
Author affiliation: 1 Lab for Internet Software Technology, Institute of Software, Chinese Academy of Sciences, Beijing, China2 Department of Equipment Command and Management, Ordnance Engineering College, Shijiazhuang, China
Abstract: To ensure the reliability of complex software systems, runtime software monitoring is widely adopted to monitor and check system execution against formal property specifications at runtime. Runtime software monitoring can detect property violations; however, it cannot explain why a violation has occurred. Diagnosing runtime property violations is still a challenging issue. In this paper, a novel diagnosis method based on dependency rules is constructed to diagnose runtime property violations in complex software systems. A set of rules is formally defined to isolate software faults from hardware faults; then software faults are localized by combining trace slicing and dicing. The method is implemented in the runtime software monitoring system SRMS, and experimental results demonstrate that the method can effectively isolate and locate the faults related to property violations. © 2011 IEEE. (30 refs.)Main Heading: Software reliabilityControlled terms: C (programming language) - Failure analysis - Monitoring - Program diagnosticsUncontrolled terms: Dependency rules - Fault localization - Program slicing - Property violation - Runtime MonitoringClassification Code: 944 Moisture, Pressure and Temperature, and Radiation Measuring Instruments - 943 Mechanical and Miscellaneous Measuring Instruments - 942 Electric and Electronic Measuring Instruments - 941 Acoustical and Optical Measuring Instruments - 921 Mathematics - 723 Computer Software, Data Handling and Applications - 421 Strength of Building Materials; Mechanical Properties
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Network analysis of OSS evolution: An empirical study on ArgoUML project
Zhang, Wen1; Yang, Ye1; Wang, Qing1
Source: IWPSE-EVOL'11 - Proceedings of the 12th International Workshop on Principles on Software Evolution, p 71-80, 2011, IWPSE-EVOL'11 - Proceedings of the 12th International Workshop on Principles on Software Evolution; ISBN-13: 9781450308489; DOI: 10.1145/2024445.2024459; Conference: 2011 12th International Workshop on Principles on Software Evolution and 7th ERCIM Workshop on Software Evolution, IWPSE-EVOL'11, September 5, 2011 - September 6, 2011; Sponsor: Special Interest Group on Software Engineering (SIGSOFT); ERCIM;
Publisher: Association for Computing Machinery
Author affiliation: 1 Laboratory for Internet Software Technologies, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: While complexity is an essential problem inherent in software systems and their development, OSS (Open-Source Software) is no exception and is not immune to this problem. The fast growth of the OSS movement has impressed us with reduced-cost but high-quality software. To learn lessons from successful OSS projects in handling complexity, social network analysis is prevalent in analyzing both the human-aspect and source-code-aspect interactions of OSS. This paper conducts an empirical study of an OSS project, ArgoUML. Unlike most previous studies, which regard OSS email archives as a whole social network, our focus is on the quantitative analysis of a series of social networks produced in the process of OSS version evolution and module development. Through the empirical study, we have found that all the social network measures employed in this study are comparable in identifying core developers of the ArgoUML project. The frequency of co-occurrence of developers within the same topic is not a decisive factor in identifying core developers. Developers within the same module communicate closely and frequently with each other. The more modules a developer has developed, the more communication he (or she) will have with other developers. Although participation in developers' mailing lists fluctuates with a large magnitude, the committers of the source code remain stable in each version evolution. Moreover, the variation of committers of source code across version evolutions is almost unpredictable based on the variation of participants in developers' mailing lists. © 2011 ACM. 
(18 refs.)Main Heading: Open systemsUncontrolled terms: Co-occurrence - E-mail archives - Empirical studies - Essential problems - High-quality software - Mailing lists - module community - Open Sources - Open-Source softwares - Reduced cost - Social Network Analysis - Social Networks - Software systems - Source codes - version evolutionClassification Code: 722 Computer Systems and Equipment - 723 Computer Software, Data Handling and Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
A novel formalization of symbolic trajectory evaluation semantics in Isabelle/HOL
Li, Yongjian1; Hung, William N.N.2; Song, Xiaoyu3
Source: Theoretical Computer Science, v 412, n 25, p 2746-2765, June 3, 2011
; ISSN: 03043975; DOI: 10.1016/j.tcs.2011.01.032;
Publisher: Elsevier
Author affiliation: 1 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China2 Synopsys Inc., Mountain View, CA 94043, United States3 Department of ECE, Portland State University, Portland, OR 97207, United States
Abstract: This paper presents a formal symbolic trajectory evaluation (STE) theory based on a structural netlist circuit model, instead of an abstract next state function. We introduce an inductive definition for netlists, which gives an accurate and formal definition for netlist structures. A closure state function of netlists is formally introduced in terms of the formal netlist model. We refine the definition of the defining trajectory and the STE implementation to deal with the closure state function. The close correspondence between netlist structures and properties is discussed. We present a set of novel algebraic laws to characterize the relation between the structures and properties of netlists. Finally, the application of the new laws is demonstrated by parameterized verification of the properties of content-addressable memories. © 2010 Elsevier B.V. All rights reserved. (28 refs.)Main Heading: Formal methodsControlled terms: Semantics - TrajectoriesUncontrolled terms: Closure semantics - Formal semantics - Isabelle/HOL - Netlist - Symbolic trajectory evaluationClassification Code: 404.1 Military Engineering - 723.1 Computer Programming - 903.2 Information Dissemination
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
A couple scheme for a large-scale fluid simulation
Wu, Xiaolong1, 4; Wu, Enhua1, 2; Zhang, Hui3
Source: Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics, v 23, n 6, p 1028-1033, June 2011; Language: Chinese
; ISSN: 10039775;
Publisher: Institute of Computing Technology
Author affiliation: 1 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 Department of Computer and Information Science, Faculty of Science and Technology, University of Macau, Macao, China3 Department of Engineering Physics, Tsinghua University, Beijing 100084, China4 Graduate University of Chinese Academy of Sciences, Beijing 100049, China
Abstract: When simulating a tsunami or similar disaster, a full three-dimensional simulation is needed only around the place where the disaster occurs; at other places, a two-dimensional wave simulation can be used. To use computing resources effectively for this kind of problem, a new algorithm based on region coupling is proposed. The algorithm divides the whole computing region into complex regions and simple regions, in which three-dimensional Navier-Stokes simulation and two-dimensional wave simulation are used respectively. Along the boundaries of the complex or simple regions, the related variables are extrapolated from the other regions. As a result, physical information can be exchanged between different regions and the computing regions are tightly coupled. Finally, an example of wave creation and propagation is given to show the feasibility of the algorithm. (19 refs.)Main Heading: Navier Stokes equationsControlled terms: AlgorithmsUncontrolled terms: Computing resource - Couple scheme - Fluid simulations - Navier Stokes simulation - Physical information - Related variables - Three-dimension - Tightly-coupled - Two-dimension - Two-dimensional waves - Wave simulation - Wave simulationsClassification Code: 723 Computer Software, Data Handling and Applications - 921 Mathematics - 921.2 Calculus
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Updating preconditioner for iterative method in time domain simulation of power systems
Wang, Ke1, 2; Xue, Wei2, 3; Lin, Haixiang4; Xu, Shiming4; Zheng, Weimin2, 3
Source: Science China Technological Sciences, v 54, n 4, p 1024-1034, April 2011, Special Topic on Safety of Watershed Water and Major Projects (767-810)
; ISSN: 16747321, E-ISSN: 1862281X; DOI: 10.1007/s11431-010-4267-y;
Publisher: Springer Verlag
Author affiliation: 1 Laboratory of Parallel Software and Computational Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China3 Tsinghua National Laboratory for Information Science and Technology (TNList), Beijing 100084, China4 Delft Institute of Applied Mathematics, Delft University of Technology, Delft 2628 CD, Netherlands
Abstract: The numerical solution of the differential-algebraic equations (DAEs) involved in time domain simulation (TDS) of power systems requires the solution of a sequence of large scale and sparse linear systems. The use of iterative methods such as the Krylov subspace method is imperative for the solution of these large and sparse linear systems. The motivation of the present work is to develop a new algorithm to efficiently precondition the whole sequence of linear systems involved in TDS. As an improvement of the dishonest preconditioner (DP) strategy, the updating preconditioner (UP) strategy is introduced to the field of TDS for the first time. The idea of the updating preconditioner strategy is based on the fact that the matrices in the sequence of linearized systems vary continuously, with only a slight difference between two consecutive matrices. In order to make the linear system sequence in TDS suitable for the UP strategy, a matrix transformation is applied to form a new linear sequence with a good shape for preconditioner updating. The algorithm proposed in this paper has been tested with 4 cases from real-life power systems in China. Results show that the proposed UP algorithm efficiently preconditions the sequence of linear systems and reduces the iteration count of GMRES by 9%-61% when compared with the DP method in all test cases. Numerical experiments also show the effectiveness of UP when combined with simple preconditioner reconstruction strategies. © 2011 Science China Press and Springer-Verlag Berlin Heidelberg. 
(28 refs.)Main Heading: Iterative methodsControlled terms: Algorithms - Differential equations - Differentiation (calculus) - Linear systems - Linear transformations - Matrix algebra - Time domain analysisUncontrolled terms: Differential algebraic equations - GMRES - Iteration count - Krylov subspace method - Linear sequence - Linearized systems - Matrix transformation - Numerical experiments - Numerical solution - power system simulation - Power systems - Preconditioners - Sparse linear systems - Test case - Time-domain simulations - updating preconditionerClassification Code: 723 Computer Software, Data Handling and Applications - 921 Mathematics - 961 Systems Science
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
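The "dishonest preconditioner" baseline that the UP strategy of Wang et al. improves on — compute a preconditioner once and keep reusing it while the matrices drift slowly — can be sketched in a few lines of NumPy. This is a toy stationary (Richardson) iteration with an explicit inverse as the frozen preconditioner, purely for illustration; the paper itself uses GMRES with updated preconditioners, which this sketch does not reproduce:

```python
import numpy as np

def solve_sequence(matrices, b, tol=1e-10, max_iter=200):
    """Solve A_k x = b for a sequence of slowly varying matrices with a
    preconditioned Richardson iteration, reusing the inverse of the first
    matrix as a frozen ("dishonest") preconditioner for the whole sequence."""
    M = np.linalg.inv(matrices[0])  # "factor" once, reuse for every system
    results = []
    for A in matrices:
        x = np.zeros_like(b)
        for it in range(1, max_iter + 1):
            r = b - A @ x            # current residual
            if np.linalg.norm(r) < tol:
                break
            x = x + M @ r            # apply the frozen preconditioner
        results.append((x, it))
    return results
```

Because consecutive matrices differ only slightly, the iteration matrix stays close to zero and convergence remains fast even though the preconditioner was built for the first system only; the UP idea goes further by cheaply updating M as the sequence advances.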
Research on orbit determination of spacecraft based on series image from motion platform
Liu, Qing-Wu1, 2; Cao, Zong-Sheng3; Zheng, Chang-Wen1; Hu, Xiao-Hui1
Source: Yuhang Xuebao/Journal of Astronautics, v 32, n 1, p 167-171, January 2011; Language: Chinese
; ISSN: 10001328; DOI: 10.3873/j.issn.1000-1328.2011.01.026;
Publisher: China Spaceflight Society
Author affiliation: 1 National Key Laboratory of Integrated Information System Technology, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 Graduate University of Chinese Academy of Sciences, Beijing 100190, China3 Beijing Special Engineering Design Institute, Beijing 100028, China
Abstract: A new method using image series from a monocular optical imaging system on a motion platform is developed for spacecraft orbit determination in the absence of position or velocity information of the target spacecraft. Through analysis of the characteristics of the imaging system and the spatial relation between the imaging system and the spacecraft, the concepts of real average angular velocity and apparent average angular velocity are introduced to design an equation that estimates the slope distance of the imaging system from the longitude and latitude data derived from the image series, and then calculates the spacecraft's initial state parameters. This method offers an approach to orbit determination from the image series of a monocular optical imaging system by first calculating the longitude and latitude from the image series and then estimating the slope distance. Finally, a simulation instance of a GEO satellite is used to validate the method. (7 refs.)Main Heading: SpacecraftControlled terms: Angular velocity - Estimation - Imaging systems - Optical image storage - OrbitsUncontrolled terms: GEO satellites - Image series - Initial state - Motion platforms - On orbit - Optical imaging system - Orbit determination - Series image - Slope distance estimation - Space-based - Spacecraft orbit - Velocity informationClassification Code: 931.1 Mechanics - 921 Mathematics - 746 Imaging Techniques - 741.3 Optical Devices and Systems - 741 Light, Optics and Optical Devices - 655.2 Satellites - 655.1 Spacecraft, General
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
An efficient dynamic authenticated key exchange protocol with selectable identities
Guo, Hua1, 2; Li, Zhoujun3; Mu, Yi4; Zhang, Fan2; Wu, Chuankun5; Teng, Jikai5
Source: Computers and Mathematics with Applications, v 61, n 9, p 2518-2527, May 2011
; ISSN: 08981221; DOI: 10.1016/j.camwa.2011.02.041;
Publisher: Elsevier Ltd
Author affiliation: 1 State Key Laboratory of Software Development Environment, Beihang University, Beijing, China2 School of Computer Science and Engineering, Beihang University, Beijing, China3 Beijing Key Laboratory of Network Technology, BeiHang University, Beijing, China4 School of Computer Science Software Engineering, University of Wollongong, NSW, Australia5 State Key Lab of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing, China
Abstract: In traditional identity-based cryptography, when a user holds multiple identities as its public keys, it has to manage an equal number of private keys. Recent advances in identity-based cryptography allow a single private key to map to multiple public keys (identities) that are selectable by the user. This approach simplifies private key management. Unfortunately, the existing schemes have a heavy computation overhead, since the private key generator has to authenticate all identities in order to generate a resultant private key. In particular, it has been considered a drawback that the data size for a user is proportional to the number of associated identities. Moreover, these schemes do not allow dynamic changes of user identities. When a user upgrades its identities, the private key generator (PKG) has to authenticate the identities and generate a new private key. To overcome these problems, in this paper we present an efficient dynamic identity-based key exchange protocol with selectable identities, and prove its security under the bilinear Diffie-Hellman assumption in the random oracle model. © 2011 Elsevier Ltd. All rights reserved. (17 refs.)Main Heading: Security of dataControlled terms: Cryptography - Security systemsUncontrolled terms: Authenticated key exchange protocols - Computation overheads - Computer security - Data size - Diffie-Hellman assumption - Dynamic changes - Identity based cryptography - Identity-based - Identity-based key exchange - Key exchange protocols - Multiple identities - Private key - Private key generators - Public keys - Random Oracle model - User identityClassification Code: 716 Telecommunication; Radar, Radio and Television - 717 Optical Communication - 718 Telephone Systems and Related Technologies; Line Communications - 723 Computer Software, Data Handling and Applications - 723.2 Data Processing and Image Processing - 914.1 Accidents and Accident Prevention
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Efficiently retrieving longest common route patterns of moving objects by summarizing turning regions
Huang, Guangyan1; Zhang, Yanchun1; He, Jing1; Ding, Zhiming2
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6634 LNAI, n PART 1, p 375-386, 2011, Advances in Knowledge Discovery and Data Mining - 15th Pacific-Asia Conference, PAKDD 2011, Proceedings; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642208409;
DOI: 10.1007/978-3-642-20841-6-31; Conference: 15th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2011, May 24, 2011 - May 27, 2011;
Publisher: Springer Verlag
Author affiliation: 1 Centre for Applied Informatics, School of Engineering and Science, Victoria University, Australia2 Institute of Software, Chinese Academy of Sciences, China
Abstract: The popularity of online location services provides opportunities to discover useful knowledge from trajectories of moving objects. This paper addresses the problem of mining longest common route (LCR) patterns. As a trajectory of a moving object is generally represented by a sequence of discrete locations sampled with an interval, the different trajectory instances along the same route may be denoted by different sequences of points (location, timestamp). Thus, the most challenging task in the mining process is to abstract trajectories by the right points. We propose a novel mining algorithm for LCR patterns based on turning regions (LCRTurning), which discovers a sequence of turning regions to abstract a trajectory and then maps the problem into the traditional problem of mining longest common subsequences (LCS). Effectiveness of LCRTurning algorithm is validated by an experimental study based on various sizes of simulated moving objects datasets. © 2011 Springer-Verlag. (13 refs.)Main Heading: Data miningControlled terms: Abstracting - Algorithms - Fading (radio) - Mining - TrajectoriesUncontrolled terms: Data sets - Discrete location - Experimental studies - Location services - longest common route patterns - Longest common subsequences - Mining algorithms - Mining process - Moving objects - Spatial-temporal data - Time-stampClassification Code: 404.1 Military Engineering - 502.1 Mine and Quarry Operations - 716.3 Radio Systems and Equipment - 723 Computer Software, Data Handling and Applications - 903.1 Information Sources and Analysis - 921 Mathematics
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
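The reduction described in the Huang et al. abstract maps each trajectory to a sequence of turning-region IDs and then runs the textbook longest-common-subsequence dynamic program. A minimal sketch of that final LCS step (the standard algorithm, not the paper's LCRTurning code):

```python
def longest_common_subsequence(a, b):
    """Textbook O(len(a)*len(b)) dynamic program returning one LCS; in the
    paper's setting the sequence elements would be turning-region IDs."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    # Walk back through the table to recover one longest common subsequence.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1])
            i, j = i - 1, j - 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]
```

Abstracting trajectories to turning regions first is what makes this applicable: two instances of the same route sampled at different times then share long region-ID subsequences even though their raw (location, timestamp) points differ.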
Multiresolution terrain generation based on implicit restricted quadtree
Zhang, Jie1; Zheng, Changwen1; Lv, Pin1; Hu, Xiaohui1
Source: Proceedings of the 12th IASTED International Conference on Computer Graphics and Imaging, CGIM 2011, p 40-46, 2011, Proceedings of the 12th IASTED International Conference on Computer Graphics and Imaging, CGIM 2011; ISBN-13: 9780889868656; DOI: 10.2316/P.2011.722-011; Conference: 12th IASTED International Conference on Computer Graphics and Imaging, CGIM 2011, February 16, 2011 - February 18, 2011;
Publisher: Acta Press
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, China
Abstract: Generation and simplification of multiresolution terrain models in real time is significant for large-scale terrain visualization applications demanding high rendering quality at interactive frame rates. An efficient multiresolution terrain modeling algorithm is proposed in this paper on the basis of a hybrid structure, the implicit restricted quadtree. Represented as a compact flag matrix, the implicit restricted quadtree is simple to construct and traverse, resulting in an effective construction and simplification process for the terrain model. A crack prevention approach to eliminate cracks on the multiresolution terrain model is also presented, in accordance with the relationship between the elements of the flag matrix. Simulation results suggest that the algorithm can produce realistic multiresolution terrain scenes in real time. (13 refs.)Main Heading: LandformsControlled terms: Algorithms - Computer simulation - Cracks - Interactive computer graphics - User interfaces - VisualizationUncontrolled terms: Crack prevention - Hybrid structure - Interactive frame rates - Large-scale terrain - Level-of-detail - matrix - Multi-resolutions - Quad trees - Real time - Real-time terrains - Rendering quality - Simulation result - Terrain model - Terrain ModelingClassification Code: 921 Mathematics - 902.1 Engineering Graphics - 723.5 Computer Applications - 723 Computer Software, Data Handling and Applications - 722.2 Computer Peripheral Equipment - 481.1 Geology - 421 Strength of Building Materials; Mechanical Properties
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Towards practical ABox abduction in large OWL DL ontologies
Du, Jianfeng1, 2; Qi, Guilin3, 4; Shen, Yi-Dong5; Pan, Jeff Z.6
Source: Proceedings of the National Conference on Artificial Intelligence, v 2, p 1160-1165, 2011, AAAI-11 / IAAI-11 - Proceedings of the 25th AAAI Conference on Artificial Intelligence and the 23rd Innovative Applications of Artificial Intelligence Conference; ISBN-13: 9781577355090; Conference: 25th AAAI Conference on Artificial Intelligence and the 23rd Innovative Applications of Artificial Intelligence Conference, AAAI-11 / IAAI-11, August 7, 2011 - August 11, 2011; Sponsor: Association for the Advancement of Artificial Intelligence (AAAI); National Science Foundation; AI Journal; Google, Inc.; Microsoft Research;
Publisher: AI Access Foundation
Author affiliation: 1 Guangdong University of Foreign Studies, Guangzhou 510006, China2 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, China3 School of Computer Science and Engineering, Southeast University, NanJing 211189, China4 State Key Laboratory for Novel Software Technology, Nanjing University, China5 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China6 Department of Computing Science, University of Aberdeen, Aberdeen AB243UE, United Kingdom
Abstract: ABox abduction is an important aspect of abductive reasoning in Description Logics (DLs). It finds all minimal sets of ABox axioms that should be added to a background ontology to enforce entailment of a specified set of ABox axioms. To the best of our knowledge, there is currently only one ABox abduction method in expressive DLs that computes abductive solutions with certain minimality guarantees. However, that method targets an ABox abduction problem that may have infinitely many abductive solutions, and it may not output an abductive solution in finite time. Hence, in this paper we propose a new ABox abduction problem which has only finitely many abductive solutions, and a novel method to solve it. The method reduces the original problem to an abduction problem in logic programming and solves it with Prolog engines. Experimental results show that the method is able to compute abductive solutions in benchmark OWL DL ontologies with large ABoxes. Copyright © 2011, Association for the Advancement of Artificial Intelligence. All rights reserved. (16 refs.)Main Heading: OntologyControlled terms: Artificial intelligence - Data description - Logic programming - PROLOG (programming language)Uncontrolled terms: Abductive reasoning - Description logic - Finite time - Minimality - Novel methodsClassification Code: 723 Computer Software, Data Handling and Applications - 903 Information Science
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
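The kind of answer computed in the Du et al. abstract — subset-minimal sets of added facts that make a goal entailed — can be illustrated with a toy propositional Horn version (a hypothetical stand-in, far simpler than real ABox abduction over DL ontologies, which the paper handles via reduction to logic programming):

```python
from itertools import combinations

def entails(facts, rules, goal):
    """Forward chaining over propositional Horn rules, each a (body, head) pair."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in known and set(body) <= known:
                known.add(head)
                changed = True
    return goal in known

def abduce(background, rules, abducibles, goal):
    """Enumerate all subset-minimal sets of abducible facts whose addition to
    the background entails the goal. Candidates are tried smallest first, so
    any candidate containing an already-found solution is non-minimal."""
    solutions = []
    for k in range(len(abducibles) + 1):
        for cand in combinations(sorted(abducibles), k):
            if any(set(s) <= set(cand) for s in solutions):
                continue  # a strict subset already works: not minimal
            if entails(set(background) | set(cand), rules, goal):
                solutions.append(cand)
    return solutions
```

The brute-force enumeration makes the finiteness requirement in the abstract concrete: restricting candidates to a fixed finite pool of abducibles is what guarantees finitely many solutions and termination.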
Bench4Q: A QoS-oriented e-commerce benchmark
Zhang, Wenbo1; Wang, Sa1; Wang, Wei1; Zhong, Hua1
Source: Proceedings - International Computer Software and Applications Conference, p 38-47, 2011, Proceedings - 35th Annual IEEE International Computer Software and Applications Conference, COMPSAC 2011
; ISSN: 07303157; ISBN-13: 9780769544397; DOI: 10.1109/COMPSAC.2011.14; Article number: 6032323; Conference: 35th Annual IEEE International Computer Software and Applications Conference, COMPSAC 2011, July 18, 2011 - July 21, 2011; Sponsor: IEEE; IEEE Computer Society;
Publisher: IEEE Computer Society
Author affiliation: 1 Technology Center of Software Engineering, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: E-commerce systems are typically QoS-sensitive, so QoS-oriented tunings of e-commerce servers are very important for such systems. However, existing e-commerce benchmarks are insufficient for supporting QoS-oriented tunings, because some critical QoS features of e-commerce systems cannot be precisely evaluated by them. One example of these features is the integrality of the service provided to customers, which is usually expressed as a session. This paper presents a QoS-oriented e-commerce benchmark, which is named Bench4Q and is an extension of TPC-W supporting QoS-oriented tuning of e-commerce servers. The main features of Bench4Q include: (1) supporting session-based metrics analysis and (2) simulating QoS-sensitive load for QoS-oriented capacity analysis. We illustrate the promising benefits of these features for QoS-oriented tuning of an e-commerce server through a series of Bench4Q benchmark runs on a typical e-commerce server. © 2011 IEEE. (42 refs.)Main Heading: Electronic commerceControlled terms: Benchmarking - Computer applications - Quality of serviceUncontrolled terms: Benchmark - Capacity analysis - E-commerce servers - E-commerce systems - TPC-WClassification Code: 913 Production Planning and Control; Manufacturing - 912 Industrial Engineering and Management - 723.5 Computer Applications - 723 Computer Software, Data Handling and Applications - 718 Telephone Systems and Related Technologies; Line Communications - 717 Optical Communication - 716 Telecommunication; Radar, Radio and Television
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Optimizing SpMV for diagonal sparse matrices on GPU
Sun, Xiangzheng1, 2, 3; Zhang, Yunquan1, 2; Wang, Ting1; Zhang, Xianyi1; Yuan, Liang1, 2, 3; Rao, Li1, 2, 3
Source: Proceedings of the International Conference on Parallel Processing, p 492-501, 2011, Proceedings - 2011 International Conference on Parallel Processing, ICPP 2011
; ISSN: 01903918; ISBN-13: 9780769545103; DOI: 10.1109/ICPP.2011.53; Article number: 6047217; Conference: 40th International Conference on Parallel Processing, ICPP 2011, September 13, 2011 - September 16, 2011; Sponsor: Int. Assoc. Comput. Commun. (IACC);
Publisher: Institute of Electrical and Electronics Engineers Inc.
Author affiliation: 1 Lab. of Parallel Software and Computational Science, Institute of Software, Chinese Academy of Sciences, Beijing, China2 State Key Lab. of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China3 Graduate University of Chinese Academy of Sciences, Beijing, China
Abstract: Sparse Matrix-Vector multiplication (SpMV) is an important computational kernel in scientific applications. Its performance highly depends on the nonzero distribution of sparse matrices. In this paper, we propose a new storage format for diagonal sparse matrices, defined as Compressed Row Segment with Diagonal-pattern (CRSD). In CRSD, we design diagonal patterns to represent the diagonal distribution. As Graphics Processing Units (GPUs) have tremendous computation power and OpenCL makes them more suitable for scientific computing, we implement the SpMV for the CRSD format on GPUs using OpenCL. Since the OpenCL kernels are compiled at runtime, we design the code generator to produce the codelets for all diagonal patterns after storing matrices into the CRSD format. Specifically, the generated codelets already contain the index information of nonzeros, which reduces the memory pressure during the SpMV operation. Furthermore, the code generator also utilizes properties of the memory architecture and thread scheduling on the GPUs to improve performance. In the evaluation, we select four storage formats from prior state-of-the-art implementations (Bell and Garland, 2009) on GPU. Experimental results demonstrate that the speedups reach up to 1.52 and 1.94 in comparison with the optimal implementation of the four formats for double and single precision respectively. We also evaluate on a two-socket quad-core Intel Xeon system. The speedups reach up to 11.93 and 12.79 in comparison with the CSR format under 8 threads for double and single precision respectively. © 2011 IEEE. 
(20 refs.)Main Heading: Matrix algebraControlled terms: Memory architecture - Network components - Optimization - Program processorsUncontrolled terms: Code generators - Computation power - Computational kernels - Graphics processing units - Index information - Memory pressure - Runtimes - Scientific applications - Single precision - Sparse matrices - Sparse matrix-vector multiplication - Storage formatsClassification Code: 703.1 Electric Networks - 722 Computer Systems and Equipment - 723.1 Computer Programming - 921.1 Algebra - 921.5 Optimization Techniques
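The record above compares CRSD against the CSR format without reproducing either format's details. As a point of reference, a minimal sketch of the baseline CSR sparse matrix-vector product (the CPU comparison point quoted in the abstract) is given below; all names and the example matrix are illustrative, and the CRSD layout itself is not shown.

```python
import numpy as np

def csr_spmv(values, col_idx, row_ptr, x):
    """Baseline CSR sparse matrix-vector product y = A @ x.
    values:  nonzeros in row-major order
    col_idx: column index of each nonzero
    row_ptr: row i's nonzeros occupy values[row_ptr[i]:row_ptr[i+1]]"""
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        start, end = row_ptr[i], row_ptr[i + 1]
        y[i] = np.dot(values[start:end], x[col_idx[start:end]])
    return y

# A = [[10,  0,  0],
#      [ 0, 20, 30],
#      [40,  0, 50]] in CSR form:
values  = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
col_idx = np.array([0, 1, 2, 0, 2])
row_ptr = np.array([0, 1, 3, 5])
x = np.array([1.0, 2.0, 3.0])
print(csr_spmv(values, col_idx, row_ptr, x))  # [ 10. 130. 190.]
```

Diagonal-oriented formats such as the paper's CRSD aim to avoid the indirect `col_idx` loads above by encoding the diagonal structure directly.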
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
On the number of infinite sequences with trivial initial segment complexity
Barmpalias, George1; Sterkenburg, T.F.2
Source: Theoretical Computer Science, v 412, n 52, p 7133-7146, December 9, 2011
; ISSN: 03043975; DOI: 10.1016/j.tcs.2011.09.020;
Publisher: Elsevier
Author affiliation: 1 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, P.O. Box 8718, Beijing 100190, China2 Institute for Logic, Language and Computation, Universiteit Van Amsterdam, P.O. Box 94242, 1090 GE Amsterdam, Netherlands
Abstract: The sequences which have trivial prefix-free initial segment complexity are known as K-trivial sets, and form a cumulative hierarchy of length ω. We show that the problem of finding the number of K-trivial sets in the various levels of the hierarchy is Δ⁰₃. This answers a question of Downey/Miller/Yu (see Downey (2010) [7, Section 10.1.4]) which also appears in Nies (2009) [17, Problem 5.2.16]. We also show the same for the hierarchy of the low for K sequences, which are the ones that (when used as oracles) do not give a shorter initial segment complexity compared to the computable oracles. In both cases the classification Δ⁰₃ is sharp. © 2011 Elsevier B.V. All rights reserved. (21 refs.)Uncontrolled terms: Arithmetical complexity - K-trivial sets - Kolmogorov complexity - Prefix-free - Trees
Simulation for light power distribution of 3D InGaN/GaN MQW LED with textured surface
Cheng, Li-Wen1; Sheng, Yang2; Xia, Chang-Sheng2; Lu, Wei1; Lestrade, Michel3; Li, Zhan-Ming3
Source: Optical and Quantum Electronics, v 42, n 11-13, p 739-745, October 2011
; ISSN: 03068919, E-ISSN: 1572817X; DOI: 10.1007/s11082-011-9470-y;
Publisher: Springer New York
Author affiliation: 1 National Lab for Infrared Physics, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, 500 Yu Tian Road, Shanghai 200083, China2 Crosslight Software China, Building JieDi, 2790 Zhongshan Bei Road, Shanghai 200063, China3 Crosslight Software Inc., 121-3989 Henning Drive, Burnaby, BC V6C 6P7, Canada
Abstract: In this paper, we introduce a full 3D simulation for light power distribution of an InGaN/GaN MQW LED with a textured surface. Device simulation was performed with the APSYS software to get power distribution of light sources inside the LED. Based on this, ray tracing simulation was carried out to get light power distribution outside the LED. During the process of ray tracing, the textured surface was treated as a special material interface whose reflectivity, transmittance and refraction angle are obtained with a Finite-Difference Time-Domain (FDTD) method instead of using the usual Fresnel formulas for normal material interfaces. By comparing the ray tracing results with and without the textured surface, we found that the textured surface yields a smoother transmitted power distribution and greatly improved power extraction efficiency, which are comparable to experiment. These effects may be further improved by optimizing the texture geometry. © 2011 Springer Science+Business Media, LLC. (13 refs.)Main Heading: SurfacesControlled terms: Computer software - Extraction - Finite difference time domain method - Interfaces (materials) - Light emitting diodes - Light sources - Ray tracing - Three dimensional - Three dimensional computer graphics - Time domain analysisUncontrolled terms: 3D simulations - Device simulations - Fresnel formula - InGaN/GaN - Light power - Light power distribution - Material interfaces - Power distributions - Power extraction efficiency - Ray tracing simulation - Refraction angles - Simulation - Texture geometry - Textured surface - Transmitted powerClassification Code: 951 Materials Science - 931 Classical Physics; Quantum Theory; Relativity - 921 Mathematics - 802.3 Chemical Operations - 744 Lasers - 741.1 Light/Optics - 723 Computer Software, Data Handling and Applications
Characterizations of one-way general quantum finite automata
Li, Lvzhou1; Qiu, Daowen1, 2, 3; Zou, Xiangfu1; Li, Lvjun1; Wu, Lihua1; Mateus, Paulo2
Source: Theoretical Computer Science, 2011; ISSN: 03043975; DOI: 10.1016/j.tcs.2011.10.021; Article in Press
Author affiliation: 1 Department of Computer Science, Sun Yat-sen University, Guangzhou 510006, China2 SQIG-Instituto de Telecomunicações, Departamento de Matemática, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1049-001, Lisbon, Portugal3 The State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100080, China
Abstract: Generally, unitary transformations limit the computational power of quantum finite automata (QFA). In this paper, we study a generalized model named one-way general quantum finite automata (1gQFA), in which each symbol in the input alphabet induces a trace-preserving quantum operation, instead of a unitary transformation. Two different kinds of 1gQFA will be studied: measure-once one-way general quantum finite automata (MO-1gQFA) where a measurement deciding to accept or reject is performed at the end of a computation, and measure-many one-way general quantum finite automata (MM-1gQFA) where a similar measurement is performed after each trace-preserving quantum operation on reading each input symbol. We characterize the measure-once model from three aspects: the closure property, the language recognition power, and the equivalence problem. We prove that MO-1gQFA recognize, with bounded error, precisely the set of all regular languages. Our results imply that some models of quantum finite automata proposed in the literature, which were expected to be more powerful, still cannot recognize non-regular languages. We prove that MM-1gQFA also recognize only regular languages with bounded error. Thus, MM-1gQFA and MO-1gQFA have the same language recognition power, in sharp contrast with traditional MO-1QFA and MM-1QFA, the former being strictly less powerful than the latter. Finally, we present a necessary and sufficient condition for two MM-1gQFA to be equivalent. © 2011 Elsevier B.V. All rights reserved.Main Heading: Automata theoryControlled terms: Equivalence classesUncontrolled terms: Bounded errors - Closure property - Computational power - Equivalence problem - Generalized models - Language recognition - Non-regular languages - Quantum finite automata - Quantum operations - Sharp contrast - Sufficient conditions - Unitary transformationsClassification Code: 723 Computer Software, Data Handling and Applications - 921 Mathematics
Improvement and analysis of VDP method in time/memory tradeoff applications
Wang, Wenhao1, 2; Lin, Dongdai1; Li, Zhenqi1, 2; Wang, Tianze1, 2
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 7043 LNCS, p 282-296, 2011, Information and Communications Security - 13th International Conference, ICICS 2011, Proceedings
; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642252426; DOI: 10.1007/978-3-642-25243-3_23; Conference: 13th International Conference on Information and Communications Security, ICICS 2011, November 23, 2011 - November 26, 2011; Sponsor: National Natural Science Foundation of China (NNSFC); The Microsoft Corporation; Beijing Tip Technology Corporation; Trusted Computing Group (TCG);
Publisher: Springer Verlag
Author affiliation: 1 SKLOIS, Institute of Software, Chinese Academy of Sciences, Beijing, China2 Graduate University, Chinese Academy of Sciences, Beijing, China
Abstract: In many cases, cryptanalysis of a cryptographic system can be interpreted as the process of inverting a one-way function. TMTO is designed to be a generic approach that can be used on any one-way function independent of the structure of the specific target system. It was first introduced to attack block ciphers by Hellman in 1980. The distinguished point (DP) method is a technique that reduces the number of table look-ups performed by Hellman's algorithm. A variant of the DP (VDP) method is introduced to reduce the amount of memory required to store the pre-computed tables while maintaining the same success rate and online time. Both the DP method and VDP method can be applied to Hellman tradeoff or rainbow tradeoff. We carefully examine the technical details of the VDP method and find that it is possible to construct functions for which the original method fails. Based on the analysis, we propose a modification of the VDP method. Furthermore, we present an accurate version of the tradeoff curve that does not ignore the effect of false alarms and takes storage reduction techniques into consideration. We find optimal parameter sets of this new method by minimizing the tradeoff coefficient. A more exact and fair comparison between tradeoff algorithms is also given, which shows that our method applied to the Hellman tradeoff performs best among them. © 2011 Springer-Verlag. 
(10 refs.)Main Heading: Security of dataControlled terms: Algorithms - CryptographyUncontrolled terms: Block ciphers - Cryptographic systems - False alarms - Generic approach - One-way functions - Online time - Optimal parameter - Reduction techniques - Target systems - Technical details - Trade-off coefficient - Trade-off curvesClassification Code: 716 Telecommunication; Radar, Radio and Television - 717 Optical Communication - 718 Telephone Systems and Related Technologies; Line Communications - 723 Computer Software, Data Handling and Applications - 723.2 Data Processing and Image Processing - 921 Mathematics
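The record above assumes familiarity with Hellman tables and the distinguished point (DP) method but does not restate them. The toy sketch below illustrates only the basic DP idea the paper builds on: precomputed chains are iterated until a distinguished point (here, low bits all zero) and only (endpoint, start) pairs are stored; the paper's VDP variant and its proposed fix are not shown. The function `f`, the parameters, and the table sizes are all illustrative, not taken from the paper.

```python
import hashlib

def f(x, nbits=16):
    # toy one-way step: hash and truncate back into a 16-bit search space
    h = hashlib.sha256(x.to_bytes(4, "big")).digest()
    return int.from_bytes(h[:4], "big") % (1 << nbits)

def is_dp(x, dp_bits=4):
    # a "distinguished point" has its low dp_bits equal to zero
    return x & ((1 << dp_bits) - 1) == 0

def build_table(starts, max_len=10_000):
    """Offline phase: iterate each chain to a DP, store endpoint -> start."""
    table = {}
    for s in starts:
        x = s
        for _ in range(max_len):
            x = f(x)
            if is_dp(x):
                table[x] = s
                break
    return table

def lookup(table, y, max_len=10_000):
    """Online phase: walk from the target image y to a DP; if stored,
    replay that chain to recover a preimage candidate of y."""
    x = y
    for _ in range(max_len):
        if is_dp(x):
            s = table.get(x)
            if s is not None:
                p = s
                for _ in range(max_len):   # bounded replay (false alarms possible)
                    if f(p) == y:
                        return p
                    p = f(p)
            return None
        x = f(x)
    return None
```

Storing only DP endpoints is what reduces both table look-ups and memory, at the cost of the false alarms whose effect the paper's refined tradeoff curve accounts for.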
HyperCrop: A hypervisor-based countermeasure for return oriented programming
Jiang, Jun1; Jia, Xiaoqi1; Feng, Dengguo1; Zhang, Shengzhi2; Liu, Peng2
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 7043 LNCS, p 360-373, 2011, Information and Communications Security - 13th International Conference, ICICS 2011, Proceedings
; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642252426; DOI: 10.1007/978-3-642-25243-3_29; Conference: 13th International Conference on Information and Communications Security, ICICS 2011, November 23, 2011 - November 26, 2011; Sponsor: National Natural Science Foundation of China (NNSFC); The Microsoft Corporation; Beijing Tip Technology Corporation; Trusted Computing Group (TCG);
Publisher: Springer Verlag
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 Pennsylvania State University, University Park, PA 16802, United States
Abstract: Return oriented programming (ROP) has recently caught great attention of both academia and industry. It reuses existing binary code instead of injecting its own code and is able to perform arbitrary computation due to its Turing-completeness. Hence, it can successfully bypass state-of-the-art code integrity mechanisms such as NICKLE and SecVisor. In this paper, we present HyperCrop, a hypervisor-based approach to counter such attacks. Since ROP attackers extract short instruction sequences ending in ret called "gadgets" and craft stack content to "chain" these gadgets together, our method recognizes that the key characteristic of ROP is to fill the stack with plenty of addresses that are within the range of libraries (e.g. libc). Accordingly, we inspect the content of the stack to see if a potential ROP attack exists. We have implemented a proof-of-concept system based on the open-source Xen hypervisor. The evaluation results exhibit that our solution is effective and efficient. © 2011 Springer-Verlag. (21 refs.)Main Heading: Security of dataControlled terms: Codes (symbols) - Open systemsUncontrolled terms: Code integrity - Evaluation results - Hypervisor - Hypervisor-based security - Key characteristics - Open source - Proof of concept - Stack contents - System-based - VirtualizationsClassification Code: 722 Computer Systems and Equipment - 723 Computer Software, Data Handling and Applications - 723.2 Data Processing and Image Processing
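The stack-inspection heuristic described in the abstract (many stack words falling inside library address ranges) can be illustrated with a small user-space sketch. This is not HyperCrop itself, which performs the check from a hypervisor; `LIBC_RANGE`, the threshold, and the sample snapshots are all hypothetical values for illustration.

```python
# Hypothetical address range where libc is mapped in the guest.
LIBC_RANGE = (0x7F0000000000, 0x7F0000200000)

def looks_like_rop(stack_words, libc_range=LIBC_RANGE, threshold=0.5):
    """Flag a stack snapshot whose words are mostly library addresses,
    the key ROP characteristic the abstract describes."""
    lo, hi = libc_range
    hits = sum(1 for w in stack_words if lo <= w < hi)
    return hits / max(len(stack_words), 1) >= threshold

# Illustrative snapshots: ordinary data vs. a chain of gadget addresses.
normal = [0x10, 0x7F0000001000, 0x20, 0x30, 0x40, 0x50]
rop    = [0x7F0000001234, 0x7F0000005678, 0x0, 0x7F00000ABCDE]
print(looks_like_rop(normal), looks_like_rop(rop))  # False True
```

A real deployment would read the guest stack via hypervisor introspection and use the actual library mappings rather than a fixed range.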
Research on key problems of covert channel in cloud computing
Wu, Jing-Zheng1, 2; Ding, Li-Ping1; Wang, Yong-Ji1, 3
Source: Tongxin Xuebao/Journal on Communications, v 32, n 9 A, p 184-203, September 2011; Language: Chinese
; ISSN: 1000436X;
Publisher: Editorial Board of Journal on Communications
Author affiliation: 1 National Engineering Research Center for Fundamental Software, Institute of Software, The Chinese Academy of Sciences, Beijing 100190, China2 Graduate University, The Chinese Academy of Sciences, Beijing 100049, China3 State Key Laboratory of Computer Science, Institute of Software, The Chinese Academy of Sciences, Beijing 100190, China
Abstract: First, the development of cloud computing, virtualization technology, and cloud security is surveyed. Then the evolution of covert channels in operating systems, databases, and networks over the last 40 years is reviewed. Several examples of covert channels are introduced, which indicate the necessity of this research. The potential covert channels in cloud computing are classified into two new categories from the aspects of theoretical research and engineering practice. Four key problems are pointed out, including the lack of a definition, of systematic identification and evaluation approaches, and of security criteria. The covert channel in cloud computing is formally defined. Finally, the academic and industrial values of covert channel research are presented. (102 refs.)Main Heading: Cloud computingControlled terms: Industrial researchUncontrolled terms: Covert channels - Engineering practices - Identification and evaluation - Security criterion - Theoretical research - Virtual technologyClassification Code: 722.4 Digital Computers and Systems - 901.3 Engineering Research
Computational soundness about formal encryption in the presence of secret shares and key cycles
Lei, Xinfeng1; Xue, Rui1; Yu, Ting2
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 7043 LNCS, p 29-41, 2011, Information and Communications Security - 13th International Conference, ICICS 2011, Proceedings
; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642252426; DOI: 10.1007/978-3-642-25243-3_3; Conference: 13th International Conference on Information and Communications Security, ICICS 2011, November 23, 2011 - November 26, 2011; Sponsor: National Natural Science Foundation of China (NNSFC); The Microsoft Corporation; Beijing Tip Technology Corporation; Trusted Computing Group (TCG);
Publisher: Springer Verlag
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing, China2 Department of Computer Science, North Carolina State University, United States
Abstract: The computational soundness of formal encryption is studied extensively following the work of Abadi and Rogaway [1]. Recent work considers the scenario in which secret sharing is needed and, separately, the scenario in which key cycles are present. The novel technique is the use of a co-inductive definition of the adversarial knowledge. In this paper, we prove a computational soundness theorem of formal encryption in the presence of both key cycles and secret shares at the same time, which is a non-trivial extension of former approaches. © 2011 Springer-Verlag. (28 refs.)Main Heading: Security of dataControlled terms: Computation theory - CryptographyUncontrolled terms: Co inductions - Computational soundness - Computational soundness theorem - Formal encryption - Non-trivial - Novel techniques - Secret sharingClassification Code: 716 Telecommunication; Radar, Radio and Television - 717 Optical Communication - 718 Telephone Systems and Related Technologies; Line Communications - 723 Computer Software, Data Handling and Applications - 723.2 Data Processing and Image Processing - 921 Mathematics
Two-dimensional clock synchronization algorithm for vehicular delay tolerant network
Zhao, Zhong-Hua1, 2, 3; Huang, Fu-Wei1; Liu, Yan4; Sun, Li-Min1
Source: Ruan Jian Xue Bao/Journal of Software, v 22, n SUPPL. 1, p 51-61, October 2011; Language: Chinese
; ISSN: 10009825;
Publisher: Chinese Academy of Sciences
Author affiliation: 1 Institute of Software, The Chinese Academy of Sciences, Beijing 100190, China2 State Key Laboratory of Information Security, Beijing 100049, China3 Graduate University, The Chinese Academy of Sciences, Beijing 100049, China4 School of Software and Microelectronics, Peking University, Beijing 102600, China
Abstract: In a vehicular delay tolerant network (VDTN), vehicle nodes move frequently, so at any given moment there may be no end-to-end communication path between two nodes, and connectivity is intermittent. The clock oscillators in the vehicle nodes are also susceptible to environmental factors, and the crystal frequency fluctuates irregularly. Traditional network clock synchronization algorithms therefore face many limitations and difficulties when introduced directly into VDTN. A two-dimensional clock synchronization algorithm for vehicular delay tolerant networks is proposed, which includes two synchronization processes, one in the vertical dimension and one in the horizontal dimension. The two-dimensional clock synchronization algorithm reduces the time synchronization error and improves the synchronization accuracy compared with one-way synchronization. The experimental results show that VDTN obtains a higher synchronization accuracy through the two-dimensional clock synchronization algorithm. © Copyright 2011, Editorial Department of Journal of Software. All rights reserved. (15 refs.)Main Heading: SynchronizationControlled terms: Algorithms - Crystal oscillators - Mechanical clocks - Two dimensional - Wireless networksUncontrolled terms: Clock oscillators - Clock synchronization - Delay Tolerant Networks - End-to-End communication - Environment factors - Synchronization process - Time synchronization - Two-dimension - Vehicular delay tolerant network - Vertical dimensionsClassification Code: 961 Systems Science - 943.3 Special Purpose Instruments - 921 Mathematics - 723.5 Computer Applications - 723 Computer Software, Data Handling and Applications - 722.4 Digital Computers and Systems - 713.2 Oscillators
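The record does not spell out the paper's two-dimensional algorithm; for context, the standard two-way message exchange that such schemes refine (and that the abstract contrasts with "one-way synchronization") estimates clock offset and round-trip delay from four timestamps. The sketch below uses the classic NTP-style formulas; the timestamp values are illustrative.

```python
def two_way_offset(t1, t2, t3, t4):
    """Classic two-way offset/delay estimate from one request/response exchange.
      t1: client send time      t2: server receive time
      t3: server send time      t4: client receive time
    Assumes roughly symmetric one-way delays."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # server clock minus client clock
    delay = (t4 - t1) - (t3 - t2)            # round-trip network delay
    return offset, delay

# Example: server clock 5.0 s ahead, symmetric 0.1 s one-way delay.
offset, delay = two_way_offset(t1=100.0, t2=105.1, t3=105.2, t4=100.3)
print(round(offset, 9), round(delay, 9))  # 5.0 0.2
```

One-way schemes can only observe t1 and t2, so offset and delay cannot be separated, which is the accuracy limitation the two-way exchange removes.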
Pattern-based moving object tracking
Liu, Kuien1; Ding, Zhiming1; Li, Mingshu1; Deng, Ke2; Zhou, Xiaofang2
Source: TDMA'11 - Proceedings of the 2011 International Workshop on Trajectory Data Mining and Analysis, p 5-14, 2011, TDMA'11 - Proceedings of the 2011 International Workshop on Trajectory Data Mining and Analysis; ISBN-13: 9781450309332; DOI: 10.1145/2030080.2030083; Conference: 2011 International Workshop on Trajectory Data Mining and Analysis, TDMA'11, Co-located with UbiComp 2011, September 18, 2011 - September 18, 2011; Sponsor: ACM SIGCHI; ACM SIGMOBILE;
Publisher: Association for Computing Machinery
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, Beijing, China2 School of ITEE, University of Queensland, Brisbane, Australia
Abstract: Monitoring the locations of a large number of objects that travel in a certain space is a popular problem because of its importance in various application scenarios. It brings the challenge of how to efficiently handle the large volume of location updates required to guarantee the error bound between an object's actual current location and its current location recorded in the tracking system. Current solutions predict future locations based on the recent movements of the moving object. However, such prediction is reliable only for the near future, and its accuracy is poor in the long term. This paper addresses this weakness by introducing movement patterns in Euclidean space based on the historical trajectories of moving objects. The dominant path pattern is proposed and employed in the moving object tracking system, which can estimate where an object will go next and how it will get there. Specifically, dominant path patterns are discovered and indexed by a novel access method for efficient query processing. In addition, pattern mining techniques that consider accuracy and coverage in discovering dominant path patterns are presented. The experiments demonstrate the superiority of the proposed method compared to existing methods, with up to 73% (91%) fewer overall location updates on a practical taxi (truck) dataset. Copyright 2011 ACM. (18 refs.)Main Heading: Tracking (position)Controlled terms: Data mining - Forecasting - Navigation - Taxicabs - TrajectoriesUncontrolled terms: Application scenario - Data sets - Dominant path pattern - Error bound - Euclidean spaces - Location update - Movement pattern - Moving object tracking - Moving objects - Novel access - Pattern mining - Prediction accuracy - Tracking systemClassification Code: 404.1 Military Engineering - 432.2 Passenger Highway Transportation - 716.2 Radar Systems and Equipment - 716.3 Radio Systems and Equipment - 723.3 Database Systems - 921 Mathematics
Handling missing data in software effort prediction with naive Bayes and em algorithm
Zhang, Wen1; Yang, Ye1; Wang, Qing1
Source: ACM International Conference Proceeding Series, 2011, PROMISE 2011 - 7th International Conference on Predictive Models in Software Engineering, Co-located with ESEM 2011; ISBN-13: 9781450307093; DOI: 10.1145/2020390.2020394; Article number: 2020394; Conference: 7th International Conference on Predictive Models in Software Engineering, PROMISE 2011, Co-located with ESEM 2011, September 20, 2011 - September 21, 2011;
Publisher: Association for Computing Machinery
Author affiliation: 1 Laboratory for Internet Software Technologies, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: Background: Missing data, which usually appears in software effort datasets, is becoming an important problem in software effort prediction. Aims: In this paper, we adapt naïve Bayes and EM (Expectation Maximization) for software effort prediction, and develop two embedded strategies: missing data toleration and missing data imputation, to handle the missing data in software effort datasets. Method: The missing data toleration strategy ignores missing values in software effort datasets, while the missing data imputation strategy uses observed values to impute missing values. Results: Experiments on ISBSG and CSBSG datasets demonstrate that: 1) both proposed strategies outperform BPNN with classic imputation techniques such as MI and MINI. Meanwhile, the imputation strategy outperforms the toleration strategy in most cases and has produced the highest accuracy of 75.15%; 2) the unlabeled projects used in training the prediction model have significantly improved the performance of effort prediction with naïve Bayes and EM under both strategies, especially when the ratio of the size of training data to the size of unlabeled data is at a relatively optimal level; 3) each class of software effort data exactly corresponds to a Gaussian component for both ISBSG and CSBSG datasets. Conclusion: Although initial experiments on the ISBSG dataset demonstrate some promising aspects of the proposed strategies, we cannot conclude that they generalize to all other software effort datasets. Copyright © 2011 ACM.
(30 refs.)Main Heading: Data miningControlled terms: Algorithms - Data handling - Embedded software - Experiments - Forecasting - Mathematical models - Models - Predictive control systems - Software engineeringUncontrolled terms: Data sets - Effort prediction - EM algorithms - Expectation Maximization - Gaussian components - Imputation strategy - Imputation techniques - Missing data - Missing data toleration - Missing values - Naive Bayes - Optimal level - Prediction model - Software effort - Software effort prediction - Training data - Unlabeled dataClassification Code: 723 Computer Software, Data Handling and Applications - 731.1 Control Systems - 901.3 Engineering Research - 902.1 Engineering Graphics - 921 Mathematics
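The two strategies named in the abstract can be illustrated in miniature. The sketch below is not the paper's method: it shows toleration as skipping NaN features in a Gaussian naïve-Bayes log-likelihood, and imputation in its simplest form (column-mean fill, i.e. a single step of the most basic EM scheme); all data values are made up.

```python
import numpy as np

def toleration_loglik(x, mean, var):
    """Missing-data toleration: skip NaN features in the Gaussian
    naive-Bayes log-likelihood instead of imputing them."""
    ll = 0.0
    for xi, m, v in zip(x, mean, var):
        if not np.isnan(xi):
            ll += -0.5 * np.log(2 * np.pi * v) - (xi - m) ** 2 / (2 * v)
    return ll

def mean_impute(X):
    """Missing-data imputation, simplest form: replace NaNs with the
    column mean (one M-step of the most basic EM scheme)."""
    X = X.copy()
    col_means = np.nanmean(X, axis=0)
    idx = np.where(np.isnan(X))
    X[idx] = np.take(col_means, idx[1])
    return X

X = np.array([[1.0, 2.0],
              [np.nan, 4.0],
              [3.0, np.nan]])
# fills NaNs with column means: [[1, 2], [2, 4], [3, 3]]
print(mean_impute(X))
```

The paper's EM variant re-estimates per-class Gaussian components rather than plain column means, which is why its imputation strategy can outperform toleration.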
Inferring specifications for resources from natural language API documentation
Zhong, Hao1; Zhang, Lu2, 3; Xie, Tao4; Mei, Hong2, 3
Source: Automated Software Engineering, v 18, n 3-4, p 227-261, December 2011, Special Issue on Selected Topics in Automated Software Engineering: Specification Mining and Defect Detection; Guest Editors: Mats P.E. Heimdahl and Gabriele Taentzer
; ISSN: 09288910, E-ISSN: 15737535; DOI: 10.1007/s10515-011-0082-3;
Publisher: Springer Netherlands
Author affiliation: 1 Laboratory for Internet Software Technologies, Institute of Software, Chinese Academy of Sciences, Beijing, China2 School of Electronics Engineering and Computer Science, Peking University, Beijing, China3 Key Laboratory of High Confidence Software Technologies, Peking University, Ministry of Education, Beijing, China4 Department of Computer Science, North Carolina State University, Raleigh, United States
Abstract: Many software libraries, especially those commercial ones, provide API documentation in natural languages to describe correct API usages. However, developers may still write code that is inconsistent with API documentation, partially because many developers are reluctant to carefully read API documentation as shown by existing research. As these inconsistencies may indicate defects, researchers have proposed various detection approaches, and these approaches need many known specifications. As it is tedious to write specifications manually for all APIs, various approaches have been proposed to mine specifications automatically. In the literature, most existing mining approaches rely on analyzing client code, so these mining approaches would fail to mine specifications when client code is not sufficient. Instead of analyzing client code, we propose an approach, called Doc2Spec, that infers resource specifications from API documentation in natural languages. We evaluated our approach on the Javadocs of five libraries. The results show that our approach performs well on real scale libraries, and infers various specifications with relatively high precisions, recalls, and F-scores. We further used inferred specifications to detect defects in open source projects. The results show that specifications inferred by Doc2Spec are useful to detect real defects in existing projects. © Springer Science+Business Media, LLC 2011. (75 refs.)Main Heading: SpecificationsControlled terms: Application programming interfaces (API) - Defects - Software engineeringUncontrolled terms: Client code - Detection approach - High precision - Natural languages - Open source projects - Real defects - Resource specification - Software librariesClassification Code: 423 Non Mechanical Properties and Tests of Building Materials - 723 Computer Software, Data Handling and Applications - 723.1 Computer Programming - 902.2 Codes and Standards - 951 Materials Science
A new spectral method on triangles
Li, Youyun2; Wang, Li-Lian1; Li, Huiyuan3; Ma, Heping4
Source: Lecture Notes in Computational Science and Engineering, v 76 LNCSE, p 237-246, 2011, Spectral and High Order Methods for Partial Differential Equations - Selected Papers from the ICOSAHOM'09 Conference
; ISSN: 14397358; ISBN-13: 9783642153365; DOI: 10.1007/978-3-642-15337-2_21; Conference: 8th International Conference on Spectral and High Order Methods, ICOSAHOM'09, June 22, 2009 - June 26, 2009;
Publisher: Springer Verlag
Author affiliation: 1 Division of Mathematical Sciences, School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore 637371, Singapore2 College of Mathematics and Computing Science, Changsha University of Science and Technology, Hunan 410004, China3 Institute of Software, Chinese Academy of Sciences, Beijing 100190, China4 Department of Mathematics, College of Sciences, Shanghai University, Shanghai 200444, China
Abstract: We propose in this note a spectral method on triangles based on a new rectangle-to-triangle mapping, which leads to more reasonable grid distributions and efficient implementations than the usual approaches based on the collapsed transform. We present the detailed implementation for spectral approximations on a triangle and discuss the extension to spectral-element methods and three dimensions. © 2011 Springer. (9 refs.)Main Heading: PhotomappingControlled terms: Differential equations - Drug products plants - Image segmentation - SpectroscopyUncontrolled terms: Efficient implementation - Spectral approximations - Spectral element method - Spectral methods - Three dimensionsClassification Code: 405.3 Surveying - 462.5 Biomaterials (including synthetics) - 741.1 Light/Optics - 801 Chemistry - 921.2 Calculus
Fast GMRES-GPU solver for large scale sparse linear systems
Liu, Youquan1, 2; Yin, Kangxue1; Wu, Enhua2, 3
Source: Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics, v 23, n 4, p 553-560, April 2011; Language: Chinese
; ISSN: 10039775;
Publisher: Institute of Computing Technology
Author affiliation: 1 School of Information Engineering, Chang'an University, Xi'an 710064, China2 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China3 Faculty of Science and Technology, University of Macau, China
Abstract: As a popular iterative method for solving linear equations, the restarted generalized minimal residual method (GMRES) has the advantages of fast convergence and good stability. This paper implements a parallel GMRES on the GPU based on CUDA. In particular, the sparse matrix-vector multiplication is optimized with coalesced memory access and shared memory, which significantly improves the performance. We tested the parallel GMRES on a GeForce GTX260 GPU, and compared its performance with that of the traditional GMRES on an Intel Core 2 Quad CPU Q9400@2.66GHz and an Intel Core i7 CPU 920@2.67GHz, showing speedups of 40 times and 20 times on average respectively. (12 refs.)Main Heading: Least squares approximationsControlled terms: Linear systems - Program processorsUncontrolled terms: CUDA - Fast convergence - Generalized minimal residual methods - Good stability - GPGPU - Shared memories - Sparse linear systems - Sparse matrix-vector multiplication - Speed-upsClassification Code: 723.1 Computer Programming - 921 Mathematics - 921.6 Numerical Methods - 961 Systems Science
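For readers unfamiliar with the method this record parallelizes, a minimal CPU reference of restarted GMRES(m) (Arnoldi iteration plus a small dense least-squares solve per cycle) is sketched below. This is the textbook algorithm, not the paper's CUDA implementation; the test matrix is illustrative.

```python
import numpy as np

def gmres_restarted(A, b, x0=None, restart=20, tol=1e-8, max_cycles=50):
    """Minimal restarted GMRES(m): a CPU reference of the method the
    paper parallelizes on the GPU. Dense A for simplicity."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for _ in range(max_cycles):
        r = b - A @ x
        beta = np.linalg.norm(r)
        if beta < tol:
            return x
        m = restart
        Q = np.zeros((n, m + 1))          # Arnoldi basis
        H = np.zeros((m + 1, m))          # upper Hessenberg matrix
        Q[:, 0] = r / beta
        for j in range(m):
            w = A @ Q[:, j]
            for i in range(j + 1):        # modified Gram-Schmidt
                H[i, j] = Q[:, i] @ w
                w -= H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-14:       # happy breakdown
                m = j + 1
                break
            Q[:, j + 1] = w / H[j + 1, j]
        e1 = np.zeros(m + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[: m + 1, :m], e1, rcond=None)
        x = x + Q[:, :m] @ y              # update, then restart
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = gmres_restarted(A, b, restart=3)
print(np.linalg.norm(A @ x - b) < 1e-8)  # True
```

The dominant cost per iteration is the matrix-vector product `A @ Q[:, j]`, which is exactly the sparse kernel the paper moves to the GPU.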
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
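The record above centers on optimizing sparse matrix-vector multiplication (SpMV), the dominant kernel inside GMRES. As a reference point, here is the serial CSR SpMV loop that such GPU implementations parallelize (typically one thread or warp per row); the function name and NumPy formulation are illustrative, not taken from the paper.

```python
import numpy as np

def csr_spmv(data, indices, indptr, x):
    """y = A @ x for a matrix stored in CSR form.

    data    : nonzero values, row by row
    indices : column index of each nonzero
    indptr  : indptr[r]..indptr[r+1] delimits row r's nonzeros
    """
    n = len(indptr) - 1
    y = np.zeros(n)
    for row in range(n):
        lo, hi = indptr[row], indptr[row + 1]
        # dot product of row's nonzeros with the gathered entries of x
        y[row] = data[lo:hi] @ x[indices[lo:hi]]
    return y
```

On a GPU, the per-row dot products are independent, which is what makes the kernel a natural fit for CUDA; the memory-layout tuning the abstract mentions is about making neighboring threads read neighboring array elements.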
A file system for malware analysis and protection
Liang, Hong-Liang1, 2; Dong, Shou-Ji3, 4; Liu, Shu-Chang1
Source: Beijing Youdian Daxue Xuebao/Journal of Beijing University of Posts and Telecommunications, v 34, n 3, p 58-61, June 2011; Language: Chinese
; ISSN: 10075321;
Publisher: Beijing University of Posts and Telecommunications
Author affiliation: 1 School of Computer Science, Beijing University of Posts and Telecommunications, Beijing 100876, China2 Institute of Software, Chinese Acad. of Sci., Beijing 100190, China3 Institute of National Security Science and Technology, Beijing 100044, China4 School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044, China
Abstract: Malware and the threats it poses are growing rapidly. A file-system-level method is presented for analyzing and defending against malware while reducing losses as far as possible, and it is implemented as a file system for malware analysis and protection (MAPFS). With checkpointing and file-versioning technology, MAPFS can record the modifications made to the file system while a process runs. These records are important for analyzing malware behavior and may be used to recover files damaged by malware. Experiments show that this method is effective for the analysis of and defense against malware, and that MAPFS incurs only a small performance loss of less than 10 percent. (12 refs.)Main Heading: Computer crimeUncontrolled terms: Check points - File systems - Malware - Malware analysis - Malwares - Versioning - Versioning technologiesClassification Code: 723 Computer Software, Data Handling and Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
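The MAPFS record above rests on file versioning: every modification is retained so that pre-infection contents can be recovered. A toy in-memory sketch of that idea (the class and method names are hypothetical, and real MAPFS works at the kernel file-system layer, not in user space):

```python
class VersionedStore:
    """Toy versioned store: every write appends a new version instead of
    overwriting, so earlier (pre-infection) contents remain recoverable."""

    def __init__(self):
        self._versions = {}  # path -> list of successive file contents

    def write(self, path, data):
        self._versions.setdefault(path, []).append(data)

    def read(self, path, version=-1):
        # version=-1 returns the latest version, 0 the oldest
        return self._versions[path][version]

    def rollback(self, path, version):
        """Discard everything written after `version` (damage recovery)."""
        del self._versions[path][version + 1:]
```

The version log doubles as the behavior record the abstract mentions: the sequence of writes a suspicious process made is exactly the data a malware analyst inspects.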
A formal approach for incremental construction with an application to autonomous robotic systems
Bensalem, Saddek1; De Silva, Lavindra3; Griesmayer, Andreas1; Ingrand, Felix3; Legay, Axel4; Yan, Rongjie1, 2
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6708 LNCS, p 116-132, 2011, Software Composition - 10th International Conference, SC 2011, Proceedings
; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642220449; DOI: 10.1007/978-3-642-22045-6_8; Conference: 10th International Conference on Software Composition, SC 2011, June 30, 2011 - July 1, 2011;
Publisher: Springer Verlag
Author affiliation: 1 Verimag Laboratory, Université Joseph Fourier, CNRS, Grenoble, France2 State Key Laboratory of Computer Science, Institute of Software, CAS, Beijing, China3 LAAS/CNRS, Université de Toulouse, France4 INRIA/IRISA, Rennes, France
Abstract: In this paper, we propose a new workflow for the design of composite systems. In contrast to existing approaches, which build on traditional techniques for single-component systems, our methodology is incremental in terms of both the design and the verification process. The approach exploits the hierarchy between components and can detect errors at an early stage of the design. As a second contribution of the paper, we apply our methodology to automatically generate C code to coordinate the various modules of an autonomous robot. To the best of our knowledge, this is the first time that such coordination code has been generated automatically. © 2011 Springer-Verlag. (30 refs.)Main Heading: RobotsControlled terms: DesignUncontrolled terms: Autonomous robot - Autonomous robotic systems - C codes - Design of composites - Formal approach - Incremental construction - Single-component systems - Traditional techniques - Verification processClassification Code: 408 Structural Design - 731.5 Robotics
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Construction of complete description logics ontologies using attribute exploration
Tang, Su-Qin1, 2; Cai, Zi-Xing1; Wang, Ju2; Jiang, Yun-Cheng2, 3
Source: Moshi Shibie yu Rengong Zhineng/Pattern Recognition and Artificial Intelligence, v 24, n 1, p 1-13, February 2011; Language: Chinese
; ISSN: 10036059;
Publisher: Journal Of Pattern Recognition and Artificial Intelligence
Author affiliation: 1 School of Information Science and Engineering, Central South University, Changsha 410083, China2 School of Computer Science and Information Engineering, Guangxi Normal University, Guilin 541004, China3 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: The importance and current research progress of description logics ontologies are analyzed, with emphasis on the application of attribute exploration to the construction of description logics ontologies with respect to their completeness. The knowledge that domain experts are assumed to possess when using attribute exploration to construct description logics ontologies is often insufficient; the construction of complete description logics ontologies in circumstances where domain experts do not have all the required domain knowledge is therefore discussed as well. Moreover, a definition of the completeness of description logics ontologies under description contexts is provided, and incomplete contexts under description contexts are defined. An algorithm for constructing description logics ontologies is given for the case where domain experts are unable to state the attribute implications between attribute sets. The proposed algorithm is used to acquire the relevant knowledge and then construct the knowledge base, and it is proved that the description logics ontologies constructed by the proposed method are complete. (35 refs.)Main Heading: Data descriptionControlled terms: Algorithms - Formal languages - Information analysis - Knowledge based systems - OntologyUncontrolled terms: Attribute exploration - Description logic - Description logics - Domain experts - Formal Concept Analysis - Knowledge base - Research progressClassification Code: 723 Computer Software, Data Handling and Applications - 903 Information Science - 903.1 Information Sources and Analysis - 921 Mathematics
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
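Attribute exploration, as used in the record above, is built on the two derivation operators of formal concept analysis and the closure they induce. A minimal sketch of those operators over a tiny context (the dict-based representation and function names are illustrative, not from the paper):

```python
def extent(attrs, context):
    """Objects possessing every attribute in `attrs` (FCA derivation ')."""
    return {g for g, has in context.items() if attrs <= has}

def intent(objs, context):
    """Attributes shared by every object in `objs`."""
    all_attrs = set().union(*context.values())
    return {m for m in all_attrs if all(m in context[g] for g in objs)}

def closure(attrs, context):
    """attrs'' - the closed attribute set attribute exploration asks
    the domain expert about (accept the implication or give a counterexample)."""
    return intent(extent(attrs, context), context)
```

Attribute exploration repeatedly computes such closures and presents the resulting implications to the expert; the paper's contribution concerns what to do when the expert cannot decide all of them.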
A hybrid line-drawing algorithm via shape cues shading
Lu, Jian1, 2; Wang, Shandong1, 2; Liu, Xuehui1; Wu, Enhua1, 3
Source: Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics, v 23, n 2, p 208-215, February 2011; Language: Chinese
; ISSN: 10039775;
Publisher: Institute of Computing Technology
Author affiliation: 1 State Key Laboratory of Computer Science, Institute of Software Chinese Academy of Sciences, Beijing 100190, China2 Graduate University of Chinese Academy of Sciences, Beijing 100049, China3 Department of Computer and Information Science, Faculty of Science and Technology, University of Macau, China
Abstract: This paper presents a GPU-based real-time line-drawing algorithm. By approximating the differential properties of the 3D mesh, the algorithm combines view-dependent and view-independent line-drawing approaches to render feature lines in real time. It first extracts view-dependent feature lines in image space according to view curvatures, and then renders view-independent feature lines as a supplement by sampling a style texture according to the normalized principal curvatures. The algorithm runs entirely in the pixel shader, where the two cues (view-dependent and view-independent) are processed, and produces pleasing results. As the experimental results show, since all computation takes place in image space on the GPU, performance is high. Moreover, by using the style texture, the algorithm gives good control over styling, which overcomes a shortcoming of image-based line-drawing algorithms. The algorithm is not only a real-time line-drawing method but also a first step toward further applications in artistic simulation, such as Chinese ink painting. (16 refs.)Main Heading: AlgorithmsControlled terms: TexturesUncontrolled terms: 3D meshes - Artistic simulation - Drawing algorithms - Feature lines - Image space - Image-based - Ink paintings - Line-drawing algorithm - Non-photorealistic rendering - Pixel shader - Real time - Time line - View-dependentClassification Code: 723 Computer Software, Data Handling and Applications - 921 Mathematics - 933 Solid State Physics
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
A pseudo-Zernike moment based audio watermarking scheme robust against desynchronization attacks
Wang, Xiang-Yang1, 2; Ma, Tian-Xiao1; Niu, Pan-Pan1
Source: Computers and Electrical Engineering, v 37, n 4, p 425-443, July 2011
; ISSN: 00457906; DOI: 10.1016/j.compeleceng.2011.05.011;
Publisher: Elsevier Ltd
Author affiliation: 1 School of Computer and Information Technology, Liaoning Normal University, Dalian 116029, China2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: Desynchronization attacks are among the most difficult attacks to resist, because they desynchronize the location of the watermark and hence cause incorrect watermark detection. Designing an audio watermarking scheme that is robust against desynchronization attacks is challenging. Based on pseudo-Zernike moments and a synchronization code, we propose a new digital audio watermarking algorithm with good auditory quality and reasonable resistance to desynchronization attacks. First, the original digital audio is segmented and each segment is cut into two parts. Second, using a spatial watermarking technique, the synchronization code is embedded into the statistical average value of the audio samples in the first part. The 1D digital audio signal in the second part is then mapped into 2D form and its pseudo-Zernike moments are calculated. Finally, the watermark bit is embedded into the average value of the moduli of the low-order pseudo-Zernike moments. By combining this with a search for two adjacent synchronization codes, the algorithm can extract the watermark without the help of the original digital audio signal. Simulation results show that the proposed watermarking scheme is not only inaudible and robust against common signal processing operations such as MP3 compression, noise addition, resampling, and re-quantization, but also robust against desynchronization attacks such as random cropping, amplitude variation, pitch shifting, and jittering. © 2011 Elsevier Ltd. All rights reserved.
(24 refs.)Main Heading: Computer crimeControlled terms: Algorithms - Digital watermarking - Face recognition - Synchronization - WatermarkingUncontrolled terms: Amplitude variations - Audio samples - Audio watermarking schemes - Auditory quality - Average values - Desynchronization attack - Digital audio - Digital audio signals - Digital audio watermarking - MP3 compression - Noise addition - Pitch shifting - Pseudo-Zernike moments - Random cropping - Resampling - Robust audio watermarking - Simulation result - Spatial watermarking technique - Statistics average value - Synchronization codes - Watermark detection - Watermarking schemesClassification Code: 716 Telecommunication; Radar, Radio and Television - 723 Computer Software, Data Handling and Applications - 723.2 Data Processing and Image Processing - 811.1.1 Papermaking Processes - 921 Mathematics - 961 Systems Science
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
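The record above embeds bits into statistical averages of audio samples. A common way to do this, shown here as an illustration rather than the authors' exact scheme, is mean quantization: shift a segment so its mean lands on an even or odd multiple of a quantization step (all parameter names and the step size are assumptions).

```python
import numpy as np

def embed_bit(segment, bit, delta=0.01):
    """Shift `segment` so its mean sits on an even (bit 0) or odd (bit 1)
    multiple of `delta` - a simple mean-quantization embedder."""
    q = int(np.round(segment.mean() / delta))
    if q % 2 != bit:
        q += 1  # move to the nearest multiple with the right parity
    return segment + (q * delta - segment.mean())

def extract_bit(segment, delta=0.01):
    """Blind extraction: no original audio needed, only `delta`."""
    return int(np.round(segment.mean() / delta)) % 2
```

Because the detector only needs the segment mean and the step size, extraction is blind, which mirrors the abstract's claim of extracting the watermark without the original signal; robustness comes from the mean being stable under mild signal processing.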
Embedded extended visual cryptography schemes
Liu, Feng1; Wu, Chuankun1
Source: IEEE Transactions on Information Forensics and Security, v 6, n 2, p 307-322, June 2011
; ISSN: 15566013; DOI: 10.1109/TIFS.2011.2116782; Article number: 5719550;
Publisher: Institute of Electrical and Electronics Engineers Inc.
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: A visual cryptography scheme (VCS) is a kind of secret sharing scheme which allows the encoding of a secret image into n shares distributed to n participants. The beauty of such a scheme is that a set of qualified participants is able to recover the secret image without any cryptographic knowledge or computation devices. An extended visual cryptography scheme (EVCS) is a kind of VCS which consists of meaningful shares (compared to the random shares of traditional VCS). In this paper, we propose a construction of EVCS which is realized by embedding random shares into meaningful covering shares, and we call it the embedded EVCS. Experimental results systematically compare some of the well-known EVCSs proposed in recent years, and show that the proposed embedded EVCS has competitive visual quality compared with many of the well-known EVCSs in the literature. In addition, it has many specific advantages over these well-known EVCSs. © 2011 IEEE. (30 refs.)Main Heading: CryptographyUncontrolled terms: Extended Visual Cryptography Scheme - Secret images - Secret sharing - Secret sharing schemes - Visual cryptography schemes - Visual qualitiesClassification Code: 716 Telecommunication; Radar, Radio and Television - 717 Optical Communication - 718 Telephone Systems and Related Technologies; Line Communications - 723 Computer Software, Data Handling and Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
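The embedded EVCS in the record above builds on the basic random-share VCS. As background, here is the classic (2,2)-VCS with horizontal pixel expansion 2 (this is the textbook construction, not the paper's embedded variant; names and the expansion choice are illustrative):

```python
import numpy as np

def vcs_2of2_shares(secret, rng=None):
    """(2,2)-VCS: each secret pixel (0 = white, 1 = black) expands into two
    subpixels per share. A white pixel puts the same random pattern in both
    shares; a black pixel puts complementary patterns, so stacking the
    shares turns it fully black. Each share alone is uniformly random."""
    rng = rng or np.random.default_rng()
    h, w = secret.shape
    s1 = np.zeros((h, 2 * w), dtype=np.uint8)
    s2 = np.zeros((h, 2 * w), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            pat = np.array([0, 1], dtype=np.uint8)
            if rng.random() < 0.5:
                pat = pat[::-1]
            s1[i, 2 * j:2 * j + 2] = pat
            s2[i, 2 * j:2 * j + 2] = (1 - pat) if secret[i, j] else pat
    return s1, s2

def stack(s1, s2):
    """Simulate physically stacking two transparencies: OR of black subpixels."""
    return np.maximum(s1, s2)
```

After stacking, black secret pixels become all-black subpixel pairs while white pixels stay half-black, which is the contrast the human eye exploits; the paper's contribution is making the individual shares look like meaningful cover images instead of noise.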
Results on the immunity of Boolean functions against probabilistic algebraic attacks
Liu, Meicheng1; Lin, Dongdai1; Pei, Dingyi2
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6812 LNCS, p 34-46, 2011, Information Security and Privacy - 16th Australasian Conference, ACISP 2011, Proceedings
; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642224966; DOI: 10.1007/978-3-642-22497-3_3; Conference: 16th Australasian Conference on Information Security and Privacy, ACISP 2011, July 11, 2011 - July 13, 2011;
Publisher: Springer Verlag
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 College of Mathematics and Information Sciences, Guangzhou University, Guangzhou 510006, China
Abstract: In this paper, we study the immunity of Boolean functions against probabilistic algebraic attacks. We first show that there are functions which, when used as filters in a linear feedback shift register based nonlinear filter generator, allow probabilistic algebraic attacks to outperform deterministic ones. Then we introduce two notions, algebraic immunity distance and k-error algebraic immunity, to measure the ability of Boolean functions to resist probabilistic algebraic attacks. We analyze both lower and upper bounds on algebraic immunity distance, and also present the relations among algebraic immunity distance, k-error algebraic immunity, algebraic immunity, and high order nonlinearity. © 2011 Springer-Verlag. (25 refs.)Main Heading: Boolean functionsControlled terms: Algebra - Nonlinear feedback - Security of data - Shift registersUncontrolled terms: Algebraic attack - Algebraic immunity - High order - Linear feedback shift registers - Lower and upper bounds - Non-Linearity - Nonlinear filter generatorsClassification Code: 721.3 Computer Circuits - 723.2 Data Processing and Image Processing - 731.1 Control Systems - 921.1 Algebra
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
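The notions in the record above extend classical algebraic immunity: the least degree d such that f or f+1 has a nonzero annihilator of degree at most d. For small n this can be computed directly by linear algebra over GF(2) (a brute-force sketch; function names are illustrative, and this is the standard definition rather than the paper's probabilistic variants):

```python
from itertools import combinations
import numpy as np

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2), by Gaussian elimination."""
    M = M.copy()
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def algebraic_immunity(tt, n):
    """Least d such that f or f+1 has a nonzero annihilator of degree <= d.
    tt is the truth table of f: tt[x] for x in 0..2^n-1, bit i of x = x_i."""
    def has_annihilator(table, d):
        pts = [x for x in range(2 ** n) if table[x]]
        if not pts:
            return True  # constant 1 annihilates the zero function
        monos = [m for k in range(d + 1) for m in combinations(range(n), k)]
        # row per support point, column per monomial of degree <= d;
        # a nontrivial GF(2) nullspace vector is an annihilator
        M = np.array([[int(all((x >> i) & 1 for i in m)) for m in monos]
                      for x in pts], dtype=np.uint8)
        return gf2_rank(M) < len(monos)
    for d in range(n + 1):
        if has_annihilator(tt, d) or has_annihilator([1 - b for b in tt], d):
            return d
```

The cost grows exponentially in n, which is why research on bounds and relations (as in this paper) matters for cipher-sized functions.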
A dynamic fault localization technique with noise reduction for java programs
Xu, Jian1; Chan, W.K.2; Zhang, Zhenyu3; Tse, T.H.4; Li, Shanping1
Source: Proceedings - International Conference on Quality Software, p 11-20, 2011, Proceedings - 11th International Conference on Quality Software, QSIC 2011
; ISSN: 15506002; ISBN-13: 9780769544687; DOI: 10.1109/QSIC.2011.32; Article number: 6004307; Conference: 11th International Conference on Quality Software, QSIC 2011, July 13, 2011 - July 14, 2011; Sponsor: Computer Science School of the Universidad Complutense de Madrid; Madrid Convention Bureau of the Madrid City Council;
Publisher: IEEE Computer Society
Author affiliation: 1 Department of Computer Science, Zhejiang University, Hangzhou, China2 Department of Computer Science, City University of Hong Kong, Tat Chee Avenue, Hong Kong, Hong Kong3 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China4 Department of Computer Science, University of Hong Kong, Pokfulam, Hong Kong, Hong Kong
Abstract: Existing fault localization techniques combine various program features and similarity coefficients with the aim of precisely assessing the similarities among the dynamic spectra of these program features to predict the locations of faults. Many such techniques estimate the probability that a particular program feature causes the observed failures, but ignore the noise introduced by other features in the same executions that may also lead to those failures. In this paper, we propose both the use of chains of key basic blocks as program features and an innovative similarity coefficient that has a noise-reduction effect. We have implemented our proposal in a technique known as MKBC. We have empirically evaluated MKBC using three real-life medium-sized programs with real faults. The results show that MKBC outperforms Tarantula, Jaccard, SBI, and Ochiai significantly. © 2011 IEEE. (27 refs.)Main Heading: Computer softwareControlled terms: Acoustic noise measurement - Java programming language - Tracking (position)Uncontrolled terms: Basic blocks - Dynamic faults - Dynamic spectrum - Fault localization - Java program - Key block - Noise reduction effect - Noise reductions - Similarity coefficientsClassification Code: 716.2 Radar Systems and Equipment - 723 Computer Software, Data Handling and Applications - 723.1.1 Computer Programming Languages - 941.2 Acoustic Variables Measurements
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
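The baselines the record above compares against (Tarantula, Jaccard, Ochiai) are classic spectrum-based suspiciousness coefficients computed from per-entity coverage counts. Their standard formulas, with hypothetical parameter names, look like this (MKBC's own coefficient is not shown here):

```python
import math

def sbfl_scores(ef, ep, nf, np_):
    """Suspiciousness of one program entity from its coverage spectrum.

    ef / ep : failed / passed runs that execute the entity
    nf / np_: failed / passed runs that do not
    """
    F, P = ef + nf, ep + np_  # total failed / passed runs
    tarantula = ((ef / F) / (ef / F + ep / P)
                 if F and P and (ef or ep) else 0.0)
    jaccard = ef / (F + ep) if F + ep else 0.0
    ochiai = ef / math.sqrt(F * (ef + ep)) if F and (ef + ep) else 0.0
    return {"tarantula": tarantula, "jaccard": jaccard, "ochiai": ochiai}
```

Entities are then ranked by score and examined top-down; the paper's point is that scoring each feature in isolation lets co-covered features inject noise into these counts.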
Identification and evaluation of sharing memory covert timing channel in Xen virtual machines
Wu, Jing Zheng1, 3; Ding, Liping1; Wang, Yongji1, 2; Han, Wei1, 3
Source: Proceedings - 2011 IEEE 4th International Conference on Cloud Computing, CLOUD 2011, p 283-291, 2011, Proceedings - 2011 IEEE 4th International Conference on Cloud Computing, CLOUD 2011; ISBN-13: 9780769544601; DOI: 10.1109/CLOUD.2011.10; Article number: 6008721; Conference: 2011 IEEE 4th International Conference on Cloud Computing, CLOUD 2011, July 4, 2011 - July 9, 2011; Sponsor: IEEE; IEEE CS; TC-SVC; IBM; SAP;
Publisher: IEEE Computer Society
Author affiliation: 1 National Engineering Research Center for Fundamental Software, Institute of Software, Beijing, China2 State Key Laboratory of Computer Science, Institute of Software, Beijing, China3 Graduate School Chinese Academy of Science, Beijing, China
Abstract: Virtualization technology is the basis of cloud computing, and the most important property of virtualization is isolation. Isolation guarantees security between virtual machines. However, a covert channel breaks the isolation and leaks sensitive messages covertly. In this paper, we formally model isolation as noninterference, and define all transmission channels violating noninterference as covert channels. With this definition, we present an identification method based on information flow. This method first compiles the source code into a more structured equivalent code with LLVM. A search algorithm is then proposed to obtain the shared resources and the operational processes in the equivalent code. A new covert channel termed the sharing memory covert timing channel (SMCTC) is identified in the Xen source code. We construct a channel scenario for SMCTC and evaluate its threat with the metrics of channel capacity and transmission accuracy. The results show that SMCTC poses a much greater threat than CPU-load-based and cache-based covert channels. © 2011 IEEE. (42 refs.)Main Heading: Cloud computingControlled terms: Virtual realityUncontrolled terms: Channel identification - Covert channels - Covert timing channels - Equivalent codes - Identification and evaluation - Identification method - Information flows - Operational process - Performance evaluation - Search Algorithms - Shared resources - Sharing memories - Source codes - Transmission channels - Virtual machines - Virtualizations - XenClassification Code: 722.4 Digital Computers and Systems - 723 Computer Software, Data Handling and Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
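The record above scores SMCTC by channel capacity and transmission accuracy. A common way to turn a measured bit-error rate into a capacity bound is to model the covert channel as a binary symmetric channel (an assumption for illustration; the paper's exact capacity metric may differ):

```python
import math

def bsc_capacity(p):
    """Shannon capacity (bits per channel use) of a binary symmetric
    channel with bit-error probability p: C = 1 - H(p)."""
    if p <= 0.0 or p >= 1.0:
        return 1.0  # error-free (or perfectly inverted) channel
    return 1.0 + p * math.log2(p) + (1 - p) * math.log2(1 - p)
```

Multiplying this per-bit capacity by the covert channel's symbol rate gives a bandwidth estimate in bits per second, which is the kind of figure used to rank covert channels by threat.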
Improved mobile robot's corridor-scene classifier based on probabilistic spiking neuron model
Wang, Xiuqing1; Hou, Zeng-Guang2; Tan, Min2; Wang, Yongji3; Fu, Siyao4; Chen, Lihui1
Source: Proceedings of the 10th IEEE International Conference on Cognitive Informatics and Cognitive Computing, ICCI*CC 2011, p 348-355, 2011, Proceedings of the 10th IEEE International Conference on Cognitive Informatics and Cognitive Computing, ICCI*CC 2011; ISBN-13: 9781457716966; DOI: 10.1109/COGINF.2011.6016164; Article number: 6016164; Conference: 10th IEEE International Conference on Cognitive Informatics and Cognitive Computing, ICCI*CC 2011, August 18, 2011 - August 20, 2011; Sponsor: IEEE; IEEE Computer Society (CS); IEEE Computational Intelligence Society (CIS); University of Calgary; The IEEE ICCI Steering Committee;
Publisher: IEEE Computer Society
Author affiliation: 1 Hebei Normal University, Shijiazhuang 050031, China2 Key Laboratory of Complex Systems and Intelligence Science, Institute of Automation, Chinese Academy of Sciences, Beijing 100090, China3 Laboratory for Internet Technologies, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China4 Minzu University of China, Beijing 100081, China
Abstract: The ability to perceive and recognize complex environments is very important for a real autonomous robot. An improved corridor-scene classifier based on a probabilistic spiking neuron model (pSNM) is designed for mobile robots. The pSNM is used in the SNN classifier, and the network is trained with Thorpe's learning rule. The experimental results show that, for structured corridor scenes, the improved classifier is more effective and more robust than the previous classifier based on the integrate-and-fire (IAF) spiking neuron model. It is also more robust than the traditional kernel-PCA and BP corridor-scene classifiers. © 2011 IEEE. (26 refs.)Main Heading: Neural networksControlled terms: Information science - Mobile robotsUncontrolled terms: Autonomous robot - Complex environments - Learning rules - Spiking neuron modelsClassification Code: 723.4 Artificial Intelligence - 731.5 Robotics - 903 Information Science
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
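The baseline in the record above is the integrate-and-fire neuron, which the probabilistic model extends. For reference, one Euler step of a deterministic leaky integrate-and-fire neuron looks like this (parameter names and values are illustrative, not the paper's; the probabilistic variant would replace the hard threshold with a stochastic firing decision):

```python
def lif_step(v, i_in, dt=1.0, tau=10.0, v_th=1.0, v_reset=0.0):
    """One Euler step of a leaky integrate-and-fire neuron.

    The membrane potential v leaks toward 0 and integrates the input
    current; crossing the threshold emits a spike and resets v.
    Returns (new_v, spiked)."""
    v = v + (dt / tau) * (i_in - v)
    if v >= v_th:
        return v_reset, True
    return v, False
```

With a constant input above threshold the neuron fires periodically; with a subthreshold input it settles below v_th and stays silent, which is the basic coding behavior SNN classifiers build on.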
Cryptanalysis of an identity based broadcast encryption scheme without random oracles
Wang, Xu An1; Weng, Jian2, 3, 4; Yang, Xiaoyuan1; Yang, Yanjiang5
Source: Information Processing Letters, v 111, n 10, p 461-464, April 30, 2011
; ISSN: 00200190; DOI: 10.1016/j.ipl.2011.02.007;
Publisher: Elsevier
Author affiliation: 1 Key Laboratory of Information and Network Security, Engineering College of Chinese Armed Police Force, Xian 710086, China2 State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China3 Department of Computer Science, Jinan University, Guangzhou 510632, China4 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100080, China5 Institute for Infocomm Research (I2R), Singapore 119613, Singapore
Abstract: Identity based broadcast encryption allows a centralized transmitter to send encrypted messages to a set of identities S, so that only the users whose identity is in S can decrypt these ciphertexts using their respective private keys. Recently [Information Processing Letters 109 (2009)], an identity-based broadcast encryption scheme was proposed (Ren and Gu, 2009) [1], and it was claimed to be fully chosen-ciphertext secure without random oracles. However, by giving a concrete attack, we show that this scheme is not even chosen-plaintext secure. © 2011 Elsevier B.V. All rights reserved. (12 refs.)Main Heading: CryptographyControlled terms: Data processing - Security of dataUncontrolled terms: Broadcast encryption - Broadcast encryption schemes - Chosen ciphertext attack - Chosen-plaintext attack - Ciphertexts - Encrypted messages - Identity based broadcast encryption - Identity-based - Information processing - Plaintext - Private key - Without random oraclesClassification Code: 716 Telecommunication; Radar, Radio and Television - 717 Optical Communication - 718 Telephone Systems and Related Technologies; Line Communications - 723 Computer Software, Data Handling and Applications - 723.2 Data Processing and Image Processing
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Scalability studies of an implicit shallow water solver for the Rossby-Haurwitz problem
Yang, Chao1, 2; Cai, Xiao-Chuan2
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6449 LNCS, p 172-184, 2011, High Performance Computing for Computational Science, VECPAR 2010 - 9th International Conference, Revised Selected Papers
; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642193279; DOI: 10.1007/978-3-642-19328-6_18; Conference: 9th International Conference on High Performance Computing for Computational Science, VECPAR 2010, June 22, 2010 - June 25, 2010; Sponsor: Allinea Software; Meyer Sound Laboratories Inc.; ParaTools Inc.; Lawrence National Berkeley Laboratory; Universidade do Porto;
Publisher: Springer Verlag
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 Department of Computer Science, University of Colorado, Boulder, CO 80309, United States
Abstract: The scalability of a fully implicit global shallow water solver is studied in this paper. In the solver, a conservative second-order finite volume scheme is used to discretize the shallow water equations on a cubed-sphere mesh, which is free of pole singularities. Instead of using the popular explicit or semi-implicit methods in climate modeling, we employ a fully implicit method so that the restrictions on the time step size can be greatly relaxed. A Newton-Krylov-Schwarz method is then used to solve the nonlinear system of equations at each time step. Within each Newton iteration, the linear Jacobian system is solved by a Krylov subspace method preconditioned with a Schwarz method. To further improve the scalability of the algorithm, we use a multilevel hybrid Schwarz preconditioner to suppress the growth of the iteration number as the mesh is refined or more processors are used. We show by numerical experiments on the Rossby-Haurwitz problem that the fully implicit solver scales well to thousands of processors on an IBM BlueGene/L supercomputer. © 2011 Springer-Verlag Berlin Heidelberg. (15 refs.)Main Heading: Nonlinear equationsControlled terms: Computer software selection and evaluation - Iterative methods - Scalability - Spheres - SupercomputersUncontrolled terms: Climate modeling - Finite volume schemes - Fully implicit methods - IBM BlueGene/L supercomputer - Implicit solvers - Iteration numbers - Jacobians - Krylov subspace method - Newton iterations - Newton-Krylov - Numerical experiments - Preconditioners - Schwarz - Schwarz method - Second orders - Semi-implicit methods - Shallow water equations - Shallow waters - System of equations - Time step - Time step sizeClassification Code: 961 Systems Science - 921.6 Numerical Methods - 921.1 Algebra - 723 Computer Software, Data Handling and Applications - 722.4 Digital Computers and Systems - 718 Telephone Systems and Related Technologies; Line Communications - 631 Fluid Flow
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
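The Newton-Krylov loop in the record above can be sketched Jacobian-free: each Newton step solves J dx = -F(x) with GMRES, and the action of J on a vector is approximated by a finite difference of F. This is a minimal unpreconditioned sketch on a toy nonlinear system (no Schwarz preconditioner, no cubed-sphere discretization; all names and parameters are illustrative):

```python
import numpy as np

def gmres_matfree(Av, b, m):
    """Arnoldi-based GMRES: minimize ||H y - beta e1|| over the Krylov basis."""
    n = b.size
    beta = np.linalg.norm(b)
    Q = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    Q[:, 0] = b / beta
    k = m
    for j in range(m):
        w = Av(Q[:, j])
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:         # Krylov subspace exhausted
            k = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(k + 1); e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
    return Q[:, :k] @ y

def jfnk(F, x0, tol=1e-8, max_newton=30, m=20, eps=1e-7):
    """Jacobian-free Newton-Krylov: J v is approximated by
    (F(x + eps*v) - F(x)) / eps, so J is never formed."""
    x = x0.copy()
    for _ in range(max_newton):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        Jv = lambda v: (F(x + eps * v) - Fx) / eps
        x = x + gmres_matfree(Jv, -Fx, m)
    return x
```

The paper's solver adds what this sketch deliberately omits: a Schwarz (and multilevel hybrid Schwarz) preconditioner inside the Krylov solve, which is what keeps the iteration count flat as the mesh is refined.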
Web service selection approach based on the authenticity of QoS data and workflow model
Yuan, Yuqian1; Hu, Xiaohui2
Source: Beijing Hangkong Hangtian Daxue Xuebao/Journal of Beijing University of Aeronautics and Astronautics, v 37, n 4, p 390-394, April 2011; Language: Chinese
; ISSN: 10015965;
Publisher: Beijing University of Aeronautics and Astronautics (BUAA)
Author affiliation: 1 School of Automation Science and Electrical Engineering, Beijing University of Aeronautics and Astronautics, Beijing 100191, China2 Institute of Software Chinese Academy of Sciences, Beijing 100190, China
Abstract: Existing QoS (quality of service)-based service selection approaches generally assume that the QoS data coming from service providers and users are accurate and trustworthy, which rarely holds in practice. A service selection approach based on the authenticity of QoS data and a workflow model was proposed to address this problem. For QoS data coming from service providers, statistics of past runtime data were used to revise the providers' claimed QoS data. For the subjective evaluations coming from users, confidence values of users were calculated based on the workflow organization model, and the evaluations were then weighted by these confidence values. Finally, the optimal web services could be selected by combining the two values as the judgment standard. The experimental results show that this approach can effectively weaken the influence of untrustworthy QoS data on web service selection and improve the accuracy of service selection. (8 refs.)Main Heading: Web servicesControlled terms: Quality control - Quality of service - Telecommunication services - User interfacesUncontrolled terms: Confidence values - QoS (quality of service) - Run-time data - Service provider - Service selection - Services selection - Subjective evaluations - Web service selection - Workflow - Workflow modelsClassification Code: 716 Telecommunication; Radar, Radio and Television - 717 Optical Communication - 718 Telephone Systems and Related Technologies; Line Communications - 722.2 Computer Peripheral Equipment - 723 Computer Software, Data Handling and Applications - 913.3 Quality Assurance and Control
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
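The scoring described in the record above can be sketched as a simple blend: revise the provider's claim with observed runtime statistics, weight user evaluations by confidence, and combine the two. This is only an illustrative reading of the abstract; the function name, the averaging choices, and the weight alpha are all assumptions, not the paper's formulas.

```python
def service_score(claimed_qos, observed_qos, user_evals, confidences, alpha=0.5):
    """Blend a provider's claimed QoS, revised by runtime observations,
    with user evaluations weighted by per-user confidence values."""
    # revise the provider's claim using statistics of past runtime data
    revised = sum(observed_qos) / len(observed_qos) if observed_qos else claimed_qos
    # confidence-weighted average of the users' subjective evaluations
    total_conf = sum(confidences)
    user_score = (sum(c * e for c, e in zip(confidences, user_evals)) / total_conf
                  if total_conf else 0.0)
    return alpha * revised + (1 - alpha) * user_score
```

Candidate services would then be ranked by this combined score, so a provider whose runtime behavior contradicts its claims, or whose praise comes only from low-confidence users, drops in the ranking.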
Value-based program characterization and its application to software plagiarism detection
Jhi, Yoon-Chan1; Wang, Xinran1; Jia, Xiaoqi2; Zhu, Sencun1; Liu, Peng1; Wu, Dinghao1
Source: Proceedings - International Conference on Software Engineering, p 756-765, 2011, ICSE 2011 - 33rd International Conference on Software Engineering, Proceedings of the Conference
; ISSN: 02705257; ISBN-13: 9781450304450; DOI: 10.1145/1985793.1985899; Conference: 33rd International Conference on Software Engineering, ICSE 2011, May 21, 2011 - May 28, 2011; Sponsor: Assoc. Comput. Mach., Spec. Interest Group Softw.; Eng. (ACM SIGSOFT); IEEE Computer Society; Technical Council on Software Engineering (TCSE);
Publisher: IEEE Computer Society
Author affiliation: 1 Penn State University, University Park, PA 16802, United States2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, China
Abstract: Identifying similar or identical code fragments becomes much more challenging in code theft cases where plagiarizers can use various automated code transformation techniques to hide stolen code from being detected. Previous works in this field are largely limited in that (1) most of them cannot handle advanced obfuscation techniques; (2) methods based on source code analysis are less practical, since the source code of suspicious programs is typically not available until strong evidence has been collected; and (3) those depending on the features of specific operating systems or programming languages have limited applicability. Based on the observation that some critical runtime values are hard to replace or eliminate through semantics-preserving transformation techniques, we introduce a novel approach to dynamic characterization of executable programs. Leveraging such invariant values, our technique is resilient to various control and data obfuscation techniques. We show how the values can be extracted and refined to expose the critical values, and how this runtime property can help solve problems in software plagiarism detection. We have implemented a prototype with a dynamic taint analyzer atop a generic processor emulator. Our experimental results show that the value-based method successfully discriminates 34 plagiarisms obfuscated by SandMark, plagiarisms heavily obfuscated by KlassMaster, programs obfuscated by Thicket, and executables obfuscated by Loco/Diablo. © 2011 ACM.
(40 refs.)Main Heading: Computer crimeControlled terms: Cosine transforms - Semantics - Software engineeringUncontrolled terms: Automated code - Code fragments - Critical value - Data obfuscation - Dynamic characterization - Executable programs - Executables - Generic processors - Runtimes - software plagiarism detection - Software plagiarisms - Source code analysis - Source codes - Transformation techniques - Value-basedClassification Code: 723 Computer Software, Data Handling and Applications - 723.1 Computer Programming - 903.2 Information Dissemination - 921.3 Mathematical Transformations
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
A new two-round certificateless authenticated key agreement protocol without bilinear pairings
He, Debiao1, 2; Chen, Yitao1; Chen, Jianhua1; Zhang, Rui1; Han, Weiwei3
Source: Mathematical and Computer Modelling, v 54, n 11-12, p 3143-3152, December 2011
; ISSN: 08957177; DOI: 10.1016/j.mcm.2011.08.004;
Publisher: Elsevier Ltd
Author affiliation: 1 School of Mathematics and Statistics, Wuhan University, Wuhan, China2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing, China3 School of Mathematics and Computer Science, Guangdong University of Business Studies, Guangzhou, China
Abstract: Certificateless public key cryptography (CLPKC), which simplifies the complex certificate management of traditional public key cryptography and resolves the key escrow problem of identity-based cryptography, has been widely studied. As an important part of CLPKC, certificateless two-party authenticated key agreement (CTAKA) protocols have also received considerable attention. Recently, many CTAKA protocols using bilinear pairings have been proposed. However, the relative computation cost of a pairing is approximately twenty times that of a scalar multiplication over an elliptic curve group. To improve performance, several CTAKA protocols without pairings have been proposed. In this paper, we show that a recent CTAKA protocol is not secure against a Type 1 adversary. To improve security and performance, we also propose a new CTAKA protocol without pairings. Finally, we show that the proposed protocol is secure in the random oracle model. © 2011. (20 refs.)Main Heading: Public key cryptographyUncontrolled terms: Authenticated key agreement - Bilinear pairing - Certificateless - Elliptic curve - Provable securityClassification Code: 716 Telecommunication; Radar, Radio and Television - 717 Optical Communication - 718 Telephone Systems and Related Technologies; Line Communications - 723 Computer Software, Data Handling and Applications
Vision-based gesture interfaces toolkit for interactive games
Wu, Hui-Yue1, 2; Zhang, Feng-Jun1; Liu, Yu-Jin1; Hu, Yin-Huan1; Dai, Guo-Zhong1
Source: Ruan Jian Xue Bao/Journal of Software, v 22, n 5, p 1067-1081, May 2011; Language: Chinese
; ISSN: 10009825; DOI: 10.3724/SP.J.1001.2011.03733;
Publisher: Chinese Academy of Sciences
Author affiliation: 1 Intelligence Engineering Laboratory, Institute of Software, Chinese Acad. of Sci., Beijing 100190, China2 Graduate University, Chinese Acad. of Sci., Beijing 100049, China
Abstract: In this paper, a toolkit is created for designing vision-based gesture interactions. First, an abstract model for non-contact devices is proposed. Then, based on a data flow diagram method and an interactive learning approach, the IEToolkit is presented. It is designed based on the attributes of vision interaction and shields the underlying details of the computer vision algorithms. It has the following characteristics: a scalable interface to facilitate developers to add new classifiers, a unified management mechanism that provides dynamic configuration for all of the classifiers, and a visual user interface that supports the definition of a high-level semantic gesture. Finally, several prototypes are given. Experimental results show that the IEToolkit can provide a unified platform and a general solution for vision-based hand gesture games. © 2011 ISCAS. (32 refs.)Main Heading: Computer visionControlled terms: Knowledge management - Semantics - User interfaces - Wearable computersUncontrolled terms: Abstract models - Computer vision algorithms - Data flow diagrams - Dynamic configuration - General solutions - Gesture interaction - Gesture interfaces - Hand gesture - HCI (human-computer interaction) - High level semantics - Interaction technology - Interactive games - Interactive learning - Management mechanisms - Non-contact - Scalable interface - Toolkit - Vision based - Visual user interfacesClassification Code: 722.2 Computer Peripheral Equipment - 722.4 Digital Computers and Systems - 723.5 Computer Applications - 903.2 Information Dissemination - 903.3 Information Retrieval and Use
Calibrating effective focal length for central catadioptric cameras using one space line
Duan, Fuqing1; Wu, Fuchao2; Zhou, Mingquan1; Deng, Xiaoming2, 3; Tian, Yun1
Source: Pattern Recognition Letters, 2011
; ISSN: 01678655; DOI: 10.1016/j.patrec.2011.05.012 Article in Press
Author affiliation: 1 College of Information Science and Technology, Beijing Normal University, Beijing 100875, PR China2 National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100080, PR China3 Laboratory of Human-Computer Interaction and Intelligent Information Processing, Institute of Software, Chinese Academy of Sciences, Beijing 100190, PR China
Abstract: In camera calibration, the focal length is the most important parameter to estimate, while the other parameters can be obtained from prior information about the scene or the system configuration. In this paper, we present a polynomial constraint on the effective focal length under the condition that all the other parameters are known. The polynomial degree is 4 for paracatadioptric cameras and 16 for other catadioptric cameras. However, if the skew is 0 or the ratio between the skew and the effective focal length is known, the constraint on the square of the effective focal length becomes a linear one for paracatadioptric cameras, or a degree-4 polynomial one for other catadioptric cameras, respectively. Based on this constraint, we propose a simple method for estimating the effective focal length of central catadioptric cameras. Unlike many approaches in the literature that use lines, the proposed method needs no conic fitting of line images, which is error-prone and strongly affects calibration accuracy. It is easy to implement, and a single view of one space line suffices, with no other space information needed. Experiments on simulated and real data show the method is robust and effective. © 2011 Elsevier B.V. All rights reserved.Main Heading: CamerasControlled terms: Calibration - Focusing - PolynomialsUncontrolled terms: Calibration accuracy - Camera calibration - Catadioptric cameras - Central catadioptric cameras - Conic fitting - Effective focal lengths - Error prones - Focal lengths - Line images - Paracatadioptric - Polynomial degree - Prior information - SIMPLE method - Space lines - System configurationsClassification Code: 711.1 Electromagnetic Waves in Different Media - 742.2 Photographic Equipment - 921.1 Algebra - 941 Acoustical and Optical Measuring Instruments - 942 Electric and Electronic Measuring Instruments - 943 Mechanical and Miscellaneous Measuring Instruments - 944 Moisture, Pressure and Temperature, and Radiation Measuring Instruments
Researches on privacy enhanced direct anonymous attestation scheme
Chen, Xiao-Feng1; Feng, Deng-Guo1
Source: Tien Tzu Hsueh Pao/Acta Electronica Sinica, v 39, n 9, p 2166-2172, September 2011; Language: Chinese
; ISSN: 03722112;
Publisher: Chinese Institute of Electronics
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100080, China
Abstract: The direct anonymous attestation scheme proposed by Brickell offers verifiers only "all or none" anonymity and is applicable in only a few scenarios, so it cannot satisfy the needs of many cases. Designing a flexible anonymity scheme has therefore become a critical issue for deploying trusted computing platforms. We analyze the problems of the anonymity scheme and propose a direct anonymous attestation scheme with sub-group privacy-enhancement properties. The proposed scheme provides a privacy solution for small groups; we propose two such schemes and analyze their performance and security. (10 refs.)Uncontrolled terms: Anonymous authentication - Critical issues - Direct anonymous attestations - Privacy solutions - Small groups - Trusted computing platform - Trusted platform - Trusted platform module
A universal distributed model for password cracking
Zou, Jing1, 2, 3; Lin, Dong-Dai1; Mi, Guo-Cui1, 2
Source: Proceedings - International Conference on Machine Learning and Cybernetics, v 3, p 955-960, 2011, Proceedings of 2011 International Conference on Machine Learning and Cybernetics, ICMLC 2011; ISSN: 2160133X, E-ISSN: 21601348; ISBN-13: 9781457703065; DOI: 10.1109/ICMLC.2011.6016851; Article number: 6016851; Conference: 2011 International Conference on Machine Learning and Cybernetics, ICMLC 2011, July 10, 2011 - July 13, 2011; Sponsor: Hebei University; IEEE Systems, Man and Cybernetics Society; Chongqing University; South China University of Technology; Hong Kong Baptist University;
Publisher: IEEE Computer Society
Author affiliation: 1 SKLOIS Lab., Institute of Software, Chinese Academy of Sciences, China2 Graduate School, Chinese Academy of Sciences, Beijing 100080, China3 Huaiyin Normal University, Huai'an, China
Abstract: Due to the parallel nature of brute force attacks, it is well suited to use parallel or distributed computing to recover passwords encrypted by one-way hash functions. When distributing the password cracking task between threads or nodes, it is necessary to divide the search space into several subspaces. Ideally, these subspaces are of equal size, for a load-balanced solution. However, increasing the length of passwords raises the number of combinations exponentially, which raises issues for uniform partition of the search space. This paper presents a universal distributed model for password cracking capable of dividing the search space evenly. To this end, the paper defines a base-n-like number and discusses operations on it and its relation to base-n numbers. The model is data independent, easily implemented and requires little communication. Experimental results are then shown. © 2011 IEEE. (6 refs.)Main Heading: CyberneticsControlled terms: Hash functions - Learning systemsUncontrolled terms: Brute-force attack - GPU - password attacking - Password-based authentication - Search spacesClassification Code: 723 Computer Software, Data Handling and Applications - 921 Mathematics
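The abstract's core idea, treating fixed-length passwords as base-n numbers so the keyspace can be split into near-equal contiguous ranges across nodes, can be sketched as follows. This is a minimal illustration of the general approach, not the paper's exact "base-n-like number" construction; all names are hypothetical.

```python
from typing import List

def index_to_password(idx: int, charset: str, length: int) -> str:
    """Interpret idx as a fixed-width base-n number over the charset."""
    n = len(charset)
    chars = []
    for _ in range(length):
        idx, r = divmod(idx, n)
        chars.append(charset[r])
    return "".join(reversed(chars))

def partition(charset: str, length: int, nodes: int) -> List[range]:
    """Split the keyspace [0, n**length) into near-equal contiguous ranges,
    one per node; range sizes differ by at most one candidate."""
    total = len(charset) ** length
    size, rem = divmod(total, nodes)
    ranges, start = [], 0
    for i in range(nodes):
        end = start + size + (1 if i < rem else 0)
        ranges.append(range(start, end))
        start = end
    return ranges
```

Each node then enumerates its own range independently, converting every index back to a candidate password, so no coordination is needed beyond distributing the range bounds.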
Investigating the implementation of restricted sets of multiqubit operations on distant qubits: A communication complexity perspective
Situ, Haozhen1; Qiu, Daowen1, 2, 3
Source: Quantum Information Processing, v 10, n 5, p 609-618, October 2011;
ISSN: 15700755; DOI: 10.1007/s11128-010-0222-x;
Publisher: Springer New York
Author affiliation: 1 Department of Computer Science, Sun Yat-sen University, 510006 Guangzhou, China2 SQIG-Instituto de Telecomunicações, IST, TULisbon, Av. Rovisco Pais, 1049-001 Lisbon, Portugal3 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, 100080 Beijing, China
Abstract: We propose a protocol for Alice to implement a multiqubit quantum operation from the restricted sets on distant qubits possessed by Bob, and then we investigate the communication complexity of the task in different communication scenarios. Compared with previous work, our protocol works without prior sharing of entanglement and requires fewer communication resources than the previous protocol in the qubit-transmission scenario. Furthermore, we generalize our protocol to d-dimensional operations. © Springer Science+Business Media, LLC 2010. (19 refs.)Main Heading: CommunicationControlled terms: Quantum communication - Quantum entanglementUncontrolled terms: Communication complexity - Communication resources - Nonlocal operation - Quantum operations - Remote implementationClassification Code: 716 Telecommunication; Radar, Radio and Television - 931.4 Quantum Theory; Quantum Mechanics
A variant of Boyen-Waters anonymous IBE scheme
Luo, Song1, 3, 4; Shen, Qingni2; Jin, Yongming3, 4; Chen, Yu5; Chen, Zhong2, 3, 4; Qing, Sihan2, 6
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 7043 LNCS, p 42-56, 2011, Information and Communications Security - 13th International Conference, ICICS 2011, Proceedings;
ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642252426;
DOI: 10.1007/978-3-642-25243-3_4; Conference: 13th International Conference on Information and Communications Security, ICICS 2011, November 23, 2011 - November 26, 2011; Sponsor: National Natural Science Foundation of China (NNSFC); The Microsoft Corporation; Beijing Tip Technology Corporation; Trusted Computing Group (TCG);
Publisher: Springer Verlag
Author affiliation: 1 College of Computer Science and Engineering, Chongqing University of Technology, China2 School of Software and Microelectronics, MoE Key Lab. of Network and Software Assurance, Peking University, Beijing, China3 Institute of Software, School of Electronics Engineering and Computer Science, Peking University, China4 Key Laboratory of High Confidence Software Technologies (Peking University), Ministry of Education, China5 Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China6 Institute of Software, Chinese Academy of Sciences, Beijing, China
Abstract: An identity-based encryption (IBE) scheme is called anonymous if the ciphertext leaks no information about the recipient's identity. In this paper, we present a novel anonymous identity-based encryption scheme. Our scheme comes from an analysis of the Boyen-Waters anonymous IBE scheme, in which we find a method to construct anonymous IBE schemes. We show that the Boyen-Waters anonymous IBE scheme can be transformed from the BB1-IBE scheme. Our scheme is also transformed from the BB1-IBE scheme and can be viewed as a variant of the Boyen-Waters anonymous IBE scheme. The security proof shows that the transformed scheme has the same semantic security as the original scheme and is additionally anonymous. We prove anonymity under the Decision Linear assumption. © 2011 Springer-Verlag. (28 refs.)Main Heading: Security of dataControlled terms: Cryptography - SemanticsUncontrolled terms: Anonymity - Ciphertexts - Identity-Based Encryption - Security proofs - Semantic security - TransformationClassification Code: 716 Telecommunication; Radar, Radio and Television - 717 Optical Communication - 718 Telephone Systems and Related Technologies; Line Communications - 723 Computer Software, Data Handling and Applications - 723.2 Data Processing and Image Processing - 903.2 Information Dissemination
Combining strategies for XML retrieval
Gao, Ning1; Deng, Zhi-Hong1, 2; Jiang, Jia-Jian1; Lv, Sheng-Long1; Yu, Hang1
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6932 LNCS, p 319-331, 2011, Comparative Evaluation of Focused Retrieval - 9th International Workshop of the Initiative for the Evaluation of XML Retrieval, INEX 2010, Revised Selected Papers;
ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642235764;
DOI: 10.1007/978-3-642-23577-1_30; Conference: 9th International Workshop of the Initiative for the Evaluation of XML Retrieval, INEX 2010, December 13, 2010 - December 15, 2010;
Publisher: Springer Verlag
Author affiliation: 1 Key Laboratory of Machine Perception (Ministry of Education), School of Electronic Engineering and Computer Science, Peking University, China2 State Key Lab. of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: This paper describes Peking University's approaches to the Ad Hoc, Data Centric and Relevance Feedback tracks. In the Ad Hoc track, results for four tasks were submitted: Efficiency, Restricted Focused, Relevance In Context and Restricted Relevance In Context. To evaluate the relevance between documents and a given query, multiple strategies, such as Two-Step retrieval, MAXLCA query results, BM25, distribution measurements and a learn-to-optimize method, are combined to form a more effective search engine. In the Data Centric track, to obtain a set of closely related nodes that are collectively relevant to a given keyword query, we promote three factors: correlation, explicitness and distinctiveness. In the Relevance Feedback track, to obtain useful information from feedback, our implementation employs two techniques: a revised Rocchio algorithm and criterion weight adjustment. © 2011 Springer-Verlag. (22 refs.)Main Heading: Information retrievalControlled terms: Feedback - Search engines - XMLUncontrolled terms: Ad Hoc - Ad hoc tracks - Data centric - Distribution measurement - INEX - Keyword queries - Multiple strategy - Peking University - Query results - Relevance feedback - Rocchio algorithm - XML RetrievalClassification Code: 723 Computer Software, Data Handling and Applications - 723.5 Computer Applications - 731.1 Control Systems
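The Rocchio algorithm the abstract builds on moves the query vector toward the centroid of documents marked relevant and away from the centroid of non-relevant ones. The sketch below is the textbook formula, not the paper's revised variant; the alpha/beta/gamma defaults are conventional choices, assumed here for illustration.

```python
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Classic Rocchio feedback: q' = alpha*q + beta*mean(rel) - gamma*mean(nonrel).
    Vectors are plain lists of term weights of equal length."""
    def centroid(vectors, dim):
        if not vectors:
            return [0.0] * dim
        return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

    dim = len(query)
    rel_c = centroid(relevant, dim)
    non_c = centroid(nonrelevant, dim)
    # Negative term weights are conventionally clamped to zero.
    return [max(0.0, alpha * q + beta * r - gamma * s)
            for q, r, s in zip(query, rel_c, non_c)]
```

A single round of feedback thus re-weights the query before it is re-issued to the retrieval engine.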
Efficient remote attestation mechanism with privacy protection
Xu, Zi-Yao1, 2; He, Ye-Ping1; Deng, Ling-Li1, 3
Source: Ruan Jian Xue Bao/Journal of Software, v 22, n 2, p 339-352, February 2011; Language: Chinese; ISSN: 10009825;
DOI: 10.3724/SP.J.1001.2011.03714;
Publisher: Chinese Academy of Sciences
Author affiliation: 1 Institute of Software, The Chinese Academy of Sciences, Beijing 100190, China2 Information Center, China North Industries Group Corporation, Beijing 100089, China3 Department of Network Technology, China Mobile Research Institute, Beijing 100053, China
Abstract: A remote attestation mechanism, with high efficiency, flexibility and privacy protection based on Merkle hash tree is proposed in this paper. The problems of IMA (integrity measurement architecture) architecture are analyzed for a special target application scenario; followed by a detailed description of RAMT (remote attestation mechanism based on Merkle hash tree) architecture and its process of integrity measuring and verifying. The function and pseudo-code of command TPM_HashTree, which is a function enhancement to the existing TPM (trusted platform module), are presented for the newly proposed mechanism. The advantages of the new mechanism are analyzed and discussed. © 2011 ISCAS. (26 refs.)Main Heading: Security of dataControlled terms: ArchitectureUncontrolled terms: High efficiency - Integrity measurement - Merkle hash tree - New mechanisms - Privacy protection - Pseudo-code - Remote attestation - Target application - Trusted computing - Trusted platform moduleClassification Code: 402 Buildings and Towers - 723.2 Data Processing and Image Processing
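The Merkle-hash-tree idea underlying RAMT, hashing measurements into a binary tree so that any single measurement can be verified against the root with a logarithmic-size audit path, can be sketched as follows. This is a generic Merkle construction over SHA-256, not the paper's TPM_HashTree command; all function names are hypothetical.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash each leaf, then hash adjacent pairs level by level up to one root."""
    level = [_h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd-sized levels
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Audit path: the sibling hash at every level, tagged with our position."""
    level = [_h(x) for x in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[index ^ 1], index % 2))  # (sibling, node-is-right-child)
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(leaf, path, root):
    """Recompute the root from one leaf and its audit path."""
    node = _h(leaf)
    for sibling, node_is_right in path:
        node = _h(sibling + node) if node_is_right else _h(node + sibling)
    return node == root
```

A verifier holding only the trusted root can thus check one measurement with len(path) hashes, which is the efficiency gain the abstract claims over attesting a full measurement list.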
Time-dependent cryptographic protocol logic and its formal semantics
Lei, Xin-Feng1, 2; Liu, Jun2; Xiao, Jun-Mo2
Source: Ruan Jian Xue Bao/Journal of Software, v 22, n 3, p 534-557, March 2011; Language: Chinese
; ISSN: 10009825; DOI: 10.3724/SP.J.1001.2011.03732;
Publisher: Chinese Academy of Sciences
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, The Chinese Academy of Sciences, Beijing 100190, China2 Institute of Communications Engineering, PLA University of Science and Technology, Nanjing 210007, China
Abstract: In cryptographic protocols, the agents' epistemic and doxastic states change over time. To model these dynamics, a time-dependent cryptographic protocol logic is proposed. Our logic is based on predicate modal logic, and the time factor can be expressed in it by invoking a time variable as a parameter of predicates and modal operators. This makes it possible to model every agent's actions, knowledge and beliefs at different time points. We also give the formal semantics of our logic to avoid ambiguity in its language and make the logic sound. The semantics is based on the Kripke structure, and each possible world in it is built both on the local world of an agent and the specific world of time. This enables every possible world to give a global view at each point of the protocol. Our logic provides a flexible method for analyzing cryptographic protocols, especially time-dependent ones, and increases the power of the logical method for analyzing protocols. © ISCAS. (33 refs.)Main Heading: Formal methodsControlled terms: Cryptography - Formal logic - SemanticsUncontrolled terms: Cryptographic protocols - Formal semantics - Global view - Kripke structure - Logical methods - Modal logic - Modal operators - Possible worlds - Predicate modal logic - Time factors - Time points - Time variable - Time-dependentClassification Code: 716 Telecommunication; Radar, Radio and Television - 717 Optical Communication - 718 Telephone Systems and Related Technologies; Line Communications - 721.1 Computer Theory, Includes Formal Logic, Automata Theory, Switching Theory, Programming Theory - 723 Computer Software, Data Handling and Applications - 903.2 Information Dissemination
On the invisibility of designated confirmer signatures
Xia, Fubiao1; Wang, Guilin2; Xue, Rui3
Source: Proceedings of the 6th International Symposium on Information, Computer and Communications Security, ASIACCS 2011, p 268-276, 2011, Proceedings of the 6th International Symposium on Information, Computer and Communications Security, ASIACCS 2011; ISBN-13: 9781450305648; DOI: 10.1145/1966913.1966948; Conference: 6th International Symposium on Information, Computer and Communications Security, ASIACCS 2011, March 22, 2011 - March 24, 2011; Sponsor: ACM Spec. Interest Group Secur., Audit, Control (SIGSAC);
Publisher: Association for Computing Machinery
Author affiliation: 1 School of Computer Science, University of Birmingham, Birmingham, B15 2TT, United Kingdom2 School of Computer Science and Software Engineering, University of Wollongong, Wollongong, NSW 2522, Australia3 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Science, Beijing 100080, China
Abstract: As an important cryptographic primitive, designated confirmer signatures are introduced to control the public verifiability of signatures. That is, only the signer or a semi-trusted party, called the designated confirmer, can interactively assist a verifier to check the validity of a designated confirmer signature. The central security property of a designated confirmer signature scheme is called invisibility, which requires that even an adaptive adversary cannot determine the validity of an alleged signature without direct cooperation from either the signer or the designated confirmer. However, researchers have proposed two other related properties in the literature, called impersonation and transcript simulatability, though the relations between them are not clear. In this paper, we first explore the relations among these three invisibility-related concepts and conclude that invisibility, impersonation and transcript simulatability form an order of increasing strength. After that, we turn to study the invisibility of two designated confirmer signature schemes recently presented by Zhang et al. and Wei et al. By demonstrating concrete and effective attacks, we show that both schemes fail to meet invisibility, the central security property of designated confirmer signatures. Copyright 2011 ACM. (21 refs.)Main Heading: Electronic document identification systemsControlled terms: Authentication - Network securityUncontrolled terms: Confirmer signature - Digital Signature - Invisibility - Transcript-simulatability - UnimpersonationClassification Code: 723 Computer Software, Data Handling and Applications - 723.5 Computer Applications
Symbolic decision procedure for termination of linear programs
Xia, Bican1; Yang, Lu2; Zhan, Naijun3; Zhang, Zhihai1
Source: Formal Aspects of Computing, v 23, n 2, p 171-190, March 2011
; ISSN: 09345043, E-ISSN: 1433299X; DOI: 10.1007/s00165-009-0144-5;
Publisher: Springer London
Author affiliation: 1 School of Mathematical Sciences, Peking University, LMAM, Beijing, China2 Shanghai Key Laboratory of Trustworthy Computing, East China Normal University, Shanghai, China3 Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China
Abstract: Tiwari proved that the termination of a class of linear programs is decidable in Tiwari (Proceedings of CAV'04. Lecture notes in computer science, vol 3114, pp 70-82, 2004). The decision procedure proposed therein depends on the computation of Jordan forms, so people may draw a wrong conclusion from this procedure if they simply apply floating-point computation to compute Jordan forms. In this paper, we first use an example to explain this problem, and then present a symbolic implementation of the decision procedure, so that the rounding error problem is avoided. Moreover, we show that the symbolic decision procedure is as efficient as the numerical one given in Tiwari (2004). The complexity of the former is max{O(n^6), O(n^(m+3))}, while that of the latter is O(n^(m+3)), where n is the number of variables of the program and m is the number of its Boolean conditions. In addition, for the case when the characteristic polynomial of the assignment matrix is irreducible, we design a more efficient symbolic algorithm whose complexity is max{O(n^6), O(mn^3)}. BCS © 2009. (18 refs.)Main Heading: Engineering educationControlled terms: Computer science - Manganese compoundsUncontrolled terms: Characteristic polynomials - Decision procedure - Floating-point computation - Jordan form - Lecture Notes - Linear programs - matrix - Numerical computations - Rounding errors - Symbolic algorithms - Symbolic computation - Symbolic decision procedures - TerminationClassification Code: 721 Computer Circuits and Logic Elements - 722 Computer Systems and Equipment - 723 Computer Software, Data Handling and Applications - 804.1 Organic Compounds - 901.2 Education
An authorization model and implementation for vector data in spatial DBMS
Zhang, Desheng1, 2; Feng, Dengguo1; Chen, Chi1
Source: Jisuanji Yanjiu yu Fazhan/Computer Research and Development, v 48, n 8, p 1524-1533, August 2011; Language: Chinese
; ISSN: 10001239;
Publisher: Science Press
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 National Engineering and Research Center of Information Security, Beijing 100190, China
Abstract: Spatial DBMSs and their applications have become more and more popular. They make our daily life more convenient and comfortable, but they also bring serious threats to security and privacy. Most applications require a fine-granularity, flexible access control model that supports negative authorization, together with a high-performance authorization implementation. According to the security requirements of applications of vector data in spatial DBMSs, a predicate-based access control model (PBAC) is presented, and a predicate rewrite technique is adopted to implement the authorization model in the spatial DBMS. Compared with existing works, our model adopts spatial predicates to specify the authorized objects, which improves the flexibility of expression, and negative authorizations are also supported. Our implementation uses predicate rewriting, which not only avoids an additional spatial query during authorization enforcement but also ensures low coupling between the vector data authorization implementation and the spatial DBMS, as well as the convenience of eliminating spatial predicate redundancies. Experimental results show that our method satisfies the security requirements and realizes effective authorization of vector data in spatial DBMSs. (19 refs.)Main Heading: VectorsControlled terms: Access control - Security systemsUncontrolled terms: Negative authorization - Predicate rewrite - Spatial DBMS - Spatial predicates - Vector dataClassification Code: 723 Computer Software, Data Handling and Applications - 914.1 Accidents and Accident Prevention - 921.1 Algebra
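The predicate-rewrite technique the abstract describes amounts to conjoining the caller's authorization predicates onto a query's WHERE clause before execution, so the DBMS enforces access control as part of ordinary query evaluation. A minimal string-level sketch (illustrative only, not the paper's PBAC model; the spatial predicates shown are hypothetical PostGIS-style examples):

```python
def rewrite_where(user_where, grants, denies):
    """Conjoin positive grants (OR-ed together, any one suffices) and
    negative authorizations (each negated) onto the original WHERE clause."""
    clauses = [f"({user_where})"] if user_where else []
    if grants:
        clauses.append("(" + " OR ".join(grants) + ")")
    clauses.extend(f"NOT ({d})" for d in denies)
    return " AND ".join(clauses)
```

For example, a query restricted to roads inside a granted region but outside a denied zone would be rewritten as `(type = 'road') AND (ST_Within(geom, region_a)) AND NOT (ST_Within(geom, secret_zone))`; in a real system the rewrite would operate on the parsed query tree rather than on strings.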
Indexing frequently updated trajectories of network-constrained moving objects
Ding, Zhiming1
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6861 LNCS, n PART 2, p 464-474, 2011, Database and Expert Systems Applications - 22nd International Conference, DEXA 2011, Proceedings
; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642230905; DOI: 10.1007/978-3-642-23091-2_40; Conference: 22nd International Conference on Database and Expert Systems Applications, DEXA 2011, August 29, 2011 - September 2, 2011;
Publisher: Springer Verlag
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, South-Fourth-Street 4, Zhong-Guan-Cun, Beijing 100190, China
Abstract: Indexing is a key technique for improving the query processing performance of moving objects databases. However, current index methods for moving object trajectories take trajectory units as the basic index records, so frequent index updates are needed when location updates occur, which greatly affects the overall performance of moving objects databases. To solve this problem, we propose a new index method, the network-constrained Moving Object Sketched-Trajectory R-Tree (MOSTR-Tree), which outperforms previously proposed methods under frequent location updates. © 2011 Springer-Verlag Berlin Heidelberg. (8 refs.)Main Heading: TrajectoriesControlled terms: Database systems - Decision trees - Expert systems - Indexing (materials working) - Plant extractsUncontrolled terms: Current index - Database - Frequent index - Index - Key techniques - Location update - Moving object trajectories - Moving objects - Moving Objects Databases - New indices - Processing performance - Spatial temporalsClassification Code: 961 Systems Science - 922 Statistical Methods - 723.4.1 Expert Systems - 723.3 Database Systems - 603.2 Machine Tool Accessories - 461.9 Biology - 404.1 Military Engineering
Introduction to the special issue on Chinese language processing
Chen, Keh-Jiann1; Liu, Qun2; Xue, Nianwen3; Sun, Le4
Source: ACM Transactions on Asian Language Information Processing, v 10, n 3, September 2011
; ISSN: 15300226, E-ISSN: 15583430; DOI: 10.1145/2002980.2002981; Article number: 11;
Publisher: Association for Computing Machinery
Author affiliation: 1 Institute of Information Science, Academia Sinica, China2 Institute of Computing Technology, Chinese Academy of Sciences, China3 Brandeis University, Computer Science Department, MS 018, Waltham, MA 02454, United States4 Institute of Software, Chinese Academy of Sciences, China
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Searching for doubly self-orthogonal Latin squares
Lu, Runming1, 2; Liu, Sheng1, 2; Zhang, Jian1
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6876 LNCS, p 538-545, 2011, Principles and Practice of Constraint Programming, CP 2011 - 17th International Conference, Proceedings
; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642237850; DOI: 10.1007/978-3-642-23786-7_41; Conference: 17th International Conference on Principles and Practice of Constraint Programming, CP 2011, September 12, 2011 - September 16, 2011; Sponsor: Agreement Technologies (COST Action IC0801); Artificial Intelligence Journal; Association for Constraint Programming (ACP); Associazione Italiana per l'Intelligenza Artificiale (AI*IA); Comune di Assisi;
Publisher: Springer Verlag
Author affiliation: 1 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, China2 Graduate University, Chinese Academy of Sciences, China
Abstract: A Doubly Self-Orthogonal Latin Square (DSOLS) is a Latin square that is orthogonal to both its transpose about the main diagonal and its transpose about the back diagonal. It is challenging to find a non-trivial DSOLS. For orders n ≡ 2 (mod 4), the existence of DSOLS(n) is unknown except for n = 2, 6. We propose an efficient approach and data structure based on a set system and exact cover, with which we obtained a new result, i.e., the non-existence of DSOLS(10). © 2011 Springer-Verlag. (8 refs.)Main Heading: Computer programmingControlled terms: Constraint theory - Data structuresUncontrolled terms: Exact covers - Latin square - New results - Non-existence - Non-trivial - Orthogonal latin squares - Set system - Structure-basedClassification Code: 723 Computer Software, Data Handling and Applications - 961 Systems Science
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
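The orthogonality conditions in the abstract above can be stated directly in code. A minimal sketch of the definitions only (not the set-system/exact-cover search from the paper); the order-4 square is an assumed example, and order 4 lies outside the open class n ≡ 2 (mod 4):

```python
def is_latin(sq):
    """Every row and every column is a permutation of 0..n-1."""
    n = len(sq)
    syms = set(range(n))
    return (all(set(row) == syms for row in sq)
            and all({sq[i][j] for i in range(n)} == syms for j in range(n)))

def orthogonal(a, b):
    """Superimposing a and b yields every ordered pair exactly once."""
    n = len(a)
    pairs = {(a[i][j], b[i][j]) for i in range(n) for j in range(n)}
    return len(pairs) == n * n

def transpose(sq):       # reflection about the main diagonal
    n = len(sq)
    return [[sq[j][i] for j in range(n)] for i in range(n)]

def back_transpose(sq):  # reflection about the back diagonal
    n = len(sq)
    return [[sq[n - 1 - j][n - 1 - i] for j in range(n)] for i in range(n)]

def is_dsols(sq):
    """DSOLS: a Latin square orthogonal to both of its reflections."""
    return (is_latin(sq)
            and orthogonal(sq, transpose(sq))
            and orthogonal(sq, back_transpose(sq)))

L = [[0, 2, 3, 1],
     [3, 1, 0, 2],
     [1, 3, 2, 0],
     [2, 0, 1, 3]]
print(is_dsols(L))  # True
```

The paper's contribution is the search, not the check: candidate squares are enumerated through a set system reduced to exact cover, which is far cheaper than testing squares one by one.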
Spiking neural networks and its application for mobile robots
Wang, Xiuqing1, 2; Hou, Zeng-Guang2; Tan, Min2; Wang, Yongji3; Huang, Zhanhua1
Source: Proceedings of the 30th Chinese Control Conference, CCC 2011, p 4133-4138, 2011, Proceedings of the 30th Chinese Control Conference, CCC 2011; Language: Chinese; ISBN-13: 9789881725592; Article number: 6001535; Conference: 30th Chinese Control Conference, CCC 2011, July 22, 2011 - July 24, 2011; Sponsor: Academy of Mathematics and Systems Science, CAS; IEEE Control Systems Society; IEEE Industrial Electronics Society; The Society of Instr. and Contr. Engineers of Japan; Institute of Control, Robotics and Systems of Korea;
Publisher: IEEE Computer Society
Author affiliation: 1 Hebei Normal University, Shijiazhuang 050031, China2 Key Laboratory of Complex Systems and Intelligence Science, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China3 Laboratory for Internet Technologies, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: In this paper, the third generation of neural networks, spiking neural networks (SNNs), is introduced. The origin, characteristics, coding schemes, and training methods of SNNs, as well as models of spiking neurons and synapses, are also discussed. A survey of SNN research and applications in mobile robots is given. Because of their unique characteristics: good biological plausibility, spiking neurons carrying both temporal and spatial information, faster and more efficient computation, good robustness, and easy hardware implementation, SNNs have been successfully applied to mobile robot control, environment perception, and robot vision. © 2011 Chinese Association of Automation. (61 refs.)Main Heading: Neural networksControlled terms: Computer vision - Control - Mobile robotsUncontrolled terms: Research and application - Spatial informations - Spike - Spiking neural networks - Spiking neuron - Third generation - Training methodsClassification Code: 723.4 Artificial Intelligence - 731 Automatic Control Principles and Applications - 732 Control Devices
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
A generic framework for constructing cross-realm C2C-PAKA protocols based on the smart card
Xu, Jing1; Zhu, Wen-Tao2; Jin, Wen-Ting2
Source: Concurrency Computation Practice and Experience, v 23, n 12, p 1386-1398, August 25, 2011
; ISSN: 15320626, E-ISSN: 15320634; DOI: 10.1002/cpe.1616;
Publisher: John Wiley and Sons Ltd
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 State Key Laboratory of Information Security, Graduate University of Chinese Academy of Sciences, Beijing 100049, China
Abstract: A cross-realm client-to-client password-authenticated key agreement (C2C-PAKA) protocol allows network clients from different realms managed by different servers to agree on a session key in an authentic manner based on easily memorizable passwords. In this paper, we present a generic framework for constructing a cross-realm C2C-PAKA protocol from any secure smart card-based password authentication (PA-SC) protocol. The security proof of our construction can be derived from the underlying PA-SC protocol employing the same assumptions. Our generic framework appears to be the first one with provable security. In addition, compared with similar protocols, the instantiation of our construction achieves improved efficiency. Copyright © 2010 John Wiley & Sons, Ltd. (17 refs.)Main Heading: Smart cardsControlled terms: AuthenticationUncontrolled terms: Client-to-client - Cross-realm - Cryptographic protocols - Generic frameworks - Password authentication - Password-authenticated key agreement - provable security - Security proofs - Session keyClassification Code: 722.4 Digital Computers and Systems - 723 Computer Software, Data Handling and Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
On general factorizations for n-D polynomial matrices
Liu, Jinwang1; Li, Dongmei1; Wang, Mingsheng2
Source: Circuits, Systems, and Signal Processing, v 30, n 3, p 553-566, June 2011
; ISSN: 0278081X, E-ISSN: 15315878; DOI: 10.1007/s00034-010-9229-x;
Publisher: Birkhause Boston
Author affiliation: 1 College of Mathematics and Computation, Hunan Science and Technology University, Xiangtan, Hunan, 411201, China2 Key Lab of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing, China
Abstract: Multivariate (n-D) polynomial matrix factorizations are basic research subjects in multidimensional (n-D) systems and signal processing. In this paper, several results on general matrix factorizations are provided for extracting a matrix factor from a given n-D polynomial matrix whose lower order minors satisfy certain conditions. These results are further generalizations of previous results in (Lin et al. in Circuits Syst. Signal Process. 20(6):601-618, 2001). As a consequence, the application range of the constructive algorithm in (Lin et al. in Circuits Syst. Signal Process. 20(6):601-618, 2001) has been greatly extended. Three examples are worked out in detail to show the practical value of the proposed method for obtaining general factorizations for a class of n-D polynomial matrices. © 2010 Springer Science+Business Media, LLC. (27 refs.)Main Heading: FactorizationControlled terms: Algorithms - Polynomials - Signal processingUncontrolled terms: Application range - Basic research - Constructive algorithms - matrix - Matrix factorizations - Multidimensional systems - n-D polynomial matrices - Polynomial matrices - Reduced minors - Signal processClassification Code: 716.1 Information Theory and Signal Processing - 921 Mathematics - 921.1 Algebra
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Generalized constraint neural network model system parameter identification
Zhang, Shuang1, 2; Jin, Gang1; Xiao, Jing6; Li, Shu2, 5; Qin, Yu-Ping3; Liu, Jin-Hua2, 4; An, Tao1; Zhong, Wei-Fan6
Source: Advanced Materials Research, v 143-144, p 1207-1212, 2011, Smart Materials and Intelligent Systems;
ISSN:10226680;ISBN-13:9780878492237; DOI: 10.4028/www.scientific.net/AMR.143-144.1207; Conference: International Conference on Smart Materials and Intelligent Systems 2010, SMIS 2010, December 17, 2010 - December 20, 2010; Sponsor: Shanghai Jiao Tong University; Nanyang Normal University; Hebei Polytechnic University; Henan Institute of Science and Technology; Chongqing University of Arts and Sciences; Hunan Institute of Engineering;
Publisher: Trans Tech Publications
Author affiliation: 1 Key Laboratory of Beam Control, Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu, 610209, China2 Chinese Academy of Sciences, Graduate University, Beijing, 100039, China3 College of Mathematics and Software Science, Sichuan Normal University, Chengdu, 610068, China4 Chinese People' S Liberation, Army University of Military Traffic, Tianjin, China5 Beijing Institute of Oil Research, Laboratory of Oil Storage and Transportation Automation, Beijing, 102300, China6 Xu Zhou Air Force College, XuZhou, 221000, China
Abstract: By analyzing the generalized constraint neural network (GCNN) model and reasoning from existing theories, an identification method for m-input n-output (MINO) and multiple-input multiple-output (MIMO) systems is obtained. Practical tests show that the method makes it possible to improve the transparency of the black box. This identification method helps to identify the parameters of the GCNN model, and further improves the identification ability of the neural-network black-box system model. © (2011) Trans Tech Publications. (10 refs.)Main Heading: Neural networksControlled terms: Intelligent materials - Intelligent systems - MIMO systems - Parameter estimationUncontrolled terms: GCNN - MIMO - MINO - Parameter identifiability - SISOClassification Code: 415 Metals, Plastics, Wood and Other Structural Materials - 723.4 Artificial Intelligence - 731.1 Control Systems
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Mesoporous rare earth fluoride nanocrystals and their photoluminescence properties
Wen, Chunye1; Sun, Lingling1; Yan, Jinghui1; Liu, Yang2; Song, Jinzhuang1; Zhang, Yao1; Lian, Hongzhou3; Kang, Zhenhui2
Source: Journal of Colloid and Interface Science, v 357, n 1, p 116-120, 01 May 2011
; ISSN: 00219797; DOI: 10.1016/j.jcis.2011.01.100;
Publisher: Academic Press Inc.
Author affiliation: 1 College of Chemistry and Environmental Engineering, Changchun University of Science and Technology, Changchun 130022, China2 Inst. of Functional Nano and Soft Materials (FUNSOM), Jiangsu Key Laboratory for Carbon-Based Functional Materials and Devices, Soochow University, Suzhou 215123, China3 Key Laboratory of Rare Earth Chemistry and Physics, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, Changchun 130022, China
Abstract: YF3 and YF3:Eu3+ mesoporous hexagonal nanocrystals were successfully synthesized via a simple hydrothermal process based on the in situ assembly of the as-synthesized YF3 and YF3:Eu3+ nanoparticles. The well-defined mesoporous nanostructures are formed by phenanthroline-assisted assembly of ∼20 nm nanoparticles and contain 2-4 nm pores, as indicated by N2 adsorption-desorption studies. The obtained YF3:Eu3+ mesoporous hexagonal nanoplates show a significant photoluminescence intensity enhancement compared with differently shaped YF3:Eu3+ nanocrystals. © 2011 Elsevier Inc. (51 refs.)Main Heading: NanoparticlesControlled terms: Adsorption - Desorption - Europium - Fluorine compounds - Mesoporous materials - Nanocrystals - Photoluminescence - Rare earthsUncontrolled terms: Adsorption desorption - Hexagonal nanocrystals - Hydrothermal process - In-situ - Mesopore - Mesoporous - Nanoplates - Phenanthrolines - Photoluminescence intensities - Photoluminescence properties - Rare earth fluoridesClassification Code: 933 Solid State Physics - 931.2 Physical Properties of Gases, Liquids and Solids - 804.1 Organic Compounds - 802.3 Chemical Operations - 761 Nanotechnology - 741.1 Light/Optics - 708 Electric and Magnetic Materials - 547.2 Rare Earth Metals - 481.2 Geochemistry
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Formalized approach for componentized software process modeling
Zhai, Jian1, 2; Yang, Qiu-Song1; Xiao, Jun-Chao1; Li, Ming-Shu1, 3
Source: Ruan Jian Xue Bao/Journal of Software, v 22, n 1, p 1-16, January 2011; Language: Chinese
; ISSN: 10009825; DOI: 10.3724/SP.J.1001.2011.03769;
Publisher: Chinese Academy of Sciences
Author affiliation: 1 Laboratory for Internet Software Technologies, Institute of Software, The Chinese Academy of Sciences, Beijing 100190, China2 Graduate University, The Chinese Academy of Sciences, Beijing 100049, China3 State Key Laboratory of Computer Science, Institute of Software, The Chinese Academy of Sciences, Beijing 100190, China
Abstract: To address the problems of current approaches to software process reuse, in particular the low efficiency of reuse via operational rules and the lack of a precise definition of process components, a formalized approach for componentized software process modeling (CSPM) is presented in this paper. CSPM provides a mechanism to support the formal definition of reusable software process components and presents a series of rules to turn process components into a process model. By using CSPM, the reuse of process components can be conducted in a rigorous manner, and the potential errors caused by ambiguity in traditional non-formal modeling methods can be effectively avoided. CSPM can also decompose the verification of a combined process model against certain properties into a series of sub-problems on its corresponding components, turning an originally infeasible problem, under certain circumstances, into feasible ones by exponentially reducing the state space that needs to be explored. © Copyright 2011, Institute of Software, the Chinese Academy of Sciences. All rights reserved. (25 refs.)Main Heading: Formal methodsControlled terms: Computer software reusabilityUncontrolled terms: Combined process - Formal definition - Formal modeling - Potential errors - Precise definition - Process component - Process model - Process modeling - Process reuse - Reusable softwares - Software process - Software process modeling - State space - Sub-problemsClassification Code: 723 Computer Software, Data Handling and Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
k-nearest neighbors query in dynamic spatial network databases
Yin, Xiao-Lan1, 2
Source: Tien Tzu Hsueh Pao/Acta Electronica Sinica, v 39, n 2, p 389-394, February 2011; Language: Chinese
; ISSN: 03722112;
Publisher: Chinese Institute of Electronics
Author affiliation: 1 Technology Center of Software Engineering, Institute of Software, Chinese Acad. of Sci., Beijing 100190, China2 Graduate School, Chinese Acad. of Sci., Beijing 100049, China
Abstract: One of the most important kinds of queries in Spatial Network Databases to support Location-Based Services is the k-Nearest Neighbors (k-NN) query. In this paper, we propose a novel approach to efficiently and accurately evaluate k-NN queries in spatial network databases using a network space diagram. This approach is based on partitioning a large network into small regions and then precomputing distances both within and across the regions. Our empirical experiments with several random data sets show that our proposed solution outperforms approaches based on on-line distance computation by up to one order of magnitude. (10 refs.)Main Heading: Database systemsUncontrolled terms: Distance computation - Distance index - Empirical experiments - K-nearest neighbors - k-nearest neighbors (k-NN) - K-NN query - Large networks - Location-Based Services - Moving object - Network space - Order of magnitude - Precomputing - Random data - Small region - Spatial network databaseClassification Code: 723.3 Database Systems
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
D-finder 2: Towards efficient correctness of incremental design
Bensalem, Saddek1; Griesmayer, Andreas1; Legay, Axel3; Nguyen, Thanh-Hung1; Sifakis, Joseph1; Yan, Rongjie1, 2
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6617 LNCS, p 453-458, 2011, NASA Formal Methods - Third International Symposium, NFM 2011, Proceedings
; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642203978; DOI: 10.1007/978-3-642-20398-5_32; Conference: 3rd NASA Formal Methods Symposium, NFM 2011, April 18, 2011 - April 20, 2011;
Publisher: Springer Verlag
Author affiliation: 1 Verimag Laboratory, Université Joseph Fourier Grenoble, CNRS, France2 State Key Laboratory of Computer Science, Institute of Software, CAS, Beijing, China3 INRIA/IRISA, Rennes, France
Abstract: D-Finder 2 is a new tool for deadlock detection in concurrent systems based on effective invariant computation to approximate the effects of interactions among modules. It is part of the BIP framework, which provides various tools centered on a component-based language for incremental design. The presented tool shares its theoretical roots with a previous implementation, but was completely rewritten to take advantage of a new version of BIP and various new results on the theory of invariant computation. The improvements are demonstrated by comparison with previous work and reports on new results on a practical case study. © 2011 Springer-Verlag. (16 refs.)Main Heading: Formal methodsControlled terms: Computation theory - NASAUncontrolled terms: Component-based language - Concurrent systems - Deadlock detection - New results - Theory of invariantsClassification Code: 655 Spacecraft - 656 Space Flight - 723.1 Computer Programming - 921 Mathematics
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
A hyperbolic tree based interface for exploring massive files
Zeng, Zhirong1; Wang, Kun2; Teng, Dongxing1; Wang, Hongan1; Dai, Guozhong1
Source: ACM International Conference Proceeding Series, 2011, VINCI 2011 - The 4th Visual Information Communication - International Symposium; ISBN-13: 9781450308755; DOI: 10.1145/2016656.2016660; Conference: 4th Visual Information Communication - International Symposium, VINCI 2011, August 4, 2011 - August 5, 2011; Sponsor: ACM SIGCHI China; China Computer Federation;
Publisher: Association for Computing Machinery
Author affiliation: 1 Intelligence Engineering Lab., Institute of Software, Chinese Academy of Sciences, Beijing, 100190, China2 Beijing Jiaotong University, Beijing, 100044, China
Abstract: Along with the rapid development of information technology, individual computer users face many problems in dealing with massive files, such as enormous volume, serious redundancy, and complex version-evolution relations. These problems make the file system increasingly unwieldy and, to a certain extent, hinder users in their daily work. This paper systematically analyzes the problems in file system management, puts forward a user cognitive model for dealing with massive files, and then introduces the FDTVAS system, a visual interface based on a hyperbolic tree view for exploring massive files. The system uses a specific algorithm for intelligent analysis of file relationships, takes the hyperbolic tree as its main visual form, and provides a series of interaction tasks that accord with the users' cognitive model. The built system can help users gain clear insight into the file system and perform analysis and management efficiently. © 2011 ACM. (19 refs.)Main Heading: Trees (mathematics)Controlled terms: Information technology - Plant extracts - Visual communication - VisualizationUncontrolled terms: Cognitive model - File systems - Hyperbolic tree - Intelligent analysis - Rapid development - Tree management - Visual analysis - Visual InterfaceClassification Code: 461.9 Biology - 717.1 Optical Communication Systems - 902.1 Engineering Graphics - 903 Information Science - 921.4 Combinatorial Mathematics, Includes Graph Theory, Set Theory
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
On EA-equivalence of certain permutations to power mappings
Li, Yongqiang1, 2; Wang, Mingsheng1
Source: Designs, Codes, and Cryptography, v 58, n 3, p 259-269, March 2011
; ISSN: 09251022; DOI: 10.1007/s10623-010-9406-8;
Publisher: Springer Netherlands
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, PO Box 8718, Beijing 100190, China2 Graduate School, Chinese Academy of Sciences, Beijing 100190, China
Abstract: In this paper we investigate the existence of permutation polynomials of the form x^d + L(x) over F_{2^n}, where L(x) ∈ F_{2^n}[x] is a linearized polynomial. It is shown that for some special d with gcd(d, 2^n - 1) > 1, x^d + L(x) is never a permutation of F_{2^n} for any linearized polynomial L(x) ∈ F_{2^n}[x]. For the Gold functions x^{2^i+1}, it is shown that x^{2^i+1} + L(x) is a permutation of F_{2^n} if and only if n is odd and L(x) = α^{2^i} x + α x^{2^i} for some α ∈ F*_{2^n}. We also disprove a conjecture in (Macchetti, Addendum to "On the generalized linear equivalence of functions over finite fields", Cryptology ePrint Archive, Report 2004/347, 2004) in a very simple way. Finally, some interesting results concerning permutation polynomials of the form x^{-1} + L(x) are given. © 2010 Springer Science+Business Media, LLC. (21 refs.)Main Heading: FunctionsControlled terms: Equivalence classes - Linearization - PolynomialsUncontrolled terms: AB function - APN function - CCZ- equivalence - EA-equivalence - Permutation polynomial - S-boxClassification Code: 921 Mathematics - 921.1 Algebra
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
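For small fields, whether a map permutes F_{2^n} is easy to test by brute force. A minimal sketch of GF(2^4) arithmetic; the irreducible polynomial x^4 + x + 1 is a standard choice for GF(16) (an assumption here, not taken from the paper), and the well-known fact illustrated is that x^d permutes F_{2^n} iff gcd(d, 2^n - 1) = 1:

```python
def gf_mul(a, b, mod=0b10011, n=4):
    """Carryless multiplication in GF(2^4), reduced modulo x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << n):
            a ^= mod
    return r

def gf_pow(a, e):
    """Square-and-multiply exponentiation in GF(2^4)."""
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def is_permutation(f, size=16):
    """f permutes the field iff its image has full size."""
    return len({f(x) for x in range(size)}) == size

print(is_permutation(lambda x: gf_pow(x, 3)))  # False: gcd(3, 15) = 3
print(is_permutation(lambda x: gf_pow(x, 7)))  # True:  gcd(7, 15) = 1
```

The paper's question concerns which linearized polynomials L(x) can (or can never) turn a non-bijective power map x^d into a permutation; the same brute-force check extends to x^d + L(x) by evaluating L with gf_mul and XOR, since field addition in characteristic 2 is XOR.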
Cryptanalysis of a certificateless signcryption scheme in the standard model
Weng, Jian1, 2, 3; Yao, Guoxiang2; Deng, Robert H.4; Chen, Min-Rong5; Li, Xiangxue6
Source: Information Sciences, v 181, n 3, p 661-667, 2011
; ISSN: 00200255; DOI: 10.1016/j.ins.2010.09.037;
Publisher: Elsevier Inc.
Author affiliation: 1 State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China2 Department of Computer Science, Jinan University, Guangzhou 510632, China3 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100080, China4 School of Information Systems, Singapore Management University, Singapore 178902, Singapore5 College of Information Engineering, Shenzhen University, Shenzhen 518060, China6 Department of Computer Science and Technology, East China Normal University, Shanghai 200241, China
Abstract: Certificateless signcryption is a useful primitive which simultaneously provides the functionalities of certificateless encryption and certificateless signature. Recently, Liu et al. [15] proposed a new certificateless signcryption scheme, and claimed that their scheme is provably secure without random oracles in a strengthened security model, where the malicious-but-passive KGC attack is considered. Unfortunately, by giving concrete attacks, we show that Liu et al.'s certificateless signcryption scheme is not secure in this strengthened security model. © 2010 Elsevier Inc. All rights reserved. (18 refs.)Main Heading: CryptographyUncontrolled terms: Certificateless - Certificateless signature - Malicious-but-passive KGC attack - Provably secure - Security model - Semantic security - Signcryption - Signcryption schemes - The standard model - Unforgeability - Without random oraclesClassification Code: 716 Telecommunication; Radar, Radio and Television - 717 Optical Communication - 718 Telephone Systems and Related Technologies; Line Communications - 723 Computer Software, Data Handling and Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Semantic decision making using ontology-based soft sets
Jiang, Yuncheng1, 2; Liu, Hai1; Tang, Yong1; Chen, Qimai1
Source: Mathematical and Computer Modelling, v 53, n 5-6, p 1140-1149, March 2011
; ISSN: 08957177; DOI: 10.1016/j.mcm.2010.11.080;
Publisher: Elsevier Ltd
Author affiliation: 1 School of Computer Science, South China Normal University, Guangzhou 510631, China2 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: Molodtsov initiated the concept of soft set theory, which can be used as a generic mathematical tool for dealing with uncertainty. Description Logics (DLs) are a family of knowledge representation languages which can be used to represent the terminological knowledge of an application domain in a structured and formally well-understood way. To extend the expressive power of soft sets, ontology-based soft sets are presented by using the concepts of DLs to act as the parameters of soft sets. In this paper, we investigate soft set based decision making problems more deeply. Concretely, we first point out that the traditional approaches to (fuzzy) soft set based decision making are not fit to solve decision making problems involving user queries through some motivating examples. Furthermore, we present a novel approach to semantic decision making by using ontology-based soft sets and ontology (i.e., DL) reasoning. Lastly, the implementation method of semantic decision making is also discussed. © 2010 Elsevier Ltd. (48 refs.)Main Heading: Decision makingControlled terms: Data description - Knowledge representation - Ontology - Semantics - Set theoryUncontrolled terms: Application domains - Decision-making problem - Description logic - Expressive power - Implementation methods - Knowledge representation language - Mathematical tools - Ontology-based - Query - Soft sets - User queryClassification Code: 723 Computer Software, Data Handling and Applications - 903 Information Science - 903.2 Information Dissemination - 912.2 Management - 921.4 Combinatorial Mathematics, Includes Graph Theory, Set Theory
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
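The classical soft-set decision procedure that the ontology-based approach above generalizes can be sketched in a few lines: represent the soft set (F, E) as a table of objects versus parameters and pick the object with the highest choice value (the number of parameters it satisfies). The house universe and parameter names below are illustrative assumptions, not from the paper:

```python
def choice_values(universe, soft_set):
    """Choice value of each object: how many parameters it satisfies."""
    return {u: sum(u in approx for approx in soft_set.values())
            for u in universe}

def best_choice(universe, soft_set):
    """Classical soft-set decision: object with the maximum choice value."""
    cv = choice_values(universe, soft_set)
    return max(universe, key=lambda u: cv[u])

universe = ["h1", "h2", "h3", "h4"]
soft_set = {  # F: parameter -> approximate set of objects satisfying it
    "cheap": {"h1", "h3"},
    "wooden": {"h1", "h2"},
    "green_surroundings": {"h1", "h3"},
}
print(best_choice(universe, soft_set))  # h1 satisfies all three parameters
```

In the ontology-based extension described in the abstract, the parameters are Description Logic concepts, and whether an object belongs to F(e) is decided by DL reasoning rather than read from a fixed table, which is what lets the approach handle decision problems involving user queries.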
Precise propagation of fault-failure correlations in program flow graphs
Zhang, Zhenyu1; Chan, W.K.2; Tse, T.H.3; Jiang, Bo3
Source: Proceedings - International Computer Software and Applications Conference, p 58-67, 2011, Proceedings - 35th Annual IEEE International Computer Software and Applications Conference, COMPSAC 2011
; ISSN: 07303157; ISBN-13: 9780769544397; DOI: 10.1109/COMPSAC.2011.16; Article number: 6032325; Conference: 35th Annual IEEE International Computer Software and Applications Conference, COMPSAC 2011, July 18, 2011 - July 21, 2011; Sponsor: IEEE; IEEE Computer Society;
Publisher: IEEE Computer Society
Author affiliation: 1 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China2 Department of Computer Science, City University of Hong Kong, Tat Chee Avenue, Hong Kong, Hong Kong3 Department of Computer Science, University of Hong Kong, Pokfulam, Hong Kong
Abstract: Statistical fault localization techniques find suspicious faulty program entities in programs by comparing passed and failed executions. Existing studies show that such techniques can be promising in locating program faults. However, coincidental correctness and execution crashes may make program entities indistinguishable in the execution spectra under study, or cause inaccurate counting, thus severely affecting the precision of existing fault localization techniques. In this paper, we propose a BlockRank technique, which calculates, contrasts, and propagates the mean edge profiles between passed and failed executions to alleviate the impact of coincidental correctness. To address the issue of execution crashes, BlockRank identifies suspicious basic blocks by modeling how each basic block contributes to failures, apportioning its fault relevance to surrounding basic blocks according to the rates of successful transitions observed in passed and failed executions. BlockRank is empirically shown to be more effective than nine representative techniques on four real-life medium-sized programs. © 2011 IEEE. (30 refs.)Main Heading: Tracking (position)Controlled terms: Computer applications - Electric network analysisUncontrolled terms: Basic blocks - BlockRank - Edge profile - Fault localization - Graph - Program flow - Social Network AnalysisClassification Code: 703.1.1 Electric Network Analysis - 716.2 Radar Systems and Equipment - 723.5 Computer Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Model checking for protocols using verds
Ma, Ming1, 2
Source: Proceedings - 5th International Conference on Theoretical Aspects of Software Engineering, TASE 2011, p 231-234, 2011, Proceedings - 5th International Conference on Theoretical Aspects of Software Engineering, TASE 2011; ISBN-13: 9780769545066; DOI: 10.1109/TASE.2011.17; Article number: 6042085; Conference: 5th International Conference on Theoretical Aspects of Software Engineering, TASE 2011, August 29, 2011 - August 31, 2011; Sponsor: IEEE CS; IFIP;
Publisher: IEEE Computer Society
Author affiliation: 1 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 Graduate University, Chinese Academy of Sciences, Beijing 100049, China
Abstract: We present the ternary-boolean-diagram-based and bounded-semantics model checking techniques used in the model checker Verds, developed in our laboratory. In experiments on protocol verification under different scenarios, we compare the performance of Verds against the model checkers CMurphi and NuSMV, showing that Verds overall compares favorably to both. © 2011 IEEE. (11 refs.)Main Heading: Model checkingControlled terms: Semantics - Software engineeringUncontrolled terms: Model checker - Protocol verification - VerdsClassification Code: 723.1 Computer Programming - 903.2 Information Dissemination
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Analysis of telephone call detail records based on fuzzy decision tree
Ding, Liping1; Gu, Jian2; Wang, Yongji1; Wu, Jingzheng1
Source: Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, v 56, p 301-311, 2011, Forensics in Telecommunications, Information, and Multimedia: Third International ICST Conference, e-Forensics 2010, Shanghai, China, November 11-12, 2010, Revised Selected Papers
; ISSN: 18678211; ISBN-13: 9783642236013; DOI: 10.1007/978-3-642-23602-0_30;
Publisher: Springer Verlag
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 Key Lab of Information Network Security of Ministry of Public Security, Third Research Institute of Ministry of Public Security, Shanghai 200031, China
Abstract: Digital evidence can be obtained from computers and various kinds of digital devices, such as telephones, mp3/mp4 players, printers, and cameras. Telephone Call Detail Records (CDRs) are one important source of digital evidence that can identify suspects and their partners. Law enforcement authorities may intercept and record specific conversations with a court order, and CDRs can be obtained from telephone service providers. However, the CDRs of a suspect over a period of time are often fairly large in volume. Obtaining useful information and making appropriate decisions automatically from such a large volume of CDRs becomes more and more difficult. Current analysis tools are designed to present only numerical results rather than help investigators make useful decisions. In this paper, an algorithm based on a fuzzy decision tree (FDT) for analyzing CDRs is proposed. We conducted an experimental evaluation to verify the proposed algorithm, and the results are very promising. © 2011 ICST Institute for Computer Science, Social Informatics and Telecommunications Engineering. (16 refs.)Main Heading: TelephoneControlled terms: Algorithms - Decision trees - Digital devices - Plant extracts - Telecommunication equipment - Telephone sets - Telephone systems - Trees (mathematics)Uncontrolled terms: Court orders - Current analysis - Digital evidence - Enforcement authorities - Experimental evaluation - Forensics - Fuzzy decision trees - Numerical results - Telephone call records - Telephone calls - Telephone-service providersClassification Code: 921.4 Combinatorial Mathematics, Includes Graph Theory, Set Theory - 921 Mathematics - 723 Computer Software, Data Handling and Applications - 721 Computer Circuits and Logic Elements - 718.1 Telephone Systems and Equipment - 718 Telephone Systems and Related Technologies; Line Communications - 717 Optical Communication - 716 Telecommunication; Radar, Radio and Television - 461.9 Biology
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
A multi-compositional enforcement on information flow security
Sun, Cong1, 2, 3; Zhai, Ennan4; Chen, Zhong2, 3; Ma, Jianfeng1
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 7043 LNCS, p 345-359, 2011, Information and Communications Security - 13th International Conference, ICICS 2011, Proceedings
; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642252426; DOI: 10.1007/978-3-642-25243-3_28; Conference: 13th International Conference on Information and Communications Security, ICICS 2011, November 23, 2011 - November 26, 2011; Sponsor: National Natural Science Foundation of China (NNSFC); The Microsoft Corporation; Beijing Tip Technology Corporation; Trusted Computing Group (TCG);
Publisher: Springer Verlag
Author affiliation: 1 Key Lab. of Computer Networks and Information Security, Xidian Univ., MoE, China; 2 Key Lab. of High Confidence Software Technologies, Peking Univ., MoE, China; 3 Key Lab. of Network and Software Security Assurance, Peking Univ., MoE, China; 4 Institute of Software, Chinese Academy of Sciences, China
Abstract: The interactive/reactive computational model is known to be a proper abstraction of many pervasively used systems, such as client-side web-based applications. The critical task of information flow control mechanisms is to determine whether an interactive program can guarantee the confidentiality of secret data. We propose an efficient, flow-sensitive static analysis to enforce information flow policies on programs with interactive I/O. A reachability analysis is performed on the abstract model after a form of transformation, called multi-composition, to check conformance with the policy. In the multi-composition we develop a store-match pattern to avoid duplicating the I/O channels in the model, and use the principle of secure multi-execution to generalize the security lattice model supported by other approaches based on automated verification. We also extend our approach to support a stronger version of termination-insensitive noninterference. The results of preliminary experiments show that our approach is more precise than existing flow-sensitive analyses and that the cost of verification is reduced through the store-match pattern. © 2011 Springer-Verlag. (25 refs.) Main Heading: Security of data; Controlled terms: Abstracting - Flow control - Public policy - Static analysis; Uncontrolled terms: Information flow security - Interactive models - Program analysis - Pushdown systems - Security policy; Classification Code: 631.1 Fluid Flow, General - 723 Computer Software, Data Handling and Applications - 903.1 Information Sources and Analysis - 971 Social Sciences
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
A probabilistic secret sharing scheme for a compartmented access structure
Yu, Yuyin1, 2; Wang, Mingsheng1, 2
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 7043 LNCS, p 136-142, 2011, Information and Communications Security - 13th International Conference, ICICS 2011, Proceedings
; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642252426; DOI: 10.1007/978-3-642-25243-3_11; Conference: 13th International Conference on Information and Communications Security, ICICS 2011, November 23, 2011 - November 26, 2011; Sponsor: National Natural Science Foundation of China (NNSFC); The Microsoft Corporation; Beijing Tip Technology Corporation; Trusted Computing Group (TCG);
Publisher: Springer Verlag
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China; 2 Graduate University, Chinese Academy of Sciences, Beijing 100049, China
Abstract: In a compartmented access structure, there are disjoint sets of participants C1, ..., Cm. The access structure consists of subsets of participants containing at least ti members from Ci for i = 1, ..., m, and a total of at least t0 participants. Tassa [2] asked whether there exists an efficient ideal secret sharing scheme for such an access structure. Tassa and Dyn [5] realized this access structure with the help of its dual access structure. Unlike the scheme constructed in [5], we propose a direct solution here, in the sense that it does not utilize the dual access structure, so our method is compact and simple. © 2011 Springer-Verlag. (9 refs.) Main Heading: Security of data; Controlled terms: Information theory; Uncontrolled terms: Access structure - Direct solution - Ideality - Secret sharing - Secret sharing schemes; Classification Code: 716.1 Information Theory and Signal Processing - 723.2 Data Processing and Image Processing
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
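The compartmented access structure in the record above can be illustrated with a classical additive construction (share each compartment's piece with Shamir's threshold scheme); this is a generic textbook sketch, NOT the probabilistic scheme of the paper, and the field prime P and helper names are assumptions for illustration only.

```python
# Shamir (t)-threshold sharing over GF(P), the building block of classical
# compartmented constructions: split the secret into per-compartment pieces
# plus a global piece, and Shamir-share each. Sketch only; P is an assumption.
import random

P = 2**31 - 1  # a Mersenne prime defining the field GF(P)

def shamir_share(secret, t, xs):
    """Evaluate a random degree-(t-1) polynomial with constant term `secret`."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return {x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            for x in xs}

def shamir_reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P) from t (or more) shares."""
    total = 0
    for xi, yi in shares.items():
        num, den = 1, 1
        for xj in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat's little theorem)
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total
```

For the compartmented structure, one would share a piece si among compartment Ci with threshold ti and a remaining piece among all participants with threshold t0, then add the recovered pieces.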
A new SVM-based image watermarking using Gaussian-Hermite moments
Wang, Xiang-Yang1, 2, 3; Miao, E-No1; Yang, Hong-Ying1
Source: Applied Soft Computing Journal, 2011
; ISSN: 15684946; DOI: 10.1016/j.asoc.2011.10.003 Article in Press
Author affiliation: 1 School of Computer and Information Technology, Liaoning Normal University, Dalian 116029, China; 2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China; 3 Network and Data Security Key Laboratory of Sichuan Province, Chengdu 611731, China
Abstract: Geometric attack is known as one of the most difficult attacks to resist, for it can desynchronize the location of the watermark and hence cause incorrect watermark detection. Designing an image watermarking scheme that is robust against geometric attacks is challenging. Based on the support vector machine (SVM) and Gaussian-Hermite moments (GHMs), we propose a robust image watermarking algorithm in the nonsubsampled contourlet transform (NSCT) domain with good visual quality and reasonable resistance to geometric attacks. First, the NSCT is performed on the original host image, and the corresponding low-pass subband is selected for embedding the watermark. Then, the selected low-pass subband is divided into small blocks. Finally, the digital watermark is embedded into the host image by adaptively modulating the NSCT coefficients in each small block. The main steps of the watermark detection procedure are: (1) some low-order Gaussian-Hermite moments of the training image are computed and regarded as the effective feature vectors; (2) an appropriate kernel function is selected for training, yielding an SVM training model; (3) the watermarked image is corrected with the well-trained SVM model; (4) the digital watermark is extracted from the corrected watermarked image. Experimental results show that the proposed image watermarking is not only invisible and robust against common image processing operations such as filtering, noise adding, JPEG compression, etc., but also robust against geometric attacks. © 2011 Elsevier B.V.
All rights reserved. Main Heading: Watermarking; Controlled terms: Digital watermarking - Gaussian distribution - Geometry - Image processing - Mathematical transformations - Support vector machines; Uncontrolled terms: Digital water-marks - Embedding watermarks - Feature vectors - Gaussian-Hermite moments - Geometric attacks - Host images - Image Watermarking - Image watermarking algorithm - JPEG compression - Kernel function - Low-pass - Noise adding - Nonsubsampled contourlet transforms - Processing operations - Sub-bands - SVM model - Training image - Training model - Visual qualities - Watermark detection - Watermarked images; Classification Code: 723 Computer Software, Data Handling and Applications - 723.2 Data Processing and Image Processing - 811.1.1 Papermaking Processes - 921 Mathematics - 921.3 Mathematical Transformations - 922.1 Probability Theory
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Local bias and its impacts on the performance of parametric estimation models
Yang, Ye1; Xie, Lang1, 2; He, Zhimin1, 2; Li, Qi3; Nguyen, Vu3; Boehm, Barry3; Valerdi, Ricardo4
Source: ACM International Conference Proceeding Series, 2011, PROMISE 2011 - 7th International Conference on Predictive Models in Software Engineering, Co-located with ESEM 2011; ISBN-13: 9781450307093; DOI: 10.1145/2020390.2020404; Article number: 2020404; Conference: 7th International Conference on Predictive Models in Software Engineering, PROMISE 2011, Co-located with ESEM 2011, September 20, 2011 - September 21, 2011;
Publisher: Association for Computing Machinery
Author affiliation: 1 Lab for Internet Software Technology, Institute of Software, Chinese Academy of Sciences, Beijing, China; 2 Graduate University of Chinese Academy of Sciences, Beijing, China; 3 Center for Systems and Software Engineering, University of Southern California, Los Angeles, United States; 4 Lean Advancement Initiative, Massachusetts Institute of Technology, Cambridge, United States
Abstract: Background: Continuously calibrated and validated parametric models are necessary for realistic software estimates. However, in practice, variations in model adoption and usage patterns introduce a great deal of local bias into the resultant historical data. Such local bias should be carefully examined and addressed before the historical data can be used for calibrating new versions of parametric models. Aims: In this study, we aim to investigate the degree of such local bias in a cross-company historical dataset and to assess its impact on a parametric estimation model's performance. Method: Our study consists of three parts: 1) defining a method for measuring and analyzing the local bias associated with each individual organization's data subset in the overall dataset; 2) assessing the impact of local bias on the estimation performance of the COCOMO II 2000 model; 3) performing a correlation analysis to verify that local bias can be harmful to the performance of a parametric estimation model. Results: Our results show that local bias negatively impacts the performance of the parametric model; our measure of local bias has a statistically significant positive correlation with performance. Conclusion: Local calibration using the whole multi-company dataset yields worse performance. The influence of multi-company data can be characterized as local bias and measured by our method. Copyright © 2011 ACM. (27 refs.) Main Heading: Parameter estimation; Controlled terms: Estimation - Models - Predictive control systems - Software engineering; Uncontrolled terms: Accuracy indicators - COCOMO II - Correlation analysis - Data sets - Data subsets - Effort estimation - Estimation performance - Historical data - Local bias - Parametric estimation - Parametric models - Positive correlations - Usage patterns; Classification Code: 723.1 Computer Programming - 731.1 Control Systems - 902.1 Engineering Graphics - 921 Mathematics
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
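The record above evaluates estimation performance against historical data; the accuracy indicators conventionally used for such studies (MMRE and PRED) can be sketched as below. These are the standard generic metrics, not the paper's local-bias measure, which its abstract defines only informally.

```python
# Standard estimation-accuracy indicators for effort-estimation studies
# (e.g. COCOMO-style model calibration). Generic sketch, not code from
# the paper above.
def mmre(actuals, estimates):
    """Mean Magnitude of Relative Error: mean of |actual - estimate| / actual."""
    return sum(abs(a - e) / a for a, e in zip(actuals, estimates)) / len(actuals)

def pred(actuals, estimates, level=0.25):
    """PRED(level): fraction of estimates within `level` relative error."""
    hits = sum(abs(a - e) / a <= level for a, e in zip(actuals, estimates))
    return hits / len(actuals)
```

A model calibrated on biased multi-company data would typically show a higher MMRE and lower PRED(0.25) on a given organization's projects than a locally calibrated one.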
Analysis on coverage performance of staring sensors infrared LEO constellation
Deng, Yong1, 2; Wang, Chun-Ming2; Zhang, Zhong-Zhao1
Source: Yuhang Xuebao/Journal of Astronautics, v 32, n 1, p 123-128, January 2011; Language: Chinese
; ISSN: 10001328; DOI: 10.3873/j.issn.1000-1328.2011.01.019;
Publisher: China Spaceflight Society
Author affiliation: 1 Communication Research Center, Harbin Institute of Technology, Harbin 150001, China; 2 Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: An infrared LEO constellation is designed to detect and track objects above the horizon against a cold space background. First, detection conditions of staring sensors for the infrared LEO constellation are analyzed and a geometric observation model is built. Coverage criteria for staring sensors covering a space region are discussed for a bearing-only location system, and several indexes are proposed to describe the space coverage performance of an infrared LEO constellation. Finally, based on a vertex-cover numerical simulation method, the space coverage performance of a typical constellation is given from three aspects: the height distribution of multiple coverage, the coverage distribution of multiple coverage over a latitude belt, and the height distribution of multiple coverage over a three-dimensional space region; the variability is captured well. Some conclusions are drawn, providing a reference for infrared LEO constellation design and its space applications. (10 refs.) Main Heading: Mathematical models; Controlled terms: Computer simulation - Numerical methods - Sensors; Uncontrolled terms: Bearing-only location - Coverage - Geometric observation - Height distribution - Latitude belts - LEO constellation - Numerical simulation - Numerical simulation method - Space coverage - Space regions - Staring sensor - Three dimensional space - Vertex cover; Classification Code: 723.5 Computer Applications - 801 Chemistry - 921 Mathematics - 921.6 Numerical Methods
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Single-pass data access for multi-fragment effects rendering on GPUs
Xie, Guo-Fu1, 2; Wang, Wen-Cheng1
Source: Jisuanji Xuebao/Chinese Journal of Computers, v 34, n 3, p 473-481, March 2011; Language: Chinese
; ISSN: 02544164; DOI: 10.3724/SP.J.1016.2011.00473;
Publisher: Science Press
Author affiliation: 1 State Key Laboratory of Computer Science, Institute of Software, Chinese Acad. of Sci., Beijing 100190, China; 2 Graduate University of Chinese Acad. of Sci., Beijing 100049, China
Abstract: Rendering of multi-fragment effects can be greatly accelerated on the GPU. However, existing methods always need to read the model data in more than one pass, due to the requirement for depth ordering of fragments and the architectural limitations of the GPU. This has been a bottleneck for increasing rendering efficiency, because of the limited transmission bandwidth from CPU to GPU. Though methods have been proposed to use CUDA with the data loaded once, they cannot process large models due to the limited storage on the GPU. This paper proposes a new method for single-pass GPU rendering of multi-fragment effects. It first decomposes the 3D model into a set of convex polyhedrons, and then determines, according to the viewpoint, the order in which the convex polyhedrons are transmitted one by one to the GPU, to guarantee the correct ordering of fragments. In the process, the new method immediately performs illumination computation and blends the rendering results of the transmitted convex polyhedrons, so it can greatly reduce the storage requirement. As a result, it can take more shading parameters to improve the rendering effects. Experimental results show that the new method can be faster than existing methods, even those using CUDA, and can conveniently handle large models, even those with high depth complexity. (15 refs.) Main Heading: Computer graphics equipment; Controlled terms: Computer graphics - Geometry - Program processors - Three dimensional - Transparency; Uncontrolled terms: 3D models - Convex polyhedrons - Data access - Depth complexity - Existing method - Fragment processing - GPU rendering - Graphics Processing Unit - Limited storage - Model data - One-pass - Storage requirements - Translucency; Classification Code: 722.2 Computer Peripheral Equipment - 723.1 Computer Programming - 723.5 Computer Applications - 741.1 Light/Optics - 921 Mathematics - 921.4 Combinatorial Mathematics, Includes Graph Theory, Set Theory
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Constructing technology for hypervideo based on sketch interface
Yang, Haiyan1, 2; Chen, Jia1, 2; Ma, Cuixia1; He, Lili3; Teng, Dongxing1; Dai, Guozhong1; Wang, Hong'an1
Source: Jisuanji Yanjiu yu Fazhan/Computer Research and Development, v 48, n 2, p 289-295, February 2011; Language: Chinese; ISSN: 10001239;
Publisher: Science Press
Author affiliation: 1 Intelligence Engineering Laboratory, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China; 2 Graduate University of Chinese Academy of Sciences, Beijing 100049, China; 3 College of Informatics and Electronics, Zhejiang Science and Technology University, Hangzhou 310018, China
Abstract: Due to video's temporally varying nature, it is difficult for users to grasp a video's main content at a glance. Constructing a hypervideo has always depended on complex computer vision techniques, which leads to high cognitive load and complicated operations for users. Traditional operations based on a time bar or video frame sequence provide ways to interact with video, but it is still inconvenient for users to edit or navigate videos flexibly according to video semantics. To solve these problems, a novel sketch-based method is proposed to provide natural and efficient interaction for hypervideo construction. Owing to its abstract and vague nature, sketching is well suited to describing and enriching video content, and can bridge the gap between low-level features and high-level semantics. In this paper, we analyze the complex semantic relations between different video resources and divide sketches into two types, according to the level of video semantics, to assist hypervideo construction. Two key problems of sketch creation are resolved: one is sketch similarity matching, the other is the fusion of sketch and video. Finally, an application example is given to illustrate how to realize efficient hypervideo construction through sketch-based video semantic representation. (16 refs.) Main Heading: Semantics; Controlled terms: Computer vision; Uncontrolled terms: Hypervideo - Interaction technology for hypervideo construction - Sketch based annotation - Sketch based interface - Video semantic representation; Classification Code: 723.5 Computer Applications - 903.2 Information Dissemination
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Distinguishing attacks on LPMAC based on the full RIPEMD and reduced-step RIPEMD-{256,320}
Wang, Gaoli1, 2
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6584 LNCS, p 199-217, 2011, Information Security and Cryptology - 6th International Conference, Inscrypt 2010, Revised Selected Papers
; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642215179; DOI: 10.1007/978-3-642-21518-6_15; Conference: 6th China International Conference on Information Security and Cryptology, Inscrypt 2010, October 20, 2010 - October 24, 2010; Sponsor: State Key Laboratory of Information Security; Chinese Academy of Sciences; Chinese Association for Cryptologic Research;
Publisher: Springer Verlag
Author affiliation: 1 School of Computer Science and Technology, Donghua University, Shanghai 201620, China; 2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing, China
Abstract: This paper presents the first distinguishing attack on the LPMAC based on the full RIPEMD, 58-step reduced RIPEMD-256, and 48-step reduced RIPEMD-320; the LPMAC is the secret-prefix MAC with the message length prepended to the message before hashing. Wang et al. presented the first distinguishing attack on HMAC/NMAC-MD5 without the related-key setting in [27], and then extended this technique to give a distinguishing attack on the LPMAC based on 61-step SHA-1 in [24]. In this paper, we utilize the techniques in [24,27], combined with our pseudo-near-collision differential paths on the full RIPEMD, 58-step reduced RIPEMD-256, and 48-step reduced RIPEMD-320, to distinguish the LPMAC based on each of these functions from the LPMAC based on a random function. Because RIPEMD and RIPEMD-{256,320} all contain two different and independent parallel lines of operations, the difficulty of our attack is to choose proper message differences and to find proper near-collision differential paths for the two parallel lines of operations. The complexity of distinguishing the LPMAC based on the full RIPEMD is about 2^66 MAC queries. For the LPMAC based on 58-step reduced RIPEMD-256 and 48-step reduced RIPEMD-320, the complexities are about 2^163.5 and 2^208.5 MAC queries, respectively. © 2011 Springer-Verlag. (30 refs.) Main Heading: Security of data; Controlled terms: Hash functions; Uncontrolled terms: Distinguishing attacks - MAC - Message length - Parallel line - Random functions - RIPEMD family; Classification Code: 723 Computer Software, Data Handling and Applications - 723.2 Data Processing and Image Processing - 921 Mathematics
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
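The LPMAC construction named in the record above (a secret-prefix MAC with the message length prepended before hashing) can be sketched as follows. This is an illustrative reading, not code from the paper: SHA-1 stands in for RIPEMD (RIPEMD-160 is not guaranteed to be available in every hashlib build), and the fixed 8-byte big-endian length encoding is an assumption.

```python
# Sketch of the LPMAC idea: MAC_k(m) = H(k || len(m) || m), i.e. a
# secret-prefix MAC with the message length prepended to the message.
# SHA-1 is a stand-in for RIPEMD; the 8-byte length field is assumed.
import hashlib

def lpmac(key: bytes, message: bytes) -> bytes:
    length = len(message).to_bytes(8, "big")  # prepend the message length
    return hashlib.sha1(key + length + message).digest()
```

Prepending the length blocks straightforward length-extension forgeries on the plain secret-prefix MAC, which is why the attacks in the paper instead rely on near-collision differential paths in the compression function.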
LBlock: A lightweight block cipher
Wu, Wenling1; Zhang, Lei1
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6715 LNCS, p 327-344, 2011, Applied Cryptography and Network Security - 9th International Conference, ACNS 2011, Proceedings
; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642215537; DOI: 10.1007/978-3-642-21554-4_19; Conference: 9th International Conference on Applied Cryptography and Network Security, ACNS 2011, June 7, 2011 - June 10, 2011;
Publisher: Springer Verlag
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: In this paper, we propose a new lightweight block cipher called LBlock. As in many other lightweight block ciphers, the block size of LBlock is 64 bits and the key size is 80 bits. Our security evaluation shows that LBlock achieves a sufficient security margin against known attacks, such as differential cryptanalysis, linear cryptanalysis, impossible differential cryptanalysis, and related-key attacks. Furthermore, LBlock can be implemented efficiently not only in hardware environments but also on software platforms such as 8-bit microcontrollers. Our hardware implementation of LBlock requires about 1320 GE in 0.18 μm technology with a throughput of 200 Kbps at 100 KHz. The software implementation of LBlock on an 8-bit microcontroller requires about 3955 clock cycles to encrypt a plaintext block. © 2011 Springer-Verlag Berlin Heidelberg. (36 refs.) Main Heading: Cryptography; Controlled terms: Hardware - Lyapunov methods - Microcontrollers - Network security; Uncontrolled terms: 8-bit microcontrollers - Block ciphers - Block sizes - Clock cycles - Cryptanalysis - Differential cryptanalysis - Hardware efficiency - Hardware environment - Hardware implementations - Key sizes - Lightweight - Lightweight blocks - Linear cryptanalysis - M-Technologies - Plaintext - Related-key attacks - Security evaluation - Security margins - Software implementation - Software platforms; Classification Code: 605 Small Tools and Hardware - 722 Computer Systems and Equipment - 723 Computer Software, Data Handling and Applications - 921 Mathematics - 961 Systems Science
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
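The 64-bit Feistel-type structure that the LBlock record describes can be sketched generically. To be clear about assumptions: this is NOT the LBlock specification — the published S-boxes, permutation, key schedule, and round count are omitted, and the round function F and round keys below are placeholders chosen only to show the invertible two-half structure with a rotation on one branch.

```python
# Generic Feistel-variant skeleton (64-bit block as two 32-bit halves;
# each round applies a round function to one half and rotates the other
# by 8). Placeholder F and keys: NOT the real LBlock cipher.
MASK = 0xFFFFFFFF

def rotl32(x, r):
    return ((x << r) | (x >> (32 - r))) & MASK

def F(x, k):
    # Placeholder round function standing in for LBlock's S-box/permutation layer.
    return rotl32((x + k) & MASK, 5) ^ k

def encrypt(block, round_keys):
    left, right = block >> 32, block & MASK
    for k in round_keys:
        left, right = F(left, k) ^ rotl32(right, 8), left
    return (left << 32) | right

def decrypt(block, round_keys):
    left, right = block >> 32, block & MASK
    for k in reversed(round_keys):
        # invert one round: recover the old left from `right`, then the old right
        left, right = right, rotl32(F(right, k) ^ left, 24)  # rotl 24 == rotr 8
    return (left << 32) | right
```

The skeleton is invertible regardless of F, which is the key property of Feistel-type designs: decryption reuses F rather than needing its inverse.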
KnitSketch: A sketch pad for conceptual design of 2D garment patterns
Ma, Cui-Xia1; Liu, Yong-Jin2; Yang, Hai-Yan1; Teng, Dong-Xing1; Wang, Hong-An1; Dai, Guo-Zhong1
Source: IEEE Transactions on Automation Science and Engineering, v 8, n 2, p 431-437, April 2011; ISSN: 15455955; DOI: 10.1109/TASE.2010.2086444; Article number: 5613218;
Publisher: Institute of Electrical and Electronics Engineers Inc.
Author affiliation: 1 Intelligence Engineering Laboratory, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China; 2 Tsinghua National Laboratory for Information Science and Technology, Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
Abstract: In this paper, we present a new sketch-based system, KnitSketch, to improve the efficiency of process planning for knitting garments at an early design stage. The KnitSketch system provides a sketching interface with a pen-paper metaphor, and users only need to draw outlines of the different parts of a garment. Based on sketch understanding, the system automatically makes reasonable geometric inferences about the process-planning data of the garment. The system is designed for nonprofessional users and can produce diverse garment styles from freehand drawings. The contributions of this work include contextual extraction of reusable data from sketches, an MDG structure for sketch beautification, and an integrated system with natural expression and effective communication that reduces users' cognitive load. User experience shows that the proposed system helps designers focus on the task instead of the designing tools, and thus improves efficiency and productivity. © 2010 IEEE. (19 refs.) Main Heading: Conceptual design; Controlled terms: Integrated optics - Manufacture - Process engineering; Uncontrolled terms: Can design - Cognitive loads - Designing tools - domain knowledge - Early design stages - Effective communication - Free-hand drawing - garment manufacturing - Human being - Integrated systems - Non-professional users - Planning data - sketch-based interface - Sketching interface - User experience; Classification Code: 408 Structural Design - 537.1 Heat Treatment Processes - 731 Automatic Control Principles and Applications - 741.3 Optical Devices and Systems
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
An efficient location-based compromise-tolerant key management scheme for sensor networks
Duan, Mei-Jiao1; Xu, Jing1
Source: Information Processing Letters, v 111, n 11, p 503-507, May 15, 2011; ISSN: 00200190; DOI: 10.1016/j.ipl.2011.02.017;
Publisher: Elsevier
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing, China
Abstract: Location information has received much attention in sensor network key management schemes. In 2006, Zhang et al. proposed a location-based key management scheme that binds the private keys of individual nodes to both their identities and their locations. In this Letter, however, we show that their scheme cannot resist the key compromise impersonation (KCI) attack and does not achieve forward secrecy. In fact, an adversary who compromises the location-based secret key of a sensor node A can masquerade as any other legitimate node, or even fake a node, to establish a shared key with A, as well as decrypt all previous messages exchanged between A and its neighboring nodes. We then propose a new scheme which provides KCI resilience and perfect forward secrecy and is also immune to various known types of attacks. Moreover, our scheme does not require any pairing operation or map-to-point hash operation, which makes it more efficient and more suitable for low-power sensor nodes. © 2011 Elsevier B.V. All rights reserved. (6 refs.) Main Heading: Wireless sensor networks; Controlled terms: Cryptography - Information management - Network management - Sensor nodes - Sensors - Telecommunication equipment - Wireless networks; Uncontrolled terms: Forward secrecy - Hash operations - Key management - Key-compromise impersonation - Location based - Location information - Low Power - Neighboring nodes - Perfect forward secrecy - Private key - Secret key - Security in digital systems; Classification Code: 903.2 Information Dissemination - 801 Chemistry - 732 Control Devices - 723 Computer Software, Data Handling and Applications - 722 Computer Systems and Equipment - 718 Telephone Systems and Related Technologies; Line Communications - 717 Optical Communication - 716.3 Radio Systems and Equipment - 716 Telecommunication; Radar, Radio and Television
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Post-WIMP user interface model for personal information management
Chen, Ming-Xuan1; Ren, Lei2; Tian, Feng1; Deng, Chang-Zhi1; Dai, Guo-Zhong1
Source: Ruan Jian Xue Bao/Journal of Software, v 22, n 5, p 1082-1096, May 2011; Language: Chinese; ISSN: 10009825; DOI: 10.3724/SP.J.1001.2011.03749;
Publisher: Chinese Academy of Sciences
Author affiliation: 1 Laboratory of Human-Computer Interaction and Intelligent Information Processing, Institute of Software, Chinese Acad. of Sci., Beijing 100190, China; 2 School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China
Abstract: This paper proposes a Post-WIMP interface model for personal information management (PWPIM), presents five facets that elaborate user characteristics, domain objects, and interaction tasks, and gives the modeling method. Finally, this paper applies PWPIM to a physical-interface-based PIM (personal information management) system. The application example shows that PWPIM can effectively describe the Post-WIMP interface for personal information management. Results also show that PWPIM fits the features of PIM systems and is capable of guiding the design, development, and evaluation of PIM interfaces. © 2011 ISCAS. (24 refs.) Main Heading: Management; Controlled terms: Human computer interaction - Knowledge management - User interfaces; Uncontrolled terms: Human-computer - Interface model - Natural interactions - Personal information management - Post-WIMP; Classification Code: 722.2 Computer Peripheral Equipment - 723.5 Computer Applications - 912.2 Management
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Fast discrete Fourier spectra attacks on stream ciphers
Gong, Guang1; Rønjom, Sondre4; Helleseth, Tor5; Hu, Honggang2, 3
Source: IEEE Transactions on Information Theory, v 57, n 8, p 5555-5565, August 2011; ISSN: 00189448; DOI: 10.1109/TIT.2011.2158480; Article number: 5961824;
Publisher: Institute of Electrical and Electronics Engineers Inc.
Author affiliation: 1 Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada; 2 School of Information Science and Technology, University of Science and Technology of China, Hefei, 230026, China; 3 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing, 100190, China; 4 NSM, Rødskiferveien 20, 1352 Bærum, Norway; 5 Selmer Center, Department of Informatics, University of Bergen, PB 7803, N-5020 Bergen, Norway
Abstract: In this paper, some new results are presented on the selective discrete Fourier spectra attack, introduced first as the Rønjom-Helleseth attack, and its modifications due to Rønjom, Gong, and Helleseth. The first part of this paper fills some gaps in the theory of analysis in terms of the discrete Fourier transform (DFT). The second part introduces new fast selective DFT attacks, which are closely related to the fast algebraic attacks in the literature. However, in contrast to the classical view that successful algebraic cryptanalysis of an LFSR-based stream cipher depends on the degree of certain annihilators, the analysis in terms of the DFT spectral properties of the sequences generated by these functions is far more refined. It is shown that the selective DFT attack is more efficient than known methods when the number of observed consecutive bits of a filter generator is less than the linear complexity of the sequence. Thus, by utilizing the natural representation imposed by the underlying LFSRs, in certain cases the analysis in terms of DFT spectra is more efficient and more flexible than classical and fast algebraic attacks. Consequently, the new attack imposes a new design criterion for cryptographically strong Boolean functions, defined as the spectral immunity of a sequence or a Boolean function. © 2011 IEEE.
(27 refs.) Main Heading: Boolean functions; Controlled terms: Algebra - Cryptography - Discrete Fourier transforms - Mathematical transformations - Shift registers; Uncontrolled terms: Algebraic cryptanalysis - Fast algebraic attack - Filter generators - Fourier spectra - LFSR-based stream ciphers - Linear complexity - Linear feedback shift registers - Natural representation - New results - spectral immunity - Spectral properties - Stream Ciphers; Classification Code: 921.3 Mathematical Transformations - 921.1 Algebra - 723 Computer Software, Data Handling and Applications - 721.3 Computer Circuits - 718 Telephone Systems and Related Technologies; Line Communications - 717 Optical Communication - 716 Telecommunication; Radar, Radio and Television
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Research on trusted computing technology
Feng, Dengguo1; Qin, Yu1; Wang, Dan1; Chu, Xiaobo1
Source: Jisuanji Yanjiu yu Fazhan/Computer Research and Development, v 48, n 8, p 1332-1349, August 2011; Language: Chinese; ISSN: 10001239;
Publisher: Science Press
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: Trusted computing, as a novel information security technology, has become an important research area of information security. TCG, a consortium of international IT giants, has published a series of trusted computing specifications to promote the comprehensive development of the trusted computing technology and industry, and the core specifications have been adopted as international standards by ISO/IEC. In academia, research institutions at home and abroad study trusted computing technology in depth and have obtained rich results. In China, an independent trusted computing standard infrastructure has been founded around the core of TCM on the basis of independent cryptographic algorithms, forming a complete trusted computing industry chain and breaking the monopoly on trusted computing technology and industry held by the international IT giants. With the rapid development of the trusted computing field, many key technical problems remain to be solved, and related research has been carried out in succession recently. This paper comprehensively illustrates our research results on trusted computing technology. Beginning with establishing trust in terminal platforms, we propose a trustworthiness-based trust model and give a method of building the trust chain dynamically with information flow, which to some extent ensures real-time and secure trust establishment. Aiming at the security and efficiency problems of remote attestation protocols, we propose the first property-based attestation scheme on bilinear maps and the first direct anonymous attestation scheme based on the q-SDH assumption from bilinear maps. In trusted computing testing and evaluation, we propose a method of generating test cases automatically with EFSM, and from this method develop a trusted computing platform testing and evaluation system, the first to be applied practically in China. (85 refs.)
Main Heading: Information technology
Controlled terms: Automatic test pattern generation - Industry - Network security - Research - Societies and institutions - Specifications
Uncontrolled terms: Remote attestation - TCM - TPM - Trust chain - Trusted computing
Classification Code: 913 Production Planning and Control; Manufacturing - 912 Industrial Engineering and Management - 911 Cost and Value Engineering; Industrial Economics - 903 Information Science - 902.2 Codes and Standards - 901.3 Engineering Research - 901.1.1 Societies and Institutions - 723 Computer Software, Data Handling and Applications - 714.2 Semiconductor Devices and Integrated Circuits
Static detection of bugs caused by incorrect exception handling in java programs
Wu, Xiaoquan1, 2, 3; Xu, Zhongxing2; Wei, Jun1, 2
Source: Proceedings - International Conference on Quality Software, p 61-66, 2011, Proceedings - 11th International Conference on Quality Software, QSIC 2011; ISSN: 15506002; ISBN-13: 9780769544687; DOI: 10.1109/QSIC.2011.25; Article number: 6004312; Conference: 11th International Conference on Quality Software, QSIC 2011, July 13, 2011 - July 14, 2011; Sponsor: Computer Science School of the Universidad Complutense de Madrid; Madrid Convention Bureau of the Madrid City Council;
Publisher: IEEE Computer Society
Author affiliation: 1 Technology Center of Software Engineering, Institute of Software, Chinese Academy of Sciences, Beijing, China2 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China3 Graduate University, Chinese Academy of Sciences, Beijing, China
Abstract: Exception handling is a vital but often poorly tested part of a program. Static analysis can spot bugs on exceptional paths without actually making the exceptions happen. However, traditional methods only look for null dereferences on exceptional paths and do not check the states of variables, which may be corrupted by exceptions. In this paper we propose a static analysis method that combines forward flow-sensitive analysis and backward path feasibility analysis to detect bugs caused by incorrect exception handling in Java programs. We found 8 bugs in three open-source server applications, 6 of which cannot be found by FindBugs. The experiments showed that our method is effective for finding bugs related to poorly handled exceptions. © 2011 IEEE. (22 refs.)
Main Heading: Program debugging
Controlled terms: Java programming language - Software testing - Static analysis
Uncontrolled terms: Exception handling - Feasibility analysis - Flow-sensitive analysis - Java program - Open Sources - Server applications - Static analysis method - Unsafe Use
Classification Code: 723 Computer Software, Data Handling and Applications
Video steganography with perturbed motion estimation
Cao, Yun1, 2; Zhao, Xianfeng1; Feng, Dengguo1; Sheng, Rennong3
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6958 LNCS, p 193-207, 2011, Information Hiding - 13th International Conference, IH 2011, Revised Selected Papers;
ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642241772;
DOI: 10.1007/978-3-642-24178-9_14; Conference: 13th International Conference on Information Hiding, IH 2011, May 18, 2011 - May 20, 2011; Sponsor: European Office of Aerospace Research and Development; Office of Naval Research; Digimarc Corporation; Technicolor;
Publisher: Springer Verlag
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 Graduate University, Chinese Academy of Sciences, Beijing 100049, China3 Beijing Institute of Electronics Technology and Application, Beijing 100091, China
Abstract: In this paper, we propose an adaptive video steganography tightly bound to video compression. Unlike traditional approaches utilizing the spatial/transform domain of images or raw videos, which are vulnerable to certain existing steganalyzers, our approach targets the internal dynamics of video compression. Inspired by Fridrich et al.'s perturbed quantization (PQ) steganography, a technique called perturbed motion estimation (PME) is introduced to perform motion estimation and message hiding in one step. To minimize the embedding impact, the perturbations are optimized with the hope that they will be confused with normal estimation deviations. Experimental results show that satisfactory levels of visual quality and security are achieved with adequate payloads. © 2011 Springer-Verlag. (18 refs.)
Main Heading: Steganography
Controlled terms: Estimation - Image compression - Motion estimation
Uncontrolled terms: Internal dynamics - Message hiding - Normal estimation - One step - Perturbed quantization - Visual qualities
Classification Code: 716.1 Information Theory and Signal Processing - 723.2 Data Processing and Image Processing - 921 Mathematics
General construction of chameleon all-but-one trapdoor functions
Liu, Shengli1, 2; Lai, Junzuo3; Deng, Robert H.3
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6980 LNCS, p 257-265, 2011, Provable Security - 5th International Conference, ProvSec 2011, Proceedings;
ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642243158;
DOI: 10.1007/978-3-642-24316-5_18; Conference: 5th International Conference on Provable Security, ProvSec 2011, October 16, 2011 - October 18, 2011; Sponsor: The National Natural Science Foundation of China (NSFC); Xidian Univ., Key Lab. Comput. Networks; Inf. Secur., Minist. Educ.;
Publisher: Springer Verlag
Author affiliation: 1 Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, China3 School of Information Systems, Singapore Management University, Singapore 178902, Singapore
Abstract: Lossy trapdoor functions enable black-box construction of public key encryption (PKE) schemes secure against chosen-ciphertext attack [18]. Recently, a more efficient black-box construction of public key encryption was given in [12] with the help of chameleon all-but-one trapdoor functions (ABO-TDFs). In this paper, we propose a black-box construction for transforming any ABO-TDFs into chameleon ABO-TDFs with the help of chameleon hash functions. Instantiating the proposed general black-box construction of chameleon ABO-TDFs, we can obtain the first chameleon ABO-TDFs based on the Decisional Diffie-Hellman (DDH) assumption. © 2011 Springer-Verlag. (21 refs.)
Main Heading: Public key cryptography
Controlled terms: Hash functions
Uncontrolled terms: Black boxes - Chosen ciphertext attack - Diffie Hellman - Lossy Trapdoor Functions - Public-key encryption - Trapdoor functions
Classification Code: 716 Telecommunication; Radar, Radio and Television - 717 Optical Communication - 718 Telephone Systems and Related Technologies; Line Communications - 723 Computer Software, Data Handling and Applications - 921 Mathematics
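The "chameleon" property the construction relies on is that whoever holds a trapdoor can find hash collisions at will. As background, here is a minimal Python sketch of the classic discrete-log chameleon hash with toy parameters; it is illustrative only and is not the paper's DDH-based chameleon ABO-TDF.

```python
# Toy parameters only: g = 4 has prime order q = 11 in Z_23*.
p, q, g = 23, 11, 4
x = 7                    # trapdoor key
h = pow(g, x, p)         # public key

def ch(m, r):
    """Chameleon hash CH(m, r) = g^m * h^r mod p."""
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

def collide(m, r, m2):
    """With the trapdoor x, find r2 so CH(m2, r2) == CH(m, r):
    solve m + x*r = m2 + x*r2 (mod q)."""
    return (r + (m - m2) * pow(x, -1, q)) % q

r2 = collide(3, 5, 9)    # collision pairing message 9 with message 3
```

Without `x`, finding such an `r2` would require solving a discrete logarithm; with it, the collision is one modular inverse.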
Achieving efficient dynamic cryptographic access control in cloud storage
Hong, Cheng1; Zhang, Min1; Feng, Deng-Guo1
Source: Tongxin Xuebao/Journal on Communications, v 32, n 7, p 125-132, July 2011; Language: Chinese; ISSN: 1000436X;
Publisher: Editorial Board of Journal on Communications
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Acad. of Sci., Beijing 100190, China
Abstract: To keep the data in the cloud confidential against unauthorized parties, a cryptographic access control solution called hybrid cloud re-encryption (HCRE) based on attribute-based encryption (ABE) was introduced. HCRE designed a secret sharing scheme to delegate the task of ABE re-encryption to the cloud service provider (CSP), which alleviates the administering burdens on the data owner. Experiments show that HCRE can handle dynamic access policies in a more efficient way. Additionally, HCRE does not reveal extra information of the plaintext to the CSP, thus it does no harm to the data confidentiality. (16 refs.)
Main Heading: Access control
Controlled terms: Cloud computing - Cryptography - Security systems
Uncontrolled terms: Access policies - Attributes-based encryption - Cloud services - Control solutions - Cryptographic access control - Data confidentiality - Plaintext - Proxy re encryptions - Re-encryption - Secret sharing schemes
Classification Code: 722.4 Digital Computers and Systems - 723 Computer Software, Data Handling and Applications - 914.1 Accidents and Accident Prevention
An adjustable approach to intuitionistic fuzzy soft sets based decision making
Jiang, Yuncheng1, 2; Tang, Yong1; Chen, Qimai1
Source: Applied Mathematical Modelling, v 35, n 2, p 824-836, February 2011; ISSN: 0307904X; DOI: 10.1016/j.apm.2010.07.038;
Publisher: Elsevier Inc.
Author affiliation: 1 School of Computer Science, South China Normal University, Guangzhou 510631, China2 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: Molodtsov initiated the concept of soft set theory, which can be used as a generic mathematical tool for dealing with uncertainty. There has been some progress concerning practical applications of soft set theory, especially the use of soft sets in decision making. In this paper we generalize the adjustable approach to fuzzy soft sets based decision making. Concretely, we present an adjustable approach to intuitionistic fuzzy soft sets based decision making by using level soft sets of intuitionistic fuzzy soft sets and give some illustrative examples. The properties of level soft sets are presented and discussed. Moreover, we also introduce weighted intuitionistic fuzzy soft sets and investigate their application to decision making. © 2010 Elsevier Inc. (43 refs.)
Main Heading: Decision making
Controlled terms: Decision theory - Fuzzy sets
Uncontrolled terms: Fuzzy soft sets - Illustrative examples - Intuitionistic fuzzy - Intuitionistic fuzzy soft sets - Mathematical tools - Soft sets
Classification Code: 912.2 Management - 921 Mathematics
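The adjustable, level-based idea described in the abstract can be illustrated with a minimal Python sketch: threshold the (membership, non-membership) pairs of an intuitionistic fuzzy soft set into a crisp level soft set, then rank objects by their choice values. The data, the object names `h1`/`h2`, and the threshold names `s`, `t` are hypothetical; the paper's definitions are more general.

```python
def level_soft_set(ifss, s, t):
    """Crisp level soft set: keep an entry (mu, nu) iff mu >= s and nu <= t."""
    return {obj: [1 if (mu >= s and nu <= t) else 0 for (mu, nu) in row]
            for obj, row in ifss.items()}

def choice_values(crisp):
    """Choice value of an object = number of parameters it satisfies."""
    return {obj: sum(row) for obj, row in crisp.items()}

# Hypothetical data: objects h1, h2 scored against three parameters,
# each entry a (membership, non-membership) pair.
ifss = {
    "h1": [(0.7, 0.2), (0.5, 0.4), (0.9, 0.1)],
    "h2": [(0.6, 0.3), (0.8, 0.1), (0.7, 0.2)],
}
crisp = level_soft_set(ifss, s=0.6, t=0.3)
ranking = choice_values(crisp)
best = max(ranking, key=ranking.get)   # object with the highest choice value
```

The "adjustable" aspect is that different choices of (s, t) produce different level soft sets, and hence potentially different decisions.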
ShadowStory: Creative and collaborative digital storytelling inspired by cultural heritage
Lu, Fei1; Tian, Feng1; Jiang, Yingying1; Cao, Xiang2; Luo, Wencan1; Li, Guang1; Zhang, Xiaolong3; Dai, Guozhong1, 4; Wang, Hongan1, 4
Source: Conference on Human Factors in Computing Systems - Proceedings, p 1919-1928, 2011, CHI 2011 - 29th Annual CHI Conference on Human Factors in Computing Systems, Conference Proceedings and Extended Abstracts; ISBN-13: 9781450302289; DOI: 10.1145/1978942.1979221; Conference: 29th Annual CHI Conference on Human Factors in Computing Systems, CHI 2011, May 7, 2011 - May 12, 2011; Sponsor: Assoc. Comput. Mach., Spec. Interest Group; Comput.-Hum. Interact. (ACM SIGCHI);
Publisher: Association for Computing Machinery
Author affiliation: 1 Intelligence Engineering Lab., Institute of Software, Chinese Academy of Sciences, Beijing, China2 Microsoft Research Cambridge, Cambridge, United Kingdom3 Pennsylvania State University, PA, United States4 State Key Lab. of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China
Abstract: With the fast economic growth and urbanization of many developing countries come concerns that their children now have fewer opportunities to express creativity and develop collaboration skills, or to experience their local cultural heritage. We propose to address these concerns by creating technologies inspired by traditional arts, and allowing children to create and collaborate through playing with them. ShadowStory is our first attempt in this direction, a digital storytelling system inspired by traditional Chinese shadow puppetry. We present the design and implementation of ShadowStory and a 7-day field trial in a primary school. Findings illustrated that ShadowStory promoted creativity, collaboration, and intimacy with traditional culture among children, as well as interleaved children's digital and physical playing experience. Copyright 2011 ACM. (27 refs.)
Main Heading: Education
Controlled terms: Developing countries - Economics - Human computer interaction - Human engineering
Uncontrolled terms: Children - Collaboration - Creativity - Cultural heritages - Shadow puppet - Storytelling
Classification Code: 461.4 Ergonomics and Human Factors Engineering - 901.2 Education - 901.4 Impact of Technology on Society - 971 Social Sciences
An efficient identity-based blind signature scheme without bilinear pairings
He, Debiao1, 2; Chen, Jianhua1; Zhang, Rui1
Source: Computers and Electrical Engineering, v 37, n 4, p 444-450, July 2011; ISSN: 00457906; DOI: 10.1016/j.compeleceng.2011.05.009;
Publisher: Elsevier Ltd
Author affiliation: 1 School of Mathematics and Statistics, Wuhan University, Wuhan, China2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing, China
Abstract: Blind signature schemes are useful in applications where anonymity is a major concern, such as online voting systems and electronic cash systems. Since the first identity-based blind signature scheme was proposed by Zhang et al., many identity-based blind signature schemes using bilinear pairings have been proposed. However, the relative computation cost of a pairing is approximately 20 times higher than that of a scalar multiplication over an elliptic curve group. In order to save running time and reduce the size of the signature, we propose an identity-based blind signature scheme without bilinear pairings. With both the running time and the size of the signature greatly reduced, our scheme is more practical than the related schemes in application. © 2011 Published by Elsevier Ltd. All rights reserved. (24 refs.)
Main Heading: Authentication
Controlled terms: Online systems - Voting machines
Uncontrolled terms: Bilinear pairing - Blind signature scheme - Computation costs - Electronic cash systems - Elliptic curve - Identity-based - Running time - Scalar multiplication - Voting systems
Classification Code: 601.1 Mechanical Devices - 722.4 Digital Computers and Systems - 723 Computer Software, Data Handling and Applications
Semantic operations of multiple soft sets under conflict
Jiang, Yuncheng1, 2; Tang, Yong1; Chen, Qimai1; Cao, Zhanmao1
Source: Computers and Mathematics with Applications, v 62, n 4, p 1923-1939, August 2011; ISSN: 08981221; DOI: 10.1016/j.camwa.2011.06.036;
Publisher: Elsevier Ltd
Author affiliation: 1 School of Computer Science, South China Normal University, Guangzhou 510631, China2 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: Molodtsov initiated the concept of soft set theory, which can be used as a generic mathematical tool for dealing with uncertainty. Description Logics (DLs) are a family of knowledge representation languages which can be used to represent the terminological knowledge of an application domain in a structured and formally well-understood way. Nowadays, properties and semantics of ontology constructs mainly are determined by DLs. In this paper we investigate semantic operations of multiple standard soft sets by using domain ontologies (i.e., DL intensional knowledge bases). Concretely, we give some semantic operations such as complement, restricted difference, extended union, restricted intersection, restricted union, extended intersection, AND, and OR for (multiple) standard soft sets from a semantic point of view. Especially, we also present an approach to deal with conflict from a semantic point of view when we define these semantic operations. Moreover, the basic properties and implementation methods of these semantic operations under conflict are also presented and discussed. © 2011 Elsevier Ltd. All rights reserved. (48 refs.)
Main Heading: Semantics
Controlled terms: Data description - Formal languages - Knowledge representation - Ontology - Set theory
Uncontrolled terms: Application domains - Basic properties - Conflict - Description logic - Domain ontologies - Implementation methods - Knowledge basis - Knowledge representation language - Mathematical tools - Restricted intersections - Semantic Operation - Semantic operations - Soft sets
Classification Code: 723 Computer Software, Data Handling and Applications - 903 Information Science - 903.2 Information Dissemination - 921.4 Combinatorial Mathematics, Includes Graph Theory, Set Theory
A smart card based generic construction for anonymous authentication in mobile networks
Xu, Jing1; Zhu, Wen-Tao2; Feng, Deng-Guo1
Source: SECRYPT 2011 - Proceedings of the International Conference on Security and Cryptography, p 269-274, 2011, SECRYPT 2011 - Proceedings of the International Conference on Security and Cryptography; ISBN-13: 9789898425713; Conference: International Conference on Security and Cryptography, SECRYPT 2011, July 18, 2011 - July 21, 2011; Sponsor: Inst. Syst. Technol. Inf., Control Commun. (INSTICC);
Publisher: INSTICC Press
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, 100190 Beijing, China2 State Key Laboratory of Information Security, Graduate University of Chinese Academy of Sciences, 100049 Beijing, China
Abstract: The global mobility network can offer effective roaming services for a mobile wireless user between his home network and a visited network. For the sake of privacy, user anonymity has recently become an important security requirement for roaming services, and is a topic of concern in designing related protocols such as mutual authentication and key agreement. In this paper we present a generic construction, which converts any password authentication scheme based on the smart card into an anonymous authentication protocol for roaming services. Compared with the original password authentication scheme, the transformed protocol does not sacrifice authentication efficiency, and additionally, an agreed session key can be securely established between an anonymous mobile user and the foreign agent in charge of the network being visited. (8 refs.)
Main Heading: Network security
Controlled terms: Authentication - Cryptography - Global system for mobile communications - Network protocols - Personal communication systems - Smart cards - Wireless networks
Uncontrolled terms: Key agreement - Password authentication - Roaming services - User anonymity - Wireless security
Classification Code: 716 Telecommunication; Radar, Radio and Television - 717 Optical Communication - 718 Telephone Systems and Related Technologies; Line Communications - 722.4 Digital Computers and Systems - 723 Computer Software, Data Handling and Applications
A software vulnerability number prediction model based on micro-parameters
Nie, Chujiang1; Zhao, Xianfeng1; Chen, Kai1, 2; Han, Zhengqing3
Source: Jisuanji Yanjiu yu Fazhan/Computer Research and Development, v 48, n 7, p 1279-1287, July 2011; Language: Chinese; ISSN: 10001239;
Publisher: Science Press
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 State Key Laboratory of Information Security, Graduate University of Chinese Academy of Sciences, Beijing 100049, China3 Institute of Computing Technology of Northern Jiaotong University, Beijing 100029, China
Abstract: As the cost caused by software vulnerabilities keeps increasing, people pay more and more attention to research on vulnerabilities. Although discovering vulnerabilities is difficult because of the limitations of vulnerability analysis, predicting the number of vulnerabilities is very useful in some domains, such as information security assessment. At present, the main methods for estimating the density of vulnerabilities focus on the macro level, but they cannot reflect the essence of vulnerabilities. A prediction model based on micro-parameters is proposed to predict the number of vulnerabilities from the micro-parameters of software; it extracts the typical micro-parameters from several software series in order to discover the relationship between the vulnerability number and the micro-parameters. Under the hypothesis of vulnerability inheritance, the prediction model abstracts the micro-parameters from software and tries to find a linear relationship between the vulnerability number and some micro-parameters. The model also gives a method to predict the vulnerability number of software from its micro-parameters and the vulnerability numbers of its previous versions. The method is verified on 7 software series, and the results show the prediction model is effective. (14 refs.)
Main Heading: Forecasting
Controlled terms: Mathematical models - Security of data
Uncontrolled terms: Inherited vulnerability - Linear relationships - Microscopic parameters - Prediction model - Software analysis - Software vulnerabilities - Vulnerability analysis - Vulnerability predicts
Classification Code: 723.2 Data Processing and Image Processing - 921 Mathematics
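The linear relationship the model hypothesizes can be fitted with ordinary least squares. The sketch below uses entirely hypothetical data, not the paper's parameters or results, and `fit_line` is an illustrative helper, not the paper's method.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (single predictor)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx          # slope, intercept

# Hypothetical data: one micro-parameter value per released version,
# paired with that version's known vulnerability count.
xs = [10, 20, 30, 40]
ys = [12, 22, 32, 42]
a, b = fit_line(xs, ys)
prediction = a * 50 + b            # estimate for the next version
```

In the paper's setting there are several micro-parameters, so the fit would be multivariate, but the single-variable case shows the shape of the prediction step.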
An efficient group-based secret sharing scheme
Lv, Chunli1, 3; Jia, Xiaoqi2; Lin, Jingqiang1; Jing, Jiwu1; Tian, Lijun3
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6672 LNCS, p 288-301, 2011, Information Security Practice and Experience - 7th International Conference, ISPEC 2011, Proceedings; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642210303;
DOI: 10.1007/978-3-642-21031-0_22; Conference: 7th International Conference on Information Security Practice and Experience, ISPEC 2011, May 30, 2011 - June 1, 2011;
Publisher: Springer Verlag
Author affiliation: 1 State Key Laboratory of Information Security, Graduate University, Chinese Academy of Sciences, China2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, China3 College of Information and Electrical Engineering, China Agricultural University, China
Abstract: We propose a new secret sharing scheme that can be computed over an Abelian group, such as (binary strings, XOR) or (integers, addition). Therefore, only XOR or addition operations are required to implement the scheme. It is very efficient and well suited to low-cost, low-energy applications such as RFID tags. Share generation has a geometric interpretation, which makes our scheme easy to understand and analyze. © 2011 Springer-Verlag Berlin Heidelberg. (22 refs.)
Main Heading: Information dissemination
Controlled terms: Radio navigation - Security of data - Security systems
Uncontrolled terms: Abelian group - Binary string - Group-based - Low energies - RF-ID tags - Secret sharing schemes
Classification Code: 716.3 Radio Systems and Equipment - 723.2 Data Processing and Image Processing - 903.2 Information Dissemination - 914.1 Accidents and Accident Prevention
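The XOR-only flavor of such schemes is easy to sketch. The following illustrative Python shows plain n-of-n XOR sharing (every share needed, any missing share leaves the secret information-theoretically hidden); the paper's group-based threshold construction is more general than this.

```python
import os

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret, n):
    """n-of-n sharing: n-1 uniformly random shares, plus one final
    share equal to the secret XORed with all of them."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def combine(shares):
    """XOR all shares together to recover the secret."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = xor_bytes(out, s)
    return out

shares = split(b"symmetric key", 4)
```

Only XOR is used, which matches the abstract's point about fitting low-cost, low-energy devices.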
Foot menu: Using heel rotation information for menu selection
Zhong, Kang1; Tian, Feng1; Wang, Hongan1
Source: Proceedings - International Symposium on Wearable Computers, ISWC, p 115-116, 2011, Proceedings - 15th Annual International Symposium on Wearable Computers, ISWC 2011; ISSN: 15504816; ISBN-13: 9780769544380; DOI: 10.1109/ISWC.2011.10; Article number: 5959580; Conference: 15th Annual International Symposium on Wearable Computers, ISWC 2011, June 12, 2011 - June 15, 2011;
Publisher: IEEE Computer Society
Author affiliation: 1 Intelligence Engineering Lab., Institute of Software, Chinese Academy of Sciences, China
Abstract: Hand-based interactions are dominant in the HCI field today. However, in daily life people often encounter situations in which it is not convenient or optimal to interact with computers using their hands. In such cases, an alternative interaction method to the hands is required. In this paper, we present the Foot Menu, a new technique which uses heel rotation information to perform selection tasks. As a hands-free technique, the Foot Menu is applicable in hands-busy situations, and users with hand impairments can also benefit from it. © 2011 IEEE. (4 refs.)
Main Heading: Rotation
Controlled terms: Wearable computers
Uncontrolled terms: Daily lives - Hands-free - Interaction methods - Menu selection
Classification Code: 601.1 Mechanical Devices - 722.4 Digital Computers and Systems
Improved integral attacks on rijndael
Li, Yan-Jun1, 2, 3; Wu, Wen-Ling1, 2
Source: Journal of Information Science and Engineering, v 27, n 6, p 2031-2045, November 2011; ISSN: 10162364;
Publisher: Institute of Information Science
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing, 100190, China2 Graduate University of Chinese Academy of Sciences, Beijing, 100049, China3 Department of Information Security, Beijing Electronic Science and Technology Institute, Beijing, 100070, China
Abstract: In this paper, we present some improved integral attacks on Rijndael variants whose block sizes are larger than 128 bits. We first introduce some 4-round distinguishers for Rijndael with large blocks proposed by Marine Minier (AFRICACRYPT 2009), and propose a new 4th-order 4-round distinguisher for Rijndael-192. Based on these distinguishers, together with the partial sum technique proposed by Niels Ferguson (FSE 2000), we can apply integral attacks to up to 8-round Rijndael-160, 9-round Rijndael-192, and 9-round Rijndael-224. Compared to the square attack proposed by Samuel Galice (AFRICACRYPT 2008), we give different attacks on 8- and 9-round Rijndael-256. Except for the attack on Rijndael-256, all the other results are the best cryptanalytic results so far on Rijndael with large blocks. (15 refs.)
Uncontrolled terms: Block ciphers - Distinguishers - Integral attack - Partial sum technique - Rijndael
Dependency-based malware similarity comparison method
Yang, Yi1, 3; Su, Pu-Rui1; Ying, Ling-Yun1; Feng, Deng-Guo1, 2
Source: Ruan Jian Xue Bao/Journal of Software, v 22, n 10, p 2438-2453, October 2011; Language: Chinese; ISSN: 10009825; DOI: 10.3724/SP.J.1001.2011.03888;
Publisher: Chinese Academy of Sciences
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, The Chinese Academy of Sciences, Beijing 100190, China2 State Key Laboratory of Information Security, Graduate University, The Chinese Academy of Sciences, Beijing 100049, China3 National Engineering Research Center for Information Security, Beijing 100190, China
Abstract: Malware similarity comparison is one of the basic tasks in malware analysis and detection. Presently, most similarity comparison methods treat malware as CFGs or behavior sequences. Malware writers use obfuscation, packers, and other techniques to confuse traditional similarity comparison methods. This paper proposes a new approach to identifying the similarities between malware samples, which relies on control dependence and data dependence. First, dynamic taint analysis is performed to obtain control dependence relations and data dependence relations. Next, a control dependence graph and a data dependence graph are constructed. Similarity information is obtained by comparing these two types of graph. In order to take full advantage of the inherent behavior of malicious code and to increase the accuracy of comparison and the anti-jamming capability, loops are reduced and rubbish is removed by means of dependence graph pre-processing, which reduces the complexity of the similarity comparison algorithm and improves its performance. The proposed prototype system has been applied to wild malware collections. The results show that both the accuracy and the comparison capability of the method have an obvious advantage. © 2011 ISCAS. (21 refs.)
Main Heading: Computer crime
Controlled terms: Algorithms - Dynamic analysis
Uncontrolled terms: Anti-jamming capability - Behavior sequences - Comparison methods - Control-dependence graphs - Data dependence - Data dependence graphs - Dependence graphs - Dependence relation - Malicious codes - Malware analysis - Malwares - Pre-processing - Prototype system - Similarity comparison - Taint propagation
Classification Code: 422.2 Strength of Building Materials : Test Methods - 723 Computer Software, Data Handling and Applications - 921 Mathematics
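As a toy illustration of scoring similarity between two dependence graphs (not the paper's comparison algorithm), one can compare their edge sets with the Jaccard index after a pre-processing pass that drops self-loop edges; the edge labels below are invented for the example.

```python
def jaccard(edges_a, edges_b):
    """Jaccard index of two edge sets: |A & B| / |A | B|."""
    a, b = set(edges_a), set(edges_b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

# Hypothetical dependence edges (source node, sink node) for two samples.
g1 = {("read", "decode"), ("decode", "send"), ("loop", "loop")}
g2 = {("read", "decode"), ("decode", "send"), ("junk", "send")}

g1 = {e for e in g1 if e[0] != e[1]}   # pre-processing: drop self-loop edges
score = jaccard(g1, g2)                # fraction of shared dependence edges
```

The paper's method additionally distinguishes control from data dependence and removes rubbish nodes, which this sketch does not attempt.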
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
An inductive approach to provable anonymity
Li, Yongjian1; Pang, Jun2
Source: Proceedings of the 2011 6th International Conference on Availability, Reliability and Security, ARES 2011, p 454-459, 2011, Proceedings of the 2011 6th International Conference on Availability, Reliability and Security, ARES 2011; ISBN-13: 9780769544854; DOI: 10.1109/ARES.2011.70; Article number: 6046000; Conference: 2011 6th International Conference on Availability, Reliability and Security, ARES 2011, August 22, 2011 - August 26, 2011;
Publisher: IEEE Computer Society
Author affiliation: 1 State Key Laboratory of Computer Sciences, Institute of Software, Chinese Academy of Sciences, China2 Computer Science and Communications, University of Luxembourg, Luxembourg
Abstract: We formalise in a theorem prover the notion of provable anonymity proposed by Garcia et al. Our formalization relies on inductive definitions of message distinguishability and observational equivalence over traces observed by the intruder. Our theory differs from the original proposal, which essentially boils down to the existence of a reinterpretation function. We build our theory in Isabelle/HOL to obtain a mechanical framework for the analysis of anonymity protocols. Its feasibility is illustrated through the onion routing protocol. © 2011 IEEE. (9 refs.)Main Heading: Network securityUncontrolled terms: Inductive definitions - Isabelle/HOL - Observational equivalences - Theorem proversClassification Code: 723 Computer Software, Data Handling and Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
New p-ary sequence family with low correlation and large linear span
Zhou, Zhengchun1, 2; Tang, Xiaohu3; Parampalli, Udaya4; Peng, Daiyuan3
Source: Applicable Algebra in Engineering, Communications and Computing, v 22, n 4, p 301-309, November 2011; ISSN: 09381279; DOI: 10.1007/s00200-011-0151-7;
Publisher: Springer Verlag
Author affiliation: 1 School of Mathematics, Southwest Jiaotong University, Chengdu 610031, China2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing, China3 Institute of Mobile Communications, Southwest Jiaotong University, Chengdu 610031, China4 Department of Computer Science and Software Engineering, University of Melbourne, Parkville, VIC 3010, Australia
Abstract: In this paper, for an odd prime p and positive integers n, m, and e such that n = me, a new family S of p-ary sequences of period p^n - 1 with low correlation and large linear span is constructed. It is shown that S has maximum correlation 1 + p^((n+2e)/2), family size p^n, and maximal linear span (m+3)n/2. When m is even, the proposed family S contains Tang, Udaya, and Fan's construction as a subset. Furthermore, when n is even and e = 1, S has the same correlation and family size, but larger linear span, compared with the construction by Seo, Kim, No, and Shin. © 2011 Springer-Verlag. (14 refs.)Main Heading: Correlation methodsControlled terms: Number theoryUncontrolled terms: Linear span - Low correlation - Maximum correlations - Odd prime - P-ary sequence - Positive integers - Quadratic formClassification Code: 921 Mathematics - 922.2 Mathematical Statistics
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Universal composable password authenticated key exchange protocol in the standard model
Hu, Xue-Xian1, 2; Zhang, Zhen-Feng2; Liu, Wen-Fen1
Source: Ruan Jian Xue Bao/Journal of Software, v 22, n 11, p 2820-2832, November 2011; Language: Chinese; ISSN: 10009825; DOI: 10.3724/SP.J.1001.2011.03910;
Publisher: Chinese Academy of Sciences
Author affiliation: 1 Institute of Information Engineering, PLA Information Engineering University, Zhengzhou 450002, China2 State Key Laboratory of Information Security, Institute of Software, The Chinese Academy of Sciences, Beijing 100190, China
Abstract: By constructing and utilizing non-malleable, extractable, and weakly simulation-sound trapdoor commitment schemes and corresponding smooth projective hash function families, this paper proposes an efficient two-party password authenticated key exchange (PAKE) protocol within the universal composable (UC) framework, which is the optimal two-round PAKE protocol in this setting. Rigorous security proofs based on standard assumptions in the presence of static corruption adversaries are then given. Comparisons with previously proposed protocols show that this protocol avoids the use of zero-knowledge protocols and achieves higher communication efficiency while attaining comparable computational complexity. © Copyright 2011, Institute of Software, the Chinese Academy of Sciences. All rights reserved. (23 refs.)Main Heading: Knowledge based systemsControlled terms: Computational complexity - Computer simulation - Hash functions - StandardsUncontrolled terms: Communication efficiency - Key exchange protocols - Non-malleable - Password authenticated - Password authenticated key exchange protocols - Security proofs - Standard assumptions - Standard model - The standard model - Trapdoor commitments - Universal composable - Zero-knowledge protocolsClassification Code: 721.1 Computer Theory, Includes Formal Logic, Automata Theory, Switching Theory, Programming Theory - 723 Computer Software, Data Handling and Applications - 902.2 Codes and Standards - 921 Mathematics
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Detection for steganography based on Hilbert Huang transform
Wu, Suyan1; Li, Wenbo2; Shi, Yanqin1
Source: Proceedings of SPIE - The International Society for Optical Engineering, v 8205, 2011, 2011 International Conference on Photonics, 3D-Imaging, and Visualization;
ISSN: 0277786X; ISBN-13: 9780819488473; DOI: 10.1117/12.906288; Article number: 820519; Conference: 2011 International Conference on Photonics, 3D-Imaging, and Visualization, October 30, 2011 - October 31, 2011; Sponsor: South China Normal University; International Computer Science Society; National Natural Science Foundation of China;
Publisher: SPIE
Author affiliation: 1 Beijing Municipal Institute of Science and Technology Information, Beijing, China2 Institute of Software, Chinese Academy of Sciences, Beijing, China
Abstract: This paper introduces the Hilbert-Huang Transform for the universal blind steganalysis detection task. First, the method uses the transform's efficient unsteady-state modeling ability to construct features of the steganography object, and adopts independent variable analysis to extract and refine the features. In the classification stage, a compound ANOVN kernel function for the steganalysis field is constructed to improve nonlinear data processing capability. © 2011 Copyright Society of Photo-Optical Instrumentation Engineers (SPIE). (10 refs.)Main Heading: Mathematical transformationsControlled terms: Data handling - Photonics - Steganography - Three dimensional - VisualizationUncontrolled terms: blind detection for steganography method - compound ANOVN kernel - Hilbert Huang transforms - Independent variables - styleClassification Code: 712 Electronic and Thermionic Materials - 717 Optical Communication - 723.2 Data Processing and Image Processing - 744 Lasers - 902.1 Engineering Graphics - 921.3 Mathematical Transformations
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
MRG-OHTC database for online handwritten Tibetan character recognition
Ma, Long-Long1; Liu, Hui-Dan1; Wu, Jian1
Source: Proceedings of the International Conference on Document Analysis and Recognition, ICDAR, p 207-211, 2011, Proceedings - 11th International Conference on Document Analysis and Recognition, ICDAR 2011
; ISSN: 15205363; ISBN-13: 9780769545202; DOI: 10.1109/ICDAR.2011.50; Article number: 6065305; Conference: 11th International Conference on Document Analysis and Recognition, ICDAR 2011, September 18, 2011 - September 21, 2011; Sponsor: TC10 (Graph. Recogn.) TC11 (Read. Syst.) (IAPR); Chinese Academy of Sciences; NSFC; FUJITSU; Hanvon Technology;
Publisher: IEEE Computer Society
Author affiliation: 1 National Engineering Research Center of Fundamental Software, Institute of Software, Chinese Academy of Sciences, Beijing, China
Abstract: A handwritten Tibetan database, MRG-OHTC, is presented to facilitate research on online handwritten Tibetan character recognition. The database contains 910 Tibetan character classes written by 130 persons from the Tibetan ethnic minority. These characters are selected from the basic set and extension set A of the Tibetan coded character set. The current version of the database was collected using an electronic pen on a digital tablet. We investigate some characteristics of writing style from different writers and evaluate the MRG-OHTC database using existing algorithms as a baseline. Experimental results reveal that achieving higher recognition performance remains a big challenge. To our knowledge, MRG-OHTC is the first publicly available database for handwritten Tibetan research. It provides a basic database for empirically comparing different algorithms for handwritten Tibetan character recognition. © 2011 IEEE. (22 refs.)Main Heading: Character recognitionControlled terms: Algorithms - Character sets - Database systemsUncontrolled terms: Electronic pen - Ethnic minorities - evaluation - Extension sets - MRG-OHTC - Recognition performance - Tibetan character recognition - Tibetans - Writing styleClassification Code: 722.2 Computer Peripheral Equipment - 723 Computer Software, Data Handling and Applications - 723.3 Database Systems
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
On context-Aware distributed event dissemination
Lin, Chen1; Jin, Beihong1; Long, Zhenyue1; Chen, Haibiao1
Source: Personal and Ubiquitous Computing, v 15, n 3, p 305-314, March 2011;
ISSN: 16174909; DOI: 10.1007/s00779-010-0330-8;
Publisher: Springer London
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, Beijing, China
Abstract: In pervasive environments, the Pub/Sub paradigm is regarded as an important means of information sharing and event dissemination. In this paper, we first analyze the different kinds of context in Pub/Sub systems that have a remarkable impact on users' satisfaction with event dissemination, and then give corresponding strategies that exploit time context and event-preference context so as to provide personalized event dissemination. That is, by leveraging time context, we provide extended matching against long-standing events, and by leveraging event-preference context, we present a recommendation algorithm based on a hidden Markov process. Performance analysis and experimental evaluation show that both strategies can improve users' experience of event dissemination. © Springer-Verlag London Limited 2010. (12 refs.)Main Heading: Markov processesUncontrolled terms: Context-Aware - Event dissemination - Event-preference context - Hidden Markov process - Information sharing - Performance analysis - Pervasive environments - Pub/sub - Recommendation algorithms - Remarkable impact - Time context - User's satisfactionClassification Code: 922.1 Probability Theory
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
The complexity and approximability of minimum contamination problems
Li, Angsheng1; Tang, Linqing1, 2
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6648 LNCS, p 298-307, 2011, Theory and Applications of Models of Computation - 8th Annual Conference, TAMC 2011, Proceedings; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642208768;
DOI: 10.1007/978-3-642-20877-5_30; Conference: 8th Annual Conference on Theory and Applications of Models of Computation, TAMC 2011, May 23, 2011 - May 25, 2011; Sponsor: University of Electro-Communications;
Publisher: Springer Verlag
Author affiliation: 1 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, P.O. Box 8718, Beijing, 100190, China2 Graduate University, Chinese Academy of Sciences, Beijing, China
Abstract: In this article, we investigate the complexity and approximability of the Minimum Contamination Problems, which are derived from epidemic spreading and have been extensively studied recently. We show that both the Minimum Average Contamination Problem and the Minimum Worst Contamination Problem are NP-hard even in restricted cases. For any ε > 0, we give a (1 + ε, O(((1 + ε)/ε) log n))-bicriteria approximation algorithm for the Minimum Average Contamination Problem. Moreover, we show that the Minimum Average Contamination Problem is NP-hard to approximate within 5/3 - ε and the Minimum Worst Contamination Problem is NP-hard to approximate within 2 - ε, for any ε > 0, giving the first constant-ratio hardness-of-approximation results for these problems. © 2011 Springer-Verlag. (11 refs.)Main Heading: ContaminationControlled terms: Approximation algorithms - Computational complexity - Water cooling systemsUncontrolled terms: Approximability - Bicriteria approximation - Constant ratio - Contamination problem - Epidemic spreading - Hardness result - NP-hard - NP-HARD problemClassification Code: 714.2 Semiconductor Devices and Integrated Circuits - 721.1 Computer Theory, Includes Formal Logic, Automata Theory, Switching Theory, Programming Theory - 802.1 Chemical Plants and Equipment - 921 Mathematics
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Determinacy and rewriting of conjunctive queries over unary database schemas
Zheng, Lixiao1, 2; Chen, Haiming1
Source: Proceedings of the ACM Symposium on Applied Computing, p 1039-1044, 2011, 26th Annual ACM Symposium on Applied Computing, SAC 2011; ISBN-13: 9781450301138; DOI: 10.1145/1982185.1982413; Conference: 26th Annual ACM Symposium on Applied Computing, SAC 2011, March 21, 2011 - March 24, 2011; Sponsor: ACM Special Interest Group on Applied Computing (SIGAPP); Tunghai University; Taiwan Ministry of Education; Taiwan Bureau of Foreign Trade; Taiwan National Science Council (NSC);
Publisher: Association for Computing Machinery
Author affiliation: 1 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, China2 Graduate University, Chinese Academy of Sciences, China
Abstract: The problem of answering queries using views arises in a wide variety of data management applications. From the information-theoretic perspective, a notion of determinacy has been recently introduced to formalize the intuitive notion of whether a set of views V is sufficient to answer a query Q. We say that V determines Q iff for any two databases D1, D2, V(D1) = V(D2) implies Q(D1) = Q(D2). Determinacy has been investigated for many query and view languages, including first order logic (FO) and unions of conjunctive queries (UCQ), and a considerable number of cases have been resolved. However, the problem remains open for queries and views defined by conjunctive queries (CQ) and appears to be quite challenging. In this paper we study the problem of determinacy for conjunctive queries and views over unary database schemas, where each relation has only one attribute. We show that determinacy is decidable in ptime in this case. We provide syntactic characterizations for a CQ query Q to be determined by a set of CQ views V and give an algorithm for checking determinacy which runs in time O(|Q|*|V|), where |Q| and |V| are the sizes of Q and V respectively. Furthermore, we show that whenever V determines Q there exists a CQ query which is an equivalent rewriting of Q using V. © 2011 ACM. (18 refs.)Main Heading: Query languagesControlled terms: Computability and decidability - Information management - Information theory - Query processingUncontrolled terms: Answering queries - Conjunctive queries - Database schemas - First-order logic - Query rewritings - Syntactic characterization - view determinacyClassification Code: 716.1 Information Theory and Signal Processing - 721.1 Computer Theory, Includes Formal Logic, Automata Theory, Switching Theory, Programming Theory - 723.3 Database Systems - 903.2 Information Dissemination
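The determinacy criterion stated in the abstract (V determines Q iff V(D1) = V(D2) implies Q(D1) = Q(D2)) can be checked by brute force on small unary instances. A minimal sketch, assuming a single unary relation and views/queries given as Python functions; this is not the paper's PTIME algorithm, just an executable restatement of the definition:

```python
from itertools import combinations

def subsets(domain):
    # a unary relation instance is just a subset of the domain
    return [frozenset(c) for r in range(len(domain) + 1)
            for c in combinations(domain, r)]

def determines(views, query, domain):
    """Brute-force determinacy check over all databases with one unary
    relation: V determines Q iff V(D1) == V(D2) implies Q(D1) == Q(D2)."""
    seen = {}  # view image -> query answer
    for db in subsets(domain):
        v_img = tuple(v(db) for v in views)
        q_ans = query(db)
        if v_img in seen and seen[v_img] != q_ans:
            return False  # same view image, different query answers
        seen[v_img] = q_ans
    return True

domain = [1, 2, 3]
identity = lambda db: db          # "return the relation itself"
parity = lambda db: len(db) % 2   # a lossy view: only the relation's size parity

assert determines([identity], identity, domain)    # trivially determined
assert not determines([parity], identity, domain)  # parity loses information
```

The second assertion fails determinacy because {1} and {2} have the same parity but different query answers.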
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Face sketch synthesis via multivariate output regression
Chang, Liang1; Zhou, Mingquan1; Deng, Xiaoming2; Wu, Zhongke1; Han, Yanjun3
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6761 LNCS, n PART 1, p 555-561, 2011, Human-Computer Interaction: Design and Development Approaches - 14th International Conference, HCI International 2011, Proceedings
; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642216015; DOI: 10.1007/978-3-642-21602-2_60; Conference: 14th International Conference on Human-Computer Interaction, HCI International 2011, July 9, 2011 - July 14, 2011;
Publisher: Springer Verlag
Author affiliation: 1 College of Information Science and Technology, Beijing Normal University, Beijing 100875, China2 Institute of Software, Chinese Academy of Sciences, Beijing 100190, China3 Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
Abstract: This paper presents a multivariate output regression based method to synthesize face sketches from photos. The training photos and sketches are divided into small image patches. For each pair of photo patch and its corresponding sketch patch in the training data, a local regression model is built by multivariate output regression methods such as kernel ridge regression and the relevance vector machine (RVM). Compared with commonly used single-output regression, multivariate output regression can enforce structure constraints on the synthesized sketch patches. Experiments are given to show the validity and effectiveness of the approach. © 2011 Springer-Verlag. (9 refs.)Main Heading: Regression analysisControlled terms: Human computer interaction - Knowledge managementUncontrolled terms: Face sketch synthesis - Image patches - Local regression models - Multivariate regression - Regression method - Relevance Vector Machine - Ridge regression - Structure constraints - Training dataClassification Code: 461.4 Ergonomics and Human Factors Engineering - 723.5 Computer Applications - 922.2 Mathematical Statistics
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
SaclTCP: a cross-layer design-based transport protocol for satellite network
Chen, Jing1, 2; Liu, Li-Xiang1; Hu, Xiao-Hui1
Source: Yuhang Xuebao/Journal of Astronautics, v 32, n 3, p 627-633, March 2011; Language: Chinese; ISSN: 10001328; DOI: 10.3873/j.issn.1000-1328.2011.03.026;
Publisher: China Spaceflight Society
Author affiliation: 1 Institute of Software, The Chinese Academy of Sciences, Beijing 100190, China2 Information Center of Ministry of Science and Technology, Beijing 100862, China
Abstract: Satellite networks have special characteristics, such as large propagation delay, high bit-error rate, and asymmetric channels, which make TCP/IP protocols incapable of providing satisfactory service. Cross-layer design can reduce redundancy across layers and capture network status information at any moment. The proposed protocol sets the congestion window threshold more effectively by obtaining available bandwidth information from the link layer. In the link layer, a router buffer queue management mechanism computes the network congestion probability and feeds it back to the sender. The protocol can also differentiate packet losses caused by congestion from those caused by link errors, avoiding unnecessary reduction of the send window, and uses this information to regulate the window size dynamically. Experiments show that the protocol greatly improves the transport performance of satellite networks. (15 refs.)Main Heading: SatellitesControlled terms: Bandwidth - Bit error rate - DesignUncontrolled terms: Asymmetric channel - Available bandwidth - Congestion window - Cross-layer design - Link errors - Link layers - Network congestions - Network status - Propagation delays - Router buffer - Satellite network - TCP/IP protocol - Transport performance - Transport protocolsClassification Code: 408 Structural Design - 655.2 Satellites - 716.1 Information Theory and Signal Processing - 723.1 Computer Programming
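The loss-differentiation idea can be sketched as follows; the function name and the 0.5 threshold are illustrative assumptions, not taken from the paper. With explicit congestion-probability feedback from the link layer, the sender backs off only for losses attributed to congestion, while losses attributed to link errors (common on high-BER satellite links) leave the window untouched:

```python
def on_packet_loss(cwnd, congestion_prob, threshold=0.5):
    """Return the new congestion window after a loss event, given the
    link layer's fed-back estimate of the network congestion probability."""
    if congestion_prob >= threshold:
        return max(1, cwnd // 2)   # congestion loss: halve the window
    return cwnd                    # link-error loss: keep the sending rate

assert on_packet_loss(32, 0.8) == 16   # congestion: back off
assert on_packet_loss(32, 0.1) == 32   # link error: no unnecessary reduction
```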
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Implementation mechanism of adaptive user interface for furniture layout customizing system
Fan, Yinting1, 2; Teng, Dongxing1; Wang, Gongzheng1, 2; Yang, Haiyan1, 2; Wang, Hongan1
Source: Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics, v 23, n 4, p 705-712, April 2011; Language: Chinese; ISSN: 10039775;
Publisher: Institute of Computing Technology
Author affiliation: 1 Intelligence Engineering Laboratory, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 Graduate University of Chinese Academy of Sciences, Beijing 100049, China
Abstract: To fulfill personalized requirements in interaction, an adaptive user interface based on interaction history is proposed, with a virtual furniture layout customizing system adopted as the example. First, the typical interaction tasks in personalized customization are classified. Then, an online tracking mechanism based on the interaction history is discussed. Finally, an operation predicting method based on historical interaction sequence analysis is proposed for the self-adaptive layout of visual objects. The implementation results show that the proposed method is feasible. (20 refs.)Main Heading: User interfacesControlled terms: Virtual realityUncontrolled terms: Adaptive user interface - Implementation mechanisms - Interaction history - Interactive history analysis - On-line tracking - Predicting method - Self-adaptive - Sequence analysis - Visual objectsClassification Code: 722.2 Computer Peripheral Equipment - 723 Computer Software, Data Handling and Applications
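The abstract does not specify the predicting method's internals; as an illustrative sketch only, a first-order frequency model over the interaction history predicts the next operation as the one that most often followed the current one:

```python
from collections import Counter, defaultdict

class OperationPredictor:
    """Predict the next UI operation from observed interaction sequences
    (a hypothetical first-order model, not the paper's actual method)."""

    def __init__(self):
        self.follows = defaultdict(Counter)  # op -> counts of successor ops

    def observe(self, history):
        # count each adjacent (previous, next) operation pair
        for prev, nxt in zip(history, history[1:]):
            self.follows[prev][nxt] += 1

    def predict(self, current):
        counts = self.follows.get(current)
        return counts.most_common(1)[0][0] if counts else None

p = OperationPredictor()
p.observe(['select', 'move', 'rotate', 'select', 'move', 'scale',
           'select', 'move', 'rotate'])
assert p.predict('select') == 'move'   # 'move' always follows 'select'
assert p.predict('move') == 'rotate'   # 'rotate' follows 'move' 2x, 'scale' 1x
```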
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Simple power analysis attacks using chosen message against ECC hardware implementations
Li, Huiyun1; Wu, Keke1; Xu, Guoqing1; Yuan, Hai1; Luo, Peng2
Source: World Congress on Internet Security, WorldCIS-2011, p 68-72, 2011, World Congress on Internet Security, WorldCIS-2011; ISBN-13: 9780956426376; Article number: 5749885; Conference: World Congress on Internet Security, WorldCIS-2011, February 21, 2011 - February 23, 2011;
Publisher: IEEE Computer Society
Author affiliation: 1 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Chinese University of Hong Kong, Shenzhen 518055, China2 State Key Laboratory of Information Security, Institute of Software, Graduate University of Chinese Academy of Sciences, Beijing 100049, China
Abstract: Chosen-message simple power analysis (SPA) attacks are powerful against public-key cryptosystems based on modular exponentiation, due to the special results of modular squaring and modular multiplication for the input pair X and -X. However, these characteristics cannot be applied to public-key cryptosystems based on scalar multiplication. This paper proposes novel chosen-message side-channel attacks on public-key cryptosystems based on scalar multiplication, where a special input point P is chosen close to the x-axis to generate noticeable variations between point doubling and point addition. The proposed attack applies to all standard implementations of the binary algorithms, both left-to-right and right-to-left. This chosen-message method can also circumvent typical countermeasures such as the double-and-add-always algorithm. © 2011 WorldCIS. (9 refs.)Main Heading: Security of dataControlled terms: Algorithms - Hardware - Internet - Public key cryptography - Telecommunication networksUncontrolled terms: Binary algorithms - Hardware implementations - Modular Exponentiation - Modular Multiplication - Point additions - Point doublings - Public-key cryptosystems - Scalar multiplication - Side-channel analysis - Simple power analysisClassification Code: 921 Mathematics - 723.2 Data Processing and Image Processing - 723 Computer Software, Data Handling and Applications - 718 Telephone Systems and Related Technologies; Line Communications - 717 Optical Communication - 716 Telecommunication; Radar, Radio and Television - 605 Small Tools and Hardware
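The leakage exploited here comes from point doubling and point addition having distinguishable power signatures in the binary (double-and-add) algorithm. A toy sketch, assuming an illustrative small curve y^2 = x^3 + 2x + 3 over F_97 (not any standard ECC parameter set or the paper's target hardware), showing how an unprotected left-to-right double-and-add leaks the scalar bits through its operation sequence:

```python
# 'D' = doubling, 'A' = addition: the SPA-visible operation trace.
P_MOD, A, B = 97, 2, 3

def inv(x):
    return pow(x, P_MOD - 2, P_MOD)  # Fermat inverse modulo a prime

def ec_add(p, q):
    """Affine point addition; None represents the point at infinity."""
    if p is None: return q
    if q is None: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                   # P + (-P) = infinity
    if p == q:
        lam = (3 * x1 * x1 + A) * inv(2 * y1) % P_MOD  # tangent slope
    else:
        lam = (y2 - y1) * inv(x2 - x1) % P_MOD         # chord slope
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, p):
    """Left-to-right double-and-add; also returns the operation trace."""
    acc, trace = None, []
    for bit in bin(k)[2:]:
        acc = ec_add(acc, acc); trace.append('D')  # double (infinity at first)
        if bit == '1':
            acc = ec_add(acc, p); trace.append('A')
    return acc, ''.join(trace)

def recover_bits(trace):
    """SPA attacker's view: 'DA' -> bit 1, a lone 'D' -> bit 0."""
    bits, i = [], 0
    while i < len(trace):
        if i + 1 < len(trace) and trace[i + 1] == 'A':
            bits.append('1'); i += 2
        else:
            bits.append('0'); i += 1
    return ''.join(bits)

# any point on the toy curve serves as a base point
G = next((x, y) for x in range(P_MOD) for y in range(1, P_MOD)
         if (y * y - (x ** 3 + A * x + B)) % P_MOD == 0)
_, trace = scalar_mult(11, G)
assert recover_bits(trace) == bin(11)[2:]  # the trace leaks the secret scalar
```

The double-and-add-always countermeasure defeats this naive trace reading by performing a (possibly dummy) addition for every bit, which is why the paper's chosen-point refinement is needed to attack protected implementations.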
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Line drawings abstraction from 3D models
Zhao, Shujie1; Wu, Enhua1, 2
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6530, p 104-111, 2011, Transactions on Edutainment V
; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642184512; DOI: 10.1007/978-3-642-18452-9_8;
Publisher: Springer Verlag
Author affiliation: 1 Department of Computer and Information Science, Faculty of Science and Technology, University of Macau, China2 State Key Lab of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China
Abstract: Non-photorealistic rendering, also called stylistic rendering, emphasizes expressing special features and omitting extraneous information to generate, through digital processing, a new scene different from the original one. Stylistic rendering is important in various applications, in particular in entertainment such as cartoon production and digital media for mobile devices. Line drawing is one of the rendering techniques in non-photorealistic rendering. Using feature lines to convey salient and important aspects of a scene while rendering can provide clearer ideas for model representation. In this regard, we propose a method to extract feature lines directly from three-dimensional models. In this method, linear feature lines are extracted by finding the intersections of two implicit functions, which works without lighting, and are rendered with visibility in a comprehensive way. Starting from an introduction to the purpose of line drawings, the development of the method is described. An algorithm for line extraction using implicit functions is presented in the main part of the paper, followed by test results and a performance analysis. Finally, a conclusion is drawn and the future development of line drawings is discussed. © 2011 Springer-Verlag Berlin Heidelberg. (18 refs.)Main Heading: Three dimensionalControlled terms: Digital storageUncontrolled terms: Feature lines - Implicit function - Isosurface - Line drawing - Non-Photorealistic Rendering - Stylistic renderingClassification Code: 722.1 Data Storage, Equipment and Techniques - 902.1 Engineering Graphics
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Network traffic monitoring, analysis and anomaly detection [Guest Editorial]
Wang, Wei1; Zhang, Xiangliang2; Shi, Wenchang3; Lian, Shiguo4; Feng, Dengguo5
Source: IEEE Network, v 25, n 3, p 4-7, May-June 2011, Network Traffic Monitoring, Analysis and Anomaly Detection
; ISSN: 08908044; DOI: 10.1109/MNET.2011.5772054; Article number: 5772054;
Publisher: Institute of Electrical and Electronics Engineers Inc.
Author affiliation: 1 Xi'an Shiyou University, Xi'an, China2 Division of Mathematical and Computer Sciences and Engineering, King Abdullah University of Science and Technology(KAUST), Saudi Arabia3 Graduate University, Chinese Academy of Sciences, Beijing, China4 Nanjing University of Science and Technology, China5 Institute of Software, Chinese Academy of Sciences, China
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
TH3D: A two-handed interaction framework for desktop virtual environment
Fu, Yonggang1; Dai, Guozhong2
Source: Journal of Computers, v 6, n 7, p 1416-1423, 2011; ISSN: 1796203X;
DOI: 10.4304/jcp.6.7.1416-1423;
Publisher: Academy Publisher
Author affiliation: 1 Digital Media Laboratory, College of Information Sciences, Beijing Language and Culture University, Beijing, China2 Intelligence Engineering Lab, Institute of Software, Chinese Academy of Sciences, Beijing, China
Abstract: This paper describes a two-handed interaction framework for desktop virtual environments, called TH3D. The design goal of this framework is to support device-independent, task-centered, and fast development of two-handed applications on desktop computers. Existing research on two-handed interaction focuses mainly on specific interaction techniques and fails to achieve this goal. Our framework is characterized by its multi-layer design applied to two-handed interaction, including the normalization of raw input data, interaction primitive and task construction, and an efficient, built-in, two-handed interaction technique that integrates the benefits of both egocentric and exocentric interactions. With this framework, application developers can pay more attention to the analysis of interaction tasks and the design of an interaction technique without having to consider the variety of devices and the mapping between interaction task and technique. The current implementation is tested with an ordinary Spacemouse and a 2D mouse, but the framework can easily be ported to other device combinations. © 2011 ACADEMY PUBLISHER. (21 refs.)Main Heading: Virtual realityControlled terms: Computer applications - Design - Mammals - Personal computersUncontrolled terms: Application developers - Design goal - Desktop virtual environment - Framework - Input datas - Interaction techniques - Multilayer designs - Specific interaction - Two-handed interaction - TwohandedClassification Code: 408 Structural Design - 722.4 Digital Computers and Systems - 723 Computer Software, Data Handling and Applications - 723.5 Computer Applications - 821 Agricultural Equipment and Methods; Vegetation and Pest Control
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Scale adaptation of mean shift based on graph cuts theory
Zhao, Ling1; An, Guocheng1; Zhang, Fengjun1; Wang, Hongan1; Dai, Guozhong1
Source: Proceedings - 12th International Conference on Computer-Aided Design and Computer Graphics, CAD/Graphics 2011, p 202-205, 2011, Proceedings - 12th International Conference on Computer-Aided Design and Computer Graphics, CAD/Graphics 2011; ISBN-13: 9780769544977; DOI: 10.1109/CAD/Graphics.2011.36; Article number: 6062788; Conference: 12th International Conference on Computer-Aided Design and Computer Graphics, CAD/Graphics 2011, September 15, 2011 - September 17, 2011; Sponsor: China Computer Federation;
Publisher: IEEE Computer Society
Author affiliation: 1 Intelligence Engineering Lab., Institute of Software, Chinese Academy of Sciences, Beijing, 100190, China
Abstract: The classical Mean Shift cannot change the scale of the tracking window in real time while the tracked target changes in size. This paper applies graph cuts theory to the problem of scale adaptation in Mean Shift tracking. Based on the result of the Mean Shift iteration in every frame, graph cuts are performed using a skin-color Gaussian mixture model (GMM) in a small area around it, and the tracking window size is updated according to the largest skin lump in the graph cuts result. Experimental results clearly demonstrate that the method can reflect the real scale change of the tracked target, avoid interference from other objects in the background, and has good usability and robustness. Besides, it enriches the manipulation methods of human-computer interaction for controlling entertainment games. © 2011 IEEE. (7 refs.)Main Heading: Target trackingControlled terms: Color computer graphics - Computer aided design - Graph theory - Graphic methods - Human computer interaction - Image segmentationUncontrolled terms: Gaussian Mixture Model - Graph cut - Manipulation methods - Mean shift - Mean shift tracking - Real time - Skin color - Small area - Window SizeClassification Code: 461.4 Ergonomics and Human Factors Engineering - 716.2 Radar Systems and Equipment - 723.2 Data Processing and Image Processing - 723.5 Computer Applications - 921.4 Combinatorial Mathematics, Includes Graph Theory, Set Theory
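The Mean Shift iteration the paper builds on moves the tracking window centre to the weighted centroid of per-pixel likelihoods inside the window, converging to a local mode. A minimal sketch with assumed toy data (the paper's skin-colour GMM back-projection and graph-cuts scale update are not reproduced here):

```python
def mean_shift(weights, centre, half, iters=20):
    """Shift the window centre to the weighted centroid of the per-pixel
    likelihoods inside a (2*half+1)-sized square window, until convergence."""
    rows, cols = len(weights), len(weights[0])
    for _ in range(iters):
        r0, c0 = centre
        wsum = rsum = csum = 0.0
        for r in range(max(0, r0 - half), min(rows, r0 + half + 1)):
            for c in range(max(0, c0 - half), min(cols, c0 + half + 1)):
                w = weights[r][c]
                wsum += w; rsum += w * r; csum += w * c
        if wsum == 0:
            break  # no target support inside the window
        new = (round(rsum / wsum), round(csum / wsum))
        if new == centre:
            break  # converged to the mode
        centre = new
    return centre

# a single bright blob at (6, 7) in a 10x10 likelihood map
w = [[0.0] * 10 for _ in range(10)]
w[6][7] = 1.0
assert mean_shift(w, (4, 5), half=3) == (6, 7)  # window drifts onto the blob
```

Note the fixed `half` parameter: this is exactly the scale the classical iteration cannot adapt, which the paper addresses with graph cuts.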
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Algorithm for generating short sentences from grammars based on branch coverage criterion
Zheng, Li-Xiao1, 2; Xu, Zhi-Wu1, 2; Chen, Hai-Ming1
Source: Ruan Jian Xue Bao/Journal of Software, v 22, n 11, p 2564-2576, November 2011; Language: Chinese; ISSN: 10009825; DOI: 10.3724/SP.J.1001.2011.03964;
Publisher: Chinese Academy of Sciences
Author affiliation: 1 State Key Laboratory of Computer Science, Institute of Software, The Chinese Academy of Sciences, Beijing 100190, China2 Graduate University, The Chinese Academy of Sciences, Beijing 100049, China
Abstract: This paper presents a sentence generation algorithm, which takes as input a context-free grammar and produces a set of sentences that achieves branch coverage for the grammar. The algorithm incorporates length control, redundancy elimination, and sentence-set size control strategies into the sentence generation process, so that the generated sentences are short and simple and the sentence set is small with no redundancy. The paper also investigates the application of this algorithm to test data generation for grammar-based systems. Experimental results show that the generated test data not only has high fault detection ability, but can also help testers improve the testing speed. © Copyright 2011, Institute of Software, the Chinese Academy of Sciences. All rights reserved. (29 refs.)
Main Heading: Algorithms
Controlled terms: Ability testing - Context free grammars - Fault detection - Redundancy
Uncontrolled terms: Branch coverage - Branch coverage criteria - Detection ability - Generation algorithm - Generation process - Length control - Redundancy elimination - Sentence generation - Size control - Test data - Test data generation
Classification Code: 921 Mathematics - 914 Safety Engineering - 912.4 Personnel - 903 Information Science - 723.1.1 Computer Programming Languages - 723 Computer Software, Data Handling and Applications - 422 Strength of Building Materials; Test Equipment and Methods
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
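The branch-coverage idea in the abstract above can be sketched with a generic Purdom-style greedy generator (this is an illustrative assumption, not the authors' algorithm, whose length, redundancy, and set-size controls are more refined): at each expansion prefer an as-yet-uncovered production, otherwise take the shortest-yield alternative.

```python
# Sketch of grammar branch coverage (hypothetical, Purdom-style greedy):
# derive sentences until every production of the grammar has been used once.
def min_yield(grammar):
    """Fixpoint: length of the shortest terminal string each nonterminal derives."""
    best = {nt: float('inf') for nt in grammar}
    changed = True
    while changed:
        changed = False
        for nt, alts in grammar.items():
            for alt in alts:
                n = sum(best[s] if s in grammar else 1 for s in alt)
                if n < best[nt]:
                    best[nt], changed = n, True
    return best

def generate_covering(start, grammar):
    best = min_yield(grammar)
    cost = lambda alt: sum(best[s] if s in grammar else 1 for s in alt)
    uncovered = {(nt, i) for nt in grammar for i in range(len(grammar[nt]))}
    sentences = []
    while uncovered:
        hit = [False]
        def expand(nt):
            alts = grammar[nt]
            todo = [i for i in range(len(alts)) if (nt, i) in uncovered]
            pool = todo or range(len(alts))
            i = min(pool, key=lambda k: cost(alts[k]))  # length control
            if (nt, i) in uncovered:
                uncovered.discard((nt, i))
                hit[0] = True
            return [t for s in alts[i]
                      for t in (expand(s) if s in grammar else [s])]
        sent = ' '.join(expand(start))
        if not hit[0]:
            break  # remaining productions unreachable from the start symbol
        sentences.append(sent)
    return sentences
```

On a toy expression grammar, a short sentence is produced for each production, covering all six branches in two sentences.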
A location index for range query in real-time locating system
Guo, Chao1; Li, Kun1; Wang, Yongyan1; Liu, Shenghang1; Wang, Hongan1
Source: Jisuanji Yanjiu yu Fazhan/Computer Research and Development, v 48, n 10, p 1908-1917, October 2011; Language: Chinese; ISSN: 10001239;
Publisher: Science Press
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: The range query of moving objects' locations is very important in many mobile applications, especially in analysis, decision making, prediction, etc. A real-time locating system (RTLS) is a mobile system using RFID technology, and features skewed object density. Existing indices always suffer storage waste or performance decline in a real-time locating system because of the skewed object density. In this paper, a novel index mechanism called RPI (region partition index) is proposed to answer range queries in RTLS. It first divides the region of the RTLS into sub-regions according to the object density, and then indexes the division regions with an R-tree. The object locations in these division regions are indexed by grid. Furthermore, this index is optimized to be cache conscious. In the optimized index structure, the object locations in a grid cell are stored in a list of arrays, and the size of each array is determined by the size of the CPU cache line. Experimental results show that the new index has better search performance than R-tree and grid, and still keeps quite prominent update performance when object density is skewed. The optimized index also brings a strong performance improvement because it sharply reduces the cache miss rate in range queries. (13 refs.)
Main Heading: Query processing
Controlled terms: Cache memory - Decision trees - Forestry - Optimization - Radio frequency identification (RFID)
Uncontrolled terms: Cache miss rates - Cache-conscious - Grid - Grid cells - Index structure - Mobile applications - Mobile systems - Moving objects - New indices - Object location - Performance improvements - R-tree - Range query - Region partition - RFID Technology - RTLS - Search performance - Sub-regions
Classification Code: 961 Systems Science - 922 Statistical Methods - 921.5 Optimization Techniques - 821.0 Woodlands and Forestry - 731.1 Control Systems - 723.3 Database Systems - 722.1 Data Storage, Equipment and Techniques
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
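The grid layer described above can be sketched as follows (an illustrative toy only: the density-based region partitioning, the R-tree over sub-regions, and the cache-line-sized arrays of the actual RPI are omitted): object locations hash into uniform cells, and a range query scans just the overlapping cells.

```python
from collections import defaultdict

class GridIndex:
    # Toy uniform-grid point index (hypothetical sketch of RPI's grid layer).
    def __init__(self, cell=10.0):
        self.cell = cell
        self.cells = defaultdict(list)  # (cx, cy) -> [(oid, x, y), ...]

    def _key(self, x, y):
        return (int(x // self.cell), int(y // self.cell))

    def insert(self, oid, x, y):
        self.cells[self._key(x, y)].append((oid, x, y))

    def range_query(self, x1, y1, x2, y2):
        # Visit only the grid cells overlapping the query rectangle,
        # then filter each candidate against the exact bounds.
        k1, k2 = self._key(x1, y1), self._key(x2, y2)
        hits = []
        for cx in range(k1[0], k2[0] + 1):
            for cy in range(k1[1], k2[1] + 1):
                for oid, x, y in self.cells[(cx, cy)]:
                    if x1 <= x <= x2 and y1 <= y <= y2:
                        hits.append(oid)
        return hits
```

Storing each cell's entries contiguously is what the paper's cache-conscious optimization targets; here a plain list stands in for the cache-line-sized arrays.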
Efficient and multi-level privacy-preserving communication protocol for VANET
Xiong, Hu1; Chen, Zhong1; Li, Fagen2, 3
Source: Computers and Electrical Engineering, 2011; ISSN: 00457906; DOI: 10.1016/j.compeleceng.2011.11.009; Article in Press
Author affiliation: 1 Key Laboratory of Network and Software Security Assurance of the Ministry of Education, Institute of Software, School of Electronics Engineering and Computer Science, Peking University, Beijing 100871, China2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China3 School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
Abstract: In this paper, we introduce an efficient and multi-level conditional privacy preservation authentication protocol in vehicular ad hoc networks (VANETs) based on ring signature. The proposed protocol has three appealing characteristics: First, it offers conditional privacy preservation authentication: while every receiver can verify that a message issuer is an authorized participant in the system, only a trusted authority can reveal the true identity of a message sender. Second, it is equipped with a multi-level countermeasure: each vehicle can select the degree of privacy according to its own requirements. Third, it is efficient: our system outperforms previous proposals in message authentication and verification, cost-effective identity tracking in case of a dispute, and low storage requirements. We demonstrate the merits gained by the proposed protocol through extensive analysis. © 2011 Elsevier Ltd. All rights reserved.
Main Heading: Mobile ad hoc networks
Controlled terms: Ad hoc networks - Authentication
Uncontrolled terms: Authentication protocols - Low-storage - Message authentication - Multi-level - Privacy preservation - Privacy preserving - Ring signatures - True identity - Trusted authorities - Vehicular ad hoc networks
Classification Code: 723 Computer Software, Data Handling and Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
On the security of a bidirectional proxy re-encryption scheme from PKC 2010
Weng, Jian1, 2, 3; Zhao, Yunlei4; Hanaoka, Goichiro5
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6571 LNCS, p 284-295, 2011, Public Key Cryptography, PKC 2011 - 14th International Conference on Practice and Theory in Public Key Cryptography, Proceedings; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642193781;
DOI: 10.1007/978-3-642-19379-8_18; Conference: 14th International Conference on Practice and Theory in Public Key Cryptography, PKC 2011, March 6, 2011 - March 9, 2011; Sponsor: International Association for Cryptologic Research (IACR);
Publisher: Springer Verlag
Author affiliation: 1 Department of Computer Science, Jinan University, Guangzhou, China2 State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China3 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing, China4 Software School, Fudan University, Shanghai, China5 National Institute of Advanced Industrial Science and Technology, Tokyo, Japan
Abstract: In ACM CCS 2007, Canetti and Hohenberger left an interesting open problem of how to construct a chosen-ciphertext secure proxy re-encryption (PRE) scheme without bilinear maps. This is a rather interesting problem and has attracted great interest in recent years. In PKC 2010, Matsuda, Nishimaki and Tanaka introduced a novel primitive named re-applicable lossy trapdoor function, and then used it to construct a PRE scheme without bilinear maps. Their scheme is claimed to be chosen-ciphertext secure in the standard model. In this paper, we make a careful observation on their PRE scheme, and indicate that their scheme does not satisfy chosen-ciphertext security. The purpose of this paper is to clarify the fact that it is still an open problem to come up with a chosen-ciphertext secure PRE scheme without bilinear maps in the standard model. © 2011 International Association for Cryptologic Research. (19 refs.)
Main Heading: Standards
Controlled terms: Public key cryptography
Uncontrolled terms: Bilinear map - Chosen ciphertext security - Ciphertexts - Open problems - Proxy re encryptions - proxy re-encryption - standard model - The standard model - Trapdoor functions
Classification Code: 716 Telecommunication; Radar, Radio and Television - 717 Optical Communication - 718 Telephone Systems and Related Technologies; Line Communications - 723 Computer Software, Data Handling and Applications - 902.2 Codes and Standards
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Dichotomy for Holant* problems of Boolean domain
Cai, Jin-Yi1; Lu, Pinyan2; Xia, Mingji3
Source: Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms, p 1714-1728, 2011, Proceedings of the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011; ISBN-13: 9780898719932; Conference: 22nd Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011, January 23, 2011 - January 25, 2011; Sponsor: ACM Spec. Interest Group. Algorithms Comput. Theory (SIGACT); SIAM Activity Group on Discrete Mathematics;
Publisher: Association for Computing Machinery
Author affiliation: 1 University of Wisconsin-Madison, United States2 Microsoft Research Asia, China3 Institute of Software, Chinese Academy of Sciences, China
Abstract: Holant problems are a general framework to study counting problems. Both counting Constraint Satisfaction Problems (#CSP) and graph homomorphisms are special cases. We prove a complexity dichotomy theorem for Holant* (F), where F is a set of constraint functions on Boolean variables and output complex values. The constraint functions need not be symmetric functions. We identify four classes of problems which are polynomial time computable; all other problems are proved to be #P-hard. The main proof technique and indeed the formulation of the theorem use holographic algorithms and reductions. By considering these counting problems over the complex domain, we discover surprising new tractable classes, which are associated with isotropic vectors, i.e., a (non-zero) vector whose inner product with itself is zero. (45 refs.)
Main Heading: Boolean functions
Controlled terms: Algorithms - Polynomial approximation - Theorem proving
Uncontrolled terms: Boolean domain - Boolean variables - Complex domains - Complex values - Complexity dichotomies - Constraint functions - Constraint Satisfaction Problems - Counting problems - Graph homomorphisms - Inner product - Isotropic vectors - Polynomial-time - Symmetric functions - Tractable class
Classification Code: 723 Computer Software, Data Handling and Applications - 921 Mathematics
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Decidable temporal dynamic description logic
Chang, Liang1; Shi, Zhong-Zhi2; Gu, Tian-Long1; Wang, Xiao-Feng2
Source: Ruan Jian Xue Bao/Journal of Software, v 22, n 7, p 1524-1537, July 2011; Language: Chinese; ISSN: 10009825; DOI: 10.3724/SP.J.1001.2011.03869;
Publisher: Chinese Academy of Sciences
Author affiliation: 1 Guangxi Key Laboratory of Trusted Software, Guilin University of Electronic Technology, Guilin 541004, China2 Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, The Chinese Academy of Sciences, Beijing 100190, China
Abstract: The dynamic description logic DDL (dynamic description logic) provides a kind of action theory based on description logics. It is a useful representation of dynamic application domains in the environment of the Semantic Web. In order to bring the representation capability of branching temporal logic into the dynamic description logic, this paper treats the time slices of temporal logics as the executions of atomic actions, so that the temporal dimension and the dynamic dimension can be unified. Based on this idea, a temporal dynamic description logic named TDALCQIO, constructed over the description logic ALCQIO, is presented. A tableau decision algorithm is provided for TDALCQIO. Both the termination and the correctness of this algorithm have been proved. The logic TDALCQIO not only inherits the representation capability provided by the dynamic description logic constructed over ALCQIO (attributive language with complements, qualified number restrictions, inverse roles and nominals), but it also has the ability to describe and reason about temporal features such as the reachability property and the safety property of whole dynamic application domains. Therefore, TDALCQIO provides further support for knowledge representation and reasoning in the environment of the Semantic Web. © Copyright 2011, Institute of Software, the Chinese Academy of Sciences. (19 refs.)
Main Heading: Temporal logic
Controlled terms: Algorithms - Computability and decidability - Data description - Formal languages - Knowledge representation - Semantic Web - Semantics - User interfaces
Uncontrolled terms: Action theory - Branching temporal logic - Decision algorithms - Dynamic description logic - Knowledge representation and reasoning
Classification Code: 721.1 Computer Theory, Includes Formal Logic, Automata Theory, Switching Theory, Programming Theory - 722.2 Computer Peripheral Equipment - 723 Computer Software, Data Handling and Applications - 903 Information Science - 903.2 Information Dissemination - 921 Mathematics
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
The lower bound on the second-order nonlinearity of a class of Boolean functions with high nonlinearity
Sun, Guanghong1, 2; Wu, Chuankun2
Source: Applicable Algebra in Engineering, Communications and Computing, v 22, n 1, p 37-45, February 2011; ISSN: 09381279; DOI: 10.1007/s00200-010-0136-y;
Publisher: Springer Verlag
Author affiliation: 1 College of Sciences, Hohai University, Nanjing 210098, China2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: The r-th order nonlinearity of Boolean functions is an important cryptographic criterion associated with some attacks on stream and block ciphers. It is also very useful in coding theory, since it is related to the covering radii of Reed-Muller codes. By investigating the lower bound of the nonlinearity of the derivative of the function f, this paper tightens the lower bound of the second-order nonlinearity of a class of Boolean functions over F_{2^n} with high nonlinearity in the form f(x) = tr(λx^d), where λ ∈ F*_{2^r}, d = 2^{2r} + 2^r + 1, and n = 4r. © 2010 Springer-Verlag. (20 refs.)
Main Heading: Boolean functions
Controlled terms: Computer programming - Cryptography
Uncontrolled terms: Coding Theory - Cryptographic criterion - Derivation - High nonlinearity - Lower bounds - Non-Linearity - Reed-Muller codes - Second-order nonlinearity - Stream and block ciphers - Walsh spectrum
Classification Code: 716 Telecommunication; Radar, Radio and Television - 717 Optical Communication - 718 Telephone Systems and Related Technologies; Line Communications - 721.1 Computer Theory, Includes Formal Logic, Automata Theory, Switching Theory, Programming Theory - 723 Computer Software, Data Handling and Applications - 723.1 Computer Programming
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
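The (first-order) nonlinearity that the bound above builds on is computable directly from the Walsh spectrum via nl(f) = 2^(n-1) - max_a |W_f(a)| / 2. A minimal brute-force sketch (illustrative only; practical for small n, and unrelated to the paper's proof technique):

```python
def walsh_nonlinearity(truth_table):
    # Nonlinearity from the Walsh spectrum:
    #   W_f(a) = sum_x (-1)^(f(x) XOR a.x),  nl(f) = 2^(n-1) - max|W_f|/2
    # truth_table[x] is f(x) in {0,1}, indexed by the n-bit integer x.
    n = len(truth_table).bit_length() - 1
    assert len(truth_table) == 1 << n
    best = 0
    for a in range(1 << n):
        w = sum((-1) ** (truth_table[x] ^ bin(a & x).count('1') % 2)
                for x in range(1 << n))
        best = max(best, abs(w))
    return (1 << (n - 1)) - best // 2
```

For n = 2, the AND function is bent (nonlinearity 1), while any affine function has nonlinearity 0.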
Generic fair exchange protocol based on malicious agents
Lei, Xin-Feng1, 2; Fan, Xiao-Jian3; Ma, Wen4; Liu, Jun2; Xiao, Jun-Mo2
Source: Jiefangjun Ligong Daxue Xuebao/Journal of PLA University of Science and Technology (Natural Science Edition), v 12, n 1, p 19-24, February 2011; Language: Chinese; ISSN: 10093443;
Publisher: University of Science and Technology
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 Institute of Communications Engineering, PLA Univ. of Sci. and Tech., Nanjing 210007, China3 Unit No. 66008 of PLA, Tianjin 300250, China4 Nanjing Army Command College, Nanjing 210045, China
Abstract: In fair exchange protocols, a protocol without a trusted third party (TTP) cannot fully support fairness, and an off-line TTP protocol still needs the TTP and suffers low efficiency when the agents of the protocol are malicious. Moreover, most current fair exchange protocols aim at exchanging specific items and thus lose their universality. To tackle the above deficiencies with a low-load on-line TTP, a generic fair exchange protocol based on malicious agents was proposed, and the properties affecting fairness were analyzed. The analysis shows that the protocol places fewer requirements on the environment, provides universality of fair exchange, avoids most fairness problems in current protocols, and keeps high efficiency. (11 refs.)
Uncontrolled terms: Fair-exchange protocols - Fairness - Non-repudiation - Secrecy - Timeliness
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Research on a model driven development framework for pen-based user interface
Chen, Ming-Xuan1, 2; Deng, Chang-Zhi1; Ren, Lei3; Tian, Feng1; Dai, Guo-Zhong1
Source: Tien Tzu Hsueh Pao/Acta Electronica Sinica, v 39, n 2, p 268-274, February 2011; Language: Chinese; ISSN: 03722112;
Publisher: Chinese Institute of Electronics
Author affiliation: 1 Intelligence Engineering Laboratory, Institute of Software, Chinese Acad. of Sci., Beijing 100190, China2 Graduate University of Chinese Acad. of Sci., Beijing 100049, China3 School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China
Abstract: In order to make pen-based user interfaces easy to develop with personalized requirements and diverse devices, we propose a model driven development framework. For a pen-based user interface, we first present a general development framework, then build a platform independent model and a platform specific model based on the model driven architecture. Furthermore, we introduce the transformation from the former model to the latter. Finally, a toolkit named "Iris" is provided to support the development. The example application built with "Iris" shows that the model driven development framework can benefit the development of a pen-based interface and effectively reduce the complexity of development. (13 refs.)
Main Heading: User interfaces
Controlled terms: Human computer interaction - Knowledge management - Software design
Uncontrolled terms: Diverse devices - Human-computer - Model driven - Model driven architectures - Model driven development - Pen based user interfaces - Pen-based interfaces - Platform independent model - Platform specific model - Software development method
Classification Code: 722.2 Computer Peripheral Equipment - 723.1 Computer Programming - 723.5 Computer Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Related-key rectangle attack on the full ARIRANG encryption mode
Zhang, Peng1; Li, Rui-Lin1; Li, Chao1, 2
Source: Tongxin Xuebao/Journal on Communications, v 32, n 8, p 15-22, August 2011; Language: Chinese; ISSN: 1000436X;
Publisher: Editorial Board of Journal on Communications
Author affiliation: 1 Department of Mathematics and System Science, Science College, National University of Defense Technology, Changsha 410073, China2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: The security of the block cipher used in the compression function of ARIRANG, which was one of the SHA-3 candidates, was re-evaluated. Based on a linear transformation of the master key and the all-one differential of the round function, a full 40-round related-key rectangle attack on the ARIRANG encryption mode was presented, which was the first cryptanalytic result for the ARIRANG encryption mode. The result shows that the ARIRANG encryption mode as a block cipher is not immune to the related-key rectangle attack. (13 refs.)
Main Heading: Linear transformations
Controlled terms: Geometry - Hash functions - Mathematical transformations
Uncontrolled terms: ARIRANG - Block ciphers - Compression functions - Encryption mode - Master key - Related-key rectangle attack - Round functions - Sha-3 candidates
Classification Code: 921 Mathematics - 921.3 Mathematical Transformations
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Demo: A sensor network time synchronization protocol based on FM Radio Data System
Li, Liqun1, 2, 3; Xing, Guoliang3; Sun, Limin1; Huangfu, Wei1; Zhou, Ruogu3; Zhu, Hongsong1
Source: MobiSys'11 - Compilation Proceedings of the 9th International Conference on Mobile Systems, Applications and Services and Co-located Workshops, p 367, 2011, MobiSys'11 - Compilation Proceedings of the 9th International Conference on Mobile Systems, Applications, and Services and Co-located Workshops; ISBN-13: 9781450306430; DOI: 10.1145/1999995.2000038; Conference: 9th International Conference on Mobile Systems, Applications, and Services, MobiSys'11 and Co-located Workshops, June 28, 2011 - July 1, 2011; Sponsor: ACM SIGMOBILE; USENIX Association;
Publisher: Association for Computing Machinery
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, China2 Graduate University, Chinese Academy of Sciences, China3 Department of Computer Science and Engineering, Michigan State University, United States (3 refs.)
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Automatic clustering using genetic algorithms
Liu, Yongguo1, 2, 3, 4; Wu, Xindong4; Shen, Yidong2
Source: Applied Mathematics and Computation, v 218, n 4, p 1267-1279, October 15, 2011; ISSN: 00963003; DOI: 10.1016/j.amc.2011.06.007;
Publisher: Elsevier Inc.
Author affiliation: 1 School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China2 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100191, China3 Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, Changchun 130012, China4 Department of Computer Science, University of Vermont, Burlington, VT 05405, United States
Abstract: When faced with a clustering problem, many clustering methods require the designer to provide the number of clusters as input; unfortunately, the designer generally has no such information beforehand. In this article, we develop a genetic algorithm based clustering method called automatic genetic clustering for unknown K (AGCUK). In the AGCUK algorithm, noising selection and division-absorption mutation are designed to keep a balance between selection pressure and population diversity. In addition, the Davies-Bouldin index is employed to measure the validity of clusters. Experimental results on artificial and real-life data sets are given to illustrate the effectiveness of the AGCUK algorithm in automatically evolving the number of clusters and providing the clustering partition. © 2011 Elsevier Inc. All rights reserved. (38 refs.)
Main Heading: Clustering algorithms
Controlled terms: Genetic algorithms
Uncontrolled terms: Automatic clustering - Clustering - Clustering methods - Clustering problems - Davies-Bouldin index - k-Means algorithm - Noising method - Number of clusters - Population diversity - Real life datasets - Selection pressures
Classification Code: 721 Computer Circuits and Logic Elements - 723 Computer Software, Data Handling and Applications - 921 Mathematics
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
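The cluster-validity measure named in the abstract, the Davies-Bouldin index, has a standard definition: DB = (1/k) Σ_i max_{j≠i} (S_i + S_j) / M_ij, where S_i is the mean distance of cluster i's points to their centroid and M_ij the distance between centroids. Lower is better. A minimal sketch of just this measure (the genetic search of AGCUK is not reproduced here):

```python
import math

def davies_bouldin(points, labels):
    # DB = (1/k) * sum_i max_{j != i} (S_i + S_j) / M_ij
    # S_i: mean point-to-centroid distance within cluster i
    # M_ij: distance between centroids of clusters i and j
    clusters = {}
    for p, l in zip(points, labels):
        clusters.setdefault(l, []).append(p)
    cent = {l: tuple(sum(c) / len(pts) for c in zip(*pts))
            for l, pts in clusters.items()}
    scat = {l: sum(math.dist(p, cent[l]) for p in pts) / len(pts)
            for l, pts in clusters.items()}
    labs = list(clusters)
    total = 0.0
    for i in labs:
        total += max((scat[i] + scat[j]) / math.dist(cent[i], cent[j])
                     for j in labs if j != i)
    return total / len(labs)
```

A fitness function built on this index rewards partitions whose clusters are tight relative to their separation, which is how AGCUK can evolve the number of clusters without it being given.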
Collective Entity Linking in Web text: A graph-based method
Han, Xianpei1; Sun, Le1; Zhao, Jun2
Source: SIGIR'11 - Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval, p 765-774, 2011, SIGIR'11 - Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval; ISBN-13: 9781450309349; DOI: 10.1145/2009916.2010019; Conference: 34th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR'11, July 24, 2011 - July 28, 2011; Sponsor: Assoc. Comput. Mach., Spec. Interest Group Inf. Retr. (ACM SIGIR);
Publisher: Association for Computing Machinery
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, Beijing, China2 National Laboratory of Pattern Recognition, Institute of Automation Beijing, China
Abstract: Entity Linking (EL) is the task of linking name mentions in Web text with their referent entities in a knowledge base. Traditional EL methods usually link name mentions in a document by assuming them to be independent. However, there is often additional interdependence between different EL decisions, i.e., the entities in the same document should be semantically related to each other. In these cases, Collective Entity Linking, in which the name mentions in the same document are linked jointly by exploiting the interdependence between them, can improve the entity linking accuracy. This paper proposes a graph-based collective EL method, which can model and exploit the global interdependence between different EL decisions. Specifically, we first propose a graph-based representation, called Referent Graph, which can model the global interdependence between different EL decisions. Then we propose a collective inference algorithm, which can jointly infer the referent entities of all name mentions by exploiting the interdependence captured in Referent Graph. The key benefit of our method comes from: 1) The global interdependence model of EL decisions; 2) The purely collective nature of the inference algorithm, in which evidence for related EL decisions can be reinforced into high-probability decisions. Experimental results show that our method can achieve significant performance improvement over the traditional EL methods. (25 refs.)
Main Heading: Information retrieval
Controlled terms: Algorithms - Inference engines - Knowledge based systems - User interfaces
Uncontrolled terms: Collective Entity Linking - Collective inference - Entity disambiguation - Graph-based - Graph-based methods - Graph-based representations - Inference algorithm - Knowledge base - Performance improvements
Classification Code: 722.2 Computer Peripheral Equipment - 723 Computer Software, Data Handling and Applications - 921 Mathematics
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
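The collective-inference idea above can be sketched with a schematic PageRank-style propagation over a toy referent graph (an illustrative assumption, not the paper's exact algorithm or scoring): each candidate entity keeps local mention-compatibility evidence, reinforced each round by the scores of semantically related candidates.

```python
def collective_link(mentions, compat, related, iters=20, d=0.85):
    # mentions: {mention: [candidate entities]}
    # compat[(mention, entity)]: local compatibility score in [0, 1]
    # related[(e1, e2)]: entity-entity semantic relatedness in [0, 1]
    # (all names/weights here are hypothetical toy data)
    score = {e: compat[(m, e)] for m, cands in mentions.items() for e in cands}
    ents = list(score)
    for _ in range(iters):
        new = {}
        for e in ents:
            # evidence flowing in from related candidates of other mentions
            nbr = sum(related.get((e, f), 0.0) * score[f] for f in ents if f != e)
            local = max(compat[(m, e)] for m, c in mentions.items() if e in c)
            new[e] = (1 - d) * local + d * nbr
        norm = sum(new.values()) or 1.0
        score = {e: v / norm for e, v in new.items()}
    # each mention takes its highest-scoring candidate
    return {m: max(cands, key=lambda e: score[e]) for m, cands in mentions.items()}
```

With equal local scores, the mutually related pair wins jointly, which is exactly the reinforcement effect the abstract describes.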
Static analysis of TOCTTOU vulnerabilities in Unix-style file system
Han, Wei1, 2, 3; He, Yeping1
Source: Jisuanji Yanjiu yu Fazhan/Computer Research and Development, v 48, n 8, p 1430-1437, August 2011; Language: Chinese; ISSN: 10001239;
Publisher: Science Press
Author affiliation: 1 National Engineering Research Center for Fundamental Software, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 Graduate University, Chinese Academy of Sciences, Beijing 100049, China3 School of Information Science and Technology, Shijiazhuang Tiedao University, Shijiazhuang 050043, China
Abstract: TOCTTOU is a serious threat to Unix-style file systems. All the existing static detection methods have a high false positive rate, for two reasons: firstly, the function pairs which may cause TOCTTOU vulnerabilities are not defined and enumerated accurately; and secondly, the methods make an over-approximation of the program and omit a lot of useful information. In this paper, we first systematically examine the TOCTTOU pairs in the standard C library. On this basis, a static analysis method is presented to detect TOCTTOU vulnerabilities. A vulnerability is expressed as a finite safety state property, and at each program point a value is associated with a set of states. To make the analysis more precise, the algorithm is inter-procedurally flow sensitive and intra-procedurally path sensitive. To achieve scalability, the safety state property of each procedure is analyzed independently and the inter-procedural analysis is summary based. The experimental results show that this method can effectively find TOCTTOU vulnerabilities in C programs. In comparison with other static methods, this method can effectively reduce the false positive rate. (21 refs.)
Main Heading: Static analysis
Controlled terms: C (programming language) - UNIX
Uncontrolled terms: C programs - Detection methods - False positive rates - File race conditions - File systems - Flow sensitive - Flow-sensitive analysis - Path-sensitive analysis - Program points - Safety state - Static analysis method - Static method - TOCTTOU vulnerabilities
Classification Code: 722 Computer Systems and Equipment - 723 Computer Software, Data Handling and Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
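The check-then-use pairs the abstract refers to (e.g. C's <access, open>) have a direct Python analogue; a minimal sketch of the vulnerable pattern and one common mitigation (the function names and paths here are illustrative, and this is not the paper's detector):

```python
import os

# Vulnerable check-then-use pair: between the time-of-check and the
# time-of-use, the file at `path` can be replaced (e.g. by a symlink
# to a file the attacker wants overwritten).
def write_report_racy(path, data):
    if os.access(path, os.W_OK):          # time-of-check
        with open(path, 'w') as f:        # time-of-use: race window here
            f.write(data)

# Safer: drop the separate check, act on the open() outcome directly,
# and refuse to follow symlinks (POSIX O_NOFOLLOW).
def write_report_safe(path, data):
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | os.O_NOFOLLOW, 0o600)
    with os.fdopen(fd, 'w') as f:
        f.write(data)
```

A static detector of the kind described above would flag the <os.access, open> pair in the first function as a TOCTTOU candidate while accepting the second.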
Runtime monitoring of data-centric temporal properties for Web services
Wu, Guoquan1; Wei, Jun1; Ye, Chunyang1; Shao, Xiaozhe1; Zhong, Hua1; Huang, Tao1
Source: Proceedings - 2011 IEEE 9th International Conference on Web Services, ICWS 2011, p 161-170, 2011, Proceedings - 2011 IEEE 9th International Conference on Web Services, ICWS 2011; ISBN-13: 9780769544632; DOI: 10.1109/ICWS.2011.124; Article number: 6009385; Conference: 2011 IEEE 9th International Conference on Web Services, ICWS 2011, July 4, 2011 - July 9, 2011; Sponsor: IEEE; IEEE Computer Society (CS); TC-SVC; IBM; SAP;
Publisher: IEEE Computer Society
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, China
Abstract: Runtime monitoring of Web service compositions has been widely acknowledged as a significant approach to understanding and guaranteeing the quality of services. However, existing runtime monitoring solutions consider only the constraints on the sequence of messages exchanged between partner services and ignore the actual data contents inside the messages. As a result, it is difficult to monitor some dynamic properties, such as how message data of interest is processed between different participants. To address this issue, we propose an efficient, non-intrusive online monitoring approach to dynamically analyze data-centric properties for service-oriented applications involving multiple participants. By introducing Par-BCL - a Parametric Behavior Constraint Language for Web services - to define monitoring parameters, various data-centric temporal behavior properties for Web services can be specified and monitored. This approach broadens the monitored patterns to include not only message exchange orders, but also the data contents bound to the parameters. To reduce runtime overhead, we statically analyze the monitored properties to generate a parameter state machine from the event pattern automata to optimize monitoring. The experiments show that our solution is efficient and promising. © 2011 IEEE. (22 refs.)
Main Heading: Web services
Controlled terms: Monitoring - Network components
Uncontrolled terms: Behavior constraints - Data centric - Data contents - Dynamic property - Event pattern - Message exchange - Monitoring parameters - Non-intrusive - Online monitoring - Parameter state - Partner services - Runtime Monitoring - Runtime overheads - Service Oriented - Temporal behavior - Temporal property - Web service composition
Classification Code: 703.1 Electric Networks - 723 Computer Software, Data Handling and Applications - 941 Acoustical and Optical Measuring Instruments - 942 Electric and Electronic Measuring Instruments - 943 Mechanical and Miscellaneous Measuring Instruments - 944 Moisture, Pressure and Temperature, and Radiation Measuring Instruments
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Gateway-oriented password-authenticated key exchange protocol with stronger security
Wei, Fushan1, 2; Ma, Chuangui1; Zhang, Zhenfeng2
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6980 LNCS, p 366-379, 2011, Provable Security - 5th International Conference, ProvSec 2011, Proceedings; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642243158; DOI: 10.1007/978-3-642-24316-5_26; Conference: 5th International Conference on Provable Security, ProvSec 2011, October 16, 2011 - October 18, 2011; Sponsor: The National Natural Science Foundation of China (NSFC); Xidian Univ., Key Lab. Comput. Networks; Inf. Secur., Minist. Educ.;
Publisher: Springer Verlag
Author affiliation: 1 Zhengzhou Information Science and Technology Institute, Zhengzhou 450002, China; 2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: A gateway-oriented password-based authenticated key exchange (GPAKE) is a three-party protocol, which allows a client and a gateway to establish a common session key with the help of an authentication server. To date, most published GPAKE protocols have been shown vulnerable to undetectable on-line dictionary attacks. The security models for GPAKE are not strong enough to capture such attacks. In this paper, we define a new security model for GPAKE, which is stronger than previous models and captures the desirable security requirements of GPAKE. We also propose an efficient GPAKE protocol and prove its security under the DDH assumption in our model. Unlike previous schemes, ours assumes no pre-established secure channels between the gateways and the server, only authenticated channels between them. Compared with related schemes, our protocol achieves both higher efficiency and stronger security. © 2011 Springer-Verlag. (15 refs.)Main Heading: Gateways (computer networks)Controlled terms: Authentication - Computer crimeUncontrolled terms: Authenticated channel - Authenticated key exchange - Authentication servers - DDH - DDH assumptions - Dictionary attack - Higher efficiency - Password-authenticated key exchange - Password-based authentication - Secure channels - Security model - Security requirements - Session key - Three-partyClassification Code: 723 Computer Software, Data Handling and Applications
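The DDH assumption underlying the protocol above concerns Diffie-Hellman values in a cyclic group. As a sketch of that building block only (this is not the paper's GPAKE protocol, and the toy parameters are far too small to be secure):

```python
# Toy Diffie-Hellman exchange illustrating the DDH building block.
# WARNING: p = 23 is an insecure, illustrative modulus.
import secrets

p, g = 23, 5          # g generates the multiplicative group mod p

def keypair():
    x = secrets.randbelow(p - 2) + 1     # private exponent in [1, p-2]
    return x, pow(g, x, p)               # (private, public = g^x mod p)

a_priv, a_pub = keypair()   # e.g. the client's share
b_priv, b_pub = keypair()   # e.g. the gateway's share

# Each side combines its private key with the other's public value;
# both arrive at g^(ab) mod p, the shared session-key material.
k_client = pow(b_pub, a_priv, p)
k_server = pow(a_pub, b_priv, p)
assert k_client == k_server
```

DDH asserts that, for a suitable group, the tuple (g^a, g^b, g^ab) is indistinguishable from (g^a, g^b, g^c) for random c, which is what lets the session key above be proven pseudorandom.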
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Cupping and diamond embeddings: A unifying approach
Fang, Chengling1; Liu, Jiang2; Wu, Guohua1
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6735 LNCS, p 71-80, 2011, Models of Computation in Context - 7th Conference on Computability in Europe, CiE 2011, Proceedings; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642218743;
DOI: 10.1007/978-3-642-21875-0_8; Conference: 7th Conference on Computability in Europe, CiE 2011, June 27, 2011 - July 2, 2011; Sponsor: National Science Foundation; Association for Symbolic Logic; European Mathematical Society; European Social Fund; Bulgarian National Science Fund;
Publisher: Springer Verlag
Author affiliation: 1 Division of Mathematical Sciences, School of Physical and Mathematical Sciences, Nanyang Technological University, 21 Nanyang Link, Singapore 637371, Singapore; 2 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, 4# South Fourth Street, Zhong Guan Cun, Beijing 100190, China
Abstract: In this paper, we prove that for any nonzero cappable degree c, there is a d.c.e. degree d and a c.e. degree b < d such that c cups d to 0′, caps b to 0 and for any c.e. degree w, either w ≤ b or w ∨ d = 0′. This result has several well-known theorems as direct corollaries, including Arslanov's cupping theorem, Downey's diamond theorem, Downey-Li-Wu's complementation theorem, and Li-Yi's cupping theorem, etc. © 2011 Springer-Verlag. (11 refs.)Main Heading: Theorem provingControlled terms: C (programming language) - Computability and decidabilityUncontrolled terms: Complementation - EmbeddingsClassification Code: 721.1 Computer Theory, Includes Formal Logic, Automata Theory, Switching Theory, Programming Theory - 723.1.1 Computer Programming Languages
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Set-theoretic foundation of parametric polymorphism and subtyping
Castagna, Giuseppe1; Xu, Zhiwu1, 2
Source: ACM SIGPLAN Notices, v 46, n 9, p 94-106, September 2011; ISSN: 15232867; DOI: 10.1145/2034574.2034788;
Publisher: Association for Computing Machinery
Author affiliation: 1 CNRS, Laboratoire Preuves, Univ Paris Diderot, Paris, France; 2 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Science, Beijing, China
Abstract: We define and study parametric polymorphism for a type system with recursive, product, union, intersection, negation, and function types. We first recall why the definition of such a system was considered hard, when not impossible, and then present the main ideas at the basis of our solution. In particular, we introduce the notion of "convexity" on which our solution is built, and discuss its connections with parametricity as defined by Reynolds, on whose study our work sheds new light. Copyright © 2011 ACM. (24 refs.)Main Heading: Recursive functionsControlled terms: Polymorphism - XMLUncontrolled terms: Parametric polymorphism - Parametricity - Reynolds - Subtypings - Type systems - TypesClassification Code: 721.1 Computer Theory, Includes Formal Logic, Automata Theory, Switching Theory, Programming Theory - 723 Computer Software, Data Handling and Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
A graph-based implementation for mechanized refinement calculus of OO programs
Liu, Zhiming1; Morisset, Charles2; Wang, Shuling1, 3
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6527 LNCS, p 258-273, 2011, Formal Methods: Foundations and Applications - 13th Brazilian Symposium on Formal Methods, SBMF 2010, Revised Selected Papers;ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642198281; DOI: 10.1007/978-3-642-19829-8_17; Conference: 13th Brazilian Symposium on Formal Methods, SBMF 2010, November 8, 2010 - November 11, 2010; Sponsor: CNPq, Brazilian Scientific and Technological Research Council; CAPES, Brazilian Higher Education Funding Council; The Federal University of Rio Grande do Norte (UFRN); Miranda Computacao e Comercio Ltda; SETIRN;
Publisher: Springer Verlag
Author affiliation: 1 UNU-IIST, P.O. Box 3058, Macau, China; 2 Royal Holloway, Information Security Group, University of London, Egham, Surrey TW20 0EX, United Kingdom; 3 State Key Lab. of Computer Science, Institute of Software, Chinese Academy of Sciences, China
Abstract: This paper extends the mechanization of the refinement calculus done by von Wright in HOL, representing the state of a program as a graph instead of a tuple, in order to deal with object-orientation. The state graph structure is implemented in Isabelle, together with definitions and lemmas, to help the manipulation of states. We then show how proof obligations are automatically generated from the rCOS tool and can be loaded in Isabelle to be proved. We illustrate our approach by generating the proof obligations for a simple example, including object access and method invocation. © 2011 Springer-Verlag Berlin Heidelberg. (24 refs.)Main Heading: Theorem provingControlled terms: Calculations - Formal methods - Machinery - Problem solvingUncontrolled terms: Automatically generated - Graph-based - Isabelle - Method invocation - Object-orientation - Proof obligations - rCOS - Refinement calculi - State graphsClassification Code: 601 Mechanical Design - 721 Computer Circuits and Logic Elements - 723 Computer Software, Data Handling and Applications - 921 Mathematics
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
On the expressive power of schemes
Dowek, Gilles1; Jiang, Ying2
Source: Information and Computation, v 209, n 9, p 1231-1245, September 2011; ISSN: 08905401, E-ISSN: 10902651; DOI: 10.1016/j.ic.2011.06.003;
Publisher: Elsevier Inc.
Author affiliation: 1 INRIA, 23 avenue d'Italie, CS 81321, 75214 Paris Cedex 13, France; 2 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, P.O. Box 8718, 100190 Beijing, China
Abstract: We present a calculus, called the scheme-calculus, that permits expressing natural deduction proofs in various theories. Unlike λ-calculus, the syntax of this calculus sticks closely to the syntax of proofs; in particular, no names are introduced for the hypotheses. We show that despite its non-determinism, some typed scheme-calculi have the same expressivity as the corresponding typed λ-calculi. © 2011 Elsevier Inc. (20 refs.)Main Heading: CalculationsControlled terms: Biomineralization - Pathology - SyntacticsUncontrolled terms: Bound variables - Expressive power - Natural deduction - Natural deduction proofs - Non-determinism - Proof normalizationClassification Code: 461 Bioengineering and Biology - 721 Computer Circuits and Logic Elements - 723 Computer Software, Data Handling and Applications - 903.2 Information Dissemination - 921 Mathematics
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
CRSD: Application specific auto-tuning of SpMV for diagonal sparse matrices
Sun, Xiangzheng1; Zhang, Yunquan1; Wang, Ting1; Long, Guoping1; Zhang, Xianyi1; Li, Yan1
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6853 LNCS, n PART 2, p 316-327, 2011, Euro-Par 2011 Parallel Processing - 17th International Conference, Proceedings; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642233968; DOI: 10.1007/978-3-642-23397-5_32; Conference: 17th International Conference on Parallel Processing, Euro-Par 2011, August 29, 2011 - September 2, 2011;
Publisher: Springer Verlag
Author affiliation: 1 Lab. of Parallel Software and Computational Science, Institute of Software, Graduate University of Chinese Academy of Sciences, China
Abstract: Sparse Matrix-Vector multiplication (SpMV) is an important computational kernel in scientific applications. Its performance highly depends on the nonzero distribution of sparse matrices. In this paper, we propose a new storage format for diagonal sparse matrices, defined as Compressed Row Segment with Diagonal-pattern (CRSD). We design diagonal patterns to represent the diagonal distribution. As the diagonal distributions are similar within matrices from one application, some diagonal patterns remain unchanged. First, we sample one matrix to obtain the unchanged diagonal patterns. Next, the optimal SpMV codelets are generated automatically for those diagonal patterns. Finally, we combine the generated codelets as the optimal SpMV implementation. In addition, the information collected during the auto-tuning process is also utilized for parallel implementation to achieve load balance. Experimental results demonstrate that the speedup reaches up to 2.37 (1.70 on average) in comparison with DIA and 4.60 (2.10 on average) in comparison with CSR under the same number of threads on two mainstream multi-core platforms. © 2011 Springer-Verlag. (15 refs.)Main Heading: Matrix algebraControlled terms: Distributed computer systems - OptimizationUncontrolled terms: Application-specific optimizations - Autotuning - CRSD - Diagonal-pattern - SpMVClassification Code: 722.4 Digital Computers and Systems - 921.1 Algebra - 921.5 Optimization Techniques
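For readers unfamiliar with diagonal storage, here is a minimal SpMV over the plain DIA layout, the baseline format that CRSD refines; the paper's diagonal patterns and generated codelets are not reproduced, and the tridiagonal example matrix is made up for illustration.

```python
import numpy as np

def dia_spmv(offsets, data, x):
    """y = A @ x with A stored by diagonals (DIA layout).

    offsets[k] is the diagonal offset (0 = main, >0 = super-, <0 = sub-);
    data[k, i] holds A[i, i + offsets[k]], zero-padded when out of range.
    """
    n = x.size
    y = np.zeros(n)
    for off, diag in zip(offsets, data):
        for i in range(n):
            j = i + off
            if 0 <= j < n:           # skip padded entries
                y[i] += diag[i] * x[j]
    return y

# Tridiagonal example: 2 on the main diagonal, -1 on both neighbors.
offsets = [-1, 0, 1]
data = np.array([
    [0.0, -1.0, -1.0, -1.0],   # sub-diagonal (padded at i = 0)
    [2.0,  2.0,  2.0,  2.0],   # main diagonal
    [-1.0, -1.0, -1.0,  0.0],  # super-diagonal (padded at i = n-1)
])
y = dia_spmv(offsets, data, np.ones(4))
print(y.tolist())   # [1.0, 0.0, 0.0, 1.0]
```

Because only offsets and dense diagonal arrays are stored, the inner loop has no column-index indirection, which is the property CRSD's per-pattern codelets exploit.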
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Complete characterization of the ground-space structure of two-body frustration-free Hamiltonians for qubits
Ji, Zhengfeng1, 2; Wei, Zhaohui3; Zeng, Bei4, 5
Source: Physical Review A - Atomic, Molecular, and Optical Physics, v 84, n 4, October 27, 2011; ISSN: 10502947, E-ISSN: 10941622; DOI: 10.1103/PhysRevA.84.042338; Article number: 042338;
Publisher: American Physical Society
Author affiliation: 1 Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, ON N2L 2Y5, Canada; 2 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China; 3 Centre for Quantum Technologies, National University of Singapore, Singapore 117543, Singapore; 4 Department of Mathematics and Statistics, University of Guelph, Guelph, ON N1G 2W1, Canada; 5 Institute for Quantum Computing, University of Waterloo, Waterloo, ON N2L 3G1, Canada
Abstract: The problem of finding the ground state of a frustration-free Hamiltonian carrying only two-body interactions between qubits is known to be solvable in polynomial time. It is also shown recently that, for any such Hamiltonian, there is always a ground state that is a product of single- or two-qubit states. However, it remains unclear whether the whole ground space is of any succinct structure. Here, we give a complete characterization of the ground space of any two-body frustration-free Hamiltonian of qubits. Namely, it is a span of tree tensor network states of the same tree structure. This characterization allows us to show that the problem of determining the ground-state degeneracy is as hard as, but no harder than, its classical analog. © 2011 American Physical Society. (19 refs.)Main Heading: HamiltoniansControlled terms: Characterization - Ground state - Plant extracts - Polynomial approximation - Trees (mathematics)Uncontrolled terms: Network state - Polynomial-time - Tree structures - Two-qubit stateClassification Code: 951 Materials Science - 933 Solid State Physics - 932 High Energy Physics; Nuclear Physics; Plasma Physics - 931 Classical Physics; Quantum Theory; Relativity - 921.6 Numerical Methods - 921.4 Combinatorial Mathematics, Includes Graph Theory, Set Theory - 461.9 Biology
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
EUDTPFA: A pen-based form application development tool for end-user
Fan, Yinting1, 2; Teng, Dongxing1; Ma, Cuixia1; Wang, Hongan1; Dai, Guozhong1
Source: Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics, v 23, n 10, p 1629-1640, October 2011; Language: Chinese; ISSN: 10039775;
Publisher: Institute of Computing Technology
Author affiliation: 1 Intelligence Engineering Laboratory, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China; 2 Graduate University of Chinese Academy of Sciences, Beijing 100049, China
Abstract: Traditional electronic form development tools typically involve complex operations and unnatural interaction, and are difficult to adapt to changes in users' needs. This makes pen-based form applications developable only by professional developers. This paper proposes an end-user development tool for pen-based form applications, called EUDTPFA. We present the system architecture, and the models of the ink data, the form interface, the business rule, the interaction, the scene and the mapping. We illustrate the development process of a form application with an example. Experimental results demonstrate that the tool can effectively help end-users develop pen-based form applications. (18 refs.)Main Heading: User interfacesUncontrolled terms: Business rules - Development tools - eForm - End user development - Pen based user interfacesClassification Code: 722.2 Computer Peripheral Equipment
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
MTBuilder: A user interface toolkit for multi-touch tabletop
Liu, Jiasheng1, 2; Zhang, Fengjun1; Tan, Guofu1, 2; Dai, Zhijun1; Dai, Guozhong1; Wang, Hongan1
Source: Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics, v 23, n 10, p 1649-1655, October 2011; Language: Chinese; ISSN: 10039775;
Publisher: Institute of Computing Technology
Author affiliation: 1 Intelligence Engineering Laboratory, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China; 2 Graduate University of Chinese Academy of Sciences, Beijing 100049, China
Abstract: To address the issue that WIMP-paradigm graphical user interfaces on multi-touch tabletops cannot handle multi-finger gesture recognition and UI component reorientation, this paper proposes a new multi-touch tabletop UI toolkit, termed MTBuilder, based on the universal foundational metaphors OCGM (objects, containers, gestures, and manipulations). First, MTBuilder employs a hierarchical data representation model to store multi-finger information. Then, it configures gesture recognizers dynamically to improve recognition efficiency. Finally, it designs the user interface components based on OCGM. Several typical prototypes have been developed, such as multi-user InfoScan and city planning. The prototypes and the experimental results show that MTBuilder can efficiently support the construction and prototyping of tabletop user interfaces. (8 refs.)Main Heading: Graphical user interfacesControlled terms: Gesture recognition - Human computer interactionUncontrolled terms: Gesture interaction - Hierarchical data - Multi-touch - Multi-user - Tabletop - UI components - User interface toolkitClassification Code: 716 Telecommunication; Radar, Radio and Television - 722.2 Computer Peripheral Equipment
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
A new image denoising scheme using support vector machine classification in shiftable complex directional pyramid domain
Yang, Hong-Ying1; Wang, Xiang-Yang1, 2; Fu, Zhong-Kai1
Source: Applied Soft Computing Journal, 2011; ISSN: 15684946; DOI: 10.1016/j.asoc.2011.09.014; Article in Press
Author affiliation: 1 School of Computer and Information Technology, Liaoning Normal University, Dalian 116029, China; 2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: Edge-preserving image denoising has become an intensive research topic. In this paper, we propose a new image denoising scheme using support vector machine (SVM) classification in the shiftable complex directional pyramid (PDTDFB) domain. Firstly, the noisy image is decomposed into subbands of different frequency and orientation responses using a PDTDFB transform. Secondly, the feature vector for a pixel in a noisy image is formed from the spatial regularity in the PDTDFB domain, and the least squares support vector machine (LS-SVM) model is obtained by training. Then the PDTDFB detail coefficients are divided into two classes (edge-related coefficients and noise-related ones) by the LS-SVM training model. Finally, the detail subbands of PDTDFB coefficients are denoised by using different parameters to control the multiscale and multidirectional anisotropic diffusion. Extensive experimental results demonstrate that our method achieves better performance, in terms of both subjective and objective evaluations, than state-of-the-art denoising techniques. In particular, the proposed method preserves edges very well while removing noise. © 2011 Elsevier B.V. All rights reserved.Main Heading: Support vector machinesControlled terms: Frequency response - Image processing - Image retrieval - Noise pollution control - VectorsUncontrolled terms: Anisotropic Diffusion - De-noising techniques - Detail coefficients - Edge preserving - Feature vectors - Image de-noising - Intensive research - Least squares support vector machines - Multi-directional - Multiscales - Noisy image - Objective evaluation - Sub-bands - Support vector machine classification - Training modelClassification Code: 461.7 Health Care - 723 Computer Software, Data Handling and Applications - 731.1 Control Systems - 741 Light, Optics and Optical Devices - 921.1 Algebra
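The two-class idea above (split detail coefficients into edge-related and noise-related, then treat each class differently) can be illustrated in miniature. This sketch stands in for the paper's method: a 1-D difference signal plays the role of a PDTDFB detail subband, and a simple local-context threshold plays the role of the trained LS-SVM classifier.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.zeros(64)
signal[20:40] = 4.0                                  # a step edge
coeffs = np.diff(signal) + rng.normal(0, 0.2, 63)    # noisy "detail subband"

# Feature: average coefficient magnitude over a local neighborhood --
# edge coefficients cluster spatially, noise does not.
window = np.convolve(np.abs(coeffs), np.ones(3) / 3, mode="same")
is_edge = window > 3 * np.median(window)   # stand-in for the LS-SVM decision

# Keep edge-related coefficients, shrink noise-related ones to zero
# (the paper instead steers anisotropic diffusion per class).
denoised = np.where(is_edge, coeffs, 0.0)
print(int(is_edge.sum()), "coefficients classified as edge-related")
```

The step edges at positions 19 and 39 produce large spatially clustered coefficients, so the classifier keeps them while the isolated noise coefficients are suppressed.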
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
An accurate and practical camera lens model for rendering realistic lens effects
Wu, Jiaze1, 2; Zheng, Changwen1; Hu, Xiaohui1; Li, Chao1, 2
Source: Proceedings - 12th International Conference on Computer-Aided Design and Computer Graphics, CAD/Graphics 2011, p 63-70, 2011, Proceedings - 12th International Conference on Computer-Aided Design and Computer Graphics, CAD/Graphics 2011; ISBN-13: 9780769544977; DOI: 10.1109/CAD/Graphics.2011.18; Article number: 6062767; Conference: 12th International Conference on Computer-Aided Design and Computer Graphics, CAD/Graphics 2011, September 15, 2011 - September 17, 2011; Sponsor: China Computer Federation;
Publisher: IEEE Computer Society
Author affiliation: 1 National Key Laboratory of Integrated Information System Technology, Institute of Software, Chinese Academy of Sciences, China; 2 Graduate University, Chinese Academy of Sciences, China
Abstract: In this paper, an accurate and practical camera lens model is proposed for realistic rendering of lens-related effects. The optical modeling of this new model is first presented from two aspects: lens surface modeling and formulation of ray tracing equations. Then, a number of tunable models for controlling lens properties are introduced and combined to determine overall imaging performance. An implementation framework for this lens model is presented from two important aspects: its internal working framework and a new rendering pipeline for integrating it into a general ray tracer. In contrast to existing lens models, this new one is characterized by its ability to accurately model the image formation process and its friendly tunable models for controlling lens properties. As a consequence, it is capable of simulating complex lens-related effects without requiring much expertise in lens optics. Finally, multiple rendering experiments are performed to demonstrate the ability and usage of this novel model to simulate a variety of complex lens-related effects. © 2011 IEEE. (20 refs.)Main Heading: LensesControlled terms: Camera lenses - Computer aided design - Computer graphicsUncontrolled terms: Image formation process - Imaging performance - Lens model - Lens surface - New model - Optical modeling - Ray-tracing equations - Realistic rendering - Rendering pipelinesClassification Code: 723.5 Computer Applications - 741.3 Optical Devices and Systems - 742.2 Photographic Equipment
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
PEDA: Comprehensive damage assessment for production environment server systems
Zhang, Shengzhi1; Jia, Xiaoqi2; Liu, Peng3; Jing, Jiwu4
Source: IEEE Transactions on Information Forensics and Security, v 6, n 4, p 1323-1334, December 2011; ISSN: 15566013; DOI: 10.1109/TIFS.2011.2162062; Article number: 5954181;
Publisher: Institute of Electrical and Electronics Engineers Inc.
Author affiliation: 1 Department of Computer Science and Engineering, Pennsylvania State University, University Park, PA 16802, United States; 2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China; 3 College of Information Sciences and Technology, Pennsylvania State University, University Park, PA 16802, United States; 4 State Key Laboratory of Information Security, Graduate University of Chinese Academy of Sciences, Beijing 100049, China
Abstract: Analyzing intrusions into production servers is onerous and error-prone work for system security technicians. Existing tools or techniques are quite limited. For instance, system event tracking cannot completely capture intrusion propagation, while dynamic taint tracking is infeasible to deploy due to its significant runtime overhead. Thus, we propose production environment damage assessment (PEDA), a systematic approach to postmortem intrusion analysis for production workload servers. PEDA replays the infected execution with high fidelity on a separate analyzing instrumentation platform to conduct the heavy workload analysis. Though the replayed execution runs atop the instrumentation platform (i.e., a binary-translation-based virtual machine), PEDA allows the first-run execution to run atop a hardware-assisted virtual machine to ensure minimum runtime overhead. Our evaluation demonstrates the efficiency of the PEDA system, with a runtime overhead as low as 5%. The real-life intrusion studies show the advantage of PEDA intrusion analysis over existing techniques. © 2006 IEEE. (38 refs.)Main Heading: Damage detectionControlled terms: Computer simulationUncontrolled terms: Damage assessments - Error prones - Hardware-assisted - Heavy workloads - High fidelity - Intrusion analysis - Production environments - Production workloads - Runtime overheads - Server system - System security - Virtual machinesClassification Code: 421 Strength of Building Materials; Mechanical Properties - 422 Strength of Building Materials; Test Equipment and Methods - 723.5 Computer Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
A drum-buffer-rope based scheduling method for semiconductor manufacturing system
Cao, Zhengcai1, 2; Peng, Yazhen1; Wang, Yongji3
Source: IEEE International Conference on Automation Science and Engineering, p 120-125, 2011, 2011 IEEE International Conference on Automation Science and Engineering, CASE 2011; ISSN: 21618070, E-ISSN: 21618089; ISBN-13: 9781457717307; DOI: 10.1109/CASE.2011.6042397; Article number: 6042397; Conference: 2011 7th IEEE International Conference on Automation Science and Engineering, CASE 2011, August 24, 2011 - August 27, 2011; Sponsor: Ansaldo Sistemi Industriali - Results to the Power of Three;
Publisher: IEEE Computer Society
Author affiliation: 1 College of Information Science and Technology, Beijing University of Chemical Technology, Beijing 100029, China; 2 State Key Laboratory of Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an 710054, China; 3 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: Scheduling in semiconductor manufacturing systems is an important task for industries facing intense resource competition. Effective scheduling can improve overall system performance and customer satisfaction. In this paper, a scheduling method centered on bottleneck equipment control is designed based on Drum-Buffer-Rope (DBR) theory. To identify the main bottleneck, the relative load, which takes the reentrant nature of the process into consideration, is applied. To avoid local blocking, the management of sub-bottlenecks is presented. As both releasing and dispatching are taken into account, a scheduling method based on compound priority is formed. Finally, HP-24 semiconductor wafer fabrication is used as an example to demonstrate the effectiveness of the proposed method. © 2011 IEEE. (12 refs.)Main Heading: ManufactureControlled terms: Customer satisfaction - Rope - Scheduling - Semiconductor device manufactureUncontrolled terms: Drum-buffer-rope - Equipment control - Scheduling methods - Semiconductor manufacturing systems - Semiconductor wafer fabricationClassification Code: 535 Rolling, Forging and Forming - 537.1 Heat Treatment Processes - 714.2 Semiconductor Devices and Integrated Circuits - 912 Industrial Engineering and Management - 912.2 Management
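The bottleneck-identification step described above can be sketched with a "relative load" computed as total processing demand (counting reentrant visits) over capacity. The machine groups, numbers, and exact load formula below are illustrative assumptions, not the paper's data.

```python
# Hypothetical fab data: demand per machine group, counting reentrance.
jobs_per_day = 100
machine_groups = {
    # name: (minutes per visit, visits per job due to reentrance, machines)
    "litho":   (3.0, 8, 10),
    "etch":    (2.0, 5, 8),
    "implant": (4.0, 2, 6),
}

def relative_load(minutes, visits, machines, minutes_avail=24 * 60):
    demand = jobs_per_day * minutes * visits   # minutes of work required
    capacity = machines * minutes_avail        # minutes available per day
    return demand / capacity

loads = {g: relative_load(*spec) for g, spec in machine_groups.items()}
bottleneck = max(loads, key=loads.get)   # the "drum" in DBR terms
print(bottleneck, round(loads[bottleneck], 2))
```

In DBR terms, the group with the highest relative load becomes the drum that paces release, while lower-loaded groups are candidates for sub-bottleneck management.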
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
A web service QoS prediction approach based on multi-dimension QoS
Li, Lu1; Rong, Mei2; Zhang, Guangquan3, 4
Source: ICCSE 2011 - 6th International Conference on Computer Science and Education, Final Program and Proceedings, p 1319-1322, 2011, ICCSE 2011 - 6th International Conference on Computer Science and Education, Final Program and Proceedings; ISBN-13: 9781424497188; DOI: 10.1109/ICCSE.2011.6028876; Article number: 6028876; Conference: 6th International Conference on Computer Science and Education, ICCSE 2011, August 3, 2011 - August 5, 2011;
Publisher: IEEE Computer Society
Author affiliation: 1 Department of Computer Engineering, Suzhou Vocational University, Suzhou, China; 2 Shenzhen Tourism College, Jinan University, Shenzhen, China; 3 School of Computer Science and Technology, Soochow University, Suzhou, China; 4 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Science, Beijing, China
Abstract: With the rapid development of web services, there are many web services to choose from. Consumers must often choose a suitable web service while knowing little about the candidates. To address this problem, this paper presents a web service QoS prediction approach based on multi-dimension QoS. It normalizes the multi-dimension QoS parameters and maps each QoS dimension into the same interval, providing references for consumers choosing a suitable web service. Finally, an application example is used to illustrate the approach. © 2011 IEEE. (5 refs.)Main Heading: Web servicesControlled terms: Computer science - Education computing - Quality of service - User interfacesUncontrolled terms: Application examples - multi-dimension qos - normalization - QoS parameters - Rapid developmentClassification Code: 723 Computer Software, Data Handling and Applications - 722.2 Computer Peripheral Equipment - 722 Computer Systems and Equipment - 723.2 Data Processing and Image Processing - 721 Computer Circuits and Logic Elements - 717 Optical Communication - 716 Telecommunication; Radar, Radio and Television - 718 Telephone Systems and Related Technologies; Line Communications
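The normalization step in the abstract, mapping each QoS dimension into a common interval, is commonly done with min-max scaling, flipping cost-type dimensions so that larger is always better. This is one consistent construction, not necessarily the paper's exact mapping; the services and QoS values are invented for the example.

```python
# Hypothetical candidate services: (latency in ms, availability in %).
candidates = {
    "svc_a": (120.0, 99.0),
    "svc_b": (300.0, 99.9),
    "svc_c": (80.0, 97.0),
}
benefit = [False, True]   # latency is a cost, availability a benefit

def normalize(values, is_benefit):
    """Map one QoS dimension onto [0, 1]; 1 is always best."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0 for _ in values]
    return [(v - lo) / (hi - lo) if is_benefit else (hi - v) / (hi - lo)
            for v in values]

cols = list(zip(*candidates.values()))                 # per-dimension columns
norm_cols = [normalize(col, b) for col, b in zip(cols, benefit)]
scores = {name: sum(dims) / len(dims)                  # equal-weight average
          for name, dims in zip(candidates, zip(*norm_cols))}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

Because every dimension lands in [0, 1] with a shared orientation, the per-service scores become directly comparable, which is what lets the approach rank candidates for the consumer.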
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Theory and applications of models of computation (TAMC 2008)
Agrawal, Manindra1; Li, Angsheng2
Source: Theoretical Computer Science, v 412, n 18, p 1645, April 15, 2011; ISSN: 03043975; DOI: 10.1016/j.tcs.2010.12.039;
Publisher: Elsevier
Author affiliation: 1 Department of Computer Science and Engineering, Indian Institute of Technology, Kanpur 208016, India; 2 Institute of Software, Chinese Academy of Sciences, P.O. Box 8718, Beijing 100080, China
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
An empirical study on evolution of API documentation
Shi, Lin1; Zhong, Hao1; Xie, Tao3; Li, Mingshu1, 2
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6603 LNCS, p 416-431, 2011, Fundamental Approaches to Software Engineering - 14th International Conference, FASE 2011, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2011, Proceedings; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642198106; DOI: 10.1007/978-3-642-19811-3_29; Conference: 14th International Conference on Fundamental Approaches to Software Engineering, FASE 2011, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2011, March 26, 2011 - April 3, 2011; Sponsor: DFG Deutsche Forschungsgemeinschaft; AbsInt Angewandte Informatik GmbH; Microsoft Research; Robert Bosch GmbH; IDS Scheer AG / Software AG;
Publisher: Springer Verlag
Author affiliation: 1 Laboratory for Internet Software Technologies, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China; 2 Key Laboratory for Computer Science, Chinese Academy of Sciences, Beijing 100190, China; 3 Department of Computer Science, North Carolina State University, United States
Abstract: With the evolution of an API library, its documentation also evolves. The evolution of API documentation is common knowledge for programmers and library developers, but not in a quantitative form. Without such quantitative knowledge, programmers may neglect important revisions of API documentation, and library developers may not effectively improve API documentation based on its revision histories. There is a strong need to conduct a quantitative study on API documentation evolution. However, as API documentation is large in size and revisions can be complicated, it is quite challenging to conduct such a study. In this paper, we present an analysis methodology to analyze the evolution of API documentation. Based on the methodology, we conduct a quantitative study on API documentation evolution of five widely used real-world libraries. The results reveal various valuable findings, and these findings allow programmers and library developers to better understand API documentation evolution. © 2011 Springer-Verlag. (26 refs.)Main Heading: Application programming interfaces (API)Controlled terms: Software engineeringUncontrolled terms: Common knowledge - Empirical studies - Library developers - Quantitative knowledge - Quantitative study - Real-worldClassification Code: 723 Computer Software, Data Handling and Applications - 723.1 Computer Programming
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
A multi-objective evolutionary algorithm for minimal visual coverage path problem in raster Terrain
Li, Jie1, 2; Zheng, Chang Wen2; Hu, Xiaohui2
Source: ICIC Express Letters, v 5, n 7, p 2299-2304, July 2011; ISSN: 1881803X;
Publisher: ICIC Express Letters Office
Author affiliation: 1 Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei 230026, China; 2 National Key Laboratory of Integrated Information System Technology, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: The minimal visual coverage path problem has wide applications, such as selecting a marching route or searching for a smuggler's path. The average horizon of a path, the ratio of its visual coverage to its length, can be used to measure how covert the path is. A path containing loops, however, is meaningless even though its average horizon is small, because its length is unbounded. A compromise is to replace the objective of minimal average horizon with the ratio of the path's length to its invisible region, so that minimal length and minimal viewshed can be satisfied simultaneously; however, the modified objective is not completely equivalent to the average horizon. This study instead treats the two elements of the average horizon as two objectives and presents a multi-objective evolutionary algorithm for this single-objective minimal visual coverage path problem. Through this multiobjectivization, together with a proper chromosome structure and effective operators, the presented method outperforms both the simulated annealing algorithm and the single-objective evolutionary algorithm, yielding higher-quality solutions in less computation time. (14 refs.)
Main Heading: Evolutionary algorithms
Controlled terms: Landforms - Mathematical operators - Multiobjective optimization - Simulated annealing
Uncontrolled terms: Average horizon - Chromosome structure - Computation time - Effective operator - If there are - Minimal path - Multi objective evolutionary algorithms - Multiobjectivization - Path problems - Raster terrain - Simulated annealing algorithms - Single objective - Visual coverages
Classification Code: 481.1 Geology - 921 Mathematics - 921.5 Optimization Techniques
Tuple density: A new metric for combinatorial test suites (NIER track)
Chen, Baiqiang1, 2; Zhang, Jian1
Source: Proceedings - International Conference on Software Engineering, p 876-879, 2011, ICSE 2011 - 33rd International Conference on Software Engineering, Proceedings of the Conference; ISSN: 02705257; ISBN-13: 9781450304450; DOI: 10.1145/1985793.1985931; Conference: 33rd International Conference on Software Engineering, ICSE 2011, May 21, 2011 - May 28, 2011; Sponsor: Assoc. Comput. Mach., Spec. Interest Group Softw.; Eng. (ACM SIGSOFT); IEEE Computer Society; Technical Council on Software Engineering (TCSE);
Publisher: IEEE Computer Society
Author affiliation: 1 State Key Lab. of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, 100190, China; 2 Graduate University, Chinese Academy of Sciences, China
Abstract: We propose tuple density as a new metric for combinatorial test suites. It can be used to distinguish one test suite from another even if they have the same size and strength. Moreover, we illustrate how a given test suite can be optimized based on this metric. The initial experimental results are encouraging. © 2011 ACM. (6 refs.)
Main Heading: Software testing
Controlled terms: Software engineering - Testing
Uncontrolled terms: Combinatorial testing - metrics - test suites - tuple density
Classification Code: 423.2 Non Mechanical Properties of Building Materials: Test Methods - 723.1 Computer Programming - 723.5 Computer Applications
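The record above does not give the formula behind tuple density. Purely for illustration, one natural reading is the number of distinct t-way value tuples covered per test case (shown here for t = 2); this is a hypothetical definition, not necessarily the paper's.

```python
from itertools import combinations

def covered_pairs(suite):
    """All distinct 2-way tuples ((i, vi), (j, vj)) covered by a test suite,
    where i, j are parameter indices and vi, vj their values."""
    pairs = set()
    for test in suite:
        for (i, vi), (j, vj) in combinations(enumerate(test), 2):
            pairs.add(((i, vi), (j, vj)))
    return pairs

def tuple_density(suite):
    """Hypothetical metric: distinct covered 2-tuples per test case."""
    return len(covered_pairs(suite)) / len(suite)
```

Under this reading, the suites [(0, 0, 0), (1, 1, 1)] and [(0, 0, 0), (0, 0, 0)] have the same size but densities 3.0 and 1.5, showing how such a metric can break ties between equally sized suites.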
DepSim: A dependency-based malware similarity comparison system
Yi, Yang1, 2; Lingyun, Ying1, 3; Rui, Wang2; Purui, Su1; Dengguo, Feng1, 2
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6584 LNCS, p 503-522, 2011, Information Security and Cryptology - 6th International Conference, Inscrypt 2010, Revised Selected Papers; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642215179;
DOI: 10.1007/978-3-642-21518-6_35; Conference: 6th China International Conference on Information Security and Cryptology, Inscrypt 2010, October 20, 2010 - October 24, 2010; Sponsor: State Key Laboratory of Information Security; Chinese Academy of Sciences; Chinese Association for Cryptologic Research;
Publisher: Springer Verlag
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China; 2 State Key Laboratory of Information Security, Graduate University of Chinese Academy of Sciences, Beijing 100049, China; 3 National Engineering Research Center for Information Security, Beijing 100190, China
Abstract: In malware analysis, it is important to compare unknown files with previously known malicious samples in order to quickly characterize their behavior and generate signatures. Malware writers often use obfuscation, such as packing, junk insertion and other techniques, to thwart traditional similarity comparison methods. In this paper, we introduce DepSim, a novel technique for finding dependency similarities between malicious binary programs. DepSim constructs dependency graphs of the control flow and data flow of a program by taint analysis, and then conducts similarity analysis using a new graph isomorphism technique. To improve the accuracy and anti-interference capability, we reduce redundant loops and remove junk actions in a dependency-graph pre-processing phase, which also greatly improves the performance of our comparison algorithm. We implemented a prototype of DepSim and evaluated it on malware in the wild. Our prototype system successfully identified semantic similarities between malware samples and revealed their inner similarity in program logic and behavior. The results demonstrate that our technique is accurate. © 2011 Springer-Verlag. (29 refs.)
Main Heading: Data flow analysis
Controlled terms: Behavioral research - Computer crime - Cryptography - Dynamic analysis - Network security - Program processors - Semantics
Uncontrolled terms: Anti-interference - Binary programs - Comparison methods - Control flows - Data flow - Dependency graphs - Graph isomorphism - Malware analysis - Malwares - Novel techniques - Pre-processing - Program logic - Prototype system - Semantic similarity - Similarity Analysis
Classification Code: 422.2 Strength of Building Materials : Test Methods - 723 Computer Software, Data Handling and Applications - 723.1 Computer Programming - 903.2 Information Dissemination - 971 Social Sciences
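DepSim's actual comparison is a graph-isomorphism technique over taint-derived dependency graphs. As a far simpler stand-in that conveys the idea of dependency-level (rather than byte-level) similarity, two dependency graphs can be compared by the Jaccard index of their labeled edge sets; the edge encoding below is our assumption, not DepSim's representation.

```python
def edge_jaccard(deps_a, deps_b):
    """Toy dependency-graph similarity: Jaccard index over labeled edges.

    Each graph is a set of (source_op, dep_kind, target_op) triples, e.g.
    ("read_file", "data", "encrypt"). This is a crude stand-in for DepSim's
    graph-isomorphism comparison, not a reimplementation of it.
    """
    a, b = set(deps_a), set(deps_b)
    if not a and not b:
        return 1.0  # two empty graphs are trivially identical
    return len(a & b) / len(a | b)
```

The intuition the paper exploits is that packing and junk insertion change a binary's bytes but tend to leave its behavioral dependencies intact, so comparison at the dependency level resists such obfuscation.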
Nonidentical linear pulse-coupled oscillators model with application to time synchronization in wireless sensor networks
An, Zhulin1, 2; Zhu, Hongsong3; Li, Xinrong4; Xu, Chaonong5; Xu, Yongjun1; Li, Xiaowei1
Source: IEEE Transactions on Industrial Electronics, v 58, n 6, p 2205-2215, June 2011; ISSN: 02780046; DOI: 10.1109/TIE.2009.2038407; Article number: 5371913;
Publisher: Institute of Electrical and Electronics Engineers Inc.
Author affiliation: 1 Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; 2 Graduate University, Chinese Academy of Sciences, Beijing 100049, China; 3 Center of Wireless Ad Hoc Network, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China; 4 Department of Electrical Engineering, College of Engineering, University of North Texas, Denton, TX 76207, United States; 5 Department of Computer Science and Technology, China University of Petroleum-Beijing, Beijing 102249, China
Abstract: Similar to other cyber infrastructure systems, as wireless sensor networks become larger and more complex, many classic algorithms may no longer work efficiently. This paper presents a wireless sensor network time synchronization model that was initially inspired by the synchronous flashing of fireflies, an interesting phenomenon that has been studied for decades. A variety of models have been proposed to explain this phenomenon, among them the pulse-coupled oscillators model, which models fireflies as oscillators that interact only through discrete pulses, similar to the flashing of fireflies. In this paper, we propose a new nonidentical linear pulse-coupled oscillators model and use it to analyze the synchronization of pulse-coupled oscillators with different frequencies. The conditions to achieve and maintain synchronization are derived, and the results are then used to prove that the oscillators in the model achieve synchronization eventually, except for a set of frequencies of zero Lebesgue measure. Furthermore, through simulations and implementation on a wireless sensor network testbed, we demonstrate that the proposed nonidentical linear pulse-coupled oscillators model can be used in designing lightweight, scalable time synchronization protocols for distributed systems. © 2009 IEEE. (32 refs.)
Main Heading: Wireless sensor networks
Controlled terms: Algorithms - Bioluminescence - Computer simulation - Oscillators (electronic) - Oscillators (mechanical) - Sensors - Synchronization
Uncontrolled terms: Biologically inspired algorithms - Classic algorithm - Cyber infrastructures - Different frequency - Distributed systems - Lebesgue measure - Network time-synchronization - Pulse-coupled oscillators - Time synchronization - Wireless sensor
Classification Code: 961 Systems Science - 921 Mathematics - 801 Chemistry - 741.1 Light/Optics - 732 Control Devices - 723.5 Computer Applications - 723 Computer Software, Data Handling and Applications - 713.2 Oscillators - 601.1 Mechanical Devices
Attribute-based authorization delegation model in multi-domain environments
Wu, Bin1, 3; Feng, Deng-Guo1, 2
Source: Ruan Jian Xue Bao/Journal of Software, v 22, n 7, p 1661-1675, July 2011; Language: Chinese; ISSN: 10009825; DOI: 10.3724/SP.J.1001.2011.03870;
Publisher: Chinese Academy of Sciences
Author affiliation: 1 State Key Laboratory of Information Security, Graduate University, The Chinese Academy of Sciences, Beijing 100049, China; 2 State Key Laboratory of Information Security, Institute of Software, The Chinese Academy of Sciences, Beijing 100190, China; 3 National Engineering and Research Centre of Information Security, Beijing 100190, China
Abstract: In traditional identifier-based authorization models, different authorization paths can cause inconsistency in the propagation of permissions. In addition, unauthorized entities may acquire illegal permissions through such identifier-based authorization paths. To solve these two problems, an attribute-based authorization delegation model (ABADM) suitable for multi-domain environments is presented. In ABADM, the delegation of authority and the propagation of permissions are based on the attribute sets of entities, which ensures that entities on the same credential chain have the same permissions. The model integrates attribute-permission assignment policies inside autonomous domains with an inter-domain attribute mapping model. An algorithm for calculating the attribute sets and permissions of entities in multi-domain environments is proposed, and the usage of ABADM is illustrated through a common example. © Copyright 2011, Institute of Software, the Chinese Academy of Sciences. (21 refs.)
Uncontrolled terms: Attribute-based delegation - Authorization model - Cross-domain - Delegation of authority - Inter-domain - Mapping model - Multi domains - Trust management
A value-based review process for prioritizing artifacts
Li, Qi1; Boehm, Barry1; Yang, Ye2; Wang, Qing2
Source: Proceedings - International Conference on Software Engineering, p 13-22, 2011, ICSSP'11 - Proceedings of the 2011 International Conference on Software and Systems Process, Co-located with ICSE 2011; ISSN:02705257; ISBN-13: 9781450307307;
DOI: 10.1145/1987875.1987881; Conference: 2011 International Conference on Software and Systems Process, ICSSP 2011, Co-located with ICSE 2011, May 21, 2011 - May 22, 2011; Sponsor: International Software Process Association (ISPA);
Publisher: IEEE Computer Society
Author affiliation: 1 Center for Systems and Software Engineering, University of Southern California, 941 W.37th Place, Los Angeles, CA 90089-0781, United States; 2 Laboratory for Internet Software Technologies, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: As a new contribution to value-based V&V process development, a systematic, multi-criteria process is proposed to quantitatively determine the value-based V&V artifact priority that reviewers can follow in their reviews. This process enables reviewers to prioritize the artifacts to be reviewed in a more cost-effective way, based on more sophisticated and comprehensive factors such as importance, quality risks, dependency, and cost of V&V investments. Qualitative and quantitative evidence is provided from a comparative experiment with 22 real-client e-services projects over two years of a graduate software engineering team-project course. It shows that value-based artifact prioritization enabled reviewers to better focus on artifacts with high importance and risks, to capture high-impact issues in a timely manner, and to improve the cost-effectiveness of reviews. © 2011 ACM. (26 refs.)
Main Heading: Cost effectiveness
Controlled terms: Software engineering
Uncontrolled terms: Artifact prioritization - Comparative experiments - E-services - High impact - Multi-criteria - Process development - Quality risks - Review process - validation - Value-based - value-based software engineering
Classification Code: 723.1 Computer Programming - 912.3 Operations Research
Study on Cloud Computing security
Feng, Deng-Guo1; Zhang, Min1; Zhang, Yan1; Xu, Zhen1
Source: Ruan Jian Xue Bao/Journal of Software, v 22, n 1, p 71-83, January 2011; Language: Chinese; ISSN: 10009825; DOI: 10.3724/SP.J.1001.2011.03958;
Publisher: Chinese Academy of Sciences
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, The Chinese Academy of Sciences, Beijing 100190, China
Abstract: Cloud Computing is a fundamental change happening in the field of Information Technology, representing a movement toward intensive, large-scale specialization. It brings convenience and efficiency, but also great challenges in the field of data security and privacy protection. Currently, security is regarded as one of the greatest problems in the development of Cloud Computing. This paper describes the major requirements in Cloud Computing security, key technologies, standards, and regulations, and provides a Cloud Computing security framework. The paper argues that the changes in these aspects will result in a technical revolution in the field of information security. © Copyright 2011, Institute of Software, the Chinese Academy of Sciences. All rights reserved. (46 refs.)
Main Heading: Cloud computing
Controlled terms: Computer systems - Information technology - Security of data - Standards
Uncontrolled terms: Computing security - Data security and privacy - Fundamental changes - Information security - Security key
Classification Code: 722 Computer Systems and Equipment - 722.4 Digital Computers and Systems - 723 Computer Software, Data Handling and Applications - 723.2 Data Processing and Image Processing - 902.2 Codes and Standards - 903 Information Science
XACML policy evaluation engine based on multi-level optimization technology
Wang, Ya-Zhe1, 2; Feng, Deng-Guo1, 2; Zhang, Li-Wu1; Zhang, Min1
Source: Ruan Jian Xue Bao/Journal of Software, v 22, n 2, p 323-338, February 2011; Language: Chinese; ISSN: 10009825; DOI: 10.3724/SP.J.1001.2011.03707;
Publisher: Chinese Academy of Sciences
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, The Chinese Academy of Sciences, Beijing 100190, China; 2 National Engineering Research Center of Information Security, Beijing 100190, China
Abstract: This paper proposes an implementation of an XACML (extensible access control markup language) policy evaluation engine based on multi-level optimization technology, MLOBEE (multi-level optimization based evaluation engine). Before evaluation, the engine performs rule refinement to reduce the scale of the policies and adjust the ordering of rules. During evaluation, the engine adopts a multi-cache mechanism, comprising a result cache, an attribute cache, and a policy cache, to reduce the communication cost between the engine and other components. To reduce the amount of matching and improve matching accuracy, the policy cache uses a two-stage index technique. Finally, emulation tests validate that the overall evaluation performance of MLOBEE, using multi-level optimization technology, is better than that of most similar systems. © 2011 ISCAS. (26 refs.)
Main Heading: Access control
Controlled terms: Engines - Markup languages - Optimization - Refining - Security systems
Uncontrolled terms: Cache mechanism - Policy evaluation - Policy index - Rule refining - XACML (extensible access control markup language)
Classification Code: 612 Engines - 723 Computer Software, Data Handling and Applications - 811.1.1 Papermaking Processes - 914.1 Accidents and Accident Prevention - 921.5 Optimization Techniques
An investigation on the feasibility of cross-project defect prediction
He, Zhimin1, 2; Shu, Fengdi1; Yang, Ye1; Li, Mingshu1, 3; Wang, Qing1
Source: Automated Software Engineering, p 1-33, 2011; ISSN: 09288910, E-ISSN: 15737535; DOI: 10.1007/s10515-011-0090-3; Article in Press
Author affiliation: 1 Laboratory for Internet Software Technologies, Institute of Software Chinese Academy of Sciences, Beijing, 100190, China; 2 Graduate University Chinese Academy of Sciences, Beijing, 100190, China; 3 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, 100190, China
Abstract: Software defect prediction helps to optimize testing resource allocation by identifying defect-prone modules prior to testing. Most existing models build their prediction capability on a set of historical data, presumably from the same or similar project settings as those under prediction. However, such historical data is not always available in practice. One potential way of predicting defects in projects without historical data is to learn predictors from the data of other projects. This paper investigates defect prediction in the cross-project context, focusing on the selection of training data. We conduct three large-scale experiments on 34 data sets obtained from 10 open source projects. Major conclusions from our experiments include: (1) in the best cases, training data from other projects can provide better prediction results than training data from the same project; (2) the prediction results obtained using training data from other projects meet our criteria for acceptance on the average level: defects in 18 out of 34 cases were predicted at a recall greater than 70% and a precision greater than 50%; (3) the results of cross-project defect prediction are related to the distributional characteristics of the data sets, which are valuable for training data selection. We further propose an approach to automatically select suitable training data for projects without historical data. Prediction results provided by the training data selected using our approach are comparable with those provided by training data from the same project. © 2011 Springer Science+Business Media, LLC. (40 refs.)
Main Heading: Forecasting
Controlled terms: Data reduction - Defects - Experiments - Software testing
Uncontrolled terms: Data sets - Defect prediction - Historical data - Large scale experiments - Open source projects - Prediction capability - Resources allocation - Software defect prediction - Training data
Classification Code: 423 Non Mechanical Properties and Tests of Building Materials - 723.2 Data Processing and Image Processing - 723.5 Computer Applications - 901.3 Engineering Research - 921 Mathematics - 951 Materials Science
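Finding (3) in the record above motivates selecting training data by distributional similarity to the target project. A toy sketch of that idea follows; the characteristics chosen (per-feature mean and population standard deviation) and the Euclidean distance are our assumptions, not the paper's actual selection procedure.

```python
import statistics

def dist_chars(rows):
    """Per-feature distributional characteristics: (mean, stdev) per column."""
    return [(statistics.mean(col), statistics.pstdev(col)) for col in zip(*rows)]

def rank_training_sets(target_rows, candidate_projects):
    """Rank candidate projects' data by distance between their distributional
    characteristics and the target project's, closest first."""
    t = [x for pair in dist_chars(target_rows) for x in pair]
    def dist(rows):
        c = [x for pair in dist_chars(rows) for x in pair]
        return sum((a - b) ** 2 for a, b in zip(t, c)) ** 0.5
    return sorted(range(len(candidate_projects)),
                  key=lambda k: dist(candidate_projects[k]))
```

The returned ranking would then feed a predictor: train on the highest-ranked candidate project's data and predict defects in the target project that has no history of its own.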
Attacking Bivium and Trivium with the characteristic set method
Huang, Zhenyu1; Lin, Dongdai1
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6737 LNCS, p 77-91, 2011, Progress in Cryptology, AFRICACRYPT 2011 - 4th International Conference on Cryptology in Africa, Proceedings; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642219689;
DOI: 10.1007/978-3-642-21969-6_5; Conference: 4th International Conference on the Theory and Application of Cryptographic Techniques, AFRICACRYPT 2011, July 5, 2011 - July 7, 2011;
Publisher: Springer Verlag
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing, China
Abstract: In this paper we utilize an algebraic method called the characteristic set method to attack Bivium and Trivium in a guess-and-determine way. Our attack focuses on recovering the internal states of these two ciphers. We theoretically analyze the performance of different guessing strategies in the guess-and-determine method and present a good one. We present extensive experimental results for these two problems with different parameters. From these experimental data we obtain the following results: for Bivium, with 177 bits of keystream, the expected attack time using the characteristic set method is about 2^31.81 seconds; for Trivium, with 288 bits of keystream, the expected attack time is about 2^114.27 seconds. © 2011 Springer-Verlag. (13 refs.)
Main Heading: Algebra
Controlled terms: Cryptography
Uncontrolled terms: Algebraic attack - Bivium - Characteristic set method - Stream cipher - Trivium
Classification Code: 716 Telecommunication; Radar, Radio and Television - 717 Optical Communication - 718 Telephone Systems and Related Technologies; Line Communications - 723 Computer Software, Data Handling and Applications - 921.1 Algebra
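For perspective on the figures quoted above, which are expressed as 2^x seconds, a quick conversion to years (our own back-of-the-envelope arithmetic, not from the paper):

```python
def seconds_to_years(log2_seconds):
    """Convert an attack time of 2**log2_seconds seconds into years."""
    return 2.0 ** log2_seconds / (365.25 * 24 * 3600)  # Julian year in seconds

bivium_years = seconds_to_years(31.81)    # about 119 years
trivium_years = seconds_to_years(114.27)  # astronomically large
```

So the Bivium attack is only borderline practical (though parallelizable guessing could shrink the wall-clock time), while the Trivium figure remains far out of reach, consistent with Trivium's unbroken status.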
An inductive approach to strand spaces
Li, Yongjian1; Pang, Jun2
Source: Formal Aspects of Computing, p 1-37, 2011; ISSN: 09345043, E-ISSN: 1433299X; DOI: 10.1007/s00165-011-0187-2; Article in Press
Author affiliation: 1 The State Key Laboratory of Computer Science and The State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, P.O. Box 8717, Beijing, China; 2 Faculte des Sciences de la Technologie et de la Communication, Computer Science and Communications, Université du Luxembourg, Luxembourg
Abstract: In this paper, we develop an inductive approach to strand spaces by introducing an inductive definition for bundles. This definition provides not only a constructive characterization of bundles, but also an effective and rigorous rule-induction technique for reasoning about their properties. With this induction principle, we can prove that our bundle model is sound in the sense that a bundle is a causally well-founded graph. The approach also gives an alternative way to rigorously prove a generalized version of authentication tests. To illustrate its applicability, we have performed case studies on the verification of secrecy and authentication properties of the Needham-Schroeder-Lowe and Otway-Rees protocols. Our approach has been mechanized using Isabelle/HOL. © 2011 British Computer Society. (31 refs.)
Main Heading: Authentication
Uncontrolled terms: Authentication tests - Induction principles - Inductive definitions - Isabelle - Rule induction - Strand space
Classification Code: 723 Computer Software, Data Handling and Applications
An efficient ID-based verifiably encrypted signature scheme
Zhou, Yousheng1; Sun, Yanbin2; Qing, Sihan3, 4; Yang, Yixian2
Source: Jisuanji Yanjiu yu Fazhan/Computer Research and Development, v 48, n 8, p 1350-1356, August 2011; Language: Chinese; ISSN: 10001239;
Publisher: Science Press
Author affiliation: 1 College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China; 2 Key Laboratory of Network and Information Attack and Defence Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China; 3 National Engineering Research Center for Fundamental Software, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China; 4 School of Software and Microelectronics, Peking University, Beijing 102600, China
Abstract: Verifiably encrypted signatures are useful in handling fair exchange problems, especially online contract signing. A new ID-based verifiably encrypted signature scheme is proposed based on the Shim signature scheme. The new scheme does not use any zero-knowledge proofs to provide verifiability, thus eliminating the computational burden of complicated interaction. The creation of a verifiably encrypted signature is realized by adding a value to one parameter of the Shim signature; its verification is implemented by multiplying one pairing value with the right part of the verification equation of the Shim signature. As a result, the design of the proposed scheme is compact. The new scheme is provably secure in the random oracle model under the CDH problem assumption. The analysis shows that the presented scheme has smaller communication requirements and more optimized computational complexity than previous ID-based verifiably encrypted signature schemes. ID-based public key cryptography has become a good alternative to the certificate-based public key setting, especially when efficient key management and moderate security are required. Our new verifiably encrypted signature scheme is entirely ID-based, providing an efficient primitive for building fair exchange protocols in ID-based public key cryptosystems. (19 refs.)
Main Heading: Authentication
Controlled terms: Electronic document identification systems - Public key cryptography - Shims
Uncontrolled terms: Bilinear pairing - ID-based - Provably secure - Random Oracle model - Verifiably encrypted signatures
Classification Code: 601.2 Machine Components - 723 Computer Software, Data Handling and Applications - 723.5 Computer Applications
A gesture design toolkit for computer games
Wu, Huiyue1; Dai, Guozhong2
Source: Journal of Information and Computational Science, v 8, n 9, p 1561-1568, September 2011; ISSN: 15487741;
Publisher: Binary Information Press
Author affiliation: 1 School of Communication and Design, Sun Yat-sen University, Guangzhou 510006, China; 2 Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: Recently, vision-based hand gesture interfaces have become very common. However, developing such applications is difficult: it requires sophisticated image processing and machine learning knowledge that is out of reach for many traditional game developers, who need an auxiliary tool to design vision-based interfaces. This paper describes the motivation, design and development of a toolkit for vision-based interactive games. It has the following characteristics: a flexible interface to facilitate adding new classifiers; a dynamic configuration management mechanism for all classifiers; and a visual user interface to define high-level semantic gestures. Evaluation results show that it can provide a unified platform and a general solution for vision-based computer games. Copyright © 2011 Binary Information Press. (22 refs.)
Main Heading: Computer vision
Controlled terms: Computer software - Design - Human computer interaction - Navigation - Semantics - User interfaces - Wearable computers
Uncontrolled terms: Computer game - Design and Development - Dynamic configuration - Evaluation results - Flexible interfaces - General solutions - Gesture - Hand gesture - High level semantics - Interactive games - Toolkit - Vision based - Vision based interface - Vision based interfaces - Visual user interfaces
Classification Code: 903.2 Information Dissemination - 723.5 Computer Applications - 723 Computer Software, Data Handling and Applications - 722.4 Digital Computers and Systems - 722.2 Computer Peripheral Equipment - 716.3 Radio Systems and Equipment - 408 Structural Design
Rapid 3D conceptual design based on hand gesture
Zhong, Kang1, 2, 3; Kang, Jinsheng3; Qin, Shengfeng3; Wang, Hongan1
Source: 2011 3rd International Conference on Advanced Computer Control, ICACC 2011, p 192-197, 2011, 2011 3rd International Conference on Advanced Computer Control, ICACC 2011; ISBN-13: 9781424488087; DOI: 10.1109/ICACC.2011.6016395; Article number: 6016395; Conference: 3rd IEEE International Conference on Advanced Computer Control, ICACC 2011, January 18, 2011 - January 20, 2011;
Publisher: IEEE Computer Society
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, Beijing, China; 2 Graduate University, Chinese Academy of Sciences, Beijing, China; 3 School of Engineering and Design, Brunel University, Uxbridge, United Kingdom
Abstract: The interaction method of current CAD systems has become one of the impediments to rapid 3D conceptual design, because (1) the keyboard/mouse-based interface limits the user to creating 3D objects on a 2D plane, and (2) current CAD systems are too complex for conceptual design and require users to be professionally trained. In this paper, we present an innovative rapid 3D conceptual design method based on hand gestures and a motion capture system. A unique set of hand gestures for rapid 3D conceptual design based on CSG (Constructive Solid Geometry) is proposed, with the requirements of ease of use and suitability for real-time continuous recognition. Real-time hand gesture recognition combines static gesture (posture) recognition, accomplished by skeleton-model-based template matching, with dynamic gesture recognition based on Hidden Markov Models (HMMs). The recognized hand gestures are transformed into the OpenSCAD scripting language, and the designed 3D geometry is then generated and displayed on the screen. An opinion test was conducted to evaluate the system. © 2011 IEEE. (16 refs.)
Main Heading: Gesture recognition
Controlled terms: Computer aided design - Computer control - Conceptual design - Hidden Markov models - Human computer interaction - Template matching - Three dimensional
Uncontrolled terms: 3D geometry - 3D object - CAD system - Constructive solid geometry - Ease of use - Hand gesture - Hand-gesture recognition - Interaction methods - Model-based OPC - Motion capture - Motion capture system - Real time - Scripting languages
Classification Code: 408 Structural Design - 461.4 Ergonomics and Human Factors Engineering - 723 Computer Software, Data Handling and Applications - 723.5 Computer Applications - 922 Statistical Methods
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
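The dynamic-gesture stage above relies on Hidden Markov Models. As a rough, self-contained illustration of the decoding step (the states, observations, and probabilities below are invented toy values, not the paper's trained gesture models), a minimal Viterbi decoder over discrete observations:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state sequence for `obs` (max-product DP)."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][r] * trans_p[r][s] * emit_p[s][obs[t]], r)
                for r in states)
            V[t][s] = prob
            back[t][s] = prev
    best_prob, last = max((V[-1][s], s) for s in states)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return best_prob, path[::-1]

# Toy two-state model: the hand is either "moving" or "still"; the
# observations are coarse motion-capture speed readings.
states = ("moving", "still")
start = {"moving": 0.5, "still": 0.5}
trans = {"moving": {"moving": 0.8, "still": 0.2},
         "still":  {"moving": 0.3, "still": 0.7}}
emit = {"moving": {"fast": 0.9, "slow": 0.1},
        "still":  {"fast": 0.2, "slow": 0.8}}

prob, path = viterbi(["fast", "fast", "slow"], states, start, trans, emit)
```

In a real recognizer one HMM would be trained per gesture and the model with the highest sequence likelihood wins; Viterbi here only recovers the state path under a single model.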
Differential fault analysis on SMS4 using a single fault
Li, Ruilin1; Sun, Bing1; Li, Chao1, 2; You, Jianxiong1
Source: Information Processing Letters, v 111, n 4, p 156-163, January 15, 2011; ISSN: 00200190; DOI: 10.1016/j.ipl.2010.11.011;
Publisher: Elsevier
Author affiliation: 1 Department of Mathematics and System Science, Science College, National University of Defense Technology, Changsha 410073, China2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: Differential Fault Analysis (DFA) is a powerful cryptanalytic technique that can be used to retrieve the secret key by exploiting computational errors in the encryption (decryption) procedure. In this paper, we propose a new DFA attack on SMS4 using a single fault. We show that if a random byte fault is induced into either the second, third, or fourth word register at the input of the 28-th round, the 128-bit key can be recovered with an exhaustive search of 22.11 bits on average. The proposed attack makes use of the characteristics of the cipher's structure and its round function. Furthermore, it can be tailored to any block cipher employing a structure and an SPN-style round function similar to those of SMS4. © 2010 Elsevier B.V. All rights reserved. (22 refs.)
Main Heading: Cryptography
Uncontrolled terms: Block ciphers - Cryptography - Differential fault analysis - Fault attack - SMS4
Classification Code: 716 Telecommunication; Radar, Radio and Television - 717 Optical Communication - 718 Telephone Systems and Related Technologies; Line Communications - 723 Computer Software, Data Handling and Applications
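The working principle behind such fault attacks can be sketched on a toy cipher step c = S[x] ^ k with an unknown intermediate x: a fault perturbs x, and comparing correct and faulty outputs filters key guesses. This is a generic illustration under a single-bit fault model, not the SMS4 attack itself; the S-box, key, and fault counts below are all invented:

```python
import random

random.seed(1)

# Toy differential fault analysis on one step c = S[x] ^ k.
SBOX = list(range(256))
random.shuffle(SBOX)              # a random bijective 8-bit S-box
SINV = [0] * 256
for i, v in enumerate(SBOX):
    SINV[v] = i

SINGLE_BIT = {1 << b for b in range(8)}

def faulty_pair(x, k):
    """Correct and faulty outputs for a random single-bit fault on x."""
    e = 1 << random.randrange(8)
    return SBOX[x] ^ k, SBOX[x ^ e] ^ k

true_k = 0x3C
candidates = set(range(256))
for _ in range(16):               # 16 faulty encryptions
    x = random.randrange(256)     # intermediate unknown to the attacker
    c, c_f = faulty_pair(x, true_k)
    # A guess kg survives only if the fault it implies is single-bit.
    candidates = {kg for kg in candidates
                  if SINV[c ^ kg] ^ SINV[c_f ^ kg] in SINGLE_BIT}
```

With a handful of faulty pairs the candidate set collapses to the true key; the real SMS4 attack applies the same filtering idea across the cipher's round function, leaving only a small residual exhaustive search.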
BCBC: A more efficient MAC algorithm
Liang, Bo1, 2; Wu, Wenling1, 2; Zhang, Liting1, 2
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6672 LNCS, p 233-246, 2011, Information Security Practice and Experience - 7th International Conference, ISPEC 2011, Proceedings; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642210303;
DOI: 10.1007/978-3-642-21031-0_18; Conference: 7th International Conference on Information Security Practice and Experience, ISPEC 2011, May 30, 2011 - June 1, 2011;
Publisher: Springer Verlag
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 Graduate University, Chinese Academy of Sciences, Beijing 100190, China
Abstract: In this paper, we construct a new MAC algorithm, BCBC-MAC, from a block cipher and a fixed function. BCBC-MAC is provably secure under the assumptions that the block cipher is pseudo-random and the fixed function is differentially uniform and preimage-sparse. Such fixed functions are easy to construct, and we give several examples, some of which run very fast. Therefore, BCBC-MAC can be faster than modern block-cipher-based MACs such as CBC-MAC, OMAC, etc. Furthermore, our scheme can be partly parallelized. Thus, BCBC-MAC offers high efficiency. © 2011 Springer-Verlag Berlin Heidelberg. (28 refs.)
Main Heading: Security of data
Controlled terms: Algorithms - Public key cryptography - Security systems
Uncontrolled terms: Block ciphers - CBC MAC - High efficiency - MAC algorithms - message authentication - provable security - Provably secure - Pseudo random
Classification Code: 921 Mathematics - 914.1 Accidents and Accident Prevention - 723.2 Data Processing and Image Processing - 723 Computer Software, Data Handling and Applications - 718 Telephone Systems and Related Technologies; Line Communications - 717 Optical Communication - 716 Telecommunication; Radar, Radio and Television
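For context, the classic CBC-MAC that BCBC-MAC is compared against chains a block cipher through the message blocks. A minimal sketch, with the block cipher replaced by a hash-based stand-in (`prf` is a hypothetical toy keyed function, not a real cipher):

```python
import hashlib

BLOCK = 16  # bytes

def prf(key: bytes, x: bytes) -> bytes:
    """Toy stand-in for block-cipher encryption under `key`.
    A real CBC-MAC would use e.g. AES; this keeps the sketch stdlib-only."""
    return hashlib.sha256(key + x).digest()[:BLOCK]

def cbc_mac(key: bytes, msg: bytes) -> bytes:
    """Classic CBC-MAC over full blocks. Note: plain CBC-MAC is only
    secure for fixed-length messages."""
    assert len(msg) % BLOCK == 0
    tag = bytes(BLOCK)
    for i in range(0, len(msg), BLOCK):
        block = msg[i:i + BLOCK]
        tag = prf(key, bytes(a ^ b for a, b in zip(tag, block)))
    return tag

key = b"k" * 16
m = b"A" * 32
tag = cbc_mac(key, m)
```

The chaining forces one cipher call per block; the paper's point is that replacing some of those calls with a cheap fixed function, under the stated assumptions, preserves security while cutting cost.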
Probabilistic model checking on propositional projection temporal logic
Yang, Xiaoxiao1
Source: IMECS 2011 - International MultiConference of Engineers and Computer Scientists 2011, v 1, p 242-248, 2011, IMECS 2011 - International MultiConference of Engineers and Computer Scientists 2011; ISBN-13: 9789881821034; Conference: International MultiConference of Engineers and Computer Scientists 2011, IMECS 2011, March 16, 2011 - March 18, 2011; Sponsor: IAENG Society of Artificial Intelligence; IAENG Society of Bioinformatics; IAENG Society of Computer Science; IAENG Society of Data Mining; IAENG Society of Electrical Engineering;
Publisher: Newswood Ltd.
Author affiliation: 1 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: Propositional Projection Temporal Logic (PPTL) is a useful formalism for reasoning about periods of time in hardware and software systems and can handle both sequential and parallel compositions. In this paper, based on discrete-time Markov chains, we investigate the probabilistic model checking approach for PPTL towards verifying arbitrary linear-time properties. We first define a normal form graph, denoted by NFGinf, to capture the infinite paths of PPTL formulas. Then we present an algorithm to generate the NFGinf. Since discrete-time Markov chains are deterministic probabilistic models, we further give an algorithm to determinize and minimize the nondeterministic NFGinf following Safra's construction. (15 refs.)
Main Heading: Model checking
Controlled terms: Algorithms - Computer hardware - Computer science - Engineers - Markov processes - Probabilistic logics - Temporal logic
Uncontrolled terms: Discrete time Markov chains - Hardware and software - Infinite path - Linear-time properties - Normal form - Parallel composition - Probabilistic model checking - Probabilistic models - Projection temporal logic
Classification Code: 721.1 Computer Theory, Includes Formal Logic, Automata Theory, Switching Theory, Programming Theory - 722 Computer Systems and Equipment - 723 Computer Software, Data Handling and Applications - 912.4 Personnel - 921 Mathematics - 922.1 Probability Theory
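Once the property automaton is built, probabilistic model checking of a reachability property on a discrete-time Markov chain reduces to solving a linear system over the transient states. A minimal exact sketch on a toy random walk (this illustrates only the final numerical step, not the paper's PPTL/NFGinf machinery):

```python
from fractions import Fraction

def solve(A, b):
    """Gaussian elimination over exact rationals (A square, solvable)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * c for a, c in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

h = Fraction(1, 2)
# Symmetric random walk on states 0..3, absorbing at 0 (fail) and
# 3 (goal). Unknowns p1, p2 = P(eventually reach 3) from states 1, 2:
#   p1 = h*p0 + h*p2 with p0 = 0   ->   p1 - h*p2 = 0
#   p2 = h*p1 + h*p3 with p3 = 1   ->  -h*p1 + p2 = h
A = [[Fraction(1), -h], [-h, Fraction(1)]]
b = [Fraction(0), h]
p1, p2 = solve(A, b)
```

The exact answers p1 = 1/3 and p2 = 2/3 match the classical gambler's-ruin formula; production model checkers solve the same kind of system iteratively at scale.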
Example-based microfacet synthesis for appearance modeling of thin transparent materials
Dai, Qiang1, 3; Wu, Enhua1, 2
Source: Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics, v 23, n 7, p 1099-1105, July 2011; Language: Chinese; ISSN: 10039775;
Publisher: Institute of Computing Technology
Author affiliation: 1 State Key Laboratory of Computer Science, Institute of Software, Chinese Acad. of Sci., Beijing 100190, China2 Department of Computer and Information Science, Faculty of Science and Technology, University of Macau, Macao, China3 Graduate University of Chinese Acad. of Sci., Beijing 100049, China
Abstract: Refractions through thin transparent materials are common in everyday life. With existing modeling methods, massive data must be captured in order to obtain high-quality rendering results. We present a new technique that adds additional processing to the captured data and reduces the capture effort. We place a camera at a fixed position and apply microfacet synthesis to the captured dataset. For the partial normal distribution function (NDF) data at each surface point, we complete the NDF using partial NDFs with similar shapes but different orientations from other surface points. Once we have the complete NDF data, we compute the bidirectional transmission distribution function. Experimental results show that the method permits both high-quality rendering results and efficient data capture. (15 refs.)
Main Heading: Normal distribution
Controlled terms: Distribution functions
Uncontrolled terms: Appearance modeling - Bi-directional transmission distribution functions - Data sets - High quality - Massive data - Microfacet synthesis - Microfacets - Modeling method - Surface points - Transparent material
Classification Code: 922.1 Probability Theory
No-go theorem for one-way quantum computing on naturally occurring two-level systems
Chen, Jianxin1; Chen, Xie2; Duan, Runyao1, 3; Ji, Zhengfeng4, 5; Zeng, Bei6
Source: Physical Review A - Atomic, Molecular, and Optical Physics, v 83, n 5, May 9, 2011; ISSN: 10502947, E-ISSN: 10941622; DOI: 10.1103/PhysRevA.83.050301; Article number: 050301;
Publisher: American Physical Society
Author affiliation: 1 Department of Computer Science and Technology, Tsinghua National Laboratory for Information Science and Technology, Tsinghua University, Beijing, China2 Department of Physics, Massachusetts Institute of Technology, Cambridge, MA, United States3 Centre for Quantum Computation and Intelligent Systems (QCIS), Faculty of Engineering and Information Technology, University of Technology, Sydney, NSW, Australia4 Perimeter Institute for Theoretical Physics, Waterloo, ON, Canada5 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China6 Institute for Quantum Computing, Department of Combinatorics and Optimization, University of Waterloo, Waterloo, ON, Canada
Assisting the design of XML schema: Diagnosing nondeterministic content models
Chen, Haiming1; Lu, Ping1
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6612 LNCS, p 301-312, 2011, Web Technologies and Applications - 13th Asia-Pacific Web Conference, APWeb 2011, Proceedings; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642202902;
DOI: 10.1007/978-3-642-20291-9_31; Conference: 13th Asia-Pacific Conference on Web Technology, APWeb 2011, April 18, 2011 - April 20, 2011;
Publisher: Springer Verlag
Author affiliation: 1 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: One difficulty in the design of XML Schema is the restriction that content models should be deterministic, i.e., the unique particle attribution (UPA) constraint, which means that content models are deterministic regular expressions. This determinism is defined semantically, with no known syntactic characterization, which makes such content models difficult for users to design. At present, however, no work provides diagnostic information when content models are nondeterministic, although this would greatly help designers understand and modify nondeterministic ones. In this paper we investigate algorithms that check whether a regular expression is deterministic and provide diagnostic information if it is not. With the information provided by the algorithms, designers will be clearer about why an expression is not deterministic, which helps reduce the difficulty of designing XML Schema. © 2011 Springer-Verlag Berlin Heidelberg. (8 refs.)
Main Heading: Design
Controlled terms: Algorithms - XML
Uncontrolled terms: Content model - diagnostic information - Regular expressions - XML Schema - XML schemas
Classification Code: 408 Structural Design - 723 Computer Software, Data Handling and Applications
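The determinism (one-unambiguity) property behind UPA can be tested via Glushkov positions: an expression is deterministic iff no two distinct positions carrying the same symbol compete in the `first` set or in any `follow` set. A minimal sketch of that classical criterion (the paper's own algorithms additionally report *why* an expression fails):

```python
# Tiny regex AST: symbols, concatenation, alternation, Kleene star.
class Sym:
    def __init__(self, s): self.s = s
class Cat:
    def __init__(self, l, r): self.l, self.r = l, r
class Alt:
    def __init__(self, l, r): self.l, self.r = l, r
class Star:
    def __init__(self, e): self.e = e

def glushkov(node, syms, follow):
    """Return (nullable, first, last); fills position tables in place."""
    if isinstance(node, Sym):
        p = len(syms); syms.append(node.s); follow[p] = set()
        return False, {p}, {p}
    if isinstance(node, Cat):
        n1, f1, l1 = glushkov(node.l, syms, follow)
        n2, f2, l2 = glushkov(node.r, syms, follow)
        for p in l1:
            follow[p] |= f2
        return (n1 and n2, f1 | (f2 if n1 else set()),
                l2 | (l1 if n2 else set()))
    if isinstance(node, Alt):
        n1, f1, l1 = glushkov(node.l, syms, follow)
        n2, f2, l2 = glushkov(node.r, syms, follow)
        return n1 or n2, f1 | f2, l1 | l2
    n, f, l = glushkov(node.e, syms, follow)   # Star
    for p in l:
        follow[p] |= f
    return True, f, l

def competing(posset, syms):
    """True if two distinct positions in the set share a symbol."""
    seen = set()
    for p in posset:
        if syms[p] in seen:
            return True
        seen.add(syms[p])
    return False

def is_deterministic(expr):
    syms, follow = [], {}
    _, first, _ = glushkov(expr, syms, follow)
    return not (competing(first, syms)
                or any(competing(fs, syms) for fs in follow.values()))

# (a|b)*a violates UPA; its equivalent b*a(b*a)* satisfies it.
nondet = Cat(Star(Alt(Sym('a'), Sym('b'))), Sym('a'))
det = Cat(Cat(Star(Sym('b')), Sym('a')),
          Star(Cat(Star(Sym('b')), Sym('a'))))
```

A diagnostic tool in the paper's spirit would report the competing pair of positions (here, the two `a` occurrences of `(a|b)*a` both reachable at the start) rather than a bare yes/no.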
Asymptotic granularity reduction and its application
Su, Shenghui1; Lü, Shuwang2; Fan, Xiubin3
Source: Theoretical Computer Science, v 412, n 39, p 5374-5386, September 9, 2011; ISSN: 03043975; DOI: 10.1016/j.tcs.2011.06.008;
Publisher: Elsevier
Author affiliation: 1 College of Computer, Beijing University of Technology, Beijing 100124, China2 Graduate School, Chinese Academy of Sciences, Beijing 100039, China3 Institute of Software, Chinese Academy of Sciences, Beijing 100080, China
Abstract: It is well known that the inverse function of y = x (with derivative y′ = 1) is x = y, that the constant function y = c (with derivative y′ = 0) has no inverse, and so on. Hence, on the assumption that the noninvertibility of a univariate increasing function y = f(x) with x > 0 is in direct proportion to the growth rate reflected by its derivative, the authors put forward a method of comparing the difficulty of inverting two functions on a continuous or discrete interval, called asymptotic granularity reduction (AGR), which integrates asymptotic analysis with logarithmic granularities, and is an extension of and a complement to polynomial time (Turing) reduction (PTR). They prove by AGR that inverting y ≡ x^x (mod p) is computationally harder than inverting y ≡ g^x (mod p), and that inverting y ≡ g^(x^n) (mod p) is computationally equivalent to inverting y ≡ g^x (mod p), which is compatible with the results from PTR. Besides, they apply AGR to comparing the difficulty of inverting y ≡ x^n (mod p) with y ≡ g^x (mod p), y ≡ g^(g1^x) (mod p) with y ≡ g^x (mod p), and y ≡ x^n + x + 1 (mod p) with y ≡ x^n (mod p), and observe that the results are consistent with existing facts, which further illustrates that AGR is suitable for comparing the difficulty of inversion problems. Last, they prove by AGR that inverting y ≡ x^n·g^x (mod p) is computationally equivalent to inverting y ≡ g^x (mod p), where PTR cannot be utilized expediently. AGR together with the assumption partitions the complexities of problems in finer detail, and finds new evidence for the security of cryptosystems. © 2011 Elsevier B.V. All rights reserved.
(15 refs.)
Main Heading: Polynomial approximation
Controlled terms: Algebra - Asymptotic analysis - Public key cryptography
Uncontrolled terms: Granularity reduction - Polynomial time reduction - Provable security - Public key cryptosystems - Transcendental logarithm problem
Classification Code: 716 Telecommunication; Radar, Radio and Television - 717 Optical Communication - 718 Telephone Systems and Related Technologies; Line Communications - 723 Computer Software, Data Handling and Applications - 921 Mathematics
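At toy sizes, both function families AGR compares can be inverted by brute force; AGR's contribution is comparing how the search cost grows asymptotically. A small sketch with hypothetical parameters (p, g, and x below are invented for illustration):

```python
def invert_pow_g(y, g, p):
    """Smallest x >= 1 with g^x = y (mod p): a brute-force discrete log."""
    for x in range(1, p):
        if pow(g, x, p) == y:
            return x

def invert_x_to_x(y, p):
    """Smallest x >= 1 with x^x = y (mod p), the self-power form."""
    for x in range(1, p):
        if pow(x, x, p) == y:
            return x

p, g = 101, 2          # toy parameters; 2 generates the group mod 101
x = 57
assert invert_pow_g(pow(g, x, p), g, p) == x

y = pow(60, 60, p)
r = invert_x_to_x(y, p)
assert pow(r, r, p) == y   # a valid preimage (not necessarily 60 itself)
```

Both searches are linear in p here; AGR's claim concerns how the effective search granularity for y ≡ x^x (mod p) grows relative to the ordinary discrete logarithm as the parameters scale.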
Interactive and intelligent storytelling system for children
Wang, Danli1; Zhan, Zhizheng1; Dai, Guozhong1
Source: Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics, v 23, n 7, p 1186-1193, July 2011; Language: Chinese; ISSN: 10039775;
Publisher: Institute of Computing Technology
Author affiliation: 1 Human-Computer Interaction and Intelligent Information Processing Laboratory, Institute of Software, Chinese Acad. of Sci., Beijing 100190, China
Abstract: Storytelling is an effective means of education. However, existing children's storytelling software and story authoring systems have limitations, such as complicated interaction, difficulty for children to innovate, and failure to meet the demands of preschool children. In this paper, an interactive and intelligent storytelling model is proposed, and an interactive and intelligent storytelling system for children is designed and implemented. A multimodal interaction technology with pen and speech is adopted in the system. Based on the current interaction context and knowledge bases, the system can assist children in authoring stories appropriately and intelligently during the storytelling process. A detailed case study of the system is given. Finally, a user evaluation is performed to assess the system's usability and learnability. (24 refs.)
Main Heading: Education
Controlled terms: User interfaces
Uncontrolled terms: Authoring systems - Children - Evaluation - Interaction technology - Knowledge basis - Learnability - Multi-Modal Interactions - On currents - Storytelling system - User evaluations
Classification Code: 722.2 Computer Peripheral Equipment - 901.2 Education
A multipath planner for UAV based on Pareto optimization
Zhu, Hongguo1, 2; Hai, Xin1; Zheng, Changwen2
Source: Applied Mechanics and Materials, v 58-60, p 2356-2359, 2011, Information Technology for Manufacturing Systems II; ISSN: 16609336; ISBN-13: 9783037851494;
DOI: 10.4028/www.scientific.net/AMM.58-60.2356; Conference: 2011 International Conference on Information Technology for Manufacturing Systems, ITMS 2011, May 7, 2011 - May 8, 2011; Sponsor: University of Adelaide; Huazhong University of Science and Technology;
Publisher: Trans Tech Publications
Author affiliation: 1 National University of Defense Technology, Changsha, Hunan, 410073, China2 Institute of Software, Chinese Academy of Science, Beijing, 100090, China
Abstract: A multipath planner for UAVs based on Pareto optimization is proposed to overcome the disadvantages of existing planners. Multipath planning is modeled as a constrained multiobjective optimization problem. A Pareto set of multiple paths for the UAV is generated by optimizing several objective functions at the same time. The simulation results demonstrate the feasibility of the approach. © (2011) Trans Tech Publications, Switzerland. (6 refs.)
Main Heading: Constrained optimization
Controlled terms: Information technology - Manufacture - Pareto principle
Uncontrolled terms: Multi-object optimization - Multi-path - Object functions - Pareto optimization - Pareto solution - Path-planning - Simulation result
Classification Code: 537.1 Heat Treatment Processes - 903 Information Science - 911 Cost and Value Engineering; Industrial Economics - 912 Industrial Engineering and Management - 961 Systems Science
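The Pareto step can be sketched as non-dominated filtering over candidate paths' cost vectors. The candidate names and cost values below are invented; a real planner would generate them from terrain and threat maps:

```python
def dominates(a, b):
    """a dominates b: no worse in every objective and not identical."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def pareto_front(paths):
    """Keep every path whose cost vector no other path dominates."""
    return [p for p in paths
            if not any(dominates(q["cost"], p["cost"]) for q in paths)]

# cost = (path length, threat exposure); both minimized
candidates = [
    {"name": "direct", "cost": (10.0, 9.0)},   # short but exposed
    {"name": "valley", "cost": (14.0, 3.0)},   # longer, safer
    {"name": "ridge",  "cost": (12.0, 5.0)},   # trade-off
    {"name": "detour", "cost": (16.0, 6.0)},   # dominated by valley and ridge
]
front = pareto_front(candidates)
```

Returning the whole front instead of a single weighted-sum optimum is what gives the planner multiple alternative paths to hand to the mission layer.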
Model checking: A coalgebraic approach
Gao, Jianhua1, 2; Jiang, Ying1
Source: Proceedings - 5th International Conference on Theoretical Aspects of Software Engineering, TASE 2011, p 235-238, 2011, Proceedings - 5th International Conference on Theoretical Aspects of Software Engineering, TASE 2011; ISBN-13: 9780769545066; DOI: 10.1109/TASE.2011.42; Article number: 6042086; Conference: 5th International Conference on Theoretical Aspects of Software Engineering, TASE 2011, August 29, 2011 - August 31, 2011; Sponsor: IEEE CS; IFIP;
Publisher: IEEE Computer Society
Author affiliation: 1 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 Graduate University, Chinese Academy of Sciences, China
Abstract: The state explosion problem is the main obstacle to model checking. In this work, we address this problem from a coalgebraic point of view. We establish an effective method to prove uniformly the existence of the smallest Kripke structure with respect to bisimilarity, which describes all behaviors of the Kripke structures with no redundancy. We show that this smallest Kripke structure yields a minimal one for each given finite Kripke structure and for some kinds of infinite ones. This method is based on the existence of the final coalgebra of a suitable endofunctor and can be generalized smoothly to other coalgebraic structures. A naive implementation of this method is developed in OCaml. © 2011 IEEE. (8 refs.)
Main Heading: Model checking
Controlled terms: Algebra - Software engineering
Uncontrolled terms: Algebraic structures - Bisimilarity - Coalgebraic - Coalgebras - Endofunctors - Kripke structure - State explosion problems
Classification Code: 723.1 Computer Programming - 921.1 Algebra
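For a finite Kripke structure, the minimal structure up to bisimilarity can be computed by ordinary partition refinement; the paper obtains the same object coalgebraically, via a final coalgebra. A minimal sketch on an invented toy structure:

```python
def bisim_classes(states, labels, succ):
    """Partition `states` into bisimilarity classes by refinement:
    start from the labelling, split blocks until every state in a block
    reaches the same set of blocks."""
    block = {s: frozenset(labels[s]) for s in states}
    while True:
        new = {s: (block[s], frozenset(block[t] for t in succ[s]))
               for s in states}
        if len(set(new.values())) == len(set(block.values())):
            break                      # partition is stable
        block = new
    classes = {}
    for s in states:
        classes.setdefault(block[s], set()).add(s)
    return sorted(classes.values(), key=sorted)

# s0 and s1 are bisimilar (same label, both step to s2);
# s3 shares their label but loops on itself, so refinement splits it off.
states = ["s0", "s1", "s2", "s3"]
labels = {"s0": {"p"}, "s1": {"p"}, "s2": {"q"}, "s3": {"p"}}
succ = {"s0": {"s2"}, "s1": {"s2"}, "s2": {"s2"}, "s3": {"s3"}}
classes = bisim_classes(states, labels, succ)
```

Quotienting by these classes yields the redundancy-free structure the abstract describes, here with three states instead of four.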
Computing semi-algebraic invariants for polynomial dynamical systems
Liu, Jiang1; Zhan, Naijun1; Zhao, Hengjun1, 2
Source: Embedded Systems Week 2011, ESWEEK 2011 - Proceedings of the 9th ACM International Conference on Embedded Software, EMSOFT'11, p 97-106, 2011, Embedded Systems Week 2011, ESWEEK 2011 - Proceedings of the 9th ACM International Conference on Embedded Software, EMSOFT'11; ISBN-13: 9781450307147; DOI: 10.1145/2038642.2038659; Conference: Embedded Systems Week 2011, ESWEEK 2011 - 9th ACM International Conference on Embedded Software, EMSOFT'11, October 9, 2011 - October 14, 2011; Sponsor: IEEE Council on Electronic Design Automation (CEDA); IEEE Circuits and Systems Society; IEEE Computer Society; ACM SIGMICRO; Special Interest Group on Embedded Systems (ACM SIGBED); Special Interest Group on Design Automation (ACM SIGDA);
Publisher: Association for Computing Machinery
Author affiliation: 1 State Key Lab. of Comp. Sci., Institute of Software, Chinese Academy of Sciences, China2 Zhong Guan Cun, No. 4 South Fourth Street, Beijing, 100190 P.R., China
Abstract: In this paper, we consider an extended concept of invariant for polynomial dynamical systems (PDSs) with domain and initial condition, and establish a sound and complete criterion for checking semi-algebraic invariants (SAIs) for such PDSs. The main idea is to encode relevant dynamical properties as conditions on the high-order Lie derivatives of the polynomials occurring in the SAI. A direct consequence of this criterion is a relatively complete method of SAI generation based on template assumption and semi-algebraic constraint solving. Relative completeness means that if there is an SAI in the form of a predefined template, then our method can indeed find one. Copyright © 2011 ACM. (34 refs.)
Main Heading: Dynamical systems
Controlled terms: Embedded software - Embedded systems - Polynomials
Uncontrolled terms: Constraint Solving - Dynamical properties - High order - Initial conditions - Invariant - Lie derivative - Polynomial dynamical system - Polynomial dynamical systems - Semi-algebraic set
Classification Code: 723 Computer Software, Data Handling and Applications - 921 Mathematics - 921.1 Algebra - 931 Classical Physics; Quantum Theory; Relativity
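The criterion inspects Lie derivatives of the SAI's polynomials along the vector field. A minimal sketch with sparse bivariate polynomials, checking that p = x^2 + y^2 - 1 has identically zero Lie derivative under the rotation system x' = -y, y' = x, so the unit circle p = 0 is invariant (toy system chosen for illustration):

```python
from collections import defaultdict

# A polynomial in x, y is a dict {(i, j): coeff} meaning
# sum of coeff * x^i * y^j; zero-coefficient terms are dropped.
def p_add(p, q):
    r = defaultdict(int)
    for m, c in list(p.items()) + list(q.items()):
        r[m] += c
    return {m: c for m, c in r.items() if c != 0}

def p_mul(p, q):
    r = defaultdict(int)
    for (i, j), c in p.items():
        for (k, l), d in q.items():
            r[(i + k, j + l)] += c * d
    return {m: c for m, c in r.items() if c != 0}

def p_dx(p):
    return {(i - 1, j): c * i for (i, j), c in p.items() if i > 0}

def p_dy(p):
    return {(i, j - 1): c * j for (i, j), c in p.items() if j > 0}

def lie(p, fx, fy):
    """Lie derivative of p along the field (x', y') = (fx, fy):
    (dp/dx)*fx + (dp/dy)*fy."""
    return p_add(p_mul(p_dx(p), fx), p_mul(p_dy(p), fy))

fx = {(0, 1): -1}                        # x' = -y
fy = {(1, 0): 1}                         # y' =  x
p = {(2, 0): 1, (0, 2): 1, (0, 0): -1}   # x^2 + y^2 - 1
```

Here `lie(p, fx, fy)` is the empty (zero) polynomial, so every Lie derivative of p vanishes; the paper's criterion handles the general case where higher-order derivatives must be consulted and sign conditions on the domain enter.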
Collision attack for the hash function extended MD4
Wang, Gaoli1, 2
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 7043 LNCS, p 228-241, 2011, Information and Communications Security - 13th International Conference, ICICS 2011, Proceedings; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642252426;
DOI: 10.1007/978-3-642-25243-3_19; Conference: 13th International Conference on Information and Communications Security, ICICS 2011, November 23, 2011 - November 26, 2011; Sponsor: National Natural Science Foundation of China (NNSFC); The Microsoft Corporation; Beijing Tip Technology Corporation; Trusted Computing Group (TCG);
Publisher: Springer Verlag
Author affiliation: 1 School of Computer Science and Technology, Donghua University, Shanghai 201620, China2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100049, China
Abstract: Extended MD4 is a hash function proposed by Rivest in 1990 with a 256-bit hash value. The compression function consists of two different and independent parallel lines, called Left Line and Right Line, and each line has 48 steps. The initial values of Left Line and Right Line are denoted by IV0 and IV1 respectively. Dobbertin proposed a collision attack on the compression function of Extended MD4 with a complexity of about 2^40 under the condition that the value of IV0 = IV1 is prescribed. In this paper, we give a collision attack on the full Extended MD4 with a complexity of about 2^37. Firstly, we propose a collision differential path for both lines by choosing a proper message difference, and deduce a set of sufficient conditions that ensure the differential path holds. Then, by using some precise message modification techniques to improve the success probability of the attack, we find two-block collisions of Extended MD4 with less than 2^37 computations. This work provides a new reference for the collision analysis of other hash functions, such as RIPEMD-160, that consist of two lines. © 2011 Springer-Verlag. (33 refs.)
Main Heading: Security of data
Controlled terms: Hash functions
Uncontrolled terms: Collision - Collision analysis - Collision attack - Compression functions - Cryptanalysis - Differential path - Extended MD4 - Hash value - Initial values - Parallel line - Sufficient conditions - Two-line
Classification Code: 723 Computer Software, Data Handling and Applications - 723.2 Data Processing and Image Processing - 921 Mathematics
An anti-obfuscation malware variants identification system
Wang, Rui1, 2; Su, Pu-Rui2; Yang, Yi2, 3; Feng, Deng-Guo1, 2
Source: Tien Tzu Hsueh Pao/Acta Electronica Sinica, v 39, n 10, p 2322-2330, October 2011; Language: Chinese; ISSN: 03722112;
Publisher: Chinese Institute of Electronics
Author affiliation: 1 State Key Laboratory of Information Security, Graduate University of Chinese Academy of Sciences, Beijing 100049, China2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China3 National Engineering Research Center for Information Security, Beijing 100190, China
Abstract: Malware variants are one of the major challenges in malware detection today. Obfuscation, the most popular technique for generating these variants, can change the signatures of malware to evade current signature-based malware prevention methods, which poses a big threat to information systems. This paper proposes a novel anti-obfuscation malware detection method. By making use of dynamic taint analysis and a trigger-based behavior processing engine, the method abstracts the essential behavior logic of malware at a fine granularity, forms it into signatures for a class of malware, and identifies variants more precisely with the help of a signature merging and optimizing process and fuzzy matching methods. Experimental results show that the detection method presented in this paper can identify malware and its variants efficiently. (17 refs.)
Main Heading: Computer crime
Controlled terms: Dynamic analysis - Network security
Uncontrolled terms: Analysis method - Behavior analysis - Detecting methods - Fuzzy matching - Malware variants - Malwares - Obfuscation - Preventing methods - Processing engine
Classification Code: 422.2 Strength of Building Materials : Test Methods - 723 Computer Software, Data Handling and Applications
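The dynamic-taint idea can be sketched as propagation over an instruction trace: a value derived from a tainted input stays tainted through data transformations, which is why obfuscating rewrites do not hide the behavior logic. The instruction format and names below are invented for illustration; the paper's engine works on real binaries:

```python
def propagate_taint(trace, tainted_inputs):
    """Track which variables carry data derived from tainted inputs.
    Each instruction is (op, dst, srcs)."""
    tainted = set(tainted_inputs)
    for op, dst, srcs in trace:
        if op == "const":                 # constant load clears taint
            tainted.discard(dst)
        elif any(s in tainted for s in srcs):
            tainted.add(dst)              # taint flows src -> dst
        else:
            tainted.discard(dst)
    return tainted

# network input -> obfuscating transform -> value reaching a sink
trace = [
    ("mov",   "a", ["net_in"]),
    ("xor",   "b", ["a", "key"]),   # xor-obfuscation keeps the taint
    ("const", "c", []),
    ("add",   "d", ["b", "c"]),
]
tainted = propagate_taint(trace, ["net_in"])
```

A signature built from which tainted values reach which sensitive operations survives syntactic obfuscation, unlike a byte-pattern signature.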
The loop formula based semantics of description logic programs
Wang, Yisong1, 3; You, Jia-Huai2; Yuan, Li Yan2; Shen, Yi-Dong3; Zhang, Mingyi4
Source: Theoretical Computer Science, 2011; ISSN: 03043975; DOI: 10.1016/j.tcs.2011.10.026 Article in Press
Author affiliation: 1 School of Computer Science and Information, Guizhou University, Guiyang, 550025, China2 Department of Computing Science, University of Alberta, Edmonton, Alberta, Canada T6G 2R33 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, 100190, China4 Guizhou Academy of Sciences, Guiyang, 550001, China
Abstract: Description logic programs (dl-programs) proposed by Eiter et al. constitute an elegant yet powerful formalism for the integration of answer set programming with description logics, for the Semantic Web. In this paper, we generalize the notions of completion and loop formulas of logic programs to description logic programs and show that the answer sets of a dl-program can be precisely captured by the models of its completion and loop formulas. Furthermore, we propose a new, alternative semantics for dl-programs, called the canonical answer set semantics, which is defined by the models of completion that satisfy what are called canonical loop formulas. A desirable property of canonical answer sets is that they are free of circular justifications. Some properties of canonical answer sets are also explored, and we compare the canonical answer set semantics with the FLP-semantics and the answer set semantics by translating dl-programs into logic programs with abstract constraints. We present a clear picture of the relationships among these semantic variations for dl-programs. © 2011 Elsevier B.V.
Main Heading: Program translators
Controlled terms: Data description - Formal languages - Knowledge representation - Logic programming - Semantic Web
Uncontrolled terms: Answer set - Answer set programming - Answer set semantics - Description logic - Description logic programs - Logic programs
Classification Code: 723 Computer Software, Data Handling and Applications - 903 Information Science
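The role of loop formulas can be seen already on ordinary logic programs: {p :- q. q :- p.} has completion models {} and {p, q}, but only {} is an answer set, because the loop {p, q} lacks external support. A brute-force answer-set checker via the Gelfond-Lifschitz reduct makes this concrete (for plain normal programs, not dl-programs):

```python
from itertools import combinations

# A rule is (head, positive_body, negative_body).
def reduct(rules, model):
    """Positive program obtained by evaluating `not` literals in model."""
    return [(head, pos) for head, pos, neg in rules
            if not any(n in model for n in neg)]

def minimal_model(pos_rules):
    """Least model of a positive program (naive fixpoint iteration)."""
    m = set()
    changed = True
    while changed:
        changed = False
        for head, pos in pos_rules:
            if head not in m and all(b in m for b in pos):
                m.add(head)
                changed = True
    return m

def answer_sets(rules, atoms):
    """All answer sets: candidates equal to the least model of their reduct."""
    result = []
    for r in range(len(atoms) + 1):
        for cand in combinations(sorted(atoms), r):
            cand = set(cand)
            if minimal_model(reduct(rules, cand)) == cand:
                result.append(cand)
    return result

loop_prog = [("p", ["q"], []), ("q", ["p"], [])]      # positive loop
choice_prog = [("a", [], ["b"]), ("b", [], ["a"])]    # a :- not b, etc.
```

`answer_sets(loop_prog, {"p", "q"})` rejects {p, q} even though it satisfies the completion; the loop formula is exactly the extra constraint that rules such models out, and the paper lifts this machinery to dl-programs.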
Zero-Knowledge Argument for Simultaneous Discrete Logarithms
Chow, Sherman S.M.1; Ma, Changshe2; Weng, Jian3, 4, 5, 6
Source: Algorithmica (New York), p 1-21, 2011; ISSN: 01784617, E-ISSN: 14320541; DOI: 10.1007/s00453-011-9593-3 Article in Press
Author affiliation: 1 Department of Combinatorics and Optimization, and Centre for Applied Cryptographic Research, University of Waterloo, Waterloo, N2L3G1, Canada2 School of Computer, South China Normal University, Guangzhou, 510631, China3 Department of Computer Science, Jinan University, Guangzhou, 510632, China4 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing, 100080, China5 State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, 100876, China6 Emergency Technology Research Center of Risk Evaluation and Prewarning on Public Network Security, Guangdong, 510632, China
Abstract: In Crypto 1992, Chaum and Pedersen introduced a protocol (CP protocol for short) for proving the equality of two discrete logarithms (EQDL) with unconditional soundness, which is widely used nowadays and plays a central role in DL-based cryptography. Somewhat surprisingly, the CP protocol has never been improved for nearly two decades since its advent. We note that the CP protocol is usually used as a non-interactive proof by using the Fiat-Shamir heuristic, which inevitably relies on the random oracle model (ROM) and assumes that the adversary is computationally bounded. In this paper, we present an EQDL protocol in the ROM which saves approximately 40% of the computational cost and approximately 33% of the prover's outgoing message size when instantiated with the same security parameter. The catch is that our security guarantee only holds for computationally bounded adversaries. Our idea can be naturally extended for simultaneously showing the equality of n discrete logarithms with O(1)-size commitment, in contrast to the n-element adaptation of the CP protocol, which requires O(n) size. This improvement benefits a variety of interesting cryptosystems, ranging from signatures and anonymous credential systems to verifiable secret sharing and threshold cryptosystems. As an example, we present a signature scheme that only takes one (offline) exponentiation to sign, without utilizing pairing, relying on the standard decisional Diffie-Hellman assumption. © 2011 Springer Science+Business Media, LLC. (25 refs.)
Main Heading: Cryptography
Controlled terms: Network security
Uncontrolled terms: Anonymous credential systems - Computational costs - Diffie-Hellman assumption - Discrete logarithms - Exponentiations - Non-interactive proof - Offline - Random Oracle model - Security parameters - Signature Scheme - Threshold cryptosystems - Verifiable secret sharing - Zero knowledge
Classification Code: 723 Computer Software, Data Handling and Applications
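The CP protocol itself is short: to prove log_g1(h1) = log_g2(h2) = x, commit with a shared nonce in both bases, derive a challenge, and answer with a single response. A non-interactive (Fiat-Shamir) sketch in a toy subgroup of order q = 11 mod p = 23 (real deployments use groups of cryptographic size; the hash-to-challenge encoding here is an invented toy):

```python
import hashlib

p, q = 23, 11
g1, g2 = 2, 3        # both of order 11 mod 23

def H(*vals):
    """Toy Fiat-Shamir challenge: hash the transcript into Z_q."""
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x, w):
    """Prove log_g1(h1) = log_g2(h2) = x; w is the prover's nonce."""
    h1, h2 = pow(g1, x, p), pow(g2, x, p)
    a1, a2 = pow(g1, w, p), pow(g2, w, p)   # shared-nonce commitments
    c = H(g1, h1, g2, h2, a1, a2)
    r = (w + c * x) % q
    return (h1, h2, a1, a2, r)

def verify(h1, h2, a1, a2, r):
    c = H(g1, h1, g2, h2, a1, a2)
    return (pow(g1, r, p) == a1 * pow(h1, c, p) % p and
            pow(g2, r, p) == a2 * pow(h2, c, p) % p)

proof = prove(x=5, w=7)
```

Both verification equations check the same exponent r against the same challenge c, which is what ties the two logarithms together; the paper's improvement trims cost and message size in exactly this non-interactive setting.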
Fault analysis study of the block cipher FOX64
Li, Ruilin1; You, Jianxiong1; Sun, Bing1, 2; Li, Chao1, 3
Source: Multimedia Tools and Applications, p 1-18, 2011; ISSN: 13807501, E-ISSN: 15737721; DOI: 10.1007/s11042-011-0895-x Article in Press
Author affiliation: 1 Department of Mathematics and System Science, Science College, National University of Defense Technology, Changsha, 410073, China2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing, 100190, China3 College of Computer, National University of Defense Technology, Changsha, 410073, China
Abstract: FOX is a family of symmetric block ciphers from MediaCrypt AG that helps to secure digital media, communications, and storage. The high-level structure of FOX is the so-called (extended) Lai-Massey scheme. This paper presents a detailed fault analysis of the block cipher FOX64, the 64-bit version of FOX, based on a differential property of the two-round Lai-Massey scheme under a fault model. A previous fault attack on FOX64 showed that each round-key (resp. the whole set of round-keys) could be recovered with 11.45 (resp. 183.20) faults on average. Our proposed fault attack, however, can deduce any round-key (except the first one) with 4.25 faults on average (4 in the best case), and retrieve the whole set of round-keys with 43.31 faults on average (38 in the best case). This implies that the number of faults needed for a fault attack on FOX64 can be significantly reduced. Furthermore, the technique introduced in this paper can be extended to other members of the block cipher family FOX. © 2011 Springer Science+Business Media, LLC. (37 refs.)Main Heading: CryptographyControlled terms: Digital storageUncontrolled terms: Block ciphers - Fault analysis - Fault attack - Fault model - High-level structure - Lai-Massey schemes - Symmetric block ciphersClassification Code: 716 Telecommunication; Radar, Radio and Television - 717 Optical Communication - 718 Telephone Systems and Related Technologies; Line Communications - 722.1 Data Storage, Equipment and Techniques - 723 Computer Software, Data Handling and Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
A method for detecting mismatch of time-aware Web services based on SMT
Xiyan, Wang1; Chen, Shenbiao1; Zhang, Guangquan1, 2; Zhu, Jihan1; Wu, Jianfeng1
Source: ICCSE 2011 - 6th International Conference on Computer Science and Education, Final Program and Proceedings, p 1165-1169, 2011, ICCSE 2011 - 6th International Conference on Computer Science and Education, Final Program and Proceedings; ISBN-13: 9781424497188; DOI: 10.1109/ICCSE.2011.6028840; Article number: 6028840; Conference: 6th International Conference on Computer Science and Education, ICCSE 2011, August 3, 2011 - August 5, 2011;
Publisher: IEEE Computer Society
Author affiliation: 1 School of Computer Science and Technology, Soochow University, Suzhou 215006, China2 State Key Lab. of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: Considering the timed properties of the interactions between Web services, we formally model Web services with timed properties and propose a method for detecting mismatches between time-aware Web services based on Satisfiability Modulo Theories (SMT). The problem of detecting a mismatch between Web services is transformed into an existence model-checking problem, namely whether a deadlock is reachable in the interaction of the services, and this model-checking problem is in turn reduced to deciding whether a logic formula is satisfiable. © 2011 IEEE. (10 refs.)Main Heading: Web servicesControlled terms: Computer science - Education computing - Model checking - User interfacesUncontrolled terms: bounded model checking - Logic formulas - Satisfiability modulo Theories - service mismatchClassification Code: 721 Computer Circuits and Logic Elements - 722 Computer Systems and Equipment - 722.2 Computer Peripheral Equipment - 723 Computer Software, Data Handling and Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Separating NE from some nonuniform nondeterministic complexity classes
Fu, Bin1; Li, Angsheng2; Zhang, Liyu3
Source: Journal of Combinatorial Optimization, v 22, n 3, p 482-493, October 2011, Special Issue: Selected Papers from the 15th International Computing and Combinatorics Conference; ISSN: 13826905, E-ISSN: 15732886; DOI: 10.1007/s10878-010-9327-5;
Publisher: Springer Netherlands
Author affiliation: 1 Department of Computer Science, University of Texas-Pan American, Edinburg, TX 78539, United States2 Institute of Software, Chinese Academy of Sciences, Beijing, China3 Department of Computer and Information Sciences, University of Texas at Brownsville, Brownsville, TX 78520, United States
Abstract: We investigate the question whether NE can be separated from the reduction closures of tally sets, sparse sets and NP. We show that (1) NE ⊈ R^NP_{n^{o(1)}-T}(TALLY); (2) NE ⊈ R^SN_m(SPARSE); (3) NEXP ⊈ P^NP_{n^k-T}/n^k for all k ≥ 1; and (4) NE ⊈ P_btt(NP ⊕ SPARSE). Result (3) extends a previous result by Mocas to nonuniform reductions. We also investigate how different an NE-hard set is from an NP set. We show that for any NP subset A of a many-one-hard set H for NE, there exists another NP subset A′ of H such that A′ ⊇ A and A′ − A is not of sub-exponential density. © 2010 Springer Science+Business Media, LLC. (20 refs.)Main Heading: Computational complexityControlled terms: Neon - SeparationUncontrolled terms: Complexity - Complexity class - NEXP - Nonuniform complexity - Sparse setClassification Code: 721.1 Computer Theory, Includes Formal Logic, Automata Theory, Switching Theory, Programming Theory - 802.3 Chemical Operations - 804 Chemical Products Generally
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
On the derandomization of the graph test for homomorphism over groups
Tang, Linqing1, 2
Source: Theoretical Computer Science, v 412, n 18, p 1718-1728, April 15, 2011; ISSN: 03043975; DOI: 10.1016/j.tcs.2010.12.046;
Publisher: Elsevier
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, P.O. Box 8718, Beijing, 100080, China2 Graduate University, Chinese Academy of Sciences, Beijing, China
Abstract: In this article, we study randomness-efficient graph tests for homomorphism over arbitrary groups (which can be used in locally testing the Hadamard code and in PCP constructions). We try to optimize both the amortized query complexity and the randomness complexity of the homomorphism test simultaneously. For abelian groups G = Z_p^m, Γ = Z_p and a function f: G→Γ, by using a λ-biased set S of size poly(log|G|), we show that, on any given bipartite graph H = (V_1, V_2; E), there exists a graph test for linearity over G with randomness complexity |V_1|·log|G| + |V_2|·O(log log|G|) and query complexity |V_1| + |V_2| + |E|, and if the test accepts f: G→Γ with probability at least p^(−|E|) + (1 − p^(−|E|))δ, then f has agreement at least p^(−1)(1 + δ^2 − λ^2) with some affine linear function. It is a derandomized version of the graph test for linearity of Samorodnitsky and Trevisan (2000) [13]. For general groups G, Γ and a function f: G→Γ, we introduce k random walks of some length, ℓ say, on expander graphs to design a probabilistic homomorphism test, which can be thought of as a graph test on a graph that is the union of k paths. This gives a homomorphism test over general groups with randomness complexity k·log|G| + ℓ·O(log log|G|) and query complexity k + ℓ + kℓ, and if the test accepts f with probability at least 1-kμℓ2kℓ(1+μℓ-μ)+2ψ(λ,ℓ), then f is 2μ(1−λ)-far from being an affine homomorphism, where ψ(λ,ℓ) = ∑_{t=1}^{ℓ−1} t·λ^(ℓ−1−t). It is a graph test version of the derandomized homomorphism test of Shpilka and Wigderson (2004) [14]. © 2010 Elsevier B.V. All rights reserved. 
(15 refs.)Main Heading: Graph theoryControlled terms: Codes (symbols) - Random processes - TestingUncontrolled terms: Abelian group - Bipartite graphs - Cayley graphs - Derandomization - Expander graphs - Graph test - Hadamard codes - Homomorphism over groups - K-paths - Linear functions - Query complexity - Random WalkClassification Code: 423.2 Non Mechanical Properties of Building Materials: Test Methods - 723.2 Data Processing and Image Processing - 921.4 Combinatorial Mathematics, Includes Graph Theory, Set Theory - 922.1 Probability Theory
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Common key technology system of cloud manufacturing service platform for small and medium enterprises
Yin, Chao1; Huang, Bi-Qing2; Liu, Fei1; Wen, Li-Jie3; Wang, Zhao-Kun3; Li, Xiao-Dong4; Yang, Shu-Ping4; Ye, Dan5; Liu, Xian-Hui6
Source: Jisuanji Jicheng Zhizao Xitong/Computer Integrated Manufacturing Systems, CIMS, v 17, n 3, p 495-503, March 2011; Language: Chinese; ISSN: 10065911;
Publisher: CIMS
Author affiliation: 1 State Key Laboratory of Mechanical Transmission, Chongqing University, Chongqing 400030, China2 State CIMS Engineering and Research Center, Tsinghua University, Beijing 100084, China3 School of Software, Tsinghua University, Beijing 100084, China4 Beijing Research Institute of Automation for Machinery Industry, Beijing 100120, China5 Institute of Software, Chinese Academy of Sciences, Beijing 100080, China6 Engineering Research Center for Enterprise Digital Technology, Tongji University, Shanghai 200092, China
Abstract: The cloud manufacturing service platform for Small and Medium Enterprises (SMEs) provides effective support for SMEs to facilitate the utilization and sharing of social manufacturing resources and to enhance their overall competitiveness. The features of a cloud manufacturing service platform for SMEs were analyzed, and the common key technology system architecture of the platform was constructed. Meanwhile, the research ideas and content of the platform's common key technologies were discussed, including core theory and technology, standards and specifications, system architecture, common engines and common management tools, service and operation modes, and application architecture. The research thus lays a foundation for the future research, development, implementation and application of cloud manufacturing platforms for SMEs. (14 refs.)Main Heading: ManufactureControlled terms: Architecture - Competition - Industrial research - Industry - TechnologyUncontrolled terms: Application architecture - Common key technology - Key technologies - Management tool - Manufacturing resource - Manufacturing service - Operation mode - Service mode - Service platforms - Small and medium enterprise - System architecture - System architecturesClassification Code: 913.4 Manufacturing - 913 Production Planning and Control; Manufacturing - 912.1 Industrial Engineering - 912 Industrial Engineering and Management - 911.2 Industrial Economics - 911 Cost and Value Engineering; Industrial Economics - 901.3 Engineering Research - 901 Engineering Profession - 402 Buildings and Towers
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
T-Maze: A tangible programming tool for children
Wang, Danli1; Zhang, Cheng1, 2; Wang, Hongan1
Source: Proceedings of IDC 2011 - 10th International Conference on Interaction Design and Children, p 127-135, 2011, Proceedings of IDC 2011 - 10th International Conference on Interaction Design and Children; ISBN-13: 9781450307512; DOI: 10.1145/1999030.1999045; Conference: 10th International Conference on Interaction Design and Children, IDC 2011, June 20, 2011 - June 23, 2011; Sponsor: ACM Special Interest Group on Computer-Human Interaction (SIGCHI); University of Michigan;
Publisher: Association for Computing Machinery
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, Beijing, China2 Graduate University, Chinese Academy of Sciences, Beijing, China
Abstract: This paper presents a tangible programming tool, 'T-Maze', for children aged 5 to 9. Children can use T-Maze to create their own maze maps and complete maze-escaping tasks with tangible programming blocks and sensors. T-Maze uses a camera to capture, in real time, the programming sequence formed by the arrangement of the wooden blocks; this sequence is analyzed for semantic correctness so that children receive feedback immediately. Children can also join in the game by controlling the sensors while the program runs. A user study shows that T-Maze is an interesting programming approach for children and is easy to learn and use. © 2011 ACM. (18 refs.)Main Heading: EducationControlled terms: Semantics - Sensors - User interfacesUncontrolled terms: children - maze - Programming language - tangible programming - Tangible user interfacesClassification Code: 722.2 Computer Peripheral Equipment - 801 Chemistry - 901.2 Education - 903.2 Information Dissemination
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Extending description logics with uncertainty reasoning in possibilistic logic
Qi, Guilin1, 2; Ji, Qiu1; Pan, Jeff Z.3; Du, Jianfeng4, 5
Source: International Journal of Intelligent Systems, v 26, n 4, p 353-381, April 2011; ISSN: 08848173, E-ISSN: 1098111X; DOI: 10.1002/int.20470;
Publisher: John Wiley and Sons Ltd
Author affiliation: 1 School of Computer Science and Engineering Southeast University, Nanjing, China2 State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China3 Department of Computing Science, University of Aberdeen, Aberdeen, United Kingdom4 Guangdong University of Foreign Studies, Guangzhou 510006, China5 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: Possibilistic logic provides a convenient tool for dealing with uncertainty and handling inconsistency. In this paper, we propose possibilistic description logics as an extension of description logics, which are a family of well-known ontology languages. We first give the syntax and semantics of possibilistic description logics and define several inference services in possibilistic description logics. We show that these inference services can be reduced to the task of computing the inconsistency degree of a knowledge base in possibilistic description logics. Since possibilistic inference services suffer from the drowning problem, that is, axioms whose confidence degrees are less than or equal to the inconsistency degree are not used, we consider a drowning-free variant of possibilistic inference, called linear order inference. We propose an algorithm for computing the inconsistency degree of a possibilistic description logic knowledge base and an algorithm for the linear order inference. We consider the impact of our possibilistic description logics on ontology learning and ontology merging. Finally, we implement these algorithms and provide some interesting evaluation results. © 2011 Wiley Periodicals, Inc. (39 refs.)Main Heading: Data descriptionControlled terms: Accidents - Algorithms - Inference engines - Knowledge based systems - Ontology - SemanticsUncontrolled terms: Confidence degree - Description logic - Evaluation results - Knowledge base - Linear order - Ontology language - Ontology learning - Ontology merging - Possibilistic - Possibilistic logic - Uncertainty reasoningClassification Code: 723 Computer Software, Data Handling and Applications - 903 Information Science - 903.2 Information Dissemination - 914.1 Accidents and Accident Prevention - 921 Mathematics
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
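The inconsistency degree that the record above reduces its inference services to can be illustrated on a toy scale. The sketch below is an assumed reading of the standard possibilistic definition, using propositional formulas in place of description logic axioms: the degree is the largest confidence value a such that the a-cut (all axioms with confidence at least a) is unsatisfiable.

```python
# Toy illustration (assumed reading) of the inconsistency degree of a
# possibilistic knowledge base, with propositional formulas standing in
# for DL axioms. Brute force, for small variable counts only.
from itertools import product

def satisfiable(formulas, n_vars):
    """Brute-force satisfiability over n_vars boolean variables."""
    return any(all(f(v) for f in formulas)
               for v in product([False, True], repeat=n_vars))

def inconsistency_degree(kb, n_vars):
    """kb: list of (formula, confidence) pairs.

    Returns the largest confidence a whose a-cut is unsatisfiable,
    or 0.0 if the whole knowledge base is consistent.
    """
    for a in sorted({c for _, c in kb}, reverse=True):
        cut = [f for f, c in kb if c >= a]
        if not satisfiable(cut, n_vars):
            return a
    return 0.0
```

With kb = [(p, 0.9), (¬p, 0.6), (p ∨ ¬p, 0.3)], the 0.6-cut {p, ¬p} is the first inconsistent cut, so the degree is 0.6, and the tautology at 0.3 is exactly the kind of axiom the drowning problem discards.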
Compiling answer set programs into event-driven action rules
Zhou, Neng-Fa1; Shen, Yi-Dong2; You, Jia-Huai3
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6645 LNAI, p 376-381, 2011, Logic Programming and Nonmonotonic Reasoning - 11th International Conference, LPNMR 2011, Proceedings; ISSN: 03029743, E-ISSN:16113349; ISBN-13: 9783642208942;
DOI: 10.1007/978-3-642-20895-9_44; Conference: 11th International Conference on Logic Programming and Nonmonotonic Reasoning, LPNMR 2011, May 16, 2011 - May 19, 2011; Sponsor: Artificial Intelligence Journal; Pacific Institute of the Mathematical Sciences (PIMS); Association of Logic Programming (ALP); Simon Fraser University; University of Calabria;
Publisher: Springer Verlag
Author affiliation: 1 CUNY Brooklyn College and Graduate Center, United States2 Institute of Software, Chinese Academy of Sciences, China3 Department of Computing Science, University of Alberta, Canada
Abstract: This paper presents a compilation scheme, called ASP2AR, for translating ASP into event-driven action rules. For an ASP program, the generated program maintains a partial answer set as a pair of sets of tuples (called IN and OUT) and propagates updates to these sets using action rules. To facilitate propagation, we encode each set as a finite-domain variable and treat additions of tuples into a set as events handled by action rules. Like GASP and ASPeRiX, ASP2AR requires no prior grounding of programs. The preliminary experimental results show that ASP2AR is an order of magnitude faster than GASP and is much faster than Clasp on benchmarks that require heavy grounding. © 2011 Springer-Verlag Berlin Heidelberg. (6 refs.)Main Heading: Program translatorsControlled terms: Logic programmingUncontrolled terms: Action rules - Answer set - Finite-domain variablesClassification Code: 723.1 Computer Programming
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Fast algebraic attacks and decomposition of symmetric boolean functions
Liu, Meicheng1, 3; Lin, Dongdai1; Pei, Dingyi2
Source: IEEE Transactions on Information Theory, v 57, n 7, p 4817-4821, July 2011; ISSN: 00189448; DOI: 10.1109/TIT.2011.2145690; Article number: 5895076;
Publisher: Institute of Electrical and Electronics Engineers Inc.
Author affiliation: 1 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 College of Mathematics and Information Sciences, Guangzhou University, Guangzhou 510006, China3 Graduate University, Chinese Academy of Sciences, Beijing 100049, China
Abstract: In this correspondence, we first give a decomposition of symmetric Boolean functions, and then show that almost all symmetric Boolean functions, including those with good algebraic immunity, behave badly against fast algebraic attacks. In addition, we improve the known relations between the algebraic degree and the algebraic immunity of symmetric Boolean functions. © 2011 IEEE. (24 refs.)Main Heading: Boolean functionsControlled terms: AlgebraUncontrolled terms: Algebraic attack - Algebraic degrees - Algebraic immunity - Stream Ciphers - Symmetric boolean functionsClassification Code: 921.1 Algebra
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Linear texture coordinate interpolation in rasterization
Han, Honglei1, 2, 3; Wang, Wencheng1
Source: Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics, v 23, n 6, p 999-1005, June 2011; Language: Chinese; ISSN: 10039775;
Publisher: Institute of Computing Technology
Author affiliation: 1 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 Graduate University of Chinese Academy of Sciences, Beijing 100049, China3 Game Design Department, Animation School, Communication University of China, Beijing 100024, China
Abstract: Existing algorithms for pixel rasterization and texture coordinate interpolation always need per-pixel division operations, which severely limit computation efficiency. This paper presents a new algorithm that uses only linear computation and so achieves a substantial speedup. The new algorithm decomposes a 3D polygon into a set of line segments using planes parallel to the viewing plane, and uses the obtained line segments for pixel rasterization and texture coordinate interpolation. As the line segments are parallel to the viewing plane, texture coordinates can be computed with linear interpolation, without the expensive division operations. For the errors arising from such a rasterization, simple supplementary measures are provided for correction to obtain highly precise results. Experimental results show the effectiveness and efficiency of the new algorithm. (10 refs.)Main Heading: TexturesControlled terms: Algorithms - Computational efficiency - Interpolation - Pixels - Three dimensionalUncontrolled terms: Computation efficiency - High-precise - Line segment - Linear Interpolation - Perspective projections - Rasterization - Scan conversion - Set of lines - Texture coordinate interpolation - Texture coordinatesClassification Code: 723 Computer Software, Data Handling and Applications - 723.5 Computer Applications - 921 Mathematics - 921.6 Numerical Methods - 933 Solid State Physics
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
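The key observation behind the record above can be checked numerically. Standard perspective-correct texture mapping interpolates u/w and 1/w and then divides per pixel; along a segment of constant depth (parallel to the viewing plane) that division cancels and plain linear interpolation is exact. The sketch below demonstrates this textbook fact; it is not code from the paper.

```python
# Perspective-correct vs. plain linear interpolation of a texture
# coordinate u between two segment endpoints with depths w0, w1.
def perspective_correct(u0, w0, u1, w1, t):
    """Standard per-pixel-division scheme: interpolate u/w and 1/w."""
    u_over_w = (1 - t) * u0 / w0 + t * u1 / w1
    inv_w = (1 - t) / w0 + t / w1
    return u_over_w / inv_w   # the division the paper seeks to avoid

def linear(u0, u1, t):
    """Division-free linear interpolation."""
    return (1 - t) * u0 + t * u1
```

With unequal depths (w0 = 1, w1 = 4) the two schemes disagree, but with equal depths (w0 = w1) the 1/w factors cancel and the results coincide, which is why decomposing the polygon into constant-depth segments removes the per-pixel division.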
Hyper-Sbox view of AES-like permutations: A generalized distinguisher
Wu, Shuang1; Feng, Dengguo1; Wu, Wenling1; Su, Bozhan1
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6584 LNCS, p 155-168, 2011, Information Security and Cryptology - 6th International Conference, Inscrypt 2010, Revised Selected Papers; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642215179;
DOI: 10.1007/978-3-642-21518-6_12; Conference: 6th China International Conference on Information Security and Cryptology, Inscrypt 2010, October 20, 2010 - October 24, 2010; Sponsor: State Key Laboratory of Information Security; Chinese Academy of Sciences; Chinese Association for Cryptologic Research;
Publisher: Springer Verlag
Author affiliation: 1 State Key Lab. of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: Grøstl[1] is one of the second round candidates of the SHA-3 competition[2] hosted by NIST, which aims to find a new hash standard. In this paper, we study equivalent expressions of the generalized AES-like permutation. We find that four rounds of the AES-like permutation can be regarded as a Hyper-Sbox, and we further analyze the differential properties of both the Super-Sbox and the Hyper-Sbox. Based on these observations, we give an 8-round truncated differential path of the generalized AES-like permutation, which can be used to construct a distinguisher of the 8-round Grøstl-256 permutation with 2^64 time and 2^64 memory. This is the best known distinguisher of the reduced-round Grøstl permutation. © 2011 Springer-Verlag. (19 refs.)Main Heading: Security of dataControlled terms: CryptographyUncontrolled terms: AES-like permutation - Distinguisher - Hyper-Sbox - Sha-3 candidates - Super-SboxClassification Code: 716 Telecommunication; Radar, Radio and Television - 717 Optical Communication - 718 Telephone Systems and Related Technologies; Line Communications - 723 Computer Software, Data Handling and Applications - 723.2 Data Processing and Image Processing
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Optimization of ANN based on EA
Zhu, Hongguo1, 2; Xin, Hai1; Zheng, Changwen2
Source: Advanced Materials Research, v 271-273, p 629-632, 2011, Advanced Materials and Information Technology Processing, AMITP 2011; ISSN: 10226680; ISBN-13: 9783037851579; DOI: 10.4028/www.scientific.net/AMR.271-273.629; Conference: 2011 International Conference on Advanced Materials and Information Technology Processing, AMITP 2011, April 17, 2011 - April 18, 2011; Sponsor: Hainan University; Asia Pacific Human-Computer Interaction Research Center;
Publisher: Trans Tech Publications
Author affiliation: 1 National University of Defense Technology, Changsha, Hunan, 410073, China2 Institute of Software, Chinese Academy of Science, Beijing, 100090, China
Abstract: Artificial neural networks (ANN) and evolutionary algorithms (EA) are both biology-inspired models of information processing. The basic theories of ANN and EA are described. Then the mechanisms of using EA to optimize ANNs are explained and the state of research is summarized. Finally, existing problems and future directions are discussed. © (2011) Trans Tech Publications, Switzerland. (10 refs.)Main Heading: Neural networksControlled terms: Data processing - Evolutionary algorithms - Information technology - OptimizationUncontrolled terms: Artificial Neural Network - Basic theory - Existing problems - Future directionsClassification Code: 723 Computer Software, Data Handling and Applications - 903 Information Science - 921 Mathematics - 921.5 Optimization Techniques
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
An asymmetric impossible boomerang attack on 7-Round AES-128
Dong, Xiao-Li1; Hu, Yu-Pu1; Chen, Jie1, 3; Wei, Yong-Zhuang1, 2
Source: Jisuanji Xuebao/Chinese Journal of Computers, v 34, n 7, p 1300-1307, July 2011; Language: Chinese; ISSN: 02544164; DOI: 10.3724/SP.J.1016.2011.01300;
Publisher: Science Press
Author affiliation: 1 Key Laboratory of Computer Networks and Information Security of Ministry of Education, Xidian University, Xi'an 710071, China2 School of Information and Communication, Guilin University of Electronic Technology, Guilin, Guangxi 541004, China3 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100049, China
Abstract: Block ciphers are a core primitive of cryptography, providing data encryption, authentication and key management in information security, and their security is an important issue in cryptanalysis. Based on the principle of differential cryptanalysis, this paper introduces a new cryptanalytic technique for block ciphers: the asymmetric impossible boomerang attack. The technique uses an asymmetric impossible boomerang distinguisher to eliminate wrong key material and leave the right key candidate. With key-schedule considerations, techniques of looking up differential tables and re-using data, the authors apply the asymmetric impossible boomerang attack to AES-128. It is shown that the attack on 7-round AES-128 requires data complexity of about 2^105.18 chosen plaintexts, time complexity of about 2^115.2 encryptions and memory complexity of about 2^106.78 AES blocks. The presented result is better than any previously published cryptanalytic result on AES-128 in terms of the number of attacked rounds, data complexity and time complexity. (13 refs.)Main Heading: Security of dataControlled terms: Authentication - Public key cryptographyUncontrolled terms: AES - Block ciphers - Boomerang - Cryptanalysis - Time complexityClassification Code: 723 Computer Software, Data Handling and Applications - 723.2 Data Processing and Image Processing
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Design and implementation of a graphical programming tool for children
Xiajian, Chen1; Danli, Wang2; Hongan, Wang2
Source: Proceedings - 2011 IEEE International Conference on Computer Science and Automation Engineering, CSAE 2011, v 4, p 572-576, 2011, Proceedings - 2011 IEEE International Conference on Computer Science and Automation Engineering, CSAE 2011; ISBN-13: 9781424487257; DOI: 10.1109/CSAE.2011.5952915; Article number: 5952915; Conference: 2011 IEEE International Conference on Computer Science and Automation Engineering, CSAE 2011, June 10, 2011 - June 12, 2011; Sponsor: IEEE Beijing Section; Pudong New Area Association for Computer; Pudong New Area Science and Technology Development Fund; Tongji University; Xiamen University;
Publisher: IEEE Computer Society
Author affiliation: 1 Graduate University, Chinese Academy of Sciences, Beijing, China2 Institute of Software, Chinese Academy of Sciences, Beijing, China
Abstract: As computers are more widely used by children, existing software for children's amusement and study can no longer satisfy their needs. To address this problem, a graphical programming language for children is proposed in this paper. We analyze the design principles of graphical programming languages for children and implement ten types of common graphical blocks, integrating basic knowledge of object-oriented programming. We also provide a facility to bridge the gap between graphical and text programming. Additionally, we designed and developed KidPipe, a graphical programming tool based on pen interaction. Finally, a user study was carried out to verify the reliability and usability of KidPipe. © 2011 IEEE. (18 refs.)Main Heading: Object oriented programmingControlled terms: Computer graphics - Computer programming languages - Computer scienceUncontrolled terms: Children Programming - Design Principles - Graphical blocks - Graphical programming - Graphical programming language - Object oriented - User studyClassification Code: 721 Computer Circuits and Logic Elements - 722 Computer Systems and Equipment - 723 Computer Software, Data Handling and Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Approximation and hardness results for label cut and related problems
Zhang, Peng1; Cai, Jin-Yi2; Tang, Lin-Qing3, 4; Zhao, Wen-Bo5
Source: Journal of Combinatorial Optimization, v 21, n 2, p 192-208, February 2011; ISSN: 13826905, E-ISSN: 15732886; DOI: 10.1007/s10878-009-9222-0;
Publisher: Springer Netherlands
Author affiliation: 1 School of Computer Science and Technology, Shandong University, Ji'nan 250101, China2 Computer Sciences Department, University of Wisconsin, Madison, WI 53706, United States3 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China4 Graduate University, Chinese Academy of Sciences, Beijing 100190, China5 Dept. of Computer Science and Engineering, University of California, San Diego, San Diego, CA 92093, United States
Abstract: We investigate a natural combinatorial optimization problem called the Label Cut problem. Given an input graph G with a source s and a sink t, the edges of G are classified into different categories, represented by a set of labels. The labels may also have weights. We want to pick a subset of labels of minimum cardinality (or minimum total weight) such that the removal of all edges with these labels disconnects s and t. We give the first non-trivial approximation and hardness results for the Label Cut problem. First, we present an O(√m)-approximation algorithm for the Label Cut problem, where m is the number of edges in the input graph. Second, we show that it is NP-hard to approximate Label Cut within 2^(log^(1−1/log log^c n) n) for any constant c < 1/2, where n is the input length of the problem. Third, our techniques can be applied to other previously considered optimization problems. In particular, we show that the Minimum Label Path problem has the same approximation hardness as Label Cut, simultaneously improving and unifying two known hardness results for this problem which were previously the best (but incomparable due to different complexity assumptions). © 2009 Springer Science+Business Media, LLC. (30 refs.)Main Heading: Approximation algorithmsControlled terms: Combinatorial optimization - HardnessUncontrolled terms: Approximation hardness - Cardinalities - Combinatorial optimization problems - Complexity assumptions - Hardness result - Input graphs - Input lengths - Minimum total weights - Non-trivial - NP-hard - Optimization problems - Path problemsClassification Code: 421 Strength of Building Materials; Mechanical Properties - 921 Mathematics - 951 Materials Science
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
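The unweighted Label Cut problem defined in the record above can be stated concretely with a brute-force sketch (exponential in the number of labels, so for illustrating the definition only, not the paper's approximation algorithm): pick the fewest labels whose edge removal disconnects s from t.

```python
# Brute-force minimum Label Cut on a small undirected graph.
# Edges are (u, v, label) triples; vertices are 0..n-1.
from itertools import combinations

def connected(n, edges, s, t, removed_labels):
    """DFS over the edges whose label survives removal."""
    adj = {v: [] for v in range(n)}
    for u, v, lab in edges:
        if lab not in removed_labels:
            adj[u].append(v)
            adj[v].append(u)
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return False

def min_label_cut(n, edges, s, t):
    """Smallest set of labels whose removal disconnects s and t."""
    labels = sorted({lab for _, _, lab in edges})
    for k in range(len(labels) + 1):
        for cut in combinations(labels, k):
            if not connected(n, edges, s, t, set(cut)):
                return set(cut)
```

On a 4-vertex example where both edges out of s carry label 'a', removing the single label 'a' already disconnects s from t, even though every single-edge cut would need two edge removals; this gap between labels and edges is what makes the problem harder than ordinary min-cut.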
Realistic, fast, and controllable simulation of solid combustion
Zhu, Jian1; Chang, Yuanzhang1; Wu, Enhua1, 2
Source: Computer Animation and Virtual Worlds, v 22, n 2-3, p 125-132, April-May 2011; ISSN: 15464261, E-ISSN: 1546427X; DOI: 10.1002/cav.394;
Publisher: John Wiley and Sons Ltd
Author affiliation: 1 Computer Graphics and Multimedia Laboratory, Faculty of Science and Technology, University of Macau, Av. Padre Tomas Pereira, Taipa, Macau, China2 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China
Abstract: We present a realistic, fast, and controllable model to simulate fire propagation on solid objects with the object decomposition process involved. A hybrid structure of grids is employed to simulate the whole process efficiently. An improved burning surface update scheme based on level set and a novel method for visualizing the burning surface are proposed to produce convincing results. To achieve interactive simulation speed, a few acceleration techniques are employed, including a moving grid generated to dynamically track the fire propagation, a refined Marching Cubes method to reconstruct the burning surface, and a hardware-implemented fluid solver. By controlling a few physical and geometric parameters, we are able to simulate various solid combustion phenomena. © 2011 John Wiley & Sons, Ltd. (24 refs.)Main Heading: Numerical methodsControlled terms: Combustion - Fires - GeometryUncontrolled terms: CUDA - fire propagation - level set - marching cubes - moving gridClassification Code: 521.1 Fuel Combustion - 914.2 Fires and Fire Protection - 921 Mathematics - 921.6 Numerical Methods
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
High viscosity fluid simulation using particle-based method
Chang, Yuanzhang1; Bao, Kai2, 3; Zhu, Jian1; Wu, Enhua1, 2
Source: ISVRI 2011 - IEEE International Symposium on Virtual Reality Innovations 2011, Proceedings, p 199-205, 2011, ISVRI 2011 - IEEE International Symposium on Virtual Reality Innovations 2011, Proceedings; ISBN-13: 9781457700538; DOI: 10.1109/ISVRI.2011.5759632; Article number: 5759632; Conference: IEEE International Symposium on Virtual Reality Innovations 2011, ISVRI 2011, March 19, 2011 - March 20, 2011; Sponsor: IEEE Visualization and Graphics Technical Committee (VGTC);
Publisher: IEEE Computer Society
Author affiliation: 1 Department of Computer and Information Science, University of Macau, Macao, China2 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China3 Division of Mathematical and Computer Sciences and Engineering, KAUST, Thuwal, Saudi Arabia
Abstract: We present a new particle-based method for high viscosity fluid simulation. In the method, a new elastic stress term, derived from a modified form of Hooke's law, is included in the traditional Navier-Stokes equation to simulate the movements of high viscosity fluids. Benefiting from the Lagrangian nature of the Smoothed Particle Hydrodynamics method, large flow deformations are handled easily and naturally. In addition, in order to eliminate the particle deficiency problem near the boundary, ghost particles are employed to enforce the solid boundary condition. Compared with finite element methods, which require complicated and time-consuming remeshing operations, our method is much more straightforward to implement. Moreover, our method does not need to store or compare against an initial rest state. The experimental results show that the proposed method handles the movements of highly viscous flows effectively and efficiently, and a large variety of fluid behaviors can be simulated by adjusting just one parameter. © 2011 IEEE. (38 refs.)Main Heading: Finite element methodControlled terms: Fluids - Hydrodynamics - Navier Stokes equations - Virtual reality - Viscosity - Viscous flowUncontrolled terms: Elastic stress - Flow deformation - Fluid behavior - High viscosity fluids - Hooke's Law - Lagrangian - Particle-based methods - Remeshing - Smoothed Particle Hydrodynamics - Smoothed particle hydrodynamics methodsClassification Code: 631 Fluid Flow - 723 Computer Software, Data Handling and Applications - 921.6 Numerical Methods
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
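The "particle deficiency problem" and the ghost-particle remedy mentioned in the abstract can be seen in a tiny 1D experiment. This is an illustrative toy (my own setup, not the paper's scheme): an SPH density estimate is biased low near a solid wall because half of the kernel support is empty, and a few mirrored ghost layers restore it.

```python
# 1D SPH density near a wall, with and without ghost particles.
import math

def gaussian_kernel_1d(r, h):
    """Normalized 1D Gaussian smoothing kernel."""
    return math.exp(-(r / h) ** 2) / (h * math.sqrt(math.pi))

def sph_density(x, particles, h, mass=1.0):
    """Standard SPH density estimate at position x."""
    return sum(mass * gaussian_kernel_1d(x - p, h) for p in particles)

# Fluid fills x >= 0 with a wall at x = 0; particles at spacing dx.
dx, h = 0.1, 0.2
fluid = [i * dx for i in range(40)]
# A few ghost layers mirrored across the wall at the same spacing.
ghosts = [-(i + 1) * dx for i in range(4)]

rho_interior = sph_density(2.0, fluid, h)        # far from the wall
rho_wall = sph_density(0.0, fluid, h)            # deficient: half support empty
rho_fixed = sph_density(0.0, fluid + ghosts, h)  # ghosts fill the gap
```

The wall estimate recovers roughly half the interior density; with ghosts it matches the interior value to within the kernel's truncation error.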
Empirical studies of pen tilting performance in pen-based user interfaces
Tian, Feng1; Cao, Xiang2; Lu, Fei1; Dai, Guozhong3; Zhang, Xiaolong4; Wang, Hongan3
Source: ACM International Conference Proceeding Series, 2011, VINCI 2011 - The 4th Visual Information Communication - International Symposium; ISBN-13: 9781450308755; DOI: 10.1145/2016656.2016664; Conference: 4th Visual Information Communication - International Symposium, VINCI 2011, August 4, 2011 - August 5, 2011; Sponsor: ACM SIGCHI China; China Computer Federation;
Publisher: Association for Computing Machinery
Author affiliation: 1 Intelligence Engineering Lab., Institute of Software, Chinese Academy of Sciences, China2 Microsoft Research Asia, China3 State Key Lab. of Computer Science, Institute of Software, Chinese Academy of Sciences, China4 Pennsylvania State University, United States
Abstract: Recently, pen tilting has been explored in pen-based user interfaces and has shown potential to improve user interaction in various tasks (e.g., menu selection, modeless object manipulation). However, some basic questions concerning pen tilting behaviors, such as the ideal range, azimuth size, and direction of pen tilting, have not been thoroughly investigated. In this paper, we report our empirical studies on user performances in basic pen tilting tasks. First, we conducted a baseline study, which helps us to determine tilting directions, tilting ranges, and the thresholds that separate incidental pen tilting actions from intentional actions used for interaction. Based on the results from the baseline study, we designed an experiment to investigate user performances in goal tilting in different tilting ranges, azimuth sizes, and directions. Drawing on the results of our data analyses on task completion time, error rate, and pen tip movements, we discussed values of tilting parameters such as tilting range, minimal azimuth size, and tilting direction. © 2011 ACM. (25 refs.)Main Heading: Graphical user interfacesControlled terms: Data reduction - Visual communicationUncontrolled terms: Empirical evaluations - Empirical studies - Error rate - Menu selection - Object manipulation - Pen based user interfaces - pen tilting - Pen-tip movement - Task completion time - User interaction - User performanceClassification Code: 717.1 Optical Communication Systems - 722.2 Computer Peripheral Equipment - 723.2 Data Processing and Image Processing
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Generic construction of identity-based strong key-insulated signature
Zhou, Dehua1, 2, 3; Weng, Jian2, 4, 5; Chen, Kefei1; Zheng, Dong6
Source: Journal of Internet Technology, v 12, n 4, p 629-636, 2011; ISSN: 16079264;
Publisher: Taiwan Academic Network Management Committee
Author affiliation: 1 Department of Computer Science and Engineering, Shanghai Jiao Tong University, China2 Department of Computer Science, Jinan University, China3 Shanghai Key Laboratory of Scalable Computing and Systems, China4 State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, China5 State Key Laboratory of Information Security Institute of Software, Chinese Academy of Sciences, China6 School of Information Security Engineering, Shanghai Jiao Tong University, China
Abstract: It is worthwhile to deal with the key-exposure problem for identity-based signatures. Identity-based key-insulated signature can successfully reduce the damage caused by key exposure, and has thus attracted great interest from researchers. In this paper, we study the problem of how to generically construct identity-based strong key-insulated signatures, and propose a generic construction of identity-based strong key-insulated signature schemes from two-level hierarchical identity-based signatures. (24 refs.)Uncontrolled terms: Generic construction - Hierarchical identity-based signature - Identity based signature - Identity-based - Key-exposure - Key-insulated signature - Key-insulated signature scheme - Secure key-updates - Strong key-insulation
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Impossible differential and integral cryptanalysis of zodiac
Sun, Bing1; Zhang, Peng1; Li, Chao1, 2
Source: Ruan Jian Xue Bao/Journal of Software, v 22, n 8, p 1911-1917, August 2011; Language: Chinese; ISSN: 10009825; DOI: 10.3724/SP.J.1001.2011.03875;
Publisher: Chinese Academy of Sciences
Author affiliation: 1 Department of Mathematics and Systems Science, College of Science, National University of Defense Technology, Changsha 410073, China2 State Key Laboratory of Information Security, Institute of Software, The Chinese Academy of Sciences, Beijing 100190, China
Abstract: This paper reevaluates the security of Zodiac against impossible differential and integral attacks. Previous results have shown that there are 15-round impossible differentials and 8-round integral distinguishers of Zodiac. Based on an 8-round truncated differential with probability 1, full 16-round impossible differentials and 9-round integral distinguishers are constructed. Integral attacks are applied to 12/13/14/15/16-round Zodiac with time complexities of 2^34, 2^59, 2^93, 2^133 and 2^190, respectively. In all cases the number of chosen plaintexts is no more than 2^16, which shows that the full 16-round Zodiac is not immune to integral attacks. © 2011 ISCAS. (15 refs.)Uncontrolled terms: Chosen plaintexts - Distinguishers - Impossible differential - Integral attack - Time complexity - Truncated differential - Zodiac
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Collecting and managing network-matched trajectories of moving objects in databases
Ding, Zhiming1; Deng, Ke2
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6860 LNCS, n PART 1, p 270-279, 2011, Database and Expert Systems Applications - 22nd International Conference, DEXA 2011, Proceedings; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642230875;
DOI: 10.1007/978-3-642-23088-2_19; Conference: 22nd International Conference on Database and Expert Systems Applications, DEXA 2011, August 29, 2011 - September 2, 2011;
Publisher: Springer Verlag
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, South-Fourth-Street 4, Zhong-Guan-Cun, Beijing 100190, China2 School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, QLD 4072, Australia
Abstract: Tracking network-matched trajectories of moving objects is important in many applications such as trajectory-based traffic-flow analysis and trajectory data mining. However, current network-based location tracking methods for moving objects need digital maps installed at the moving object side, which is not realistic in many circumstances. In this paper, we propose a new moving objects database framework, the Euclidean-batch-sampling and Network-matched-trajectory based Moving Objects Database (EuNetMOD) model, to support network-matched trajectory tracking without digital maps installed at the moving object side. © 2011 Springer-Verlag Berlin Heidelberg. (9 refs.)Main Heading: TrajectoriesControlled terms: Data flow analysis - Database systems - Expert systems - Surface dischargesUncontrolled terms: Database - Digital map - Moving objects - Moving Objects Databases - Network-based - Spatial temporals - Trajectory data - Trajectory tracking - Trajectory-basedClassification Code: 404.1 Military Engineering - 701.1 Electricity: Basic Concepts and Phenomena - 723 Computer Software, Data Handling and Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
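The core of server-side network matching, as opposed to map-based matching on the device, is snapping raw position samples to the road network held in the database. The following is a hypothetical fragment in that spirit (names `project_to_segment` and `match_point` are mine; the EuNetMOD model itself is more elaborate):

```python
# Snap a raw GPS sample to the nearest road-network segment.

def project_to_segment(p, a, b):
    """Orthogonal projection of point p onto segment a-b (2D tuples),
    clamped to the segment's endpoints."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    denom = dx * dx + dy * dy
    t = 0.0 if denom == 0 else ((px - ax) * dx + (py - ay) * dy) / denom
    t = max(0.0, min(1.0, t))
    return (ax + t * dx, ay + t * dy)

def match_point(p, segments):
    """Return (segment index, snapped point) minimizing squared distance."""
    best = None
    for i, (a, b) in enumerate(segments):
        q = project_to_segment(p, a, b)
        d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
        if best is None or d2 < best[0]:
            best = (d2, i, q)
    return best[1], best[2]
```

A real system would use a spatial index instead of the linear scan, and temporal coherence between consecutive samples to disambiguate parallel roads.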
Secure machine learning, a brief overview
Liao, Xiaofeng1, 3, 4; Ding, Liping1; Wang, Yongji2
Source: 2011 5th International Conference on Secure Software Integration and Reliability Improvement - Companion, SSIRI-C 2011, p 26-29, 2011, 2011 5th International Conference on Secure Software Integration and Reliability Improvement - Companion, SSIRI-C 2011; ISBN-13: 9780769544540; DOI: 10.1109/SSIRI-C.2011.15; Article number: 6004498; Conference: 2011 5th International Conference on Secure Software Integration and Reliability Improvement - Companion, SSIRI-C 2011, June 27, 2011 - June 29, 2011;
Publisher: IEEE Computer Society
Author affiliation: 1 National Engineering Research Center for Fundamental Software, Institute of Software, China2 State Key Laboratory of Computer Science, Institute of Software, China3 Graduate University, Chinese Academy of Sciences, Beijing 100049, China4 Information Engineering School, Nanchang University, Nanchang, Jiangxi, 330031, China
Abstract: The purpose of this article is to give a brief overview of current work on the emerging research problem of secure machine learning. Machine learning techniques have been applied widely in various applications, especially in spam detection and network intrusion detection. Most existing learning schemes assume that the environment they operate in is benign. However, this is not always true in real adversarial decision-making situations, where the future data sets and the training data set are no longer from the same population, due to the transformations employed by the adversaries. As more and more machine learning systems are put into use, it is imperative to consider the security of the machine learning system. As an emerging problem, it is attracting more and more researchers' attention. In this article, we present a brief overview of secure machine learning and current progress on developing secure machine learning algorithms. © 2011 IEEE. (22 refs.)Main Heading: Learning algorithmsControlled terms: C (programming language) - Intrusion detection - Learning systems - Population statistics - Software reliabilityUncontrolled terms: Data sets - Learning schemes - Machine learning techniques - Machine-learning - Network intrusion detection - Overview - Research problems - Spam detection - Training data setsClassification Code: 723 Computer Software, Data Handling and Applications - 922.2 Mathematical Statistics
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Combinatorial optimization problem reduction and algorithm derivation
Zheng, Yu-Jun1, 2; Xue, Jin-Yun1, 2; Ling, Hai-Feng3
Source: Ruan Jian Xue Bao/Journal of Software, v 22, n 9, p 1985-1993, September 2011; Language: Chinese; ISSN: 10009825; DOI: 10.3724/SP.J.1001.2011.03948;
Publisher: Chinese Academy of Sciences
Author affiliation: 1 The State Key Laboratory of Computer Science, Institute of Software, The Chinese Academy of Sciences, Beijing 100190, China2 Provincial Key Laboratory of High Performance Computing, Jiangxi Normal University, Nanchang 330027, China3 School of Management and Engineering, Nanjing University, Nanjing 210093, China
Abstract: A unified algebraic model is used to represent optimization problems, which uses a transformational approach that starts from an initial problem specification and reduces it into sub-problems with less complexity. The model then constructs the problem reduction graph (PRG) describing the recurrence relations between the problem and its sub-problems, and derives an algorithm with its correctness proof hand-in-hand. A prototype system that mechanically implements the formal algorithm development process is also designed. This approach significantly improves the automation of algorithmic program design and helps to understand inherent characteristics of the algorithms. ©2011, Institute of Software, the Chinese Academy of Sciences. All rights reserved. (27 refs.)Main Heading: Combinatorial mathematicsControlled terms: Algorithms - Combinatorial optimization - OptimizationUncontrolled terms: Algorithm derivation - Combinatorial optimization problem - Correctness proof - PAR (partition-and-recur) - Problem reductionClassification Code: 723 Computer Software, Data Handling and Applications - 921 Mathematics
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Hypergraph partitioning for the parallel computation of continuous Petri nets
Ding, Zuohua1; Shen, Hui1; Cao, Jianwen2
Source: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v 6873 LNCS, p 257-271, 2011, Parallel Computing Technologies - 11th International Conference, PaCT 2011, Proceedings; ISSN: 03029743, E-ISSN: 16113349; ISBN-13: 9783642231773;
DOI: 10.1007/978-3-642-23178-0_23; Conference: 11th International Conference on Parallel Computing Technologies, PaCT 2011, September 19, 2011 - September 23, 2011; Sponsor: Russian Academy of Sciences; Kazan Federal University; Academy of Sciences of the Republic of Tatarstan; Russian Fund for Basic Research; Lufthansa Official Airlines;
Publisher: Springer Verlag
Author affiliation: 1 Center of Math Computing and Software Engineering, Zhejiang Sci-Tech University, Hangzhou, Zhejiang 310018, China2 Institute of Software, Chinese Academy Sciences, Beijing 100080, China
Abstract: Continuous Petri nets can be used for performance analysis or static analysis. The analysis is based on solving the associated ordinary differential equations. However, large systems of equations impose a heavy computational burden. To address this issue, this paper presents a method to compute these differential equations in parallel. We first map the Petri net to a hypergraph, and then partition the hypergraph with minimal inter-processor communication and good load balance. Based on the partition result, we divide the differential equations into several blocks. Finally, we design a parallel computing algorithm to compute these equations. The software packages hMETIS and SUNDIALS have been used to partition the hypergraph and to support the parallel computing, respectively. The Gas Station problem and the Dining Philosophers problem have been used to demonstrate the benefit of our method. © 2011 Springer-Verlag Berlin Heidelberg. (19 refs.)Main Heading: Ordinary differential equationsControlled terms: Parallel architectures - Parallel processing systems - Petri nets - Philosophical aspects - Static analysisUncontrolled terms: Continuous Petri net - Dining philosophers - Equation groups - Gas stations - Hypergraph - Hypergraph partitioning - Inter processor communication - Load balance - ODE - Parallel Computation - Performance analysisClassification Code: 722 Computer Systems and Equipment - 722.4 Digital Computers and Systems - 723 Computer Software, Data Handling and Applications - 901.1 Engineering Professional Aspects - 921.2 Calculus - 921.4 Combinatorial Mathematics, Includes Graph Theory, Set Theory
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
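The partition-then-integrate pipeline in the abstract can be sketched with toy stand-ins (this is my illustration, not hMETIS or SUNDIALS): a crude greedy bisection of the variable-coupling graph, followed by one explicit Euler step per block, where every block reads only the shared previous state and so could run on a separate processor.

```python
# Toy pipeline: partition ODE variables into two balanced blocks along
# the coupling graph, then advance each block independently.

def greedy_bisect(n, edges):
    """Assign variables 0..n-1 to two blocks of size at most ceil(n/2),
    preferring the block that already holds more of a variable's
    neighbours (a crude cut-minimizing heuristic)."""
    cap = (n + 1) // 2
    part, sizes = [-1] * n, [0, 0]
    for v in range(n):
        score = [0, 0]
        for a, b in edges:
            if a == v and part[b] >= 0:
                score[part[b]] += 1
            if b == v and part[a] >= 0:
                score[part[a]] += 1
        pick = 0 if score[0] >= score[1] else 1
        if sizes[pick] >= cap:        # respect the balance constraint
            pick = 1 - pick
        part[v] = pick
        sizes[pick] += 1
    return part

def advance_block(x_old, block, deriv, dt):
    """One processor's work: explicit Euler on its own variables, reading
    only the shared previous state (so blocks can run concurrently)."""
    dx = deriv(x_old)
    return {v: x_old[v] + dt * dx[v] for v in block}
```

On a chain of four coupled variables this splits the system at a single cut edge, so the two half-systems exchange only one value per step.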
Computational complexity of holant problems
Cai, Jin-Yi1, 2; Lu, Pinyan3; Xia, Mingji4
Source: SIAM Journal on Computing, v 40, n 4, p 1101-1132, 2011; ISSN: 00975397; DOI: 10.1137/100814585;
Publisher: Society for Industrial and Applied Mathematics Publications
Author affiliation: 1 Computer Sciences Department, University of Wisconsin, Madison, WI 53706, United States2 Beijing University, Beijing, China3 Microsoft Research Asia, Beijing, China4 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China
Abstract: We propose and explore a novel alternative framework to study the complexity of counting problems, called Holant problems. Compared to counting constraint satisfaction problems (#CSP), it is a refinement with a more explicit role for the constraint functions. Both graph homomorphism and #CSP can be viewed as special cases of Holant problems. We prove complexity dichotomy theorems in this framework. Our dichotomy theorems apply to local constraint functions, which are symmetric functions on Boolean input variables and evaluate to arbitrary real or complex values. We discover surprising tractable subclasses of counting problems, which could not easily be specified in the #CSP framework. When all unary functions are assumed to be free (Holant* problems), the tractable ones consist of functions that are degenerate, or of arity at most two, or holographic transformations of Fibonacci gates. When only two special unary functions, the constant zero and constant one functions, are assumed to be free (Holant^c problems), we further identify three special families of tractable cases. Then we prove that all other cases are #P-hard. The main technical tool we use and develop is holographic reductions. Another technical tool used in combination with holographic reductions is polynomial interpolation. © 2011 Society for Industrial and Applied Mathematics.
(36 refs.)Main Heading: Computational complexityControlled terms: Boolean functions - Interpolation - Real variablesUncontrolled terms: #P-hardness - Alternative framework - Complex values - Complexity dichotomies - Constraint functions - Constraint Satisfaction Problems - Counting problems - Dichotomy theorem - Graph homomorphisms - Holant problems - Holographic reduction - Input variables - Local constraints - Polynomial interpolation - Symmetric functions - Technical tools - Unary functionsClassification Code: 721.1 Computer Theory, Includes Formal Logic, Automata Theory, Switching Theory, Programming Theory - 921 Mathematics - 921.6 Numerical Methods
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
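The Holant framework described in the abstract becomes concrete with a brute-force evaluator (my own sketch, exponential in the number of edges): each vertex carries a symmetric signature given as a list `f[k]` = value when exactly k incident edges are assigned 1, and the Holant value sums the product of all vertex signatures over every {0,1} edge assignment.

```python
# Brute-force Holant evaluation on a small graph.
from itertools import product

def holant(num_vertices, edges, signatures):
    """signatures[v][k] = f_v evaluated at k incident 1-edges.
    Each signature list must have length deg(v) + 1."""
    total = 0
    for sigma in product([0, 1], repeat=len(edges)):
        ones = [0] * num_vertices            # incident 1-edges per vertex
        for bit, (u, v) in zip(sigma, edges):
            ones[u] += bit
            ones[v] += bit
        term = 1
        for v in range(num_vertices):
            term *= signatures[v][ones[v]]
        total += term
    return total
```

For example, giving every vertex the EXACT-ONE signature [0, 1, 0] makes the Holant value count perfect matchings, one of the classic problems the framework subsumes.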
Geographical location-based convergence-free routing using multiple metrics for satellite networks
Wang, Lu1, 2; Liu, Li-Xiang1; Hu, Xiao-Hui1
Source: Yuhang Xuebao/Journal of Astronautics, v 32, n 7, p 1542-1550, July 2011; Language: Chinese; ISSN: 10001328; DOI: 10.3873/j.issn.1000-1328.2011.07.016;
Publisher: China Spaceflight Society
Author affiliation: 1 Institute of Software, Chinese Academy of Sciences, Beijing 100190, China2 Graduate University of Chinese Academy of Sciences, Beijing 100049, China
Abstract: A novel satellite network routing algorithm named CFR is proposed in this paper. In CFR, geographical location, delay, and packet drop rate are used as metrics to calculate routes that meet different QoS requirements. When packets arrive, instead of using global routing tables, CFR calculates routes in real time, so that no routing convergence is required. In addition, an explicit load balancing mechanism is proposed to achieve load balance. Link load information is exchanged among satellites transmitting packets from the same data flow. In response, a less congested path is selected when there are satellites with heavy link load. The retrieved path does not include the congested link, and a portion of the data is communicated via the retrieved path. CFR achieves good performance in terms of a better distribution of traffic among satellites, lower packet drops, and higher throughput. (15 refs.)Main Heading: SatellitesControlled terms: Convergence of numerical methods - Drops - Packet loss - Parallel architectures - Quality of serviceUncontrolled terms: Convergence free - Geographical locations - Load-Balancing - Multiple metrics - Satellite networkClassification Code: 723 Computer Software, Data Handling and Applications - 722 Computer Systems and Equipment - 718 Telephone Systems and Related Technologies; Line Communications - 921.6 Numerical Methods - 717 Optical Communication - 655.2 Satellites - 443.1 Atmospheric Properties - 716 Telecommunication; Radar, Radio and Television
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
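The table-free, per-packet route computation described in the abstract can be sketched as an on-demand Dijkstra search over a combined cost of the link metrics (this is a hypothetical illustration; the weights `w_delay`/`w_drop` and the function name `route` are mine, and CFR's actual geographic calculation is more involved):

```python
# On-demand multi-metric shortest path: no precomputed routing tables,
# so there is nothing that has to "converge" when topology changes.
import heapq

def route(links, src, dst, w_delay=1.0, w_drop=100.0):
    """links: {node: [(neighbour, delay, drop_rate), ...]}.
    Combined link cost = w_delay * delay + w_drop * drop_rate."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, delay, drop in links.get(u, []):
            cost = d + w_delay * delay + w_drop * drop
            if cost < dist.get(v, float("inf")):
                dist[v] = cost
                prev[v] = u
                heapq.heappush(heap, (cost, v))
    path, node = [], dst                  # raises KeyError if unreachable
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return path[::-1]
```

Weighting drop rate heavily steers traffic away from lossy links even when they are the low-delay choice, which mirrors the QoS trade-off the abstract describes.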
Software reliability prediction with an improved Elman network model
Cheng, Xu-Chao1, 2; Chen, Xin-Yu1, 2; Guo, Ping1
Source: Tongxin Xuebao/Journal on Communications, v 32, n 4, p 86-93, April 2011; Language: Chinese; ISSN: 1000436X;
Publisher: Editorial Board of Journal on Communications
Author affiliation: 1 Image Processing and Pattern Recognition Laboratory, Beijing Normal University, Beijing 100875, China2 State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: In order to improve the accuracy and dependability of using neural networks for software reliability prediction, a multi-objective optimization-based improved Elman recurrent network method (Mop-IElman) was proposed. First, on the basis of the Elman network, a self-delay feedback of the output layer was designed as another context layer. Second, the network architecture and the initial outputs of these two context layers were taken as variables of the network configuration setting, and NSGA-II was employed to simultaneously optimize prediction performance and robustness, yielding the Pareto solution set. After that, by maximizing the sum of prediction performance and robustness, the final network configuration setting was determined. Finally, the proposed method was compared with the feed-forward neural network, the Elman network, and both the single-objective and the multi-objective optimization Elman networks on two real software failure data sets. The results demonstrate that the proposed Mop-IElman achieves higher prediction accuracy and dependability. (20 refs.)Main Heading: Software reliabilityControlled terms: Computer software selection and evaluation - Forecasting - Multiobjective optimization - Network architecture - Neural networksUncontrolled terms: Dependability - Multi objective - NSGA-II - Recurrent networks - Software reliability predictionClassification Code: 722 Computer Systems and Equipment - 723 Computer Software, Data Handling and Applications - 921 Mathematics - 921.5 Optimization Techniques
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
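The architectural idea in the abstract, an Elman cell extended with a second context layer fed by the delayed output, can be sketched as follows. This is a loose, hypothetical forward-pass-only illustration (class name `IElmanCell` and all weight shapes are mine); the paper's method additionally tunes the configuration with NSGA-II, which is omitted here.

```python
# Elman-style recurrent cell with an extra self-delay feedback of the
# output layer acting as a second context layer.
import math
import random

def tanh_vec(v):
    return [math.tanh(x) for x in v]

class IElmanCell:
    def __init__(self, n_in, n_hid, seed=0):
        rnd = random.Random(seed)
        w = lambda r, c: [[rnd.uniform(-0.5, 0.5) for _ in range(c)]
                          for _ in range(r)]
        self.W_in = w(n_hid, n_in)
        self.W_ctx = w(n_hid, n_hid)   # classic context: previous hidden state
        self.W_out_ctx = w(n_hid, 1)   # extra context: previous output
        self.W_out = w(1, n_hid)
        self.h = [0.0] * n_hid         # hidden state (context layer 1)
        self.y_prev = [0.0]            # delayed output (context layer 2)

    def step(self, x):
        mv = lambda M, v: [sum(m * a for m, a in zip(row, v)) for row in M]
        z = [a + b + c for a, b, c in zip(mv(self.W_in, x),
                                          mv(self.W_ctx, self.h),
                                          mv(self.W_out_ctx, self.y_prev))]
        self.h = tanh_vec(z)
        self.y_prev = mv(self.W_out, self.h)
        return self.y_prev[0]
```

Feeding the delayed output back alongside the hidden state gives the network a second, output-level memory, which is the structural change the abstract credits for the improved predictions.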
Mining invocation specifications for API libraries
Zhong, Hao1; Zhang, Lu2, 3; Mei, Hong2, 3
Source: Ruan Jian Xue Bao/Journal of Software, v 22, n 3, p 408-416, March 2011; Language: Chinese; ISSN: 10009825; DOI: 10.3724/SP.J.1001.2011.03931;
Publisher: Chinese Academy of Sciences
Author affiliation: 1 Laboratory for Internet Software Technologies, Institute of Software, The Chinese Academy of Sciences, Beijing 100190, China2 Software Institute, School of Electronics Engineering and Computer Science, Peking University, Beijing 100871, China3 Key Laboratory of High Confidence Software Technologies (Peking University), Ministry of Education, Beijing 100871, China
Abstract: Invocation specifications of an API library are specifications that describe the legal invocation sequences of its methods. Client code should follow invocation specifications when calling methods provided by API libraries. If this procedure is not followed, defects can be introduced into the client code, reducing its integrity. Because invocation specifications define the properties that are trustworthy for a software system, they play a central role in the research of trustworthy software and model checking; however, due to the great effort required to write invocation specifications, most API libraries do not provide such specifications. To address this problem, researchers have proposed various approaches to mine invocation specifications automatically. In this paper, up-to-date research on mining specifications is surveyed, and potential directions for further research are discussed. © ISCAS. (53 refs.)Main Heading: SpecificationsControlled terms: Libraries - Model checking - ResearchUncontrolled terms: API library - Client code - Software systems - Trustworthy softwaresClassification Code: 402.2 Public Buildings - 723.1 Computer Programming - 901.3 Engineering Research - 902.2 Codes and Standards
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Research on decision analysis system based on interactive visual components
Teng, Dong-Xing1; Wang, Zi-Lu1, 2; Yang, Hai-Yan1, 2; Wang, Hong-An1; Dai, Guo-Zhong1
Source: Jisuanji Xuebao/Chinese Journal of Computers, v 34, n 3, p 555-565, March 2011; Language: Chinese; ISSN: 02544164; DOI: 10.3724/SP.J.1016.2011.00555;
Publisher: Science Press
Author affiliation: 1 Intelligence Engineering Laboratory, Institute of Software, Chinese Acad. of Sci., Beijing 100190, China2 Graduate University of Chinese Acad. of Sci., Beijing 100049, China
Abstract: Given that people's analysis and decision-making activities are stepwise and interactive, this paper studies the human-computer collaborative working model on the basis of cognitive psychology. It provides an analysis and decision-making interaction task framework based on interactive visual components, establishes a unified description of interactive visual components, and studies their implementation techniques. These implementation techniques include the visualization and interaction techniques used during the running procedure, and the configuration-history reusing technique used during the configuration procedure. The authors built a decision analysis system based on interactive visual components and verified it in an enterprise. The result of the verification indicates that the system can support analysis and decision-making activities effectively. (24 refs.)Main Heading: Decision makingControlled terms: Decision support systems - Information analysis - Information systems - VisualizationUncontrolled terms: Cognitive psychology - Collaborative working - Decision analysis - Decision supports - Human interactions - Human-computer - Implementation techniques - Information visualization - Interaction techniques - Unified description - Visual components - Visualization techniqueClassification Code: 902.1 Engineering Graphics - 903 Information Science - 912.2 Management
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
Sketch-based design for green geometry and image deformation
Sheng, Bin1, 2; Meng, Weiliang3; Sun, Hanqiu4; Wu, Enhua5, 6
Source: Multimedia Tools and Applications, p 1-19, 2011; ISSN: 13807501, E-ISSN: 15737721; DOI: 10.1007/s11042-011-0860-8; Article in Press
Author affiliation: 1 Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China2 Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, China3 Institute of Software, Chinese Academy of Sciences, LIAMA-NLPR, CAS Institute of Automation, Beijing, China4 The Chinese University of Hong Kong, Shatin, China5 Institute of Software, Chinese Academy of Sciences, Beijing, China6 University of Macau, Macau, China
Abstract: User interfaces have traditionally followed the WIMP (window, icon, menu, pointer) paradigm. Though functional and powerful, they usually make it cumbersome for a novice user to design a complex model, requiring considerable expertise and effort. This paper presents a system for designing geometric models and image deformation with sketching curves, with the use of Green coordinates. In 3D modeling, the user first creates a 3D model by using a sketching interface, where a given 2D curve is interpreted as the projection of a 3D curve. The user can add, remove, and deform these control curves easily, as if working with a 2D line drawing. For a given set of curves, the system automatically identifies the topology and face embedding by applying the graph rotation system. Green coordinates are then used to deform the generated models with a detail-preserving property. Also, we have developed a sketch-based image-editing interface to deform image regions using Green coordinates. Hardware-assisted schemes are provided for both control shape deformation and the subsequent surface optimization. The experimental results demonstrate that 3D/2D deformations can be achieved in real time. © 2011 Springer Science+Business Media, LLC. (39 refs.)Main Heading: Three dimensionalControlled terms: Deformation - Topology - User interfacesUncontrolled terms: 2D line drawing - 3-d modeling - 3D models - Complex model - Control curve - Geometric models - Hardware-assisted - Image deformation - Image regions - Novice user - Real time - Shape deformation - Sketching interface - Surface optimizationClassification Code: 421 Strength of Building Materials; Mechanical Properties - 422 Strength of Building Materials; Test Equipment and Methods - 722.2 Computer Peripheral Equipment - 902.1 Engineering Graphics - 921.4 Combinatorial Mathematics, Includes Graph Theory, Set Theory
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
A covert timing channel via algorithmic complexity attacks: Design and analysis
Sun, Xiaoshan1; Cheng, Liang2; Zhang, Yang2
Source: IEEE International Conference on Communications, 2011, 2011 IEEE International Conference on Communications, ICC 2011; ISSN: 05361486; ISBN-13: 9781612842332; DOI: 10.1109/icc.2011.5962718; Article number: 5962718; Conference: 2011 IEEE International Conference on Communications, ICC 2011, June 5, 2011 - June 9, 2011; Sponsor: IEEE Communication Society; IEICE Communications Society; Science Council of Japan;
Publisher: Institute of Electrical and Electronics Engineers Inc.
Author affiliation: 1 State Key Laboratory of Information Security, Graduate School, Chinese Academy of Sciences, China2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, China
Abstract: A covert channel is a communication channel that bypasses the access controls of a system and is therefore a threat to the system's security. In this paper, we propose a new covert timing channel that exploits algorithmic-complexity vulnerabilities in the kernel's name-lookup algorithm. This covert channel has a high capacity and is practically exploitable: in our experiments, the data rate reaches 2256 bps at a very low error rate, which is high enough for practical use and makes the channel a real danger. To our knowledge, no previous work has proposed or implemented this covert channel. We describe our design and implementation of the covert channel on an SELinux system, discuss the subtle issues that arose in the design, present performance data for the covert channel, and analyse its capacity. © 2011 IEEE. (23 refs.)
Main Heading: Parallel processing systems
Controlled terms: Access control - Algorithms - Computational complexity - Design
Uncontrolled terms: Algorithmic complexity - Covert channels - Covert timing channels - Data rates - Design and analysis - Error rate - High capacity - Lookup algorithms - Performance data
Classification Code: 408 Structural Design - 721.1 Computer Theory, Includes Formal Logic, Automata Theory, Switching Theory, Programming Theory - 722.4 Digital Computers and Systems - 723 Computer Software, Data Handling and Applications
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
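The mechanism this abstract describes can be sketched in miniature: the sender modulates the latency of a shared operation (here, a name lookup whose algorithmic worst case can be triggered on demand, e.g. by hash collisions), and the receiver recovers bits by timing that operation against a threshold. The sketch below is a self-contained simulation with invented delay constants; the paper's 2256 bps figure comes from its real kernel-level implementation, not from anything shown here.

```python
import random

# Invented latencies (seconds): SLOW models a worst-case (colliding)
# name lookup the sender can force, FAST an ordinary lookup.
SLOW = 0.008
FAST = 0.001

def encode(bits):
    # Sender: a '1' bit is sent by forcing a slow lookup, a '0' by a
    # fast one; in the simulation we just emit the resulting delays.
    return [SLOW if b else FAST for b in bits]

def add_noise(delays, jitter=0.001, seed=42):
    # Crude stand-in for system noise: additive uniform jitter.
    rng = random.Random(seed)
    return [d + rng.uniform(0, jitter) for d in delays]

def decode(delays, threshold=(SLOW + FAST) / 2):
    # Receiver: time each lookup and threshold the latency.
    return [1 if d > threshold else 0 for d in delays]
```

With the gap between SLOW and FAST well above the jitter, decoding is error-free, which is the property (high capacity at low error rate) the paper demonstrates for the real channel.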
A tabu search approach to fuzzy optimization of Camellia oleifera fertilization
Song, Qin1; Zhao, Fukuan1; Zheng, Yujun2
Source: IFIP Advances in Information and Communication Technology, v 344 AICT, n PART 1, p 125-130, 2011, Computer and Computing Technologies in Agriculture IV - 4th IFIP TC 12 Conference, CCTA 2010, Selected Papers; ISSN: 18684238; ISBN-13: 9783642183324; DOI: 10.1007/978-3-642-18333-1_17; Conference: 4th IFIP International Conference on Computer and Computing Technologies in Agriculture and the 4th Symposium on Development of Rural Information, CCTA 2010, October 22, 2010 - October 25, 2010; Sponsor: China Agricultural University; China Society of Agricultural Engineering; International Federation for Information Processing (IFIP); Beijing Society for Information Technology in Agriculture; National Natural Science Foundation of China;
Publisher: Springer New York
Author affiliation: 1 Department of Biotechnology, Beijing University of Agriculture, 102206 Beijing, China; 2 Institute of Software, Chinese Academy of Sciences, 100093 Beijing, China
Abstract: Traditional optimization methods have been applied for years to high-yield fertilization models, which are usually well formulated with crisp coefficients and variables. Unfortunately, real-world crop-growing environments and processes are often not deterministic. In this paper we establish a fuzzy mathematical model relating Camellia oleifera yield to fertilizer application rates, in which the variation coefficients of N, P, and K are described with fuzzy numbers. In particular, we present a tabu search algorithm for finding a set of fertilization solutions that maximize Camellia oleifera yield under fuzzy measures including the expected value, optimistic value, and pessimistic value. Our approach is more realistic and practical for real-world problems because it takes vague and imprecise data into consideration, provides more comprehensive decision support by generating a set of high-quality alternatives, and can be applied to fertilizer decisions for a variety of other crops. © 2011 IFIP International Federation for Information Processing. (16 refs.)
Main Heading: Tabu search
Controlled terms: Crops - Cultivation - Decision support systems - Fuzzy sets - Mathematical models
Uncontrolled terms: Application rates - Decision supports - Expected values - Fuzzy measures - Fuzzy numbers - Fuzzy optimization - High quality - Imprecise data - Oleifera - Optimistic value - Optimization method - Pessimistic value - Real-world - Real-world problem - Tabu search algorithms - Variation coefficient
Classification Code: 723 Computer Software, Data Handling and Applications - 821.3 Agricultural Methods - 821.4 Agricultural Products - 921 Mathematics - 921.5 Optimization Techniques
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
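The search loop this abstract describes can be sketched as a minimal tabu search over discrete N/P/K application rates. The yield function below is an invented quadratic response surface standing in for the paper's fuzzy expected-value measure (its peak at N=60, P=30, K=40 is arbitrary); the tabu mechanics themselves (move to the best non-tabu neighbor, fixed-tenure tabu list, track the best solution seen) are the standard ones.

```python
def expected_yield(n, p, k):
    # Hypothetical response surface (assumption): diminishing-returns
    # quadratic with its optimum at rates (60, 30, 40).
    return -(n - 60) ** 2 - 2 * (p - 30) ** 2 - (k - 40) ** 2

def neighbors(sol, step=10, bounds=(0, 100)):
    # Neighbors: change one nutrient rate by +/- one step, in bounds.
    lo, hi = bounds
    for i in range(3):
        for d in (-step, step):
            nb = list(sol)
            nb[i] += d
            if lo <= nb[i] <= hi:
                yield tuple(nb)

def tabu_search(start=(0, 0, 0), iters=50, tenure=5):
    current = best = start
    tabu = []  # recently visited solutions, oldest first
    for _ in range(iters):
        cands = [nb for nb in neighbors(current) if nb not in tabu]
        if not cands:
            break
        # Move to the best non-tabu neighbor, even if it is worse
        # than the current solution (this is what escapes local optima).
        current = max(cands, key=lambda s: expected_yield(*s))
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)
        if expected_yield(*current) > expected_yield(*best):
            best = current
    return best
```

In the paper's setting the objective would be evaluated three ways (expected, optimistic, pessimistic value of the fuzzy yield), yielding the set of alternative solutions the abstract mentions rather than a single optimum.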
Construction of 1-resilient Boolean functions with optimal algebraic immunity and good nonlinearity
Pan, Sen-Shan1; Fu, Xiao-Tong1, 2; Zhang, Wei-Guo1
Source: Journal of Computer Science and Technology, v 26, n 2, p 269-275, March 2011; ISSN: 10009000; DOI: 10.1007/s11390-011-9433-6;
Publisher: Springer New York
Author affiliation: 1 State Key Laboratory of Integrated Service Networks, Xidian University, Xi'an 710071, China; 2 State Key Laboratory of Information Security, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Abstract: This paper presents a construction for a class of 1-resilient functions with optimal algebraic immunity on an even number of variables. The construction is based on the concatenation of two balanced functions in associative classes. For some n, a subset of the 1-resilient functions with maximum algebraic immunity constructed in this paper can achieve almost optimal nonlinearity. Apart from their high nonlinearity, the functions reach Siegenthaler's upper bound on algebraic degree. A class of 1-resilient functions on any number n > 2 of variables with at least sub-optimal algebraic immunity is also provided. © 2011 Springer Science+Business Media, LLC & Science Press, China. (10 refs.)
Main Heading: Boolean functions
Controlled terms: Algebra - Hydraulics - Optimization
Uncontrolled terms: 1-resilient - algebraic degree - Algebraic degrees - algebraic immunity - Balanced functions - High nonlinearity - Non-Linearity - Resilient Boolean functions - Resilient function - stream ciphers - Upper Bound
Classification Code: 632.1 Hydraulics - 921.1 Algebra - 921.5 Optimization Techniques
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.
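Two of the properties this abstract trades off, resiliency and nonlinearity, are both readable from a Boolean function's Walsh spectrum: a function is m-resilient iff its spectrum vanishes on all inputs of Hamming weight at most m, and its nonlinearity is 2^(n-1) minus half the largest spectral magnitude. A minimal sketch (these are the standard spectral checks, not the paper's construction):

```python
def walsh_transform(tt):
    """Walsh spectrum of a Boolean function given as a 0/1 truth
    table of length 2**n: W_f(a) = sum_x (-1)^(f(x) XOR <a,x>).
    Computed in-place with the fast (butterfly) transform."""
    w = [(-1) ** b for b in tt]
    n = len(tt)
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                x, y = w[j], w[j + h]
                w[j], w[j + h] = x + y, x - y
        h *= 2
    return w

def nonlinearity(tt):
    # nl(f) = 2**(n-1) - max_a |W_f(a)| / 2
    w = walsh_transform(tt)
    return len(tt) // 2 - max(abs(v) for v in w) // 2

def is_resilient(tt, order=1):
    # m-resilient <=> W_f(a) = 0 for every a with weight(a) <= m
    # (a = 0 included, which is exactly balancedness).
    w = walsh_transform(tt)
    return all(w[a] == 0 for a in range(len(tt))
               if bin(a).count("1") <= order)
```

For example, the 3-variable parity function is 2-resilient but has nonlinearity 0, while AND on 2 variables has nonlinearity 1 but is not even balanced; the paper's point is constructing functions that score well on resiliency, nonlinearity, algebraic degree, and algebraic immunity simultaneously.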
Research on UAV path planning
Zhu, Hongguo1, 2; Hai, Xin1; Zheng, Changwen2
Source: Applied Mechanics and Materials, v 58-60, p 2351-2355, 2011, Information Technology for Manufacturing Systems II; ISSN: 16609336; ISBN-13: 9783037851494;
DOI: 10.4028/www.scientific.net/AMM.58-60.2351; Conference: 2011 International Conference on Information Technology for Manufacturing Systems, ITMS 2011, May 7, 2011 - May 8, 2011; Sponsor: University of Adelaide; Huazhong University of Science and Technology;
Publisher: Trans Tech Publications
Author affiliation: 1 National University of Defense Technology, Changsha, Hunan, 410073, China; 2 Institute of Software, Chinese Academy of Sciences, Beijing, 100090, China
Abstract: Path planning is key to the performance of an unmanned aerial vehicle (UAV). Research results are summarized with respect to tactical planning and mission planning. The disadvantages of existing solutions are pointed out, and possible research areas are suggested. © (2011) Trans Tech Publications, Switzerland. (19 refs.)
Main Heading: Industrial research
Controlled terms: Information technology - Manufacture
Uncontrolled terms: Mission path planning - Mission planning - Research areas - Research results - Tactical planning
Classification Code: 537.1 Heat Treatment Processes - 901.3 Engineering Research - 903 Information Science
Database: Compendex
Compilation and indexing terms, Copyright 2011 Elsevier Inc.