The primary aim of this book is to gather and collate articles that represent the best and latest thinking in the domain of technology transfer, drawn from research, academia, and practice around the world. We envisage that the book will, as a result of t
Doctoral thesis / dissertation from the year 2010 in the subject Computer Science - Miscellaneous, grade: cum laude, UMIT Private University for Health Sciences, Medical Informatics and Technology, language: German, Abstract: In Germany there is a large
This study was conducted to evaluate the success of the pilot phase of the New Partnership for Africa's Development (Nepad) e-School project in Kenya. The study employed survey research methodology. All six of the Nepad e-Schools in Kenya were included an
This book comprehensively presents a novel approach to the systematic security hardening of software design models expressed in the standard UML language. It combines model-driven engineering and the aspect-oriented paradigm to integrate security practices into the early phases of the software development process. To this end, a UML profile has been developed for the specification of security hardening aspects on UML diagrams. In addition, a weaving framework, with the underlying theoretical foundations, has been designed for the systematic injection of security aspects into UML models. The work is organized as follows: Chapter 1 presents an introduction to software security, model-driven engineering, UML, and aspect-oriented technologies. Chapters 2 and 3 provide an overview of the UML language and the main concepts of aspect-oriented modeling (AOM), respectively. Chapter 4 explores the area of model-driven architecture with a focus on model transformations. The main approaches adopted in the literature for security specification and hardening are presented in Chapter 5. After these more general presentations, Chapter 6 introduces the AOM profile for the specification of security aspects. Chapter 7 then details the design and implementation of the security weaving framework, including several real-life case studies to illustrate its applicability. Chapter 8 elaborates an operational semantics for the matching/weaving processes in activity diagrams, while Chapters 9 and 10 present a denotational semantics for aspect matching and weaving in executable models following a continuation-passing style. Finally, a summary and evaluation of the work are provided in Chapter 11. The book will benefit researchers in academia and industry, as well as students interested in learning about recent research advances in the field of software security engineering.
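The core idea the book builds on, matching join points in a model against a pointcut and weaving security advice in at those points, can be sketched in a few lines. The following toy Python example uses hypothetical names (`Activity`, `Aspect`, `weave`) that are illustrations only, not the book's UML profile or weaving framework; it models an activity as a flat list of action names rather than a real UML diagram.

```python
# Toy sketch of aspect matching/weaving on a simplified "model":
# an Activity is just a named list of action names.
# All names here are hypothetical, for illustration only.
from dataclasses import dataclass, field


@dataclass
class Activity:
    name: str
    actions: list = field(default_factory=list)


@dataclass
class Aspect:
    pointcut: str            # substring matched against action names (join points)
    advice: str              # security action to inject
    position: str = "before"  # inject "before" or "after" each matched action


def weave(activity: Activity, aspect: Aspect) -> Activity:
    """Return a new Activity with the aspect's advice injected
    around every action that matches the pointcut."""
    woven = []
    for action in activity.actions:
        if aspect.pointcut in action:          # join point matched
            if aspect.position == "before":
                woven += [aspect.advice, action]
            else:
                woven += [action, aspect.advice]
        else:
            woven.append(action)
    return Activity(activity.name, woven)


login = Activity("login", ["read_credentials", "check_password", "open_session"])
hardened = weave(login, Aspect(pointcut="open_session", advice="log_audit_event"))
print(hardened.actions)
# ['read_credentials', 'check_password', 'log_audit_event', 'open_session']
```

The point of the sketch is that weaving is a model-to-model transformation: the base model is left untouched and a hardened copy is produced, which is the property that lets security concerns stay separated from functional design until weaving time.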
The authors of this book conducted several research initiatives in the areas of computer security, privacy, and cyber forensics. The content reported here is the result of a four-year research project on the aspect-oriented security hardening of UML design models and is based on a fruitful collaboration between Concordia University and Ericsson under a research partnership program of the Canadian Natural Sciences and Engineering Research Council (NSERC).
This book features papers from CEPE-IACAP 2015, a joint international conference focused on the philosophy of computing. Inside, readers will discover essays that explore current issues in epistemology, philosophy of mind, logic, and philosophy of science through the lens of computation. Coverage also examines applied issues of ethical, social, and political interest. The contributors first explore how computation has changed philosophical inquiry. Computers are now capable of joining humans in exploring foundational issues. Thus, we can ponder machine-generated explanation, thought, agency, and other quite fascinating concepts. The papers are also concerned with normative aspects of the computer and information technology revolution. They examine technology-specific analyses of key challenges, from Big Data to autonomous robots to expert systems for infrastructure control and financial services. The virtue of a collection that ranges over philosophical questions, as this one does, lies in the prospects for a more integrated understanding of such issues. These are early days in the partnership between philosophy and information technology. Philosophers and researchers are still sorting out many foundational issues, and they will need to deploy all of the tools of philosophy to establish this foundation. This volume admirably showcases those tools in the hands of some excellent scholars.
Parallel Programming: Concepts and Practice provides an upper-level introduction to parallel programming. In addition to covering general parallelism concepts, this text teaches practical programming skills for both shared-memory and distributed-memory architectures. The authors' open-source system for automated code evaluation provides easy access to parallel computing resources, making the book particularly suitable for classroom settings.

- Covers parallel programming approaches for single compute nodes and HPC clusters: OpenMP, multithreading, SIMD vectorization, MPI, UPC++
- Contains numerous practical parallel programming exercises
- Includes access to an automated code evaluation tool that gives students the opportunity to program in a web browser and receive immediate feedback on the validity of their results
- Features example-based teaching of concepts to enhance learning outcomes

Bertil Schmidt is a tenured Full Professor and Chair for Parallel and Distributed Architectures at the Johannes Gutenberg University Mainz, Germany. Prior to that, he was a faculty member at Nanyang Technological University (Singapore) and at the University of New South Wales (UNSW). His research group has designed a variety of parallel algorithms and tools for bioinformatics, mainly focusing on the analysis of large-scale sequence and short-read datasets. For his research work, he has received a CUDA Research Center award, a CUDA Academic Partnership award, a CUDA Professor Partnership award, and the Best Paper Award at IEEE ASAP 2009. Furthermore, he serves as the champion for Bioinformatics and Computational Biology on gpucomputing.net. He is also director of the Competence Center for HPC in the Natural Sciences, which has recently been funded by the Carl-Zeiss-Foundation.
His work has been published in leading journals such as Bioinformatics, BMC Bioinformatics, IEEE Transactions on Parallel and Distributed Systems, IEEE Transactions on VLSI, BMC Genomics, Parallel Computing, and the Journal of Parallel and Distributed Computing.
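The shared-memory pattern such a course typically starts with, splitting a reduction across workers that each process an independent chunk, can be sketched briefly. The book itself teaches this in C++ with OpenMP, MPI, and UPC++; the Python version below is only a language-neutral illustration of the chunked map-reduce idea, with all names chosen for this sketch.

```python
# Data-parallel reduction sketch: split the input into independent
# chunks, reduce each chunk in a worker, then combine partial results.
from concurrent.futures import ThreadPoolExecutor


def partial_sum(chunk):
    # Each worker reduces its own chunk; no shared mutable state,
    # so no locks or synchronization are needed.
    return sum(chunk)


data = list(range(1_000_000))
n_workers = 4
chunk_size = len(data) // n_workers
chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

with ThreadPoolExecutor(max_workers=n_workers) as pool:
    total = sum(pool.map(partial_sum, chunks))   # combine partial sums

print(total == sum(data))  # True: parallel reduction matches the serial result
```

In an OpenMP setting the same decomposition is what `#pragma omp parallel for reduction(+:total)` performs implicitly; making that mapping explicit is the kind of exercise the book's automated evaluation tool checks.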
Will your next doctor be a human being or a machine? Will you have a choice? If you do, what should you know before making it? This book introduces the reader to the pitfalls and promises of artificial intelligence (AI) in its modern incarnation and the growing trend of systems reaching off the Web into the real world. The convergence of AI, social networking, and modern computing is creating a historic inflection point in the partnership between human beings and machines, with potentially profound impacts on the future not only of computing but of our world and species. AI experts and researchers James Hendler, co-originator of the Semantic Web (Web 3.0), and Alice Mulvehill, developer of AI-based operational systems for DARPA, the Air Force, and NASA, explore the social implications of AI systems in the context of a close examination of the technologies that make them possible. The authors critically evaluate the utopian claims and dystopian counterclaims of AI prognosticators. Social Machines: The Coming Collision of Artificial Intelligence, Social Networking, and Humanity is your richly illustrated field guide to the future of your machine-mediated relationships with other human beings and with increasingly intelligent machines.
What Readers Will Learn:

- What the concept of a social machine is and how the activities of non-programmers are contributing to machine intelligence
- How modern artificial intelligence technologies, such as Watson, are evolving and how they process knowledge both from carefully produced information (such as Wikipedia and journal articles) and from big data collections
- The fundamentals of neuromorphic computing, knowledge graph search, and linked data, as well as the basic technology concepts that underlie networking applications such as Facebook and Twitter
- How the change in attitudes towards cooperative work on the Web, especially in the younger demographic, is critical to the future of Web applications

Who This Book Is For: General readers and technically engaged developers, entrepreneurs, and technologists interested in the threats and promises of the accelerating convergence of artificial intelligence with social networks and mobile web technologies.

James Hendler is the Director of the Institute for Data Exploration and Applications and the Tetherless World Professor of Computer, Web and Cognitive Sciences at RPI. He also serves as a Director of the UK's charitable Web Science Trust. Hendler has authored over 250 technical papers in the areas of the Semantic Web, artificial intelligence, agent-based computing, and high-performance processing. One of the originators of the Semantic Web, Hendler was the recipient of a 1995 Fulbright Foundation Fellowship, is a former member of the US Air Force Science Advisory Board, and is a Fellow of the American Association for Artificial Intelligence, the British Computer Society, the IEEE, and the AAAS. He is also the former Chief Scientist of the Information Systems Office at the US Defense Advanced Research Projects Agency (DARPA) and was awarded a US Air Force Exceptional Civilian Service Medal in 2002. He is also the first computer scientist to serve on the Board of Reviewing Editors for Science.
In 2010, Hendler was named one of the 20 most innovative professors in America by Playboy magazine and was selected as an Internet Web Expert by the US government. In 2012, he was one of the inaugural recipients of the Strata Conference Big Data awards for his work on large-scale open government data, and he is a columnist and associate editor of the Big Data journal. In 2013, he was appointed by the governor as the Open Data Advisor to New York State, and in 2014 he won a prestigious IBM faculty award for his work on cognitive computing and artificial intelligence. Alice M. Mulvehill is a research scientist who provides consulting through her company, Memory Based Research, LLC. She was previously a lead scientist at Raytheon/BBN Technologies, where she led the development of several advanced decision support systems for the Air Force and DARPA. Prior to joining BBN, she worked for the MITRE Corporation as a researcher, specializing in knowledge acquisition, knowledge representation, case-based reasoning, and planning. While at MITRE she was part of early research teams that explored the use of Artificial Intelligence techniques for the development of planning and scheduling systems. She was a participant in the DARPA/Rome Lab Planning Initiative and participated in