This book covers pseudorandom number generation algorithms and evaluation techniques, and offers practical advice and code examples. Random Numbers and Computers is an essential introduction to, or refresher on, pseudorandom numbers in computer science. The first comprehensive book on the topic, it provides readers with a practical introduction to the techniques of pseudorandom number generation, including how the algorithms work and how to test the output to decide whether it is suitable for a particular purpose. Practical applications are demonstrated with hands-on presentations and descriptions that readers can apply directly to their own work. Examples are given in C and Python, with an emphasis on understanding the algorithms to the point of practical application. The examples are meant to be implemented, experimented with, and improved or adapted by the reader.
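The generate-then-test theme can be illustrated with a minimal sketch (not taken from the book itself): a linear congruential generator using the well-known MINSTD parameters, followed by a crude uniformity check of the kind the book's statistical tests make rigorous.

```python
# A minimal LCG sketch with the classic MINSTD parameters
# (a=48271, c=0, m=2^31 - 1), plus a crude uniformity check.

def lcg(seed, n, a=48271, c=0, m=2**31 - 1):
    """Return n pseudorandom integers in (0, m) from an LCG."""
    x = seed
    out = []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x)
    return out

# Crude sanity check: bucket 100,000 draws into 10 equal-width bins
# and confirm each bin holds roughly 10% of the samples.
draws = lcg(seed=12345, n=100_000)
bins = [0] * 10
for x in draws:
    bins[x * 10 // (2**31 - 1)] += 1
assert all(abs(b - 10_000) < 1000 for b in bins)
```

A passing check like this is necessary but far from sufficient; the statistical test batteries discussed in the book probe much subtler structure than bin counts.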
Personas, an essential step in successful product development, are profiles of target customers created to focus development teams on user needs, context, and pain points. The Persona Lifecycle, published by Morgan Kaufmann in 2007, is a comprehensive treatment of persona creation, use, and evaluation, complete with case studies, justifications, and methodology. The groundbreaking book has received 17 five-star reviews on Amazon and accolades from gurus in the field. While the book is a must-have for user experience practitioners, the industry has called for a shorter, quick-reference edition that features just the basic steps for creating and using personas. The Persona Lifecycle: Practitioners' Quick Reference is a low-priced, condensed version that borrows just the basic steps from the parent book and presents a how-to for students, those new to the field, and practitioners wanting a quick refresher on the job.
Embedded systems have long been essential in application areas in which human control is impossible or infeasible. The development of modern embedded systems is becoming increasingly difficult and challenging because of their overall system complexity, their tighter and cross-functional integration, the increasing requirements concerning safety and real-time behavior, and the need to reduce development and operation costs. This book provides a comprehensive overview of the Software Platform Embedded Systems (SPES) modeling framework and demonstrates its applicability to embedded system development in various industry domains such as automation, automotive, avionics, energy, and healthcare. In SPES 2020, twenty-one partners from academia and industry joined forces to develop a modeling framework that reflects the current state of the art in embedded systems engineering and to evaluate it in different industrial domains. The content of this book is structured in four parts. Part I, "Starting Point", discusses the status quo of embedded systems development and model-based engineering, and summarizes the key requirements faced when developing embedded systems in different application domains. Part II, "The SPES Modeling Framework", describes the SPES modeling framework. Part III, "Application and Evaluation of the SPES Modeling Framework", reports on the validation steps taken to ensure that the framework met the requirements discussed in Part I. Finally, Part IV, "Impact of the SPES Modeling Framework", summarizes the results achieved and provides an outlook on future work. The book is mainly aimed at professionals and practitioners who deal with the development of embedded systems on a daily basis. Researchers in academia and industry may use it as a compendium of the requirements and state-of-the-art solution concepts for embedded systems development.
For undergraduate or advanced undergraduate courses in Classical Natural Language Processing, Statistical Natural Language Processing, Speech Recognition, Computational Linguistics, and Human Language Processing. An explosion of Web-based language techniques, the merging of distinct fields, the availability of phone-based dialogue systems, and much more make this an exciting time in speech and language processing. The first of its kind to thoroughly cover language technology - at all levels and with all modern technologies - this text takes an empirical approach to the subject, based on applying statistical and other machine-learning algorithms to large corpora. The authors cover areas that traditionally are taught in different courses, to describe a unified vision of speech and language processing. Emphasis is on practical applications and scientific evaluation. An accompanying Website contains teaching materials for instructors, with pointers to language processing resources on the Web. The Second Edition offers a significant amount of new and extended material. Supplements: Click on the Resources tab to view downloadable files: * Solutions * PowerPoint Lecture Slides (Chapters 1-5, 8-10, 12-13, and 24) Now Available! * For additional resources visit the author website:
Quality of Protection: Security Measurements and Metrics is an edited volume based on the Quality of Protection Workshop held at ESORICS 2005, the flagship European Symposium on Research in Computer Security, in Milano, Italy (September 2005). This volume discusses how security research can progress towards a notion of quality of protection in security comparable to the notion of quality of service in networking, and to software measurements and metrics in empirical software engineering. Information security in the business setting has matured in the last few decades. Standards such as ISO 17799 and the Common Criteria (ISO 15408), together with a number of industry and academic certifications and risk analysis methodologies, have raised the bar on what is considered a good security solution from a business perspective. Yet the evaluation of security solutions still has a largely qualitative flavor: notions such as security metrics, Quality of Protection (QoP), or Protection Level Agreement (PLA) have only recently surfaced in the literature.
Quality of Protection: Security Measurements and Metrics is designed for a professional audience, composed of researchers and practitioners in industry. This book is also suitable for graduate-level students in computer science and telecommunications.
Improve your development processes and your products! User story mapping is a method developed by Jeff Patton that makes it significantly easier to create a coherent user experience within agile projects. The idea: product development is aligned with the users' workflow and is planned, documented, and visualized in flexibly adaptable story maps. This gives the entire team (product owners, designers, developers, and clients) a markedly better shared understanding of the overall process and of the product being developed. At the same time, it reduces the risk of getting bogged down in unimportant details, or even of building a product that does not help the user. User story maps instead of requirements documents: in agile development, requirements are broken down into user stories. User story mapping goes further still and places the stories in an overall context that every team member can follow. Stories as prompts for conversations: a good user story map fosters conversations among everyone involved in creating a product, as well as with those who will ultimately use it. Better communication = better products. User story maps are like geographic maps: the maps preserve the narrative structure of user stories, while individual aspects can be singled out, developed further, and explored in depth at any time. Optimal outcomes instead of as many features as possible: the method helps identify the features that are genuinely useful as well as affordable and feasible. The key: focusing on the outcome and on the steps that lead to it. Continuous learning: conversations and maps accompany every project step, e.g. evaluating opportunities and risks, testing with users and customers, iterating, and drawing conclusions from what has been learned.
Delve into your data for the key to success Data mining is quickly becoming integral to creating value and business momentum. The ability to detect unseen patterns hidden in the numbers exhaustively generated by day-to-day operations allows savvy decision-makers to exploit every tool at their disposal in the pursuit of better business. By creating models and testing whether patterns hold up, it is possible to discover new intelligence that could change your business's entire paradigm for a more successful outcome. Data Mining for Dummies shows you why it doesn't take a data scientist to gain this advantage, and empowers average business people to start shaping a process relevant to their business's needs. In this book, you'll learn the hows and whys of mining to the depths of your data, and how to make the case for heavier investment into data mining capabilities. The book explains the details of the knowledge discovery process including: * Model creation, validity testing, and interpretation * Effective communication of findings * Available tools, both paid and open-source * Data selection, transformation, and evaluation Data Mining for Dummies takes you step-by-step through a real-world data-mining project using open-source tools that allow you to get immediate hands-on experience working with large amounts of data. You'll gain the confidence you need to start making data mining practices a routine part of your successful business. If you're serious about doing everything you can to push your company to the top, Data Mining for Dummies is your ticket to effective data mining.
Computer Architecture: A Quantitative Approach has been considered essential reading by instructors, students, and practitioners of computer design for over 20 years. The sixth edition of this classic textbook from Hennessy and Patterson, winners of the 2017 ACM A.M. Turing Award recognizing contributions of lasting and major technical importance to the computing field, is fully revised with the latest developments in processor and system architecture. The text now features examples from the RISC-V ("RISC Five") instruction set architecture, a modern RISC instruction set developed and designed to be a free and openly adoptable standard. It also includes a new chapter on domain-specific architectures and an updated chapter on warehouse-scale computing that features the first public information on Google's newest WSC. True to its original mission of demystifying computer architecture, this edition continues the longstanding tradition of focusing on areas where the most exciting computing innovation is happening, while always keeping an emphasis on good engineering design. Key features: * Includes a new chapter on domain-specific architectures, explaining how they are the only path forward for improved performance and energy efficiency given the end of Moore's Law and Dennard scaling * Features the first publication of several DSAs from industry * Features extensive updates to the chapter on warehouse-scale computing, with the first public information on the newest Google WSC * Offers updates to other chapters, including new material dealing with the use of stacked DRAM; data on the performance of the new NVIDIA Pascal GPU vs. the new AVX-512 Intel Skylake CPU; and extensive additions to content covering multicore architecture and organization * Includes "Putting It All Together" sections near the end of every chapter, providing real-world technology examples that demonstrate the principles covered in each chapter * Includes review appendices in the printed text and additional reference appendices available online * Includes updated and improved case studies and exercises ACM named John L. Hennessy and David A. Patterson recipients of the 2017 ACM A.M. Turing Award for pioneering a systematic, quantitative approach to the design and evaluation of computer architectures with enduring impact on the microprocessor industry.
In many decision problems, it is a priori known that the target function should satisfy certain constraints imposed by, for example, economic theory or a human decision-maker. One common type is the monotonicity constraint, stating that the greater an input is, the greater the output must be, all other inputs being equal. Well-known examples include investment decisions, medical diagnosis, and selection and evaluation tasks. However, the models obtained by traditional data mining techniques alone often do not meet these constraints. Therefore, this book provides a thorough study of the incorporation of monotonicity constraints into a data mining process to improve knowledge discovery and facilitate the decision-making process for end-users by deriving more accurate and plausible decision models. The main contributions include a novel procedure to test the degree of monotonicity of a data set, a greedy algorithm to transform non-monotone into monotone data, and extended and novel approaches to build monotone decision models. The theoretical and empirical findings should be valuable to graduates, researchers, and practitioners involved in the study and development of data mining systems.
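The idea of measuring a data set's degree of monotonicity can be sketched in a few lines. This is a hypothetical illustration, not the book's own procedure: it simply counts comparable pairs of examples (one example dominating the other in every feature) that violate the constraint "greater inputs must not yield a smaller output".

```python
# Hypothetical sketch: estimate the degree of monotonicity of a labeled
# data set as the fraction of comparable pairs that respect the
# monotonicity constraint.

def dominates(a, b):
    """True if every feature of a is >= the matching feature of b."""
    return all(x >= y for x, y in zip(a, b))

def monotonicity_degree(X, y):
    """Fraction of comparable (i, j) pairs, with X[i] dominating X[j],
    for which y[i] >= y[j] also holds."""
    comparable = violations = 0
    for i in range(len(X)):
        for j in range(len(X)):
            if i != j and dominates(X[i], X[j]):
                comparable += 1
                if y[i] < y[j]:
                    violations += 1
    return 1.0 if comparable == 0 else 1 - violations / comparable

# Toy credit-scoring data: (income, years employed) -> loan grade.
X = [(30, 1), (50, 2), (50, 3), (70, 5)]
y = [1, 2, 1, 3]  # the third example breaks monotonicity
print(monotonicity_degree(X, y))
```

On this toy data, one of six comparable pairs violates the constraint, giving a degree of 5/6; a fully monotone data set scores 1.0. The book's procedure and its monotone-repair algorithm are, of course, considerably more sophisticated than this quadratic pair count.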
This successful textbook on predictive text mining offers a unified perspective on a rapidly evolving field, integrating topics spanning the varied disciplines of data science, machine learning, databases, and computational linguistics. Serving also as a practical guide, this unique book provides helpful advice illustrated by examples and case studies. This highly anticipated second edition has been thoroughly revised and expanded with new material on deep learning, graph models, mining social media, errors and pitfalls in big data evaluation, Twitter sentiment analysis, and dependency parsing. The fully updated content also features in-depth discussions of document classification, information retrieval, clustering and organizing documents, information extraction, web-based data sourcing, and prediction and evaluation. Features: includes chapter summaries and exercises; explores the application of each method; provides several case studies; contains links to free text-mining software.