
Making software : what really works, and why we believe it / edited by Andy Oram and Greg Wilson.

No doubt, you've heard many claims about how some tool, technology, or practice improves software development. But which claims are verifiable? In this book, leading thinkers offer essays that uncover the truth and unmask myths commonly held among the software development community.

Bibliographic Details
Classification: Electronic Book
Other Authors: Oram, Andrew (Editor), Wilson, Greg, 1963- (Editor)
Format: Electronic eBook
Language: English
Published: Sebastopol, Calif. : O'Reilly, ©2011.
Edition: 1st ed.
Series: Theory in practice (Sebastopol, Calif.)
Subjects: Computer software -- Development
Online Access: Full text (requires prior registration with an institutional email address)

MARC

LEADER 00000cam a2200000Ia 4500
001 OR_ocn702151490
003 OCoLC
005 20231017213018.0
006 m o d
007 cr an|||||||||
008 110216s2011 enk ob 001 0 eng
040 |a CIT  |b eng  |e pn  |c CIT  |d CUS  |d UMI  |d OCLCQ  |d N$T  |d COO  |d HEBIS  |d OCLCQ  |d DEBSZ  |d YDXCP  |d OCLCF  |d EBLCP  |d DKU  |d S4S  |d E7B  |d OCLCQ  |d S3O  |d OCLCQ  |d CEF  |d UAB  |d OCLCQ  |d UKAHL  |d OCLCQ  |d OCLCO  |d CZL  |d OCLCQ  |d OCLCO 
019 |a 738407040  |a 748338381  |a 772190037  |a 780425319  |a 1295609705  |a 1300635576  |a 1303343487 
020 |a 9780596808310 
020 |a 0596808313 
020 |a 9781449397890  |q (electronic bk.) 
020 |a 1449397891  |q (electronic bk.) 
020 |a 9781449397760 
020 |a 144939776X 
020 |z 9780596808327  |q (pbk.) 
020 |z 0596808321  |q (pbk.) 
029 1 |a AU@  |b 000050888975 
029 1 |a DEBSZ  |b 368470350 
029 1 |a DEBSZ  |b 396443087 
029 1 |a GBVCP  |b 785353631 
029 1 |a HEBIS  |b 291537480 
029 1 |a AU@  |b 000062586311 
035 |a (OCoLC)702151490  |z (OCoLC)738407040  |z (OCoLC)748338381  |z (OCoLC)772190037  |z (OCoLC)780425319  |z (OCoLC)1295609705  |z (OCoLC)1300635576  |z (OCoLC)1303343487 
037 |a CL0500000093  |b Safari Books Online 
050 4 |a QA76.76.D47 
072 7 |a COM  |x 051390  |2 bisacsh 
072 7 |a COM  |x 051230  |2 bisacsh 
072 7 |a COM  |x 051440  |2 bisacsh 
082 0 4 |a 005.1  |2 23 
049 |a UAMI 
245 0 0 |a Making software :  |b what really works, and why we believe it /  |c edited by Andy Oram and Greg Wilson. 
250 |a 1st ed. 
260 |a Sebastopol, Calif. :  |b O'Reilly,  |c ©2011. 
300 |a 1 online resource (xv, 602 pages). 
336 |a text  |b txt  |2 rdacontent 
337 |a computer  |b c  |2 rdamedia 
338 |a online resource  |b cr  |2 rdacarrier 
347 |a text file 
490 1 |a Theory in practice 
504 |a Includes bibliographical references and index. 
505 0 0 |g Machine generated contents note:  |g pt. One  |t General Principles of Searching for and Using Evidence --  |g 1.  |t The Quest for Convincing Evidence /  |r Forrest Shull --  |t In the Beginning --  |t The State of Evidence Today --  |t Change We Can Believe In --  |t The Effect of Context --  |t Looking Toward the Future --  |g 2.  |t Credibility, or Why Should I Insist on Being Convinced? /  |r Marian Petre --  |t How Evidence Turns Up in Software Engineering --  |t Credibility and Relevance --  |t Aggregating Evidence --  |t Types of Evidence and Their Strengths and Weaknesses --  |t Society, Culture, Software Engineering, and You --  |t Acknowledgments --  |g 3.  |t What We Can Learn From Systematic Reviews /  |r Barbara Kitchenham --  |t An Overview of Systematic Reviews --  |t The Strengths and Weaknesses of Systematic Reviews --  |t Systematic Reviews in Software Engineering --  |t Conclusion --  |g 4.  |t Understanding Software Engineering Through Qualitative Methods /  |r Andrew Ko --  |t What Are Qualitative Methods? --  |t Reading Qualitative Research --  |t Using Qualitative Methods in Practice --  |t Generalizing from Qualitative Results --  |t Qualitative Methods Are Systematic. 
505 0 0 |g 5.  |t Learning Through Application: The Maturing of the QIP in the SEL /  |r Victor R. Basili --  |t What Makes Software Engineering Uniquely Hard to Research --  |t A Realistic Approach to Empirical Research --  |t The NASA Software Engineering Laboratory: A Vibrant Testbed for Empirical Research --  |t The Quality Improvement Paradigm --  |t Conclusion --  |g 6.  |t Personality, Intelligence, and Expertise: Impacts on Software Development /  |r Jo E. Hannay --  |t How to Recognize Good Programmers --  |t Individual or Environment --  |t Concluding Remarks --  |g 7.  |t Why is it so Hard to Learn to Program? /  |r Mark Guzdial --  |t Do Students Have Difficulty Learning to Program? --  |t What Do People Understand Naturally About Programming? --  |t Making the Tools Better by Shifting to Visual Programming --  |t Contextualizing for Motivation --  |t Conclusion: A Fledgling Field --  |g 8.  |t Beyond Lines of Code: Do We Need More Complexity Metrics? /  |r Ahmed E. Hassan --  |t Surveying Software --  |t Measuring the Source Code --  |t A Sample Measurement --  |t Statistical Analysis --  |t Some Comments on the Statistical Methodology --  |t So Do We Need More Complexity Metrics? --  |g pt. Two  |t Specific Topics in Software Engineering. 
505 0 0 |g 9.  |t An Automated Fault Prediction System /  |r Thomas J. Ostrand --  |t Fault Distribution --  |t Characteristics of Faulty Files --  |t Overview of the Prediction Model --  |t Replication and Variations of the Prediction Model --  |t Building a Tool --  |t The Warning Label --  |g 10.  |t Architecting: How Much and When? /  |r Barry Boehm --  |t Does the Cost of Fixing Software Increase over the Project Life Cycle? --  |t How Much Architecting Is Enough? --  |t Using What We Can Learn from Cost-to-Fix Data About the Value of Architecting --  |t So How Much Architecting Is Enough? --  |t Does the Architecting Need to Be Done Up Front? --  |t Conclusions --  |g 11.  |t Conway's Corollary /  |r Christian Bird --  |t Conway's Law --  |t Coordination, Congruence, and Productivity --  |t Organizational Complexity Within Microsoft --  |t Chapels in the Bazaar of Open Source Software --  |t Conclusions --  |g 12.  |t How Effective Is Test-Driven Development? /  |r Forrest Shull --  |t The TDD Pill -- What Is It? --  |t Summary of Clinical TDD Trials --  |t The Effectiveness of TDD --  |t Enforcing Correct TDD Dosage in Trials --  |t Cautions and Side Effects --  |t Conclusions --  |t Acknowledgments. 
505 0 0 |g 13.  |t Why Aren't More Women in Computer Science? /  |r Wendy M. Williams --  |t Why So Few Women? --  |t Should We Care? --  |t Conclusion --  |g 14.  |t Two Comparisons of Programming Languages /  |r Lutz Prechelt --  |t A Language Shoot-Out over a Peculiar Search Algorithm --  |t Plat_Forms: Web Development Technologies and Cultures --  |t So What? --  |g 15.  |t Quality Wars: Open Source Versus Proprietary Software /  |r Diomidis Spinellis --  |t Past Skirmishes --  |t The Battlefield --  |t Into the Battle --  |t Outcome and Aftermath --  |t Acknowledgments and Disclosure of Interest --  |g 16.  |t Code Talkers /  |r Robert DeLine --  |t A Day in the Life of a Programmer --  |t What is All This Talk About? --  |t A Model for Thinking About Communication --  |g 17.  |t Pair Programming /  |r Laurie Williams --  |t A History of Pair Programming --  |t Pair Programming in an Industrial Setting --  |t Pair Programming in an Educational Setting --  |t Distributed Pair Programming --  |t Challenges --  |t Lessons Learned --  |t Acknowledgments --  |g 18.  |t Modern Code Review /  |r Jason Cohen --  |t Common Sense --  |t A Developer Does a Little Code Review --  |t Group Dynamics --  |t Conclusion. 
505 0 0 |g 19.  |t A Communal Workshop or Doors That Close? /  |r Jorge Aranda --  |t Doors That Close --  |t A Communal Workshop --  |t Work Patterns --  |t One More Thing ... --  |g 20.  |t Identifying and Managing Dependencies in Global Software Development /  |r Marcelo Cataldo --  |t Why Is Coordination a Challenge in GSD? --  |t Dependencies and Their Socio-Technical Duality --  |t From Research to Practice --  |t Future Directions --  |g 21.  |t How Effective is Modularization? /  |r Gail Murphy --  |t The Systems --  |t What Is a Change? --  |t What Is a Module? --  |t The Results --  |t Threats to Validity --  |t Summary --  |g 22.  |t The Evidence for Design Patterns /  |r Walter Tichy --  |t Design Pattern Examples --  |t Why Might Design Patterns Work? --  |t The First Experiment: Testing Pattern Documentation --  |t The Second Experiment: Comparing Pattern Solutions to Simpler Ones --  |t The Third Experiment: Patterns in Team Communication --  |t Lessons Learned --  |t Conclusions --  |t Acknowledgments --  |g 23.  |t Evidence-Based Failure Prediction /  |r Thomas Ball --  |t Introduction --  |t Code Coverage --  |t Code Churn --  |t Code Complexity --  |t Code Dependencies --  |t People and Organizational Measures --  |t Integrated Approach for Prediction of Failures --  |t Summary --  |t Acknowledgments. 
505 0 0 |g 24.  |t The Art of Collecting Bug Reports /  |r Thomas Zimmermann --  |t Good and Bad Bug Reports --  |t What Makes a Good Bug Report? --  |t Survey Results --  |t Evidence for an Information Mismatch --  |t Problems with Bug Reports --  |t The Value of Duplicate Bug Reports --  |t Not All Bug Reports Get Fixed --  |t Conclusions --  |t Acknowledgments --  |g 25.  |t Where Do Most Software Flaws Come From? /  |r Dewayne Perry --  |t Studying Software Flaws --  |t Context of the Study --  |g Phase 1  |t Overall Survey --  |g Phase 2  |t Design/Code Fault Survey --  |t What Should You Believe About These Results? --  |t What Have We Learned? --  |t Acknowledgments --  |g 26.  |t Novice Professionals: Recent Graduates in a First Software Engineering Job /  |r Beth Simon --  |t Study Methodology --  |t Software Development Task --  |t Strengths and Weaknesses of Novice Software Developers --  |t Reflections --  |t Misconceptions That Hinder Learning --  |t Reflecting on Pedagogy --  |t Implications for Change --  |g 27.  |t Mining Your Own Evidence /  |r Andreas Zeller --  |t What is There to Mine? --  |t Designing a Study --  |t A Mining Primer --  |t Where to Go from Here --  |t Acknowledgments --  |g 28.  |t Copy-Paste as a Principled Engineering Tool /  |r Cory Kapser --  |t An Example of Code Cloning --  |t Detecting Clones in Software --  |t Investigating the Practice of Code Cloning --  |t Our Study --  |t Conclusions. 
505 0 0 |g 29.  |t How Usable Are Your Apis? /  |r Steven Clarke --  |t Why Is It Important to Study API Usability? --  |t First Attempts at Studying API Usability --  |t If At First You Don't Succeed ... --  |t Adapting to Different Work Styles --  |t Conclusion --  |g 30.  |t What Does 10X Mean? Measuring Variations in Programmer Productivity /  |r Steve McConnell --  |t Individual Productivity Variation in Software Development --  |t Issues in Measuring Productivity of Individual Programmers --  |t Team Productivity Variation in Software Development. 
520 8 |a No doubt, you've heard many claims about how some tool, technology, or practice improves software development. But which claims are verifiable? In this book, leading thinkers offer essays that uncover the truth and unmask myths commonly held among the software development community. 
546 |a English. 
590 |a O'Reilly  |b O'Reilly Online Learning: Academic/Public Library Edition 
650 0 |a Computer software  |x Development. 
650 7 |a COMPUTERS  |x Programming  |x Open Source.  |2 bisacsh 
650 7 |a COMPUTERS  |x Software Development & Engineering  |x General.  |2 bisacsh 
650 7 |a COMPUTERS  |x Software Development & Engineering  |x Tools.  |2 bisacsh 
650 7 |a Computer software  |x Development  |2 fast 
700 1 |a Oram, Andrew.  |4 edt 
700 1 |a Wilson, Greg,  |d 1963-  |4 edt 
776 0 8 |i Print version:  |t Making software.  |d Farnham; Cambridge : O'Reilly, ©2011  |z 9780596808327  |w (OCoLC)648096823 
830 0 |a Theory in practice (Sebastopol, Calif.) 
856 4 0 |u https://learning.oreilly.com/library/view/~/9780596808310/?ar  |z Texto completo (Requiere registro previo con correo institucional) 
938 |a Askews and Holts Library Services  |b ASKH  |n AH26833853 
938 |a ProQuest Ebook Central  |b EBLB  |n EBL604455 
938 |a ebrary  |b EBRY  |n ebr10761432 
938 |a EBSCOhost  |b EBSC  |n 415326 
938 |a YBP Library Services  |b YANK  |n 7365782 
938 |a YBP Library Services  |b YANK  |n 7489356 
994 |a 92  |b IZTAP