
Proceedings of the Second Annual Workshop on Computational Learning Theory: University of California, Santa Cruz, July 31 - August 2, 1989

Bibliographic Details
Classification: Electronic book
Corporate Author: Workshop on Computational Learning Theory, Santa Cruz, Calif.
Other Authors: Rivest, Ronald L. (Editor), Haussler, David (Editor), Warmuth, Manfred (Editor)
Format: Electronic; congresses and conferences; eBook
Language: English
Published: San Mateo, California: Morgan Kaufmann, 1989.
Subjects:
Online Access: Full text
Table of Contents:
  • Front Cover; Proceedings of the Second Annual Workshop on Computational Learning Theory; Copyright Page; Table of Contents; Foreword; Part 1: Invited Lecture; Chapter 1. Inductive Principles of the Search for Empirical Dependences; 1. Introduction; 2. The problem of expected risk minimization; 3. The principle of empirical risk minimization; 4. The concept of consistency and strong consistency; 5. Strong consistency and uniform convergence; 6. Necessary and sufficient conditions of uniform convergence; 7. The relation to the theory of falsifiability by K. Popper.
  • 8. The capacity of a set of functions; 9. Theorems about the rate of uniform convergence; 10. The principle of structural risk minimization; 11. Concluding remarks; REFERENCES; Part 2: Technical Papers; Chapter 2. Polynomial Learnability of Semilinear Sets; Abstract; 1 Introduction; 2 Results and Significance; 3 Learnability Models Used; 4 Classes of Concepts Considered; 5 Technical Details; 6 Open Problems; Acknowledgments; References; Chapter 3. LEARNING NESTED DIFFERENCES OF INTERSECTION-CLOSED CONCEPT CLASSES; ABSTRACT; 1 Introduction; 2 The Inclusion-Exclusion Algorithms.
  • 3 Conclusions and Open Problems; 4 Acknowledgements; References; Chapter 4. A Polynomial-time Algorithm for Learning k-variable Pattern Languages from Examples; 1 Introduction; 2 Definitions and Notation; 3 The Algorithm COVER; 4 Good Things and Bad Things; 5 The Event Tree; 6 Putting it All Together; 7 Conclusions and Future Research; References; Chapter 5. ON LEARNING FROM EXERCISES; ABSTRACT; 1. INTRODUCTION; 2. LEARNING FROM SOLVED INSTANCES; 3. AN APPLICATION; 4. LEARNING FROM EXERCISES; 5. CONCLUSION; Acknowledgements; References; Appendix A; Chapter 6. On Approximate Truth; Abstract.
  • 1 Introduction; 2 Approximate Truth; 3 Some deductive logic of approximate truth; 4 Some inductive logic of approximate truth; 5 Stable predicates; 6 Concluding remarks; 7 References; Chapter 7. Informed parsimonious inference of prototypical genetic sequences; Abstract; 1 Introduction; 2 Model of sequence generation; 3 Bayes model; 4 Expressing the inductive assumptions; 5 Computing the optimal theory; 6 Experimental results; 7 Comparison to the biological parsimony methods; 8 Acknowledgements; References; Chapter 8. COMPLEXITY ISSUES IN LEARNING BY NEURAL NETS; Abstract; 1 INTRODUCTION.
  • 2 DEFINITIONS; 3 NEURAL NET DESIGN PROBLEMS; 4 TRAINING NEURAL NETS; 5 CASCADE NEURAL NETS; 6 CONCLUSIONS; REFERENCES; Chapter 9. Equivalence Queries and Approximate Fingerprints; Abstract; 1 Introduction; 2 The basic idea; 3 Representations of concepts; 4 Some examples of approximate fingerprints; 5 Comments; 6 Acknowledgments; References; Chapter 10. LEARNING READ-ONCE FORMULAS USING MEMBERSHIP QUERIES; ABSTRACT; 1. INTRODUCTION; 2. LEARNING EQUIVALENT READ-ONCE FORMULAS FROM EXPLICITLY GIVEN FORMULAS; 3. PRELIMINARIES; 4. LEARNING READ-ONCE FORMULAS WITH A RELEVANT POSSIBILITY ORACLE.