LEADER 00000cam a2200000 i 4500
001 OR_ocn959952427
003 OCoLC
005 20231017213018.0
006 m     o  d
007 cr |n|||||||||
008 161005t20172017maua    ob    001 0 eng d
040 __ |a YDX |b eng |e rda |e pn |c YDX |d N$T |d TEFOD |d IDEBK |d OCLCF |d OCLCQ |d OPELS |d OCLCQ |d CDN |d OCLCO |d QCL |d NETUE |d TS225 |d OCLCQ |d CNCGM |d LWU |d Z@L |d U3W |d NRC |d OCLCQ |d RRP |d DKU |d OCLCQ |d UKAHL |d EBLCP |d IDB |d MERUC |d UAB |d D6H |d VVB |d COO |d LVT |d UMI |d TOH |d STF |d DEBBG |d DEBSZ |d CHVBK |d CEF |d KSU |d Z5A |d GGVRL |d UKMGB |d AU@ |d UKBTH |d DST |d BRF |d EYM |d VT2 |d K6U |d UOK |d RDF |d ORE |d OCLCO |d EZ9 |d OCLCQ
015 __ |a GBB6E1735 |2 bnb
016 7_ |a 018058435 |2 Uk
019 __ |a 960030486 |a 960087466 |a 960163357 |a 961062130 |a 961944616 |a 961996416 |a 962005936 |a 972568580 |a 974348908 |a 988827471 |a 989035589 |a 990619648 |a 992553299 |a 992824373 |a 994811754 |a 1005810342 |a 1007085077 |a 1026462293 |a 1153393691 |a 1179528266 |a 1200099775 |a 1245476059 |a 1300591677 |a 1303326754 |a 1331416211
020 __ |a 9780128043578 |q (electronic book)
020 __ |a 0128043571 |q (electronic book)
020 __ |z 9780128042915
020 __ |z 0128042915
020 __ |z 9780123748560
020 __ |z 0123748569
024 7_ |a 10.1016/C2015-0-02071-8 |2 doi
024 8_ |a C20150020718
024 8_ |a 9780128043578
029 1_ |a AU@ |b 000059643855
029 1_ |a AU@ |b 000059708972
029 1_ |a AU@ |b 000066453094
029 1_ |a AU@ |b 000067095886
029 1_ |a AU@ |b 000068488806
029 1_ |a AU@ |b 000069105465
029 1_ |a CHNEW |b 000886297
029 1_ |a CHSLU |b 001259610
029 1_ |a CHVBK |b 375152784
029 1_ |a DEBBG |b BV043969890
029 1_ |a DEBSZ |b 484805568
029 1_ |a DEBSZ |b 485804190
029 1_ |a GBVCP |b 879420537
029 1_ |a GBVCP |b 882850741
029 1_ |a NLGGC |b 408758139
029 1_ |a UKMGB |b 018058435
035 __ |a (OCoLC)959952427 |z (OCoLC)960030486 |z (OCoLC)960087466 |z (OCoLC)960163357 |z (OCoLC)961062130 |z (OCoLC)961944616 |z (OCoLC)961996416 |z (OCoLC)962005936 |z (OCoLC)972568580 |z (OCoLC)974348908 |z (OCoLC)988827471 |z (OCoLC)989035589 |z (OCoLC)990619648 |z (OCoLC)992553299 |z (OCoLC)992824373 |z (OCoLC)994811754 |z (OCoLC)1005810342 |z (OCoLC)1007085077 |z (OCoLC)1026462293 |z (OCoLC)1153393691 |z (OCoLC)1179528266 |z (OCoLC)1200099775 |z (OCoLC)1245476059 |z (OCoLC)1300591677 |z (OCoLC)1303326754 |z (OCoLC)1331416211
037 __ |a C865C70A-8BD2-4D1F-B413-46A78EC352BB |b OverDrive, Inc. |n http://www.overdrive.com
050 _4 |a QA76.9.D343
072 _7 |a COM |x 000000 |2 bisacsh
072 _7 |a COM |2 ukslc
072 _7 |a UG |2 bicssc
072 _7 |a UNF |2 bicssc
072 _7 |a UYQM |2 bicssc
072 _7 |a GLC |2 bicssc
072 _7 |a UB |2 bicssc
072 _7 |a UN |2 bicssc
072 _7 |a UG |2 thema
072 _7 |a UNF |2 thema
072 _7 |a UYQM |2 thema
072 _7 |a GLC |2 thema
072 _7 |a UB |2 thema
072 _7 |a UN |2 thema
072 _7 |a UFL |2 thema
072 _7 |a UNC |2 thema
072 _7 |a UYQ |2 thema
072 _7 |a KJ |2 thema
082 04 |a 006.3/12 |2 23
083 0_ |a 006.312 |q NO-OsHOA |2 23/nor
049 __ |a UAMI
100 1_ |a Witten, I. H. |q (Ian H.), |e author.
245 10 |a Data mining : |b practical machine learning tools and techniques / |c Ian H. Witten, University of Waikato, Hamilton, New Zealand ; Eibe Frank, University of Waikato, Hamilton, New Zealand ; Mark A. Hall, University of Waikato, Hamilton, New Zealand ; Christopher J. Pal, Polytechnique Montréal, Montreal, Quebec, Canada
250 __ |a Fourth edition
264 _1 |a Cambridge, MA, United States : |b Morgan Kaufmann, |c [2017]
264 _4 |c ©2017
300 __ |a 1 online resource (xxxii, 621 pages) : |b illustrations
336 __ |a text |b txt |2 rdacontent
337 __ |a computer |b c |2 rdamedia
338 __ |a online resource |b cr |2 rdacarrier
347 __ |a text file
347 __ |b PDF
490 1_ |a Morgan Kaufmann Series in Data Management Systems
504 __ |a Includes bibliographical references (pages 573-600) and index
505 00 |g Machine generated contents note: ch. 1 |t What's it all about? -- |g 1.1. |t Data Mining and Machine Learning -- |t Describing Structural Patterns -- |t Machine Learning -- |t Data Mining -- |g 1.2. |t Simple Examples: The Weather Problem and Others -- |t Weather Problem -- |t Contact Lenses: An Idealized Problem -- |t Irises: A Classic Numeric Dataset -- |t CPU Performance: Introducing Numeric Prediction -- |t Labor Negotiations: A More Realistic Example -- |t Soybean Classification: A Classic Machine Learning Success -- |g 1.3. |t Fielded Applications -- |t Web Mining -- |t Decisions Involving Judgment -- |t Screening Images -- |t Load Forecasting -- |t Diagnosis -- |t Marketing and Sales -- |t Other Applications -- |g 1.4. |t Data Mining Process -- |g 1.5. |t Machine Learning and Statistics -- |g 1.6. |t Generalization as Search -- |t Enumerating the Concept Space -- |t Bias -- |g 1.7. |t Data Mining and Ethics -- |t Reidentification -- |t Using Personal Information -- |t Wider Issues -- |g 1.8. |t Further Reading and Bibliographic Notes -- |g ch. 2 |t Input: concepts, instances, attributes -- |g 2.1. |t What's a Concept? -- |g 2.2. |t What's in an Example? -- |t Relations -- |t Other Example Types -- |g 2.3. |t What's in an Attribute? -- |g 2.4. |t Preparing the Input -- |t Gathering the Data Together -- |t ARFF Format -- |t Sparse Data -- |t Attribute Types -- |t Missing Values -- |t Inaccurate Values -- |t Unbalanced Data -- |t Getting to Know Your Data -- |g 2.5. |t Further Reading and Bibliographic Notes -- |g ch. 3 |t Output: knowledge representation -- |g 3.1. |t Tables -- |g 3.2. |t Linear Models -- |g 3.3. |t Trees -- |g 3.4. |t Rules -- |t Classification Rules -- |t Association Rules -- |t Rules With Exceptions -- |t More Expressive Rules -- |g 3.5. |t Instance-Based Representation -- |g 3.6. |t Clusters -- |g 3.7. |t Further Reading and Bibliographic Notes -- |g ch. 4 |t Algorithms: the basic methods -- |g 4.1. |t Inferring Rudimentary Rules -- |t Missing Values and Numeric Attributes -- |g 4.2. |t Simple Probabilistic Modeling -- |t Missing Values and Numeric Attributes -- |t Naive Bayes for Document Classification -- |t Remarks -- |g 4.3. |t Divide-and-Conquer: Constructing Decision Trees -- |t Calculating Information -- |t Highly Branching Attributes -- |g 4.4. |t Covering Algorithms: Constructing Rules -- |t Rules Versus Trees -- |t Simple Covering Algorithm -- |t Rules Versus Decision Lists -- |g 4.5. |t Mining Association Rules -- |t Item Sets -- |t Association Rules -- |t Generating Rules Efficiently -- |g 4.6. |t Linear Models -- |t Numeric Prediction: Linear Regression -- |t Linear Classification: Logistic Regression -- |t Linear Classification Using the Perceptron -- |t Linear Classification Using Winnow -- |g 4.7. |t Instance-Based Learning -- |t Distance Function -- |t Finding Nearest Neighbors Efficiently -- |t Remarks -- |g 4.8. |t Clustering -- |t Iterative Distance-Based Clustering -- |t Faster Distance Calculations -- |t Choosing the Number of Clusters -- |t Hierarchical Clustering -- |t Example of Hierarchical Clustering -- |t Incremental Clustering -- |t Category Utility -- |t Remarks -- |g 4.9. |t Multi-instance Learning -- |t Aggregating the Input -- |t Aggregating the Output -- |g 4.10. |t Further Reading and Bibliographic Notes -- |g 4.11. |t WEKA Implementations -- |g ch. 5 |t Credibility: evaluating what's been learned -- |g 5.1. |t Training and Testing -- |g 5.2. |t Predicting Performance -- |g 5.3. |t Cross-Validation -- |g 5.4. |t Other Estimates -- |t Leave-One-Out -- |t Bootstrap -- |g 5.5. |t Hyperparameter Selection -- |g 5.6. |t Comparing Data Mining Schemes -- |g 5.7. |t Predicting Probabilities -- |t Quadratic Loss Function -- |t Informational Loss Function -- |t Remarks -- |g 5.8. |t Counting the Cost -- |t Cost-Sensitive Classification -- |t Cost-Sensitive Learning -- |t Lift Charts -- |t ROC Curves -- |t Recall-Precision Curves -- |t Remarks -- |t Cost Curves -- |g 5.9. |t Evaluating Numeric Prediction -- |g 5.10. |t MDL Principle -- |g 5.11. |t Applying the MDL Principle to Clustering -- |g 5.12. |t Using a Validation Set for Model Selection -- |g 5.13. |t Further Reading and Bibliographic Notes -- |g ch. 6 |t Trees and rules -- |g 6.1. |t Decision Trees -- |t Numeric Attributes -- |t Missing Values -- |t Pruning -- |t Estimating Error Rates -- |t Complexity of Decision Tree Induction -- |t From Trees to Rules -- |t C4.5: Choices and Options -- |t Cost-Complexity Pruning -- |t Discussion -- |g 6.2. |t Classification Rules -- |t Criteria for Choosing Tests -- |t Missing Values, Numeric Attributes -- |t Generating Good Rules -- |t Using Global Optimization -- |t Obtaining Rules From Partial Decision Trees -- |t Rules With Exceptions -- |t Discussion -- |g 6.3. |t Association Rules -- |t Building a Frequent Pattern Tree -- |t Finding Large Item Sets -- |t Discussion -- |g 6.4. |t WEKA Implementations -- |g ch. 7 |t Extending instance-based and linear models -- |g 7.1. |t Instance-Based Learning -- |t Reducing the Number of Exemplars -- |t Pruning Noisy Exemplars -- |t Weighting Attributes -- |t Generalizing Exemplars -- |t Distance Functions for Generalized Exemplars -- |t Generalized Distance Functions -- |t Discussion -- |g 7.2. |t Extending Linear Models -- |t Maximum Margin Hyperplane -- |t Nonlinear Class Boundaries -- |t Support Vector Regression -- |t Kernel Ridge Regression -- |t Kernel Perceptron -- |t Multilayer Perceptrons -- |t Radial Basis Function Networks -- |t Stochastic Gradient Descent -- |t Discussion -- |g 7.3. |t Numeric Prediction With Local Linear Models -- |t Model Trees -- |t Building the Tree -- |t Pruning the Tree -- |t Nominal Attributes -- |t Missing Values -- |t Pseudocode for Model Tree Induction -- |t Rules From Model Trees -- |t Locally Weighted Linear Regression -- |t Discussion -- |g 7.4. |t WEKA Implementations -- |g ch. 8 |t Data transformations -- |g 8.1. |t Attribute Selection -- |t Scheme-Independent Selection -- |t Searching the Attribute Space -- |t Scheme-Specific Selection -- |g 8.2. |t Discretizing Numeric Attributes -- |t Unsupervised Discretization -- |t Entropy-Based Discretization -- |t Other Discretization Methods -- |t Entropy-Based Versus Error-Based Discretization -- |t Converting Discrete to Numeric Attributes -- |g 8.3. |t Projections -- |t Principal Component Analysis -- |t Random Projections -- |t Partial Least Squares Regression -- |t Independent Component Analysis -- |t Linear Discriminant Analysis -- |t Quadratic Discriminant Analysis -- |t Fisher's Linear Discriminant Analysis -- |t Text to Attribute Vectors -- |t Time Series -- |g 8.4. |t Sampling -- |t Reservoir Sampling -- |g 8.5. |t Cleansing -- |t Improving Decision Trees -- |t Robust Regression -- |t Detecting Anomalies -- |t One-Class Learning -- |t Outlier Detection -- |t Generating Artificial Data -- |g 8.6. |t Transforming Multiple Classes to Binary Ones -- |t Simple Methods -- |t Error-Correcting Output Codes -- |t Ensembles of Nested Dichotomies -- |g 8.7. |t Calibrating Class Probabilities -- |g 8.8. |t Further Reading and Bibliographic Notes -- |g 8.9. |t WEKA Implementations --
505 00 |g ch. 9 |t Probabilistic methods -- |g 9.1. |t Foundations -- |t Maximum Likelihood Estimation -- |t Maximum a Posteriori Parameter Estimation -- |g 9.2. |t Bayesian Networks -- |t Making Predictions -- |t Learning Bayesian Networks -- |t Specific Algorithms -- |t Data Structures for Fast Learning -- |g 9.3. |t Clustering and Probability Density Estimation -- |t Expectation Maximization Algorithm for a Mixture of Gaussians -- |t Extending the Mixture Model -- |t Clustering Using Prior Distributions -- |t Clustering With Correlated Attributes -- |t Kernel Density Estimation -- |t Comparing Parametric, Semiparametric and Nonparametric Density Models for Classification -- |g 9.4. |t Hidden Variable Models -- |t Expected Log-Likelihoods and Expected Gradients -- |t Expectation Maximization Algorithm -- |t Applying the Expectation Maximization Algorithm to Bayesian Networks -- |g 9.5. |t Bayesian Estimation and Prediction -- |t Probabilistic Inference Methods -- |g 9.6. |t Graphical Models and Factor Graphs -- |t Graphical Models and Plate Notation -- |t Probabilistic Principal Component Analysis -- |t Latent Semantic Analysis -- |t Using Principal Component Analysis for Dimensionality Reduction -- |t Probabilistic LSA -- |t Latent Dirichlet Allocation -- |t Factor Graphs -- |t Markov Random Fields -- |t Computing Using the Sum-Product and Max-Product Algorithms -- |g 9.7. |t Conditional Probability Models -- |t Linear and Polynomial Regression as Probability Models -- |t Using Priors on Parameters -- |t Multiclass Logistic Regression -- |t Gradient Descent and Second-Order Methods -- |t Generalized Linear Models -- |t Making Predictions for Ordered Classes -- |t Conditional Probabilistic Models Using Kernels -- |g 9.8. |t Sequential and Temporal Models -- |t Markov Models and N-gram Methods -- |t Hidden Markov Models -- |t Conditional Random Fields -- |g 9.9. |t Further Reading and Bibliographic Notes -- |t Software Packages and Implementations -- |g 9.10. |t WEKA Implementations -- |g ch. 10 |t Deep learning -- |g 10.1. |t Deep Feedforward Networks -- |t MNIST Evaluation -- |t Losses and Regularization -- |t Deep Layered Network Architecture -- |t Activation Functions -- |t Backpropagation Revisited -- |t Computation Graphs and Complex Network Structures -- |t Checking Backpropagation Implementations -- |g 10.2. |t Training and Evaluating Deep Networks -- |t Early Stopping -- |t Validation, Cross-Validation, and Hyperparameter Tuning -- |t Mini-Batch-Based Stochastic Gradient Descent -- |t Pseudocode for Mini-Batch Based Stochastic Gradient Descent -- |t Learning Rates and Schedules -- |t Regularization With Priors on Parameters -- |t Dropout -- |t Batch Normalization -- |t Parameter Initialization -- |t Unsupervised Pretraining -- |t Data Augmentation and Synthetic Transformations -- |g 10.3. |t Convolutional Neural Networks -- |t ImageNet Evaluation and Very Deep Convolutional Networks -- |t From Image Filtering to Learnable Convolutional Layers -- |t Convolutional Layers and Gradients -- |t Pooling and Subsampling Layers and Gradients -- |t Implementation -- |g 10.4. |t Autoencoders -- |t Pretraining Deep Autoencoders With RBMs -- |t Denoising Autoencoders and Layerwise Training.
505 00 |g Note continued: |t Combining Reconstructive and Discriminative Learning -- |g 10.5. |t Stochastic Deep Networks -- |t Boltzmann Machines -- |t Restricted Boltzmann Machines -- |t Contrastive Divergence -- |t Categorical and Continuous Variables -- |t Deep Boltzmann Machines -- |t Deep Belief Networks -- |g 10.6. |t Recurrent Neural Networks -- |t Exploding and Vanishing Gradients -- |t Other Recurrent Network Architectures -- |g 10.7. |t Further Reading and Bibliographic Notes -- |g 10.8. |t Deep Learning Software and Network Implementations -- |t Theano -- |t TensorFlow -- |t Torch -- |t Computational Network Toolkit -- |t Caffe -- |t Deeplearning4j -- |t Other Packages: Lasagne, Keras, and cuDNN -- |g 10.9. |t WEKA Implementations -- |g ch. 11 |t Beyond supervised and unsupervised learning -- |g 11.1. |t Semisupervised Learning -- |t Clustering for Classification -- |t Cotraining -- |t EM and Cotraining -- |t Neural Network Approaches -- |g 11.2. |t Multi-instance Learning -- |t Converting to Single-Instance Learning -- |t Upgrading Learning Algorithms -- |t Dedicated Multi-instance Methods -- |g 11.3. |t Further Reading and Bibliographic Notes -- |g 11.4. |t WEKA Implementations -- |g ch. 12 |t Ensemble learning -- |g 12.1. |t Combining Multiple Models -- |g 12.2. |t Bagging -- |t Bias-Variance Decomposition -- |t Bagging With Costs -- |g 12.3. |t Randomization -- |t Randomization Versus Bagging -- |t Rotation Forests -- |g 12.4. |t Boosting -- |t AdaBoost -- |t Power of Boosting -- |g 12.5. |t Additive Regression -- |t Numeric Prediction -- |t Additive Logistic Regression -- |g 12.6. |t Interpretable Ensembles -- |t Option Trees -- |t Logistic Model Trees -- |g 12.7. |t Stacking -- |g 12.8. |t Further Reading and Bibliographic Notes -- |g 12.9. |t WEKA Implementations -- |g ch. 13 |t Moving on: applications and beyond -- |g 13.1. |t Applying Machine Learning -- |g 13.2. |t Learning From Massive Datasets -- |g 13.3. |t Data Stream Learning -- |g 13.4. |t Incorporating Domain Knowledge -- |g 13.5. |t Text Mining -- |t Document Classification and Clustering -- |t Information Extraction -- |t Natural Language Processing -- |g 13.6. |t Web Mining -- |t Wrapper Induction -- |t Page Rank -- |g 13.7. |t Images and Speech -- |t Images -- |t Speech -- |g 13.8. |t Adversarial Situations -- |g 13.9. |t Ubiquitous Data Mining -- |g 13.10. |t Further Reading and Bibliographic Notes -- |g 13.11. |t WEKA Implementations.
520 __ |a Data Mining: Practical Machine Learning Tools and Techniques, Fourth Edition, offers a thorough grounding in machine learning concepts, along with practical advice on applying these tools and techniques in real-world data mining situations. This highly anticipated fourth edition of the most acclaimed work on data mining and machine learning teaches readers everything they need to know to get going, from preparing inputs and interpreting outputs to evaluating results and the algorithmic methods at the heart of successful data mining approaches. Extensive updates reflect the technical changes and modernizations that have taken place in the field since the last edition, including substantial new chapters on probabilistic methods and on deep learning. Accompanying the book is a new version of the popular WEKA machine learning software from the University of Waikato. Authors Witten, Frank, Hall, and Pal present today's techniques together with methods at the leading edge of contemporary research. Please visit the book companion website at https://www.cs.waikato.ac.nz/~ml/weka/book.html.
588 0_ |a Online resource; title from PDF title page (ProQuest Ebook Central, viewed 02 March 2022)
590 __ |a O'Reilly |b O'Reilly Online Learning: Academic/Public Library Edition
650 _0 |a Data mining.
650 _2 |a Data Mining
650 _6 |a Exploration de données (Informatique)
650 _7 |a COMPUTERS |x General. |2 bisacsh
650 _7 |a Data mining. |2 fast |0 (OCoLC)fst00887946
650 _7 |a Graphical & digital media applications. |2 thema
650 _7 |a Data mining. |2 thema
650 _7 |a Machine learning. |2 thema
650 _7 |a Library, archive & information management. |2 thema
650 _7 |a Information technology: general issues. |2 thema
650 _7 |a Databases. |2 thema
650 _7 |a Enterprise software. |2 thema
650 _7 |a Data capture & analysis. |2 thema
650 _7 |a Artificial intelligence. |2 thema
650 _7 |a Business & Management. |2 thema
650 _7 |a Computers and IT. |2 ukslc
700 1_ |a Frank, Eibe, |e author.
700 1_ |a Hall, Mark A. |q (Mark Andrew), |e author.
700 1_ |a Pal, Christopher J., |e author.
776 08 |i Print version: |a Witten, I.H. (Ian H.). |t Data mining. |b Fourth edition. |d Amsterdam ; Boston : Elsevier, [2017] |z 9780128042915 |w (DLC) 2016948470 |w (OCoLC)976423990
830 _0 |a Morgan Kaufmann series in data management systems.
856 40 |u https://learning.oreilly.com/library/view/~/9780128043578/?ar |z Full text (requires prior registration with an institutional email address)
938 __ |a Askews and Holts Library Services |b ASKH |n AH28809629
938 __ |a ProQuest Ebook Central |b EBLB |n EBL4708912
938 __ |a EBSCOhost |b EBSC |n 1214611
938 __ |a Gale Cengage Learning |b GVRL |n GVRL02VB
938 __ |a ProQuest MyiLibrary Digital eBook Collection |b IDEB |n cis36491446
938 __ |a YBP Library Services |b YANK |n 13199723
994 __ |a 92 |b IZTAP