Machine Learning with TensorFlow, Second Edition
Chris A. Mattmann
Manning Publications Company, 2nd edition, place of publication not identified, 2021
English [en] · PDF · 24.5MB · 2021 · 📘 Non-fiction book · 🚀/lgli/lgrs/nexusstc/zlib
Description
Updated with new code, new projects, and new chapters, Machine Learning with TensorFlow, Second Edition gives readers a solid foundation in machine-learning concepts and the TensorFlow library.
Summary
Updated with new code, new projects, and new chapters, Machine Learning with TensorFlow, Second Edition gives readers a solid foundation in machine-learning concepts and the TensorFlow library. Written by NASA JPL Deputy CTO and Principal Data Scientist Chris Mattmann, all examples are accompanied by downloadable Jupyter Notebooks for a hands-on experience coding TensorFlow with Python. New and revised content expands coverage of core machine learning algorithms and advancements in neural networks, such as VGG-Face facial identification classifiers and deep speech classifiers.
Purchase of the print book includes a free eBook in PDF, Kindle, and ePub formats from Manning Publications.
About the technology
Supercharge your data analysis with machine learning! ML algorithms automatically improve as they process data, so results get better over time. You don't have to be a mathematician to use ML: tools like Google's TensorFlow library handle the complex calculations so you can focus on getting the answers you need.
About the book
Machine Learning with TensorFlow, Second Edition is a fully revised guide to building machine learning models using Python and TensorFlow. You'll apply core ML concepts to real-world challenges, such as sentiment analysis, text classification, and image recognition. Hands-on examples illustrate neural network techniques for deep speech processing, facial identification, and auto-encoding with CIFAR-10.
What's inside
Machine Learning with TensorFlow
Choosing the best ML approaches
Visualizing algorithms with TensorBoard
Sharing results with collaborators
Running models in Docker
About the reader
Requires intermediate Python skills and knowledge of general algebraic concepts like vectors and matrices. Examples use the super-stable 1.15.x branch of TensorFlow and TensorFlow 2.x.
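For orientation, here is a minimal sketch contrasting the two API styles the book's examples target: TF 1.15.x graph-and-session code versus TF 2.x eager execution. It assumes a TensorFlow 2.x installation (which ships the tf.compat.v1 compatibility shim) and is illustrative only, not code from the book:

    # Minimal sketch (not from the book) contrasting the two API styles.
    # Assumes a TensorFlow 2.x install, which includes the tf.compat.v1 shim.

    # --- TensorFlow 1.15.x style: build a graph, then run it in a Session ---
    import tensorflow.compat.v1 as tf
    tf.disable_eager_execution()

    a = tf.constant(2.0)
    b = tf.constant(3.0)
    total = a + b                  # adds a node to the graph; nothing runs yet
    with tf.Session() as sess:
        print(sess.run(total))     # 5.0

    # --- TensorFlow 2.x style: operations execute eagerly, as called ---
    # (commented out: disable_eager_execution() above is process-wide,
    #  so run this part as its own script)
    # import tensorflow as tf
    # print(tf.constant(2.0) + tf.constant(3.0))  # tf.Tensor(5.0, shape=(), dtype=float32)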
About the author
Chris Mattmann is the Division Manager of the Artificial Intelligence, Analytics, and Innovation Organization at NASA Jet Propulsion Lab. The first edition of this book was written by Nishant Shukla with Kenneth Fricklas.
Table of Contents
PART 1 - YOUR MACHINE-LEARNING RIG
1 A machine-learning odyssey
2 TensorFlow essentials
PART 2 - CORE LEARNING ALGORITHMS
3 Linear regression and beyond
4 Using regression for call-center volume prediction
5 A gentle introduction to classification
6 Sentiment classification: Large movie-review dataset
7 Automatically clustering data
8 Inferring user activity from Android accelerometer data
9 Hidden Markov models
10 Part-of-speech tagging and word-sense disambiguation
PART 3 - THE NEURAL NETWORK PARADIGM
11 A peek into autoencoders
12 Applying autoencoders: The CIFAR-10 image dataset
13 Reinforcement learning
14 Convolutional neural networks
15 Building a real-world CNN: VGG-Face and VGG-Face Lite
16 Recurrent neural networks
17 LSTMs and automatic speech recognition
18 Sequence-to-sequence models for chatbots
19 Utility landscape
Alternative filename
lgli/Machine Learning with TensorFlow.pdf
Alternative filename
lgrsnf/Machine Learning with TensorFlow.pdf
Alternative filename
zlib/Computers/Computer Science/Chris A. Mattmann/Machine Learning with TensorFlow_11106081.pdf
Alternative author
Chris, Mattmann A.
Alternative author
Mattmann A. Chris
Alternative publisher
Manning Publications Co. LLC
Alternative edition
Simon & Schuster, Shelter Island, NY, 2020
Alternative edition
United States, United States of America
Alternative edition
2nd Edition, PS, 2021
Alternative edition
New York, 2021
Alternative edition
2, 2020
Metadata comments
Vector PDF
Metadata comments
lg2897288
Metadata comments
{"edition":"2","isbns":["1617297712","9781617297717"],"last_page":456,"publisher":"Manning Publications Co."}
Alternative description
Machine Learning with TensorFlow, Second Edition
brief contents
contents
foreword
preface
acknowledgments
about this book
How this book is organized: A roadmap
About the code
liveBook discussion forum
about the author
about the cover illustration
Part 1: Your machine-learning rig
Chapter 1: A machine-learning odyssey
1.1 Machine-learning fundamentals
1.1.1 Parameters
1.1.2 Learning and inference
1.2 Data representation and features
1.3 Distance metrics
1.4 Types of learning
1.4.1 Supervised learning
1.4.2 Unsupervised learning
1.4.3 Reinforcement learning
1.4.4 Meta-learning
1.5 TensorFlow
1.6 Overview of future chapters
Chapter 2: TensorFlow essentials
2.1 Ensuring that TensorFlow works
2.2 Representing tensors
2.3 Creating operators
2.4 Executing operators within sessions
2.5 Understanding code as a graph
2.5.1 Setting session configurations
2.6 Writing code in Jupyter
2.7 Using variables
2.8 Saving and loading variables
2.9 Visualizing data using TensorBoard
2.9.1 Implementing a moving average
2.9.2 Visualizing the moving average
2.10 Putting it all together: The TensorFlow system architecture and API
Part 2: Core learning algorithms
Chapter 3: Linear regression and beyond
3.1 Formal notation
3.1.1 How do you know the regression algorithm is working?
3.2 Linear regression
3.3 Polynomial model
3.4 Regularization
3.5 Application of linear regression
Chapter 4: Using regression for call-center volume prediction
4.1 What is 311?
4.2 Cleaning the data for regression
4.3 What’s in a bell curve? Predicting Gaussian distributions
4.4 Training your call prediction regressor
4.5 Visualizing the results and plotting the error
4.6 Regularization and training test splits
Chapter 5: A gentle introduction to classification
5.1 Formal notation
5.2 Measuring performance
5.2.1 Accuracy
5.2.2 Precision and recall
5.2.3 Receiver operating characteristic curve
5.3 Using linear regression for classification
5.4 Using logistic regression
5.4.1 Solving 1D logistic regression
5.4.2 Solving 2D regression
5.5 Multiclass classifier
5.5.1 One-versus-all
5.5.2 One-versus-one
5.5.3 Softmax regression
5.6 Application of classification
Chapter 6: Sentiment classification: Large movie-review dataset
6.1 Using the Bag of Words model
6.1.1 Applying the Bag of Words model to movie reviews
6.1.2 Cleaning all the movie reviews
6.1.3 Exploratory data analysis on your Bag of Words
6.2 Building a sentiment classifier using logistic regression
6.2.1 Setting up the training for your model
6.2.2 Performing the training for your model
6.3 Making predictions using your sentiment classifier
6.4 Measuring the effectiveness of your classifier
6.5 Creating the softmax-regression sentiment classifier
6.6 Submitting your results to Kaggle
Chapter 7: Automatically clustering data
7.1 Traversing files in TensorFlow
7.2 Extracting features from audio
7.3 Using k-means clustering
7.4 Segmenting audio
7.5 Clustering with a self-organizing map
7.6 Applying clustering
Chapter 8: Inferring user activity from Android accelerometer data
8.1 The User Activity from Walking dataset
8.1.1 Creating the dataset
8.1.2 Computing jerk and extracting the feature vector
8.2 Clustering similar participants based on jerk magnitudes
8.3 Different classes of user activity for a single participant
Chapter 9: Hidden Markov models
9.1 Example of a not-so-interpretable model
9.2 Markov model
9.3 Hidden Markov model
9.4 Forward algorithm
9.5 Viterbi decoding
9.6 Uses of HMMs
9.6.1 Modeling a video
9.6.2 Modeling DNA
9.6.3 Modeling an image
9.7 Application of HMMs
Chapter 10: Part-of-speech tagging and word-sense disambiguation
10.1 Review of HMM example: Rainy or Sunny
10.2 PoS tagging
10.2.1 The big picture: Training and predicting PoS with HMMs
10.2.2 Generating the ambiguity PoS tagged dataset
10.3 Algorithms for building the HMM for PoS disambiguation
10.3.1 Generating the emission probabilities
10.4 Running the HMM and evaluating its output
10.5 Getting more training data from the Brown Corpus
10.6 Defining error bars and metrics for PoS tagging
Part 3: The neural network paradigm
Chapter 11: A peek into autoencoders
11.1 Neural networks
11.2 Autoencoders
11.3 Batch training
11.4 Working with images
11.5 Application of autoencoders
Chapter 12: Applying autoencoders: The CIFAR-10 image dataset
12.1 What is CIFAR-10?
12.1.1 Evaluating your CIFAR-10 autoencoder
12.2 Autoencoders as classifiers
12.2.1 Using the autoencoder as a classifier via loss
12.3 Denoising autoencoders
12.4 Stacked deep autoencoders
Chapter 13: Reinforcement learning
13.1 Formal notions
13.1.1 Policy
13.1.2 Utility
13.2 Applying reinforcement learning
13.3 Implementing reinforcement learning
13.4 Exploring other applications of reinforcement learning
Chapter 14: Convolutional neural networks
14.1 Drawback of neural networks
14.2 Convolutional neural networks
14.3 Preparing the image
14.3.1 Generating filters
14.3.2 Convolving using filters
14.3.3 Max pooling
14.4 Implementing a CNN in TensorFlow
14.4.1 Measuring performance
14.4.2 Training the classifier
14.5 Tips and tricks to improve performance
14.6 Application of CNNs
Chapter 15: Building a real-world CNN: VGG-Face and VGG-Face Lite
15.1 Making a real-world CNN architecture for CIFAR-10
15.1.1 Loading and preparing the CIFAR-10 image data
15.1.2 Performing data augmentation
15.2 Building a deeper CNN architecture for CIFAR-10
15.2.1 CNN optimizations for increasing learned parameter resilience
15.3 Training and applying a better CIFAR-10 CNN
15.4 Testing and evaluating your CNN for CIFAR-10
15.4.1 CIFAR-10 accuracy results and ROC curves
15.4.2 Evaluating the softmax predictions per class
15.5 Building VGG-Face for facial recognition
15.5.1 Picking a subset of VGG-Face for training VGG-Face Lite
15.5.2 TensorFlow’s Dataset API and data augmentation
15.5.3 Creating a TensorFlow dataset
15.5.4 Training using TensorFlow datasets
15.5.5 VGG-Face Lite model and training
15.5.6 Training and evaluating VGG-Face Lite
15.5.7 Evaluating and predicting with VGG-Face Lite
Chapter 16: Recurrent neural networks
16.1 Introduction to RNNs
16.2 Implementing a recurrent neural network
16.3 Using a predictive model for time-series data
16.4 Applying RNNs
Chapter 17: LSTMs and automatic speech recognition
17.1 Preparing the LibriSpeech corpus
17.1.1 Downloading, cleaning, and preparing LibriSpeech OpenSLR data
17.1.2 Converting the audio
17.1.3 Generating per-audio transcripts
17.1.4 Aggregating audio and transcripts
17.2 Using the deep-speech model
17.2.1 Preparing the input audio data for deep speech
17.2.2 Preparing the text transcripts as character-level numerical data
17.2.3 The deep-speech model in TensorFlow
17.2.4 Connectionist temporal classification in TensorFlow
17.3 Training and evaluating deep speech
Chapter 18: Sequence-to-sequence models for chatbots
18.1 Building on classification and RNNs
18.2 Understanding seq2seq architecture
18.3 Vector representation of symbols
18.4 Putting it all together
18.5 Gathering dialogue data
Chapter 19: Utility landscape
19.1 Preference model
19.2 Image embedding
19.3 Ranking images
What’s next
appendix: Installation instructions
A.1 Installing the book’s code with Docker
A.1.1 Installing Docker in Windows
A.1.2 Installing Docker in Linux
A.1.3 Installing Docker in macOS
A.1.4 Using Docker
A.2 Getting the data and storing models
A.3 Necessary libraries
A.4 Converting the call-center example to TensorFlow 2
A.4.1 The call-center example with TF2
index
Date open sourced
2020-12-29
A file's MD5 is a hash computed from the file's contents, and is reasonably unique based on that content. All shadow libraries indexed here primarily use MD5s to identify files.
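Computing such an MD5 fingerprint locally takes only a few lines. Here is a minimal Python sketch; the filename is a hypothetical placeholder, not an actual archive path:

    import hashlib

    def file_md5(path: str, chunk_size: int = 1 << 20) -> str:
        # Return the MD5 hex digest of a file, read in chunks so
        # large PDFs need not fit in memory.
        digest = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Example with a hypothetical local filename:
    # print(file_md5("Machine Learning with TensorFlow.pdf"))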
A file might appear in multiple shadow libraries. For information about the various datasets we have compiled, see the Datasets page.
For detailed information about this file, see its JSON file.