Large Scale Machine Learning with Spark was published by Packt Publishing in October 2016. The book is 476 pages long, in English (ISBN-13: 978-1785888748).
Data processing, algorithm implementation, tuning, scaling up, and finally deployment are crucial steps in optimising any machine learning application.
Spark handles large-scale batch and streaming data, determining when to cache data in memory and processing it up to 100 times faster than Hadoop-based MapReduce. This means predictive analytics can be applied to both streaming and batch data to build complete machine learning (ML) applications much faster, making Spark an ideal candidate for large, data-intensive applications.
This book focuses on designing and engineering scalable ML solutions with Spark. First, you will learn how to install Spark, including the new features of the Spark 2.0 release. Moving on, you will explore important concepts such as advanced feature engineering with RDDs and Datasets. After learning how to develop and deploy applications, you will see how to use external libraries with Spark.
In summary, you will be able to develop complete, personalised ML applications, from data collection and model building through tuning and scaling, up to deployment on a cluster or in the cloud.
Who This Book Is For
This book is for data science engineers and scientists who work with large and complex data sets. You should be familiar with the basics of machine learning concepts, statistics, and computational mathematics. Knowledge of Scala and Java is advisable.
What You Will Learn
- Get a solid theoretical understanding of ML algorithms
- Configure Spark on clusters and cloud infrastructure to develop applications using Scala, Java, Python, and R
- Scale up ML applications on large clusters or cloud infrastructures
- Use Spark ML and MLlib to develop ML pipelines for recommendation systems, classification, regression, clustering, sentiment analysis, and dimensionality reduction
- Handle large text corpora when developing ML applications, with a strong focus on feature engineering
- Use Spark Streaming to develop ML applications for real-time streaming data
- Tune ML models with cross-validation, hyperparameter tuning, and train-validation splits
- Enhance ML models to make them adaptable to new data in dynamic and incremental environments
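The tuning idea in the list above (cross-validation and train-validation splits, which Spark ML exposes through its `CrossValidator` and `TrainValidationSplit` classes) can be sketched in plain Python, independent of Spark. This is a generic illustration of k-fold index splitting, not code from the book:

```python
import random

def k_fold_indices(n, k, seed=42):
    """Split row indices 0..n-1 into k shuffled, near-equal folds.

    Each fold serves once as the validation set while the remaining
    k-1 folds form the training set, as in k-fold cross-validation.
    """
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    # Take every k-th index starting at offset i, giving k disjoint folds.
    return [idx[i::k] for i in range(k)]

folds = k_fold_indices(10, 5)
# Together the 5 folds cover all 10 indices exactly once.
for i, fold in enumerate(folds):
    train = [j for f in folds if f is not fold for j in f]
    print(f"fold {i}: validate on {fold}, train on {len(train)} rows")
```

A train-validation split is the degenerate single-fold case: shuffle once and cut the index list at, say, 80%.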