Docker for Data Science: Building Scalable and Extensible Data Infrastructure Around the Jupyter Notebook Server


Bibliographic record details
Main author: Cook, Joshua (Author)
Corporate author: SpringerLink (Online service)
Format: Electronic resource; eBook
Language: English
Published: Berkeley, CA : Apress : Imprint: Apress, 2017.
Subjects:
Available online: Full Text via HEAL-Link
LEADER 03179nam a22003975i 4500
001 978-1-4842-3012-1
003 DE-He213
005 20171117151244.0
007 cr nn 008mamaa
008 170823s2017 xxu| s |||| 0|eng d
020 |a 9781484230121  |9 978-1-4842-3012-1 
024 7 |a 10.1007/978-1-4842-3012-1  |2 doi 
040 |d GrThAP 
100 1 |a Cook, Joshua.  |e author. 
245 1 0 |a Docker for Data Science  |h [electronic resource] :  |b Building Scalable and Extensible Data Infrastructure Around the Jupyter Notebook Server /  |c by Joshua Cook. 
264 1 |a Berkeley, CA :  |b Apress :  |b Imprint: Apress,  |c 2017. 
300 |a XXI, 257 p. 97 illus., 76 illus. in color.  |b online resource. 
336 |a text  |b txt  |2 rdacontent 
337 |a computer  |b c  |2 rdamedia 
338 |a online resource  |b cr  |2 rdacarrier 
347 |a text file  |b PDF  |2 rda 
505 0 |a Chapter 1: Introduction -- Chapter 2: Docker -- Chapter 3: Interactive Programming -- Chapter 4: Docker Engine -- Chapter 5: The Dockerfile -- Chapter 6: Docker Hub -- Chapter 7: The Opinionated Jupyter Stacks -- Chapter 8: The Data Stores -- Chapter 9: Docker Compose -- Chapter 10: Interactive Development. 
520 |a Learn Docker "infrastructure as code" technology to define a system for performing standard but non-trivial data tasks on medium- to large-scale data sets, using Jupyter as the master controller. It is not uncommon for a real-world data set to be difficult to manage: the set may not fit into available memory, or may require prohibitively long processing. These are significant challenges even for skilled software engineers, and they can render the standard Jupyter system unusable. As a solution to this problem, Docker for Data Science proposes using Docker. You will learn how to use existing pre-built public images created by the major open-source technologies -- Python, Jupyter, Postgres -- as well as how to use the Dockerfile to extend these images to suit your specific purposes. The Docker Compose technology is examined, and you will learn how it can be used to build a linked system, with Python churning data behind the scenes and Jupyter managing these background tasks. Best practices in using existing images are explored, as well as developing your own images to deploy state-of-the-art machine learning and optimization algorithms. What you'll learn: master interactive development using the Jupyter platform; run and build Docker containers from scratch and from publicly available open-source images; write infrastructure as code using the docker-compose tool and its docker-compose.yml file format; deploy a multi-service data science application across a cloud-based system. 
650 0 |a Computer science. 
650 0 |a Computers. 
650 1 4 |a Computer Science. 
650 2 4 |a Big Data. 
650 2 4 |a Computing Methodologies. 
650 2 4 |a Open Source. 
650 2 4 |a Python. 
710 2 |a SpringerLink (Online service) 
773 0 |t Springer eBooks 
776 0 8 |i Printed edition:  |z 9781484230114 
856 4 0 |u http://dx.doi.org/10.1007/978-1-4842-3012-1  |z Full Text via HEAL-Link 
912 |a ZDB-2-CWD 
950 |a Professional and Applied Computing (Springer-12059)
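
The linked, multi-container system described in the abstract (field 520) -- a Jupyter Notebook server alongside a data store, wired together with Docker Compose -- can be sketched as a minimal docker-compose.yml. This is an illustrative assumption, not a listing from the book; the service names, image tags, port mapping, and credentials below are placeholders:

```yaml
# Hypothetical sketch of a linked Jupyter + Postgres system via Docker Compose.
# Service names, image tags, and credentials are illustrative assumptions.
version: "3"
services:
  jupyter:
    image: jupyter/scipy-notebook   # one of the community Jupyter Docker stacks
    ports:
      - "8888:8888"                 # expose the Notebook server on the host
    depends_on:
      - postgres                    # start the data store first
  postgres:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: example    # placeholder credential, not for production
```

With a file like this, `docker-compose up` would start both containers on a shared network, so notebook code could reach the database at hostname `postgres` -- the "infrastructure as code" pattern the abstract refers to.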