ML

Deep learning with GPUs in production

- Start-up: Python -> enterprise: C/Java/Scala, more engineers, faster
- Research: quick results and prototyping
- GPU? Data movement between GPU and CPU is important; keep it fast
- [ ] fast.ai: class (only high school math required)
- Infrastructure: Spark/Flink, scheduler problem, distributed file system
- Problems to think about when running workloads on GPU clusters:
  - GPU memory is relatively small
  - Throughput: jobs are more than matrix math
  - Resource provisioning: how many resources do we need? GPU/CPU/RAM, GPU allocation per job
  - Python <-> Java overhead defeats the point of GPUs
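A back-of-envelope sketch of the resource-provisioning question above: how much GPU memory does one training job need? This is a rough accounting model with made-up example numbers, not tied to any specific framework; the function name and the activation count are assumptions for illustration.

```python
def estimate_training_memory_gb(num_params, batch_size,
                                activation_floats_per_sample,
                                bytes_per_float=4):
    """Rough GPU memory estimate for one training job, in GB.

    Counts parameters, gradients, and optimizer state (Adam-style
    optimizers keep two extra copies per parameter), plus activations
    that scale with batch size.
    """
    # weights + gradients + 2x optimizer state = 4 copies of the parameters
    param_bytes = 4 * num_params * bytes_per_float
    activation_bytes = batch_size * activation_floats_per_sample * bytes_per_float
    return (param_bytes + activation_bytes) / 1e9

# Hypothetical job: 100M-parameter model, batch of 64,
# ~5M activation floats per sample
mem_gb = estimate_training_memory_gb(100_000_000, 64, 5_000_000)
```

Even this crude estimate shows why GPU memory is the binding constraint: the optimizer state alone is several times the model size, which drives how many jobs fit on one device.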

[Coursera Note] Machine Learning Foundations: A Case Study Approach

1 Week 1: Welcome
  1.1 Introduction
    1.1.1 Real-world case based
      - regression: house price prediction
      - classification: sentiment analysis
      - clustering & retrieval: finding documents
      - matrix factorization & dimensionality reduction: recommending products
    1.1.2 Requirements
      - math: calculus & algebra
      - Python
    1.1.3 Capstone project
  1.2 iPython Notebook
    - Python commands and their outputs
    - Markdown for documentation
  1.3 SFrames
    1.
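The regression case study above (house price prediction) can be sketched in pure Python with one-feature ordinary least squares; the square-footage and price numbers below are made up for illustration, and the course itself uses SFrames and richer features rather than this closed-form fit.

```python
def fit_simple_ols(xs, ys):
    """Fit y = a + b*x by ordinary least squares; return (a, b)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Hypothetical data: square footage -> sale price
sqft = [1000, 1500, 2000, 2500]
price = [200_000, 290_000, 410_000, 500_000]

a, b = fit_simple_ols(sqft, price)
predicted = a + b * 1800  # predicted price for an 1800 sqft house
```

The same fit-then-predict pattern carries over to the other case studies in the course; only the model and features change.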