Becoming a data scientist
How do I become a data scientist?
Here are some resources I've collected about working with data; I hope you find them useful (note: I'm an undergrad student, so this is not an expert opinion in any way).
1) Learn about matrix factorizations:
Take a Computational Linear Algebra course (sometimes called Applied Linear Algebra, Matrix Computations, Numerical Analysis or Matrix Analysis; it can be either a CS or an Applied Math course). Matrix decomposition algorithms are fundamental to many data mining applications and are usually underrepresented in a standard "machine learning" curriculum. With terabytes of data, traditional tools such as Matlab are no longer suitable for the job: you cannot just run eig() on Big Data. Distributed matrix computation packages such as those included in Apache Mahout are trying to fill this void, but you need to understand how the numeric algorithms/LAPACK/BLAS routines work in order to use them properly, adjust for special cases, build your own and scale them up to terabytes of data on a cluster of commodity machines. Numerics courses are usually built upon undergraduate algebra and calculus, so you should be fine on prerequisites. I'd recommend these resources for self-study/reference material (two toy sketches, a low-rank SVD and a PageRank power iteration, follow this list):
- BellKor, Matrix factorization for recommender systems: www2.research.at
- BellKor, Scalable Collaborative Filtering...: public.resea
- Press et al., Numerical Recipes in C++: http://www.amazon.com/Nu
- Golub & Van Loan: Matrix Computations: http://www.
- Watkins, Fundamentals of Matrix Computations (this is a very gentle intro to the field): http://www.amazon
- Demmel, Applied Numeric Linear Algebra: http://www.amazo
- Trefethen & Bau, Numerical linear algebra: http://www.amazo
- Watkins: The Matrix Eigenvalue Problem: GR and Krylov Subspace Methods: http://www.amazo
- Parlett, The Symmetric Eigenvalue Problem: http://www.amazo
- Iverson, Algebra as a language: http://www.jsof
- Iverson, Algebra: an algorithmic treatment: http://www.ama
- Bertsekas, Parallel and Distributed Computation: Numerical Methods: http://www.amazon
- Hamming, Numerical Methods for Scientists and Engineers: http://www.ama
- Bierman, Factorization Methods for Discrete Sequential Estimation: http://www.am
- Wilkinson, The Algebraic Eigenvalue Problem: http://www.amazo
- Horn, Matrix Analysis: http://www.amaz
- Harville, Matrix Algebra from a Statistician's Perspective: http://www.a
- Fiedler, Special Matrices: http://www.amaz
- Higham, Accuracy and stability of numerical algorithms: http://www.am
- Langville & Meyer, Google Page Rank and Beyond: http://www.am
- Nielsen, PageRank tutorial: http://michaeln
- Mannix, Numerical recipes in Hadoop: http://www.slides
- Godsil, Algebraic Graph Theory: http://www.amazon.com/Alg
- Wheeler: On building a stupidly fast graph database: http://blog.dir
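To make the factorization idea concrete, here is a minimal sketch (Python/numpy, on a made-up 4x4 ratings matrix) of the low-rank SVD approximation behind the recommender-systems papers above; a real system would handle missing entries explicitly and could not rely on an in-memory svd():

import numpy as np

# Toy user-by-item ratings matrix (hypothetical data; 0 = unobserved).
R = np.array([[5., 4., 0., 1.],
              [4., 5., 1., 0.],
              [1., 0., 5., 4.],
              [0., 1., 4., 5.]])

# Full SVD, then keep the top-k singular triplets for a rank-k model.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # rank-2 reconstruction

print(np.round(R_hat, 2))  # smoothed scores, including the unobserved cells

This is exactly the kind of dense, in-memory routine that stops working at terabyte scale, which is why the distributed packages mentioned above matter.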
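And for the PageRank references (Langville & Meyer, Nielsen), a toy power-iteration sketch on a hypothetical 4-page link graph:

import numpy as np

# Tiny made-up link graph: A[i, j] = 1 if page i links to page j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Column-stochastic transition matrix plus the standard damping factor.
M = (A / A.sum(axis=1, keepdims=True)).T
d, n = 0.85, A.shape[0]
r = np.full(n, 1.0 / n)
for _ in range(100):               # power iteration
    r = (1 - d) / n + d * (M @ r)
print(np.round(r, 3))              # stationary rank vector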
2) Start learning statistics by coding with R:
- Pick up some R manuals (see ...) and the UCI Machine Learning Repository: http://archiv
- Here is a good reference to get started with regression analysis (a minimal worked example follows at the end of this section):
- Albert, Bayesian computation with R:
- Spector, Data Manipulation with R:
- Gries, Quantitative corpus linguistics with R: http://www.amazon.com/
- Duda & Hart, Pattern Classification: http://www.amazon.com/Pattern-Cl...; it is a classic book on statistical inference and a very readable intro to the field
- Go through Exploratory Data Analysis by Tukey: http://www.amazon.com/Explorator.... Read Hamming for inspiration: http://www.cs.virginia.edu/~robi...
- If you want to get a job, look up "statistician" or "data scientist" job specs on Twitter and see what the market wants: http://twitter.com/#search?q=sta..., http://twitter.com/#search?q=%22...
- E.g. here is Netflix's definition of the "data scientist" body of knowledge (http://jobs.netflix.com/DetailFl...): Multivariate Regression, Logistic Regression, Support Vector Machines, Bagging, Boosting, Decision Trees, Time Series Analysis, Optimization, Stochastic Processes, Experiment Analysis, Bootstrapping, R, SAS, Python, Weka, SQL and Excel. This looks like a standard Statistics curriculum.
- According to a LinkedIn job posting (http://www.sanfranrecruiter.com/...) you need to know some of the following: algorithm design, information retrieval, relational databases (SQL) and non-relational databases (Hadoop/Pig), big data analytics, data classification, text mining, search algorithms. This seems to be a more CS/IR-oriented role.
- Learn about Palantir (http://www.palantirtech.com/), Recorded Future (https://www.recordedfuture.com/) and Lyric Semiconductor (http://www.lyricsemiconductor.com/); they make interesting products.
- Subscribe to DBWorld (it's a bit noisy but worth following): http://www.cs.wisc.edu/dbworld/; consider joining at least one of these interest groups: http://www.sigkdd.org/, http://www.sigir.org/, http://www.sigmod.org/, http://www.sigsam.org/, http://www.amstat.org/, http://www.siam.org/
- Choose an interesting problem to tackle, say temporal search: http://www.google
- See what interests you more and do your market research. Would you prefer working with vendor tools and doing mostly modeling and reporting, or building data mining systems yourself and writing a lot of code? Do you see yourself as a corporate employee, a researcher in academia or a startup founder in the future? What data interests you? Structure your curriculum based on that.
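To make the regression item above concrete, here is a minimal ordinary-least-squares sketch on synthetic data (shown in Python/numpy to keep the code in this post in one language; the equivalent in R, which this section recommends, is the one-liner lm(y ~ x)):

import numpy as np

# Synthetic data (made up): y = 2x + 1 plus Gaussian noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 2 * x + 1 + rng.normal(0, 0.5, 50)

# Ordinary least squares: solve min ||Xb - y|| with an intercept column.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # approximately [1.0, 2.0]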
3) Learn about distributed systems and databases:
- Note: this topic is not part of a standard Machine Learning track, but you can probably find courses such as Distributed Systems or Parallel Programming in your CS/EE catalog. I believe it is important to learn how to work with a Linux cluster and how to design scalable distributed algorithms if you want to work with big data. It is also becoming increasingly important to be able to utilize the full power of multicore (see http://en.wikipedia.org/wiki/Moo..., http://techresearch.intel.com/ar...).
- Download Hadoop and run some MapReduce jobs on your laptop in pseudo-distributed mode (see
- Learn about the Google technology stack (MapReduce, BigTable, Dremel, Pregel, GFS, Chubby, Protobuf etc). (See
- Set up an account with Amazon AWS/EC2/S3/EBS and experiment with running Hadoop on a cluster with large data sets (you can use Cloudera or YDN images, but in my opinion you can understand the system better if you set it up from scratch, using the original distribution). Watch the costs.
- Try out Hadoop alternatives, specifically the minimalist frameworks such as BashReduce: http://github.com/erikfrey/bashr... and CloudMapReduce: http://code.google.com/p/cloudma... (see
- Run Brian Cooper's Cloud Serving Benchmark (YCSB) on AWS; compare HBase vs. Cassandra performance on a small cluster (6-8 nodes): http://wiki.github.com/b
- Run the LINPACK benchmark: http://www.dat
- Run some experiments with MPI (http://www.mcs.anl.gov/research/...): try to implement a simple clustering algorithm such as k-means (http://en.wikipedia.org/wiki/K-m...) with MPI vs. Hadoop/MapReduce and compare the performance, fault tolerance, ease of use etc. Learn the differences between the two approaches and when it makes sense to use each one (a single-machine k-means baseline to start from is sketched at the end of this section).
- Check out Dongarra's papers: http://www.netlib
- There is a new library called MapReduce-MPI (http://www.sandia.gov/~sjplimp/m...); see how it works and how it compares to other MapReduce implementations.
- Run some tests with ScaLAPACK; try to port one of the routines to Hadoop and compare the performance and scalability.
- Write your own simplified MapReduce runtime in C or any other programming language (a toy in-process version is sketched at the end of this section).
- Check out http://www.cascading.org/, http://clojure.org/ and http://github.com/bradford/infer
- Learn about distributed hash tables (http://en.wikipedia.org/...)
- Learn about Paxos (http://en.wikipedia.org/wiki/Pax...) and run some experiments with open source implementations.
- Download Nutch (http://nutch.apache.org/) or Solr (http://lucene.apache.org/solr/) and run a crawl on Wikipedia. Analyze the collected data with R (see item 2 above) or Python (http://www.nltk.org/).
- Write your own simplified crawler/indexer and test its performance and scalability; look at the Lucene source for ideas and at http://infolab.stanford.edu/~bac... for inspiration. You can probably build it as a term project in either an Information Retrieval or a Search Engines course.
- Learn about prefix sums: http://en.wikipedia.org/wiki/Pre..., parallel matrix multiplication: http://www.cs.berkeley.edu/~yeli..., streaming: http://infolab.stanford.edu/stream/ and BSP: http://en.wikipedia.org/wiki/Bul...
- Pick one of the PGAS languages (http://en.wikipedia.org/wiki/Par...), e.g. X10 (http://en.wikipedia.org/wiki/X10...), go through the tutorials (http://ppppcourse.ning.com/forum...), and run some HPC benchmarks (LU, FFT) and the examples (the streaming example in particular): see how it scales on a cluster/AWS, compare to sequential and Hadoop/MapReduce implementations, and see what kind of performance/scalability gains it gives you on multicore boxes.
- Some good references on parallel programming: Herlihy & Shavit, The Art of Multiprocessor Programming: http://www.amazon.com/Art-Multip...; Blelloch, Vector Models for Data-Parallel Computing: http://citeseerx.ist.psu.edu/vie...; Valiant, A Bridging Model for Parallel Computation: http://portal.acm.org/citation.c...; Hillis & Steele, Data Parallel Algorithms: http://portal.acm.org/citation.c...
- Take a course in Parallel Computer Architecture: http://www.
- Check out Cilk: http://software.int
- Run some experiments with Weka (http://www.cs.waikato.ac.nz/ml/w...) or RapidMiner (http://rapid-i.com/): pick a simple algorithm, port it to MapReduce, and see how it scales on a cluster/AWS.
- Experiment with distributed 'NoSQL' data stores (Voldemort, HBase, Redis, Tokyo Cabinet, Cassandra etc). Figure out what the CAP theorem is all about (http://www.allthingsdistributed....). Create a simple app with a key-value or column-based store as a back-end. Import several GBs of interesting data into it and run some simple clustering/KNN algorithms (http://en.wikipedia.org/wiki/Clu..., http://en.wikipedia.org/wiki/Nea...). Optimize your algorithm to better utilize random access patterns and experiment with various tuning options. Build a front-end visualization for the results (check out Protovis or a similar visualization package: http://vis.stanford.edu/protovis/).
- A good resource on 'NoSQL': Varley, No Relation: The Mixed Blessings of Non-Relational Databases: http://ianvarley.com/UT/M
- Learn about main-memory databases: http://en.wikipedia.org/wiki/In-..., http://scholar.google.com/schola..., http://monetdb.cwi.nl/
- Write a distributed hash table in C; here is a good reference: http://pdos.cs (the core routing idea, consistent hashing, is sketched at the end of this section).
- Write a distributed file system in C. Learn how to write good systems code using the following resources:
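For the MPI-vs-MapReduce clustering exercise above, here is a single-machine k-means (Lloyd's algorithm) baseline on made-up 2-D data. Note the structure: the assignment step is embarrassingly parallel (a natural map) and the center update is an aggregation (a natural reduce):

import numpy as np

rng = np.random.default_rng(1)
# Two well-separated synthetic clusters of 50 points each.
pts = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(8, 1, (50, 2))])

k = 2
centers = pts[rng.choice(len(pts), k, replace=False)]
for _ in range(20):
    # Assignment step: nearest center for every point.
    labels = np.argmin(((pts[:, None] - centers) ** 2).sum(axis=2), axis=1)
    # Update step: mean of each cluster (the part a reducer would compute).
    centers = np.array([pts[labels == j].mean(axis=0) for j in range(k)])
print(np.round(centers, 2))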
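For the "write your own simplified MapReduce runtime" item, a toy in-process sketch; run_mapreduce and its API are hypothetical, just the bare map/shuffle/reduce skeleton (a real runtime adds partitioning, spilling to disk and fault tolerance):

from collections import defaultdict
from multiprocessing import Pool

def run_mapreduce(inputs, mapper, reducer, workers=4):
    with Pool(workers) as pool:
        mapped = pool.map(mapper, inputs)        # map phase, in parallel
    groups = defaultdict(list)                   # shuffle: group by key
    for pairs in mapped:
        for key, value in pairs:
            groups[key].append(value)
    return {k: reducer(k, vs) for k, vs in groups.items()}  # reduce phase

# Word count, the canonical example:
def mapper(line):
    return [(w, 1) for w in line.split()]

def reducer(word, counts):
    return sum(counts)

if __name__ == "__main__":
    lines = ["the quick brown fox", "the lazy dog", "the fox"]
    print(run_mapreduce(lines, mapper, reducer))  # {'the': 3, 'fox': 2, ...}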
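And for the distributed-hash-table exercise (the post suggests C; this sketch stays in Python for consistency with the other examples), the core routing idea is consistent hashing with virtual nodes, so adding or removing a node only remaps a small fraction of keys. Real DHTs such as Chord add routing tables, replication and failure handling:

import bisect
import hashlib

def h(s):
    # Hash a string to a point on the ring.
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes, vnodes=64):
        # Each node appears vnodes times on the ring for smoother balance.
        self._ring = sorted((h(f"{n}#{i}"), n)
                            for n in nodes for i in range(vnodes))
        self._keys = [k for k, _ in self._ring]

    def lookup(self, key):
        # The first virtual node clockwise of the key's hash owns it.
        i = bisect.bisect(self._keys, h(key)) % len(self._keys)
        return self._ring[i][1]

ring = Ring(["node-a", "node-b", "node-c"])
print(ring.lookup("user:42"))  # the node responsible for this key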
4) Learn about data compression
To be added
5) Learn about machine learning
- This is an excellent resource for self-study: Cross, Learning about machine learning: http://measuringmeasures.com/blo...; also http://metaoptimize.com/qa/quest...
- The alternative (and rather expensive) option is to enroll in a CS program/Machine Learning track if you prefer studying in a formal setting.
- Since all the standard machine learning, data mining, IR, statistics, AI and NLP content is available online, can be forked on GitHub or purchased on Amazon, I personally don't see much value in studying for a Master's degree unless you want a corporate job afterwards.
- See: Was your Master's in Computer Science (MS CS) degree worth it and why?, When is it a good idea to get an MS in Computer Science?, Was your Master's degree in Statistics/Applied Math/Symbolic Systems worth it and why?, What are the advantages and disadvantages of doing a CS PhD?
- [Higher Education] Which are the best universities for an MS or PhD related to Information Retrieval, and why?
- See Lorica, How to nurture data scientists: http://practi
- You can structure your study program according to the online course catalogs and curricula of MIT (http://web.mit.edu/catalog/degre..., http://ocw.mit.edu/courses/elect...), Stanford (http://www.stanford.edu/dept/reg...) or other top engineering schools. Experiment with data a lot, hack some code, ask questions, talk to good people, set up a web crawler in your garage (http://www.ngoprekweb.com/2006/1...).
- Joining a well-capitalized data-driven startup and learning by doing (with some part-time self-study using the resources above) could be a good option. See:
Who are the best VCs in the field of analytics / data mining / databases?
Which companies have the best data science teams?
What are the notable startups in the news space?
Does the US Census have a data team?
Why do so many data geeks join web companies instead of solving large scale data problems in biology?
6) Learn about least-squares estimation and Kalman filters:
- This is a classic topic and "data science" par excellence, in my opinion. It is also a good introduction to optimization and control theory. Start with Bierman's LLS tutorial given to his colleagues at JPL; it is clearly written and inspiring (the Apollo mission trajectory was estimated using these methods): http://www.amazon.com/Factorizat...; also see Curkendall & Leondes: http://adsabs.harvard.edu/full/1974CeMec...8..481C and Quarles: http://citeseerx.ist.psu.edu/vie.... (A minimal scalar Kalman filter is sketched after this list.)
- See Steven Kay's series on statistical signal estimation: http://www.amazon.com/Fundamenta...; also check out his short-course outline at the University of Rhode Island for a list of interesting topics to learn (this is usually part of EE curricula): http://www.ele.uri.edu/faculty/k...
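As a first hands-on step for this section, here is a scalar Kalman filter estimating a constant from noisy measurements, the simplest instance of the recursive least-squares machinery Bierman describes (toy numbers, Python/numpy):

import numpy as np

rng = np.random.default_rng(2)
truth = 10.0
z = truth + rng.normal(0, 2.0, 50)  # noisy measurements, variance R = 4

x, P = 0.0, 1000.0   # state estimate and its variance (vague prior)
Q, R = 0.0, 4.0      # process and measurement noise variances
for zk in z:
    P = P + Q                # predict (static state, so x is unchanged)
    K = P / (P + R)          # Kalman gain
    x = x + K * (zk - x)     # update with the measurement residual
    P = (1 - K) * P
print(round(x, 2))           # converges toward 10.0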
7) Check out these Q&As:
What are the best blogs about data?
What are the best Twitter accounts about data?
What are the best blogs about bioinformatics?
What are the best Twitter accounts about bioinformatics?
What is data science?
What are the best courses at MIT?
What are the best resources to learn about web crawling and scraping?
What are the best interview questions to evaluate a machine learning researcher?
What are the best resources for learning about distributed file systems?
What are some useful packages for working with large datasets in R?
What are some good books on stringology and pattern matching?
What's a good introductory machine learning text?
What is the best book to pick up working knowledge of theoretical statistics (assuming strong general math)?
Can anyone recommend a fantastic book on time series analysis?
What are the standard texts on linear regression?
What are some good books on random processes?
How has BigTable evolved since the 2006 Google paper?
What is a good source for learning about Bayesian networks?
What are the best data visualizations ever created?
What are some of the prediction and risk estimation models used by insurance companies?
How do scientists share data?
What are the best quant hedge funds?
What are the best books on econometrics?
What are the best introductory books on mathematical finance?
What is the best approach for text categorization?
What are the numbers that every engineer should know, according to Jeff Dean?
If you do decide to go for a Master's degree:
8) Study Engineering - I'd go for CS with a focus on IR, Machine Learning or a combination of both, and take some systems courses along the way. As a "data scientist" you will have to write a ton of code and probably develop distributed algorithms/systems to process massive amounts of data. An MS in Statistics will teach you how to do modeling, regression analysis etc., but not how to build systems; I think the latter is more urgently needed these days, as the old tools become obsolete under the avalanche of data. There is a shortage of engineers who can build a data mining system from the ground up. You can pick up statistics from books and experiments with R (see item 2 above) or take some statistics classes as part of your CS studies.
1) Try to take some of the undergrad math courses you missed. Linear Algebra, Advanced Calculus, Diff. Eq., Probability, Statistics are the most important. After that, take some Machine Learning courses. Read a few of the leading ML textbooks and keep up with journals to get a good sense of the field.
2) Read up on what the top data companies are doing. After 1 or 2 machine learning courses you should have enough background to follow most of the academic papers. Implement some of these algorithms on real data.
3) If you are working with large datasets, get familiar with the latest techniques & tools (Hadoop, NoSQL, R, etc.) by putting them into practice at work (or outside of work).
Read these posts by Mike Driscoll:
1) MS or PhDs in Applied Mathematics or Electrical Engineering
2) Fluency in C++/Matlab/Python
3) Experience building distributed systems and algorithms.
I agree with Anon that CS is probably not the way to go unless you are going to MIT, Caltech, Stanford, CMU, etc. The way I ended up in the field was working as a software engineer designing real-time systems and getting an MS in Applied Math part-time. After 4 years I had skills from both fields and was offered a position doing ML/DM. That said, I can tell you that it's an extremely interesting field, and it appears the skill set will only become more desirable in the future.
Stanford has online courses in data mining / ML - check
Look at some common problems solved with machine learning. Look at problems in your areas of interest with an abundance of available data. Intersect these sets, pick a problem to solve with ML. Learn whatever it takes to solve it poorly. Get people using the output of your model. Iterate, learn more techniques. Work on your maths as needed. Find mentors to talk with about problems you're working on. Keep them updated, collaborate, learn from them.
Get good at building things with data. Update your LinkedIn profile - congratulations, you're a data scientist!
+1 to both Pete's and Russ' wise words above.
The standard way to become a data analyst is a master's in math/statistics plus an internship.
Other ways are:
- PhD in some empirical subject (economics, psychology).
- Get an engineering position in some data-intensive company and convert.
Some of the best modelers I know are ex-programmers.
2) simple stats about data, such as mean, correlation, and p-value.
3) algorithms for data modeling, such as logistic regression and SVM (a short sketch of items 2 and 3 follows below).
4) visualization of data, such as charts and tables.
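A tiny end-to-end sketch of items 2 and 3 on made-up data (Python; assumes numpy, scipy and scikit-learn are installed):

import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
x = rng.normal(0, 1, 200)
y = (x + rng.normal(0, 1, 200) > 0).astype(int)  # noisy binary label

print("mean:", round(x.mean(), 3))               # simple stats
r, p = stats.pearsonr(x, y)                      # correlation and p-value
print("corr:", round(r, 3), "p-value:", p)

clf = LogisticRegression().fit(x.reshape(-1, 1), y)   # item 3: a classifier
print("accuracy:", round(clf.score(x.reshape(-1, 1), y), 3))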