Thursday, April 28, 2016

Seminar Notes- Job Search on LinkedIn

Notes from Randy Block's "Using LinkedIn for Job Search" presentation on 4/28/2016. For more info: http://www.randyblock.com/

Fun Facts:
1. 96% of recruiters are active on LinkedIn, but only 36% of job seekers are.
2. 89% of recruiters have hired candidates through social media.
3. Social networks continue to grow the fastest as hiring tools.
4. Social media has become a fast and cheap “background check” that is often done before inviting a job applicant in for an interview.
5. As a recruiter, if you don’t ask the question, the answer is always no.
6. Mergers, reorganizations, and layoffs are the times to apply.
7. Search algorithms ignore words that are obsolete.
8. Recruiters don’t contact people who are not employed.

Fun Tips:
1. When you follow a company, you come up higher in their search results when they are searching for candidates.
2. Photo tips: men, don't smile; women, show your teeth.
3. Use photofeeler.com to see how people respond to your photo.
4. Turn off notifications when you update your profile!!

Wednesday, April 27, 2016

Seminar Notes- Pachyderm and TubeMogul (using Big Data to convert Events --> Insights --> Actions)

From the Big Data Application Meetup on 4/27. See http://bdam.io/ for complete notes. Slides: http://www.slideshare.net/JoeyZwicker/big-data-applications-61439464

Talk #1 Introducing Pachyderm, by Joe Doliner from Pachyderm

Pachyderm is a big data analytics platform deployed with Kubernetes and Docker. Pachyderm is inspired by the Hadoop ecosystem but shares no code with it. Instead, we leverage the container ecosystem to provide the broad functionality of Hadoop with the ease of use of Docker. 
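As a concrete illustration of the container-based approach: a Pachyderm analysis step is just an ordinary program packaged in a Docker image that reads input files from a mounted filesystem and writes results back out (Pachyderm's documented convention mounts inputs under /pfs and output under /pfs/out). The word-counting logic below is a made-up example, not Pachyderm's API — the point is that no Hadoop-specific code is needed.

```python
# Sketch of a Pachyderm-style analysis step: any program, in any
# language, running in a Docker container. Input data appears as
# plain files; results are written to an output directory.
# The counting logic here is purely illustrative.

from collections import Counter
from pathlib import Path

def count_words(in_dir, out_dir):
    """Read every .txt file in in_dir, write word counts to out_dir."""
    counts = Counter()
    for path in Path(in_dir).glob("*.txt"):
        counts.update(path.read_text().split())
    out = Path(out_dir) / "counts.tsv"
    out.write_text("\n".join(f"{w}\t{n}" for w, n in counts.most_common()))
    return counts

# Inside the container this would run against the mounted repos:
# count_words("/pfs/input", "/pfs/out")
```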

There are two bold new ideas in Pachyderm: 

Tuesday, April 26, 2016

Seminar Notes- Data Pipeline development, deployment, and management using Dataswarm

Below are my notes from Mike Starr's presentation on Dataswarm. Full video here:
https://www.youtube.com/watch?v=M0VCbhfQ3HQ&list=PL_EeYa3aRS55QAbL851AF5FIHlCcN9xbp


1. Key Takeaways:

  1. Dataswarm is a dependency-graph description language. It's not code that runs to completion or does anything on its own; it just defines what you want done.
  2. Dataswarm's primary objective is to let operators schedule a pipeline for a specific date. Users write Python code that defines the pipeline, and a driver script is delegated to actually run it.
  3. Dataswarm's advantage: write functions that generate pipelines instead of writing them manually.
  4. At Facebook, Dataswarm runs every major batch pipeline.
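Takeaway 3 can be sketched in a few lines. Dataswarm's operator library is internal to Facebook, so the `Task` class and `depends_on` parameter below are hypothetical stand-ins; the point is that an ordinary Python function can generate a whole family of task chains instead of each being written by hand.

```python
# Minimal sketch of programmatic pipeline generation.
# `Task` is a hypothetical stand-in for a Dataswarm operation
# (e.g. run a query, move data); the real API is not public.

class Task:
    def __init__(self, name, depends_on=None):
        self.name = name
        self.depends_on = depends_on or []

def build_daily_pipeline(tables):
    """Generate one load -> aggregate chain per table, rather than
    writing every task out manually."""
    tasks = []
    for table in tables:
        load = Task(f"load_{table}")
        agg = Task(f"aggregate_{table}", depends_on=[load])
        tasks.extend([load, agg])
    return tasks

pipeline = build_daily_pipeline(["clicks", "impressions"])
```

Note that running this code only *describes* the dependency graph — consistent with takeaway 1, executing the tasks is left to a separate driver.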

2. Summary: 

"At Facebook, data is used to gain insights for existing products and drive development of new products. In order to do this, engineers and analysts need to seamlessly process data across a variety of backend data stores. Dataswarm is a framework for writing data processing pipelines in Python. Using an extensible library of operations (e.g. executing queries, moving data, running scripts), developers programmatically define dependency graphs of tasks to be executed. Dataswarm takes care of the rest: distributed execution, scheduling, and dependency management. "


Below is the high-level data flow for batch processing: a user action leads to a web request on the backend server; the backend server generates logs of events; those log events then go to the data warehouse.



Dataswarm, at a high level, is a tool that enables data scientists to convert those logs into useful information.
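A toy version of that logs-to-information step, with an invented event schema (real Facebook log records are far richer):

```python
# Toy illustration of turning raw backend log events into a useful
# summary (event counts per action). The record schema is made up.

from collections import Counter

def summarize(events):
    """Aggregate raw event logs into per-action counts."""
    return Counter(e["action"] for e in events)

logs = [
    {"user": 1, "action": "click"},
    {"user": 2, "action": "click"},
    {"user": 1, "action": "view"},
]
# summarize(logs) -> Counter({'click': 2, 'view': 1})
```

In practice a Dataswarm pipeline would run transformations like this as scheduled tasks over warehouse tables rather than in-memory lists.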