Job Opportunity in Nashville (DATA SCIENTIST/BIG DATA)! Full-time
 
My client is seeking someone with application development lead experience, specifically with Hadoop/Big Data. Data Scientist experience is a plus. Details are below. Please contact me if you are interested. I worked at this client for 10 years, so I know them well.

Thanks, 
Robbie
Aspyre, LLC
615.708.5210 

JOB DETAILS ARE BELOW:

Below is the job description. :) We are also offering a $1000 referral bonus for any successful referral/placement.

The same client actually has 2 full-time needs.

Position
The role requires working closely with others, frequently in a matrixed environment, and with little supervision. As a consulting-level position, the role requires self-starters who are proficient in problem solving and capable of bringing clarity to complex situations. It involves contributing to strategic technical direction and system architecture approaches for individual projects and platform migrations. The culture of the organization places an emphasis on teamwork, so social and interpersonal skills are as important as technical capability. Due to the emerging and fast-evolving nature of Big Data technology and practice, the position requires staying well-informed of technological advancements and being proficient at putting new innovations into effective practice.

Responsibilities
- Provide leadership and deep technical expertise in all aspects of solution design and application development for specific business environments.
- Set technical direction for groups of applications and similar technologies, and take responsibility for technically robust solutions encompassing all business, architecture, and technology constraints.
- Build and support a Hadoop-based ecosystem designed for enterprise-wide analysis of structured, semi-structured, and unstructured data.
- Manage and optimize Hadoop/Spark clusters, which may include many large HBase instances.
- Support regular requests to move data from one cluster to another.
- Manage production support teams to ensure service levels are maintained and any interruption is resolved in a timely fashion.
- Bring new data sources into HDFS, then transform and load them to databases.
- Work collaboratively with Data Scientists and business and IT leaders throughout the company to understand Big Data needs and use cases.

Qualifications

Required (a successful candidate will have):
- Bachelor's degree in Computer Science or a related discipline, with at least 7 years of equivalent work experience.
- Data modeling experience using Big Data technologies.
- Strong understanding of best practices and standards for Hadoop application design and implementation.
- 2 years of hands-on experience with the Cloudera Distribution of Hadoop (CDH), with experience in many of the following components: Hadoop, MapReduce, Spark, Impala, Hive, Solr, YARN; HBase or Cassandra; Kafka, Flume, Storm, ZooKeeper; Java, Python, or Scala; SQL, JSON, XML; RegEx; Sqoop.
- Experience with unstructured data.
- Experience developing MapReduce programs using Apache Hadoop.
- Experience deploying Big Data technologies to production.
- Understanding of Lambda architectures and real-time streaming.
- Ability to multitask and balance competing priorities.
- Strong practical experience in agile application development, file systems management, and DevOps discipline, using short-cycle iterations to deliver continuous business value.
- Expertise in planning, implementing, supporting, and tuning Hadoop ecosystem environments using a variety of tools and techniques.
- Knowledge of all facets of Hadoop ecosystem development, including ideation, design, implementation, tuning, and operational support.
- Ability to define and apply best-practice techniques and to impose order in a fast-changing environment.
- Strong problem-solving skills.
- Strong verbal, written, and interpersonal skills, including a desire to work within a highly matrixed, team-oriented environment.

Preferred (a successful candidate may have):
- Experience in the healthcare domain
- Experience with patient data
- Experience with predictive models