Hadoop Admin/Developer
Email: avi.hadoop@gmail.com
Ph: 408-658-0338
Professional Summary
Experience in importing and exporting data using Sqoop from HDFS to Relational
Database Systems and vice-versa.
Experience in analyzing data using HiveQL, Pig Latin, and custom MapReduce
programs in Java (a minimal sketch appears after this summary).
Familiar with Java virtual machine (JVM) and multi-threaded processing.
Knowledge of job workflow scheduling and monitoring tools such as Oozie and
ZooKeeper.
Familiar with data warehousing and ETL tools like Informatica and Pentaho.
Techno-functional responsibilities include interfacing with users, identifying
functional and technical gaps, estimates, designing custom solutions.
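As a brief illustration of the custom MapReduce work in Java mentioned above, the following is a minimal sketch of a word-count-style job. The class and identifier names (EventCount, TokenMapper, SumReducer) are hypothetical examples, not taken from any project on this resume.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class EventCount {
        // Mapper: emits (token, 1) for every whitespace-separated token in the line.
        public static class TokenMapper
                extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();
            @Override
            protected void map(LongWritable key, Text value, Context ctx)
                    throws IOException, InterruptedException {
                for (String token : value.toString().split("\\s+")) {
                    if (!token.isEmpty()) {
                        word.set(token);
                        ctx.write(word, ONE);
                    }
                }
            }
        }

        // Reducer: sums the counts emitted for each token.
        public static class SumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) {
                    sum += v.get();
                }
                ctx.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "event count");
            job.setJarByClass(EventCount.class);
            job.setMapperClass(TokenMapper.class);
            job.setCombinerClass(SumReducer.class); // safe: reducer is associative
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }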
Technical Skills
Hadoop/Big Data Technologies: HDFS, MapReduce, HBase, Pig, Hive, Sqoop, Flume, MongoDB, Ca
IDEs
Big Data Analytics
Frameworks
Programming Languages
Databases
Web Servers
Web Technologies
Network Protocols
ETL Tools
Testing
Work Experience
HP (Big Data Services), Roseville, CA
Hadoop Engineer, Jan 2012 - Present
The Hewlett-Packard Company (HP) is an American multinational information
technology company that provides products, technologies, software, solutions, and
services to consumers, small- and medium-sized businesses (SMBs), and large
enterprises, including customers in the government, health, and education sectors.
Environment: Hadoop, MapReduce, HDFS, Hive, Java, SQL, Cloudera Manager, Pig,
Sqoop, Oozie
Responsibilities:
Upgraded the Hadoop cluster from CDH3 to CDH4 and set up High Availability.
Analyzed the data by performing Hive queries and running Pig scripts to
understand user behavior.
Installed Oozie workflow engine to run multiple Hive and Pig jobs.
Developed Hive queries to process the data and generate data cubes for
visualization.
Imported and exported data into HDFS and Hive using Sqoop.
Involved in defining job flows, managing and reviewing log files.
Extracted files from CouchDB through Sqoop, placed them in HDFS, and processed them.
Loaded and transformed large sets of structured, semi-structured, and unstructured
data.
Responsible for managing data coming from different sources.
Supported MapReduce programs running on the cluster.
Involved in loading data from UNIX file system to HDFS.
Installed and configured Hive and wrote Hive UDFs (a minimal sketch follows this list).
Involved in creating Hive tables, loading them with data, and writing Hive queries
that run internally as MapReduce jobs.
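As an illustration of the Hive UDF work above, here is a minimal sketch of a custom UDF in Java; the class name and behavior (NormalizeText, trimming and lower-casing a string) are hypothetical examples, not from the actual project.

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // Hypothetical example UDF: trims and lower-cases a string column.
    public final class NormalizeText extends UDF {
        public Text evaluate(Text input) {
            if (input == null) {
                return null; // Hive passes SQL NULLs through as Java nulls
            }
            return new Text(input.toString().trim().toLowerCase());
        }
    }

Once packaged into a JAR, a UDF like this is registered in Hive with ADD JAR and CREATE TEMPORARY FUNCTION, after which it can be called from HiveQL like any built-in function.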
Responsible for the design and development of the framework.
Set up WebSphere Application Server and used Ant to build and deploy the
application in WebSphere.
Worked on Hibernate.
Environment: Java, WebSphere 3.5, EJB, Servlets, JavaScript, JDBC, SQL, JUnit,
Eclipse IDE, Apache Tomcat 6.
Responsibilities:
Developed Servlets and back-end Java classes using WebSphere Application
Server.
Created complex SQL queries and used JDBC connectivity to access the database (a minimal sketch follows this list).
Involved in the design and coding of the data capture templates, presentation and
component templates.
Part of the team that designed, customized, and implemented metadata search functionality.
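As a sketch of the JDBC access pattern described in the list above: the connection URL, table, and column names are hypothetical, and the code uses try-with-resources (modern Java) for brevity.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class CustomerLookup {
        // Hypothetical connection URL; real settings would come from configuration.
        private static final String URL = "jdbc:oracle:thin:@//dbhost:1521/APPDB";

        public String findCustomerName(int customerId, String user, String password)
                throws SQLException {
            String sql = "SELECT name FROM customers WHERE id = ?";
            try (Connection conn = DriverManager.getConnection(URL, user, password);
                 PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setInt(1, customerId); // bind the parameter instead of concatenating SQL
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString("name") : null;
                }
            }
        }
    }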
Wachovia Bank, NC
Java Developer, Sep 2006 - Oct 2007
Wachovia provided a broad range of retail banking and brokerage, asset and wealth
management, corporate and investment banking products and services to customers,
along with nationwide retail brokerage, mortgage lending, and auto finance
businesses.
Environment: Oracle 11g, Java 1.5, Struts, Servlets, HTML, XML, SQL, J2EE, JUnit,
Tomcat 6.
Responsibilities:
Developed the application using the Struts framework and performed data
validation in form beans and action classes.
Validated the fields of the user registration and login screens by writing
JavaScript validations.
Developed build and deployment scripts using Apache ANT to customize WAR and
EAR files.
Used DAO and JDBC for database access (a DAO sketch follows this list).
Developed stored procedures and triggers using PL/SQL in order to calculate and
update data used by the application.
Involved in post production support and maintenance of the application.
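As a sketch of the DAO-plus-JDBC pattern mentioned in the list above; the interface, entity, and table names are hypothetical examples, not taken from the Wachovia application.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    // Hypothetical DAO interface: callers depend on this, not on JDBC details.
    interface AccountDao {
        double findBalance(String accountNumber) throws SQLException;
    }

    // JDBC-backed implementation; changing the persistence layer touches only this class.
    class JdbcAccountDao implements AccountDao {
        private final DataSource dataSource;

        JdbcAccountDao(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        @Override
        public double findBalance(String accountNumber) throws SQLException {
            String sql = "SELECT balance FROM accounts WHERE account_number = ?";
            try (Connection conn = dataSource.getConnection();
                 PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, accountNumber);
                try (ResultSet rs = ps.executeQuery()) {
                    if (!rs.next()) {
                        throw new SQLException("No such account: " + accountNumber);
                    }
                    return rs.getDouble("balance");
                }
            }
        }
    }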
References: