Hi,
I am Suresh from IPivot. Please find the job description below for your reference. If you are interested, please reply with your updated resume.
Job Title: Data Engineer (Databricks, Python, SQL, PySpark)
Location: Jersey City, NJ (Hybrid)
Job Type: Contract / Full Time
Note: Only independent W2 consultants or visa transfer candidates will be considered.
Job Summary:
We are seeking a highly skilled Data Engineer with hands-on experience in Databricks, Python, SQL, and PySpark to join our growing data engineering team.
In this role, you'll build scalable data pipelines, work on big data processing, and collaborate across teams to deliver reliable, analytics-ready data.
The ideal candidate has strong experience with cloud data platforms and is passionate about driving data solutions using modern tools and technologies.
Required Skills:
3+ years of experience in data engineering or a related role.
Strong hands-on experience with Databricks and Apache Spark (PySpark).
Proficiency in Python and SQL for data manipulation and scripting.
Experience working with large datasets and building scalable data processing workflows.
Familiarity with cloud platforms (AWS, Azure, or GCP), especially cloud-native data solutions.
Understanding of data modeling, warehousing concepts, and performance tuning.
Experience with version control (Git) and CI/CD for data pipelines.
Preferred Qualifications:
Experience with Delta Lake and the Lakehouse architecture.
Exposure to orchestration and transformation tools such as Airflow, dbt, or Azure Data Factory.
Experience working in Agile/Scrum environments.
Knowledge of real-time data processing and streaming (e.g., Kafka, Structured Streaming) is a plus.
Certification in Databricks or a relevant cloud technology.
Key Responsibilities:
Design, build, and maintain large-scale data pipelines on Databricks using PySpark and SQL.
Develop efficient, reliable, and scalable ETL/ELT workflows to ingest and transform structured and unstructured data.
Collaborate with data scientists, analysts, and product teams to understand data needs and deliver actionable datasets.
Optimize data performance and resource usage within Databricks clusters.
Automate data validation and monitoring to ensure pipeline reliability and data quality.
Write clean, modular, and testable code in Python.
Implement best practices for data security, governance, and compliance.
Document data workflows, architecture, and technical decisions.
Thanks and Regards,
Suresh Durgam
Senior Recruiter
M: (732) 813-4401
E: durgams@ipivot.io