We look for people who are passionate about solving business problems through innovation and engineering practices. You will be required to apply your depth of knowledge and expertise to all aspects of the software development lifecycle, and to partner continuously with your many stakeholders to stay focused on common goals.
J.P. Morgan Asset Management is a global institution that prides itself on the power of scale – offering first-class financial products and services to its clients (and its clients’ clients) across the spectrum of client needs.
The Business Intelligence and Analytics team’s mission is to leverage large datasets to power cutting-edge applications and analytical capabilities within the Asset Management Distribution Business. As a Data Engineer, you will work at the intersection of business analytics, data warehousing, and software engineering, and will be responsible for building the foundational data layer critical to the Business Intelligence and Analytics platform’s success.
Build large-scale batch, ETL, and real-time data pipelines using cloud and on-premises data technologies such as Redshift, Python, Spark, PySpark, and Apache Kafka
Define and promote best practices for data processing, data modeling, and warehouse development across our team and group
Develop strategy to provide proactive solutions and enable stakeholders to extract insights and value from data
Understand end-to-end data interactions and dependencies across complex data pipelines and transformations, and how they impact business decisions
Hands-on experience building a data warehouse and data pipelines using Java, Python or Scala in a data intensive engineering role
Hands-on experience with data warehouse / data lake architectures based on Hadoop, Redshift or Snowflake
Experience with workflow orchestration tools such as Apache Airflow or Autosys
Familiarity with data transformation and collection tools such as Pentaho or Informatica
Experience with stream processing platforms such as Kafka or Dataflow
Knowledge of serialization and columnar data formats such as JSON, XML, Avro, and Parquet
Experience with container technologies such as Docker and Kubernetes
Experience with CI/CD systems (e.g., Jenkins) and automation/DevOps best practices
Familiarity with AWS ecosystem including S3, Glue, Redshift, Kinesis, EMR, EC2, SQS
Familiarity with microservices stacks based on AWS Lambda, Elasticsearch, Spring Boot, and Node.js
BS/BA degree or equivalent experience in computer science or engineering
Advanced-level skills in SQL, data integration, data modeling, and data architecture
Expert-level skills in Python, its standard library, and its package ecosystem
JPMorgan Chase & Co., one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management.
We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. In accordance with applicable law, we make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as any mental health or physical disability needs.
Equal Opportunity Employer/Disability/Veterans