Keyword,Location,Job_title,Job_link,Company,Company_link,Job_location,Post_time,Applicants_count,Job_description,Seniority_level,Employment_type,Job_function,Industries Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-american-express-3488681090?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=V%2BDOv9jRRJJdwSrQWy5OZQ%3D%3D&position=1&pageNum=0&trk=public_jobs_jserp-result_search-card," American Express ",https://www.linkedin.com/company/american-express?trk=public_jobs_topcard-org-name," Phoenix, AZ "," 14 hours ago "," Over 200 applicants ","You Lead the Way. We’ve Got Your Back. With the right backing, people and businesses have the power to progress in incredible ways. When you join Team Amex, you become part of a global and diverse community of colleagues with an unwavering commitment to back our customers, communities and each other. Here, you’ll learn and grow as we help you create a career journey that’s unique and meaningful to you with benefits, programs, and flexibility that support you personally and professionally. At American Express, you’ll be recognized for your contributions, leadership, and impact—every colleague has the opportunity to share in the company’s success. Together, we’ll win as a team, striving to uphold our company values and powerful backing promise to provide the world’s best customer experience every day. And we’ll do it with the utmost integrity, and in an environment where everyone is seen, heard and feels like they belong. Join Team Amex and let's lead the way together. As part of our diverse tech team, you can architect, code and ship software that makes us an essential part of our customers’ digital lives. Here, you can work alongside talented engineers in an open, supportive, inclusive environment where your voice is valued, and you make your own decisions on what tech to use to solve challenging problems.
Amex offers a range of opportunities to work with the latest technologies and encourages you to back the broader engineering community through open source. And because we understand the importance of keeping your skills fresh and relevant, we give you dedicated time to invest in your professional development. Find your place in technology on #TeamAmex. How will you make an impact in this role? You will collect, process, and perform statistical analysis of data and translate the numbers into an easy-to-understand format. By identifying trends and making predictions about the future, you will help teams at Amex make decisions on how to improve the efficiency of their products. You will be responsible for regulating, normalizing, and calibrating data that can be used alone or with other numbers and use charts, graphs, tables, and graphics to explain what the data means across specific amounts of time or various groups. Support and partner with teams across the enterprise. Support and contribute to data collection efforts, as needed. Verify data quality to ensure accurate analysis and reporting. Help identify the business data needed to produce the most useful insights and future analytics. Utilize data to make actionable recommendations at all levels. Communicate insights and recommendations effectively to the broader team. Monitor data management processes to ensure data quality and consistency. Monitor system performance, data integrity and usage metrics. Contribute to data dictionary, standards, training, and ongoing updates Minimum Qualifications Bachelor's or Graduate's Degree in business, computer science, engineering, or information systems or equivalent experience. Comfortable with statistics, datasets, and machine learning exercises. Strong critical thinking skills and attention to detail. Ability to assist with problem solving and debugging. 
Understanding of computer hardware and common operating systems; technology product management; cloud technologies Preferred Qualifications Proficiency in SQL, and PowerShell Understanding of Bash Salary Range: $59,000.00 to $105,000.00 annually + bonus + benefits The above represents the expected salary range for this job requisition. Ultimately, in determining your pay, we'll consider your location, experience, and other job-related factors. American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. We back our colleagues with the support they need to thrive, professionally and personally. That's why we have Amex Flex, our enterprise working model that provides greater flexibility to colleagues while ensuring we preserve the important aspects of our unique in-person culture. Depending on role and business needs, colleagues will either work onsite, in a hybrid model (combination of in-office and virtual days) or fully virtually. US Job Seekers/Employees - Click here to view the “Know Your Rights” poster and supplement and the Pay Transparency Policy Statement. 
If the links do not work, please copy and paste the following URLs in a new browser window: https://www.dol.gov/agencies/ofccp/posters to access the three posters."," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,"Data Engineer, Analytics (Generalist)",https://www.linkedin.com/jobs/view/data-engineer-analytics-generalist-at-instagram-3512577161?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=ljL2tP4gcgL1inegyc90IA%3D%3D&position=2&pageNum=0&trk=public_jobs_jserp-result_search-card," Instagram ",https://www.linkedin.com/company/instagram?trk=public_jobs_topcard-org-name," United States "," 2 weeks ago "," Over 200 applicants ","Every month, billions of people leverage Meta products to connect with friends and loved ones from across the world. On the Data Engineering Team, our mission is to support these products both internally and externally by delivering the best data foundation that drives impact through informed decision making. As a highly collaborative organization, our data engineers work cross-functionally with software engineering, data science, and product management to optimize growth, strategy, and experience for our 3 billion plus users, as well as our internal employee community. We are looking for a technical leader in our Data Engineering team to work closely with Product Managers, Data Scientists and Software Engineers to support building out a great platform for the future of computing. In this role, you will see a direct correlation between your work, company growth, and user satisfaction. 
You’ll work with some of the brightest minds in the industry, work with one of the richest data sets in the world, use cutting edge technology, and see your efforts affect products and people on a regular basis. The ideal candidate will have strong data infrastructure and data architecture skills as well as experience in areas such as governing company wide data marts, enabling security and privacy data solutions, and full stack experience with analytical technologies. Candidates should also have a proven track record of leading and scaling efforts related to end-to-end analytics systems, strong operational skills to drive efficiency and speed, strong project management leadership, and a strong vision for how data can proactively improve companies. As we continue to expand and create, we have a lot of exciting work ahead of us! Data Engineer, Analytics (Generalist) Responsibilities: Proactively drive the vision for data foundation and analytics to accelerate building and improvement of cross platform components across Instagram, and define and execute on plan to achieve that vision. Conceptualize and own the data architecture for multiple large-scale projects, while evaluating design and operational cost-benefit tradeoffs within systems. Create and contribute to frameworks that improve the efficacy of logging data, while working with data infrastructure to triage issues and resolve. Build cross-functional relationships with Data Scientists, Product Managers and Software Engineers to understand data needs and deliver on those needs. Define and manage SLA for all data sets in allocated areas of ownership. Determine and implement the security model based on privacy requirements, confirm safeguards are followed, address data quality issues, and evolve governance processes within allocated areas of ownership. Design, build, and launch collections of sophisticated data models and visualizations that support use cases across different products or domains.
Solve our most challenging data integrations problems, utilizing optimal ETL patterns, frameworks, query techniques, sourcing from structured and unstructured data sources. Assist in owning existing processes running in production, optimizing complex code through advanced algorithmic concepts. Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts. Influence product and cross-functional teams to identify data opportunities to drive impact. Mentor team members by giving/receiving actionable feedback. Minimum Qualifications: Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience. 12+ years experience in the data warehouse space. 12+ years experience in custom ETL design, implementation and maintenance. 12+ years experience with object-oriented programming languages. 12+ years experience with schema design and dimensional data modeling. 12+ years experience in writing SQL statements. Experience analyzing data to identify deliverables, gaps and inconsistencies. Experience managing and communicating data warehouse plans to internal clients. Preferred Qualifications: BS/BA in Technical Field, Computer Science or Mathematics. Experience working with either a MapReduce or an MPP system. Knowledge and practical application of Python. Experience working autonomously in global teams. Experience influencing product decisions with data. Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics. 
You may view our Equal Employment Opportunity notice here. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. We may use your information to maintain the safety and security of Meta, its employees, and others as required or permitted by law. You may view Meta's Pay Transparency Policy, Equal Employment Opportunity is the Law notice, and Notice to Applicants for Employment and Employees by clicking on their corresponding links. Additionally, Meta participates in the E-Verify program in certain locations, as required by law "," Not Applicable "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Junior Data Engineer,https://www.linkedin.com/jobs/view/junior-data-engineer-at-evolution-3499091278?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=llGnwrenu6OeqokmAlBsqg%3D%3D&position=5&pageNum=0&trk=public_jobs_jserp-result_search-card," Evolution ",https://uk.linkedin.com/company/evolution-recruitment?trk=public_jobs_topcard-org-name," Durham, NC "," 2 weeks ago "," Over 200 applicants ","Junior Data Engineer, $100,000-$120,000, Durham (Hybrid) Are you looking to work for a company that uses tech for good? If so, keep reading... We're looking for enthusiastic junior engineers in the Durham area to join a philanthropic company that gives back to communities! This independent, non-profit research institution is making strides in its field. You'll be working within the Data Warehouse and BI space to support their community. Day to day you'll be developing and modifying Data Warehouse Integrations, data models, pipelines etc. You'll also be working closely with other teams in the business such as software engineers, database developers and product owners. Day to day you'll be: Turning business requirements from product owners into technical code Identify, fix and document bugs, bottlenecks in workflows and pipelines. Explore, learn the latest Azure data warehouse technologies to add to current competencies. Supporting Data Warehousing solutions in a fast-paced, dynamic environment Supporting Business Intelligence Applications (Power BI, Cognos) as required. Skills: Experience in data structures, data modeling, data warehousing and data pipeline development on Azure or equivalent major cloud platforms. Fluency in SQL for data exploration and analysis.
Proven success in problem-solving, multi-tasking and managing multiple priorities. If you would like to hear more about this role, please contact Aimee Clemson at Evolution Recruitment 919 893 4419"," Entry level "," Full-time "," Information Technology "," Non-profit Organizations " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-dte-energy-3495681824?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=k7ifEPCbhfzctRlPlDtpFg%3D%3D&position=6&pageNum=0&trk=public_jobs_jserp-result_search-card," DTE Energy ",https://www.linkedin.com/company/dte-energy?trk=public_jobs_topcard-org-name," Detroit, MI "," 2 weeks ago "," Over 200 applicants ","DTE is one of the nation’s largest diversified energy companies. Our electric and gas companies have fueled our customer’s homes and Michigan’s progress for more than a century. And as Michigan’s largest source of renewable energy, we’re creating a cleaner, healthier environment to power our future. We’re also serving communities beyond Michigan, where our affiliated businesses offer renewable energy, emission control technologies, and energy services to industries in 19 states. But we’re more than a leading energy company... and working at DTE is more than just a job. At DTE, we take great care of each other and our customers, and we use our energy to be a force for growth and prosperity in our communities. When you join us, you’ll be part of a team that welcomes, recognizes, and celebrates differences and values everyone’s health, safety, and wellbeing. Are you ready to make that kind of difference? Bring your energy to DTE. Together, we can achieve great things. Testing Required: Not Applicable On-Site Role: Must be available to work on-site at this assigned work location. Emergency Response: Yes – Must be available to perform a primary assignment in support of DTE’s emergency response to storms or other events that impact service to our customers. 
Job Summary This position supports the Enterprise Security Transformation practice and is focused on engineering secure methods of utilizing cloud infrastructure services and software to improve the overall security posture of our company. In this role, you will have the opportunity to design and implement secure cloud infrastructure and systems for DTE. Your experience with the cloud security aspects of cloud data processing and storage will be used to create roadmaps for cloud integration, utilization, monitoring, and maintenance. Alongside the team of security consultants and leadership at DTE, you’ll collaborate with colleagues and client team members supporting network and architecture assessment, threat modeling, vulnerability assessment, and security operations. Key Accountabilities Facilitates data engineering projects and collaborates with stakeholders to formulate end-to-end solutions, including data structure design to feed downstream analytics, machine learning modeling, feature engineering, prototype development, and reporting. Works with business units, data architects, cloud engineers, and data scientists to identify relevant data, analyze data quality, design data requirements, and develop prototypes for proof-of-concepts. Develops data sets and automated pipelines that support data requirements for process improvement and operational efficiency metrics. Designs and implements data process pipelines in on-prem or Cloud platforms required for optimal extraction, transformation, and loading of data from multiple data sources. Builds reporting and visualizations that utilize data pipelines to provide actionable insights into compliance rates, operational efficiency, and other key business performance metrics. Designs and implements effective automation and testing strategies for data pipelines and processing methods.
Deploys and automates Machine Learning Models in a data environment (e.g., SQL server, Cloud platform, on-prem servers and machines), including workflow orchestration, scheduling and advanced data processing implementation, and data delivery tools. Minimum Education & Experience Requirements This is a dual-track base requirement job; education and experience requirements can be satisfied through one of the following two options: Bachelor’s degree with emphasis on coursework of a quantitative nature (e.g., Computer Science, Mathematics, Physics, Data Science, Econometrics, etc.) and 3 years of experience working in a data engineering, data analytical, or computer programming function; OR Master’s degree with emphasis on coursework of a quantitative nature (e.g., Computer Science, Mathematics, Physics, Data Science, Econometrics, etc.) and 1 year of experience working in a data engineering, data analytical, or computer programming function. Preferred Other Qualifications At least 3 years of experience with Azure. Business domain knowledge SQL Database design and query optimization experience Advanced business acumen Experience with utility/energy industry Other Requirements Intermediate-level programming skills in structured query language (e.g., SQL) and modern programming language (e.g., C#, Python, R, Java, etc.) Experience in agile development and working with CI/CD pipelines. Intermediate-level proficiency in business intelligence tools and data blending tools (e.g., Microsoft Power Platform, Power BI, etc.) Proficiency with Big Data platforms. Ability to work overtime during peak periods. Dev/Ops engineer who is an expert in technology. Minimum of 2 + years of experience with API ingestion, file ingestion, batch transformation, metadata management, monitoring, pub/sub consumption, RDBMS ingestion and real-time transformation. 
Minimum of 2+ years using the following technology or equivalent: Octopus, Azure DevOps, Azure functions, Python, Azure Data Lake Storage (Gen 2), Azure Monitor, Azure Table Storage, Azure Databricks, Azure SQL Database, Azure Search, Azure Cosmo Data Store, and Azure Signa Additional Information Incumbents may engage in all or some combination of the activities and accountabilities and utilize a variety of the competencies cited in this description depending upon the organization and role to which they are assigned. This description is intended to describe the general nature and level of work performed by incumbents in this job. It is not intended as an all-inclusive list of accountabilities or responsibilities, nor is it intended to limit the rights of supervisors or management representatives to assign, direct and control the work of employees under their supervision. At DTE Energy, we are committed to providing an inclusive workplace where everyone feels welcome and a sense of belonging. We seek individuals with a heart for service, a passion to help our communities prosper, and ideas to help shape the future of energy. 
We are proud to be an equal opportunity employer that considers all qualified applicants without regard to race, color, sex, sexual orientation, gender identity, age, religion, disability, national origin, citizenship, height, weight, genetic information, marital status, pregnancy, protected veteran status or any other status protected by law."," Associate "," Full-time "," Information Technology "," Utilities " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-adobe-3459050990?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=kQTjSy1ypkm9dWTEmYd1gA%3D%3D&position=7&pageNum=0&trk=public_jobs_jserp-result_search-card," DTE Energy ",https://www.linkedin.com/company/dte-energy?trk=public_jobs_topcard-org-name," Detroit, MI "," 2 weeks ago "," Over 200 applicants "," DTE is one of the nation’s largest diversified energy companies. Our electric and gas companies have fueled our customer’s homes and Michigan’s progress for more than a century. And as Michigan’s largest source of renewable energy, we’re creating a cleaner, healthier environment to power our future. We’re also serving communities beyond Michigan, where our affiliated businesses offer renewable energy, emission control technologies, and energy services to industries in 19 states.But we’re more than a leading energy company... and working at DTE is more than just a job. At DTE, we take great care of each other and our customers, and we use our energy to be a force for growth and prosperity in our communities. When you join us, you’ll be part of a team that welcomes, recognizes, and celebrates differences and values everyone’s health, safety, and wellbeing. Are you ready to make that kind of difference? Bring your energy to DTE. 
Together, we can achieve great things. Testing Required: Not Applicable. On-Site Role: Must be available to work on-site at this assigned work location. Emergency Response: Yes – Must be available to perform a primary assignment in support of DTE’s emergency response to storms or other events that impact service to our customers. Job Summary: This position supports the Enterprise Security Transformation practice and is focused on engineering secure methods of utilizing cloud infrastructure services and software to improve the overall security posture of our company. In this role, you will have the opportunity to design and implement secure cloud infrastructure and systems for DTE. Your experience with the cloud security aspects of cloud data processing and storage will be used to create roadmaps for cloud integration, utilization, monitoring, and maintenance. Alongside the team of security consultants and leadership at DTE, you’ll collaborate with colleagues and client team members supporting network and architecture assessment, threat modeling, vulnerability assessment, and security operations. Key Accountabilities: Facilitates data engineering projects and collaborates with stakeholders to formulate end-to-end solutions, including data structure design to feed downstream analytics, machine learning modeling, feature engineering, prototype development, and reporting. Works with business units, data architects, cloud engineers, and data scientists to identify relevant data, analyze data quality, design data requirements, and develop prototypes for proof-of-concepts. Develops data sets and automated pipelines that support data requirements for process improvement and operational efficiency metrics. Designs and implements data process pipelines in on-prem or Cloud platforms required for optimal extraction, transformation, and loading of data from multiple data sources. Builds reporting and visualizations that utilize the data pipeline to provide actionable insights into compliance 
rates, operational efficiency, and other key business performance metrics. Designs and implements effective automation and testing strategies for data pipelines and processing methods. Deploys and automates Machine Learning Models in a data environment (e.g., SQL server, Cloud platform, on-prem servers and machines), including workflow orchestration, scheduling and advanced data processing implementation, and data delivery tools. Minimum Education & Experience Requirements: This is a dual-track base requirement job; education and experience requirements can be satisfied through one of the following two options: Bachelor’s degree with emphasis on coursework of a quantitative nature (e.g., Computer Science, Mathematics, Physics, Data Science, Econometrics, etc.) and 3 years of experience working in a data engineering, data analytical, or computer programming function; OR Master’s degree with emphasis on coursework of a quantitative nature (e.g., Computer Science, Mathematics, Physics, Data Science, Econometrics, etc.) and 1 year of experience working in a data engineering, data analytical, or computer programming function. 
Preferred / Other Qualifications: At least 3 years of experience with Azure. Business domain knowledge. SQL Database design and query optimization experience. Advanced business acumen. Experience with utility/energy industry. Other Requirements: Intermediate-level programming skills in structured query language (e.g., SQL) and a modern programming language (e.g., C#, Python, R, Java, etc.). Experience in agile development and working with CI/CD pipelines. Intermediate-level proficiency in business intelligence tools and data blending tools (e.g., Microsoft Power Platform, Power BI, etc.). Proficiency with Big Data platforms. Ability to work overtime during peak periods. Dev/Ops engineer who is an expert in technology. Minimum of 2+ years of experience with API ingestion, file ingestion, batch transformation, metadata management, monitoring, pub/sub consumption, RDBMS ingestion and real-time transformation. Minimum of 2+ years using the following technology or equivalent: Octopus, Azure DevOps, Azure Functions, Python, Azure Data Lake Storage (Gen 2), Azure Monitor, Azure Table Storage, Azure Databricks, Azure SQL Database, Azure Search, Azure Cosmo Data Store, and Azure Signa. Additional Information: Incumbents may engage in all or some combination of the activities and accountabilities and utilize a variety of the competencies cited in this description depending upon the organization and role to which they are assigned. This description is intended to describe the general nature and level of work performed by incumbents in this job. It is not intended as an all-inclusive list of accountabilities or responsibilities, nor is it intended to limit the rights of supervisors or management representatives to assign, direct and control the work of employees under their supervision. At DTE Energy, we are committed to providing an inclusive workplace where everyone feels welcome and a sense of belonging. 
We seek individuals with a heart for service, a passion to help our communities prosper, and ideas to help shape the future of energy. We are proud to be an equal opportunity employer that considers all qualified applicants without regard to race, color, sex, sexual orientation, gender identity, age, religion, disability, national origin, citizenship, height, weight, genetic information, marital status, pregnancy, protected veteran status or any other status protected by law. "," Associate "," Full-time "," Information Technology "," Utilities " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-oshi-health-3518400111?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=6Hd3ESO7vG7h%2FYx3Uc0DkQ%3D%3D&position=8&pageNum=0&trk=public_jobs_jserp-result_search-card," Oshi Health ",https://www.linkedin.com/company/oshihealth?trk=public_jobs_topcard-org-name," Chicago, IL "," 1 week ago "," Be among the first 25 applicants ","Do you love to work with data, finding ways to make it reportable, and building models that will add clinical and commercial value for the future? Do you want to bring your skills and experience to a growth stage engineering team, and help set us up for smart expansion? Are you excited by the prospect of having a high-visibility high-impact role in a fast-moving startup? Are you passionate about healthcare, and looking to create a revolutionary new approach to digestive healthcare with a radically better patient experience? If so, you could be a perfect fit for our team of like-minded professionals who share a common mission and passion for helping others and a desire to build a great company. 
Oshi Health is revolutionizing GI care with a virtual clinic that provides easy, convenient access to a multidisciplinary care team including a GI Physician, Registered Dietician, Mental Health Professional, and Health Coach that takes a whole-person approach to diagnosing, managing and treating digestive health conditions. Our care is built on the latest evidence-based protocols and is delivered virtually through an app, secure messaging and telehealth visits with the care team. NOTE: Oshi is a fully remote company, with team members all over the US. What You’ll Do: This role will be a perfect fit if you enjoy learning the entire stack and taking on interdisciplinary challenges. A primary focus will be building out a data engineering program, including ETL, data governance, sanitization, and data operations. This part of your responsibilities will be a balance of executing data operations and building automation to make those operations scalable. You will also have the opportunity to join engineers on the frontend (React Native and React.js), backend (Node.js Lambdas) and Salesforce to help build the Oshi platform. Experience with any of these technologies is a plus, but more important is an enthusiasm to adapt and learn the ones that are new to you. What you’ll do: build the Oshi data program Implement and maintain data pipelines using Stitch, Databricks, and other scripting as needed to feed a PostgreSQL schema supporting Tableau reporting Support users of Tableau by updating data sources or modifying inbound inputs as needed to deliver critical BI reports Own the Oshi data model, ensuring that new features built and new technologies adopted serve the needs of the clinical, commercial, product, and engineering teams Manage ETL of client eligibility files and other data, to make them available for Oshi use in a secure and timely manner. 
Wherever possible, replace bespoke processes with automation. Once you have an understanding of Oshi’s requirements, design and implement a data strategy (with your recommended approach and products) to meet the needs of Oshi’s analytics, commercial, and clinical business lines Your work will also include: AWS maintenance and administration Writing technical documentation to outline designs for forthcoming features, outlining the implementation across all technology layers Meeting with colleagues in Strategy, Product, and Clinical to support their needs from the Engineering group. Production support responsibilities (shared with the entire engineering team) responding to alerts in Datadog, reviewing and troubleshooting issues Our tech stack: Mobile Platforms Supported: iOS & Android Cross-Platform Mobile Language: React Native Other Languages: React-js, HTML, CSS, Java (Salesforce Apex), Node.js (Lambda) Systems: Salesforce, AWS Amplify / Cognito / Lambda Your Profile: A minimum of 3+ years of professional experience Bachelor's Degree or equivalent experience Good interpersonal and relationship skills that include a positive attitude Self-starter who can find a way forward even when the path is unclear. Team player AND a leader simultaneously. What You’ll Bring to the Team: Passionate about creating value that changes people's lives Make low-level decisions quickly while being patient and methodical with high-level ones Are curious and passionate about digging into new technologies with a knack for picking them up quickly Adept at prioritizing value and shipping complex products while coordinating across multiple teams Love working with a diverse set of engineers, product managers, designers, and business partners Strive to excel, innovate and take pride in your work Work well with other leaders Are a positive culture driver Excited about working in a fast-paced, startup culture Experience in a regulated industry (healthcare, finance, etc.) 
a plus and perks: We’re revolutionizing GI care — and our employees are driving the change. We’re a hard-working and fun-loving team, committed to always learning and improving, and dedicated to doing the right thing for our members. To achieve our mission, we invest in our people: We make healthcare more equitable and accessible: Mission-driven organization focused on innovative digestive care Thrive on diversity with monthly DEIB discussions, activities, and more Virtual-first culture: Work from home anywhere in the US Live our core values: Own the outcome, Do the right thing, Be direct and open, Learn and improve, Team, Thrive on diversity We take care of our people: Competitive compensation and meaningful equity Employer-sponsored medical, dental and vision plans Access to a “Life Concierge” through Overalls, because we know life happens Tailored professional development opportunities to learn and grow We rest, recharge and re-energize: Unlimited paid time off — take what you need, when you need it 13 paid company holidays to power down Team events, such as virtual cooking classes, games, and more Recognition of professional and personal accomplishments Oshi Health’s Core Values: Go For It Do the Right Thing Be Direct & Open Learn & Improve TEAM - Together Everyone Achieves More Oshi Health is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. 
Powered by JazzHR FRVWJuzRKn"," Mid-Senior level "," Full-time "," Information Technology "," Internet Publishing " Data Engineer,United States,Junior Data Engineer,https://www.linkedin.com/jobs/view/junior-data-engineer-at-open-road-media-3499519671?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=FgpG90oS9M2A546exyTQPg%3D%3D&position=9&pageNum=0&trk=public_jobs_jserp-result_search-card," Open Road Media ",https://www.linkedin.com/company/open-road-media?trk=public_jobs_topcard-org-name," New York, NY "," 1 month ago "," 37 applicants ","About Open Road Integrated Media: Open Road Integrated Media is a prestige content brand delivering digital experiences that entertain and inform readers around the world. Open Road was founded in 2009 with the belief that great marketing and great content are the engines of growth for underserved authors and books. This philosophy is at the core of everything that we do. Open Road revolutionizes how publishers service authors, agents, and readers. Summary: The Data Engineering and Analytics team is seeking a Junior Data Engineer. This role will work on various types of tasks and projects. The members of the Data Engineering and Analytics team use rigorous analytics to generate insights that inform product, marketing, and business decisions across the company. At the same time, we build systems, infrastructure, and data products for collecting, storing, and visualizing data to foster data democracy in the company. We work in Python and SQL, with technologies like Airflow, Django, Tableau, and Spark. Essential Functions: ETL: Design and build ETL pipelines using Airflow to collect data from different sources into data warehouses Build pipeline integration with our various data products for long-running processes Identify opportunities for optimizing relational data storage through design, query optimization, indices, replicas, partitioning, etc. 
Automation: Build ad-hoc scripts or recurring processes to fully/partially automate labor-intensive workflows of other departments (e.g. Production, Marketing) Requirements: 1-2 years of hands-on experience in ETL design, implementation and maintenance Experience in schema design and data modeling Experience in writing complex SQL queries to extract data from relational databases (e.g. MySQL, Redshift) Experience in version control systems such as Git Experience in the following tools/technology is a plus: Airflow, Django, Git, Docker Comfortable with extensive Python coding Willing to work with a codebase that is not originally written by you Good communication skills; understand that being an effective engineer is about communicating with people as much as it is about writing code Willing to learn any language/tools/frameworks that are necessary to get the job done Compensation: Salary will be commensurate with qualifications and experience. The salary range for this position is $55,000.00 - $75,000.00."," Entry level "," Full-time "," Information Technology "," Photography " Data Engineer,United States,"Data Engineer, Analytics (Generalist)",https://www.linkedin.com/jobs/view/data-engineer-analytics-generalist-at-instagram-3512575736?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=vv9MIxWYfL2TuGRwzcGL0A%3D%3D&position=10&pageNum=0&trk=public_jobs_jserp-result_search-card," Instagram ",https://www.linkedin.com/company/instagram?trk=public_jobs_topcard-org-name," New York, NY "," 2 weeks ago "," 88 applicants ","Every month, billions of people leverage Meta products to connect with friends and loved ones from across the world. On the Data Engineering Team, our mission is to support these products both internally and externally by delivering the best data foundation that drives impact through informed decision making. 
As a highly collaborative organization, our data engineers work cross-functionally with software engineering, data science, and product management to optimize growth, strategy, and experience for our 3 billion plus users, as well as our internal employee community. We are looking for a technical leader in our Data Engineering team to work closely with Product Managers, Data Scientists and Software Engineers to support building out a great platform for the future of computing. In this role, you will see a direct correlation between your work, company growth, and user satisfaction. You’ll work with some of the brightest minds in the industry, work with one of the richest data sets in the world, use cutting edge technology, and see your efforts affect products and people on a regular basis. The ideal candidate will have strong data infrastructure and data architecture skills as well as experience in areas such as governing company-wide data marts, enabling security and privacy data solutions, and full stack experience with analytical technologies. Candidates should also have a proven track record of leading and scaling efforts related to end-to-end analytics systems, strong operational skills to drive efficiency and speed, strong project management leadership, and a strong vision for how data can proactively improve companies. As we continue to expand and create, we have a lot of exciting work ahead of us! Data Engineer, Analytics (Generalist) Responsibilities: Proactively drive the vision for data foundation and analytics to accelerate building and improvement of cross platform components across Instagram, and define and execute on a plan to achieve that vision. Conceptualize and own the data architecture for multiple large-scale projects, while evaluating design and operational cost-benefit tradeoffs within systems. Create and contribute to frameworks that improve the efficacy of logging data, while working with data infrastructure to triage and resolve issues. 
Build cross-functional relationships with Data Scientists, Product Managers and Software Engineers to understand data needs and deliver on those needs. Define and manage SLAs for all data sets in allocated areas of ownership. Determine and implement the security model based on privacy requirements, confirm safeguards are followed, address data quality issues, and evolve governance processes within allocated areas of ownership. Design, build, and launch collections of sophisticated data models and visualizations that support use cases across different products or domains. Solve our most challenging data integration problems, utilizing optimal ETL patterns, frameworks, query techniques, sourcing from structured and unstructured data sources. Assist in owning existing processes running in production, optimizing complex code through advanced algorithmic concepts. Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts. Influence product and cross-functional teams to identify data opportunities to drive impact. Mentor team members by giving/receiving actionable feedback. Minimum Qualifications: Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience. 12+ years experience in the data warehouse space. 12+ years experience in custom ETL design, implementation and maintenance. 12+ years experience with object-oriented programming languages. 12+ years experience with schema design and dimensional data modeling. 12+ years experience in writing SQL statements. Experience analyzing data to identify deliverables, gaps and inconsistencies. Experience managing and communicating data warehouse plans to internal clients. Preferred Qualifications: BS/BA in Technical Field, Computer Science or Mathematics. Experience working with either a MapReduce or an MPP system. Knowledge and practical application of Python. Experience working autonomously in global teams. 
Experience influencing product decisions with data. Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics. You may view our Equal Employment Opportunity notice here. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. We may use your information to maintain the safety and security of Meta, its employees, and others as required or permitted by law. You may view Meta's Pay Transparency Policy, Equal Employment Opportunity is the Law notice, and Notice to Applicants for Employment and Employees by clicking on their corresponding links. Additionally, Meta participates in the E-Verify program in certain locations, as required by law "," Not Applicable "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-oshi-health-3488396292?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=TzrAJAQ3mwaa7xEtrjhaoA%3D%3D&position=11&pageNum=0&trk=public_jobs_jserp-result_search-card," Oshi Health ",https://www.linkedin.com/company/oshihealth?trk=public_jobs_topcard-org-name," Boston, MA "," 3 weeks ago "," Be among the first 25 applicants ","Do you love to work with data, finding ways to make it reportable, and building models that will add clinical and commercial value for the future? Do you want to bring your skills and experience to a growth stage engineering team, and help set us up for smart expansion? 
Are you excited by the prospect of having a high-visibility high-impact role in a fast-moving startup? Are you passionate about healthcare, and looking to create a revolutionary new approach to digestive healthcare with a radically better patient experience? If so, you could be a perfect fit for our team of like-minded professionals who share a common mission and passion for helping others and a desire to build a great company. Oshi Health is revolutionizing GI care with a virtual clinic that provides easy, convenient access to a multidisciplinary care team including a GI Physician, Registered Dietician, Mental Health Professional, and Health Coach that takes a whole-person approach to diagnosing, managing and treating digestive health conditions. Our care is built on the latest evidence-based protocols and is delivered virtually through an app, secure messaging and telehealth visits with the care team. NOTE: Oshi is a fully remote company, with team members all over the US. What You’ll Do: This role will be a perfect fit if you enjoy learning the entire stack and taking on interdisciplinary challenges. A primary focus will be building out a data engineering program, including ETL, data governance, sanitization, and data operations. This part of your responsibilities will be a balance of executing data operations and building automation to make those operations scalable. You will also have the opportunity to join engineers on the frontend (React Native and React.js), backend (Node.js Lambdas) and Salesforce to help build the Oshi platform. Experience with any of these technologies is a plus, but more important is an enthusiasm to adapt and learn the ones that are new to you. 
What you’ll do: build the Oshi data program Implement and maintain data pipelines using Stitch, Databricks, and other scripting as needed to feed a PostgreSQL schema supporting Tableau reporting Support users of Tableau by updating data sources or modifying inbound inputs as needed to deliver critical BI reports Own the Oshi data model, ensuring that new features built and new technologies adopted serve the needs of the clinical, commercial, product, and engineering teams Manage ETL of client eligibility files and other data, to make them available for Oshi use in a secure and timely manner. Wherever possible, replace bespoke processes with automation. Once you have an understanding of Oshi’s requirements, design and implement a data strategy (with your recommended approach and products) to meet the needs of Oshi’s analytics, commercial, and clinical business lines Your work will also include: AWS maintenance and administration Writing technical documentation to outline designs for forthcoming features, outlining the implementation across all technology layers Meeting with colleagues in Strategy, Product, and Clinical to support their needs from the Engineering group. Production support responsibilities (shared with the entire engineering team) responding to alerts in Datadog, reviewing and troubleshooting issues Our tech stack: Mobile Platforms Supported: iOS & Android Cross-Platform Mobile Language: React Native Other Languages: React-js, HTML, CSS, Java (Salesforce Apex), Node.js (Lambda) Systems: Salesforce, AWS Amplify / Cognito / Lambda Your Profile: A minimum of 3+ years of professional experience Bachelor's Degree or equivalent experience Good interpersonal and relationship skills that include a positive attitude Self-starter who can find a way forward even when the path is unclear. Team player AND a leader simultaneously. 
What You’ll Bring to the Team: Passionate about creating value that changes people's lives Make low-level decisions quickly while being patient and methodical with high-level ones Are curious and passionate about digging into new technologies with a knack for picking them up quickly Adept at prioritizing value and shipping complex products while coordinating across multiple teams Love working with a diverse set of engineers, product managers, designers, and business partners Strive to excel, innovate and take pride in your work Work well with other leaders Are a positive culture driver Excited about working in a fast-paced, startup culture Experience in a regulated industry (healthcare, finance, etc.) a plus and perks: We’re revolutionizing GI care — and our employees are driving the change. We’re a hard-working and fun-loving team, committed to always learning and improving, and dedicated to doing the right thing for our members. To achieve our mission, we invest in our people: We make healthcare more equitable and accessible: Mission-driven organization focused on innovative digestive care Thrive on diversity with monthly DEIB discussions, activities, and more Virtual-first culture: Work from home anywhere in the US Live our core values: Own the outcome, Do the right thing, Be direct and open, Learn and improve, Team, Thrive on diversity We take care of our people: Competitive compensation and meaningful equity Employer-sponsored medical, dental and vision plans Access to a “Life Concierge” through Overalls, because we know life happens Tailored professional development opportunities to learn and grow We rest, recharge and re-energize: Unlimited paid time off — take what you need, when you need it 13 paid company holidays to power down Team events, such as virtual cooking classes, games, and more Recognition of professional and personal accomplishments Oshi Health’s Core Values: Go For It Do the Right Thing Be Direct & Open Learn & Improve TEAM - Together 
Everyone Achieves More Oshi Health is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. Powered by JazzHR wrgF3xMbK3"," Mid-Senior level "," Full-time "," Information Technology "," Internet Publishing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-planoly-3475924786?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=0%2B1v50gESAZ28uRtUqGK%2BA%3D%3D&position=12&pageNum=0&trk=public_jobs_jserp-result_search-card," Oshi Health ",https://www.linkedin.com/company/oshihealth?trk=public_jobs_topcard-org-name," Boston, MA "," 3 weeks ago "," Be among the first 25 applicants "," Do you love to work with data, finding ways to make it reportable, and building models that will add clinical and commercial value for the future? Do you want to bring your skills and experience to a growth stage engineering team, and help set us up for smart expansion? Are you excited by the prospect of having a high-visibility high-impact role in a fast-moving startup? Are you passionate about healthcare, and looking to create a revolutionary new approach to digestive healthcare with a radically better patient experience? If so, you could be a perfect fit for our team of like-minded professionals who share a common mission and passion for helping others and a desire to build a great company. Oshi Health is revolutionizing GI care with a virtual clinic that provides easy, convenient access to a multidisciplinary care team including a GI Physician, Registered Dietician, Mental Health Professional, and Health Coach that takes a whole-person approach to diagnosing, managing and treating digestive health conditions. 
Our care is built on the latest evidence-based protocols and is delivered virtually through an app, secure messaging and telehealth visits with the care team. NOTE: Oshi is a fully remote company, with team members all over the US. What You’ll Do: This role will be a perfect fit if you enjoy learning the entire stack and taking on interdisciplinary challenges. A primary focus will be building out a data engineering program, including ETL, data governance, sanitization, and data operations. This part of your responsibilities will be a balance of executing data operations and building automation to make those operations scalable. You will also have the opportunity to join engineers on the frontend (React Native and React.js), backend (Node.js Lambdas) and Salesforce to help build the Oshi platform. Experience with any of these technologies is a plus, but more important is an enthusiasm to adapt and learn the ones that are new to you. What you’ll do: build the Oshi data program Implement and maintain data pipelines using Stitch, Databricks, and other scripting as needed to feed a PostgreSQL schema supporting Tableau reporting Support users of Tableau by updating data sources or modifying inbound inputs as needed to deliver critical BI reports Own the Oshi data model, ensuring that new features built and new technologies adopted serve the needs of the clinical, commercial, product, and engineering teams Manage ETL of client eligibility files and other data, to make them available for Oshi use in a secure and timely manner. 
Wherever possible, replace bespoke processes with automation. Once you have an understanding of Oshi’s requirements, design and implement a data strategy (with your recommended approach and products) to meet the needs of Oshi’s analytics, commercial, and clinical business lines. Your work will also include: AWS maintenance and administration. Writing technical documentation to outline designs for forthcoming features, outlining the implementation across all technology layers. Meeting with colleagues in Strategy, Product, and Clinical to support their needs from the Engineering group. Production support responsibilities (shared with the entire engineering team) responding to alerts in Datadog, reviewing and troubleshooting issues. Our tech stack: Mobile Platforms Supported: iOS & Android. Cross-Platform Mobile Language: React Native. Other Languages: React-js, HTML, CSS, Java (Salesforce Apex), Node.js (Lambda). Systems: Salesforce, AWS Amplify / Cognito / Lambda. Your Profile: A minimum of 3+ years of professional experience. Bachelor's Degree or equivalent experience. Good interpersonal and relationship skills that include a positive attitude. Self-starter who can find a way forward even when the path is unclear. Team player AND a leader simultaneously. What You’ll Bring to the Team: Passionate about creating value that changes people's lives. Make low-level decisions quickly while being patient and methodical with high-level ones. Are curious and passionate about digging into new technologies with a knack for picking them up quickly. Adept at prioritizing value and shipping complex products while coordinating across multiple teams. Love working with a diverse set of engineers, product managers, designers, and business partners. Strive to excel, innovate and take pride in your work. Work well with other leaders. Are a positive culture driver. Excited about working in a fast-paced, startup culture. Experience in a regulated industry (healthcare, finance, etc.) 
a plus. And perks: We’re revolutionizing GI care — and our employees are driving the change. We’re a hard-working and fun-loving team, committed to always learning and improving, and dedicated to doing the right thing for our members. To achieve our mission, we invest in our people: We make healthcare more equitable and accessible: Mission-driven organization focused on innovative digestive care. Thrive on diversity with monthly DEIB discussions, activities, and more. Virtual-first culture: Work from home anywhere in the US. Live our core values: Own the outcome, Do the right thing, Be direct and open, Learn and improve, Team, Thrive on diversity. We take care of our people: Competitive compensation and meaningful equity. Employer-sponsored medical, dental and vision plans. Access to a “Life Concierge” through Overalls, because we know life happens. Tailored professional development opportunities to learn and grow. We rest, recharge and re-energize: Unlimited paid time off — take what you need, when you need it. 13 paid company holidays to power down. Team events, such as virtual cooking classes, games, and more. Recognition of professional and personal accomplishments. Oshi Health’s Core Values: Go For It. Do the Right Thing. Be Direct & Open. Learn & Improve. TEAM - Together Everyone Achieves More. Oshi Health is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. Powered by JazzHR wrgF3xMbK3 "," Mid-Senior level "," Full-time "," Information Technology "," Internet Publishing " Data Engineer,United States,Data 
Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-oshi-health-3488398091?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=Wve1G5qmXSX7SmTxONMedg%3D%3D&position=13&pageNum=0&trk=public_jobs_jserp-result_search-card," Oshi Health ",https://www.linkedin.com/company/oshihealth?trk=public_jobs_topcard-org-name," Chicago, IL "," 3 weeks ago "," Be among the first 25 applicants ","Do you love to work with data, finding ways to make it reportable, and building models that will add clinical and commercial value for the future? Do you want to bring your skills and experience to a growth stage engineering team, and help set us up for smart expansion? Are you excited by the prospect of having a high-visibility high-impact role in a fast-moving startup? Are you passionate about healthcare, and looking to create a revolutionary new approach to digestive healthcare with a radically better patient experience? If so, you could be a perfect fit for our team of like-minded professionals who share a common mission and passion for helping others and a desire to build a great company. Oshi Health is revolutionizing GI care with a virtual clinic that provides easy, convenient access to a multidisciplinary care team including a GI Physician, Registered Dietician, Mental Health Professional, and Health Coach that takes a whole-person approach to diagnosing, managing and treating digestive health conditions. Our care is built on the latest evidence-based protocols and is delivered virtually through an app, secure messaging and telehealth visits with the care team. NOTE: Oshi is a fully remote company, with team members all over the US. What You’ll Do: This role will be a perfect fit if you enjoy learning the entire stack and taking on interdisciplinary challenges. A primary focus will be building out a data engineering program, including ETL, data governance, sanitization, and data operations. 
This part of your responsibilities will be a balance of executing data operations and building automation to make those operations scalable. You will also have the opportunity to join engineers on the front end (React Native and React.js), backend (Node.js Lambdas) and Salesforce to help build the Oshi platform. Experience with any of these technologies is a plus, but more important is an enthusiasm to adapt and learn the ones that are new to you. What you’ll do: build the Oshi data program. Implement and maintain data pipelines using Stitch, Databricks, and other scripting as needed to feed a PostgreSQL schema supporting Tableau reporting. Support users of Tableau by updating data sources or modifying inbound inputs as needed to deliver critical BI reports. Own the Oshi data model, ensuring that new features built and new technologies adopted serve the needs of the clinical, commercial, product, and engineering teams. Manage ETL of client eligibility files and other data, to make them available for Oshi use in a secure and timely manner. Wherever possible, replace bespoke processes with automation. Once you have an understanding of Oshi’s requirements, design and implement a data strategy (with your recommendation of approach and products) to meet the needs of Oshi’s analytics, commercial, and clinical business lines. Your work will also include: AWS maintenance and administration. Writing technical documentation to outline designs for forthcoming features, outlining the implementation across all technology layers. Meeting with colleagues in Strategy, Product, and Clinical to support their needs from the Engineering group. 
Production support responsibilities (shared with the entire engineering team): responding to alerts in Datadog, reviewing and troubleshooting issues. Our tech stack: Mobile Platforms Supported: iOS & Android. Cross-Platform Mobile Language: React Native. Other Languages: React.js, HTML, CSS, Java (Salesforce Apex), Node.js (Lambda). Systems: Salesforce, AWS Amplify / Cognito / Lambda. Your Profile: A minimum of 3 years of professional experience. Bachelor's Degree or equivalent experience. Good interpersonal and relationship skills that include a positive attitude. Self-starter who can find a way forward even when the path is unclear. Team player AND a leader simultaneously. What You’ll Bring to the Team: Passionate about creating value that changes people's lives. Make low-level decisions quickly while being patient and methodical with high-level ones. Are curious and passionate about digging into new technologies with a knack for picking them up quickly. Adept at prioritizing value and shipping complex products while coordinating across multiple teams. Love working with a diverse set of engineers, product managers, designers, and business partners. Strive to excel, innovate and take pride in your work. Work well with other leaders. Are a positive culture driver. Excited about working in a fast-paced, startup culture. Experience in a regulated industry (healthcare, finance, etc.) a plus. And perks: We’re revolutionizing GI care — and our employees are driving the change. We’re a hard-working and fun-loving team, committed to always learning and improving, and dedicated to doing the right thing for our members. 
To achieve our mission, we invest in our people: We make healthcare more equitable and accessible: Mission-driven organization focused on innovative digestive care. Thrive on diversity with monthly DEIB discussions, activities, and more. Virtual-first culture: Work from home anywhere in the US. Live our core values: Own the outcome, Do the right thing, Be direct and open, Learn and improve, Team, Thrive on diversity. We take care of our people: Competitive compensation and meaningful equity. Employer-sponsored medical, dental and vision plans. Access to a “Life Concierge” through Overalls, because we know life happens. Tailored professional development opportunities to learn and grow. We rest, recharge and re-energize: Unlimited paid time off — take what you need, when you need it. 13 paid company holidays to power down. Team events, such as virtual cooking classes, games, and more. Recognition of professional and personal accomplishments. Oshi Health’s Core Values: Go For It. Do the Right Thing. Be Direct & Open. Learn & Improve. TEAM - Together Everyone Achieves More. Oshi Health is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. 
Powered by JazzHR mDt2wgg0Gz"," Mid-Senior level "," Full-time "," Information Technology "," Internet Publishing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-adobe-3459050989?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=kGivn%2FitfRQgXdooK9%2B%2FAw%3D%3D&position=14&pageNum=0&trk=public_jobs_jserp-result_search-card," Adobe ",https://www.linkedin.com/company/adobe?trk=public_jobs_topcard-org-name," Austin, TX "," 14 hours ago "," Over 200 applicants ","Our Company: Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours! Adobe Customer Solutions is looking for a full-time Data Engineer with experience in building data integrations using the AWS technology stack as part of the team's Data as a Service portfolio for Adobe’s Digital Experience enterprise customers. 
Customer-facing Engineers who enjoy tackling complex technical challenges, have a passion for delighting customers and who are self-motivated to push themselves in a team-oriented culture will thrive in our environment. What You'll Do: Collaborate with Data architects, Enterprise architects, Solution consultants and Product engineering teams to gather customer data integration requirements, conceptualize solutions & build the required technology stack. Collaborate with enterprise customers' engineering teams to identify data sources, profile and quantify the quality of data sources, develop tools to prepare data and build data pipelines for integrating customer data sources and third-party data sources with Adobe solutions. Develop new features and improve existing data integrations with the customer data ecosystem. Encourage the team to think out-of-the-box and overcome engineering obstacles while incorporating new innovative design principles. Collaborate with a Project Manager to bill and forecast time for customer solutions. What You Need To Succeed: Proven experience in architecting and building fault-tolerant and scalable data processing integrations using AWS. Ability to identify and resolve problems associated with production-grade large-scale data processing workflows. Experience leveraging REST APIs to serve and consume data. Proven track record in the Python programming language. Software development experience working with Apache Airflow, Spark, and SQL/NoSQL databases. Deep understanding of streaming architecture using tools such as Spark Streaming, Kinesis and Kafka. Experience with Docker, containerization and orchestration. BS/MS degree in Computer Science or equivalent proven experience. At least 3 years of experience as a data engineer or in a similar role. This client-facing position requires working with a variety of stakeholders in different roles; having applicable experience is important. Previous experience in building and deploying solutions using CI/CD. 
Passion for crafting intelligent data pipelines using Microservices/Event-Driven Architecture under strict deadlines. A strong capacity to handle numerous projects in parallel is a must. At Adobe, you will be immersed in an exceptional work environment that is recognized throughout the world on Best Companies lists. You will also be surrounded by colleagues who are committed to helping each other grow through our outstanding Check-In approach where feedback flows freely. If you’re looking to make an impact, Adobe's the place for you. Discover what our employees are saying about their career experiences on the Adobe Life blog and explore the significant benefits we offer. Adobe is an equal opportunity employer. We welcome and encourage diversity in the workplace regardless of race, gender, religion, age, sexual orientation, gender identity, disability or veteran status. Our compensation reflects the cost of labor across several U.S. geographic markets, and we pay differently based on those defined markets. The U.S. pay range for this position is $101,500 -- $194,300 annually. Pay within this range varies by work location and may also depend on job-related knowledge, skills, and experience. Your recruiter can share more about the specific salary range for the job location during the hiring process. At Adobe, for sales roles starting salaries are expressed as total target compensation (TTC = base + commission), and short-term incentives are in the form of sales commission plans. Non-sales roles' starting salaries are expressed as base salary and short-term incentives are in the form of the Annual Incentive Plan (AIP). 
In addition, certain roles may be eligible for long-term incentives in the form of a new hire equity award."," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting, Advertising Services, and Software Development " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-teckpert-3515390151?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=OXhAJn%2FCj%2B9s7jh6wIWscA%3D%3D&position=15&pageNum=0&trk=public_jobs_jserp-result_search-card," Adobe ",https://www.linkedin.com/company/adobe?trk=public_jobs_topcard-org-name," Austin, TX "," 14 hours ago "," Over 200 applicants "," Our Company: Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. 
We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours! Adobe Customer Solutions is looking for a full-time Data Engineer with experience in building data integrations using the AWS technology stack as part of the team's Data as a Service portfolio for Adobe’s Digital Experience enterprise customers. Customer-facing Engineers who enjoy tackling complex technical challenges, have a passion for delighting customers and who are self-motivated to push themselves in a team-oriented culture will thrive in our environment. What You'll Do: Collaborate with Data architects, Enterprise architects, Solution consultants and Product engineering teams to gather customer data integration requirements, conceptualize solutions & build the required technology stack. Collaborate with enterprise customers' engineering teams to identify data sources, profile and quantify the quality of data sources, develop tools to prepare data and build data pipelines for integrating customer data sources and third-party data sources with Adobe solutions. Develop new features and improve existing data integrations with the customer data ecosystem. Encourage the team to think out-of-the-box and overcome engineering obstacles while incorporating new innovative design principles. Collaborate with a Project Manager to bill and forecast time for customer solutions. What You Need To Succeed: Proven experience in architecting and building fault-tolerant and scalable data processing integrations using AWS. Ability to identify and resolve problems associated with production-grade large-scale data processing workflows. Experience leveraging REST APIs to serve and consume data. Proven track record in the Python programming language. Software development experience working with Apache Airflow, Spark, and SQL/NoSQL databases. Deep understanding of streaming architecture using tools such as Spark Streaming, Kinesis and Kafka. Experience with Docker, containerization and orchestration. 
BS/MS degree in Computer Science or equivalent proven experience. At least 3 years of experience as a data engineer or in a similar role. This client-facing position requires working with a variety of stakeholders in different roles; having applicable experience is important. Previous experience in building and deploying solutions using CI/CD. Passion for crafting intelligent data pipelines using Microservices/Event-Driven Architecture under strict deadlines. A strong capacity to handle numerous projects in parallel is a must. At Adobe, you will be immersed in an exceptional work environment that is recognized throughout the world on Best Companies lists. You will also be surrounded by colleagues who are committed to helping each other grow through our outstanding Check-In approach where feedback flows freely. If you’re looking to make an impact, Adobe's the place for you. Discover what our employees are saying about their career experiences on the Adobe Life blog and explore the significant benefits we offer. Adobe is an equal opportunity employer. We welcome and encourage diversity in the workplace regardless of race, gender, religion, age, sexual orientation, gender identity, disability or veteran status. Our compensation reflects the cost of labor across several U.S. geographic markets, and we pay differently based on those defined markets. The U.S. pay range for this position is $101,500 -- $194,300 annually. Pay within this range varies by work location and may also depend on job-related knowledge, skills, and experience. Your recruiter can share more about the specific salary range for the job location during the hiring process. At Adobe, for sales roles starting salaries are expressed as total target compensation (TTC = base + commission), and short-term incentives are in the form of sales commission plans. 
Non-sales roles' starting salaries are expressed as base salary and short-term incentives are in the form of the Annual Incentive Plan (AIP). In addition, certain roles may be eligible for long-term incentives in the form of a new hire equity award. "," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting, Advertising Services, and Software Development " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-american-express-3507120914?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=DJb9%2FR8IE9BL9ZEP%2FQYBYw%3D%3D&position=16&pageNum=0&trk=public_jobs_jserp-result_search-card," Adobe ",https://www.linkedin.com/company/adobe?trk=public_jobs_topcard-org-name," Austin, TX "," 14 hours ago "," Over 200 applicants "," Our Company: Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. 
We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours! Adobe Customer Solutions is looking for a full-time Data Engineer with experience in building data integrations using the AWS technology stack as part of the team's Data as a Service portfolio for Adobe’s Digital Experience enterprise customers. Customer-facing Engineers who enjoy tackling complex technical challenges, have a passion for delighting customers and who are self-motivated to push themselves in a team-oriented culture will thrive in our environment. What You'll Do: Collaborate with Data architects, Enterprise architects, Solution consultants and Product engineering teams to gather customer data integration requirements, conceptualize solutions & build the required technology stack. Collaborate with enterprise customers' engineering teams to identify data sources, profile and quantify the quality of data sources, develop tools to prepare data and build data pipelines for integrating customer data sources and third-party data sources with Adobe solutions. Develop new features and improve existing data integrations with the customer data ecosystem. Encourage the team to think out-of-the-box and overcome engineering obstacles while incorporating new innovative design principles. Collaborate with a Project Manager to bill and forecast time for customer solutions. What You Need To Succeed: Proven experience in architecting and building fault-tolerant and scalable data processing integrations using AWS. Ability to identify and resolve problems associated with production-grade large-scale data processing workflows. Experience leveraging REST APIs to serve and consume data. Proven track record in the Python programming language. Software development experience working with Apache Airflow, Spark, and SQL/NoSQL databases. Deep understanding of streaming architecture using tools such as Spark Streaming, Kinesis and Kafka. Experience with Docker, containerization and orchestration. 
BS/MS degree in Computer Science or equivalent proven experience. At least 3 years of experience as a data engineer or in a similar role. This client-facing position requires working with a variety of stakeholders in different roles; having applicable experience is important. Previous experience in building and deploying solutions using CI/CD. Passion for crafting intelligent data pipelines using Microservices/Event-Driven Architecture under strict deadlines. A strong capacity to handle numerous projects in parallel is a must. At Adobe, you will be immersed in an exceptional work environment that is recognized throughout the world on Best Companies lists. You will also be surrounded by colleagues who are committed to helping each other grow through our outstanding Check-In approach where feedback flows freely. If you’re looking to make an impact, Adobe's the place for you. Discover what our employees are saying about their career experiences on the Adobe Life blog and explore the significant benefits we offer. Adobe is an equal opportunity employer. We welcome and encourage diversity in the workplace regardless of race, gender, religion, age, sexual orientation, gender identity, disability or veteran status. Our compensation reflects the cost of labor across several U.S. geographic markets, and we pay differently based on those defined markets. The U.S. pay range for this position is $101,500 -- $194,300 annually. Pay within this range varies by work location and may also depend on job-related knowledge, skills, and experience. Your recruiter can share more about the specific salary range for the job location during the hiring process. At Adobe, for sales roles starting salaries are expressed as total target compensation (TTC = base + commission), and short-term incentives are in the form of sales commission plans. 
Non-sales roles' starting salaries are expressed as base salary and short-term incentives are in the form of the Annual Incentive Plan (AIP). In addition, certain roles may be eligible for long-term incentives in the form of a new hire equity award. "," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting, Advertising Services, and Software Development " Data Engineer,United States,Jr. Data Engineer,https://www.linkedin.com/jobs/view/jr-data-engineer-at-avispa-3504222126?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=LQ4sv0XVNhm4K0BbqHz5OQ%3D%3D&position=17&pageNum=0&trk=public_jobs_jserp-result_search-card," Avispa ",https://www.linkedin.com/company/avispa-llc?trk=public_jobs_topcard-org-name," United States "," 1 month ago "," 79 applicants ","Jr. Data Engineer 10863 A leading professional networking and development company is seeking a Data Engineer. The successful candidate will design and analyze experiments to test new product ideas and convert the results into actionable product recommendations. The ideal candidate has prior experience with Hadoop, Tableau, SQL, and Java/C++. The company offers a great work environment! Jr. Data Engineer Pay And Benefits Hourly pay: $30-$40/hr Worksite: Leading professional networking and development company (San Francisco, CA 94105, Open to Remote, Must work Pacific Standard Time hours) W2 Employment, Group Medical, Dental, Vision, Life, 401k 40 hours/week, 6 Month Assignment Jr. Data Engineer Responsibilities Extract and analyze data to derive actionable insights. Formulate success metrics for completely novel products, socializing them and creating dashboards/reports to monitor them. Design and analyze experiments to test new product ideas and convert the results into actionable product recommendations. As an end consumer of the data, determine the tracking necessary to enable analytics of the products and features by working closely with product and engineering partners. 
Enable others in the organization to utilize your work by onboarding new metrics into the self-serve data system and the experimentation platform. Develop models and data-driven solutions that add material lift to principal performance metrics. Manage Revamp Marketing Data platform. Rewrite data layers. Create foundation layers. Jr. Data Engineer Qualifications 2+ years of experience developing applications, software and web analytics. 2+ years of work experience providing analytical insights and business reports to product or business functions. 2+ years of experience with Tableau, QlikView, Microstrategy or other data visualization and BI dashboarding tools. 1+ years of experience programming in Java or Python and working with large datasets. BS/MS degree in a quantitative discipline: Statistics, Applied Mathematics, Operations Research, Computer Science, Engineering, or Economics. PhD in a quantitative discipline is preferred (statistics, applied mathematics, operations research, computer science, engineering, economics, etc.) Advanced skills in Java/C++ are preferred. Experience in Hadoop or other MapReduce paradigms and associated languages such as Pig, Sawzall, etc. Experience presenting insights to executive staff on a regular basis. Expertise in applied statistics and in at least one statistical software package, preferably R. Proficiency in SQL and in a Unix/Linux environment for automating processes with shell scripting. Ability to communicate findings clearly to both technical and non-technical audiences. Ability to translate business objectives into actionable analyses. 
Have a platform based on Legacy data/info."," Mid-Senior level "," Full-time "," Information Technology "," Staffing and Recruiting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-cinqcare-3510880916?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=s6PmauCTqglZ4juOLRvaaQ%3D%3D&position=18&pageNum=0&trk=public_jobs_jserp-result_search-card," CINQCARE ",https://www.linkedin.com/company/cinq-care?trk=public_jobs_topcard-org-name," Washington, DC "," 1 week ago "," 32 applicants ","Overview The Data Engineer is a critical member of our growing data science team. In this role, you will have the chance to define and develop a core data asset which provides a representative and culturally aware view of the individuals and the communities that CINQCARE serves. You will work to evolve this asset over time using a product roadmap that includes identification and closure of gaps in existing data, introduction of new data sources, and generation of proprietary data while quantifying and eliminating areas of structural bias. CINQCARE seeks to fix gaps that have persisted for generations in the delivery of care to Black and Brown populations, and to do so, we must also seek to fix gaps in data that ignores and marginalizes the Black and Brown communities that we serve. An ideal candidate for this role will embody CINQCARE’s core values, including Trusted, Empathetic, Committed, Humble, Creative and Community-Minded. At CINQCARE, we don’t have patients or customers – we have Family Members. Job Responsibilities The Data Engineer will have the following responsibilities: The Data Engineer will be responsible for the design, development and delivery of data pipelines and value-added data assets, leveraging a variety of data warehousing methodologies and disciplines to ingest the data from heterogeneous sources into a cloud-based Data Lake environment in AWS. Manage initiatives & projects of significant complexity and risk. 
Excellent business and communication skills to be able to work with business owners providing input to prioritized roadmaps, develop work estimates, and ensure successful delivery to support strategic planning and initiatives, improve organizational performance, and advance progress towards CINQCARE’s goals. Assist in the overall architecture of the ETL design and proactively provide inputs in designing, implementing, and automating the data pipelines. Investigate and mine data to identify potential issues within the data pipelines, notify end-users and propose adequate solutions. Ensure data quality and integrity within the data lake in the AWS environment with a focus on compliance with HIPAA and state-level compliance requirements. Oversee user permissions and configurations for adherence to documented access management standards and policies. Independently (with minimal oversight) develop and maintain trusted advisor relationships with business, clinical, and operations leaders at the senior leadership level and with external partners, including guidance for optimizing use of analytic capabilities and deliverables, and prioritization based on strategic vision. Use coding/scripting pipelines and APIs to uncover and turn data into assets that are analysis-friendly, using AWS services like the Athena Data Catalog, QuickSight or any other big data tool on AWS. Create a high-quality catalog of all pertinent data with the primary goal of establishing a single source of truth and significantly increasing productivity by reducing the time required for data search and discovery. Crossing team boundaries, educate/advise on data projects, on how to combine and aggregate client data across platforms or technologies, and how to make the greatest use of data. 
Lead consistent adherence to the Software Development Life Cycle framework and governance processes including, but not limited to, leading planning sessions, collecting and documenting requirements, identifying design patterns, creating and defining custom transformations, aggregations, and other data manipulations, developing data pipelines, creating documentation, developing test plans, performing unit testing, and conducting peer review sessions. Use knowledge of the healthcare industry, market environment, and clinical and business workflows and activities to inform solution design and development and to execute high-quality or differentiated solutions in an established problem space. Perform other job-related duties as assigned. General Duties The Data Engineer should have the following duties: Leadership: The Data Engineer will lead the continued build-out of the data asset to create business value, including collaborating with their team to design, develop, and execute those strategies and solutions to deliver desired outcomes. Strategy: The Data Engineer will contribute to the business strategy and roadmap: (1) improve outcomes for CINQCARE Family Members; (2) enhance the efficacy of other CINQCARE business divisions; and (3) develop and deliver external market opportunities for CINQCARE products and services. In establishing the business strategy, the Data Engineer will define and innovate sustainable revenue models to drive profitability of the Company. Collaboration: The Data Engineer will ensure that AI capabilities form a cohesive offering, including by working closely with other business divisions to learn their needs, internalize their knowledge, and define solutions to achieve the business objectives of CINQCARE. 
Knowledge: The Data Engineer will provide subject matter expertise in the AI solutions, including determining and recommending approaches for designing and building elegant data structures in support of existing reporting tools and custom visualization platforms. Culture: The Data Engineer is accountable for creating a productive, collaborative, safe, and inclusive work environment for their team and as part of the larger Company. Qualifications The Data Engineer should have the following qualifications: Education: Bachelor’s degree in Computer Science, Engineering, Software Engineering, or a related field; Master’s degree preferred. Experience: The ideal candidate should have 3+ years of experience in healthcare data engineering. Experience with a variety of data projects and environments, whether on-prem or in-cloud (5+ years in SQL Server, ETL tools, business intelligence and analysis, and architecture). Familiarity with the Microsoft stack; experience with other platforms is a plus. Mastery of Python and SQL. Strong foundational knowledge of data lakes and AWS products such as AWS Glue. Experience with healthcare eligibility and claims, implementing APIs, HL7/FHIR standards, ETL scheduling solutions, SQL, and healthcare data security. Entrepreneurial: CINQCARE seeks to fix gaps that have persisted for generations in the delivery of care to Black and Brown populations. This position is accountable for ensuring CINQCARE is positioned to innovatively deliver on its promise. Communication: Strong analytical and collaboration skills are required. Excellent verbal, written communication, and presentation skills; ability to clearly articulate and present concepts and models in an accessible manner to CINQCARE’s team, investors, partners, and other stakeholders. Relationships: Ability to build and effectively manage relationships with business leaders and external constituents. Culture: 
Good judgement, impeccable ethics, and a strong team player; desire to succeed and grow in a fast-paced, demanding, and entrepreneurial Company. Location: New York, NY Compensation: $100,000-$120,000 Powered by JazzHR My3ehHtP4z"," Mid-Senior level "," Full-time "," Information Technology "," Internet Publishing " 
Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-oshi-health-3494098959?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=8uV9x6ECwxyLqwsixCGpvg%3D%3D&position=20&pageNum=0&trk=public_jobs_jserp-result_search-card," Oshi Health ",https://www.linkedin.com/company/oshihealth?trk=public_jobs_topcard-org-name," Tampa, FL "," 3 weeks ago "," Be among the first 25 applicants ","Do you love to work with data, finding ways to make it reportable, and building models that will add clinical and commercial value for the future? Do you want to bring your skills and experience to a growth stage engineering team, and help set us up for smart expansion? Are you excited by the prospect of having a high-visibility high-impact role in a fast-moving startup? Are you passionate about healthcare, and looking to create a revolutionary new approach to digestive healthcare with a radically better patient experience? If so, you could be a perfect fit for our team of like-minded professionals who share a common mission and passion for helping others and a desire to build a great company. 
Oshi Health is revolutionizing GI care with a virtual clinic that provides easy, convenient access to a multidisciplinary care team including a GI Physician, Registered Dietitian, Mental Health Professional, and Health Coach that takes a whole-person approach to diagnosing, managing and treating digestive health conditions. Our care is built on the latest evidence-based protocols and is delivered virtually through an app, secure messaging and telehealth visits with the care team. NOTE: Oshi is a fully remote company, with team members all over the US. What You’ll Do: This role will be a perfect fit if you enjoy learning the entire stack and taking on interdisciplinary challenges. A primary focus will be building out a data engineering program, including ETL, data governance, sanitization, and data operations. This part of your responsibilities will be a balance of executing data operations and building automation to make those operations scalable. You will also have the opportunity to join engineers on the front end (React Native and React.js), backend (Node.js Lambdas), and Salesforce to help build the Oshi platform. Experience with any of these technologies is a plus, but more important is an enthusiasm to adapt and learn the ones that are new to you. What you’ll do to build the Oshi data program: Implement and maintain data pipelines using Stitch, Databricks, and other scripting as needed to feed a PostgreSQL schema supporting Tableau reporting. Support users of Tableau by updating data sources or modifying inbound inputs as needed to deliver critical BI reports. Own the Oshi data model, ensuring that new features built and new technologies adopted serve the needs of the clinical, commercial, product, and engineering teams. Manage ETL of client eligibility files and other data to make them available for Oshi use in a secure and timely manner. 
Wherever possible, replace bespoke processes with automation. Once you have an understanding of Oshi’s requirements, design and implement a data strategy (with your recommendation of approach and products) to meet the needs of Oshi’s analytics, commercial, and clinical business lines. Your work will also include: AWS maintenance and administration. Writing technical documentation to outline designs for forthcoming features, outlining the implementation across all technology layers. Meeting with colleagues in Strategy, Product, and Clinical to support their needs from the Engineering group. Production support responsibilities (shared with the entire engineering team): responding to alerts in Datadog, reviewing and troubleshooting issues. Our tech stack: Mobile Platforms Supported: iOS & Android. Cross-Platform Mobile Language: React Native. Other Languages: React.js, HTML, CSS, Java (Salesforce Apex), Node.js (Lambda). Systems: Salesforce, AWS Amplify / Cognito / Lambda. Your Profile: 3+ years of professional experience. Bachelor's Degree or equivalent experience. Good interpersonal and relationship skills that include a positive attitude. Self-starter who can find a way forward even when the path is unclear. Team player AND a leader simultaneously. What You’ll Bring to the Team: Passionate about creating value that changes people's lives. Make low-level decisions quickly while being patient and methodical with high-level ones. Are curious and passionate about digging into new technologies with a knack for picking them up quickly. Adept at prioritizing value and shipping complex products while coordinating across multiple teams. Love working with a diverse set of engineers, product managers, designers, and business partners. Strive to excel, innovate and take pride in your work. Work well with other leaders. Are a positive culture driver. Excited about working in a fast-paced, startup culture. Experience in a regulated industry (healthcare, finance, etc.) 
a plus. Perks: We’re revolutionizing GI care — and our employees are driving the change. We’re a hard-working and fun-loving team, committed to always learning and improving, and dedicated to doing the right thing for our members. To achieve our mission, we invest in our people: We make healthcare more equitable and accessible: Mission-driven organization focused on innovative digestive care. Thrive on diversity with monthly DEIB discussions, activities, and more. Virtual-first culture: work from home anywhere in the US. Live our core values: Own the outcome, Do the right thing, Be direct and open, Learn and improve, Team, Thrive on diversity. We take care of our people: Competitive compensation and meaningful equity. Employer-sponsored medical, dental and vision plans. Access to a “Life Concierge” through Overalls, because we know life happens. Tailored professional development opportunities to learn and grow. We rest, recharge and re-energize: Unlimited paid time off — take what you need, when you need it. 13 paid company holidays to power down. Team events, such as virtual cooking classes, games, and more. Recognition of professional and personal accomplishments. Oshi Health’s Core Values: Go For It; Do the Right Thing; Be Direct & Open; Learn & Improve; TEAM - Together Everyone Achieves More. Oshi Health is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. 
Powered by JazzHR j6NnJagQeP"," Mid-Senior level "," Full-time "," Information Technology "," Internet Publishing " Data Engineer,United States,"Data Engineer, Analytics (Generalist)",https://www.linkedin.com/jobs/view/data-engineer-analytics-generalist-at-instagram-3512575738?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=Xn5xR6tnTZWYyF3zJpemCA%3D%3D&position=21&pageNum=0&trk=public_jobs_jserp-result_search-card," Instagram ",https://www.linkedin.com/company/instagram?trk=public_jobs_topcard-org-name," San Francisco, CA "," 2 weeks ago "," 63 applicants ","Every month, billions of people leverage Meta products to connect with friends and loved ones from across the world. On the Data Engineering Team, our mission is to support these products both internally and externally by delivering the best data foundation that drives impact through informed decision making. As a highly collaborative organization, our data engineers work cross-functionally with software engineering, data science, and product management to optimize growth, strategy, and experience for our 3 billion plus users, as well as our internal employee community. We are looking for a technical leader in our Data Engineering team to work closely with Product Managers, Data Scientists and Software Engineers to support building out a great platform for the future of computing. In this role, you will see a direct correlation between your work, company growth, and user satisfaction. You’ll work with some of the brightest minds in the industry, work with one of the richest data sets in the world, use cutting-edge technology, and see your efforts affect products and people on a regular basis. The ideal candidate will have strong data infrastructure and data architecture skills as well as experience in areas such as governing company-wide data marts, enabling security and privacy data solutions, and full stack experience with analytical technologies. 
Candidates should also have a proven track record of leading and scaling efforts related to end-to-end analytics systems, strong operational skills to drive efficiency and speed, strong project management leadership, and a strong vision for how data can proactively improve companies. As we continue to expand and create, we have a lot of exciting work ahead of us! Data Engineer, Analytics (Generalist) Responsibilities: Proactively drive the vision for data foundation and analytics to accelerate building and improvement of cross-platform components across Instagram, and define and execute on a plan to achieve that vision. Conceptualize and own the data architecture for multiple large-scale projects, while evaluating design and operational cost-benefit tradeoffs within systems. Create and contribute to frameworks that improve the efficacy of logging data, while working with data infrastructure to triage and resolve issues. Build cross-functional relationships with Data Scientists, Product Managers and Software Engineers to understand data needs and deliver on those needs. Define and manage SLAs for all data sets in allocated areas of ownership. Determine and implement the security model based on privacy requirements, confirm safeguards are followed, address data quality issues, and evolve governance processes within allocated areas of ownership. Design, build, and launch collections of sophisticated data models and visualizations that support use cases across different products or domains. Solve our most challenging data integration problems, utilizing optimal ETL patterns, frameworks, and query techniques, sourcing from structured and unstructured data sources. Assist in owning existing processes running in production, optimizing complex code through advanced algorithmic concepts. Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts. 
Influence product and cross-functional teams to identify data opportunities to drive impact. Mentor team members by giving/receiving actionable feedback. Minimum Qualifications: Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience. 12+ years of experience in the data warehouse space. 12+ years of experience in custom ETL design, implementation and maintenance. 12+ years of experience with object-oriented programming languages. 12+ years of experience with schema design and dimensional data modeling. 12+ years of experience in writing SQL statements. Experience analyzing data to identify deliverables, gaps and inconsistencies. Experience managing and communicating data warehouse plans to internal clients. Preferred Qualifications: BS/BA in Technical Field, Computer Science or Mathematics. Experience working with either a MapReduce or an MPP system. Knowledge and practical application of Python. Experience working autonomously in global teams. Experience influencing product decisions with data. Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics. You may view our Equal Employment Opportunity notice here. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. We may use your information to maintain the safety and security of Meta, its employees, and others as required or permitted by law. 
You may view Meta's Pay Transparency Policy, Equal Employment Opportunity is the Law notice, and Notice to Applicants for Employment and Employees by clicking on their corresponding links. Additionally, Meta participates in the E-Verify program in certain locations, as required by law "," Not Applicable "," Full-time "," Information Technology "," Technology, Information and Internet " 
Data Engineer,United States,Data Engineer (AWS) - Remote,https://www.linkedin.com/jobs/view/data-engineer-aws-remote-at-hyatt-hotels-corporation-3512663083?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=9SNmIWhU5UH9MHdXaFTXOw%3D%3D&position=24&pageNum=0&trk=public_jobs_jserp-result_search-card," Hyatt Hotels Corporation ",https://www.linkedin.com/company/hyatt?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," Over 200 applicants ","Hyatt seeks an experienced Data Engineer who will be an exceptional addition to our growing engineering team. The Data Engineer will work closely with engineering, product managers and data science teams to meet data requirements of various initiatives at Hyatt. As a Data Engineer, you will take on big data challenges in an agile way. In this role, you will build data pipelines that enable engineers, analysts and other stakeholders across the organization. You will build data models to deliver insightful analytics while ensuring the highest standard in data integrity. You will integrate different data sources, improve the efficiency, reliability, and latency of our data system, help automate data pipelines, and improve our data model and overall architecture. 
You will be part of a highly visible, collaborative and passionate data engineering team and will be working on all the aspects of design, development and implementation of scalable and reliable data products and pipelines. Applying the latest techniques and approaches across the domains of data engineering, and machine learning engineering isn’t just a nice to have, it’s a must. This candidate builds fantastic relationships across all levels of the organization and is recognized as a problem solver who looks to elevate the work of everyone around them. Collaborate with product managers, data scientists, engineering, and program management teams to define product features, business deliverables and strategies for data products Collaborate with business partners, operations, senior management, etc on day-to-day operational support Support operational reporting, self-service data engineering efforts, production data pipelines, and business intelligence suite Interface with multiple diverse stakeholders and gather/understand business requirements, assess feasibility and impact, and deliver on time with high quality Design appropriate solutions and recommend alternative approaches when necessary Work with high volumes of data, fine tuning database queries and able to solve complex technical problems Contribute to multiple projects/demands simultaneously Work in a fast paced, collaborative and iterative environment Exercise independent judgment in methods and techniques for obtaining results Work in an agile/scrum environment Use state of the art technologies to acquire, ingest and transform big datasets The ideal candidate demonstrates a commitment to Hyatt core values: respect, integrity, humility, empathy, creativity, and fun. 
Qualifications 2 to 5 years of experience within the field of data engineering or related technical work including business intelligence, analytics Experience and comfort solving problems in an ambiguous environment where there is constant change. Have the tenacity to thrive in a dynamic and fast-paced environment, inspire change, and collaborate with a variety of individuals and organizational partners Experience designing and building scalable and robust data pipelines to enable data-driven decisions for the business Very good understanding of the full software development life cycle Very good understanding of Data warehousing concepts and approaches Experience in building Data pipelines and ETL approaches Experience in building Data warehouse and Business intelligence projects Experience in data cleansing, data validation and data wrangling Hands-on experience in AWS cloud and AWS native technologies such as Glue, Lambda, Kinesis, Lake Formation, S3, Redshift Experience using Spark EMR, RDS, EC2, Athena, API capabilities, CloudWatch, CloudTrail is a plus Experience with Business Intelligence tools like Tableau, Cognos, ThoughtSpot, etc is a plus Hands-on experience building complex business logics and ETL workflows using Informatica PowerCenter is preferred Experience in one of the scripting languages: Python or Unix Scripting Proficient in SQL, PL/SQL, relational databases (RDBMS), database concepts and dimensional modeling Strong verbal and written communication skills Demonstrate integrity and maturity, and a constructive approach to challenges Demonstrate analytical and problem-solving skills, particularly those that apply to Data Warehouse and Big Data environments Open minded, solution oriented and a very good team player Passionate about programming and learning new technologies; focused on helping yourself and the team improve skills Effective problem solving and analytical skills. 
Ability to manage multiple projects and report simultaneously across different stakeholders Rigorous attention to detail and accuracy Bachelor’s degree in Engineering, Computer Science, Statistics, Economics, Mathematics, Finance, or a related quantitative field The position responsibilities outlined above are in no way to be construed as all encompassing. Other duties, responsibilities, and qualifications may be required and/or assigned as necessary."," Associate "," Full-time "," Engineering and Information Technology "," Travel Arrangements and Hospitality " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-blackbird-ai-3506555861?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=hk9lNT1SB%2FZqL34z8NGQDg%3D%3D&position=25&pageNum=0&trk=public_jobs_jserp-result_search-card," BLACKBIRD.AI ",https://www.linkedin.com/company/blackbird-ai?trk=public_jobs_topcard-org-name," New York, United States "," 1 month ago "," 92 applicants ","This is a fully remote opportunity at Blackbird.AI. You will not be required to relocate. The Company: What has been the effect of disinformation on the world? Blackbird.AI creates leading-edge AI software to provide critical real-time insights to provide our clients with a deep understanding of ongoing disruptive narratives, their motives, and overall digital noise. We are united by our dedication to our mission. We believe that we have a responsibility to society and that our service is vitally needed by organizations and individuals to create an empowered and critical thinking society. If this mission resonates with you, we'd love to hear from you. The Opportunity: Get ready to join a small but growing team of highly talented engineers and leaders, building exciting AI-driven services and technologies. 
As a Data Engineer for Blackbird.AI, you will own the pipeline optimization for a real-time streaming cloud-hosted analytics platform that spans data collection and analysis, and serves results to a user dashboard for interactive visual exploration. Our position requires a breadth of experience with database technologies, especially the engineering of horizontally scalable solutions for big data. Responsibilities: Writes ETL processes to support ingestion and normalization of a wide variety of social media, news, and web scrape formats Designs database systems and develops tools for query and analytic processing, including for streaming real-time applications Performs analysis and comparative empirical studies to evaluate performance tradeoffs with respect to scaling (e.g., cost vs throughput/latency) Develops, manages and owns the database architecture for a real-time streaming cloud hosted analytics platform, spanning data collection, analytics and user management Owns build automation, continuous integration, deployment and performance optimization in compliance with our security requirements Requirements Must Have: BS degree in Computer Science or equivalent Demonstrated product success with deployment in the cloud and SaaS model; proven capability to develop processing pipeline for platforms that are optimized for streaming analytics applications and that are cloud agnostic (Kubernetes, dockerized solutions) Expert level capable on PostgreSQL, Neo4j (graph), ElasticSearch, MongoDB, Redis, Druid, with other NoSQL and graph DBs helpful Experienced with horizontal scaling of databases Experienced with Kafka and Airflow; expert in applying tools for runtime profiling to optimize throughput and latency and establish comparative performance benchmarks Capable in build automation, continuous integration and deployment (CI/CD) tools, e.g. 
Webpack, Buddy or using Jenkins + docker Expert level Python code development Experience working with distributed teams Helpful to Have: Technical background in Artificial Intelligence (AI) and Machine Learning (ML) Experience designing and implementing interactive query-driven man-machine intelligence systems Solid skills in Java Benefits Health Care Plan (Medical, Dental & Vision) Paid Time Off (Vacation, Sick & Public Holidays) Work From Home Stock Option Plan Exciting career development prospects, to grow into leadership roles Take note - due to the high volume of applicants, only shortlisted candidates will be notified. Thank you for taking the time to apply for the role at Blackbird.AI. LI-Remote"," Not Applicable "," Full-time "," Engineering "," Technology, Information and Internet " 
Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-kellogg-company-3531410718?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=JGn4g5ltnbI3%2F95njtqv%2Bw%3D%3D&position=3&pageNum=1&trk=public_jobs_jserp-result_search-card," Kellogg Company ",https://www.linkedin.com/company/kellogg-company?trk=public_jobs_topcard-org-name," Naperville, IL "," 1 hour ago "," 45 applicants ","As a Data Engineer, you will play a pivotal role in collecting, processing, and preparing enterprise data for analytics and reporting, to support Kellogg’s digital business initiatives. You’ll have the opportunity to lead the design and implementation of data products, working with a range of AWS services, including S3, Redshift, EMR, Lambda, Glue and SageMaker. 
You will be scoping and designing technical solutions, working with a variety of internal customers, including functional stakeholders, data scientists and data analysts. Developing and maintaining solutions to ingest raw data from enterprise systems and third parties into the Kellogg data lake and designing and building data warehouse models in Redshift. HERE’S A TASTE OF WHAT YOU’LL BE DOING Engineering data transformations to prepare data for analytics according to business requirements. Analysis of incoming data and trends. Development of monitoring systems to track anomalies and data quality issues. Driving innovation and continuous improvement of the global data engineering practice, including evaluation of emerging technologies and approaches. Establishing and championing data engineering best practices within the engineering community. YOUR RECIPE FOR SUCCESS Scoping and designing technical solutions, working with a variety of internal customers, including functional stakeholders, data scientists and data analysts. Developing and maintaining solutions to ingest raw data from enterprise systems and third parties into the Kellogg data lake. Designing and building data warehouse models in Redshift. Engineering data transformations to prepare data for analytics according to business requirements. Analysis of incoming data and trends. Development of monitoring systems to track anomalies and data quality issues. Driving innovation and continuous improvement of the global data engineering practice, including evaluation of emerging technologies and approaches. Establishing and championing data engineering best practices within the engineering community. What’s Next After you apply, your application will be reviewed by a real recruiter – not a bot. This means it could take us a little while to get back with you so watch your inbox for updates. 
In the meantime, visit our How We Hire page to get insights into our hiring process and how to best prepare for a Kellogg interview. If we can help you with a reasonable accommodation throughout the application or hiring process, please email USA.Recruitment@kellogg.com. This role takes part in Locate for Your Day, Kellogg’s hybrid way of working that empowers office-based employees to, in partnership with their managers, find a balance between working from home and the office. About Kellogg Company Kellogg Company is a multibillion-dollar company with over 30 thousand employees all over the globe. We are proud to make delicious foods that people love – foods that you grew up with like Frosted Flakes, Cheez It, Eggo, Pop-Tarts, Crunchy Nut, Pringles, as well as innovative foods such as MorningStar Farms, RX bar, and Noodles. Our KValues and BetterDays commitments are at the core of who we are, what we believe and what brings us together. We’re proud to say we’ve been awarded Fortune’s “World’s Most Admired Companies”, DiversityInc’s “Top 50 Companies for Diversity”, Newsweek’s “Most Loved Workplaces”, and many more awards that you can check out here. Equity, Diversity, and Inclusion has been part of our DNA since the beginning. Clearly stated in our Code of Ethics: “we have respect for individuals of all backgrounds, capability and opinions.” We believe that equity is more than leveling the playing field. It is making sure barriers, both tangible and intangible, are removed. Interested in the numbers? We hold ourselves accountable with our yearly Features report. Kellogg is proud to offer industry competitive Total Health benefits (Physical, Financial, Emotional, and Social) that vary depending on region and type of role. Be sure to ask your recruiter for more information! 
THE FINER PRINT The ability to work a full shift, come to work on time, work overtime as needed and the ability to work according to the necessary schedule to meet job requirements with or without reasonable accommodation is an essential function of this position. Kellogg Company is an Equal Opportunity Employer that strives to provide an inclusive work environment, a seat for everyone at the table, and embraces the diverse talent of its people. All qualified applicants will receive consideration for employment without regard to race, color, ethnicity, disability, religion, national origin, gender, gender identity, gender expression, marital status, sexual orientation, age, protected veteran status, or any other characteristic protected by law. For more information regarding our efforts to advance Equity, Diversity & Inclusion, please visit our website here. Where required by state law and/or city ordinance, this employer will provide the Social Security Administration (SSA) and, if necessary, the Department of Homeland Security (DHS), with information from each new employee’s Form I-9 to confirm work authorization. For additional information, please follow this link. 
Let’s create the future of food, Kellogg Recruitment "," Entry level "," Full-time "," Information Technology "," Manufacturing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-delta-air-lines-3515310983?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=4lZ5LsjYCTPOg9TptRTYSQ%3D%3D&position=4&pageNum=1&trk=public_jobs_jserp-result_search-card," Delta Air Lines ",https://www.linkedin.com/company/delta-air-lines?trk=public_jobs_topcard-org-name," Atlanta, GA "," 1 week ago "," Over 200 applicants ","United States, Georgia, Atlanta TechOps 21-Dec-2022 Ref #: 18674 LinkedIn Tag: LI-CM3 How you'll help us Keep Climbing (overview & key responsibilities) The Data Engineer will play a key role on the TDaaS team, responsible for transforming data from disparate systems to provide insights and analytics for business stakeholders. You’ll leverage cloud-based infrastructure to implement technology solutions that are scalable, resilient, and efficient. You will collaborate with Data Engineers, Data Analysts, Data Scientists, DBAs, cross-functional teams, and business leaders. You will architect, design, implement and operate data engineering solutions using Agile methodology, that empower users to make informed business decisions. The Data Engineer should be self-motivated, work independently, and have direct experience with all aspects of the software development lifecycle, from design to deployment. Have a deep understanding of the full data engineering lifecycle and the role that high-quality data plays across applications, machine learning, business analytics, and reporting. Strong candidates will have technical knowledge of BI systems design, big data architecture and technology landscape, API consumption, ETL/ELT orchestration, business intelligence tools. The candidate should have excellent organizational and communication skills and feel comfortable in a fast-paced environment. 
You should have the ability to lead and take ownership of assigned technical projects in a fast-paced environment. Excellent written and speaking communication skills are required as we work in a collaborative cross-functional environment and interact with the full spectrum of business divisions. You must demonstrate insatiable curiosity and outstanding interpersonal “soft” skills. Ideal candidates have more than just knowledge or skill set, as they also have a “can do” mindset to find solutions. This role may be located in Atlanta, GA or Minneapolis, MN What You Need To Succeed (minimum Qualifications) Development experience building and maintaining ETL pipelines Experience working with database technologies and data development such as Python, PLSQL, etc. Solid understanding of writing test cases to ensure data quality, reliability, and high level of confidence Track record of advancing new technologies to improve data quality and reliability Continuously improve quality, efficiency, and scalability of data pipelines Knowledge of working with queries/applications, including performance tuning, utilizing indexes, and materialized views to improve query performance Identify necessary business rules for extracting data along with functional or technical risks related to data sources (e.g. data latency, frequency, etc.) Develop initial queries for profiling data, validating analysis, testing assumptions, driving data quality assessment specifications, and define a path to deployment Familiar with best practices for data ingestion, data storage and data delivery design Consistently makes safety and security, of self and others, the priority. Familiarity or experience with Tableau, Power BI or other BI tools a plus. 
Working knowledge with Modern development patterns and platforms (Azure, AWS, GCP) Scrum experience, consulting, and facilitation skills Solid communications and interpersonal skills; ability to develop effective business relationships and build consensus Embraces a diverse set of people, thinking and styles High School diploma, GED or High School Equivalency Where permitted by applicable law, must have received or be willing to receive the COVID-19 vaccine by date of hire to be considered for U.S.-based job, if not currently employed by Delta Air Lines, Inc. Demonstrates that privacy is a priority when handling personal data. 
What Will Give You a Competitive Edge (preferred Qualifications) Bachelor of Science degree in Computer Science or equivalent Desired Airline industry experience At least 3+ years of post-degree professional experience AWS Cloud Practitioner Certification"," Entry level "," Full-time "," Information Technology "," Airlines and Aviation " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-financials-remote-at-kuali-inc-3494013341?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=BVKGlT9Hz7IfLpTkTh44PA%3D%3D&position=5&pageNum=1&trk=public_jobs_jserp-result_search-card," Delta Air Lines ",https://www.linkedin.com/company/delta-air-lines?trk=public_jobs_topcard-org-name," Atlanta, GA "," 1 week ago "," Over 200 applicants "," United States, Georgia, AtlantaTechOps 21-Dec-2022Ref #: 18674 LinkedIn Tag: LI-CM3How you'll help us Keep Climbing (overview & key responsibilities)The Data Engineer will play a key role on the TDaaS team, responsible for transforming data from disparate systems to provide insights and analytics for business stakeholders. You’ll leverage cloud-based infrastructure to implement technology solutions that are scalable, resilient, and efficient. You will collaborate with Data Engineers, Data Analysts, Data Scientists, DBAs, cross-functional teams, and business leaders. You will architect, design, implement and operate data engineering solutions using Agile methodology, that empower users to make informed business decisions.The Data Engineer should be self-motivated, work independently, and have direct experience with all aspects of the software development lifecycle, from design to deployment. Have a deep understanding of the full data engineering lifecycle and the role that high-quality data plays across applications, machine learning, business analytics, and reporting. 
Strong candidates will have technical knowledge of BI systems design, big data architecture and technology landscape, API consumption, ETL/ELT orchestration, business intelligence tools. The candidate should have excellent organizational and communication skills and feel comfortable in a fast-paced environment.You should have the ability to lead and take ownership of assigned technical projects in a fast-paced environment. Excellent written and speaking communication skills are required as we work in a collaborative cross-functional environment and interact with the full spectrum of business divisions. You must demonstrate insatiable curiosity and outstanding interpersonal “soft” skills. Ideal candidates have more than just knowledge or skill set, as they also have a “can do” mindset to find solutions.This role may be located in Atlanta, GA or Minneapolis, MNWhat You Need To Succeed (minimum Qualifications)Development experience building and maintaining ETL pipelinesExperience working with database technologies and data development such as Python, PLSQL, etc.Solid understanding of writing test cases to ensure data quality, reliability, and high level of confidenceTrack record of advancing new technologies to improve data quality and reliabilityContinuously improve quality, efficiency, and scalability of data pipelinesKnowledge of working with queries/applications, including performance tuning, utilizing indexes, and materialized views to improve query performanceIdentify necessary business rules for extracting data along with functional or technical risks related to data sources (e.g. data latency, frequency, etc.)Develop initial queries for profiling data, validating analysis, testing assumptions, driving data quality assessment specifications, and define a path to deploymentFamiliar with best practices for data ingestion, data storage and data delivery designConsistently makes safety and security, of self and others, the priority. 
Familiarity or experience with Tableau, Power BI or other BI tools a plus. Working knowledge of modern development patterns and platforms (Azure, AWS, GCP). Scrum experience, consulting, and facilitation skills. Solid communication and interpersonal skills; ability to develop effective business relationships and build consensus. Embraces a diverse set of people, thinking and styles. High School diploma, GED or High School Equivalency. Where permitted by applicable law, must have received or be willing to receive the COVID-19 vaccine by date of hire to be considered for U.S.-based job, if not currently employed by Delta Air Lines, Inc. Demonstrates that privacy is a priority when handling personal data. What Will Give You a Competitive Edge (preferred Qualifications): Bachelor of Science degree in Computer Science or equivalent. Airline industry experience desired. 3+ years of post-degree professional experience. AWS Cloud Practitioner Certification "," Entry level "," Full-time "," Information Technology "," Airlines and Aviation " Data Engineer,United States,"Data Engineer, Intern",https://www.linkedin.com/jobs/view/data-engineer-intern-at-chartmetric-3516727550?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=WUE6PkfTa8pUM9gR%2F7%2FZUA%3D%3D&position=7&pageNum=1&trk=public_jobs_jserp-result_search-card," Chartmetric ",https://www.linkedin.com/company/chartmetric?trk=public_jobs_topcard-org-name," San Mateo, CA "," 1 week ago "," Over 200 applicants ","Chartmetric, a profitable 7-year-old startup, with a focus on music and data is 
looking for a Data Engineer, Intern to join our fast-growing team. We have the most advanced market intelligence tool developed for the music industry. We are trusted by Universal Music Group, Sony, Warner Music Group, as well as hundreds of other music companies, industry professionals, and indie artists. We have created a web application for the music industry to better understand the activity happening around artists. We combine hundreds of thousands of data points across Apple Music, Spotify, YouTube, TikTok, Facebook, Twitter, and Instagram through our beautifully designed tool in order to make sense of the increasingly complex landscape of the music industry. What You’ll Do: Work within the Data team to improve the ETL processes for 8 million artists, 88 million tracks, 15 million playlists, and more. Develop data ingestion pipelines on Airflow to integrate new streaming and social media services into our analytics platform. Manage our data across AWS RDS, Elasticsearch, and Snowflake. Write Python code along with PostgreSQL and Snowflake queries and deploy them to our production tools. Work with our team of Data Scientists to help our customers make better decisions. Who you are: Experience with Python and SQL. Excellent communication and teamwork skills. Ongoing degree in Computer Science or relevant field. Ability to thrive in a fast-paced environment. What we offer: Opportunity to learn at a profitable and growing company (and possibly join in the future). Opportunity to work on production code impacting real artists. Opportunity to work on challenging problems. Opportunity to learn from a team of smart and talented co-workers and a company that values ideas and inputs from all levels. Frequent team-building events (our last few: go-karting, escape room, frisbee golf, paintball). This role is based on an hourly rate of $27-$33/hour depending on experience. 
Wellfound requires an annual salary but this role is for a 3-month full-time summer internship."," Internship "," Internship "," Engineering and Information Technology "," Software Development " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-double-line-inc-3514296079?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=FB9wgOlwOAuAQVMeDtzV9Q%3D%3D&position=8&pageNum=1&trk=public_jobs_jserp-result_search-card," Double Line, Inc. ",https://www.linkedin.com/company/double-line-inc-?trk=public_jobs_topcard-org-name," Austin, TX "," 1 month ago "," 33 applicants ","(This is a remote position open to candidates residing in North Carolina and Texas. We have an office location in Austin, TX for use at our employees' convenience. We have no plans to return to the office on a mandatory basis.) Feeling underappreciated? Underutilized? Want to be a part of a specialized team with exposure to a wide variety of data puzzles to solve, while using your skills to improve education? Come join a team where you can Fly the Airplane, not just be a passenger in the back. We're a growing company focused on expanding our Operations team with a solutions-focused Data Engineer. Sound interesting? 
If so, we're looking for a motivated and driven person like you who has: Strength in thinking creatively and collaborating with other data experts in figuring out solutions to really tough data loads or transformation problems Experience leveraging SQL and/or ETL development, data mapping, and data modeling to manage and organize client data A passion for continuous improvement in refining the approach and doing it better and faster the next time Bonus points if you're bringing knowledge of or really want to learn the following: Consultancy experience with a focus on Agile practices AWS and Azure Cloud Python or similar scripting languages AWS Quicksight, Tableau, Power BI, or other visualization tools In return, we offer: A mission-driven company with a long-term focus on helping the world by untangling the technical knots that plague state and local governments, particularly in education, healthcare, and similar fields A home where your voice matters and you can affect real change An employer who cares about you, makes sure you're engaged with exciting work, and offers robust benefits, 401k with employer match, and a great culture We do not want you to make the leap without knowing what we need, so here is how we define success for this position: Soak up knowledge from the existing team of experts in the first 30 days Bring fresh eyes to our processes and techniques and bring new ideas to the table in the first 2 months Mentor a new data engineering hire in your first 90 days We need to know - can you make this happen? If so, we definitely need to talk to you. We value diversity at Double Line. We hire, recruit, and promote without regard to race, color, religion, sex, sexual orientation, gender, national origin, pregnancy or maternity, veteran status or any other status protected by applicable law. We understand the importance of creating a safe and comfortable work environment and encourage individualism and authenticity in every member of our team. 
Double Line does not sponsor applicants for work visas at this time. Double Line does not currently offer relocation assistance. Powered by JazzHR iS4dbmGU5w"," Mid-Senior level "," Full-time "," Information Technology "," Internet Publishing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-blue-onion-3504020464?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=J0uqkWiD5HUh8hnr0lIuHA%3D%3D&position=11&pageNum=1&trk=public_jobs_jserp-result_search-card," Blue Onion ",https://www.linkedin.com/company/blue-onion-labs?trk=public_jobs_topcard-org-name," New York, NY "," 1 month ago "," 57 applicants "," How We’re Different: We’re a lean team, so all of your work will have a direct and measurable impact on the business. You’ll have the opportunity to interact with some of our amazing beta customers who are constantly providing feedback and helping us make the product better. You’ll have the opportunity to craft elegant, efficient, and (sometimes!) scrappy solutions to hard technical problems using the latest and greatest tools and technologies. Technology: Our application runs on a tech stack that is a mixture of Python and Ruby on Rails + React. For our backend data processing, we use Apache Airflow. 
For our web application, we use Ruby on Rails, with a Typescript/React single-page-app frontend, powered by a GraphQL API. What Your Day Would Look Like: Build scalable and fault-tolerant data pipelines in Google Cloud Platform using Apache Airflow. Inspect, analyze, and transform data using SQL-based tools like BigQuery and dbt. Design, implement, and test features on our web application. Build integrations with external services. What We Look For: You’re the best at what you do, take ownership of your work, and are constantly looking to learn. 3+ years of experience in Python and SQL. You have experience with Apache Airflow and familiarity with AWS or GCP "," Entry level "," Full-time "," Information Technology "," Software Development " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-stealth-startup-3475297298?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=mPUdmGeiYp6ugv6RIJ1tuw%3D%3D&position=15&pageNum=1&trk=public_jobs_jserp-result_search-card," Stealth Startup ",https://www.linkedin.com/company/stealth-startup-51?trk=public_jobs_topcard-org-name," San Francisco Bay Area "," 1 week ago "," Over 200 applicants "," We are a young healthcare startup that believes the healthcare system should be accessible, transparent, and easy to navigate. As a digital-first, data-driven health plan, we are replacing legacy systems with modern infrastructure to deliver our members the care they need when they need it. If you want to build the future of healthcare, we'd love for you to join us. Requirements: Bachelor's degree or higher in Computer Science. Minimum of 5 years exp. 
working as a software engineer. Experience using Python. Experience with Airflow, Spark, or Redshift. Familiarity with data structures, CI/CD, software development lifecycles, backend frameworks, and other technical tools. Demonstrated ability to learn and work independently and make decisions with minimal supervision. Ability to work as a team player and deliver results in a remote, cross-functional, and cross-cultural working environment. You are a passionate developer, an independent worker, and a self-starter. A desire to be part of the journey to change the face of healthcare "," Not Applicable "," Full-time "," Information Technology and Engineering "," Software Development and Hospitals and Health Care " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-rsdc-3502530583?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=GJdK01%2FFQVzFpoXczDlzLg%3D%3D&position=17&pageNum=1&trk=public_jobs_jserp-result_search-card," RSDC ",https://www.linkedin.com/company/rsdcgroup-llc?trk=public_jobs_topcard-org-name," Washington, DC "," 1 month ago "," Be among the first 25 applicants ","RSDC, LLC is a Veteran Owned Small Business (VOSB) with a presence in the Washington, DC Metropolitan area headquartered in Arlington, Virginia with offices nationwide. We deliver results for our customers through accurate requirements capture and our Strategy to Operations solutions approach, which results in positive impact and enduring change that drives value in our clients' organizations. Alignment. It is more than a word to us – it is at the heart of what we do. We are seeking a passionate and driven Data Engineer to support a rapidly growing Data Analytics and Business Intelligence platform focused on providing solutions that empower our Federal customers with the tools and capabilities needed to turn data into actionable insights. 
The ideal candidate is a critical thinker and perpetual learner, excited to gain exposure and build skillsets across a range of technologies while solving some of our clients’ toughest challenges. As a Data Engineer, you will be integral to data operations for the development and integration of multiple data types across a range of data sets and sources. Requirements: You will be responsible for the day-to-day operations of systems that depend on data, ensuring data is properly processed and securely transferred to its appropriate location in a timely manner. Processing data will include managing, manipulating, storing, and parsing data in a data pipeline for a variety of target sources. You will also support maintenance of applications and tools that reside on these systems, such as upgrades, patches, configuration changes, etc. The work is performed in a multidisciplinary team environment using agile methodologies. The candidate we seek must be highly motivated and enthusiastic about implementing new technologies and learning about new data in a small team environment where deadlines are important. 
Responsibilities: Complete development efforts across the data pipeline to store, manage, and provision data to data consumers. Be an active and collaborative member of an Agile/Scrum team, following all Agile/Scrum best practices. Write code to ensure the performance and reliability of data extraction and processing. Support continuous process automation for data ingest. Achieve technical excellence by advocating for and adhering to lean-agile engineering principles and practices such as API-first design, simple design, continuous integration, version control, and automated testing. Work with program management and engineers to implement and document complex and evolving requirements. Help cultivate an environment that promotes customer service excellence, innovation, collaboration, and teamwork. Collaborate with others as part of a cross-functional team that includes user experience researchers and designers, product managers, engineers, and other functional specialists. Required Skills: Must be a US Citizen. Must be able to obtain a Public Trust Clearance. 7+ years of IT experience, including experience in design, management, and solutioning of large, complex data sets and models. 
Experience developing data pipelines from many sources of structured and unstructured data sets in a variety of formats Proficiency developing ETL processes and performing test and validation steps Proficiency manipulating data (Python, R, SQL, SAS) Strong knowledge of big data analysis and storage tools and technologies Strong understanding of agile principles and the ability to apply them Strong understanding of CI/CD pipelines and the ability to apply them Experience with relational databases such as PostgreSQL Work comfortably in version control systems such as Git Desired Skills: Experience creating and consuming APIs Experience with DHS and knowledge of DHS standards a plus Candidates will be given special consideration for extensive experience with Python Ability to develop visualizations utilizing Tableau or PowerBI Experience developing shell scripts on Linux Demonstrated experience translating business and technical requirements into comprehensive data strategies and analytic solutions Demonstrated ability to communicate across all levels of the organization and to explain technical terms to non-technical audiences Benefits Health Care Plans, various options (Medical, Dental & Vision) Paid Holidays Paid Time Off/Vacation Retirement Plans / 401K Matching Tuition Assistance Employer Paid Short Term & Long Term Disability Employer Paid Life Insurance"," Not Applicable "," Full-time "," Analyst "," Technology, Information and Internet " Data Engineer,United States,Backend Data Engineer,https://www.linkedin.com/jobs/view/backend-data-engineer-at-sense-3518702215?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=6v9mQW5W58lOg63pWBuLog%3D%3D&position=19&pageNum=1&trk=public_jobs_jserp-result_search-card," Sense ",https://www.linkedin.com/company/sense?trk=public_jobs_topcard-org-name," Cambridge, MA "," 1 week ago "," Over 200 applicants ","About Sense Sense is a fast-growing greentech scale-up based in Cambridge, MA. We build smart home monitoring systems to help people take command of their energy usage, saving money while combating climate change. Our mission is to reduce global carbon emissions by making homes smart and efficient, and we’re looking to make an impact at scale: Sense’s technology has the potential to remove one gigaton of carbon from the atmosphere every year. We’re looking for talented self-starters who want to be part of the energy transformation and are ready, willing, and able to tackle tough challenges and complex technical problems. When you join the Sense team, you’re helping us build a cleaner, more resilient future. 
What you'll do: As a Backend Data Engineer for the Data Science team, you will be a key contributor, maintaining and improving our Data Ingestion and Machine Learning pipelines as they grow 10x and 100x in volume. You should be: Excited about high-volume data and solving the challenges it poses. Excited to dive deep to understand and optimize the storage and compute characteristics of data pipelines Interested in building (or learning how to build) high-availability, performant, and scalable systems. Tenacious when tracking down production issues, digging into metrics and logs as needed to run a problem to ground. Curious about seeing ML solutions applied to a novel domain Requirements Who you are: 2+ years of professional data engineering experience Experience with Linux, Python, relational databases, and AWS Great numerical and analytical skills Degree in Computer Science, Data Science, or a similar field – a Master's is a plus Must be authorized to work in the U.S. Benefits Be a part of building something that will make a difference in the world. Great opportunity to gain experience at a consumer smart home startup. 
Competitive compensation including equity Remote-friendly Remote or local/hybrid in our Cambridge Central Square office Home office setup allowance ($200/year) Great work-life balance Flexible work hours Vacation starting at 3 weeks/year + 1 week paid sick time Paid parental leave (5 weeks or more depending on location) Dependent Care Accounts Generous healthcare benefits for employees and dependents Medical (90% of the premium and first 50% of the deductible) Dental (90%) Vision (100%) Flexible Spending Accounts Life, AD&D, long- and short-term disability insurance (100%) 401k plan with company match Free Sense energy monitor for your home, discounts for friends and family"," Not Applicable "," Full-time "," Engineering "," Technology, Information and Internet " Data Engineer,United States,Neural Data Engineer,https://www.linkedin.com/jobs/view/neural-data-engineer-at-neurable-3501167609?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=JU06CCDGEbkgoxxMQENutQ%3D%3D&position=21&pageNum=1&trk=public_jobs_jserp-result_search-card," Neurable ",https://www.linkedin.com/company/neurable?trk=public_jobs_topcard-org-name," Boston, MA "," 1 month ago "," 27 applicants ","Neurable is seeking an experienced Neural Data Engineer to join our team and help build wearable devices powered by neurotechnology. Your goal will be to bring cutting-edge data management and AI/ML solutions to bear in the field of neurotechnology. This position will be responsible for developing new tools to improve our neuroinformatics database and data archival pipeline. You will also work closely with our data science and data collection teams in order to deploy AI/ML models, clean and refine neural data streams, and develop processes for monitoring and visualizing data. 
If you enjoy creative problem solving in a fast-paced startup environment, can find yourself getting lost tinkering, are curious by nature, and consider yourself a technologist or someone who wants to effect change, you will fit in well with our team. This is an opportunity to join a high-impact company, a world-class team, and pioneer new technology that will change the way people interact with computers. We want you to have full creative latitude and know that this is your company, not just a job. What You Will Do: Maintain and enhance pipelines for filtering, denoising, featurizing, and modeling EEG data Develop and maintain state-of-the-art methods for data archival and management Work closely with our experimental team to optimize the quality of data and data labels that are being collected Develop dashboards for continuous monitoring of data quality Develop new methods for denoising and preprocessing EEG data in conjunction with other modalities, including accelerometer data Support server configuration (web and application) and deployment Lead data engineering efforts, including database and API design, data extraction/transformation/load, and data aggregation/integration Containerize and deploy software and workflows on local high-performance computing platforms and cloud computing infrastructure (AWS) Communicate with internal teams and external stakeholders Define experiments, provide scientific guidance, and help the engineering team throughout R&D and Product life cycles Manage and contribute to grant applications, studies, and projects The Ideal Candidate Will Possess: PhD or Master’s in Computer Science, Engineering, Cognitive or Computational Neuroscience, Physical Sciences, or Applied Mathematics/Statistics Excellent programming skills in Python, Bash, or MATLAB Experienced with version control, CI/CD, unit testing, and issue/release management Knowledgeable with API deployment and containerization The ability to get new applications up and 
running, and to overcome hurdles as they arise, is particularly helpful Experience in brain-computer interfaces or neuroengineering is preferred but not necessary Enthusiastic for building BCI technology and hungry to learn Basic experience with signal processing, machine learning, and deep learning frameworks (e.g., PyTorch, Tensorflow) is a plus Prior experience and continued interest in mentorship and technical development of junior data engineers Ability to solve problems that have not been solved, in other words, a willingness to explore the unknown and ability to make progress Ability to embrace uncertainty when working on challenging research questions Knowledge of perceptual/behavioral evaluations and physiological measurement is desired Hardware and software experience with multimodal data acquisition, instrumentation, and human-machine interfaces is a plus Compensation and Benefits: Competitive salary and equity High quality health insurance (100% company paid) 401(k) with employer matching contributions Generous PTO Pet friendly office, fun team activities, and homemade waffles every Wednesday! We are not able to provide a visa or sponsorship for this position. All candidates must be authorized to work in the USA. Powered by JazzHR sSwN5hThCx"," Mid-Senior level "," Full-time "," Information Technology "," Internet Publishing " Data Engineer,United States,Neural Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-hyatt-hotels-corporation-3500299344?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=Oxcwqirt1A4o8fH5RoeW6Q%3D%3D&position=22&pageNum=1&trk=public_jobs_jserp-result_search-card," Neurable ",https://www.linkedin.com/company/neurable?trk=public_jobs_topcard-org-name," Boston, MA "," 1 month ago "," 27 applicants "," Neurable is seeking an experienced Neural Data Engineer to join our team and help build wearable devices powered by neurotechnology. 
Your goal will be to bring cutting-edge data management and AI/ML solutions to bear in the field of neurotechnology.This position will be responsible for developing new tools to improve our neuroinformatics database and data archival pipeline. You will also work closely with our data science and data collection teams in order to deploy AI/ML models, clean and refine neural data streams, and develop processes for monitoring and visualizing data.If you enjoy creative problem solving in a fast-paced startup environment. If you can find yourself getting lost tinkering. If you are curious by nature, consider yourself a technologist or someone who wants to affect change, you will fit in well with our team.This is an opportunity to join a high-impact company, a world class team, and pioneer new technology that will change the way people interact with computers. We want you to have full creative latitude and know that this is your company, not just a job.What You Will Do:Maintain and enhance pipelines for filtering, denoising, featurizing, and modeling EEG dataDevelop and maintain state-of-the-art methods for data archival and managementWork closely with our experimental team to optimize the quality of data and data labels that are being collectedDevelop dashboards for continuous monitoring of data qualityDevelop new methods for denoising and preprocessing EEG data in conjunction with other modalities including accelerometer dataSupport server configuration (web and application) and deploymentLead data engineering efforts, including database and API design, data extraction/transformation/load, and data aggregation/integrationContainerize and deploy software and workflows on local high performance computing platforms and cloud computing infrastructure (AWS)Communicate with internal teams and external stakeholdersDefine experiments, provide scientific guidance, and help the engineering team throughout R&D and Product life cyclesManage and contribute to grant applications, 
studies, and projectsThe Ideal Candidate Will Possess:PhD or Master’s in Computer Science, Engineering, Cognitive or Computational Neuroscience, Physical Sciences, or Applied Mathematics/StatisticsExcellent programming skills in Python, Bash, or MATLABExperienced with version control, CI/CD, unit testing, and issue/release managementKnowledgeable with API deployment and containerizationThe ability to get new applications up and running, and to overcome hurdles as they arise, is particularly helpfulExperience in brain-computer interfaces or neuroengineering is preferred but not necessaryEnthusiastic for building BCI technology and hungry to learnBasic experience with signal processing, machine learning, and deep learning frameworks (e.g., PyTorch, Tensorflow) is a plusPrior experience and continued interest in mentorship and technical development of junior data engineersAbility to solve problems that have not been solved, in other words, a willingness to explore the unknown and ability to make progressAbility to embrace uncertainty when working on challenging research questionsKnowledge of perceptual/behavioral evaluations and physiological measurement is desiredHardware and software experience with multimodal data acquisition, instrumentation, and human-machine interfaces is a plusCompensation and Benefits:Competitive salary and equityHigh quality health insurance (100% company paid)401(k) with employer matching contributionsGenerous PTOPet friendly office, fun team activities, and homemade waffles every Wednesday! We are not able to provide a visa or sponsorship for this position. 
All candidates must be authorized to work in the USA.Powered by JazzHRsSwN5hThCx "," Mid-Senior level "," Full-time "," Information Technology "," Internet Publishing " Data Engineer,United States,Neural Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-hcltech-3497985338?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=0O7nsmxNZPXYJPOmfl0nGw%3D%3D&position=23&pageNum=1&trk=public_jobs_jserp-result_search-card," Neurable ",https://www.linkedin.com/company/neurable?trk=public_jobs_topcard-org-name," Boston, MA "," 1 month ago "," 27 applicants "," Neurable is seeking an experienced Neural Data Engineer to join our team and help build wearable devices powered by neurotechnology. Your goal will be to bring cutting-edge data management and AI/ML solutions to bear in the field of neurotechnology.This position will be responsible for developing new tools to improve our neuroinformatics database and data archival pipeline. You will also work closely with our data science and data collection teams in order to deploy AI/ML models, clean and refine neural data streams, and develop processes for monitoring and visualizing data.If you enjoy creative problem solving in a fast-paced startup environment. If you can find yourself getting lost tinkering. If you are curious by nature, consider yourself a technologist or someone who wants to affect change, you will fit in well with our team.This is an opportunity to join a high-impact company, a world class team, and pioneer new technology that will change the way people interact with computers. 
We want you to have full creative latitude and know that this is your company, not just a job.What You Will Do:Maintain and enhance pipelines for filtering, denoising, featurizing, and modeling EEG dataDevelop and maintain state-of-the-art methods for data archival and managementWork closely with our experimental team to optimize the quality of data and data labels that are being collectedDevelop dashboards for continuous monitoring of data qualityDevelop new methods for denoising and preprocessing EEG data in conjunction with other modalities including accelerometer dataSupport server configuration (web and application) and deploymentLead data engineering efforts, including database and API design, data extraction/transformation/load, and data aggregation/integrationContainerize and deploy software and workflows on local high performance computing platforms and cloud computing infrastructure (AWS)Communicate with internal teams and external stakeholdersDefine experiments, provide scientific guidance, and help the engineering team throughout R&D and Product life cyclesManage and contribute to grant applications, studies, and projectsThe Ideal Candidate Will Possess:PhD or Master’s in Computer Science, Engineering, Cognitive or Computational Neuroscience, Physical Sciences, or Applied Mathematics/StatisticsExcellent programming skills in Python, Bash, or MATLABExperienced with version control, CI/CD, unit testing, and issue/release managementKnowledgeable with API deployment and containerizationThe ability to get new applications up and running, and to overcome hurdles as they arise, is particularly helpfulExperience in brain-computer interfaces or neuroengineering is preferred but not necessaryEnthusiastic for building BCI technology and hungry to learnBasic experience with signal processing, machine learning, and deep learning frameworks (e.g., PyTorch, Tensorflow) is a plusPrior experience and continued interest in mentorship and technical development of junior 
data engineersAbility to solve problems that have not been solved, in other words, a willingness to explore the unknown and ability to make progressAbility to embrace uncertainty when working on challenging research questionsKnowledge of perceptual/behavioral evaluations and physiological measurement is desiredHardware and software experience with multimodal data acquisition, instrumentation, and human-machine interfaces is a plusCompensation and Benefits:Competitive salary and equityHigh quality health insurance (100% company paid)401(k) with employer matching contributionsGenerous PTOPet friendly office, fun team activities, and homemade waffles every Wednesday! We are not able to provide a visa or sponsorship for this position. All candidates must be authorized to work in the USA.Powered by JazzHRsSwN5hThCx "," Mid-Senior level "," Full-time "," Information Technology "," Internet Publishing " Data Engineer,United States,Neural Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-life-science-people-3513216706?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=rV0ql2svAI4MVKzRqRKmlA%3D%3D&position=24&pageNum=1&trk=public_jobs_jserp-result_search-card," Neurable ",https://www.linkedin.com/company/neurable?trk=public_jobs_topcard-org-name," Boston, MA "," 1 month ago "," 27 applicants "," Neurable is seeking an experienced Neural Data Engineer to join our team and help build wearable devices powered by neurotechnology. Your goal will be to bring cutting-edge data management and AI/ML solutions to bear in the field of neurotechnology.This position will be responsible for developing new tools to improve our neuroinformatics database and data archival pipeline. You will also work closely with our data science and data collection teams in order to deploy AI/ML models, clean and refine neural data streams, and develop processes for monitoring and visualizing data.If you enjoy creative problem solving in a fast-paced startup environment. 
If you can find yourself getting lost tinkering. If you are curious by nature, consider yourself a technologist or someone who wants to affect change, you will fit in well with our team.This is an opportunity to join a high-impact company, a world class team, and pioneer new technology that will change the way people interact with computers. We want you to have full creative latitude and know that this is your company, not just a job.What You Will Do:Maintain and enhance pipelines for filtering, denoising, featurizing, and modeling EEG dataDevelop and maintain state-of-the-art methods for data archival and managementWork closely with our experimental team to optimize the quality of data and data labels that are being collectedDevelop dashboards for continuous monitoring of data qualityDevelop new methods for denoising and preprocessing EEG data in conjunction with other modalities including accelerometer dataSupport server configuration (web and application) and deploymentLead data engineering efforts, including database and API design, data extraction/transformation/load, and data aggregation/integrationContainerize and deploy software and workflows on local high performance computing platforms and cloud computing infrastructure (AWS)Communicate with internal teams and external stakeholdersDefine experiments, provide scientific guidance, and help the engineering team throughout R&D and Product life cyclesManage and contribute to grant applications, studies, and projectsThe Ideal Candidate Will Possess:PhD or Master’s in Computer Science, Engineering, Cognitive or Computational Neuroscience, Physical Sciences, or Applied Mathematics/StatisticsExcellent programming skills in Python, Bash, or MATLABExperienced with version control, CI/CD, unit testing, and issue/release managementKnowledgeable with API deployment and containerizationThe ability to get new applications up and running, and to overcome hurdles as they arise, is particularly helpfulExperience in 
brain-computer interfaces or neuroengineering is preferred but not necessaryEnthusiastic for building BCI technology and hungry to learnBasic experience with signal processing, machine learning, and deep learning frameworks (e.g., PyTorch, Tensorflow) is a plusPrior experience and continued interest in mentorship and technical development of junior data engineersAbility to solve problems that have not been solved, in other words, a willingness to explore the unknown and ability to make progressAbility to embrace uncertainty when working on challenging research questionsKnowledge of perceptual/behavioral evaluations and physiological measurement is desiredHardware and software experience with multimodal data acquisition, instrumentation, and human-machine interfaces is a plusCompensation and Benefits:Competitive salary and equityHigh quality health insurance (100% company paid)401(k) with employer matching contributionsGenerous PTOPet friendly office, fun team activities, and homemade waffles every Wednesday! We are not able to provide a visa or sponsorship for this position. All candidates must be authorized to work in the USA.Powered by JazzHRsSwN5hThCx "," Mid-Senior level "," Full-time "," Information Technology "," Internet Publishing " Data Engineer,United States,Neural Data Engineer,https://www.linkedin.com/jobs/view/senior-data-engineer-at-preveta-3520479147?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=vb%2FpaTtj7l%2Fu%2Fur82qK2MA%3D%3D&position=25&pageNum=1&trk=public_jobs_jserp-result_search-card," Neurable ",https://www.linkedin.com/company/neurable?trk=public_jobs_topcard-org-name," Boston, MA "," 1 month ago "," 27 applicants "," Neurable is seeking an experienced Neural Data Engineer to join our team and help build wearable devices powered by neurotechnology. 
Your goal will be to bring cutting-edge data management and AI/ML solutions to bear in the field of neurotechnology.This position will be responsible for developing new tools to improve our neuroinformatics database and data archival pipeline. You will also work closely with our data science and data collection teams in order to deploy AI/ML models, clean and refine neural data streams, and develop processes for monitoring and visualizing data.If you enjoy creative problem solving in a fast-paced startup environment. If you can find yourself getting lost tinkering. If you are curious by nature, consider yourself a technologist or someone who wants to affect change, you will fit in well with our team.This is an opportunity to join a high-impact company, a world class team, and pioneer new technology that will change the way people interact with computers. We want you to have full creative latitude and know that this is your company, not just a job.What You Will Do:Maintain and enhance pipelines for filtering, denoising, featurizing, and modeling EEG dataDevelop and maintain state-of-the-art methods for data archival and managementWork closely with our experimental team to optimize the quality of data and data labels that are being collectedDevelop dashboards for continuous monitoring of data qualityDevelop new methods for denoising and preprocessing EEG data in conjunction with other modalities including accelerometer dataSupport server configuration (web and application) and deploymentLead data engineering efforts, including database and API design, data extraction/transformation/load, and data aggregation/integrationContainerize and deploy software and workflows on local high performance computing platforms and cloud computing infrastructure (AWS)Communicate with internal teams and external stakeholdersDefine experiments, provide scientific guidance, and help the engineering team throughout R&D and Product life cyclesManage and contribute to grant applications, 
studies, and projects. The Ideal Candidate Will Possess: PhD or Master’s in Computer Science, Engineering, Cognitive or Computational Neuroscience, Physical Sciences, or Applied Mathematics/Statistics. Excellent programming skills in Python, Bash, or MATLAB. Experienced with version control, CI/CD, unit testing, and issue/release management. Knowledgeable with API deployment and containerization. The ability to get new applications up and running, and to overcome hurdles as they arise, is particularly helpful. Experience in brain-computer interfaces or neuroengineering is preferred but not necessary. Enthusiastic for building BCI technology and hungry to learn. Basic experience with signal processing, machine learning, and deep learning frameworks (e.g., PyTorch, TensorFlow) is a plus. Prior experience and continued interest in mentorship and technical development of junior data engineers. Ability to solve problems that have not been solved; in other words, a willingness to explore the unknown and the ability to make progress. Ability to embrace uncertainty when working on challenging research questions. Knowledge of perceptual/behavioral evaluations and physiological measurement is desired. Hardware and software experience with multimodal data acquisition, instrumentation, and human-machine interfaces is a plus. Compensation and Benefits: Competitive salary and equity. High quality health insurance (100% company paid). 401(k) with employer matching contributions. Generous PTO. Pet friendly office, fun team activities, and homemade waffles every Wednesday! We are not able to provide a visa or sponsorship for this position. 
All candidates must be authorized to work in the USA.Powered by JazzHRsSwN5hThCx "," Mid-Senior level "," Full-time "," Information Technology "," Internet Publishing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-michael-kors-3511301146?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=OZdOVxIFGKCK2bhZMG7Ngw%3D%3D&position=3&pageNum=0&trk=public_jobs_jserp-result_search-card," Michael Kors ",https://www.linkedin.com/company/michael-kors?trk=public_jobs_topcard-org-name," New Jersey, United States "," 1 week ago "," Over 200 applicants ","Who You Are: You are a talented Data Engineer with 5+ years of SQL and Python experience, looking for a fantastic company where you will work with the latest technology and gain visibility working with top executives. You are an analytical thinker, problem solver, and innovator that is looking to work on the latest technologies with employees up and down the organization. You love taking on projects where you interact throughout a global organization and are not afraid to be challenged. You have retail experience and strong customer data experience with high volume transactions. You are looking for amazing perks including working remotely on CST or EST, excellent vacation time off and more. What you will do: Join the global data team in building data delivery services that support critical operational and analytical applications for our internal business operations, customers and partners. Prepare, clean, format analytical datasets for processing and analysis. Build and maintain custom ETL pipelines. Become an expert in our datasets, their strengths and weaknesses, and write code to pull and verify data. Conduct database feature engineering to support ongoing quantitative research. Work with developers to create and deploy systems for anomaly detection. Interface with data scientists, software developers, and other analytics operations staff as needed. 
Serve as a point-of-contact for questions about data structures, definitions, and quality. Work directly with Product and Systems Owners to deliver data products in a collaborative environment. Design department-wide principles and workflow for data quality management. Serve as liaison with our Data Analytics group You'll Need to Have: 5+ years of experience working in Data Engineering 5+ years of experience developing in SQL Experience using ETL tools such as Talend, Informatica, Data Services 5+ years of experience with data profiling and data pipeline development Experience with Python and Snowflake Experience with Azure Experience working with large data sets B.S. Computer Science/Engineering/Technology or Statistics/Mathematics or equivalent work experience We’d Love to See: Experience in the retail industry Experience working with customer data Experience with Azure Comfortable participating in tool selection processes regarding data tools and software Hands-on experience leading a team through the entire software development lifecycle of data management and business intelligence solutions Knowledge of industry leading data architecture and data management practices Excellent communication skills, including both oral and written Ability to multi-task, exercise excellent time management, and meet multiple deadlines Demonstrated excellence in project management and organization High level of critical thinking and analytical skills Ability to consistently deliver excellent customer service Excellent attention to detail and ability to document information accurately Capable of resolving escalated issues arising from operations and requiring coordination with other departments MK Perks: · Generous Paid Time Off & Holiday Schedule · Summer Fridays · Internal mobility across Capri Brands (Michael Kors, Jimmy Choo, Versace) · Cross-brand Discount · Exclusive Employee Sales · Fav 5 Cards (MK Discount for friends and family) · 401k Match · Paid Parental Leave · Thrive 
Wellness Program (seasonal in-office massages and more!) · Commuter Benefits · Gym Discounts In compliance with certain Pay Transparency laws, employers are required to disclose a salary range. The salary for this position will vary based on role requirements, skill set and years of experience."," Mid-Senior level "," Full-time "," Information Technology "," Retail Apparel and Fashion " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-fooda-3511793648?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=aZ9%2FP4BaHA62161OujucUQ%3D%3D&position=4&pageNum=0&trk=public_jobs_jserp-result_search-card," Fooda ",https://www.linkedin.com/company/fooda?trk=public_jobs_topcard-org-name," Chicago, IL "," 1 month ago "," 54 applicants ","Who We Are We believe a workplace food program is something employees should love and look forward to every day. Powered by technology and a network of over 1,400 restaurants, Fooda feeds hungry people at work through our ongoing food programs located within companies and office buildings. Every day each Fooda location is served by a different restaurant that comes onsite and serves fresh lunch from their chef’s unique menu. Now with over 50 million meals sold, Fooda operates in major cities across the U.S. Eight out of ten employees believe Fooda is one of their company’s top perks. Why Choose Fooda? Do you dream of complex problems that stretch your imagination and force you to grow as a problem solver? We are a close-knit product development team of engineers, PMs and data scientists, tackling technical problems around scaling, rapid user acquisition, machine learning and optimization challenges in every vertical, every single day. Like most businesses, we have been impacted by the COVID-19 pandemic. 
We’re using this opportunity to expand into sectors like healthcare and manufacturing, and focus our team on the most impactful projects so we can serve our diners safely and help our restaurant partners weather this time. As we expand our technology platform and explore new market segments, we are breaking ground and building functional and useful products for independent restaurants across the country. This allows Fooda to help mom & pop shops expand their own businesses and deliver great food to people looking for more local and exciting food options. About The Team Our Data Science & Analytics team is changing the way Fooda uses data. Do you want to get in on the ground floor of an analytics team at a high growth startup? The company has placed a huge strategic focus on building out our data science and analytics capabilities and you will be core to this growth. The team is responsible for all reporting & analytics for the company partnering closely with Product, Engineering, Sales, Marketing, Finance, and Operations to drive innovative analytic solutions. Will you join us? Position Opportunities And Responsibilities As a Data Engineer, you will work on the Data Science and Analytics team to drive and evolve the analytics solutions and data systems at Fooda. You will contribute to analytics decision making, analysis, and data integration to enable Fooda to become a world-class data driven organization. 
What You Will Be Doing Leverage Python and SQL to integrate and analyze data from internal and external systems that drive key business decisions throughout the organization Govern, analyze, and own data that drives stakeholders’ perception of the organization, which involves diving in, addressing questions, and performing root cause analysis of issues that arise within the data Develop ETL pipelines and data quality auditing systems using Airflow, Python, and SQL to create a cohesive, automated, and accurate data environment Assist in requirements gathering, design, and development of complex data systems that collect, analyze, and measure data throughout all business units Build automated data products and tools for internal and external stakeholders to drive engagement across Fooda Solve complex technical data problems to deliver insights and efficiencies to help the business achieve growth goals across all functions Ingest, translate, and decode large volumes of data to enable a data driven culture across all levels of Fooda’s organization Collaborate with other members of the Data Team to ensure data integrity and quality of deliverables Consistently look for additional methods and ways to improve the data transformations and data consumption processes to ensure all internal systems are working efficiently What You Should Already Have 2+ years of experience working in data engineering, business intelligence, or engineering Bachelor’s Degree (Preferably in a quantitative field such as: Information Systems, Computer Science, Statistics, or Mathematics) Experience analyzing and integrating data using Python and SQL to extract and transform data according to business rules and requirements Extensive knowledge and experience working with large-scale data warehouses, web APIs, and database platforms to integrate internal and external data sources Experience with programming/scripting languages and data science tools (Airflow, Python, Java, Spark) Experience diving into 
data quality and data profiling analysis to ensure data consistency and accuracy across enterprise reporting Experience with AWS tools and infrastructure to build and maintain a robust data warehouse Strong attention to detail and the ability to think critically and solve problems using analytical and quantitative methodologies Strong oral/written communication skills, specifically the ability to communicate and translate difficult analytical problems to stakeholders with minimal analytics background Ability to work effectively in a high paced environment with multiple priorities What We’ll Hook You Up With Competitive market salary and stock options, based on experience Unlimited vacation Comprehensive health, dental and vision plans 401k retirement plan with company match Paid maternity and parental leave benefits Flexible spending accounts A fulfilling, challenging adventure of a work experience Lots of free food! Must be authorized to work in the United States on a full-time basis. No phone calls or recruiters please."," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting and Food and Beverage Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-planoly-3475924786?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=0%2B1v50gESAZ28uRtUqGK%2BA%3D%3D&position=12&pageNum=0&trk=public_jobs_jserp-result_search-card," PLANOLY ",https://www.linkedin.com/company/planoly?trk=public_jobs_topcard-org-name," Austin, TX "," 1 month ago "," 49 applicants ","PLANOLY is the industry-leading social marketing platform trusted by over 5 million users to visually plan, schedule and measure performance across Instagram and Pinterest. PLANOLY is beautifully crafted to be simple, clean and easy to use. PLANOLY believes firmly in inclusivity and is thrilled to pave the way for brands, businesses and individuals of all backgrounds to carry out their digital marketing strategies seamlessly. 
PLANOLY is looking for a thoughtful, well-rounded Data Engineer to join a rapidly growing startup and work on building data management, data tools and data analytics services that help power business decisions. Our software platform that influencers, brands, agencies and marketing firms use daily gives us an incredibly rich and diverse dataset that we need to collect, transform, and analyze in order to improve effectiveness of our products as well as impact business decisions. You will have the opportunity to take a leading role in our engineering team, and help solve some challenging problems in the social media marketing space. Passion for social products and building great software is a must. Tools We Use Amazon Web Services Google Cloud Google BigQuery Google Data Studio dbt Python, SQL What You Will Do Data Modeling / Architecting via designing data models and implementing appropriate abstractions for immediate requirements. Understanding data lineage and dependencies, and develop and maintain existing ETL processes. Work with leadership, product, engineering, and marketing to productize answers to business questions. Communicate results using reproducible analysis methods and data visualizations. Monitor the quality of data and information, report on results, identify, and recommend system application changes required to improve the quality of data in all applications. Manipulate and analyze complex data from multiple sources and design, develop and generate ad hoc and operational reports in support of other teams and objectives. Design and build scalable micro services hosted in AWS using Python3.7 and/or NodeJS TypeScript. Design, implement and manage data warehouse plans for our group of products. Support existing data data services and processes running in Production. Collaborate closely and autonomously with a small team of engineers, designers and cross-functional users to understand data needs. 
Who You Are Bachelor’s degree in Computer Science or equivalent STEM field, or 3+ years of relevant work experience. Experience writing SQL queries and creating SQL based data models. Bonus: using dbt Experience communicating data analysis to business and peers, using data visualizations and reproducible analysis. 3+ years of professional experience building scalable software. Ability to work on green field projects with relatively minimal guidance. Ability to collaborate with other engineers, QA, and non technical people. Strong foundation in database systems (relational and non-relational). Experience with data warehousing solutions (Redshift, Big Query, etc). Proficiency with Python (2+ years). NodeJS experience is a plus. Strong understanding of serverless micro service architecture in the cloud (AWS, GCP). Using version control (e.g. Git). Bonus Skills and Experience Experience in Linux command line and writing shell scripts. Working knowledge of DevOps tools including Jenkins, Docker, etc. Experience with AWS technology: SES, SNS, SQS, EC2, Elasticache, KMS, S3, etc. NodeJS DBT Who We Are We are social media experts and first and foremost users of our tools to enhance our social media strategies. Planoly is built by influencers for influencers. We’re growing super fast and have been profitable since inception. We offer an open work environment where highly motivated engineers take full ownership of the products and help steer the firm. We are a huge advocate of work-life balance, which is seen in our open vacation and work-from-any-coffee-shops policies. We’ll provide you with lunch, snacks, drinks, regular team outings Learn more about PLANOLY at https://www.planoly.com and https://www.planoly.com/blog U.S. 
Equal Employment Opportunity/Affirmative Action Information Planoly is proud to be an equal opportunity employer and will consider all qualified individuals seeking employment without regards to race, color, creed, religion, gender, gender identity, national origin, citizenship, age, sex, marital status, ancestry, physical or mental disability, veteran status, sexual orientation, or any other protected classification. "," Entry level "," Full-time "," Information Technology "," Advertising Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-american-express-3507120914?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=DJb9%2FR8IE9BL9ZEP%2FQYBYw%3D%3D&position=16&pageNum=0&trk=public_jobs_jserp-result_search-card," American Express ",https://www.linkedin.com/company/american-express?trk=public_jobs_topcard-org-name," New York, NY "," 1 week ago "," Over 200 applicants ","At American Express, we know that with the right backing, people and businesses have the power to progress in incredible ways. Whether we’re supporting our customers’ financial confidence to move ahead, taking commerce to new heights, or encouraging people to explore the world, our colleagues are constantly redefining what’s possible — and we’re proud to back each other every step of the way. When you join Team Amex, you become part of a diverse community of over 60,000 colleagues, all with a common goal to deliver an exceptional customer experience every day. Here, you’ll learn and grow as we champion your meaningful career journey with programs, benefits, and flexibility to back you personally and professionally. Every colleague shares in the company’s success. Together, we’ll win as a team, striving to uphold our company values and powerful backing promise to our customers, communities, and each other every day. And, we’ll do it with integrity and in an environment where everyone is seen, heard and feels like they truly belong. 
Join #TeamAmex and let’s lead the way together. As part of our diverse tech team, you can architect, code and ship software that makes us an essential part of our customers’ digital lives. Here, you can work alongside talented engineers in an open, supportive, inclusive environment where your voice is valued, and you make your own decisions on what tech to use to solve challenging problems. Amex offers a range of opportunities to work with the latest technologies and encourages you to back the broader engineering community through open source. And because we understand the importance of keeping your skills fresh and relevant, we give you dedicated time to invest in your professional development. Find your place in technology on #TeamAmex. As a Data Engineer, you will be responsible for building core features and functions of card transaction systems on a distributed platform deployed in a hybrid cloud. Senior Engineer at American Express is an individual contributor role reporting to a Director of Engineering. Responsibilities You will be responsible for designing and building distributed data processing and analytical systems. Build high-level designs as well as detailed designs of subsystems/features with an emphasis on performant code. Build and code features, working with developers in day-to-day activities and helping with code and other SDLC tasks. Build POCs to validate new concepts and new technologies. You will constantly pursue and learn industry-leading, innovative technologies and solutions. Be acutely aware of enabling technologies and open-source products to build low latency distributed systems. Lead a culture of innovation and experimentation, engage in a fun and outcome-oriented culture, and always be ready to try new concepts without fear of failure. Collaborate with peer technology and development teams across different locations. Qualifications 1+ years of work experience in software design and implementation using Java or Scala. 
Experience in data processing using Spark. Knowledge of designing, implementing, and operating any of the NoSQL databases such as Cassandra, Elasticsearch. Preferred Qualifications Experience in distributed data processing and analysis using Cassandra, Elasticsearch, Spark. Experience in distributed messaging systems such as Kafka. Experience in building microservices and service mesh is a plus. Experience with platforms like Docker, Kubernetes, and OpenShift is a plus. Experience in continuous integration, continuous delivery, and DevOps systems. Experience in architecting large-scale distributed data systems considering scalability, reliability, security, performance, and flexibility. Clear understanding of various design patterns, threading and memory models supported by the language/VM. Able to mentor and provide technical guidance to other engineers. Have excellent written and verbal communication skills. Create and deliver effective presentations to Senior Leadership. Salary Range: $70,000.00 to $135,000.00 annually + bonus + benefits The above represents the expected salary range for this job requisition. Ultimately, in determining your pay, we'll consider your location, experience, and other job-related factors. American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. We back our colleagues with the support they need to thrive, professionally and personally. That's why we have Amex Flex, our enterprise working model that provides greater flexibility to colleagues while ensuring we preserve the important aspects of our unique in-person culture. Depending on role and business needs, colleagues will either work onsite, in a hybrid model (combination of in-office and virtual days) or fully virtually. 
US Job Seekers/Employees - Click here to view the “EEO is the Law” poster and supplement and the Pay Transparency Policy Statement. If the links do not work, please copy and paste the following URLs in a new browser window: https://www.dol.gov/agencies/ofccp/posters to access the three posters. GHC SHPE SWE SHPE Afrotech"," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-ascendion-3464950081?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=vBUV5ZyJiEN3N%2FT%2BcudjGw%3D%3D&position=22&pageNum=0&trk=public_jobs_jserp-result_search-card," Ascendion ",https://www.linkedin.com/company/ascendion?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," Over 200 applicants ","We are a fast-growing healthcare startup that is using Machine Learning to give employees a more digital and mobile-first experience. We have raised over $60 Million from some top VCs and are currently looking for experienced Data Engineers who are interested in making a big impact in the healthcare space. 
Requirements Strong skills in SQL (query plan optimization, windowing functions, aggregate design for example) An understanding of technologies and design patterns in fields such as: microservices, streaming / queuing systems, SQL and key-value stores, and high-performance solutions Experience with Snowflake, Postgres, or Redshift Strong experience with Python Familiarity with Airflow or similar orchestration framework Familiarity with CI/CD is a plus Communication skills with internal and external data platform customers Solid understanding of Kafka, Apache Spark, or other messaging queues, streaming technologies, and batch processing are a plus A Bachelor’s Degree in CS, Information Systems, a related field, or equivalent work experience What's In It For You · Competitive compensation and stock options · 100% health, vision & dental insurance for you and your dependents · Unlimited PTO · 401(k)"," Mid-Senior level "," Full-time "," Information Technology and Engineering "," IT Services and IT Consulting, Software Development, and Hospitals and Health Care " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-oshi-health-3488399018?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=TYzs0sWX%2BOM4iEd3VdEB9w%3D%3D&position=23&pageNum=0&trk=public_jobs_jserp-result_search-card," Ascendion ",https://www.linkedin.com/company/ascendion?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," Over 200 applicants "," We are a fast-growing healthcare startup that is using Machine Learning to give employees a more digital and mobile-first experience. 
We have raised over $60 Million from some top VCs and are currently looking for experienced Data Engineers who are interested in making a big impact in the healthcare space. Requirements: Strong skills in SQL (query plan optimization, windowing functions, aggregate design, for example). An understanding of technologies and design patterns in fields such as: microservices, streaming / queuing systems, SQL and key-value stores, and high-performance solutions. Experience with Snowflake, Postgres, or Redshift. Strong experience with Python. Familiarity with Airflow or a similar orchestration framework. Familiarity with CI/CD is a plus. Communication skills with internal and external data platform customers. Solid understanding of Kafka, Apache Spark, or other messaging queues, streaming technologies, and batch processing is a plus. A Bachelor’s Degree in CS, Information Systems, a related field, or equivalent work experience. What's In It For You: · Competitive compensation and stock options · 100% health, vision & dental insurance for you and your dependents · Unlimited PTO · 401(k) "," Mid-Senior level "," Full-time "," Information Technology and Engineering "," IT Services and IT Consulting, Software Development, and Hospitals and Health Care " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-oshi-health-3488393901?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=VBcLa5f99Q2qS1OnPs0AAQ%3D%3D&position=1&pageNum=1&trk=public_jobs_jserp-result_search-card," Oshi Health ",https://www.linkedin.com/company/oshihealth?trk=public_jobs_topcard-org-name," Jersey City, NJ "," 3 weeks ago "," Be among the first 25 applicants ","Do you love to work with data, finding ways to make it reportable, and building models that will add clinical and commercial value for the future? Do you want to bring your skills and experience to a growth stage engineering team, and help set us up for smart expansion? 
Are you excited by the prospect of having a high-visibility, high-impact role in a fast-moving startup? Are you passionate about healthcare, and looking to create a revolutionary new approach to digestive healthcare with a radically better patient experience? If so, you could be a perfect fit for our team of like-minded professionals who share a common mission and passion for helping others and a desire to build a great company. Oshi Health is revolutionizing GI care with a virtual clinic that provides easy, convenient access to a multidisciplinary care team including a GI Physician, Registered Dietician, Mental Health Professional, and Health Coach that takes a whole-person approach to diagnosing, managing and treating digestive health conditions. Our care is built on the latest evidence-based protocols and is delivered virtually through an app, secure messaging and telehealth visits with the care team. NOTE: Oshi is a fully remote company, with team members all over the US. What You’ll Do: This role will be a perfect fit if you enjoy learning the entire stack and taking on interdisciplinary challenges. A primary focus will be building out a data engineering program, including ETL, data governance, sanitization, and data operations. This part of your responsibilities will be a balance of executing data operations and building automation to make those operations scalable. You will also have the opportunity to join engineers on the front end (React Native and React.js), back end (Node.js Lambdas) and Salesforce to help build the Oshi platform. Experience with any of these technologies is a plus, but more important is an enthusiasm to adapt and learn the ones that are new to you. 
What you’ll do: build the Oshi data program. Implement and maintain data pipelines using Stitch, Databricks, and other scripting as needed to feed a PostgreSQL schema supporting Tableau reporting. Support users of Tableau by updating data sources or modifying inbound inputs as needed to deliver critical BI reports. Own the Oshi data model, ensuring that new features built and new technologies adopted serve the needs of the clinical, commercial, product, and engineering teams. Manage ETL of client eligibility files and other data, to make them available for Oshi use in a secure and timely manner. Wherever possible, replace bespoke processes with automation. Once you have an understanding of Oshi’s requirements, design and implement a data strategy (with your recommendation of approach and products) to meet the needs of Oshi’s analytics, commercial, and clinical business lines. Your work will also include: AWS maintenance and administration. Writing technical documentation to outline designs for forthcoming features, outlining the implementation across all technology layers. Meeting with colleagues in Strategy, Product, and Clinical to support their needs from the Engineering group. Production support responsibilities (shared with the entire engineering team): responding to alerts in Datadog, reviewing and troubleshooting issues. Our tech stack: Mobile Platforms Supported: iOS & Android Cross-Platform Mobile Language: React Native Other Languages: React.js, HTML, CSS, Java (Salesforce Apex), Node.js (Lambda) Systems: Salesforce, AWS Amplify / Cognito / Lambda Your Profile: 3+ years of professional experience Bachelor's Degree or equivalent experience Good interpersonal and relationship skills that include a positive attitude Self-starter who can find a way forward even when the path is unclear. Team player AND a leader simultaneously. 
What You’ll Bring to the Team: Passionate about creating value that changes people's lives Make low-level decisions quickly while being patient and methodical with high-level ones Are curious and passionate about digging into new technologies with a knack for picking them up quickly Adept at prioritizing value and shipping complex products while coordinating across multiple teams Love working with a diverse set of engineers, product managers, designers, and business partners Strive to excel, innovate and take pride in your work Work well with other leaders Are a positive culture driver Excited about working in a fast-paced, startup culture Experience in a regulated industry (healthcare, finance, etc.) a plus and perks: We’re revolutionizing GI care — and our employees are driving the change. We’re a hard-working and fun-loving team, committed to always learning and improving, and dedicated to doing the right thing for our members. To achieve our mission, we invest in our people: We make healthcare more equitable and accessible: Mission-driven organization focused on innovative digestive care Thrive on diversity with monthly DEIB discussions, activities, and more Virtual-first culture: Work from home anywhere in the US Live our core values: Own the outcome, Do the right thing, Be direct and open, Learn and improve, Team, Thrive on diversity We take care of our people: Competitive compensation and meaningful equity Employer-sponsored medical, dental and vision plans Access to a “Life Concierge” through Overalls, because we know life happens Tailored professional development opportunities to learn and grow We rest, recharge and re-energize: Unlimited paid time off — take what you need, when you need it 13 paid company holidays to power down Team events, such as virtual cooking classes, games, and more Recognition of professional and personal accomplishments Oshi Health’s Core Values: Go For It Do the Right Thing Be Direct & Open Learn & Improve TEAM - Together 
Everyone Achieves More Oshi Health is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. Powered by JazzHR sFLseoi3SG"," Mid-Senior level "," Full-time "," Information Technology "," Internet Publishing " Data Engineer,United States,Data 
Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-pepsico-3496154704?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=KP24isu8PSqvN1%2BLmVF82Q%3D%3D&position=9&pageNum=1&trk=public_jobs_jserp-result_search-card," PepsiCo ",https://www.linkedin.com/company/pepsico?trk=public_jobs_topcard-org-name," Plano, TX "," 16 hours ago "," 162 applicants ","Overview PepsiCo operates in an environment undergoing immense and rapid change. Big-data and digital technologies are driving business transformation that is unlocking new capabilities and business innovations in areas like eCommerce, mobile experiences and IoT. The key to winning in these areas is being able to leverage enterprise data foundations built on PepsiCo’s global business scale to enable business insights, advanced analytics and new product development. PepsiCo’s Data Management and Operations team is tasked with the responsibility of developing quality data collection processes, maintaining the integrity of our data foundations and enabling business leaders and data scientists across the company to have rapid access to the data they need for decision-making and innovation. What PepsiCo Data Management and Operations does: Maintain a predictable, transparent, global operating rhythm that ensures always-on access to high-quality data for stakeholders across the company Responsible for day-to-day data collection, transportation, maintenance/curation and access to the PepsiCo corporate data asset Work cross-functionally across the enterprise to centralize data and standardize it for use by business, data science or other stakeholders Increase awareness about available data and democratize access to it across the company Job Description As a member of the data engineering team, you will be the key technical expert developing and overseeing PepsiCo's data product build & operations and drive a strong vision for how data engineering can proactively create a positive impact on the business. 
You'll be an empowered member of a team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help lead the development of very large and complex data applications into public cloud environments directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners and business users. You'll be working in a hybrid environment with in-house, on-premise data sources as well as cloud and remote systems. Responsibilities Active contributor to code development in projects and services. Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products. Understand and adapt existing frameworks for data engineering pipelines in the organization. Responsible for adopting best practices around systems integration, security, performance, and data management defined within the organization. Collaborate with the team and learn to build scalable data pipelines. Support data engineering pipelines and quickly respond to failures. Collaborate with the team to develop new approaches and build solutions at scale. Create documentation for learning and knowledge transfer. Learn and adapt automation skills/techniques in day-to-day activities. Qualifications 1+ years of overall technology experience, including at least 1+ years of hands-on software development and data engineering. 1+ years of development experience in programming languages like Python, PySpark, Scala, etc. Experience or knowledge in Data Modeling, SQL optimization, and performance tuning is a plus. 
6+ months of cloud data engineering experience in Azure. Certification is a plus. Experience with version control systems like GitHub and deployment & CI tools. Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines. Experience with data profiling and data quality tools is a plus. Experience in working with large data sets and scaling applications like Kubernetes is a plus. Experience with Statistical/ML techniques is a plus. Experience with building solutions in the retail or in the supply chain space is a plus. Understanding metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including DevOps and DataOps concepts. Familiarity with business intelligence tools (such as PowerBI). Education BA/BS in Computer Science, Math, Physics, or other technical fields. Skills, Abilities, Knowledge Excellent communication skills, both verbal and written, and the ability to influence and demonstrate confidence in communications with senior-level management. Comfortable with change, especially that which arises through company growth. Ability to understand and translate business requirements into data and technical requirements. High degree of organization and ability to coordinate effectively with the team. Positive and flexible attitude and ability to adjust to different needs in an ever-changing environment. Foster a team culture of accountability, communication, and self-management. Proactively drive impact and engagement while bringing others along. Consistently attain/exceed individual and team goals. Ability to learn quickly and adapt to new skills. Competencies Highly influential, with the ability to educate challenging stakeholders on the role of data and its purpose in the business. Understands both the engineering and business side of the Data Products released. Places the user in the center of decision making. Teams up and collaborates for speed, agility, and innovation. 
Experience with and embraces agile methodologies. Strong negotiation and decision-making skills. Experience managing and working with globally distributed teams. COVID-19 vaccination is a condition of employment for this role. Please note that all such company vaccine requirements provide the opportunity to request an approved accommodation or exemption under applicable law. EEO Statement All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. PepsiCo is an Equal Opportunity Employer: Female / Minority / Disability / Protected Veteran / Sexual Orientation / Gender Identity If you'd like more information about your EEO rights as an applicant under the law, please download the available EEO is the Law & EEO is the Law Supplement documents. View PepsiCo EEO Policy. Please view our Pay Transparency Statement"," Mid-Senior level "," Full-time "," Information Technology "," Food and Beverage Services and Manufacturing " Data Engineer,United States,Data Engineer II,https://www.linkedin.com/jobs/view/data-engineer-ii-at-parsyl-3494141846?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=mXxo2kz%2Bwmn%2FPFvGQDVjiA%3D%3D&position=13&pageNum=1&trk=public_jobs_jserp-result_search-card," Parsyl ",https://www.linkedin.com/company/parsyl?trk=public_jobs_topcard-org-name," Denver, CO "," 3 weeks ago "," 50 applicants ","About Parsyl Parsyl is a data-powered insurance and risk management provider for essential supply chains in food and health. Our mission is to end the days of “ship and pray” and build a world where everyone, everywhere can trust the quality of the goods they rely on, from the foods they eat to the medicines they need. 
We are working to achieve this by combining smart sensors, data insights, and data-driven insurance to improve risk resiliency and safeguard goods in transit and storage. This unique combination of IoT and insurance means our customers can use data to make supply chains more transparent, safe and sustainable - better for people and the planet. Parsyl was recognized as a ""Best Startups to Work for in Colorado"" by BuiltIn Colorado in 2022 and 2023. What You Bring to Parsyl We are creating a mission-driven team that aims to transform the essential supply chain industry. Data is at the core of everything we do at Parsyl. You will work with the Data Science team to ensure the team has the data it needs to build products that ultimately improve the quality of essential goods throughout the global supply chain. You will report to the Director of Data and Insights in this role. You are a good fit for this position if you are confident being an early member of the data engineering function within a data-first organization. You are equally comfortable building data pipelines and communicating technical tradeoffs to non-technical audiences. You are self-motivated and thrive in a fast-paced, ambiguous environment. In this role, you’ll get to: Maintain and improve Parsyl’s data warehouse and data tools to accelerate Data Science and Data Analyst research and development. Develop, deploy, and monitor data pipelines from transactional and IoT data, SaaS tools, and partner or purchased data. Ensure the quality, reliability, and availability of data for consumption by the data team as well as the broader business. Coordinate with the production backend team to efficiently manage data pipelines and integrations; understand and advocate for the needs of the data team and liaise between the data team, backend and product teams. Collaborate cross-functionally to understand and support Data Scientist, Analyst, Product, and Business users' data needs. 
Support all aspects of data governance, including access, security, and master data management. Requirements Parsyl is committed to cultivating a diverse pool of candidates interested in joining a mission-driven company. We are building an inclusive team at Parsyl that welcomes different perspectives and creative ideas in order to best achieve our mission of ending the days of ""ship and pray"" and serving our customers. What we're looking for: 3+ years of experience in data engineering and data technologies 1+ year of experience architecting cloud data solutions, preferably on AWS Experience with batch and streaming frameworks, such as Hadoop, Spark or Flink Experience with infrastructure as code, preferably Terraform Experience with data pipeline tools, such as dbt, Airflow, or Databricks Workflows Strong SQL and relational data modeling experience Experience with at least one general-purpose programming language, such as Go, Python, or Scala Eagerness to work in a fast-paced startup environment Excellent communication skills Strong project management skills *Parsyl requires all employees to be fully vaccinated against Covid-19, unless they qualify for a religious or medical accommodation. It's a bonus if you also have experience with: Working closely with Data Analysts and Scientists BI tools, preferably Looker Databricks and Apache Spark IoT data Insurance, Global Health, or supply chain industries Benefits Market competitive salary with an anticipated base compensation range of $125,000 - $145,000. Actual salaries will vary depending on a candidate’s experience, qualifications, and skills. 
Additional Financial Benefits include: Stock options 401(k) including company match Health and Wellness Benefits include: Medical, dental, and vision insurance effective on your start date (100% of medical, dental, and vision premiums for employees and 75% of premiums for dependents based on a solid, mid-tier plan) Six weeks of fully paid family and/or medical leave Monthly wellness benefit of $100 per month Time-off and Vacation Benefits include: Unlimited vacation policy Company Breaks - quarterly mental health days, summer and winter breaks Paid sabbatical program Additional Work Environment Benefits include: Significant career growth opportunities and continuing education stipend Flexible work environment based on role requirements Commuter benefit of $100 per month for public transportation or parking costs Home office set-up stipend of up to $1000 Relocation assistance available (Denver, CO candidates or candidates willing to relocate will be considered)"," Not Applicable "," Full-time "," Engineering "," Technology, Information and Internet " Data Engineer,United States,Data Engineer ,https://www.linkedin.com/jobs/view/data-engineer-at-hyatt-hotels-corporation-3500299344?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=Oxcwqirt1A4o8fH5RoeW6Q%3D%3D&position=22&pageNum=1&trk=public_jobs_jserp-result_search-card," Hyatt Hotels Corporation ",https://www.linkedin.com/company/hyatt?trk=public_jobs_topcard-org-name," Chicago, IL "," 2 weeks ago "," 186 applicants ","Summary The Opportunity At Hyatt, we’re working to Advance Care through data-driven decisions and automation. This mission serves as the foundation for every decision as we create the future of travel. We can’t do that without the best talent – the talent that is innovative, curious, and driven to create exceptional experiences for our guests, customers, owners, and colleagues. 
Hyatt seeks an experienced Data Engineer who will be an exceptional addition to our growing engineering team. The Data Engineer will work closely with engineering, product managers, and data science teams to meet the data requirements of various initiatives in Hyatt. As a Data Engineer, you will take on big data challenges in an agile way. In this role, you will build data pipelines that enable engineers, analysts, and other stakeholders across the organization. You will build data models to deliver insightful analytics while ensuring the highest standard of data integrity. You will integrate different data sources, improve the efficiency, reliability, and latency of our data system, help automate data pipelines, and improve our data model and overall architecture. You will be part of a highly visible, collaborative, and passionate data engineering team and will be working on all aspects of design, development, and implementation of scalable and reliable data products and pipelines. Applying the latest techniques and approaches across the domains of data engineering and machine learning engineering isn’t just a nice-to-have, it’s a must. This candidate builds fantastic relationships across all levels of the organization and is recognized as a problem solver who looks to elevate the work of everyone around them. Who We Are At Hyatt, we believe in the power of belonging and creating a culture of care, where our colleagues become family. Since 1957, our colleagues and our guests have been at the heart of our business and helped Hyatt become one of the best and fastest-growing hospitality brands in the world. Our transformative growth and the addition of new hotels, brands and business lines can open the door for exciting career and growth opportunities to our colleagues. As we continue to grow, we never lose sight of what’s most important: People. We turn trips into journeys, encounters into experiences and jobs into careers. Why Now?
This is an exciting time to be at Hyatt. We are growing rapidly and are looking for passionate changemakers to be a part of our journey. The hospitality industry is resilient and continues to offer dynamic opportunities for upward mobility, and Hyatt is no exception. How We Care For Our People What sets us apart is our purpose—to care for people so they can be their best. Every business decision is made through the lens of our purpose, and it informs how we have and will continue to support each other as members of the Hyatt family. Our care for our colleagues is the key to our success. We’re proud to have earned a place on Fortune’s prestigious 100 Best Companies to Work For® list for the last ten years. This recognition is a testament to the tremendous way our Hyatt family continues to come together to care for one another, our commitment to a culture of inclusivity, empathy and respect, and making sure everyone feels like they belong. We’re proud to offer exceptional corporate benefits which include: Annual allotment of free hotel stays at Hyatt hotels globally Flexible work schedule and location Work-life benefits including well-being initiatives such as a complimentary Headspace subscription, and a discount at the on-site fitness center A global family assistance policy with paid time off following the birth or adoption of a child as well as financial assistance for adoption Paid Time Off, Medical, Dental, Vision, 401K with company match Our Commitment to Diversity, Equity, and Inclusion Our success is underpinned by our diverse, equitable and inclusive culture and we are committed to diversity across the board—from who we hire and develop, organizations we support, and who we buy from and work with. Being part of Hyatt means always having space to be you. Our global teams are a mosaic of cultures, ethnicities, genders, ages, abilities, and identities. We constantly strive to reflect the world we care for with teams that achieve and grow together. 
To learn more about our commitments to DE&I, please visit the Why Hyatt section of the Hyatt career page. Who You Are As our ideal candidate, you understand the power and purpose of our Culture of Care and embody our core values of Empathy, Inclusion, Integrity, Experimentation, Respect, and Well-being. You enjoy working with others, are results-driven and are looking for a variety of opportunities to develop personally and professionally. The Role
Collaborate with product managers, data scientists, engineering, and program management teams to define product features, business deliverables, and strategies for data products
Collaborate with business partners, operations, senior management, etc. on day-to-day operational support
Support operational reporting, self-service data engineering efforts, production data pipelines, and business intelligence suite
Interface with multiple diverse stakeholders and gather/understand business requirements, assess feasibility and impact, and deliver on time with high quality
Design appropriate solutions and recommend alternative approaches when necessary
Work with high volumes of data, fine-tuning database queries and able to solve complex technical problems
Contribute to multiple projects/demands simultaneously
Work in a fast-paced, collaborative, and iterative environment
Exercise independent judgment in methods and techniques for obtaining results
Work in an agile/scrum environment
Use state-of-the-art technologies to acquire, ingest and transform big datasets
The ideal candidate demonstrates a commitment to Hyatt's core values: respect, integrity, humility, empathy, creativity, and fun. Qualifications
2 to 5+ years of experience within the field of data engineering or related technical work, including business intelligence and analytics
Experience and comfort in solving problems in an ambiguous environment where there is constant change.
Have the tenacity to thrive in a dynamic and fast-paced environment, inspire change, and collaborate with a variety of individuals and organizational partners
Experience designing and building scalable and robust data pipelines to enable data-driven decisions for the business
Very good understanding of the full software development life cycle
Very good understanding of data warehousing concepts and approaches
Experience in building data pipelines and ETL approaches
Experience in building data warehouses and business intelligence projects
Experience in data cleansing, data validation, and data wrangling
Hands-on experience in AWS cloud and AWS native technologies such as Glue, Lambda, Kinesis, Lake Formation, S3, Redshift
Experience using Spark EMR, RDS, EC2, Athena, API capabilities, CloudWatch, and CloudTrail is a plus
Experience with Business Intelligence tools like Tableau, Cognos, ThoughtSpot, etc. is a plus
Hands-on experience building complex business logic and ETL workflows using Informatica PowerCenter is preferred
Experience in one of the scripting languages: Python or Unix scripting
Proficient in SQL, PL/SQL, relational databases (RDBMS), database concepts, and dimensional modeling
Strong verbal and written communication skills
Demonstrate integrity and maturity, and a constructive approach to challenges
Demonstrate analytical and problem-solving skills, particularly those that apply to Data Warehouse and Big Data environments
Open-minded, solution-oriented, and a very good team player
Passionate about programming and learning new technologies; focused on helping yourself and the team improve skills
Effective problem-solving and analytical skills.
Ability to manage multiple projects and report simultaneously across different stakeholders Rigorous attention to detail and accuracy Bachelor’s degree in Engineering, Computer Science, Statistics, Economics, Mathematics, Finance, or a related quantitative field The position responsibilities outlined above are in no way to be construed as all-encompassing. Other duties, responsibilities, and qualifications may be required and/or assigned as necessary"," Entry level "," Full-time "," Information Technology "," Hospitality " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-hcltech-3497985338?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=0O7nsmxNZPXYJPOmfl0nGw%3D%3D&position=23&pageNum=1&trk=public_jobs_jserp-result_search-card," HCLTech ",https://in.linkedin.com/company/hcltech?trk=public_jobs_topcard-org-name," United States "," 2 weeks ago "," Over 200 applicants ","HCLTech is seeking a Data Engineer. This is a full-time remote position; candidates should be able to work CST hours. Job Description: · Preferred: Statistics, Mathematics, Operations Research, Computer Science, Econometrics or related field · 4-5 years of professional experience · At least 4 years of experience working in a software or analytics related field · At least 2 years of hands-on experience doing data science work · Ability to write complex SQL queries · Statistical knowledge and intuition · Machine learning experience · Programming experience with a scripting language such as Python, Java, or Ruby · Proficiency with a statistical analysis tool such as R or SAS preferred · Ability and desire to mentor other team members through projects independently from start to finish, working with internal and external teams to make decisions · Proven track record in meeting aggressive deadlines; strong time management skills with the ability to manage detail work and communicate project status effectively to all levels · Identifying data requirements for analytical needs.
Working AWS experience (preferred). Databricks experience (preferred)."," Mid-Senior level "," Full-time "," Engineering "," IT Services and IT Consulting " Data Engineer,United States,Senior Data Engineer,https://www.linkedin.com/jobs/view/senior-data-engineer-at-preveta-3520479147?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=vb%2FpaTtj7l%2Fu%2Fur82qK2MA%3D%3D&position=25&pageNum=1&trk=public_jobs_jserp-result_search-card," Preveta ",https://www.linkedin.com/company/preveta?trk=public_jobs_topcard-org-name," California, United States "," 1 month ago "," Be among the first 25 applicants ","What You'll Do As our Senior Data Engineer, you will be responsible for the design, development and maintenance of high-performance, resilient, scalable data pipelines/platforms, providing clean and usable datasets into Preveta’s data applications and analytics. Key responsibilities include Work with product managers, analysts, subject matter experts, and the rest of the data team to identify valuable data/transformations/insights/metrics used to improve patient outcomes Design and implement infrastructure, data pipelines, orchestration, and observability to efficiently source from multiple healthcare systems - with various access methods and data/file formats - into targeted applications and datasets Identify and initiate infrastructure, development, and workflow improvements Provide mentorship and support to other Data Team members Assist with data-related technical issues Requirements ABOUT YOU The big picture: At Preveta, we look for people who are empathetic, genuine, compassionate, and knowledgeable of the healthcare challenges in patient care. We are serious about improving the future of U.S. healthcare, but we never take ourselves too seriously, we want to see you love what you do. For this role: We are looking for someone who can engage with our team and clients to help provide clarity to the complexity. We want to see your love of data and building infrastructure.
We know people are so much more than their resume – but to make sure someone is set up for success in this role, we’re looking for: Self-starter who thrives in a remote-only work environment, being able to communicate effectively both synchronously and asynchronously 5+ years as a data engineer building key datasets and data pipelines against non-trivial source/target data sizes Very strong SQL skills, with an intimate understanding of things like query tuning, DDL/DML, temp tables, window functions, and JSON queries, to write and maintain accurate and performant queries over large data sources. Authorization to work in the United States Preferences Experience with teams that release frequently using CI/CD, infrastructure as code, and automated testing Experience implementing data pipelines on cloud platforms (Azure, AWS, GCP) with platforms/tools like Azure Data Factory, Synapse Analytics, Azure Functions, Databricks, Spark, Airflow, Snowflake, BigQuery, Redshift Programming/scripting experience in any language Familiarity with Data Lakehouse architectures Familiarity with visualization/analytics model platforms like Power BI or Tableau Experience working in a fast-growing SaaS start-up or eager to contribute in such an environment Who We Are Empathy and care are in our DNA-we are driven by a deep and instinctive desire to help others. When our founders’ dear friend Becky was diagnosed with ovarian cancer, coordination breakdowns impeded every step of her care. If not for these systemic barriers, Becky and millions in similar situations might still be alive. Preveta was created to ensure what happened to Becky wouldn’t happen to anyone else. Our Guiding Principles Authenticity-We are transparent, empathetic, and candid during every interaction. We strive to create a strong community grounded in honesty and genuine collaboration. Growth-We focus on ensuring that every team member has the support and resources they need.
As we grow internally, our impact increases exponentially. Respect-We are committed to cultivating a culture of respect. We act kindly, listen fully, take each individual's perspective to heart, and engage with empathy. Collaboration-Meaningful collaboration drives better results. We walk alongside each other through the challenges and the triumphs. The strength of the team is each individual member. Individuality-Every person on the team brings their own character, know-how, background, insights, and questions to the Preveta community. We celebrate and honor these one-of-a-kind traits because they are the elements that help make up our dynamic team. Benefits WHY JOIN US? We are in an incredibly exciting space as we work to scale a solution that helps save lives. We surround ourselves with people who inspire. We learn and grow from each other. We don’t have time for drama, we trust each other’s individual expertise. Our team is built on trust, curiosity, ingenuity, and playfulness. Our greatest tool is our people, our audacious vision and collaboration get the job done. Besides being a part of something revolutionary and changing the way we do health care, we provide benefits such as: We provide a 100% remote work environment with a flexible schedule Unlimited paid time off - for vacations, if you're sick, or if you just need a day Healthcare benefits include - medical with telehealth, HSA, dental, vision, mental health Reimbursements for your work expenses - for your equipment, and even down to that favorite color sticky note Professional development reimbursements, you decide how to use it based on how you learn best We value diversity on our team and strive to maintain a culture of inclusion and belonging. We believe that teams with different backgrounds and experiences bring uniquely valuable perspectives to our work, and a culture where people can be their authentic selves makes our team stronger and more successful.
We’re building a culture grounded in candor, trust, and support of one another; an environment where we are stronger together than each of us could be on our own. Salary for this position is between $140,000 and $160,000 per year"," Not Applicable "," Full-time "," Engineering "," Technology, Information and Internet " Data Engineer,United States,Data Engineer - Financials (Remote),https://www.linkedin.com/jobs/view/data-engineer-financials-remote-at-kuali-inc-3494013341?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=BVKGlT9Hz7IfLpTkTh44PA%3D%3D&position=5&pageNum=1&trk=public_jobs_jserp-result_search-card," Kuali, Inc. ",https://www.linkedin.com/company/kualico?trk=public_jobs_topcard-org-name," Lehi, UT "," 3 weeks ago "," 84 applicants ","Who are we? Kuali builds software solutions for higher education. We help our customers — colleges & universities — focus on providing a fantastic education to students by decreasing their administrative costs. We work in a competitive space, ripe for innovation, with users ready to be delighted. Our Culture As a company, we are guided by our cultural values: Iterate to evolve Cultivate openness Act with accountability Assume the best Practice humility Deliver amazing experiences As Kuali engineers, we learn from and teach each other, we practice transparency and empathy, and we delight in delivering value to our customers. We work remotely, and have for years. Distributed work is in our bones, with a history of institutions working across state lines on open-source software for more than ten years. Our employees each work in the environment where they’re happiest, from Pennsylvania to Hawaii. We work consciously to create a collaborative and healthy remote work culture, and we travel to meet in person a few times each year. Everyone should love their work. Kuali has been voted a top place to work for 5 years in a row by the Salt Lake Tribune. We also made Forbes' list of America's Best Startup Employers for 2020. Not too shabby.
Your product team You will work closely with product, design, and our customers on the Kuali Financials product. Customers use our Financials product to efficiently and effectively manage the complex accounting needs of higher education. Requirements Who are you? We’re looking for curious, enthusiastic, empathetic engineers to solve problems, execute on ideas, advocate for the customer, and contribute to a team culture built on trust and mutual respect. As a data engineer here, you’ll have a significant impact on what we do and how we do it. We build and support data pipelines and reporting solutions for a range of business needs, including end user consumption. We are focusing heavily on greenfield development: you will build new analytics and data services including data pipelines, data warehousing solutions, ETL processes, end user reports, as well as the deployment mechanisms and the platform's environments. As a Data Engineer you will have the unique opportunity to influence major decisions from how our data platform's cloud-based infrastructure is architected, to what modern data-engineering frameworks and tooling are used, to how our CI/CD will operate. While building out the new platform, you will also maintain a light-weight, legacy solution, which will be deprecated and replaced. We believe that great developers can always learn new tools. Above any specific tech stack, we’re looking for versatile developers — those who know when to think big and when to act small, and who are comfortable in both greenfield and refactoring projects. We believe the best products are created by teams who represent a broad range of ideas and perspectives. We value employees with diverse backgrounds and experiences. You... Have 3+ years of Data Engineering experience or equivalent. Architecture-level experience conceptualizing and building infrastructure that processes, stores, and vends large data sets for analytics purposes at an enterprise scale in the cloud. 
Advanced SQL skills including database design best practices. CTEs, functions, stored procedures don't intimidate you. Have hands-on experience with ETL-as-code frameworks such as Airflow, Luigi, or Prefect or experience building ETL processes/services from scratch with generic languages and libraries. Experience developing and supporting reports with BI and reporting tools such as Tableau, Domo, Looker, or Sisense. Understand the software development lifecycle and are able to work alongside development teams. You’re excited to collaborate closely with Application Engineers, Product Managers, and Customer Success and use real-time feedback to solve problems iteratively. Are ready to help reformulate existing frameworks to improve and expand current offerings. Aren’t afraid to get your hands dirty on devops work. We'd be delighted if you bring experience with: Shipping Software as a Service (SaaS) solutions One or more of these technologies: Java, Python, Node.js, AWS One or more relational databases: MySQL, Oracle or Postgres One or more analytics databases: Redshift, Snowflake, or Teradata. Report and dashboard requirements analysis and design Front end development with React or Angular The Higher Education community Other things you should know: This team is (and has always been) fully remote. You’d be expected to have a suitable home working environment or alternative. We try to get together in person as a team or company 2-4 times a year. 
Benefits Top-of-the-line equipment of your choice to get your job done A truly exceptional benefits package including full premium coverage for employee and dependent medical and dental care 401(k) matching Paid Maternity/Parental leave All the paid time off you need (just work it out with your manager) Allowance for continuing education, conferences, and/or training Space to work on self-driven projects during “hack time” Employee resource groups and community events"," Not Applicable "," Full-time "," Engineering "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-oshi-health-3488399018?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=TYzs0sWX%2BOM4iEd3VdEB9w%3D%3D&position=23&pageNum=0&trk=public_jobs_jserp-result_search-card," Oshi Health ",https://www.linkedin.com/company/oshihealth?trk=public_jobs_topcard-org-name," Phoenix, AZ "," 3 weeks ago "," Be among the first 25 applicants ","Do you love to work with data, finding ways to make it reportable, and building models that will add clinical and commercial value for the future? Do you want to bring your skills and experience to a growth stage engineering team, and help set us up for smart expansion? Are you excited by the prospect of having a high-visibility high-impact role in a fast-moving startup? Are you passionate about healthcare, and looking to create a revolutionary new approach to digestive healthcare with a radically better patient experience? If so, you could be a perfect fit for our team of like-minded professionals who share a common mission and passion for helping others and a desire to build a great company.
Oshi Health is revolutionizing GI care with a virtual clinic that provides easy, convenient access to a multidisciplinary care team including a GI Physician, Registered Dietician, Mental Health Professional, and Health Coach that takes a whole-person approach to diagnosing, managing and treating digestive health conditions. Our care is built on the latest evidence-based protocols and is delivered virtually through an app, secure messaging and telehealth visits with the care team. NOTE: Oshi is a fully remote company, with team members all over the US. What You’ll Do: This role will be a perfect fit if you enjoy learning the entire stack and taking on interdisciplinary challenges. A primary focus will be building out a data engineering program, including ETL, data governance, sanitization, and data operations. This part of your responsibilities will be a balance of executing data operations and building automation to make those operations scalable. You will also have the opportunity to join engineers on the frontend (React Native and React.js), backend (Node.js Lambdas) and Salesforce to help build the Oshi platform. Experience with any of these technologies is a plus, but more important is an enthusiasm to adapt and learn the ones that are new to you. What you’ll do: build the Oshi data program
Implement and maintain data pipelines using Stitch, Databricks, and other scripting as needed to feed a PostgreSQL schema supporting Tableau reporting
Support users of Tableau by updating data sources or modifying inbound inputs as needed to deliver critical BI reports
Own the Oshi data model, ensuring that new features built and new technologies adopted serve the needs of the clinical, commercial, product, and engineering teams
Manage ETL of client eligibility files and other data, to make them available for Oshi use in a secure and timely manner.
Wherever possible, replace bespoke processes with automation Once you have an understanding of Oshi’s requirements, design and implement a data strategy (with your recommendation of approach and products) to meet the needs of Oshi’s analytics, commercial, and clinical business lines Your work will also include: AWS maintenance and administration Writing technical documentation to outline designs for forthcoming features, outlining the implementation across all technology layers Meeting with colleagues in Strategy, Product, and Clinical to support their needs from the Engineering group. Production support responsibilities (shared with the entire engineering team): responding to alerts in Datadog, reviewing and troubleshooting issues Our tech stack: Mobile Platforms Supported: iOS & Android Cross-Platform Mobile Language: React Native Other Languages: React.js, HTML, CSS, Java (Salesforce Apex), Node.js (Lambda) Systems: Salesforce, AWS Amplify / Cognito / Lambda Your Profile: A minimum of 3+ years of professional experience Bachelor's Degree or equivalent experience Good interpersonal and relationship skills that include a positive attitude Self-starter who can find a way forward even when the path is unclear. Team player AND a leader simultaneously. What You’ll Bring to the Team: Passionate about creating value that changes people's lives Make low-level decisions quickly while being patient and methodical with high-level ones Are curious and passionate about digging into new technologies with a knack for picking them up quickly Adept at prioritizing value and shipping complex products while coordinating across multiple teams Love working with a diverse set of engineers, product managers, designers, and business partners Strive to excel, innovate and take pride in your work Work well with other leaders Are a positive culture driver Excited about working in a fast-paced, startup culture Experience in a regulated industry (healthcare, finance, etc.)
a plus and perks: We’re revolutionizing GI care — and our employees are driving the change. We’re a hard-working and fun-loving team, committed to always learning and improving, and dedicated to doing the right thing for our members. To achieve our mission, we invest in our people: We make healthcare more equitable and accessible: Mission-driven organization focused on innovative digestive care Thrive on diversity with monthly DEIB discussions, activities, and more Virtual-first culture: Work from home anywhere in the US Live our core values: Own the outcome, Do the right thing, Be direct and open, Learn and improve, Team, Thrive on diversity We take care of our people: Competitive compensation and meaningful equity Employer-sponsored medical, dental and vision plans Access to a “Life Concierge” through Overalls, because we know life happens Tailored professional development opportunities to learn and grow We rest, recharge and re-energize: Unlimited paid time off — take what you need, when you need it 13 paid company holidays to power down Team events, such as virtual cooking classes, games, and more Recognition of professional and personal accomplishments Oshi Health’s Core Values: Go For It Do the Right Thing Be Direct & Open Learn & Improve TEAM - Together Everyone Achieves More Oshi Health is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. 
Powered by JazzHR Xfc3znOTqR"," Mid-Senior level "," Full-time "," Information Technology "," Internet Publishing " Data Engineer,United States,Data Engineer I,https://www.linkedin.com/jobs/view/data-engineer-i-at-spruce-3505805507?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=xNHd7NB0YbumEcmOkqHrfA%3D%3D&position=2&pageNum=1&trk=public_jobs_jserp-result_search-card," Spruce ",https://www.linkedin.com/company/spruce-inc?trk=public_jobs_topcard-org-name," Austin, TX "," 2 weeks ago "," Over 200 applicants ","Who We Are At Spruce, our mission is to change the way people live in their homes by making home services more accessible. As the leading provider of lifestyle services to the multifamily industry, we offer daily chores and housekeeping services to more than 2,000 apartment communities across the US, and we work with over 60 of the top apartment managers in the country. Through the Spruce app, apartment residents can easily have their clothes folded, their dishes washed, their bed sheets changed, or their bathroom cleaned. Venture-backed and headquartered in Austin, Spruce has more than 80 employees and is growing rapidly. We promote a people-first culture where curiosity, ownership, hustle, and boldness are valued and encouraged. Each employee has a personal, measurable impact on the success of the company, and ideas are welcomed from everyone. About The Role We’re looking for a Data Engineer I who will report to the Director of Business Analytics. You’ll serve an integral role bridging our data engineering and business analytics team members. Namely, this person will own the maintenance, organization, and improvement of data in our BI tool, Looker. This will be a great opportunity for someone with a strong technical background who’s interested in startups and enjoys tackling unique data challenges with high impact on the business. 
What You Get To Do Designing data models with a deep understanding of how the data will be used in Looker Re-structuring old data models in LookML to be more efficient and flexible to evolving data needs Familiarizing yourself with what exists in our databases, data lake and data warehouse that could be pulled through to Looker to answer arising business questions Translating business analytics needs into technical requirements for engineers Pulling through new data tables/fields to Looker and ensuring proper setup in LookML (coding in LookML, choosing proper joins, model design/data architecture etc.) QAing data provided in Looker and creating systems to ensure data quality Launching new data to the business and maintaining related documentation (data dictionary) Who You Are Bachelor’s degree, or equivalent, in engineering or data related fields 3-5 years of work experience in a data/analytics oriented role Self-starter with a desire to continuously learn new skills Comfortable with ambiguity and enjoys creating structures/processes that improve efficiency with cumulative effect over time (exponential thinking). 
Technical Skills Strong understanding of data modeling Strong understanding of relational database design and data warehousing concepts Expert on profiling data and uncovering data quality issues that impact analyses and dashboards Expert in SQL Experience with Snowflake or similar cloud warehouse technologies Expert in building visualizations in Looker; strong LookML coding skills are essential Things that can set you apart Strong understanding of source data optimization for Looker processing Experience in designing and developing ETL processes What We Offer Competitive salary Stock options 401K plan Medical, vision, dental insurance Unlimited PTO 100% remote work Spruce-provided WFH setup (laptop, keyboard, monitor(s), mouse) A huge role in the growth of a company with a meaningful mission We're building a strong, diverse team of curious, creative people who want to find purpose in their work and support each other in the process. If this sounds like you, then let's talk. "," Mid-Senior level "," Full-time "," Information Technology "," Technology, Information and Internet and Consumer Services " Data Engineer,United States,"Data Engineer, Analytics (Generalist)",https://www.linkedin.com/jobs/view/data-engineer-analytics-generalist-at-instagram-3512573925?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=qNlUQePZDfzD5rBw6eJuzw%3D%3D&position=6&pageNum=1&trk=public_jobs_jserp-result_search-card," Instagram ",https://www.linkedin.com/company/instagram?trk=public_jobs_topcard-org-name," Menlo Park, CA "," 2 weeks ago "," 53 applicants ","Every month, billions of people leverage Meta products to connect with friends and loved ones from across the world. On the Data Engineering Team, our mission is to support these products both internally and externally by delivering the best data foundation that drives impact through informed decision making. 
As a highly collaborative organization, our data engineers work cross-functionally with software engineering, data science, and product management to optimize growth, strategy, and experience for our 3 billion plus users, as well as our internal employee community. We are looking for a technical leader in our Data Engineering team to work closely with Product Managers, Data Scientists and Software Engineers to support building out a great platform for the future of computing. In this role, you will see a direct correlation between your work, company growth, and user satisfaction. You’ll work with some of the brightest minds in the industry, work with one of the richest data sets in the world, use cutting-edge technology, and see your efforts affect products and people on a regular basis. The ideal candidate will have strong data infrastructure and data architecture skills as well as experience in areas such as governing company-wide data marts, enabling security and privacy data solutions, and full stack experience with analytical technologies. Candidates should also have a proven track record of leading and scaling efforts related to end-to-end analytics systems, strong operational skills to drive efficiency and speed, strong project management leadership, and a strong vision for how data can proactively improve companies. As we continue to expand and create, we have a lot of exciting work ahead of us! Data Engineer, Analytics (Generalist) Responsibilities: Proactively drive the vision for data foundation and analytics to accelerate building and improvement of cross platform components across Instagram, and define and execute on a plan to achieve that vision. Conceptualize and own the data architecture for multiple large-scale projects, while evaluating design and operational cost-benefit tradeoffs within systems. Create and contribute to frameworks that improve the efficacy of logging data, while working with data infrastructure to triage and resolve issues. 
Build cross-functional relationships with Data Scientists, Product Managers and Software Engineers to understand data needs and deliver on those needs. Define and manage SLAs for all data sets in allocated areas of ownership. Determine and implement the security model based on privacy requirements, confirm safeguards are followed, address data quality issues, and evolve governance processes within allocated areas of ownership. Design, build, and launch collections of sophisticated data models and visualizations that support use cases across different products or domains. Solve our most challenging data integration problems, utilizing optimal ETL patterns, frameworks, query techniques, sourcing from structured and unstructured data sources. Assist in owning existing processes running in production, optimizing complex code through advanced algorithmic concepts. Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts. Influence product and cross-functional teams to identify data opportunities to drive impact. Mentor team members by giving/receiving actionable feedback. Minimum Qualifications: Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience. 12+ years of experience in the data warehouse space. 12+ years of experience in custom ETL design, implementation and maintenance. 12+ years of experience with object-oriented programming languages. 12+ years of experience with schema design and dimensional data modeling. 12+ years of experience in writing SQL statements. Experience analyzing data to identify deliverables, gaps and inconsistencies. Experience managing and communicating data warehouse plans to internal clients. Preferred Qualifications: BS/BA in Technical Field, Computer Science or Mathematics. Experience working with either a MapReduce or an MPP system. Knowledge and practical application of Python. Experience working autonomously in global teams. 
Experience influencing product decisions with data. Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics. You may view our Equal Employment Opportunity notice here. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. We may use your information to maintain the safety and security of Meta, its employees, and others as required or permitted by law. You may view Meta's Pay Transparency Policy, Equal Employment Opportunity is the Law notice, and Notice to Applicants for Employment and Employees by clicking on their corresponding links. Additionally, Meta participates in the E-Verify program in certain locations, as required by law "," Not Applicable "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-kairos-inc-3531404685?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=FZf9qWm7kyYKiQUKnnzi%2FA%3D%3D&position=10&pageNum=1&trk=public_jobs_jserp-result_search-card," KAIROS, Inc. ",https://www.linkedin.com/company/kairos-inc-?trk=public_jobs_topcard-org-name," Scott AFB, IL "," 2 hours ago "," Be among the first 25 applicants ","KAIROS, Inc. is searching for an energetic, experienced, and highly motivated Data Engineer to join our team. Established in July 2013, KAIROS, Inc. 
is a growing Woman Owned Small Business (WOSB) providing full life cycle Cybersecurity, Program Management, Systems Engineering, and Training and Education services focused on optimizing customers’ program performance and mission through proven methodologies and ethical practices. Our headquarters is in California, MD near Naval Air Station Patuxent River. We have many other locations across the country. Responsible for providing engineering expertise to develop the data and data analytics capabilities of Air Mobility Command (AMC) systems. AMC, in concert with USTRANSCOM and its components and commercial partners, delivers Rapid Global Mobility to all the services, Combatant Commands, National Leadership Authorities, and other priority customers. Relevant systems support command and control of AMC assets, allow management of transportation requirements, provide visibility of asset movements, and integrate both mobility and combat Air Force capabilities across multiple domains. Primary Duties: Assess technical maturity and analytical rigor of data analytics technologies to recommend their implementation or further development. Develop strategic plans for incremental fielding of data analytics capabilities. Prototype data analytics and machine learning systems. Capture and derive requirements for data analytics and machine learning capabilities, translate them into feasible system designs, and verify and validate them. Perform other duties as assigned. Skills and Qualifications: Strong customer relations, analytics, and documentation skills. Self-starter, highly motivated, strong work ethic with a commitment to quality. Microsoft Office suite proficiency. 
Ability to work within a challenging, fast-paced, team-oriented environment Ability to work independently Ability to multi-task and meet competing deliverable deadlines Detail-oriented Excellent interpersonal and customer service skills Excellent verbal and written communication skills to provide clear status and/or communicate issues Ability to adapt to evolving technology Education and Experience: Master's degree in engineering or another related field Bachelor's degree in engineering or another related field In lieu of a degree, relevant experience may be substituted At least ten (10) years of experience working in the information systems field providing engineering support. Clearance: This position is subject to a government security investigation and must meet eligibility requirements for access to classified information. This position requires an ACTIVE Secret Security Clearance. KAIROS, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, ancestry, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. 
KAIROS offers our employees a comprehensive benefits package consisting of: Medical Coverage Employer Paid Dental, Vision, Life/AD&D, STD/LTD insurance Certification reimbursement program Employee Assistance Program (EAP) Rewards and recognition programs Community outreach events through our KAIROS Kares group To learn more about our organization be sure to check out our website, https://www.kairosinc.net/ Powered by JazzHR T5eqWsqPGK"," Mid-Senior level "," Full-time "," Information Technology "," Internet Publishing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-atlantic-partners-corporation-3461310036?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=4y2XwQKiWKZycqZ0dFJABg%3D%3D&position=14&pageNum=1&trk=public_jobs_jserp-result_search-card," Atlantic Partners Corporation ",https://www.linkedin.com/company/atlantic-partners?trk=public_jobs_topcard-org-name," New York, NY "," 3 weeks ago "," Over 200 applicants ","The Data Strategy team has a global delivery footprint that comprises cutting-edge data management platforms supporting Front Office application groups, Business Analytics, Information Architecture and Data Governance. The team consists of highly innovative and talented Enterprise Information Solution Architects, Developers, and Analysts. The teams are collectively responsible for ensuring the quality of data that is presented to the teams, the development of common, reusable semantic layer components, and ensuring that the governing frameworks and architecture guidelines are being followed and measured. Primary Responsibilities The candidate will: Work on Data-Warehouse and Business Intelligence Platform for Investment Management. 
Work on integrating Metadata Management and Data Governance platforms. Data sourcing and data pipelines implementation. Data on cloud enablement. Engineering support of the IM Data Warehouse. Good understanding of monitoring jobs in a production environment and providing production support when necessary. Maintain a high standard of quality and adhere to best coding practices. Skills required (essential) Strong foundation in compute architecture, parallel processing, data architecture and data engineering. 5+ years of data warehouse architecture experience. Strong proficiency in Linux/Unix tools and scripting (Python and data frame frameworks, Java, ksh/bash). Strong proficiency with integrating ETL tools with database load utilities. Strong proficiency in graph databases. Cloud data lake/data warehouse technology and tools. Strong proficiency in writing SQL and procedures. Good understanding of SQL performance tuning. Understanding of the SDLC, good practices and experience with different development and change management tools. Experience working with any RDBMS and integration with applications. Bachelor's or advanced degree in an analytical or scientific field such as mathematics, computer science, etc. Skills desired Prior experience working in the finance industry is preferred Some reporting experience in Tableau, QlikView"," Mid-Senior level "," Contract "," Information Technology "," Investment Management " Data Engineer,United States,"Data Engineer, Analytics (Generalist)",https://www.linkedin.com/jobs/view/data-engineer-analytics-generalist-at-instagram-3512575735?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=ycpoOzqi2OPOQGKTNXIAKQ%3D%3D&position=18&pageNum=1&trk=public_jobs_jserp-result_search-card," Instagram ",https://www.linkedin.com/company/instagram?trk=public_jobs_topcard-org-name," Burlingame, CA "," 2 weeks ago "," 41 applicants ","Every month, billions of people leverage Meta products to connect with friends and loved ones from across the world. 
On the Data Engineering Team, our mission is to support these products both internally and externally by delivering the best data foundation that drives impact through informed decision making. As a highly collaborative organization, our data engineers work cross-functionally with software engineering, data science, and product management to optimize growth, strategy, and experience for our 3 billion plus users, as well as our internal employee community. We are looking for a technical leader in our Data Engineering team to work closely with Product Managers, Data Scientists and Software Engineers to support building out a great platform for the future of computing. In this role, you will see a direct correlation between your work, company growth, and user satisfaction. You’ll work with some of the brightest minds in the industry, work with one of the richest data sets in the world, use cutting-edge technology, and see your efforts affect products and people on a regular basis. The ideal candidate will have strong data infrastructure and data architecture skills as well as experience in areas such as governing company-wide data marts, enabling security and privacy data solutions, and full stack experience with analytical technologies. Candidates should also have a proven track record of leading and scaling efforts related to end-to-end analytics systems, strong operational skills to drive efficiency and speed, strong project management leadership, and a strong vision for how data can proactively improve companies. As we continue to expand and create, we have a lot of exciting work ahead of us! Data Engineer, Analytics (Generalist) Responsibilities: Proactively drive the vision for data foundation and analytics to accelerate building and improvement of cross platform components across Instagram, and define and execute on a plan to achieve that vision. 
Conceptualize and own the data architecture for multiple large-scale projects, while evaluating design and operational cost-benefit tradeoffs within systems. Create and contribute to frameworks that improve the efficacy of logging data, while working with data infrastructure to triage and resolve issues. Build cross-functional relationships with Data Scientists, Product Managers and Software Engineers to understand data needs and deliver on those needs. Define and manage SLAs for all data sets in allocated areas of ownership. Determine and implement the security model based on privacy requirements, confirm safeguards are followed, address data quality issues, and evolve governance processes within allocated areas of ownership. Design, build, and launch collections of sophisticated data models and visualizations that support use cases across different products or domains. Solve our most challenging data integration problems, utilizing optimal ETL patterns, frameworks, query techniques, sourcing from structured and unstructured data sources. Assist in owning existing processes running in production, optimizing complex code through advanced algorithmic concepts. Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts. Influence product and cross-functional teams to identify data opportunities to drive impact. Mentor team members by giving/receiving actionable feedback. Minimum Qualifications: Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience. 12+ years of experience in the data warehouse space. 12+ years of experience in custom ETL design, implementation and maintenance. 12+ years of experience with object-oriented programming languages. 12+ years of experience with schema design and dimensional data modeling. 12+ years of experience in writing SQL statements. Experience analyzing data to identify deliverables, gaps and inconsistencies. 
Experience managing and communicating data warehouse plans to internal clients. Preferred Qualifications: BS/BA in Technical Field, Computer Science or Mathematics. Experience working with either a MapReduce or an MPP system. Knowledge and practical application of Python. Experience working autonomously in global teams. Experience influencing product decisions with data. Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics. You may view our Equal Employment Opportunity notice here. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. We may use your information to maintain the safety and security of Meta, its employees, and others as required or permitted by law. You may view Meta's Pay Transparency Policy, Equal Employment Opportunity is the Law notice, and Notice to Applicants for Employment and Employees by clicking on their corresponding links. 
Additionally, Meta participates in the E-Verify program in certain locations, as required by law "," Not Applicable "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-life-science-people-3513216706?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=rV0ql2svAI4MVKzRqRKmlA%3D%3D&position=24&pageNum=1&trk=public_jobs_jserp-result_search-card," Life Science People ",https://uk.linkedin.com/company/life-science-people?trk=public_jobs_topcard-org-name," New York, United States "," 2 days ago "," 102 applicants "," Life Science People is currently partnered with a leading biotechnology research company working on advanced molecular computational diagnosis and analysis. Successful hires will directly contribute to creating systems and infrastructure for modeling, curating, and indexing the petabytes of data generated by our special-purpose supercomputers and our large Linux HPC and GPU clusters; to implementing pipelines for processing computational and experimental data sets; and to developing and maintaining data management policies and toolkits for drug discovery data processing. "," Mid-Senior level "," Full-time "," Information Technology and Engineering "," Medical Equipment Manufacturing, Biotechnology Research, and Pharmaceutical Manufacturing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-career-search-partners-3464993629?refId=wWftybpyf1t1LXclHsm6uw%3D%3D&trackingId=Es%2BSAv1GkZDCY2KtGZeuWA%3D%3D&position=1&pageNum=2&trk=public_jobs_jserp-result_search-card," Career Search Partners ",https://www.linkedin.com/company/career-search-partners?trk=public_jobs_topcard-org-name," New York, NY "," 1 week ago "," Over 200 applicants ","Elite investment firm seeks Data Engineer to join their growing team. The data engineer will be a key part of the future development of the company’s data strategy. 
The data team is highly integrated within the organization and the deliverables have high visibility and engagement with all senior leaders within the organization. On the research side, their goal is to build a world-class data engine that advances the investment platform by continuing to build a data-informed, industry-leading portfolio construction program. This data engine will be utilized by our investment team to drive the success of the investment decision-making process and we are passionate about delivering best-in-class technology with a scalable and resilient approach. They utilize data to drive idea generation, decision making, and risk mitigation. The data engineer will be instrumental in developing the data flows that are the foundations of these processes. On the reporting side, the data team drives analysis and data visualization to support key business initiatives and the key drivers of the business. They leverage technology for report automation, ETL, and web-based dashboards. The data engineer will continue the core development of the code repositories, working on automation, data flows, data quality, and data manipulation to both support and advance these efforts. The data engineer will combine technical proficiency with an understanding of, or keen interest in, markets to take ownership of existing data programs and look to further enhance and evolve our approach. The company is in a growth phase and there will be a significant opportunity for the role to grow through the build-out of additional internal products and programs. They are looking for someone with a high degree of intellectual curiosity who enjoys thinking about and discussing architectural and design approaches, engineering best practices, and risk analysis. Responsibilities ·Expand our data warehouse, managing existing processes and feeds, and incorporating new data sources. 
·Build processes in SQL & Python related to data flows into and out of a SQL Server data warehouse (data sources include third-party vendors, APIs, admins, etc.). ·Manage internal mapping tables for critical data such as underlier roll-ups, custom tagging, ticker mapping across vendors, etc. ·Ensure data quality by continuing data check development and correcting errors highlighted in daily data check reports. ·Continue core development of business processes and reporting requirements across the organization, including ETL. ·Maintain and expand the ETL and data management repo. ·Handle ad-hoc data requests across the Firm to support internal needs. Respond by providing Excel, PDFs, slides, or dashboards. ·Handle month and quarter-end reporting for marketing, finance, and data science teams. ·Maintain and enhance web-based dashboards that serve the organization. ·Consultatively engage with stakeholders across the organization to identify business requirements and drive automated workflows to advance outcomes. Requirements/Qualifications ·3+ years of SQL experience, including DDL for new tables, indices, views, and stored procedures. ·Python experience with the ability to build data flow processes with SQL Server or other relational databases. ·Development in a Git environment adhering to best practices around code quality. ·Experience working with market data or exposure to investing and portfolio management. Exposure to security master and data warehouse concepts a plus. ·Ability to work in an agile-type development environment, with strong communication skills for cross-functional requirements gathering and stakeholder management. This individual will need to work across multiple teams simultaneously. ·Proactive with a mind toward efficiency in the processes and systems that we build and a keen interest in the effectiveness of the solutions we create. ·Exceptional attention to detail with the ability to produce error-free work and quickly find issues within a dataset. 
·Passion for and understanding of equity capital markets. The minimum and maximum base annual salary for this role is $150,000 to $200,000. Actual base salaries may vary based on factors including but not limited to experience, past performance, education, and other job-related factors. In addition to base salary compensation, we offer a comprehensive benefits package, and successful candidates are eligible to receive an annual discretionary bonus."," Mid-Senior level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-navi-3491765713?refId=wWftybpyf1t1LXclHsm6uw%3D%3D&trackingId=r%2F97a5TBGxa5ivILeUN51w%3D%3D&position=2&pageNum=2&trk=public_jobs_jserp-result_search-card," Navi ",https://www.linkedin.com/company/navi-marketplace?trk=public_jobs_topcard-org-name," Massachusetts, United States "," 3 weeks ago "," Over 200 applicants ","The Data Engineer is responsible for building and automating the data pipelines which feed Navi’s data-driven mobile products. This essential member of the Navi data team will design, build, and maintain an architecture which leverages modern ETL technologies, cloud infrastructure, and automation frameworks to continuously transform raw data into operational and analytical data sets. The Data Engineer will work directly with client-facing analytics teams to ensure that they receive the data they need, in either ad hoc or automated fashion.   
Successful applicants will have:   ●     Proficiency building ETL pipelines using either purpose-built tools such as Informatica or Amazon Glue or generalized programming approaches in Python / Pandas / Jupyter ●     Proficiency with relational database technologies which comprise the Navi platform data layer, with a strong understanding of SQL DML and DDL ●     Experience with tools such as Airflow, Watchdog, or Jenkins to automate the execution of ETL pipelines and ensure their continuous uptime ●     Experience deploying ETL applications in AWS, Azure, or GCP cloud infrastructure ●     Successfully worked in a fast-paced, team-oriented, data driven, problem solving work environment ●     Excellent communication and collaboration abilities ●     Insatiable curiosity   The following additional experience is highly desired:   ●     Experience using a modern DevOps toolchain for continuous integration and continuous delivery of new application functionality ●     Experience using technologies such as SparkSQL, Athena/Presto, BigQuery, etc. to implement data queries at scale   About Navi Navi is an independent and unbiased US wireless services marketplace. Founded by industry veterans, Navi offers consumers the most comprehensive, easy-to-use, and rewarding wireless experience. 
Its flagship products include Phone Navigator, which helps consumers find the right phone at the best price, and Plan Navigator, which matches consumers with the best plan for their needs."," Full-time ",,, Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-experfy-3515274030?refId=wWftybpyf1t1LXclHsm6uw%3D%3D&trackingId=bApZdCaV2T82SX34QOgZ6A%3D%3D&position=4&pageNum=2&trk=public_jobs_jserp-result_search-card," Experfy ",https://www.linkedin.com/company/experfy?trk=public_jobs_topcard-org-name," Bridgewater Township, NJ "," 1 month ago "," 46 applicants ","Our client is looking for a Data Engineer to join our digital data team in the data architecture operation and governance team to build and operationalize data pipelines necessary for the enterprise data and analytics and insights initiatives, following industry standard practices and tools. The bulk of the work would be in building, managing, and optimizing data pipelines and then moving them effectively into production for key data and analytics consumers like business/data analysts, data scientists or any persona that needs curated data for data and analytics use cases across the enterprise. In addition, guarantee compliance with data governance and data security requirements while creating, improving, and operationalizing these integrated and reusable data pipelines. The data engineer will be the key interface in operationalizing data and analytics on behalf of the business unit(s) and organizational outcomes.
Responsibilities Must work with business teams to understand requirements and translate them into technical needs Gather and organize large and complex data assets, perform relevant analysis Ensure the quality of the data in coordination with Data Analysts and Data Scientists (peer validation) Propose and implement relevant data models for each business case Optimize data models and workflows Communicate results and findings in a structured way Partner with Product Owner and Data Analysts to prioritize the pipeline implementation plan Partner with Data Analysts and Data Scientists to design pipelines relevant to business requirements Leverage existing or create new “standard pipelines” to bring value through business use cases Ensure best practices in data manipulation are enforced end-to-end Actively contribute to the Data governance community Requirements Tech skills Knowledge of AWS. Knowledge of Azure or GCP is a plus Orchestration: Airflow Project management & support: JIRA projects & service desk, Confluence, Teams Expert in ELT and ETL tools such as Informatica IICS, Databricks, Delta, Glue, ...
Expert in Relational database technologies and concepts: Perform SQL queries Create database models Maintain and improve query performance Snowflake is a plus Working knowledge of Python and familiarity with other scripting languages Good knowledge of cloud computing Soft Skills Pragmatic and capable of solving complex issues Ability to understand business needs Good communication Push innovative solutions Service-oriented, flexible & team player Self-motivated, takes initiative Attention to detail & technical intuition Experience At least 5 years' experience in a data team as a Data Engineer Experience in the healthcare industry is a strong plus Preferred Qualifications BS or MS in Computer Science"," Not Applicable "," Contract "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-pepsico-3496021553?refId=wWftybpyf1t1LXclHsm6uw%3D%3D&trackingId=iXtrscFEiAm5HSrL6QivjA%3D%3D&position=5&pageNum=2&trk=public_jobs_jserp-result_search-card," PepsiCo ",https://www.linkedin.com/company/pepsico?trk=public_jobs_topcard-org-name," Plano, TX "," 15 hours ago "," Over 200 applicants ","Overview PepsiCo operates in an environment undergoing immense and rapid change. Big-data and digital technologies are driving business transformation that is unlocking new capabilities and business innovations in areas like eCommerce, mobile experiences and IoT. The key to winning in these areas is being able to leverage enterprise data foundations built on PepsiCo’s global business scale to enable business insights, advanced analytics and new product development. PepsiCo’s Data Management and Operations team is tasked with the responsibility of developing quality data collection processes, maintaining the integrity of our data foundations and enabling business leaders and data scientists across the company to have rapid access to the data they need for decision-making and innovation.
What PepsiCo Data Management and Operations does: Maintain a predictable, transparent, global operating rhythm that ensures always-on access to high-quality data for stakeholders across the company Responsible for day-to-day data collection, transportation, maintenance/curation and access to the PepsiCo corporate data asset Work cross-functionally across the enterprise to centralize data and standardize it for use by business, data science or other stakeholders Increase awareness about available data and democratize access to it across the company Job Description As a member of the data engineering team, you will be the key technical expert developing and overseeing PepsiCo's data product build & operations and drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help lead the development of very large and complex data applications into public cloud environments directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners and business users. You'll be working in a hybrid environment with in-house, on-premise data sources as well as cloud and remote systems. Responsibilities Active contributor to code development in projects and services. Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products. Understand and adapt existing frameworks for data engineering pipelines in the organization. 
Responsible for adopting best practices around systems integration, security, performance, and data management defined within the organization. Collaborate with the team and learn to build scalable data pipelines. Support data engineering pipelines and quickly respond to failures. Collaborate with the team to develop new approaches and build solutions at scale. Create documentation for learning and knowledge transfer. Learn and adapt automation skills/techniques in day-to-day activities. Qualifications 1+ years of overall technology experience, including at least 1 year of hands-on software development and data engineering. 1+ years of development experience in programming languages like Python, PySpark, Scala, etc. Experience or knowledge of Data Modeling, SQL optimization, and performance tuning is a plus. 6+ months of cloud data engineering experience in Azure; certification is a plus. Experience with version control systems like GitHub and deployment & CI tools. Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines. Experience with data profiling and data quality tools is a plus. Experience in working with large data sets and scaling applications using Kubernetes is a plus. Experience with Statistical/ML techniques is a plus. Experience with building solutions in the retail or supply chain space is a plus. Understanding of metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including DevOps and DataOps concepts. Familiarity with business intelligence tools (such as PowerBI). Education BA/BS in Computer Science, Math, Physics, or other technical fields. Skills, Abilities, Knowledge Excellent communication skills, both verbal and written, and the ability to influence and demonstrate confidence in communications with senior-level management. Comfortable with change, especially that which arises through company growth.
Ability to understand and translate business requirements into data and technical requirements. High degree of organization and ability to coordinate effectively with the team. Positive and flexible attitude, adjusting to different needs in an ever-changing environment. Foster a team culture of accountability, communication, and self-management. Proactively drive impact and engagement while bringing others along. Consistently attain/exceed individual and team goals. Ability to learn quickly and adapt to new skills. Competencies Highly influential, with the ability to educate challenging stakeholders on the role of data and its purpose in the business. Understands both the engineering and business side of the Data Products released. Places the user in the center of decision making. Teams up and collaborates for speed, agility, and innovation. Experience with and embraces agile methodologies. Strong negotiation and decision-making skills. Experience managing and working with globally distributed teams. COVID-19 vaccination is a condition of employment for this role. Please note that all such company vaccine requirements provide the opportunity to request an approved accommodation or exemption under applicable law. EEO Statement All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. PepsiCo is an Equal Opportunity Employer: Female / Minority / Disability / Protected Veteran / Sexual Orientation / Gender Identity If you'd like more information about your EEO rights as an applicant under the law, please download the available EEO is the Law & EEO is the Law Supplement documents. View PepsiCo EEO Policy.
Please view our Pay Transparency Statement"," Mid-Senior level "," Full-time "," Information Technology "," Food and Beverage Services and Manufacturing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-ck-group-3484903398?refId=wWftybpyf1t1LXclHsm6uw%3D%3D&trackingId=BTQ4RXMCr63NOPd8oO4Z%2Fw%3D%3D&position=8&pageNum=2&trk=public_jobs_jserp-result_search-card," CK Group ",https://uk.linkedin.com/company/cka-group?trk=public_jobs_topcard-org-name," New York, United States "," 4 weeks ago "," 165 applicants ","Company Description: CK Group are working with a leading research organization who have been at the forefront of computational driven drug discovery research.
Their work covers a wide range of activities including hardware, software and drug discovery computational modeling to identify key targets that allow further research through their network of CRO suppliers. This company is a world leader in computational hardware development and offers a fantastic environment at the bleeding edge of computationally driven drug discovery research. Role: • Develop various predictive models from a variety of data streams • Perform statistical analysis on large omics datasets to maximize learnings • Stay at the cutting edge of data science concepts and algorithms • Develop data workflows and dashboards to contribute to the extension of our digital infrastructure Requirements: • Ph.D. in Data Sciences, Statistics, Computer Science, Computational Science or a similar discipline • Working knowledge of Python and R • ML/DL model development experience Salary: The expected annual base salary for this position is $200,000–$500,000. Our compensation package also includes variable compensation in the form of sign-on and year-end bonuses, and generous benefits. The applicable annual base salary paid to a successful applicant will be determined based on multiple factors including the nature and extent of prior experience and educational background. Location: New York Visa sponsorship is available for the successful candidate."," Mid-Senior level "," Full-time "," Research and Science "," Research Services " Data Engineer,United States,Data Engineer (Remote),https://www.linkedin.com/jobs/view/data-engineer-remote-at-teamanics-3513176010?refId=wWftybpyf1t1LXclHsm6uw%3D%3D&trackingId=EbuGJhT4m7OYFSe5g%2B9zUQ%3D%3D&position=9&pageNum=2&trk=public_jobs_jserp-result_search-card," Teamanics ",https://www.linkedin.com/company/teamanics-inc?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," Over 200 applicants ","Samsung SDS is looking to add an experienced Data Engineer to their team.
Samsung SDS plays a leading role in the global market with unique logistics services. If your expertise is AWS and other big data tools, we encourage you to apply! As a Senior Data Engineer you will ensure deployment of modern data structures to enable reliable and scalable data products and feature stores. This role requires you to develop and drive multiple cross-departmental projects, collaborate effectively with the global team, and ensure day-to-day deliverables are met. Extensive experience working with AWS, EMR, Spark, and scripting programs is required. Responsibilities Develop and manage end-to-end data pipeline and application stack (Hadoop, Vertica, Tableau, Hue, Superset etc) and lead/provide end-to-end operation support for all applications. Develop processes for automating, testing, and deploying the work Identify risks and opportunities of potential logic and data issues within the data environment Perform RCA and resolution activities for complex data issues. Develop and maintain documentation relating to all assigned systems and projects Ensure deployment of modern data structures and models to enable reliable and scalable data products and feature stores Ability to independently perform proof of concepts for new tools and frameworks and present to leadership Work in an agile environment. Develop and drive multiple cross-departmental projects. Establish effective working relationships across disparate departments to deliver business results. Collaborate effectively with the global team and ensure day-to-day deliverables are met Requirements 5 to 10 years of experience with big data tools and data processing: Hadoop, Spark, Scala, Kafka, YARN cluster, Java, etc. 5+ years of working experience with AWS Strong development experience with EMR and Spark is a must-have Strong experience with object-oriented/object function scripting languages: SQL, Python, Java, Scala, etc.
Experience with GCP is nice to have Experience with SQL/NoSQL databases like Vertica, Postgres, Cassandra, etc. Experience with data modeling concepts is desired Experience with streaming/event-driven technologies such as Lambda, Kinesis, Kafka, etc. Nice to have but not required - exposure to ML (frameworks like PyTorch/TensorFlow), model management and serving, containerizing and application development experience with Talend, Tableau, Snowflake, Redshift etc. Prior experience as a senior data architect, technical lead, system architect, or similar is required Excellent verbal and written communication skills This job at Samsung SDS is being filled by Teamanics, Metabyte's rapidly growing peer network. Employment through Metabyte, Inc. W2 ONLY!"," Mid-Senior level "," Full-time "," Information Technology, Supply Chain, and Engineering "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer (Remote),https://www.linkedin.com/jobs/view/data-engineer-remote-at-carvana-3527090554?refId=wWftybpyf1t1LXclHsm6uw%3D%3D&trackingId=ledQjWEz1SRaiMiPw5BsAg%3D%3D&position=10&pageNum=2&trk=public_jobs_jserp-result_search-card," Teamanics ",https://www.linkedin.com/company/teamanics-inc?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," Over 200 applicants "," Samsung SDS is looking to add an experienced Data Engineer to their team. Samsung SDS plays a leading role in the global market with unique logistics services. If your expertise is AWS and other big data tools, we encourage you to apply!As a Senior Data Engineer you will ensure deployment of modern data structures to enable reliable and scalable data products and feature stores. This role requires you to develop and drive multiple cross-departmental projects, and collaborate effectively with the global team and ensure day-to-day deliverables are met.
Extensive experience working with AWS, EMR, SPARK, and scripting programs is required.ResponsibilitiesDevelop and manage end-to-end data pipeline and application stack (Hadoop, Vertica, Tableau, Hue, Superset etc) and lead/provide end-to-end operation support for all applications.Develop processes for automating, testing, and deploying the workIdentify risks and opportunities of potential logic and data issues within the data environmentPerform RCA and resolution activities for complex data issues.Develop and maintain documentation relating to all assigned systems and projectsEnsure deployment of modern data structures and models to enable reliable and scalable data products and feature storesAbility to independently perform proof of concepts for new tools and frameworks and present to leadershipWork in an agile environment. Develop and drive multiple cross-departmental projects.Establish effective working relationships across disparate departments to deliver business results.Collaborate effectively with the global team and ensure day-to-day deliverables are metRequirements5 to 10 years of experience with big data tools and data processing: Hadoop, Spark, Scala, Kafka, Yarn cluster, Java, etc.5+ years of working experience with AWSStrong development experience with EMR and SPARK is a must-haveStrong experience with object-oriented/object function scripting languages: SQL, Python, Java, Scala, etc.Experience with GCP is nice to haveExperience with SQL/NoSQL databases like Vertica, Postgres, Cassandra, etc.Experience with data modeling concepts is desiredExperience with streaming /event driven technologies work such as Lambda, Kinesis, Kafka, etc.Nice to have but not required - exposure to ML (frameworks like pytorch/tensorflow), model management and serving, containerizing and application development experience with Talend, Tableau, Snowflake, Redshift etc.Prior experience as a senior data architect, technical lead, system architect, or similar is requiredExcellent 
verbal and written communication skills This job at Samsung SDS is being filled by Teamanics, Metabyte's rapidly growing peer network. Employment through Metabyte, Inc. W2 ONLY! "," Mid-Senior level "," Full-time "," Information Technology, Supply Chain, and Engineering "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer (Remote),https://www.linkedin.com/jobs/view/data-engineer-at-stl-digital-3500461780?refId=wWftybpyf1t1LXclHsm6uw%3D%3D&trackingId=87dn9HYoUaw5uHWZNFRXuA%3D%3D&position=11&pageNum=2&trk=public_jobs_jserp-result_search-card," Teamanics ",https://www.linkedin.com/company/teamanics-inc?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," Over 200 applicants "," Samsung SDS is looking to add an experienced Data Engineer to their team. Samsung SDS plays a leading role in the global market with unique logistics services. If your expertise is AWS and other big data tools, we encourage you to apply!As a Senior Data Engineer you will ensure deployment of modern data structures to enable reliable and scalable data products and feature stores. This role required you to develop and drive multiple cross-departmental projects, and collaborate effectively with the global team and ensure day-to-day deliverables are met. 
Extensive experience working with AWS, EMR, SPARK, and scripting programs is required.ResponsibilitiesDevelop and manage end-to-end data pipeline and application stack (Hadoop, Vertica, Tableau, Hue, Superset etc) and lead/provide end-to-end operation support for all applications.Develop processes for automating, testing, and deploying the workIdentify risks and opportunities of potential logic and data issues within the data environmentPerform RCA and resolution activities for complex data issues.Develop and maintain documentation relating to all assigned systems and projectsEnsure deployment of modern data structures and models to enable reliable and scalable data products and feature storesAbility to independently perform proof of concepts for new tools and frameworks and present to leadershipWork in an agile environment. Develop and drive multiple cross-departmental projects.Establish effective working relationships across disparate departments to deliver business results.Collaborate effectively with the global team and ensure day-to-day deliverables are metRequirements5 to 10 years of experience with big data tools and data processing: Hadoop, Spark, Scala, Kafka, Yarn cluster, Java, etc.5+ years of working experience with AWSStrong development experience with EMR and SPARK is a must-haveStrong experience with object-oriented/object function scripting languages: SQL, Python, Java, Scala, etc.Experience with GCP is nice to haveExperience with SQL/NoSQL databases like Vertica, Postgres, Cassandra, etc.Experience with data modeling concepts is desiredExperience with streaming /event driven technologies work such as Lambda, Kinesis, Kafka, etc.Nice to have but not required - exposure to ML (frameworks like pytorch/tensorflow), model management and serving, containerizing and application development experience with Talend, Tableau, Snowflake, Redshift etc.Prior experience as a senior data architect, technical lead, system architect, or similar is requiredExcellent 
verbal and written communication skills This job at Samsung SDS is being filled by Teamanics, Metabyte's rapidly growing peer network. Employment through Metabyte, Inc. W2 ONLY! "," Mid-Senior level "," Full-time "," Information Technology, Supply Chain, and Engineering "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer-(Kabbage),https://www.linkedin.com/jobs/view/data-engineer-kabbage-at-american-express-3510659927?refId=wWftybpyf1t1LXclHsm6uw%3D%3D&trackingId=qioWUektrLP04nfxAGxJIw%3D%3D&position=12&pageNum=2&trk=public_jobs_jserp-result_search-card," American Express ",https://www.linkedin.com/company/american-express?trk=public_jobs_topcard-org-name," Atlanta, GA "," 1 week ago "," 51 applicants ","You Lead the Way. We’ve Got Your Back. Kabbage, an American Express Company, is setting a new standard in big data and FinTech and we are looking for a Data Engineer-(Kabbage) to help us in our mission to help small businesses be mighty. Acquired by American Express in 2020, Kabbage is a leading FinTech company changing the way small businesses manage their cash flow. Applying automation and real-time data, Kabbage provides small businesses a suite of integrated cash-flow technologies from flexible lines of credit, digital business checking accounts, fast payments and predictive business analytics. Now with the powerful backing of American Express, Kabbage can offer millions of small businesses the opportunity to access digital tools to help them grow bigger, lasting companies. While we've received numerous awards and recognition—such as Entrepreneur's Top Company Cultures, Inc Magazine's Top Private Companies, GlassDoor’s Best Places to Work, and Forbes FinTech 50—it is our people, our culture, and our leaders that make Kabbage such a great place to work. At Kabbage, we strive to be the place where a diverse mix of talented people want to come, to stay, and do their best work. 
Our commitment to diversity and inclusion is reflected in our people, our partners, and our customers. We are fully focused on equality and believe deeply in diversity of race, gender, sexual orientation, religion, ethnicity, national origin and all the other wonderful characteristics that make us different. When you join Team Amex, you become part of a diverse community of over 60,000 colleagues, all with a common goal to deliver an exceptional customer experience every day. Here, you’ll learn and grow as we champion your meaningful career journey with programs, benefits, and flexibility to back you personally and professionally. Every colleague shares in the company’s success. Together, we’ll win as a team, striving to uphold our company values and powerful backing promise to our customers, communities, and each other every day. And, we’ll do it with integrity and in an environment where everyone is seen, heard and feels like they truly belong. Join #TeamAmex and let’s lead the way together. As part of our diverse tech team, you can architect, code and ship software that makes us an essential part of our customers’ digital lives. Here, you can work alongside talented engineers in an open, supportive, inclusive environment where your voice is valued, and you make your own decisions on what tech to use to solve challenging problems. American Express offers a range of opportunities to work with the latest technologies and encourages you to back the broader engineering community through open source. And because we understand the importance of keeping your skills fresh and relevant, we give you dedicated time to invest in your professional development. Find your place in technology on #TeamAmex. As part of the Kabbage American Express Database Operations team, you will work to keep our databases resilient, scalable, and secure. 
Your responsibilities include, but are not limited to: Deliver solutions for optimum availability, reliability, and scalability of our database infrastructure using automation. Meet stakeholder SLAs/SLOs for all solutions delivered. Integrate with other technology teams to build world-class solutions. Maintain and enhance comprehensive Operational Visibility that alerts and informs the right people at the right time of potential issues. Protect valuable and sensitive data by working with Security Operations to detect, audit, and remediate security concerns. Write, review, and release code according to standards. Participate in scheduled on-call rotations, incident response, continuous learning, and knowledge sharing with the team. Increase efficiency by automating routine tasks. Minimum qualifications: 5+ years of experience and a proven track record of building, managing, and troubleshooting database solutions like SQL Server 2019 and open source (PostgreSQL or MySQL) in the cloud (RDS, EC2) or on-prem. Experience working as a reliability engineer, software engineer, or systems engineer with an emphasis on data. Experience in building, maintaining, and improving data solutions in a cloud environment (preferably AWS). Proficient with coding and scripting solutions on platforms like AWS, Windows, Linux (PowerShell Core, Python, C#, T-SQL, AWS CLI). Preferred Qualifications Experience with orchestration technologies (SSIS, Airflow, Bamboo) is a plus. Familiarity with the following is a plus: DataDog Splunk SQLSentry Terraform Salary Range: $85,000.00 to $150,000.00 annually + bonus + benefits The above represents the expected salary range for this job requisition. Ultimately, in determining your pay, we'll consider your location, experience, and other job-related factors. 
American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. We back our colleagues with the support they need to thrive, professionally and personally. That’s why we have Amex Flex, our enterprise working model that provides greater flexibility to colleagues while ensuring we preserve the important aspects of our unique in-person culture. Depending on role and business needs, colleagues will either work onsite, in a hybrid model (combination of in-office and virtual days) or fully virtually. US Job Seekers/Employees - Click here to view the “EEO is the Law” poster and supplement and the Pay Transparency Policy Statement. If the links do not work, please copy and paste the following URLs in a new browser window: https://www.dol.gov/agencies/ofccp/posters to access the three posters. Non-considerations for sponsorship: Employment eligibility to work with American Express in the U.S. is required as the company will not pursue visa sponsorship for these positions. 
Considerations for sponsorship: Depending on factors such as business unit requirements, the nature of the position, cost and applicable laws, American Express may provide visa sponsorship for certain positions."," Mid-Senior level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-torq-people-solutions-3489877343?refId=wWftybpyf1t1LXclHsm6uw%3D%3D&trackingId=hNpW1%2BT98Ft4PzXF4KmawA%3D%3D&position=13&pageNum=2&trk=public_jobs_jserp-result_search-card," Torq People Solutions ",https://www.linkedin.com/company/torq-people-solutions?trk=public_jobs_topcard-org-name," Dallas, TX "," 3 weeks ago "," 57 applicants ","What You’ll Be Doing: Our Torq Insights practice is looking for superstars who will fill a key role in building a strong data ecosystem that supports the business intelligence and analytics environment for our clients. Your passion, knowledge and confidence will fuel a powerful experience that drives meaningful impact for our clients. Below are some of the key things you’ll need to excel in: Build strong relationships with data consumers to understand consumption patterns and design intuitive data models. Develop and deliver high-quality data pipelines adhering to best practices, privacy, and governance principles. Write ETL (Extract / Transform / Load) processes and develop tools for real-time and offline analytic processing. Build real-time and batch data integrations from disparate source systems into the data lake/data warehouse. Recommend and deploy best-in-class key performance indicators to measure the performance and quality of the data engineering teams and processes. Participate in code reviews and provide feedback to development teams regarding best practices. Develop data definitions consistent with data management standards and conventions. Provide production support for integration and transformation pipelines. 
What You’ll Bring to the Table: Bachelor’s degree in Statistics, Data Science, Mathematics, Computer Science, or related discipline Experience with integrations and data warehousing using tools like Snowflake, DBT, and Airflow. Strong working knowledge of SQL and experience with relational and non-relational databases. Experience with general-purpose programming (e.g. Python, Java, R), dealing with a variety of data structures, algorithms, and serialization formats. 3+ years’ experience building ETL processes and familiarity with database architecture and design. 3+ years of experience developing data extraction, transformation, and data analysis solutions. Strong business communication acumen. Proficient time management, organizational skills, and ability to meet established deadlines. Bonus Skills (not required but a plus!) Experience building data lakes and data warehouses in a hybrid environment (Cloud/On-Premises). Experience working with cloud technologies such as AWS (Lambda, S3, Step Functions, SNS, SQS) Experience with Hadoop architecture and HDFS commands, and experience designing & optimizing queries to build data pipelines Benefits And Other Fun Stuff We ask our consultants to be superstars, so we treat them like it. Even better, our perks are designed for employees by our employees. We do this because we believe in delivering a compelling benefits package that puts you at the heart of our rewards. 
Competitive Salary – your bank account will be smiling Unlimited PTO – we’re serious about that work-life balance thing Best-in-class health/vision/dental benefits – your health is our priority Generous 401K options – take care of your future with us Opportunity to be a key player at a highly reputable, fast-growing consulting firm High degree of internal mobility and diverse project opportunities Torq is an Equal Opportunity Employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation, national origin, genetic information, age, disability, veteran status, or any other legally protected basis. Note: No visa sponsorship is available for this position, all applicants must be currently authorized to work in the United States for any employer Powered by JazzHR 8zyW5zFmsE"," Mid-Senior level "," Full-time "," Information Technology "," Internet Publishing " 
Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-pepsico-3486630200?refId=wWftybpyf1t1LXclHsm6uw%3D%3D&trackingId=t57AEJRTDvv4y6SuKyun7Q%3D%3D&position=16&pageNum=2&trk=public_jobs_jserp-result_search-card," PepsiCo ",https://www.linkedin.com/company/pepsico?trk=public_jobs_topcard-org-name," Plano, TX "," 6 days ago "," Over 200 applicants ","Overview PepsiCo operates in an environment undergoing immense and rapid change. Big-data and digital technologies are driving business transformation that is unlocking new capabilities and business innovations in areas like eCommerce, mobile experiences and IoT. The key to winning in these areas is being able to leverage enterprise data foundations built on PepsiCo’s global business scale to enable business insights, advanced analytics and new product development. 
PepsiCo’s Data Management and Operations team is responsible for developing quality data collection processes, maintaining the integrity of our data foundations and enabling business leaders and data scientists across the company to have rapid access to the data they need for decision-making and innovation. What PepsiCo Data Management and Operations does: Maintain a predictable, transparent, global operating rhythm that ensures always-on access to high-quality data for stakeholders across the company. Responsible for day-to-day data collection, transportation, maintenance/curation and access to the PepsiCo corporate data asset. Work cross-functionally across the enterprise to centralize data and standardize it for use by business, data science or other stakeholders. Increase awareness about available data and democratize access to it across the company. Job Description As a member of the data engineering team, you will be the key technical expert developing and overseeing PepsiCo's data product build & operations and drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help lead the development of very large and complex data applications into public cloud environments directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners and business users. You'll be working in a hybrid environment with in-house, on-premise data sources as well as cloud and remote systems. 
Responsibilities Active contributor to code development in projects and services. Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products. Understand and adapt existing frameworks for data engineering pipelines in the organization. Responsible for adopting best practices around systems integration, security, performance, and data management defined within the organization. Collaborate with the team and learn to build scalable data pipelines. Support data engineering pipelines and quickly respond to failures. Collaborate with the team to develop new approaches and build solutions at scale. Create documentation for learning and knowledge transfer. Learn and adapt automation skills/techniques in day-to-day activities. Qualifications 1+ years of overall technology experience, including at least 1 year of hands-on software development and data engineering. 1+ years of development experience in programming languages like Python, PySpark, Scala, etc. Experience or knowledge of data modeling, SQL optimization, and performance tuning is a plus. 6+ months of cloud data engineering experience in Azure; certification is a plus. Experience with version control systems like GitHub and deployment & CI tools. Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines. Experience with data profiling and data quality tools is a plus. Experience in working with large data sets and scaling applications with technologies like Kubernetes is a plus. Experience with Statistical/ML techniques is a plus. Experience with building solutions in the retail or supply chain space is a plus. Understanding of metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including DevOps and DataOps concepts. Familiarity with business intelligence tools (such as PowerBI). Education BA/BS in Computer Science, Math, Physics, or other technical fields. 
Skills, Abilities, Knowledge Excellent communication skills, both verbal and written, and the ability to influence and demonstrate confidence in communications with senior-level management. Comfortable with change, especially that which arises through company growth. Ability to understand and translate business requirements into data and technical requirements. High degree of organization and ability to coordinate effectively with the team. Maintain a positive and flexible attitude and adjust to different needs in an ever-changing environment. Foster a team culture of accountability, communication, and self-management. Proactively drive impact and engagement while bringing others along. Consistently attain/exceed individual and team goals. Ability to learn quickly and adapt to new skills. Competencies Highly influential, with the ability to educate challenging stakeholders on the role of data and its purpose in the business. Understands both the engineering and business side of the Data Products released. Places the user in the center of decision making. Teams up and collaborates for speed, agility, and innovation. Experience with and embraces agile methodologies. Strong negotiation and decision-making skills. Experience managing and working with globally distributed teams. COVID-19 vaccination is a condition of employment for this role. Please note that all such company vaccine requirements provide the opportunity to request an approved accommodation or exemption under applicable law. EEO Statement All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. 
PepsiCo is an Equal Opportunity Employer: Female / Minority / Disability / Protected Veteran / Sexual Orientation / Gender Identity If you'd like more information about your EEO rights as an applicant under the law, please download the available EEO is the Law & EEO is the Law Supplement documents. View PepsiCo EEO Policy. Please view our Pay Transparency Statement"," Mid-Senior level "," Full-time "," Information Technology "," Food and Beverage Services and Manufacturing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-vf-corporation-3499082992?refId=wWftybpyf1t1LXclHsm6uw%3D%3D&trackingId=npcjJF%2B9i7f1M1fnqR2lTg%3D%3D&position=17&pageNum=2&trk=public_jobs_jserp-result_search-card," VF Corporation ",https://www.linkedin.com/company/vf-corporation?trk=public_jobs_topcard-org-name," Dallas-Fort Worth Metroplex "," 2 weeks ago "," 176 applicants ","The Sr. Data Engineer will lead development of critical data pipelines for the Data Foundation platform for the Dickies brand and will be responsible for the design, implementation and quality of technical deliverables. Interact when necessary with Product Owners to understand business requirements and translate them into technical stories needed to implement end-to-end solutions. Work in coordination with the data scientists, data analysts and business partners to implement and test advanced data analytics pipelines and applications. Build and maintain architecture diagrams, technical documentation and best practices for the data engineering team, utilizing best practices across industries and striving for innovation and efficiency. Understand and contribute to the evolution of the enterprise data architecture including the application of current and emerging data frameworks and tools, driving adoption of agile methodology, release management and DevOps processes. 
Provide help and support to business and analytics users as well as data scientists, working in coordination to achieve seamless integration with Data Science Models, BI Tools and reporting. Drive activities related to architecture designs, DevOps, CICD pipelines and code reviews. This position will require you to be within driving distance of the Denver metro area or Fort Worth, preferably Denver. HOW YOU WILL MAKE A DIFFERENCE Design and build data ingestion workflows/pipelines, physical data schemas, extracts, data transformations, and data integrations and/or designs using ETL and API microservices in AWS Cloud. Build data architecture and applications that enable reporting, analytics, data science, and data management and improve accessibility, efficiency, governance, processing, and quality of data. Be the point of reference for the Business, Architecture and Data Science teams whenever a Big Data technology is required. Coach and mentor junior team members; actively participate in and often lead peer development and code reviews within each Agile sprint, with a focus on test-driven development and Continuous Integration and Continuous Delivery (CICD). 
Evaluate and recommend new technology patterns for the Analytics platform. Collaborate with AWS Solution Architects to ensure technical direction. Enable development best practices, re-usability of code, QA and release management processes. Help coordinate agile scrum processes, meetings and backlog management. YEARS OF PROFESSIONAL EXPERIENCE: 3-5 EDUCATIONAL/POSITION REQUIREMENTS Experience 3+ years overall software development experience. A deep understanding of Data Engineering and related technologies, with 2+ years in the AWS cloud platform with Python programming. Previous experience in leading Cloud/Big Data Engineering projects. Excellent knowledge of Glue, Lambda, Redshift and other AWS services required to develop efficient data pipelines. Streaming pipelines experience using Kinesis, Kafka and similar tools. Ability to evaluate and improve technical design and engineering patterns to increase software reusability. Familiarity with JIRA & Confluence or similar tracking and management tools. Familiarity with BI tools like Tableau, DOMO, PowerBI or similar. Excellent organizational, verbal and written communication skills and the ability to present information in a clear, concise and complete manner. Self-starter with a creative, enthusiastic, innovative and collaborative attitude. Ability to prioritize tasks based on urgency and accuracy. Skills Performing work with a high degree of independence and self-management across a large variety of tasks in a matrixed organization. Communicating verbally and in writing to business customers with various levels of technical knowledge, educating them about our tools and data products. Develop and provide development support for performant pipelines as part of the quarterly deliverables by your team. Consolidation of different sources of data (API, SQL database, CSV, S3 and FTP files, etc.) into a centralized data store. Agile development of a Data Lake / Data Warehouse on AWS, including serverless patterns using Lambda, Glue, Spark, REST APIs 
and Docker. Redshift and BigQuery as data warehouse systems. Experience with RDBMS (MySQL, Oracle, or DB2) and SQL/DDL language, and NoSQL (DynamoDB). Development of message-queue-driven systems (Amazon SQS, SNS and Lambda-based functions). Development of streaming systems (i.e., Kinesis Streams, Kinesis Firehose). Python development, PySpark development. Lambda development in Python and Node. Ability to perform code reviews and technical design reviews. AWS services knowledge is mandatory. Knowledge of Terraform and Infrastructure as Code principles. Language Skills English fluency"," Mid-Senior level "," Full-time "," Analyst, Engineering, and Information Technology "," Retail Apparel and Fashion " Data Engineer,United States,Data Engineer I - ETL Engineering,https://www.linkedin.com/jobs/view/data-engineer-i-etl-engineering-at-yipitdata-3494714413?refId=wWftybpyf1t1LXclHsm6uw%3D%3D&trackingId=A%2BVskv6pkKybNSWViCVBMQ%3D%3D&position=18&pageNum=2&trk=public_jobs_jserp-result_search-card," YipitData ",https://www.linkedin.com/company/yipitllc?trk=public_jobs_topcard-org-name," United States "," 11 hours ago "," Over 200 applicants ","About Us YipitData is the leading market research firm for the disruptive economy and recently raised $475M from The Carlyle Group at a valuation of over $1B. We analyze billions of data points every day to provide accurate, detailed insights on ridesharing, e-commerce marketplaces, payments and more. Our on-demand insights team uses proprietary technology to identify, license, clean and analyze the data many of the world’s largest investment funds and corporations depend on. We are one of Inc’s Best Workplaces - a fast-growing technology company with offices located in NYC (where we are based), Hong Kong, and Shanghai, backed by Norwest Venture Partners and The Carlyle Group, with a strong culture focused on mastery, ownership, and transparency. 
About the Data Engineering Department: Data Engineering’s mission is to create the best-in-class data analytics platform to support YipitData’s current and future data needs. Our self-service data platform empowers our Investor and Corporate product teams to analyze billions of data points every day to provide accurate, granular insights to their clients. The Data Engineering Department is composed of 4 teams: Data Infrastructure, Data Platform Engineering, ETL Engineering, and Analytics Engineering (~15 engineers). We offer a highly collaborative work environment where Data Engineering teams meet regularly to review architectures and strategies to empower a technical audience of data users at the company. Each team has a high degree of ownership and the opportunity to work with state-of-the-art tools in the data industry to reach their objectives. We offer the flexibility to switch teams based on your skills and career aspirations, a career ladder with growth opportunities, good work/life balance, and we have a very high employee retention rate. About The Role: We are looking for a Data Engineer (Actual Title: Software Engineer I) to join our Data Engineering ETL team. The ETL Engineering team’s mission is to create the best-in-class tooling to build highly performant and reliable data pipelines. We build and maintain the most critical data pipelines at YipitData, including processing high volumes of 1st and 3rd party datasets that fuel all of our data products. We also set the gold standard for how other YipitData analyst teams build their own data pipelines, and provide training and support for 250+ analysts. The ETL team is a high-impact, high-visibility team that will be crucial to the success of our growing data feed business. We collaborate with many different stakeholders across our Investor and Corporate business units. 
This is a remote-friendly opportunity that can be based in NYC, where our headquarters is located, or anywhere in the US (we expect Eastern Time working hours). As a Software Engineer I, you will: Build, manage, and support different internal data pipelines. Collaborate with stakeholders to enforce best practices. Build tooling to enable product teams to build their pipelines. Collaborate with engineers and business stakeholders to come up with the best solution for creating pipelines. Write documentation and help shape the future of the ETL team. Be responsible for ingesting Edison data sources. On a given day, you might: Work with our stakeholders to build an efficient pipeline. Help create documentation for our internal tooling. Work with Data Platform Engineers to experiment with new Databricks features. Help build our internal toolkit that’ll be used by stakeholders. Monitor different pipelines for optimization opportunities. As long as you've worked with modern data tools, we're positive that you will learn and understand our technology stack: AWS: S3, CloudFormation (CDK) and many more; Databricks, Fivetran, Snowflake; Python, PySpark, Spark, SQL, Git. For business tools we use: GSuite, Slack, Asana, Zoom. You Are Likely to Succeed If: Bachelor's or Master's degree in Computer Science, STEM or related technical discipline (such as bootcamp), or equivalent experience. 1-3 years of experience as a Software Engineer, Data Engineer, or Data Analyst. You are comfortable working with large-scale datasets using PySpark or Pandas. You are a self-starter who enjoys working collaboratively with stakeholders. You have some understanding of building data pipelines. You are excited about solving data challenges and learning new skills. You have strong verbal and written communication skills. Nice to have: experience with SQL, Databricks, PySpark/Pandas, Python. What We Offer: The annual base salary for this position is anticipated to be $100-130K. 
The final offer may be determined by a number of factors, including, but not limited to, the applicant's experience, knowledge, skills, and abilities, as well as internal team benchmarks. We care about your personal life. We offer flexible work hours, open vacation policy, a generous 401K match, parental leave, team events, wellness budget, learning reimbursement, and more. Your growth at YipitData is determined by the impact that you are making, not by tenure, unnecessary facetime, or office politics. Everyone at YipitData is empowered to learn, self-improve, and master their skills in an environment focused on ownership, respect, and trust. To learn more about our culture and values, check out our Glassdoor page. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, marital status, disability, gender, gender identity or expression, or veteran status. We are proud to be an equal opportunity employer."," Entry level "," Full-time "," Information Technology "," Market Research " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-king-s-hawaiian-3511376049?refId=wWftybpyf1t1LXclHsm6uw%3D%3D&trackingId=CBmXKjMY%2FjMUjVdjOphGcQ%3D%3D&position=21&pageNum=2&trk=public_jobs_jserp-result_search-card," King's Hawaiian ",https://www.linkedin.com/company/kingshawaiian?trk=public_jobs_topcard-org-name," Torrance, CA "," 1 week ago "," Over 200 applicants ","Joining King’s Hawaiian makes you part of our `ohana (family). We are a family-owned business for over seventy years, respecting our roots while thinking about our future as we continue to grow and care for our customers and the communities we serve. Our `ohana members build an environment of inclusivity as they freely collaborate, pursue learning through curiosity, and explore innovation as critical thinkers. Beyond that, we are also passionate about supporting the long-term health and well-being of our employees and their families. If you’re excited to rise with our team, come and join our `ohana! 
Working under the general supervision of the Chief Data and Analytics Officer, the Data Engineer will support a diverse set of stakeholders (data architects, data scientists, BI developers) within King’s Hawaiian on functions related to continued design, development, and optimization of the data pipelines & data platform. The data engineer will be charged with the data pipelines that bring data from various data sources to support the analysis, reporting, audience creation and data science needs of King’s Hawaiian. The individual will also manage workflow orchestration and demonstrate strength in machine learning data modeling. Essential Job Duties And Responsibilities Work closely with data scientists and DW/BI Architects to employ industry best practices for efficient and reliable data pipelines. Lead the effort to gather, analyze and model business data (customers, financials, operational, organizational), key performance indicators, and/or market data (competitors, products, suppliers), using a broad set of analytical tools and techniques to develop quantitative and qualitative business insights and improve decision-making. Build Data Storage Solutions with SQL Server and Data Lakes. 
Interact with the business to understand requirements spanning blueprint, configuration, testing, migration, support, and continuous enhancements. Manage and lead ML projects to ensure timely delivery. Design, build and launch efficient and reliable data pipelines to move data across several platforms, including the Data Warehouse. Ensure that data pipelines and data stores are high-performing, efficient, organized, and reliable, given a specific set of business requirements and constraints. Design, implement, monitor, and optimize data platforms to meet the data pipeline needs. Understand the technology landscape and how it affects the areas of business; trends associated with the technology; functional areas and industries; and the value propositions for the business to adopt new digital transformation. Perform other duties as required or assigned which are reasonably within the scope of this role. BASIC QUALIFICATIONS (EDUCATION AND/OR EXPERIENCE) Bachelor’s degree in IT from an accredited 4-year college or equivalent relevant experience required. 5+ years’ experience in a data engineering role. Experience in data engineering, data science, or a related field with a track record of manipulating, processing, and extracting value from large datasets. Experience building and managing data pipelines and repositories in cloud environments such as Google Cloud, Microsoft Azure or AWS. Experience extracting/cleansing data and generating insights from large transactional data sets using SQL, R, Python, Spark on the cloud. Experience in API development and/or ETL processes. Lakehouse architecture experience preferred. Experience in the CPG industry is preferred. Additional Qualifications (Job Skills, Abilities, Knowledge) Proficient ability to translate business needs into advanced analytics solutions. Expert in creating ML data models and data pipelines. 
Proficient in building Data actions and Allocation Process. Working knowledge in building Analytical applications. Working knowledge of SAP SAC, SAP MII, SAP DMC, SAP Fiori, HANA Studio. Expert in writing formulae in importing jobs. Proficient with Self-service BI. Expert in configuring delta loads. Expert in building connections with sources to Data lakes, Data warehouses and data imports. Expert in writing R and Python scripts. Basic understanding of RDBMS and familiarity with SQL. Proficient ability with SAP Analytics technologies: S/4HANA Embedded Analytics, BW/4HANA, SAP Fiori, HANA XS applications, Google/Azure/AWS Cloud technologies and BI/visualization tools. Working knowledge of SAP data structures, end-to-end dataflow, SAC predictive capabilities, augmented analytics, forecasting, etc. Ability to be flexible and work analytically in an ever-changing, problem-solving environment. Proficient leadership, communication (written and oral) and interpersonal skills. Ability to travel 30% of the time. Ability to walk King’s values of excellence; dignity; saying it like it is in a way it can be heard; and curiosity, collaboration, critical thinking and emotional intelligence. King's Hawaiian is an equal opportunity employer. 
We celebrate diversity and are committed to creating an inclusive environment for our `ohana."," Entry level "," Full-time "," Information Technology "," Food and Beverage Manufacturing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-petram-search-group-3486564679?refId=wWftybpyf1t1LXclHsm6uw%3D%3D&trackingId=VyL2uo5ZEjy32th5lBj%2FUA%3D%3D&position=22&pageNum=2&trk=public_jobs_jserp-result_search-card," Petram Search Group ",https://www.linkedin.com/company/petram-search-group?trk=public_jobs_topcard-org-name," Atlanta Metropolitan Area "," 4 weeks ago "," Over 200 applicants ","Position Summary The Data Engineer role will be the technical liaison between multiple groups including a data science team, the engineering team, product management, and business stakeholders. You do not need any prior insurance knowledge; however, you must quickly dive deep into the insurance world and ask questions to become a subject matter expert. You will be responsible for building a data platform to facilitate the data science team. You must be a self-starter who can build out features such as a data pipeline from scratch. There will be support from both engineering and data science for any buildout. Responsibilities for Data Engineer • Create and maintain optimal data pipeline architecture • Assemble large, complex data sets that meet functional / non-functional business requirements. • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using AWS technologies, SQL, and Python. • Work with stakeholders including the Executive, Product, Data Science, and Engineering teams to assist with data-related technical issues and support their data infrastructure needs. 
• Work with data science and analytics teams to strive for greater functionality in our data systems. Qualifications for Data Engineer • Advanced working SQL knowledge and experience working with relational databases, strong query authoring (SQL) as well as working familiarity with a variety of databases (Snowflake, Redshift, MySQL, MSSQL, etc.) • Experience building and optimizing data pipelines, architecture and data sets. • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement. • Build processes supporting data transformation, data structures, metadata, dependency, and workload management. • A successful history of transforming, processing and extracting value from large, disconnected datasets from a variety of data sources (Flat files, Excel, databases, etc.) • Strong analytic skills related to working with unstructured datasets. • Strong project management and organizational skills. • Experience supporting and working with cross-functional teams in a dynamic environment. • Ability to mentor/guide/collaborate with other team members. • We are looking for a candidate with 3-5 years of experience in a Data Engineer role. They should also have experience using the following software/tools: o Experience with relational and MPP databases, including Snowflake and MySQL. o Experience developing software in an agile environment from the requirements stage to production o Experience with version control: git o Experience with container technologies: Docker o Experience with data pipeline and workflow management tools: Airflow, Jenkins, AWS Glue, Azkaban, Luigi, etc. o Experience with AWS cloud services: EC2, ECS, Batch, S3, EMR, RDS, Redshift o Experience with other cloud services: Snowflake, Airflow o Experience with object-oriented/object function scripting languages: Python, Java, C++, etc. 
o Experience with data modeling and data warehouse design o Experience with data visualization tools (PowerBI, QuickSight) Education • BS Degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field."," Mid-Senior level "," Full-time "," Information Technology "," Staffing and Recruiting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/junior-data-engineer-at-st-louis-cardinals-3527720783?refId=wWftybpyf1t1LXclHsm6uw%3D%3D&trackingId=g869dSveWM2IAzCV5iKs%2Fg%3D%3D&position=23&pageNum=2&trk=public_jobs_jserp-result_search-card," Petram Search Group ",https://www.linkedin.com/company/petram-search-group?trk=public_jobs_topcard-org-name," Atlanta Metropolitan Area "," 4 weeks ago "," Over 200 applicants "," Position SummaryThe Data Engineer role will be the technical liaison between multiple groups including a data science team, the engineering team, product management, and business stakeholders. You do not need any insurance knowledge prior, however, you must quickly dive deep into the insurance world and ask questions to become a subject matter expert. You will be responsible for building a data platform to facilitate the data science team. You must be a self-starter that can build out features such as a data pipeline from scratch. 
There will be support from both engineering and data science for any buildout.Responsibilities for Data Engineer• Create and maintain optimal data pipeline architecture• Assemble large, complex data sets that meet functional / non-functional business requirements.• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.• Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using AWS technologies, SQL, and Python.• Work with stakeholders including the Executive, Product, Data Science, and Engineering teams to assist with data-related technical issues and support their data infrastructure needs.• Work with data science and analytics teams to strive for greater functionality in our data systems.Qualifications for Data Engineer• Advanced working SQL knowledge and experience working with relational databases, strong query authoring (SQL) as well as working familiarity with a variety of databases (Snowflake, Redshift, MySQL, MSSQL, etc.)• Experience building and optimizing data pipelines, architecture and data sets.• Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.• Build processes supporting data transformation, data structures, metadata, dependency, and workload management.• A successful history of transforming, processing and extracting value from large, disconnected datasets from a variety of data sources (Flat files, Excel, databases, etc.)• Strong analytic skills related to working with unstructured datasets.• Strong project management and organizational skills.• Experience supporting and working with cross-functional teams in a dynamic environment.• Ability to mentor/guide/collaborate with other team members.• We are looking for a candidate with 3-5 
years of experience in a Data Engineer role. They should also have experience using the following software/tools: o Experience with relational and MPP databases, including Snowflake and MySQL. o Experience developing software in an agile environment from the requirements stage to production o Experience with version control: git o Experience with container technologies: Docker o Experience with data pipeline and workflow management tools: Airflow, Jenkins, AWS Glue, Azkaban, Luigi, etc. o Experience with AWS cloud services: EC2, ECS, Batch, S3, EMR, RDS, Redshift o Experience with other cloud services: Snowflake, Airflow o Experience with object-oriented/object function scripting languages: Python, Java, C++, etc. o Experience with data modeling and data warehouse design o Experience with data visualization tools (PowerBI, QuickSight)Education• BS Degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. "," Mid-Senior level "," Full-time "," Information Technology "," Staffing and Recruiting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-snowflake-expert-at-experfy-3516887640?refId=wWftybpyf1t1LXclHsm6uw%3D%3D&trackingId=q9IzZ26SfApdZB%2BJxQ0KIA%3D%3D&position=24&pageNum=2&trk=public_jobs_jserp-result_search-card," Petram Search Group ",https://www.linkedin.com/company/petram-search-group?trk=public_jobs_topcard-org-name," Atlanta Metropolitan Area "," 4 weeks ago "," Over 200 applicants "," Position SummaryThe Data Engineer role will be the technical liaison between multiple groups including a data science team, the engineering team, product management, and business stakeholders. You do not need any insurance knowledge prior, however, you must quickly dive deep into the insurance world and ask questions to become a subject matter expert. You will be responsible for building a data platform to facilitate the data science team. 
You must be a self-starter who can build out features such as a data pipeline from scratch. There will be support from both engineering and data science for any buildout.Responsibilities for Data Engineer• Create and maintain optimal data pipeline architecture• Assemble large, complex data sets that meet functional / non-functional business requirements.• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.• Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using AWS technologies, SQL, and Python.• Work with stakeholders including the Executive, Product, Data Science, and Engineering teams to assist with data-related technical issues and support their data infrastructure needs.• Work with data science and analytics teams to strive for greater functionality in our data systems.Qualifications for Data Engineer• Advanced working SQL knowledge and experience working with relational databases, strong query authoring (SQL) as well as working familiarity with a variety of databases (Snowflake, Redshift, MySQL, MSSQL, etc.)• Experience building and optimizing data pipelines, architecture and data sets.• Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.• Build processes supporting data transformation, data structures, metadata, dependency, and workload management.• A successful history of transforming, processing and extracting value from large, disconnected datasets from a variety of data sources (Flat files, Excel, databases, etc.)• Strong analytic skills related to working with unstructured datasets.• Strong project management and organizational skills.• Experience supporting and working with cross-functional teams in a dynamic environment.• Ability to 
mentor/guide/collaborate with other team members.• We are looking for a candidate with 3-5 years of experience in a Data Engineer role. They should also have experience using the following software/tools: o Experience with relational and MPP databases, including Snowflake and MySQL. o Experience developing software in an agile environment from the requirements stage to production o Experience with version control: git o Experience with container technologies: Docker o Experience with data pipeline and workflow management tools: Airflow, Jenkins, AWS Glue, Azkaban, Luigi, etc. o Experience with AWS cloud services: EC2, ECS, Batch, S3, EMR, RDS, Redshift o Experience with other cloud services: Snowflake, Airflow o Experience with object-oriented/object function scripting languages: Python, Java, C++, etc. o Experience with data modeling and data warehouse design o Experience with data visualization tools (PowerBI, QuickSight)Education• BS Degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. "," Mid-Senior level "," Full-time "," Information Technology "," Staffing and Recruiting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-data-affect-3509803347?refId=wWftybpyf1t1LXclHsm6uw%3D%3D&trackingId=3IlRVkJabMChSvY8iM5WFQ%3D%3D&position=25&pageNum=2&trk=public_jobs_jserp-result_search-card," Data Affect ",https://www.linkedin.com/company/dataaffect?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," Over 200 applicants ","About Data Affect : We are a boutique data/service management firm specializing in the delivery of data governance, enterprise data strategy, solutions architecture, data warehousing, data integrations, data security & privacy management, IT service management, business analysis (and) agile project management services to diverse clients across multiple industries. 
Role: Data Engineer Location: Remote Duration: Fulltime Education & Experience Bachelor’s degree in the field of business, computer science or analytics discipline 6 years’ experience with data & analytics toolsets, including data engineering and data management platforms Experience with Talend data integration tool to implement ETL processes to onboard new data source into Data Lake and perform simple to complex transformations. Develop and support data quality monitoring framework, developing data quality reporting from strategy to operation. Expert with SQL and experience with at least one programming language (Shell Scripting, Python) Experience with relational databases (Snowflake, Oracle, and SQL Server) and strong working knowledge of Data Architecture, Data Warehousing and Data Lake concepts required. Experience in developing data services APIs to share data infrastructure that enables real-time analysis of data Previous experience working on a collaborative Agile delivery team Demonstrated experience delivering high quality certified data sources as inputs to analytic solutions. 
Experience working with Cloud data warehouse solutions and other native Azure/ AWS data warehouse technologies Proficient with data warehouse architecture and data pipelines (ELT/ETL, data modeling) Database development experience using Snowflake and experience with relational, NoSQL, and cloud database technologies Prefer prior exposure to machine learning, data science, computer vision, artificial intelligence, statistics, and/or applied mathematics Hands-on experience with software design practices, relational databases, and data-driven reporting and analytics applications Knowledge of Machine Learning techniques, such as regression and clustering, is a plus"," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-adobe-3459050990?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=kQTjSy1ypkm9dWTEmYd1gA%3D%3D&position=7&pageNum=0&trk=public_jobs_jserp-result_search-card," Adobe ",https://www.linkedin.com/company/adobe?trk=public_jobs_topcard-org-name," New York, NY "," 14 hours ago "," Over 200 applicants ","Our Company Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours! 
Adobe Customer Solutions is looking for a full time Data Engineer with experience in building data integrations using AWS technology stack as part of the team's Data as a Service portfolio for Adobe’s Digital Experience enterprise customers. Customer facing Engineers who enjoy tackling complex technical challenges, have a passion for delighting customers and who are self-motivated to push themselves in a team oriented culture will thrive in our environment What You'll Do Collaborate with Data architects, Enterprise architects, Solution consultants and Product engineering teams to gather customer data integration requirements, conceptualize solutions & build required technology stack Collaborate with enterprise customer's engineering team to identify data sources, profile and quantify quality of data sources, develop tools to prepare data and build data pipelines for integrating customer data sources and third party data sources with Adobe solutions Develop new features and improve existing data integrations with customer data ecosystem Encourage team to think out-of-the-box and overcome engineering obstacles while incorporating new innovative design principles. Collaborate with a Project Manager to bill and forecast time for customer solutions What You Need To Succeed Proven experience in architecting and building fault tolerant and scalable data processing integrations using AWS. Ability to identify and resolve problems associated with production grade large scale data processing workflows. Experience leveraging REST APIs to serve and consume data. Proven track record in Python programming language Software development experience working with Apache Airflow, Spark, SQL / No SQL database. Deep understanding of streaming architecture using tools such as Spark-Streaming, Kinesis and Kafka. Experience using Docker, Containerization and Orchestration. 
BS/MS degree in Computer Science or equivalent proven experience At least 3 years of experience as a data engineer or in a similar role. This client-facing position requires working with a variety of stakeholders in different roles; having applicable experience is important. Previous experience in building and deploying solutions using CI/CD. Passion for crafting Intelligent data pipelines using Microservices/Event Driven Architecture under strict deadlines. Strong capacity to handle numerous projects in parallel is a must. At Adobe, you will be immersed in an exceptional work environment that is recognized throughout the world on Best Companies lists. You will also be surrounded by colleagues who are committed to helping each other grow through our outstanding Check-In approach where feedback flows freely. If you’re looking to make an impact, Adobe's the place for you. Discover what our employees are saying about their career experiences on the Adobe Life blog and explore the significant benefits we offer. Adobe is an equal opportunity employer. We welcome and encourage diversity in the workplace regardless of race, gender, religion, age, sexual orientation, gender identity, disability or veteran status. Our compensation reflects the cost of labor across several  U.S. geographic markets, and we pay differently based on those defined markets. The U.S. pay range for this position is $101,500 -- $194,300 annually. Pay within this range varies by work location and may also depend on job-related knowledge, skills, and experience. Your recruiter can share more about the specific salary range for the job location during the hiring process. At Adobe, for sales roles starting salaries are expressed as total target compensation (TTC = base + commission), and short-term incentives are in the form of sales commission plans. Non-sales roles starting salaries are expressed as base salary and short-term incentives are in the form of the Annual Incentive Plan (AIP). 
In addition, certain roles may be eligible for long-term incentives in the form of a new hire equity award."," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting, Advertising Services, and Software Development " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-teckpert-3515390151?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=OXhAJn%2FCj%2B9s7jh6wIWscA%3D%3D&position=15&pageNum=0&trk=public_jobs_jserp-result_search-card," TECKpert ",https://www.linkedin.com/company/teckpert?trk=public_jobs_topcard-org-name," Miami, FL "," 1 week ago "," Be among the first 25 applicants ","We are looking for a Data Engineer to support our client based in Miami Beach, FL US BASED CANDIDATES ONLY. *no third parties, C2C and no sponsorship* This is a Hybrid position. Candidates must be located in or near Miami Beach, FL. Who we are Founded in 2009 and headquartered in beautiful Miami, FL, TECKpert is a tech consulting and staff augmentation firm. At TECKpert, we offer a contingent workforce built for any size digital transformation project. Experts in design, development, IT, analytics and marketing, provide innovative digital solutions to achieve success in our new economy. Our leaders identify the technical talent best suited to bolster our client’s capabilities, across all industries, including, healthcare, government, finance, legal, real estate, and startups. The project TECKpert seeks to hire a Data Engineer to support our client, a government agency based in Miami Beach, FL and to work with the Data Integration and Analytics team. The Data Management and Integrations Division is requesting to recruit for two Data Engineer contractors on behalf of Miami Dade Corrections Department (MDCR). Compensation and Term This opportunity is for a full-time, 6 months contract position with possible extensions and pay commensurate with experience up to $75/hour or $156,000 per year. 
Medical, dental, vision and life insurance available after 30 days of hire. Qualifications You Need A successful Data Engineer candidate possesses or provides the following: MINIMUM SKILLS REQUIRED:  Minimum of eight years (8yrs) of experience in developing and supporting a medium to large organization’s database systems, including database structure systems, data management resources, data mining and data models, is required.  Implement data pipelines to build Azure Analysis Services reporting data models.  Perform complex analyses of business data and processes.  Provide analytic and strategic models to address key questions across a portfolio of businesses.  Collect, organize, manipulate, and analyze a wide variety of data.  Track and report on the performance of the deployed models.  Assist in the development of dashboards to help executives in strategic decision making.  Perform and interpret data studies and product experiments pertaining to new data sources or new uses for existing data sources.  Develop prototypes, proof of concepts, measures, KPIs, derived and custom fields.  A plus: experience with Corrections, Inmate related data MINIMUM EDUCATION  Bachelor’s degree.  Additional related work experience and/or certifications may substitute for the required education on a year-for-year basis. Working with us Working with TECKpert means more options. As new opportunities arise, you tell us what you think is a good fit for you. What industries interest you most? Do you prefer an on-location, 9-5? Or would you want a flexible schedule and remote work? We proudly offer a wide variety of roles. Many of our TECKperts enjoy coworking and skills training coupled with the stability of full-time employment. We believe TECKpert gives today's digital professionals an agile path to start and advance their career. All of our opportunities require at least 20 hours per week and can be one to twelve months in length. 
Choose the opportunity that matches your interest and desired cadence. Powered by JazzHR 8ukD68FP16"," Mid-Senior level "," Full-time "," Information Technology "," Internet Publishing " Data Engineer,United States,Junior Data Engineer,https://www.linkedin.com/jobs/view/junior-data-engineer-at-digible-inc-3494570154?refId=1aGkY%2FEQtGHu5%2BqZlqgy7Q%3D%3D&trackingId=eXO2HAI727%2B9RNyXvn2d1g%3D%3D&position=19&pageNum=0&trk=public_jobs_jserp-result_search-card," Digible, Inc ",https://www.linkedin.com/company/digibleinc?trk=public_jobs_topcard-org-name," Denver, CO "," 3 weeks ago "," 107 applicants ","Company Overview Privately owned and operated, Digible was founded in 2017 with a mission to bring sophisticated digital marketing solutions to the multifamily industry. We offer a comprehensive suite of digital services as well as a predictive analytics platform, Fiona, that is the first of its kind. At Digible, Inc. we love to celebrate our diverse group of hardworking employees – and it shows. We're proud to say that for 2021, we are ranked #1 Top Workplace in Colorado AND #1 Workplace for ""Best New Ideas"". We pride ourselves on our collaborative, transparent, and authentic culture. These values are pervasive throughout every step of a Digible employee's journey. Starting with our interviews and continuing through our weekly All Hands Transparency Round-up, values are at the heart of working at Digible. We value diversity and believe forming teams in which everyone can be their authentic self is key to our success. We encourage people from underrepresented backgrounds and different industries to apply. Come join us and find out what the best work of your career could look like here at Digible. The Role Digible, Inc. is looking for a Junior Data Engineer to join our team! We are seeking a highly motivated and detail-oriented Junior Data Engineer to join our growing team. 
As a Junior Data Engineer, you will be responsible for assisting with the development, implementation, and maintenance of our data infrastructure. You will work closely with our Data Engineering team to ensure data quality, accuracy, and completeness across all of our data sources. Responsibilities: Assist with the design, development, and maintenance of our data infrastructure using Python, Prefect, Snowflake, Google Cloud, and AWS. Collaborate with our Data Engineering team to ensure data quality, accuracy, and completeness across all of our data sources. Develop and maintain ETL pipelines to extract, transform, and load data from various sources into our data warehouse. Assist with the implementation of data security and governance policies. Monitor and troubleshoot data issues as they arise. Continuously seek ways to improve our data infrastructure and processes. Requirements: Bachelor's degree in Computer Science, Data Science, or a related field. Experience with Python, SQL, and data modeling. Familiarity with data warehousing concepts and ETL processes. Experience working with cloud-based platforms such as Google Cloud and AWS. Strong problem-solving skills and attention to detail. Excellent communication and teamwork skills. Expectations: Ability to learn and adapt quickly to new technologies and processes. Work collaboratively with our Data Engineering team to meet project goals and deadlines. Demonstrate a high level of professionalism and dedication to quality work. Communicate effectively with cross-functional teams to ensure project success. Success Metrics: Deliver high-quality data infrastructure projects on time and within scope. Ensure data accuracy and completeness across all data sources. Maintain a high level of data security and governance. Continuously seek ways to improve our data infrastructure and processes. Collaborate effectively with cross-functional teams to meet project goals and objectives. 
Core Values Authenticity - The commitment to be steadfast and genuine with our actions and communication toward everyone we touch. Curiosity - The belief that a deep and fundamental curiosity (the ""why"") in our work is vital to company innovation and evolution. Focus - The collective will to remain completely devoted and ultimately accountable to our deliverables. Humility - The recognition and daily practice that ""we"" is always greater than ""I"". Happiness - The decision to prioritize passion and love for what we do above everything else. Perks and such: 4-Day Work Week (32 Hour Work Week) WFA (Work From Anywhere) Profit Sharing Bonus We offer 3 weeks of PTO as well as Sick leave, and Bereavement. We offer 10 paid holidays (New Years Eve, New Years Day, MLK day, Memorial Day, Independence Day, Labor Day, Thanksgiving + day after, Christmas Eve, and Christmas) 401(k) + Match 50% employer paid health benefits, including Medical, Dental, and Vision. We provide $75/ month reimbursement for Physical Wellness We provide $75/ month reimbursement for Mental Wellness $1000/year travel fund for employees who have been with Digible 3+ years Monthly subscription for financial wellness Dog-Friendly Office Paid Parental Leave Company Sponsored Social Events Company Provided Lunches, Snacks Employee Development Program"," Mid-Senior level "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-clover-health-3464127604?refId=wWftybpyf1t1LXclHsm6uw%3D%3D&trackingId=Ul6o81DMvSbBEI8E5O4sZw%3D%3D&position=20&pageNum=2&trk=public_jobs_jserp-result_search-card," Clover Health ",https://www.linkedin.com/company/cloverhealth?trk=public_jobs_topcard-org-name," Nashville, TN "," 3 weeks ago "," Over 200 applicants ","Clover is reinventing health insurance by working to keep people healthier. We value diversity — in backgrounds and in experiences. 
Healthcare is a universal concern, and we need people from all backgrounds and swaths of life to help build the future of healthcare. Clover's Data Team is charged with applying our data—our most important asset—to produce value for our members. From observing how the member experience impacts clinical outcomes to making our home visits more efficient and effective, our team pushes out automated and statistical insights central to executing on our main mission. And our impact is tremendous: you'll be able to point to one of our members and say, “I helped make that person's life better.” The Job We are looking for a Data Engineer to join our team. You'll work on the development of data pipelines and tools to support our analytics and machine learning development. Applying insights through data is a core part of our thesis as a company — and you will work on a team that is a central part of helping to deliver that promise through making a wide variety of data easily accessible for internal and external consumers. We work primarily in Python and our data is primarily stored in Postgres. You will work with data scientists, other engineers, and healthcare professionals in a unique environment building tools to improve the health of real people. You should have extensive experience leading data warehousing projects with advanced knowledge in data cleansing, ingestion, ETL and data governance. As a Data Engineer, You Will Collaborate closely with operations, IT and vendor partners to understand the data landscape and contribute to the vision, development and implementation of the Data Warehouse solution. Recommend technologies and tools to support the future state architecture. Develop standards, processes and procedures that align with best practices in data governance and data management. Be responsible for logical and physical data modeling, load and query performance. Create and manage ETL packages, triggers, stored procedures, views, and SQL transactions. 
Develop new secure data feeds with external parties as well as internal applications. Perform regular analysis and QA, diagnose ETL and database related issues, perform root cause analysis, and recommend corrective actions to management. Work with cross-functional teams to support the design, development, implementation, monitoring, and maintenance of new ETL programs. You Will Love This Job If You value collaboration and feedback. You can communicate technical vision in clear terms— to your team and peers as well as outside of the engineering team. You are quick to jump in to help fix things that are broken and you enjoy making sustainable systems. You are happy to fill in the gaps to reach a goal where necessary, even if it is outside of your job description. You have a genuine interest in what good technology can do to help people and have a positive attitude about tackling hard problems in an important industry. You enjoy working in a fluid environment, defining and owning priorities that adapt to our larger goals. You can bring clarity to ambiguity while remaining open-minded to new information that might change your mind. You Should Get In Touch If You have a Bachelor’s degree in Computer Science or related field along with 3+ years of experience in ETL programming utilizing SSIS packages, DTS, stored procedures, and SQL scripts. You have professional experience working in a healthcare setting. Health Plan knowledge preferred. You have expertise in most of these technologies: Python, Postgres, Snowflake, DBT, Airflow, BigQuery, Data Governance, some experience with analytics, data science, ML collaboration tools such as Tableau, Mode, and Looker. Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records. We are an E-Verify company. About Clover: We are reinventing health insurance by combining the power of data with human empathy to keep our members healthier. 
We believe the healthcare system is broken, so we've created custom software and analytics to empower our clinical staff to intervene and provide personalized care to the people who need it most. We always put our members first, and our success as a team is measured by the quality of life of the people we serve. Those who work at Clover are passionate and mission-driven individuals with diverse areas of expertise, working together to solve the most complicated problem in the world: healthcare. From Clover’s inception, Diversity & Inclusion have always been key to our success. We are an Equal Opportunity Employer and our employees are people with different strengths, experiences and backgrounds, who share a passion for improving people's lives. Diversity not only includes race and gender identity, but also age, disability status, veteran status, sexual orientation, religion and many other parts of one’s identity. All of our employee’s points of view are key to our success, and inclusion is everyone's responsibility. "," Entry level "," Full-time "," Information Technology "," Hospitals and Health Care " Data Engineer,United States,Junior Data Engineer,https://www.linkedin.com/jobs/view/junior-data-engineer-at-st-louis-cardinals-3527720783?refId=wWftybpyf1t1LXclHsm6uw%3D%3D&trackingId=g869dSveWM2IAzCV5iKs%2Fg%3D%3D&position=23&pageNum=2&trk=public_jobs_jserp-result_search-card," St. Louis Cardinals ",https://www.linkedin.com/company/st.-louis-cardinals?trk=public_jobs_topcard-org-name," St Louis, MO "," 1 week ago "," Be among the first 25 applicants ","Summary Of Responsibilities The role of the Junior Data Engineer will be to maintain and further the development of modern, scalable baseball data processing systems for the St. Louis Cardinals. This person will collaborate with the Baseball Systems group to ensure high quality data is available to scouts, coaches, players, and other baseball decision-makers. 
This person should be detail-oriented, enjoy collaborating with others, communicate effectively both verbally and in writing, and have a growth mindset and a love for the game of baseball. Education & Experience Required Bachelor's degree in a technical field, or a combination of relevant education and work experience Experience identifying, triaging, and resolving data issues Interest in modern data system architectures, design patterns and best practices Ability to apply creative solutions to challenging technical tasks Ability to work independently in a fast-paced environment Technical knowledge and experience including: Proficiency with modern programming languages/frameworks (Python, Go/golang, TypeScript, Node.js preferred) Proficiency with data-related concepts such as data pipelines, databases, SQL, JSON, and REST APIs Education & Experience Preferred Professional experience in a software engineering, data reliability, and/or a quality assurance environment Technical experience/familiarity with: DevOps tools including Git and CI/CD tools Cloud computing & cloud technologies including serverless and event driven architectures Kubernetes and container-based environments"," Mid-Senior level "," Full-time "," Information Technology "," Spectator Sports " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/bi-developer-data-engineer-at-tech-mahindra-3504241830?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=L62CLVIlXijr79YL2dZVaA%3D%3D&position=1&pageNum=3&trk=public_jobs_jserp-result_search-card," Data Affect ",https://www.linkedin.com/company/dataaffect?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," Over 200 applicants "," About Data Affect : We are a boutique data/service management firm specializing in the delivery of data governance, enterprise data strategy, 
solutions architecture, data warehousing, data integrations, data security & privacy management, IT service management, business analysis (and) agile project management services to diverse clients across multiple industries. Role: Data Engineer. Location: Remote. Duration: Full-time. Education & Experience: Bachelor’s degree in the field of business, computer science or analytics discipline. 6 years’ experience with data & analytics toolsets, including data engineering and data management platforms. Experience with Talend data integration tool to implement ETL processes to onboard new data sources into the Data Lake and perform simple to complex transformations. Develop and support a data quality monitoring framework, developing data quality reporting from strategy to operation. Expert with SQL and experience with at least one programming language (Shell Scripting, Python). Experience with relational databases (Snowflake, Oracle, and SQL Server) and strong working knowledge of Data Architecture, Data Warehousing and Data Lake concepts required. Experience in developing data services APIs to share data infrastructure that enables real-time analysis of data. Previous experience working on a collaborative Agile delivery team. Demonstrated experience delivering high quality certified data sources as inputs to analytic solutions. Experience working with Cloud data warehouse solutions and other native Azure/AWS data warehouse technologies. Proficient with data warehouse architecture and data pipelines (ELT/ETL, data modeling). Database development experience using Snowflake and experience with relational, NoSQL, and cloud database technologies. Prefer prior exposure to machine learning, data science, computer vision, artificial intelligence, statistics, and/or applied mathematics. Hands-on experience with software design practices, relational databases, and data-driven reporting and analytics applications. Knowledge of machine learning techniques, such as regression and clustering, is a plus "," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer - Intern (Summer 2023),https://www.linkedin.com/jobs/view/data-engineer-intern-summer-2023-at-cloudflare-3515394848?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=U0ZyZp77ekWG9HjU8jvpQg%3D%3D&position=2&pageNum=3&trk=public_jobs_jserp-result_search-card," Cloudflare ",https://www.linkedin.com/company/cloudflare?trk=public_jobs_topcard-org-name," San Francisco, CA "," 1 week ago "," Over 200 applicants ","About Us At Cloudflare, we have our eyes set on an ambitious goal: to help build a better Internet. Today the company runs one of the world’s largest networks that powers approximately 25 million Internet properties, for customers ranging from individual bloggers to SMBs to Fortune 500 companies. Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks. Cloudflare was named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company. We realize people do not fit into neat boxes. We are looking for curious and empathetic individuals who are committed to developing themselves and learning new skills, and we are ready to help you do that. We cannot complete our mission without building a diverse and inclusive team. We hire the best people based on an evaluation of their potential and support them throughout their time at Cloudflare. Come join us! About The Department This internship is targeting students with experience and interest in Data Engineering. 
The Data Engineer Intern delivers full-stack data solutions across the entire data processing pipeline. This role relies on systems engineering principles to design and implement solutions that span the data lifecycle - collect, ingest, process, store, persist, access, and deliver data at scale and at speed. It includes knowledge of local, distributed, and cloud-based technologies, data virtualization, and all security and authentication mechanisms required to protect the data. What you'll do Work through all stages of a data solution lifecycle, e.g., analyze / profile data, create conceptual, logical and physical data model designs, architect and design ETL, reporting and analytics. Knowledge of modern enterprise data architectures, design patterns, and data toolsets and the ability to apply them. Identify key metrics and build exec-facing dashboards to track progress of the business and its highest priority initiatives. Identify key business levers, establish cause & effect, perform analyses, and communicate key findings to various stakeholders to facilitate data-driven decision-making. Work closely with business teams such as Finance, Sales, Marketing, Legal, Customer Support, Product, and Engineering. Examples Of Desirable Skills, Knowledge And Experience Pursuing an M.S. degree in Computer Science, Data Analytics, or a related field Proficiency in data modeling techniques and understanding of normalization Software engineering experience Strong problem solving, conceptualization, and communication skills Distributed data systems (e.g., Hadoop, Hive, Spark, Streaming) Data APIs (GraphQL) Database systems (SQL and NoSQL) Languages: SQL, Python, Scala, Golang, Shell Scripting, JavaScript Full-stack (frameworks such as React, AngularJS and Node.js) Role will be located in Austin, TX or San Francisco, CA. What Makes Cloudflare Special? We’re not just a highly ambitious, large-scale technology company. 
We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet. Project Galileo : We equip politically and artistically important organizations and journalists with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers, at no cost. Athenian Project : We created Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Path Forward Partnership : Since 2016, we have partnered with Path Forward, a nonprofit organization, to create 16-week positions for mid-career professionals who want to get back to the workplace after taking time off to care for a child, parent, or loved one. 1.1.1.1 : We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released. Here’s the deal - we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers. Sound like something you’d like to be a part of? We’d love to hear from you! This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license. Cloudflare is proud to be an equal opportunity employer. 
We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person's, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107."," Associate "," Full-time "," Information Technology "," IT Services and IT Consulting, Computer and Network Security, and Technology, Information and Internet " Data Engineer,United States,Data Engineer - Intern (Summer 2023),https://www.linkedin.com/jobs/view/data-engineer-database-engineering-at-experfy-3516891520?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=LSW5kUWG%2BaDjebJqMfXJdw%3D%3D&position=6&pageNum=3&trk=public_jobs_jserp-result_search-card," Cloudflare ",https://www.linkedin.com/company/cloudflare?trk=public_jobs_topcard-org-name," San Francisco, CA "," 1 week ago "," Over 200 applicants "," About Us At Cloudflare, we have our eyes set on an ambitious goal: to help build a better Internet. 
Today the company runs one of the world’s largest networks, which powers approximately 25 million Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies. Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks. Cloudflare was named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company. We realize people do not fit into neat boxes. We are looking for curious and empathetic individuals who are committed to developing themselves and learning new skills, and we are ready to help you do that. We cannot complete our mission without building a diverse and inclusive team. We hire the best people based on an evaluation of their potential and support them throughout their time at Cloudflare. Come join us! About The Department: This internship targets students with experience and interest in Data Engineering. The Data Engineer Intern delivers full-stack data solutions across the entire data processing pipeline. This role relies on systems engineering principles to design and implement solutions that span the data lifecycle: collect, ingest, process, store, persist, access, and deliver data at scale and at speed. It includes knowledge of local, distributed, and cloud-based technologies, data virtualization, and all security and authentication mechanisms required to protect the data. What you'll do: Work through all stages of a data solution lifecycle, e.g., analyze/profile data; create conceptual, logical, and physical data model designs; architect and design ETL, reporting, and analytics. 
Knowledge of modern enterprise data architectures, design patterns, and data toolsets, and the ability to apply them. Identify key metrics and build exec-facing dashboards to track progress of the business and its highest-priority initiatives. Identify key business levers, establish cause and effect, perform analyses, and communicate key findings to various stakeholders to facilitate data-driven decision-making. Work closely with business teams such as Finance, Sales, Marketing, Legal, Customer Support, Product, and Engineering. Examples Of Desirable Skills, Knowledge And Experience: Pursuing an M.S. in Computer Science, Data Analytics, or a related field. Proficiency in data modeling techniques and understanding of normalization. Software engineering experience. Strong problem-solving, conceptualization, and communication skills. Distributed data systems (e.g., Hadoop, Hive, Spark, streaming). Data APIs (GraphQL). Database systems (SQL and NoSQL). Languages: SQL, Python, Scala, Golang, shell scripting, JavaScript. Full stack (frameworks such as React, AngularJS, and NodeJS). Role will be located in Austin, TX or San Francisco, CA. What Makes Cloudflare Special? We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet. Project Galileo: We equip politically and artistically important organizations and journalists with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers, at no cost. 
Athenian Project: We created Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Path Forward Partnership: Since 2016, we have partnered with Path Forward, a nonprofit organization, to create 16-week positions for mid-career professionals who want to get back to the workplace after taking time off to care for a child, parent, or loved one. 1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure, and privacy-centric public DNS resolver. This is available publicly for everyone to use; it is the first consumer-focused service Cloudflare has ever released. Here’s the deal: we don’t store client IP addresses. Never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers. Sound like something you’d like to be a part of? We’d love to hear from you! This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license. Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person's, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. 
We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St., San Francisco, CA 94107. "," Associate "," Full-time "," Information Technology "," IT Services and IT Consulting, Computer and Network Security, and Technology, Information and Internet " Data Engineer,United States,Data Engineer - Intern (Summer 2023),https://www.linkedin.com/jobs/view/data-engineer-database-engineering-at-experfy-3516885995?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=%2FZt1cHNw6W3OPVHkXyq%2FsQ%3D%3D&position=7&pageNum=3&trk=public_jobs_jserp-result_search-card," Cloudflare ",https://www.linkedin.com/company/cloudflare?trk=public_jobs_topcard-org-name," San Francisco, CA "," 1 week ago "," Over 200 applicants "," About Us: At Cloudflare, we have our eyes set on an ambitious goal: to help build a better Internet. Today the company runs one of the world’s largest networks, which powers approximately 25 million Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies. Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks. 
Cloudflare was named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company. We realize people do not fit into neat boxes. We are looking for curious and empathetic individuals who are committed to developing themselves and learning new skills, and we are ready to help you do that. We cannot complete our mission without building a diverse and inclusive team. We hire the best people based on an evaluation of their potential and support them throughout their time at Cloudflare. Come join us! About The Department: This internship targets students with experience and interest in Data Engineering. The Data Engineer Intern delivers full-stack data solutions across the entire data processing pipeline. This role relies on systems engineering principles to design and implement solutions that span the data lifecycle: collect, ingest, process, store, persist, access, and deliver data at scale and at speed. It includes knowledge of local, distributed, and cloud-based technologies, data virtualization, and all security and authentication mechanisms required to protect the data. What you'll do: Work through all stages of a data solution lifecycle, e.g., analyze/profile data; create conceptual, logical, and physical data model designs; architect and design ETL, reporting, and analytics. Knowledge of modern enterprise data architectures, design patterns, and data toolsets, and the ability to apply them. Identify key metrics and build exec-facing dashboards to track progress of the business and its highest-priority initiatives. Identify key business levers, establish cause and effect, perform analyses, and communicate key findings to various stakeholders to facilitate data-driven decision-making. Work closely with business teams such as Finance, Sales, Marketing, Legal, Customer Support, Product, and Engineering. 
Examples Of Desirable Skills, Knowledge And Experience: Pursuing an M.S. in Computer Science, Data Analytics, or a related field. Proficiency in data modeling techniques and understanding of normalization. Software engineering experience. Strong problem-solving, conceptualization, and communication skills. Distributed data systems (e.g., Hadoop, Hive, Spark, streaming). Data APIs (GraphQL). Database systems (SQL and NoSQL). Languages: SQL, Python, Scala, Golang, shell scripting, JavaScript. Full stack (frameworks such as React, AngularJS, and NodeJS). Role will be located in Austin, TX or San Francisco, CA. What Makes Cloudflare Special? We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet. Project Galileo: We equip politically and artistically important organizations and journalists with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers, at no cost. Athenian Project: We created Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Path Forward Partnership: Since 2016, we have partnered with Path Forward, a nonprofit organization, to create 16-week positions for mid-career professionals who want to get back to the workplace after taking time off to care for a child, parent, or loved one. 1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure, and privacy-centric public DNS resolver. This is available publicly for everyone to use; it is the first consumer-focused service Cloudflare has ever released. Here’s the deal: we don’t store client IP addresses. Never, ever. 
We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers. Sound like something you’d like to be a part of? We’d love to hear from you! This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license. Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person's, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St., San Francisco, CA 94107. 
"," Associate "," Full-time "," Information Technology "," IT Services and IT Consulting, Computer and Network Security, and Technology, Information and Internet " Data Engineer,United States,Data Engineer - Intern (Summer 2023),https://www.linkedin.com/jobs/view/data-engineer-at-pepsico-3485243688?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=fCcvFSci6khftjSJ2V%2F4IQ%3D%3D&position=8&pageNum=3&trk=public_jobs_jserp-result_search-card," Cloudflare ",https://www.linkedin.com/company/cloudflare?trk=public_jobs_topcard-org-name," San Francisco, CA "," 1 week ago "," Over 200 applicants "," About Us: At Cloudflare, we have our eyes set on an ambitious goal: to help build a better Internet. Today the company runs one of the world’s largest networks, which powers approximately 25 million Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies. Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks. Cloudflare was named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company. We realize people do not fit into neat boxes. We are looking for curious and empathetic individuals who are committed to developing themselves and learning new skills, and we are ready to help you do that. We cannot complete our mission without building a diverse and inclusive team. We hire the best people based on an evaluation of their potential and support them throughout their time at Cloudflare. 
Come join us! About The Department: This internship targets students with experience and interest in Data Engineering. The Data Engineer Intern delivers full-stack data solutions across the entire data processing pipeline. This role relies on systems engineering principles to design and implement solutions that span the data lifecycle: collect, ingest, process, store, persist, access, and deliver data at scale and at speed. It includes knowledge of local, distributed, and cloud-based technologies, data virtualization, and all security and authentication mechanisms required to protect the data. What you'll do: Work through all stages of a data solution lifecycle, e.g., analyze/profile data; create conceptual, logical, and physical data model designs; architect and design ETL, reporting, and analytics. Knowledge of modern enterprise data architectures, design patterns, and data toolsets, and the ability to apply them. Identify key metrics and build exec-facing dashboards to track progress of the business and its highest-priority initiatives. Identify key business levers, establish cause and effect, perform analyses, and communicate key findings to various stakeholders to facilitate data-driven decision-making. Work closely with business teams such as Finance, Sales, Marketing, Legal, Customer Support, Product, and Engineering. Examples Of Desirable Skills, Knowledge And Experience: Pursuing an M.S. in Computer Science, Data Analytics, or a related field. Proficiency in data modeling techniques and understanding of normalization. Software engineering experience. Strong problem-solving, conceptualization, and communication skills. Distributed data systems (e.g., Hadoop, Hive, Spark, streaming). Data APIs (GraphQL). Database systems (SQL and NoSQL). Languages: SQL, Python, Scala, Golang, shell scripting, JavaScript. Full stack (frameworks such as React, AngularJS, and NodeJS). Role will be located in Austin, TX or San Francisco, CA. 
What Makes Cloudflare Special? We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet. Project Galileo: We equip politically and artistically important organizations and journalists with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers, at no cost. Athenian Project: We created Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. Path Forward Partnership: Since 2016, we have partnered with Path Forward, a nonprofit organization, to create 16-week positions for mid-career professionals who want to get back to the workplace after taking time off to care for a child, parent, or loved one. 1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure, and privacy-centric public DNS resolver. This is available publicly for everyone to use; it is the first consumer-focused service Cloudflare has ever released. Here’s the deal: we don’t store client IP addresses. Never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers. Sound like something you’d like to be a part of? We’d love to hear from you! This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license. Cloudflare is proud to be an equal opportunity employer. 
We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person's, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St., San Francisco, CA 94107. "," Associate "," Full-time "," Information Technology "," IT Services and IT Consulting, Computer and Network Security, and Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-attune-3515931919?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=cTAJVef%2BpwrruCn8I9Z%2F3w%3D%3D&position=9&pageNum=3&trk=public_jobs_jserp-result_search-card," Attune ",https://www.linkedin.com/company/attune-insurance-services-llc?trk=public_jobs_topcard-org-name," New York, NY "," 1 week ago "," 134 applicants ","Attune is making it easier, faster, and more reliable for small businesses to access insurance (think the pizza place down the street or the Main Street store). 
They all need insurance to protect their businesses, but when it comes time to get a policy, they're bogged down by hundreds of questions, lots of paperwork, and a seemingly never-ending timeline stretching for weeks or months. We are changing all that. By using data and technology expertise to understand risk, automating legacy processes, and creating a better user experience for brokers, we're able to provide policies that are more applicable and more transparent in just minutes. Simply put, we're pushing insurance into the future, focusing on accuracy, speed, and ease. Our mission and our team are growing. In staying true to our values, we've earned our spot on many workplace award lists. Attune is dedicated to creating a diverse team of curious, focused, and motivated people who are excited about changing the future of an entire industry. Job Description: As a Data Engineer joining our growing Data team, you will help maintain our existing data pipelines and internal web applications. You will also work with team members across the organization to develop and test new pipelines as we launch new products and integrate with third-party data sources. We work with various tools, including Python/pandas, PostgreSQL, Bash, AWS EC2, and S3. We are always open to using whatever tool is best for the job. 
Responsibilities: Maintain and improve batch ETL jobs, including SQL query optimization, bug fixes, and code improvements. Work with our data engineers, BI, and other teams across the business to develop and refine ETL processes. Understand and answer questions on the data our team maintains. Requirements: 3+ years of experience in an analytics, data science, or data engineering role. Strong Python and PostgreSQL skills. Solid understanding of relational database design and basic query optimization techniques. Experience working with Git, and GitLab or GitHub. Nice to have: Experience with the Linux CLI and shell programming. Understanding of CI/CD. Experience with AWS EC2 and S3."," Entry level "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-infosys-3473964566?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=NU96JpuvRQoJK1ivIh1dzA%3D%3D&position=10&pageNum=3&trk=public_jobs_jserp-result_search-card," Infosys ",https://in.linkedin.com/company/infosys?trk=public_jobs_topcard-org-name," Richardson, TX "," 3 weeks ago "," Over 200 applicants ","Infosys is looking for Data Engineers who must be polyglots with expertise in multiple technologies and can work as full-stack developers in complex engineering projects. Required Qualifications: Bachelor’s degree or foreign equivalent required from an accredited institution. 4+ years of IT experience. US Citizens and those authorized to work in the U.S. are encouraged to apply. 
We are unable to sponsor at this time. Multiple locations: Hartford, CT; Indianapolis, IN; Providence, RI; Raleigh, NC; Richardson, TX; Tempe, AZ. This is a full-time position with Infosys. Preferred Qualifications: Experience in end-to-end implementation of projects using Cloudera Hadoop, Spark, Hive, HBase, Sqoop, and Kafka. Strong programming knowledge in Scala or Python for Spark application development. Strong knowledge and hands-on experience in SQL and Unix shell scripting. Experience in data warehousing technologies and ETL/ELT implementations. Sound knowledge of software engineering design patterns and practices. Strong understanding of functional programming. Experience with Ranger, Atlas, Tez, Hive LLAP, Neo4j, NiFi, Airflow, or any DAG-based tools. Good hands-on experience with RESTful APIs. Good hands-on experience in SQL development. Knowledge and experience with cloud and containerization technologies: Azure, Kubernetes, OpenShift, and Docker. Experience with data visualization tools like Tableau, Kibana, etc. Experience with design and implementation of ETL/ELT frameworks for complex warehouses/marts. Knowledge of large data sets and experience with performance tuning and troubleshooting. Planning and coordination skills. Good communication and analytical skills. Experience and desire to work in a global delivery environment. 
Ability to work in a team in a diverse, multi-stakeholder environment."," Mid-Senior level "," Full-time "," Engineering and Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/lead-data-engineer-at-amwell-3509404246?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=swcHD%2FSRNJ%2FJOsKpyDVLPg%3D%3D&position=11&pageNum=3&trk=public_jobs_jserp-result_search-card," Infosys ",https://in.linkedin.com/company/infosys?trk=public_jobs_topcard-org-name," Richardson, TX "," 3 weeks ago "," Over 200 applicants "," Infosys is looking for Data Engineers who must be polyglots with expertise in multiple technologies and can work as full-stack developers in complex engineering projects. Required Qualifications: Bachelor’s degree or foreign equivalent required from an accredited institution. 4+ years of IT experience. US Citizens and those authorized to work in the U.S. are encouraged to apply. We are unable to sponsor at this time. Multiple locations: Hartford, CT; Indianapolis, IN; Providence, RI; Raleigh, NC; Richardson, TX; Tempe, AZ. This is a full-time position with Infosys. Preferred Qualifications: Experience in end-to-end implementation of projects using Cloudera Hadoop, Spark, Hive, HBase, Sqoop, and Kafka. Strong programming knowledge in Scala or Python for Spark application development. Strong knowledge and hands-on experience in SQL and Unix shell scripting. Experience in data warehousing technologies and ETL/ELT implementations. Sound knowledge of software engineering design patterns and practices. Strong understanding of functional programming. Experience with Ranger, Atlas, Tez, Hive LLAP, Neo4j, NiFi, Airflow, or any DAG-based tools. Good hands-on experience with RESTful APIs. Good hands-on experience in SQL development. Knowledge and experience with cloud and containerization technologies: Azure, Kubernetes, OpenShift, and Docker. Experience with data visualization tools like Tableau, Kibana, etc. Experience with design and implementation of ETL/ELT frameworks for complex warehouses/marts. Knowledge of large data sets and experience with performance tuning and troubleshooting. Planning and coordination skills. Good communication and analytical skills. Experience and desire to work in a global delivery environment. Ability to work in a team in a diverse, multi-stakeholder environment. "," Mid-Senior level "," Full-time "," Engineering and Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-los-angeles-rams-3511017511?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=IDYMEQMbaeRrEXhvwvfUTw%3D%3D&position=12&pageNum=3&trk=public_jobs_jserp-result_search-card," Infosys ",https://in.linkedin.com/company/infosys?trk=public_jobs_topcard-org-name," Richardson, TX "," 3 weeks ago "," Over 200 applicants "," Infosys is looking for Data Engineers who must be polyglots with expertise in multiple technologies and can work as full-stack developers in complex engineering projects. Required Qualifications: Bachelor’s degree or foreign equivalent required from an accredited institution. 4+ years of IT experience. US Citizens and those authorized to work in the U.S. are encouraged to apply. 
We are unable to sponsor at this time. Multiple locations: Hartford, CT; Indianapolis, IN; Providence, RI; Raleigh, NC; Richardson, TX; Tempe, AZ. This is a full-time position with Infosys. Preferred Qualifications: Experience in end-to-end implementation of projects using Cloudera Hadoop, Spark, Hive, HBase, Sqoop, and Kafka. Strong programming knowledge in Scala or Python for Spark application development. Strong knowledge and hands-on experience in SQL and Unix shell scripting. Experience in data warehousing technologies and ETL/ELT implementations. Sound knowledge of software engineering design patterns and practices. Strong understanding of functional programming. Experience with Ranger, Atlas, Tez, Hive LLAP, Neo4j, NiFi, Airflow, or any DAG-based tools. Good hands-on experience with RESTful APIs. Good hands-on experience in SQL development. Knowledge and experience with cloud and containerization technologies: Azure, Kubernetes, OpenShift, and Docker. Experience with data visualization tools like Tableau, Kibana, etc. Experience with design and implementation of ETL/ELT frameworks for complex warehouses/marts. Knowledge of large data sets and experience with performance tuning and troubleshooting. Planning and coordination skills. Good communication and analytical skills. Experience and desire to work in a global delivery environment. Ability to work in a team in a diverse, multi-stakeholder environment. 
"," Mid-Senior level "," Full-time "," Engineering and Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-networx-3509793440?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=8Yu8tSPjYifrFr%2B5zRHh6Q%3D%3D&position=13&pageNum=3&trk=public_jobs_jserp-result_search-card," Infosys ",https://in.linkedin.com/company/infosys?trk=public_jobs_topcard-org-name," Richardson, TX "," 3 weeks ago "," Over 200 applicants "," Infosys is looking for Data Engineers who must be polyglots with expertise in multiple technologies and can work as full-stack developers in complex engineering projects. Required Qualifications: Bachelor’s degree or foreign equivalent required from an accredited institution. 4+ years of IT experience. US Citizens and those authorized to work in the U.S. are encouraged to apply. We are unable to sponsor at this time. Multiple locations: Hartford, CT; Indianapolis, IN; Providence, RI; Raleigh, NC; Richardson, TX; Tempe, AZ. This is a full-time position with Infosys. Preferred Qualifications: Experience in end-to-end implementation of projects using Cloudera Hadoop, Spark, Hive, HBase, Sqoop, and Kafka. Strong programming knowledge in Scala or Python for Spark application development. Strong knowledge and hands-on experience in SQL and Unix shell scripting. Experience in data warehousing technologies and ETL/ELT implementations. Sound knowledge of software engineering design patterns and practices. Strong understanding of functional programming. Experience with Ranger, Atlas, Tez, Hive LLAP, Neo4j, NiFi, Airflow, or any DAG-based tools. Good hands-on experience with RESTful APIs. Good hands-on experience in SQL development. Knowledge and experience with cloud and containerization technologies: Azure, Kubernetes, OpenShift, and Docker. Experience with data visualization tools like Tableau, Kibana, etc. Experience with design and implementation of ETL/ELT frameworks for complex warehouses/marts. Knowledge of large data sets and experience with performance tuning and troubleshooting. Planning and coordination skills. Good communication and analytical skills. Experience and desire to work in a global delivery environment. Ability to work in a team in a diverse, multi-stakeholder environment. "," Mid-Senior level "," Full-time "," Engineering and Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-snowflake-expert-at-experfy-3514940838?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=QcTBK%2Fx5FpxPq8sxfQ5Q%2FQ%3D%3D&position=14&pageNum=3&trk=public_jobs_jserp-result_search-card," Infosys ",https://in.linkedin.com/company/infosys?trk=public_jobs_topcard-org-name," Richardson, TX "," 3 weeks ago "," Over 200 applicants "," Infosys is looking for Data Engineers who must be polyglots with expertise in multiple technologies and can work as full-stack developers in complex engineering projects. Required Qualifications: Bachelor’s degree or foreign equivalent required from an accredited institution. 4+ years of IT experience. US Citizens and those authorized to work in the U.S. are encouraged to apply. 
We are unable to sponsor at this timeMultiple locations: Hartford, CT; Indianapolis, IN; Providence, RI; Raleigh, NC; Richardson, TX; Tempe, AZThis is Fulltime with InfosysPreferred Qualifications:Experience in end-to-end implementation of projects using Cloudera Hadoop, Spark, Hive, HBase, Sqoop, KafkaStrong programming knowledge in Scala or Python for Spark application developmentStrong knowledge and hands-on experience in SQL, Unix shell scriptingExperience in data warehousing technologies, ETL/ELT implementationsSound Knowledge of Software engineering design patterns and practicesStrong understanding of Functional programming.Experience with Ranger, Atlas, Tez, Hive LLAP, Neo4J, NiFi, Airflow, or any DAG based toolsGood hands on in RESTful APIsGood Hands-on experience on SQL Development.Knowledge and experience with Cloud and containerization technologies: Azure, Kubernetes, OpenShift and DockersExperience with data visualization tools like Tableau, Kibana, etcExperience with design and implementation of ETL/ELT framework for complex warehouses/marts Knowledge of large data sets and experience with performance tuning and troubleshootingPlanning and Co-ordination skillsGood Communication and Analytical skillsExperience and desire to work in a Global delivery environment.Ability to work in team in diverse/ multiple stakeholder environment. "," Mid-Senior level "," Full-time "," Engineering and Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-infosys-3494512415?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=jSX2UBmRbIlEgd3DjK6pcA%3D%3D&position=15&pageNum=3&trk=public_jobs_jserp-result_search-card," Infosys ",https://in.linkedin.com/company/infosys?trk=public_jobs_topcard-org-name," Bellevue, WA "," 2 days ago "," Over 200 applicants ","Data Engineer Job Level: 5 No. 
of Position: 1 Role Designation: Specialist Programmer Infosys is looking for Data Engineers who must be Polyglots with expertise in multiple technologies and can work as a full-stack developer in complex engineering projects Required Qualifications: Bachelor’s degree or foreign equivalent required from an accredited institution. Minimum 5 years of IT experience US Citizens and those authorized to work in the U.S. are encouraged to apply. We are unable to sponsor at this time Preferred Qualifications: Experience in end-to-end implementation of projects using Cloudera Hadoop, Spark, Hive, HBase, Sqoop, Kafka Strong programming knowledge in Scala or Python for Spark application development Strong knowledge and hands-on experience in SQL, Unix shell scripting Experience in data warehousing technologies, ETL/ELT implementations Sound Knowledge of Software engineering design patterns and practices Strong understanding of Functional programming. Experience with Ranger, Atlas, Tez, Hive LLAP, Neo4J, NiFi, Airflow, or any DAG based tools Good hands on in RESTful APIs Good Hands-on experience on SQL Development. Knowledge and experience with Cloud and containerization technologies: Azure, Kubernetes, OpenShift and Dockers Experience with data visualization tools like Tableau, Kibana, etc Experience with design and implementation of ETL/ELT framework for complex warehouses/marts Knowledge of large data sets and experience with performance tuning and troubleshooting Planning and Co-ordination skills Good Communication and Analytical skills Experience and desire to work in a Global delivery environment. Ability to work in team in diverse/ multiple stakeholder environment. Work Location: Bellevue, WA About Us Infosys is a global leader in next-generation digital services and consulting. We enable clients in more than 50 countries to navigate their digital transformation. 
With over four decades of experience in managing the systems and workings of global enterprises, we expertly steer our clients through their digital journey. We do it by enabling the enterprise with an AI-powered core that helps prioritize the execution of change. We also empower the business with agile digital at scale to deliver unprecedented levels of performance and customer delight. Our always-on learning agenda drives their continuous improvement through building and transferring digital skills, expertise, and ideas from our innovation ecosystem. Infosys is an equal opportunity employer and all qualified applicants will receive consideration without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, spouse of protected veteran, or disability."," Mid-Senior level "," Full-time "," Consulting, Information Technology, and Engineering "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-boston-globe-media-3472639345?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=JDfgSiEx9G4BXamTRz02Kg%3D%3D&position=21&pageNum=3&trk=public_jobs_jserp-result_search-card," Boston Globe Media ",https://www.linkedin.com/company/the-boston-globe?trk=public_jobs_topcard-org-name," Boston, MA "," 2 weeks ago "," Over 200 applicants ","Boston Globe Media is New England's largest newsgathering organization -- and much more. We are committed to being an indispensable, trusted, reliable source of round-the-clock information. Through the powerful journalism from our newsroom, engaging content from our content marketing studio, or through targeted advertising solutions, brands and marketers rely on us to reach highly engaged, educated and influential audiences through a variety of media and experiences. Responsibilities Collect, organize, and document often-used data resources (maps, APIs, etc). Create scripts to scrape data from websites for stakeholders. With guidance, start creation of a data style guide. Technology Basic knowledge of HTML, CSS, and JavaScript. Basic familiarity with PHP, Groovy, or another server side scripting language. Basic familiarity of build tools such as Grunt, Gulp, or Webpack. Basic familiarity with version control systems such as SVN or Git. Qualifications Understands and follows the team’s agile process. 
Adheres to defined coding standards. Participates in code reviews. A willingness to adapt and be audience focused, with a curious mindset and a commitment to creating an inclusive work environment Vaccination Statement We require that all BGMP employees (including temporary employees, co-ops, interns, and independent contractors) be vaccinated from COVID-19, unless an exemption from this policy has been granted as an accommodation or otherwise. All BGMP employees, regardless of vaccination status or work location, must provide proof of vaccination status as instructed by the employee's designated Human Resources contact. Employees may request a reasonable accommodation or other exemption from this policy by contacting their designated Human Resources contact. Failure to comply with or enforce any part of this policy, or misrepresentation of compliance with this policy, may result in discipline, up to and including termination of employment, subject to reasonable accommodation and other requirements of applicable federal, state, and local law. EEO Statement Boston Globe Media Partners is an equal employment opportunity employer, and does not discriminate on the basis of race, color, religion, gender, sexual orientation, gender identity or expression, age, disability, national origin, ancestry, genetic information, military or veteran status, pregnancy or pregnancy-related condition or any other protected characteristic. 
Boston Globe Media Partners is committed to diversity in its most inclusive sense."," Entry level "," Full-time "," Information Technology "," Online Audio and Video Media " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-bridgeway-3507282795?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=X5hoe21m5FMSPozUuWNKLg%3D%3D&position=25&pageNum=3&trk=public_jobs_jserp-result_search-card," Bridgeway ",https://www.linkedin.com/company/transport-investments?trk=public_jobs_topcard-org-name," Coraopolis, PA "," 1 month ago "," Be among the first 25 applicants ","Summary: Purpose: The Data Engineer will be responsible to 1) collaborate with business leaders to define data storage and tracking needs; 2) define and implement data modeling, scripting, and manipulations to support business reporting and analytics; and 3) provide oversight on the administration and maintenance of database systems and platforms. This position will work with the CTO to ensure projects are implemented in alignment with defined data architecture and governance principles. This is a hands-on leadership position in the organization. The successful candidate will be able to visualize, execute, and implement projects as an individual and part of a team. 
This person will be able to facilitate decisions between departments and to design effective solutions built off strong data management principles. Reports To: Chief Technology Officer Responsibilities: Collaborate with business leaders to understand business processes and reporting and analysis needs. Make recommendations and design solutions to ensure data is entered, stored, and managed in accordance with defined data architecture principles to ensure high data quality results. Design, build, and maintain a data warehouse and coordinate and schedule data refresh activities. Design, build, and maintain data modeling and manipulation programs and scripts that support data reporting and analysis initiatives. Perform data management tasks, such as data cleansing, manipulations, and transformations. Create and manage database reports; assist with dashboard visualizations and designs. Manage and maintain the performance, recoverability, versioning, and security of all corporate database systems. Ensure database backups and replications are set up to mitigate the impact of a data breach. Diagnose and troubleshoot database errors and data quality concerns. Participate in M&A activities to effectively merge data sets for consolidated reporting and analytics. Proactively seek out and implement improvements to data quality. Be aware of and understand current and evolving data trends and technologies. Create and maintain documentation on all responsibilities. Required Qualifications: Bachelor’s degree in computer science or a related field. 5+ years of direct experience in data architecture, modeling, scripting, or reporting. Understanding of relational and dimensional data modeling. Understanding of data management concepts and practices. Experience with SQL and other scripting languages. Experience with data reporting and analytical tools. Direct experience with SSRS and PowerBI is preferred. Experience with Windows SQL Server environments and administration tools. 
Advanced knowledge of database security, backup and recovery, and performance monitoring standards Excellent written and verbal communication skills Impeccable attention to detail Strong analytical, communication, and problem-solving skills. Independent sense of urgency to task and project completion. Ability to work well with a variety of personality types. Powered by JazzHR lM1FqGFai4"," Mid-Senior level "," Full-time "," Information Technology "," Internet Publishing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-networx-3509793440?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=8Yu8tSPjYifrFr%2B5zRHh6Q%3D%3D&position=13&pageNum=3&trk=public_jobs_jserp-result_search-card," Networx ",https://www.linkedin.com/company/networx-systems?trk=public_jobs_topcard-org-name," New York, NY "," 1 month ago "," 66 applicants ","Description Who are we, and what are we looking for? You've got the opportunity to join our awesome team! At Networx, we empower great people to do great work! Our core values are Chop Wood Carry Water, Do the Right Thing, Celebrate Success, and Evolve. These values guide our behaviors, and bold targets encourage us to bring our best selves to work daily. Our culture of fun, collaboration, and growth will help us all to share in the rewards of meeting our company mission; “Win Contractors More Jobs”. We are looking for a talented Data Engineer to join our team to build our data infrastructure and help create a data-first company culture. You will work with our Data team, to develop consistent and machine-readable formats. You will help Networx stakeholders to answer business questions utilizing data. This position is right for you if you want to or have the following: OWN our data infrastructure, KNOW how to turn raw data into consumable data, HAVE SQL experience, COMMUNICATES effectively, are PASSIONATE about data, and THRIVE in an environment where your ideas are valued. 
If this is you, we invite you to apply to join the Networx team! Responsibilities What You’ll Do Build the Data Warehouse to support the organizational growth Create data integrations from external sources into the warehouse using Stitch Build data modeling and transformations using DBT Build the data pipeline ensuring data accuracy and validation processes are included Use an understanding of the business to provide data that supports stakeholder needs Own data pipelines and use them to help stakeholders make informed decisions Requirements Must Have Strong experience with SQL, of at least 2 years An independent, self-learner who is passionate about data High-level verbal and written communication skills Ability to take business understanding into consideration when building the infrastructure Able to own data projects and data pipelines Data modeling and creation of data marts experience This position is open to candidates who live in the Atlanta, GA or New York, NY Metropolitan areas Nice to Have A bachelor's degree in Mathematics, Data Science, or a related field Experience with Snowflake / Google Big Query, DBT, Stitch, and Hightouch Experience working with teams that are geographically dispersed including international team members Benefits You’ll Earn Health Care Plans (Medical, Dental & Vision) Health Care Spending (HSA & FSA) Retirement Plan (401k) Life Insurance (Basic, Voluntary & AD&D) Paid Time Off (Vacation, Bereavement & 9 Paid Holidays) Short-Term & Long-Term Disability Training & Development Work From Home Flexibility Wellness Resources Competitive pay and bonus This position offers work-from-home flexibility for candidates who live in the Atlanta or New York City Metropolitan areas. Networx proudly supports diversity in the workplace and is an Equal Opportunity Employer. The expected base salary range for this position is $110,000 - $125,000 per year. This position is eligible for an annual cash bonus. 
The salary offered may vary depending on factors such as job-related knowledge, skills, and experience. Salary ranges are provided for New York City-based roles as required by New York City Human Rights Law. DISCLAIMER: The above information in this description has been designed to indicate the general nature and level of work performed by employees in this role. It is not designed to contain or be interpreted as an exhaustive list of all responsibilities, duties, and qualifications required of employees assigned to this job."," Entry level "," Full-time "," Information Technology "," Advertising Services " Data Engineer,United States,"Data Engineer, Database Engineering",https://www.linkedin.com/jobs/view/data-engineer-database-engineering-at-experfy-3516889722?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=RGDSbs503EABfsey5BMjFA%3D%3D&position=19&pageNum=3&trk=public_jobs_jserp-result_search-card," Experfy ",https://www.linkedin.com/company/experfy?trk=public_jobs_topcard-org-name," San Francisco, CA "," 1 month ago "," Be among the first 25 applicants ","As a Data Engineer for our Data Platform Engineering team you will join skilled Scala/ Spark engineers and core database developers responsible for developing hosted cloud analytics infrastructure (Apache Spark-based), distributed SQL processing frameworks, proprietary data science platforms, and core database optimization. This team is responsible for building the automated, intelligent, and highly performant query planner and execution engines, RPC calls between data warehouse clusters, shared secondary cold storage, etc. 
This includes building new SQL features and customer-facing functionality, developing novel query optimization techniques for industry-leading performance, and building a database system that's highly parallel, efficient and fault-tolerant. This is a vital role reporting to exec leadership and senior engineering leadership Requirements Responsibilities: Writing Scala code with tools like Apache Spark + Apache Arrow + Apache Kafka to build a hosted, multi-cluster data warehouse for Web3 Developing database optimizers, query planners, query and data routing mechanisms, cluster-to-cluster communication, and workload management techniques. Scaling up from proof of concept to “cluster scale” (and eventually hundreds of clusters with hundreds of terabytes each), in terms of both infrastructure/architecture and problem structure Codifying best practices for future reuse in the form of accessible, reusable patterns, templates, and code bases to facilitate meta data capturing and management Managing a team of software engineers writing new code to build a bigger, better, faster, more optimized HTAP database (using Apache Spark, Apache Arrow, Kafka, and a wealth of other open source data tools) Interacting with exec team and senior engineering leadership to define, prioritize, and ensure smooth deployments with other operational components Highly engaged with industry trends within analytics domain from a data acquisition processing, engineering, management perspective Understand data and analytics use cases across Web3 / blockchains Skills & Qualifications Bachelor’s degree in computer science or related technical field. Masters or PhD a plus. 
6+ years experience engineering software and data platforms / enterprise-scale data warehouses, preferably with knowledge of open source Apache stack (especially Apache Spark, Apache Arrow, Kafka, and others) 3+ years experience with Scala and Apache Spark (or Kafka) A track record of recruiting and leading technical teams in a demanding talent market Rock solid engineering fundamentals; query planning, optimizing and distributed data warehouse systems experience is preferred but not required Nice to have: Knowledge of blockchain indexing, web3 compute paradigms, Proofs and consensus mechanisms... is a strong plus but not required Experience with rapid development cycles in a web-based environment Strong scripting and test automation knowledge Nice to have: Passionate about Web3, blockchain, decentralization, and a base understanding of how data/analytics plays into this"," Not Applicable "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-infosys-3458745033?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=ki2qNeD2GbjCeOvc3%2FlnsA%3D%3D&position=22&pageNum=3&trk=public_jobs_jserp-result_search-card," Infosys ",https://in.linkedin.com/company/infosys?trk=public_jobs_topcard-org-name," Richardson, TX "," 1 week ago "," Over 200 applicants ","Infosys is looking for Data Engineers who must be Polyglots with expertise in multiple technologies and can work as a full-stack developer in complex engineering projects Required Qualifications: Bachelor’s degree or foreign equivalent required from an accredited institution. Minimum 3 years of IT experience US Citizens and those authorized to work in the U.S. are encouraged to apply. 
We are unable to sponsor at this time Preferred Qualifications: Experience in end-to-end implementation of projects using Cloudera Hadoop, Spark, Hive, HBase, Sqoop, Kafka Strong programming knowledge in Scala or Python for Spark application development Strong knowledge and hands-on experience in SQL, Unix shell scripting Experience in data warehousing technologies, ETL/ELT implementations Sound Knowledge of Software engineering design patterns and practices Strong understanding of Functional programming. Experience with Ranger, Atlas, Tez, Hive LLAP, Neo4J, NiFi, Airflow, or any DAG based tools Good hands on in RESTful APIs Good Hands-on experience on SQL Development. Knowledge and experience with Cloud and containerization technologies: Azure, Kubernetes, OpenShift and Dockers Experience with data visualization tools like Tableau, Kibana, etc Experience with design and implementation of ETL/ELT framework for complex warehouses/marts Knowledge of large data sets and experience with performance tuning and troubleshooting Planning and Co-ordination skills Good Communication and Analytical skills Experience and desire to work in a Global delivery environment. Ability to work in team in diverse/ multiple stakeholder environment. About Us Infosys is a global leader in next-generation digital services and consulting. We enable clients in more than 50 countries to navigate their digital transformation. With over four decades of experience in managing the systems and workings of global enterprises, we expertly steer our clients through their digital journey. We do it by enabling the enterprise with an AI-powered core that helps prioritize the execution of change. We also empower the business with agile digital at scale to deliver unprecedented levels of performance and customer delight. Our always-on learning agenda drives their continuous improvement through building and transferring digital skills, expertise, and ideas from our innovation ecosystem. 
Infosys is an equal opportunity employer and all qualified applicants will receive consideration without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, spouse of protected veteran, or disability."," Mid-Senior level "," Full-time "," Engineering and Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-techta-llc-3512094279?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=UKI8TrfxfWxSF9CJmrEzqw%3D%3D&position=12&pageNum=1&trk=public_jobs_jserp-result_search-card," TechTA LLC ",https://www.linkedin.com/company/techta-llc?trk=public_jobs_topcard-org-name," New York, NY "," 1 week ago "," Over 200 applicants ","We are TechTA, a New York-based tech-forward talent acquisition firm that bridges great talent to great opportunities. All of our clients are committed to a five-star candidate experience. Visit us at Our client is searching for a Data Engineer with at least 5 years of experience. This is a contract role, 100% remote at this time. Experience Requirements 5+ years of experience with SQL, data modeling, design, and implementation. Advanced SQL programming capabilities with Data Analytics capabilities Strong data warehouse and ETL/ELT background Experience creating data integration solutions, recognizing data patterns (e.g., data profiling and data wrangling), and developing datasets to support reporting deliverables. Good Experience with any reporting tool (such as Tableau or Power BI) Hands-on SQL skills to analyze complex datasets and deliver insights that drive business results Nice To Have, But Not Required Knowledge of SQL Server Data Warehousing (SSMS/SSIS/SSRS) Working experience with Azure Cloud, GCP and/or AWS Ability to build data pipelines with modular coding techniques Not quite a match with your skill set? We'd still like to talk! TechTA LLC is committed to ensuring the security and protection of the personal information that we process, and to provide a compliant and consistent approach to data protection. If you have any questions related to our GDPR compliance, please contact our Data Protection Officer or make a Data Subject Access Request. 
We value your privacy! TechTA LLC will not distribute, sell, share, or transfer your data to any commercial entity without your express written permission. Powered by JazzHR WjnRbOZafL"," Mid-Senior level "," Contract "," Information Technology "," Internet Publishing " Data Engineer,United States,Data Engineer Analyst,https://www.linkedin.com/jobs/view/data-engineer-analyst-at-focus-brands-llc-3499475472?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=F8lWd1gNatYGm000cfryhA%3D%3D&position=16&pageNum=1&trk=public_jobs_jserp-result_search-card," Focus Brands LLC ",https://www.linkedin.com/company/focus-brands?trk=public_jobs_topcard-org-name," Atlanta Metropolitan Area "," 2 weeks ago "," Over 200 applicants ","Job Summary The Data Engineer Analyst will have a strong technical background, with experience building, deploying, and optimizing data pipelines across the 7 Focus Brands. Developing these data pipelines will be integral in supporting Focus Supply Chains data analytics, KPI reporting, and data analysis on a scalable / automated level to improve the effectiveness and efficiency of all users. The Data Engineer Analyst will need to have a strategic and problem-solving mindset to evaluate deficiencies in our data structure, propose recommended changes, and align or win support of those recommended changes with cross functional departments. Building relationships with cross functional departments will be key to understanding the needs of customers, communicating the pros and cons of potential solutions, and training customers on using the selected solutions. The ideal candidate will have strong data engineering and architecture capabilities with proficiency using the following or similar tools (SQL, Python, Alteryx, Airflow, or others). Essential Functions Develop and optimize data pipelines to meet stakeholder and customer requirements: Attend meetings to obtain and document multiple requirements from different customers or stakeholders. 
Identify conflicting requirements and work toward solutions that meet the needs of all stakeholders. Validate that proposed solutions will achieve desired results. Present and obtain feedback on recommended solutions to stakeholders. Train stakeholders and customers on how to use solutions (i.e., the BI reporting team or other FSC team members). End to End Supply Chain Visibility (Systems & Project Support): Develop new pipelines of data which need to be incorporated into supply chain analytics / analysis to achieve end to end visibility and improve forecasting and supply plan capability. Collaborate with the IT department and external vendors to obtain the required data. Design, develop and implement initiatives that ensure the integrity of all data in the supply chain system. Development and planning of end-user training and on-going support. Undertake and manage special projects and other project needs as required Troubleshoot and resolve issues with systems / tools Education Bachelor's degree in Engineering, Computer Science, Computer Information Systems, Mathematics, Data Science, or a technical degree with at least 3 years’ experience with scripting or programming, application development and data analysis - required Master's degree/MBA with a quantitative focus, or advanced degrees in Operations Management, Supply Chain, Mathematics/Engineering - preferred Work Experience At least two years of experience required Experienced user with Palantir Software Scripting or Programming Data Analysis and Systems support Experience with IT, and BI/data architectural standards Ability to understand and apply Object Oriented methodologies SQL database experience Highly proficient in Microsoft Office applications including MS-Excel & MS-Word Skills & Abilities Excellent verbal and written skills Learn modern technology at a rapid pace Particularly good at troubleshooting and identifying problems Formulate potential solutions for problems found Must be able to handle multiple tasks and 
communicate any prioritization conflicts. Must be able to collaborate well with teams Must be able to take direction from supervisor(s), adhere to required work schedules, focus attention on details, and follow work rules. Use API for ingesting data Understand the limitations of Data Visualization tools and how best to optimize performance via data pipelines. (Power BI, Tableau, Qlik Sense, MicroStrategy, and others ) Forecasting Knowledge (Forecast models, correlation analyses and multivariate regression analysis) Excel skills (modeling, goal seek, vlookup, index (match), pivot tables, nested IF statements) Project Management Process Management Demonstrated ability to work under constant pressure in an undefined, ever-changing environment Independently motivated and autonomous Demonstrated curious and analytical mindset Demonstrated problem solving ability Demonstrated logical thought process Ability to clearly and effectively communicate with experts and novices alike Positive attitude Process-oriented Licenses/Certifications Travel Requirement"," Associate "," Full-time "," Information Technology and Supply Chain "," Food and Beverage Services, Manufacturing, and Hospitality " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-proactive-md-3512654908?refId=YvlM5eH6562Av479dO4UVA%3D%3D&trackingId=grcC9lPiLDcyd22KIv7akw%3D%3D&position=20&pageNum=1&trk=public_jobs_jserp-result_search-card," Proactive MD ",https://www.linkedin.com/company/proactive-md?trk=public_jobs_topcard-org-name," Denver, CO "," 3 weeks ago "," 43 applicants ","People are a company's greatest resource, which is why caring for employees and keeping them healthy is so important. Proactive MD offers a comprehensive health management solution that extends well beyond the clinic walls. Access to on-site physicians, full direct primary care services, and excellent client support are the hallmarks of our program. 
By engaging a workforce and offering them a personal relationship with a primary care physician, we can deliver measurably better outcomes, making people happier, healthier, and more productive while significantly lowering overall medical costs for employers. We put employees' health first because amazing care yields amazing results. We are the next generation of workplace health centers. Remote work available within the contiguous United States of America JOB SUMMARY The Data Engineer is responsible for coordinating data across multiple software systems to improve patient care and client outcomes and for creating and maintaining the data pipelines and data architectures powering our Proactive IQ platform. ESSENTIAL DUTIES AND RESPONSIBILITIES • Manage and maintain data integrity and coordination across multiple core software systems and databases. • Identify opportunities for process automation and initiate designs for system integrations. • Manage bulk data import projects of historical medical records. • Build and maintain data pipelines for the extract, transform, and load (ETL) of clinical data, medical and pharmaceutical claims, and other key data feeds. • Deliver and present data management solutions to internal and external customers as required. • Perform other duties as assigned by EVP, Solutions Engineering, and/or executive leadership. • Act as a champion for our ""patient promise"" and mission, vision, and values and partner across the company to drive a high-performance work environment. REQUIRED KNOWLEDGE, SKILLS, & ABILITIES • Bachelor’s degree or higher from an institution recognized by the Council for Higher Education Accreditation, with relevant coursework in computer science and mathematics. • Experience using SQL for data management and querying. Experience building or maintaining data pipelines with Microsoft SQL Server / Azure preferred. • Experience in handling PHI/PII data transport. • Excellent verbal and written communication skills. 
• Excellent interpersonal, negotiation, and conflict resolution skills. • Excellent time management skills with the proven ability to meet deadlines. • Strong analytical and problem-solving skills. • Expert proficiency with Microsoft applications, including intermediate-to-advanced Microsoft Excel and Microsoft Access skills. POSITION TYPE & EXPECTED HOURS OF WORK This role will be expected to work a minimum of 40 hours/week as directed. Typical workdays are Monday through Friday, 8:00 am to 5:00 pm. This role is considered an exempt position. Evening and weekend work are infrequent but may occasionally be required as business needs dictate. This is a non-management role with professional growth opportunities within Proactive MD. TRAVEL Infrequent, domestic travel may be required and should be expected to be less than 5% of the position’s overall responsibilities. Proactive MD is firmly committed to creating a diverse workplace and is proud to provide equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, sex, gender identity and/or expression, sexual orientation, ethnicity, national origin, age, disability, genetics, marital status, amnesty status, or veteran status applicable to state and federal laws. 
"," Entry level "," Full-time "," Information Technology "," Hospitals and Health Care " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-capgemini-engineering-3499070739?refId=wWftybpyf1t1LXclHsm6uw%3D%3D&trackingId=SWFu1AfQruTtu0tmjZOHxg%3D%3D&position=3&pageNum=2&trk=public_jobs_jserp-result_search-card," Capgemini Engineering ",https://fr.linkedin.com/company/capgemini-engineering?trk=public_jobs_topcard-org-name," United States "," 2 weeks ago "," Over 200 applicants ","Data Engineer Remote Who We Are About Capgemini Engineering World leader in engineering and R&D services, Capgemini Engineering combines its broad industry knowledge and cutting-edge technologies in digital and software to support the convergence of the physical and digital worlds. Coupled with the capabilities of the rest of the Group, it helps clients to accelerate their journey towards Intelligent Industry. Capgemini Engineering has more than 55,000 engineer and scientist team members in over 30 countries across sectors including Aeronautics, Space, Defense, Naval, Automotive, Rail, Infrastructure & Transportation, Energy, Utilities & Chemicals, Life Sciences, Communications, Semiconductor & Electronics, Industrial & Consumer, Software & Internet. Capgemini Engineering is an integral part of the Capgemini Group, a global leader in partnering with companies to transform and manage their business by harnessing the power of technology. The Group is guided every day by its purpose of unleashing human energy through technology for an inclusive and sustainable future. It is a responsible and diverse organization of over 340,000 team members in more than 50 countries. 
With its strong 55-year heritage and deep industry expertise, Capgemini is trusted by its clients to address the entire breadth of their business needs, from strategy and design to operations, fuelled by the fast-evolving and innovative world of cloud, data, AI, connectivity, software, digital engineering, and platforms. The Group reported 2021 global revenues of €18 billion. Get the Future You Want | www.capgemini.com What you’ll do: · The Data Engineer is responsible for supporting the Engineering teams across segments and products by developing data structures and pipelines to support analytics. · Build and maintain data structures and databases with a particular emphasis on time-series data. · Design, build, and maintain data pipelines from a data lake to a data exploration platform. · Produce statistical analysis of the data to support product quality improvement decisions. · Produce data visualizations to demonstrate learnings and opportunities that support a wide range of stakeholders. · Collaborate with software engineers to identify and address gaps and to improve data quality including but not limited to data file structure, data formats and units, and frequency of logging. · Collaborate and learn from peers within the Imaging business to ensure we leverage best practices and reduce duplicative work. · Work with business and technical leaders to prioritize data needs and define new process improvement opportunities. What you’ll have: · NoSQL and SQL distributed system design and development. · ETL and messaging. · Data APIs. · Solid Java background. · Algorithms, data structures, and machine learning: supervised and unsupervised. · Detail-oriented, strong analytical skills, and ability to combine and interpret data from many data domains. · Strong verbal and written communication skills, writing reports, and making presentations. · Experience (3-5 years) in high-volume data use cases. 
· Ability to connect, associate, and interpret multiple disjointed but related systems simply. · Build algorithms and prototypes to influence engineering and product direction. · Conduct complex data analysis and report on results. · Explore ways to enhance data quality and reliability. · Prepare data for deterministic and probabilistic modeling. · Work with engineers and management to identify improvements, propose modifications, and build integrations. Skills Required NoSQL and SQL, ETL and messaging, Data APIs, Java, data analysis and reporting, deterministic and probabilistic modeling. Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status, or any other characteristic protected by law. This is a general description of the Duties, Responsibilities, and Qualifications required for this position. Physical, mental, sensory, or environmental demands may be referenced in an attempt to communicate the manner in which this position traditionally is performed. Whenever necessary to provide individuals with disabilities an equal employment opportunity, Capgemini will consider reasonable accommodations that might involve varying job requirements and/or changing the way this job is performed, provided that such accommodations do not pose an undue hardship. 
Click the following link for more information on your rights as an Applicant http://www.capgemini.com/resources/equal-employment-opportunity-is-the-law"," Associate "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-alium-3490885542?refId=wWftybpyf1t1LXclHsm6uw%3D%3D&trackingId=HUQJqV05KMunNcBuXhT48A%3D%3D&position=7&pageNum=2&trk=public_jobs_jserp-result_search-card," Alium ",https://uk.linkedin.com/company/aliumc?trk=public_jobs_topcard-org-name," Norfolk, VA "," 3 weeks ago "," Over 200 applicants ","Title: Data Engineer Location: Norfolk, Virginia Security Clearance: NATO Secret (Can be upgraded from any clearance level) DESCRIPTION: Data science, data analytics and Artificial Intelligence (AI) are increasingly gaining momentum in NATO touching all military and political domains and functional areas. In response to HQ SACT’s understanding of the disruptive potential of data science and AI, and recognizing the strategic value of data, the Data Science & Artificial Intelligence section, established in 2020 in the Federated Interoperability Branch, is focusing on data science and AI as cross-cutting and enabling capabilities for HQ SACT and the NATO Enterprise. The section provides a broad spectrum from strategy and policy development and support to technical delivery and implementation to HQ SACT and the NATO Enterprise. In addition to serving as the centre of gravity for HQ SACT’s efforts in advancing data centricity and integrating rapidly changing technology related to data exploitation, the section has developed a substantial reputation inside NATO and is regularly invited to offer policy and technical expertise. 
DUTIES/ROLE: Contribute to the development and implementation of an enabling data science and AI capability at HQ SACT and for the NATO Enterprise Contribute to ML/AI initiatives across HQ SACT and the NATO Enterprise with a particular focus on the data engineering side Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, proposing how to re-design infrastructure for greater scalability Develop, construct, test and maintain data pipelines and architectures such as databases and large-scale processing systems, within the constraints of existing but evolving processes and technologies Transform data into formats that can be easily analysed by developing, maintaining, and testing infrastructures for data generation Prepare data for prescriptive and predictive modelling Provide subject matter expertise to (military and civilian) staff within HQ SACT or the NATO Enterprise and develop proofs of concept, as directed Work in tandem with data scientists and software engineers Select from existing data sources and prepare data to be used by data science models Improve data quality and efficiency Support evaluation of operational requirements and objectives Interpret trends and patterns and support building of algorithms and prototypes Support educational efforts and training development related to data, AI or digital literacy Remain up-to-date with new developments in data engineering and data architectures to bring innovative ideas into implementation Support building a data-driven culture that uses data and analytics to generate insights, improve decision making at all levels, inform strategy and policy decisions, and improve performance Perform additional tasks as required, related to the LABOR category EXPERIENCE AND EDUCATION: Essential Qualifications/Experience: A Bachelor of Science degree from a recognized university in computer science, IT, software or computer engineering, data science, applied math, physics, 
statistics, or a related field A Master’s degree or higher from a recognized university in computer science, IT, software or computer engineering, data science, applied math, physics, statistics, or a related field Experience with advanced level SQL, including query optimization, complex joins, development of stored procedures, user-defined functions and working with Analytic Functions in the last 3 years Proficient in at least one data manipulation language such as Python, Scala, R, etc. Ability to develop ETL processes for batch and streaming data, with proficiency in tools and technologies such as Apache Spark, Apache Airflow, Pentaho Data Integration, SQL Server Integration Services Advanced knowledge of relational database architecture, including design of OLAP and OLTP databases Experience working with at least one data warehouse schema, such as Star or Snowflake Ability to work with large datasets Working experience in an international environment with both military and civilian elements Understanding of the NATO organization and its functions Desirable Qualifications/Experience: Knowledge of NoSQL databases such as MongoDB, Cosmos DB Ability to work in cloud environments to develop scalable data pipelines Skills in Cloud infrastructure and technologies such as Google Cloud Compute, AWS, Azure Data Factory, distributed computing Working experience with geospatial data structures such as raster and vector-based data Ability to collect and document project requirements, and to translate the requirements to technical solutions, including working in an agile environment to implement complex database projects."," Associate "," Full-time "," Information Technology "," IT Services and IT Consulting and Government Relations Services " Data Engineer,United States,Data 
Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-stl-digital-3500461780?refId=wWftybpyf1t1LXclHsm6uw%3D%3D&trackingId=87dn9HYoUaw5uHWZNFRXuA%3D%3D&position=11&pageNum=2&trk=public_jobs_jserp-result_search-card," STL Digital ",https://in.linkedin.com/company/stl-digital-tech?trk=public_jobs_topcard-org-name," United States "," 2 weeks ago "," Over 200 applicants ","Job Title: Data Engineer Time Type: Full time Job Location: Remote Job Profile Summary Provides advanced data solutions by using software to process, store, and serve data to others. Tests data quality and optimizes data availability. Ensures that data pipelines are scalable, repeatable, and secure. Utilizes a deep-dive analytical skillset on a variety of internal and external data. Job Description Core Responsibilities 1. Writes ETL (Extract / Transform / Load) processes, designs database systems, and develops real-time and offline analytic processing tools. 2. Troubleshoots software and processes for data consistency and integrity. Integrates large-scale data from various sources for business partners to generate insight and make decisions. 3. Translates business specifications into design specifications and code. Responsible for writing complex programs, ad hoc queries, and reports; all code is well structured and includes sufficient. 4. Partners with internal clients to better understand business functions and informational needs. Gains expertise in tools, technologies, and applications/databases in specific business areas and company-wide systems. 
Preferred Knowledge/Skills: Python on AWS - Software development experience - (numpy, pandas, sklearn, dash, dask, flask, boto, etc) Extensive experience building & supporting AWS architecture: AWS EC2, CloudWatch, ECS, SageMaker Experience with optimized data architecture, data pipelines, AWS Glue ETL Data stores like S3, Postgres, Athena, caching Experience & Education: Bachelor’s or master’s degree in Computer Science, Computer Engineering, Information Systems, Data Analytics, or Cyber Security majors with a minimum of 6 years of working experience in this field About STL Digital STL Digital is a global IT services and consulting company that enables enterprises and industries to experience the future of digital transformation. With an end-to-end portfolio of services across product engineering, software, cloud, data and analytics, enterprise application services and cyber-security, STL Digital works with global businesses to deliver innovative experiences and operational excellence with agility. To learn more, visit https://www.stldigital.tech STL Digital is a wholly owned subsidiary of STL (Sterlite Technologies Limited), one of the industry's leading integrators of digital networks that transforms everyday lives by bringing the best digital experiences to billions across the world. With core capabilities in Optical Interconnect, Virtualized Access Solutions, Network Software and System Integration, STL is the industry’s leading end-to-end solutions provider for global digital networks. For more information, visit https://www.stl.tech STL is committed to investing in the holistic health and wellbeing of all STLers and their families. 
Our benefits and perks programs include, but are not limited to: Competitive medical, dental and vision coverage Competitive 401(k) Plan with a generous company contribution with immediate vesting Paid Time Off, paid holidays, parental leave and more Protection Plans including; Life Insurance, Disability Insurance and EAP Employee discounts to be used on what matters most to you, whether that’s tech gadgets, wellness, childcare, travel and much more. All applicants must be authorized to work in the US. STL is an equal opportunity employer; committed to a culture of inclusion, and an environment free from discrimination, harassment, and retaliation. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via your recruiter."," Mid-Senior level "," Full-time "," Information Technology and Engineering "," IT Services and IT Consulting " Data Engineer,United States,Lead Data Engineer,https://www.linkedin.com/jobs/view/lead-data-engineer-at-duetto-3527093852?refId=wWftybpyf1t1LXclHsm6uw%3D%3D&trackingId=BLRQ5QwkBZ8UCJ1KnJPUJw%3D%3D&position=15&pageNum=2&trk=public_jobs_jserp-result_search-card," Duetto ",https://www.linkedin.com/company/duetto-research?trk=public_jobs_topcard-org-name," Austin, TX "," 3 hours ago "," Be among the first 25 applicants ","The Company We are an ambitious, well-funded, high-growth global technology company transforming the hotel industry. At Duetto, we are passionate about creating innovative analytical solutions to help hoteliers thrive. Although we work hard, the work atmosphere is casual, flexible, collaborative, and most of all, fun. Duetto offers an open and collaborative work environment and believes that by cultivating a team with diverse backgrounds, perspectives, and experiences, it will continue to lead the industry with its cutting-edge platform-based hospitality technology. 
Position Summary And Opportunity Duetto is an advanced analytics company, serving hotel customers with comprehensive reporting and sophisticated forecasting and pricing insights. Data engineering is a foundational need at Duetto to enable data-driven decision-making – both within Duetto and with our customers. This role will shape the way that our next wave of data engineering investment is used – defining how we transform our data and scaling analytics methods and tools to support our growing business. Responsibilities may include developing and enhancing our data warehouse, which acts as a source of truth across the company, driving data quality across product departments and teams, building self-service business intelligence infrastructure for data scientists, and connecting into data interfaces that allow everyone in Duetto to discover and analyze the data. Key Responsibilities Define and own team-level data architecture for a trusted, accessible repository of data that enables Duetto staff to quickly deliver insights. Enable Data Science and Machine Learning engineers to develop new models efficiently Identify technical opportunities in the current Duetto data ecosystem and drive new initiatives Keep existing data sources fresh against data quality issues, design a data quality assurance framework and improve the processes for developing new ones, raising the level of quality expected from our work. Lead unit, integration, and system tests on our data sources to validate data against source systems, and optimize performance to improve query speed and reduce cost. Improve data understanding. Support data definition, data catalog, data lineage efforts. Improve business and engineering team processes through data architecture, engineering, test, and best practices. Make enhancements that improve data processes. 
Requirements 5+ years of experience in data engineering, software engineering, or other related roles Commerce, and/or FinTech experience are a strong plus Professional experience in analytics, data science, or machine learning Experience using Spark, Presto, Flink, or Snowflake Experience building solutions with Parquet, or similar storage formats Knowledge of Kubernetes, Docker Experience building solutions with processing of both structured and semi-structured data Experience in custom ETL design, implementation and maintenance Experience with developing and deploying tools used for data analysis Strong relational database knowledge with a solid knowledge of warehouse schemas Experience developing, maintaining data pipelines from multiple data sources (e.g. Airflow) Experience with best practices for development including query optimization, version control, code reviews, and documentation. Experience working with Amazon Web Services, S3, EMR Strong expertise in Java and Python PROFILE OF THE IDEAL CANDIDATE Team Player - Works well with others, highly collaborative and acts as a strong partner to other team members and functions. Leader – Ability to create direction and process for nascent team, and bring others on board.. Guide – Ability to distill complex technical foundation and intricate business metrics into consistent, clear communications. Systems thinker – Ability to see the long view, and how incremental pieces of development ladder up to success and enablement. Organization - Eager and adept at managing end-to-end project planning and execution. Ability to prioritize with multiple different competing stakeholder priorities. About Duetto We are a team of passionate hospitality and technology professionals delivering a modern platform to hoteliers in over 60 countries. 
Our solutions address the biggest problems faced by the hospitality industry by simplifying distribution complexity and optimizing profitability with unique and powerful applications that increase conversion, guest loyalty, operational efficiency and revenue. Our goal is to become the most trusted, effective and widely used hotel technology company in the world. Founded in 2012, Duetto is headquartered in San Francisco with offices in Las Vegas, London, Singapore and Buenos Aires. Duetto is backed by leading investors: Warburg Pincus, Accel Partners, Icon Ventures, and Battery Ventures. If you want to be a part of a fast-growing company, working with amazing people tackling big challenges in a massive industry, then Duetto is looking for you."," Mid-Senior level "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-pepsico-3485241839?refId=wWftybpyf1t1LXclHsm6uw%3D%3D&trackingId=0WCFqbv948sscNXA%2BelZyw%3D%3D&position=19&pageNum=2&trk=public_jobs_jserp-result_search-card," PepsiCo ",https://www.linkedin.com/company/pepsico?trk=public_jobs_topcard-org-name," Plano, TX "," 1 week ago "," 152 applicants ","Overview PepsiCo operates in an environment undergoing immense and rapid change. Big-data and digital technologies are driving business transformation that is unlocking new capabilities and business innovations in areas like eCommerce, mobile experiences and IoT. The key to winning in these areas is being able to leverage enterprise data foundations built on PepsiCo’s global business scale to enable business insights, advanced analytics and new product development. 
PepsiCo’s Data Management and Operations team is tasked with the responsibility of developing quality data collection processes, maintaining the integrity of our data foundations and enabling business leaders and data scientists across the company to have rapid access to the data they need for decision-making and innovation. What PepsiCo Data Management and Operations does: Maintain a predictable, transparent, global operating rhythm that ensures always-on access to high-quality data for stakeholders across the company Responsible for day-to-day data collection, transportation, maintenance/curation and access to the PepsiCo corporate data asset Work cross-functionally across the enterprise to centralize data and standardize it for use by business, data science or other stakeholders Increase awareness about available data and democratize access to it across the company Job Description As a member of the data engineering team, you will be the key technical expert developing and overseeing PepsiCo's data product build & operations and drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help lead the development of very large and complex data applications into public cloud environments directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners and business users. You'll be working in a hybrid environment with in-house, on-premise data sources as well as cloud and remote systems. 
Responsibilities Active contributor to code development in projects and services. Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products. Understand and adapt existing frameworks for data engineering pipelines in the organization. Responsible for adopting best practices around systems integration, security, performance, and data management defined within the organization. Collaborate with the team and learn to build scalable data pipelines. Support data engineering pipelines and quickly respond to failures. Collaborate with the team to develop new approaches and build solutions at scale. Create documentation for learning and knowledge transfer. Learn and adapt automation skills/techniques in day-to-day activities. COVID-19 vaccination is a condition of employment for this role. Please note that all such company vaccine requirements provide the opportunity to request an approved accommodation or exemption under applicable law. Qualifications 1+ years of overall technology experience, including at least 1+ years of hands-on software development and data engineering. 1+ years of development experience in programming languages like Python, PySpark, Scala, etc. Experience or knowledge in Data Modeling, SQL optimization, performance tuning is a plus. 6+ months of cloud data engineering experience in Azure; certification is a plus. Experience with version control systems like GitHub and deployment & CI tools. Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines. Experience with data profiling and data quality tools is a plus. Experience in working with large data sets and scaling applications like Kubernetes is a plus. Experience with building solutions in the retail or supply chain space is a plus Understanding metadata management, data lineage, and data glossaries is a plus. 
Working knowledge of agile development, including DevOps and DataOps concepts. Familiarity with business intelligence tools (such as PowerBI). Education BA/BS in Computer Science, Math, Physics, or other technical fields. Skills, Abilities, Knowledge Excellent communication skills, both verbal and written, and the ability to influence and demonstrate confidence in communications with senior-level management. Comfortable with change, especially that which arises through company growth. Ability to understand and translate business requirements into data and technical requirements. High degree of organization and ability to coordinate effectively with the team. Positive and flexible attitude and ability to adjust to different needs in an ever-changing environment. Foster a team culture of accountability, communication, and self-management. Proactively drive impact and engagement while bringing others along. Consistently attain/exceed individual and team goals. Ability to learn quickly and adapt to new skills. Competencies Highly influential, with the ability to educate challenging stakeholders on the role of data and its purpose in the business. Understands both the engineering and business side of the Data Products released. Places the user in the center of decision making. Teams up and collaborates for speed, agility, and innovation. Experience with and embraces agile methodologies. Strong negotiation and decision-making skills. Experience managing and working with globally distributed teams. EEO Statement All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. 
PepsiCo is an Equal Opportunity Employer: Female / Minority / Disability / Protected Veteran / Sexual Orientation / Gender Identity If you'd like more information about your EEO rights as an applicant under the law, please download the available EEO is the Law & EEO is the Law Supplement documents. View PepsiCo EEO Policy. Please view our Pay Transparency Statement"," Mid-Senior level "," Full-time "," Information Technology "," Food and Beverage Services and Manufacturing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-stemboard-3511345550?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=7hGH%2Fa719l4mVFRKvMsDyQ%3D%3D&position=16&pageNum=3&trk=public_jobs_jserp-result_search-card," STEMBoard ",https://www.linkedin.com/company/stemboard?trk=public_jobs_topcard-org-name," Tampa, FL "," 1 month ago "," Be among the first 25 applicants ","SOCOM – Tampa, FL (Hybrid Schedule) – Top Secret Clearance Required Position Overview: STEMBoard is seeking a data engineer with an understanding of performance optimization and data pipelining. The data engineer will create and integrate application programming interfaces (APIs) and apply multiple programming languages including knowledge of SQL/NoSQL database design. The data engineer role requires knowledge in programming for integrating complex models and using a software library frameworks to distribute large, clustered data sets. Data engineers collect and arrange data in a form that is useful for analytics. A basic knowledge in machine learning is also required to build efficient and accurate data pipelines to meet the needs for downstream users such as data scientists to create the models and analytics that produce insight. Principal Duties and Responsibilities: Collaborate with a team of data Stewards in the development of the program and data analytics projects. 
Developing, maintaining, and testing infrastructures for data generation to transform data from various structured and unstructured data sources. Develop complex queries to ensure accessibility while optimizing the performance of NoSQL and or big data infrastructure. Create and maintain optimal data pipeline architecture. Build and maintain the infrastructure to support extraction, transformation, and loading (ETL) of data from a wide variety of data sources. Extract data from multiple data sources, relational SQL and NoSQL databases, and other platform APIs, for data ingestion and integration. Configure and manage data analytic frameworks and pipelines using databases and tools such as NoSQL, SQL, HDInsight, MongoDB, Cassandra, Neo4j, GraphDB, OrientDB, Spark, Hadoop, Kafka, Hive, and Pig. Apply distributed systems concepts and principles such as consistency and availability, liveness and safety, durability, reliability, fault-tolerance, consensus algorithms. Administrate cloud computing and CI/CD pipelines to include Azure, Google, and Amazon Web Service (AWS). Coordinate with stakeholders, including product, data and design teams to assist with data-related technical issues and support their data infrastructure needs Requirements Required Education/Experience: Experience: 1+ years of experience with software engineering, data engineering or related experience. Education: Bachelor’s in STEM with a preference towards Data Science, Computer Science, or Software Engineering. Verifiable work experience working with data structures, database management, distributed computing, and API driven architectures using SQL and No-SQL engines. Proficient in modeling frameworks like Universal Modeling Language (UML), Agile Development, and Git Operations. 
Benefits Healthcare, Vision, and Dental Insurance 20 Days of PTO 401K Matching Training/Certification Reimbursement Short term/Long term disability Parental/Maternity Leave Life Insurance STEMBoard is committed to hiring and retaining a diverse workforce. All qualified candidates will receive consideration for employment without regard to disability, protected veteran status, race, color, religious creed, national origin, citizenship, marital status, sex, sexual orientation/gender identity, age, or genetic information. Selected applicant will be subject to a background investigation. STEMBoard is an Equal Opportunity/Affirmative Action employer."," Not Applicable "," Full-time "," Engineering "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-abbott-3509634673?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=V8TB4fh9sf8beWZC5I%2FsKQ%3D%3D&position=20&pageNum=3&trk=public_jobs_jserp-result_search-card," Abbott ",https://www.linkedin.com/company/abbott-?trk=public_jobs_topcard-org-name," Chicago, IL "," 2 weeks ago "," 107 applicants ","Abbott is a global healthcare leader that helps people live more fully at all stages of life. Our portfolio of life-changing technologies spans the spectrum of healthcare, with leading businesses and products in diagnostics, medical devices, nutritionals and branded generic medicines. Our 115,000 colleagues serve people in more than 160 countries. At Abbott, You Can Do Work That Matters, Grow, And Learn, Care For Yourself And Family, Be Your True Self And Live a Full Life. You’ll Also Have Access To Career development with an international company where you can grow the career you dream of. Free medical coverage for employees* via the Health Investment Plan (HIP) PPO An excellent retirement savings plan with high employer contribution Tuition reimbursement, the Freedom 2 Save student debt program and FreeU education benefit - an affordable and convenient path to getting a bachelor’s degree. A company recognized as a great place to work in dozens of countries around the world and named one of the most admired companies in the world by Fortune. A company that is recognized as one of the best big companies to work for as well as a best place to work for diversity, working mothers, female executives, and scientists. The Opportunity This position works out of our Downtown Chicago- WeWork location in the Digital Technology Services organization. 
As the Data Engineer, you’ll have the chance to build code and design solutions for Big Data platforms, collaborating with technical data engineers and business users. You’ll develop solutions and support resolution of IT defects. What You’ll Work On Global Data Lakes, Data warehouses and analytics projects providing hands-on experience leading large-scale solutions. Technical and hands-on, with agility to meet fluid and dynamic business needs. Expertise - Collaborate with Abbott employees and partners to develop big data platform solutions using Databricks, cloud services such as Amazon Elastic Compute Cloud (EC2), Amazon Redshift, Azure Data Factory and Azure Synapse. Solutions – Understanding customer requirements, creating solution proposals and creating data service offerings. This includes participating in on-site visits. Delivery - Engagements proving the use of AWS services to support new distributed computing solutions that often span private cloud and public cloud services. Engagements will include development of solutions specific to use cases, upkeep of current big data architecture, conducting Proof of Concepts, support migration of existing applications and development of new applications using AWS cloud services. Analyze latest Big Data Analytic technologies and their innovative applications in both business intelligence analysis and new service offerings, bring these insights and best practices to implement complex big data solutions. Track record of implementing Spark, Databricks in a variety of distributed computing, enterprise environments. Track record of implementing AWS services in a variety of distributed computing, enterprise environments. Deep understanding of database and analytical technologies in the industry including MPP databases, Data Warehouse design, BI reporting and Dashboard development. 
Deliver and implement designs, data modeling, and implementations of Big Data platform and analytic applications. Effectively communicate complex technical concepts to non-technical business and executive leaders. Work closely with SMEs, functional experts in Commercial, R&D, finance, etc. for building data pipelines from structured and unstructured data sources Required Qualifications BA/BS degree or equivalent experience; Computer Science or Math background preferred 5+ years’ experience of IT platform implementation in a highly technical and analytical role Experience in data lake and data analytics solution components from AWS and Microsoft Azure Current hands-on implementation experience required; individual contributors only need apply Strong verbal and written communications skills and ability to deliver effectively Ability to communicate complex quantitative analysis in a clear, precise, actionable manner Hands-on experience in Big Data Components/Frameworks, AWS, Azure solution components required. Experience in one or more of the following: Python, Apache Spark, Kafka, Databricks Preferred Qualifications Experience in Healthcare industry Understanding of Regulatory requirements Customer facing skills to represent Abbott Big Data team and drive discussions with senior personnel regarding trade-offs, best practices, product management and risk mitigation Demonstrated ability to think strategically about business, product, and technical challenges in an enterprise environment Ability to travel to various Abbott locations (sometimes, internationally) when needed. Apply Now Participants who complete a short wellness assessment qualify for FREE coverage in our HIP PPO medical plan. Free coverage applies in the next calendar year. 
Learn more about our health and wellness benefits, which provide the security to help you and your family live full lives: www.abbottbenefits.com Follow your career aspirations to Abbott for diverse opportunities with a company that can help you build your future and live your best life. Abbott is an Equal Opportunity Employer, committed to employee diversity. Connect with us at www.abbott.com, on Facebook at www.facebook.com/Abbott and on Twitter @AbbottNews and @AbbottGlobal."," Associate "," Full-time "," Information Technology "," Hospitals and Health Care " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-phaidon-international-3511236216?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=Pja1eaKQRDKDldQdpQ0xxQ%3D%3D&position=1&pageNum=4&trk=public_jobs_jserp-result_search-card," Phaidon International ",https://uk.linkedin.com/company/phaidon-international?trk=public_jobs_topcard-org-name," New York City Metropolitan Area "," 1 week ago "," 191 applicants ","Ideal Candidate: ▪ 5+ years of experience in Data Engineering or related development work with distributed data technology ▪ Experience with system design and leading multi-person development efforts ▪ Strong experience in Python development ▪ Experience with Spark and Cloud computing ▪ Track record of delivering novel and innovative solutions to challenges Responsibilities will include identifying opportunities to improve efficiency, and increase revenue. NO C2C/C2H Must have Financial Service Background The position will be hybrid in the NYC area (must already be living in the US). 
Apply to learn more about the opportunities."," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting and Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-selby-jennings-3482506755?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=XUc5T1sCqbAdA2RdNgapbw%3D%3D&position=5&pageNum=4&trk=public_jobs_jserp-result_search-card," Selby Jennings ",https://uk.linkedin.com/company/selby-jennings?trk=public_jobs_topcard-org-name," Boston, MA "," 3 weeks ago "," 187 applicants ","Our client is a well known multi asset fund in downtown Boston that offers a range of investment solutions across global equity, emerging markets, and fixed income asset classes. 
The firm is now on the search for a Senior to Lead level Data Engineer for one of their highest growing business groups. Background 4+ Years in a Data Engineer/Software Engineer role Ability to design, develop, and maintain ETL Pipelines in a Python and AWS environment Familiarity with data platforms such as Spark, MongoDB, Parquet, PostgreSQL, DynamoDB and Redis. 4+ years in a public cloud environment (Ideally AWS) Day to Day Be a senior asset in a major AWS cloud migration Design, develop, and deploy ETL Pipelines Participate in the design review of advancements in the investment decision systems"," Mid-Senior level "," Full-time "," Information Technology, Engineering, and Finance "," Software Development and Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-modis-3500036690?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=fNQRGQaNKLLj9SZ2adgehg%3D%3D&position=9&pageNum=4&trk=public_jobs_jserp-result_search-card," Modis ",https://ch.linkedin.com/company/modis?trk=public_jobs_topcard-org-name," United States "," 2 weeks ago "," Over 200 applicants ","** Candidates MUST have GCP experience** Akkodis is excited to share multiple GCP Data Engineer opportunities with one of our Fortune 500 partners. This is a long-term role with competitive pay, benefits and work life balance. The role will be based in Michigan, but is open to fully remote candidates. Overview: Akkodis is looking for data engineers and backend software engineers who will help build, validate and release innovative data projects. Responsibilities: Develop exceptional Analytics data projects using streaming, batch patterns with solid data warehouse principles. Deploy to GCP cloud platform, assist in migrating from legacy data warehouses in Hadoop. Responsible for Data modeling and data profiling to derive actionable insights from the data. Build out scalable data pipelines. 
Skills Required: Data Engineering experience with Teradata and GCP tools (Data Flow, Big Query, Cloud Functions, Cloud Storage) Streaming tools experience (PubSub, Kafka, Qlik Replicate) Strong Software Engineering background with Java or Python. Proficiency with continuous integration/continuous delivery tools and pipelines (e.g., Jenkins, Maven, Gradle, etc.) Previous experience with on-prem data sources (Hadoop, SQL Server) Preferred: Experience in Scala and Spark. Expertise in eXtreme Programming (XP) disciplines, including pair programming. Master's degree in computer science or related field. Experience with loading and provisioning data via APIs, and proficiency with RESTful API standards/tools Performance tuning experience is a plus Previous cloud data migration experience *Cannot work on C2C basis* *Visa sponsorship is available for this opportunity* Equal Opportunity Employer/Veterans/Disabled To read our Candidate Privacy Information Statement, which explains how we will use your information, please visit https://www.modis.com/en-us/candidate-privacy/ “Benefit offerings include medical, dental, vision, term life insurance, short-term disability insurance, additional voluntary benefits, commuter benefits and 401K plan. Our program provides employees the flexibility to choose the type of coverage that meets their individual needs. Available paid leave may include Paid Sick Leave, where required by law; any other paid leave required by Federal, State or local law; and Holiday pay upon meeting eligibility criteria. Disclaimer: These benefit offerings do not apply to client-recruited jobs and jobs which are direct hire to a client” Equal Opportunity Employer/Veterans/Disabled. To read our Candidate Privacy Information Statement, which explains how we will use your information, please visit https://www.modis.com/en-us/candidate-privacy. 
The Company will consider qualified applicants with arrest and conviction records."," Mid-Senior level "," Contract "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-intelletec-3496432511?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=ftfvpKEQcl%2FnpPlbCY9Y9g%3D%3D&position=16&pageNum=4&trk=public_jobs_jserp-result_search-card," Intelletec ",https://www.linkedin.com/company/intelletec-ltd?trk=public_jobs_topcard-org-name," United States "," 2 weeks ago "," Over 200 applicants ","Company: Intelletec is working with a company focused on providing commercial auto data and analytics. Backed by proprietary data & models, this company digs deep into historical data and combines it with external data to achieve granular segmentation. They appropriately describe and price commercial auto risks that will outperform the competition. 
Responsibilities: Increase efficiency and organization of code and coding practices, with a view especially to establishing data pipelines and making repeatable processes easier Propose and stand up an appropriate data architecture Quickly evaluate the potential utility of new data elements Partner with the other team members to turn data into models, and models into insights Perform data quality analysis of customer data, uncover insights, and make recommendations Interact with customer IT, data science, and actuarial staff Develop and maintain expertise in commercial auto insurance, especially with regard to the relevant types of data Help develop processes that ensure efficiency of our analytical efforts as we scale Qualifications: 3+ years (5+ preferred) of professional experience in a data science and/or data engineering role Expert-level knowledge of Python, R, SQL Familiarity with the AWS ecosystem Familiarity with insurance products and coverages, especially auto insurance (preferred) Enjoys the velocity of a start-up culture This is a remote role in the EST or CST time zones that will require occasional travel for customer meetings, internal meetings, & conferences."," Mid-Senior level "," Full-time "," Information Technology "," Insurance and Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-cps-inc-3509671724?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=Xu4LuS2d3s%2FW3A%2Bg7%2Bg54w%3D%3D&position=20&pageNum=4&trk=public_jobs_jserp-result_search-card," CPS, Inc. ",https://www.linkedin.com/company/cps4jobs?trk=public_jobs_topcard-org-name," Chicago, IL "," 1 week ago "," 193 applicants "," Come join one of the premier financial firms where technology plays a huge part in our success. We have been around for over 20 years with offices spanning the globe. We are currently undergoing rapid change in our technology initiatives and searching for experienced Data Engineers who can drive our new systems from design stage through implementation. Come partner with us and our highly skilled team members in building this system. Best in class benefit package. We expect this person to have a background in several of the following: Proficient in data processing and programming (Python/SQL) Experience with: Airflow, AWS, Databricks, Docker, Jupyter, Snowflake, Spark, and more Implementing ETL processes and managing code repos Explaining technical concepts to non-technical audiences 2+ Years "," Entry level "," Full-time "," Information Technology "," Staffing and Recruiting " Data Engineer,United States,Data Engineer - Flights (100% Remote),https://www.linkedin.com/jobs/view/data-engineer-flights-100%25-remote-at-hopper-3477016847?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=BVn3nrlLk%2BsZTY4IoFok%2BQ%3D%3D&position=23&pageNum=4&trk=public_jobs_jserp-result_search-card," Hopper ",https://ca.linkedin.com/company/hopper?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," Over 200 applicants ","About The Job We are looking for an autonomous individual to join our Flights team as a Data Engineer to design and build data environments that are robust, high-quality, and provide fast access. The ideal candidate will leverage their experience as an engineer writing code and proficiency with data-intensive systems to build data pipelines and solve customer problems. The data engineer will own the data infrastructure, tooling, and technical guidance to enable the Flights team to make data-driven decisions. The role will be critical to building and executing on a progressive data strategy for our Flights team. A successful candidate has prior experience building a data pipeline infrastructure from scratch, scaling resilient solutions, and can find stability in abstract work. This person will also help to reinforce a culture of ownership over data integrity across all Flights teams, enabling product managers in Flights to make data-driven decisions at scale. 
Responsibilities Enable data-driven insights and accurate reporting by ensuring consistent, accurate, and accessible data for internal analysis. Work backwards from customer needs with Product to set and achieve measurable goals regarding data integrity. Develop and distribute best practices for recording, storing, and processing product data to facilitate rapid development and self-service. Draft and review specifications and plans relating to data generation and consumption related to the team's products. Write, review, and deploy production code related to data storage and processing for the team's products. Promote a culture of ownership of data concerns within the team. Strong Candidates Will Have 3+ years of recent experience building and operating high-volume and high-reliability data processing systems A strong sense of ownership over outcomes, and a preference for results over process Hands-on experience working with the Google Cloud Platform suite of tools Experience with data warehousing, data infrastructure, and ETL (Terraform, Airflow, Dataflow, BigQuery or similar tools) Enthusiasm to collaborate across multiple teams and stakeholders on abstract problems and to develop solutions ETL batch and workflow orchestration with AWS Data Pipeline/Airflow or similar Experience with designing and building large-scale data pipelines in distributed environments with technologies such as Hadoop, Cassandra, Kafka, Spark, HBase, etc. 
Backend development experience with Scala, Java, Python, Unix shell scripting and a zeal for writing well-designed, testable software Experience with Business Intelligence tools such as Data Studio, Amplitude, Tableau, or similar Demonstrated ability to create a vision, architecting scalable long-term solutions that apply technology to solve business problems Preferred Qualifications Degree in a relevant technical field (Computer Science, Mathematics or Statistics) Broad understanding of cloud contact center technologies and operations More About Hopper At Hopper, we are on a mission to become the world’s best — and most fun — place to book travel. By leveraging massive amounts of data and advanced machine learning algorithms, Hopper combines its world-class travel agency offering with proprietary fintech products to help customers spend less and travel better. Ranked the third largest online travel agency in North America, the app has been downloaded nearly 80 million times and continues to gain market share globally. Here are just a few stats that demonstrate the company’s recent growth: Hopper sold around $4 billion in travel and travel fintech in 2022, up nearly 3X over 2021. In 2022, Hopper increased its revenue 2.5X year-over-year. The company’s bespoke fintech products, such as Flight Disruption Guarantee and Price Freeze, now represent 30-40% of Hopper’s total app revenue. Given the success of its fintech products, Hopper launched a B2B initiative called Hopper Cloud in late 2021. Through this partnership program, any travel provider (airlines, hotels, banks, travel agencies, etc.) can integrate and seamlessly distribute Hopper’s fintech or travel inventory. As its first Hopper Cloud partnership, Hopper partnered with Capital One to co-develop Capital One Travel, a new travel portal designed specifically for cardholders. 
Recognized as one of the world’s most innovative companies by Fast Company four years in a row, Hopper has been downloaded over 80 million times and continues to have millions of new installs each month. Hopper has raised over $700 million USD of private capital and is backed by some of the largest institutional investors and banks in the world. Hopper is primed to continue its acceleration as the world’s fastest-growing mobile-first travel marketplace. Come take off with us!"," Associate "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer - Flights (100% Remote),https://www.linkedin.com/jobs/view/data-engineer-flights-100%25-remote-at-hopper-3483746062?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=wTIJU1POJWI8%2BI2D23cz1Q%3D%3D&position=24&pageNum=4&trk=public_jobs_jserp-result_search-card," Hopper ",https://ca.linkedin.com/company/hopper?trk=public_jobs_topcard-org-name," United States "," 6 days ago "," Over 200 applicants ","About The Job We are looking for an autonomous individual to join our Flights team as a Data Engineer to design and build data environments that are robust, high-quality, and fast to access. The ideal candidate will leverage their experience as an engineer writing code and proficiency with data-intensive systems to build data pipelines and solve customer problems. The data engineer will own the data infrastructure, tooling, and technical guidance to enable the Flights team to make data-driven decisions. The role will be critical to building and executing on a progressive data strategy for our Flights team. A successful candidate has prior experience building a data pipeline infrastructure from scratch, scaling resilient solutions, and can find stability in abstract work. This person will also help to reinforce a culture of ownership over data integrity across all Flights teams, enabling product managers in Flights to make data-driven decisions at scale. 
Responsibilities Enable data-driven insights and accurate reporting by ensuring consistent, accurate, and accessible data for internal analysis. Work backwards from customer needs with Product to set and achieve measurable goals regarding data integrity. Develop and distribute best practices for recording, storing, and processing product data to facilitate rapid development and self-service. Draft and review specifications and plans relating to data generation and consumption related to the team's products. Write, review, and deploy production code related to data storage and processing for the team's products. Promote a culture of ownership of data concerns within the team. Strong Candidates Will Have 3+ years of recent experience building and operating high-volume and high-reliability data processing systems A strong sense of ownership over outcomes, and a preference for results over process Hands-on experience working with the Google Cloud Platform suite of tools Experience with data warehousing, data infrastructure, and ETL (Terraform, Airflow, Dataflow, BigQuery or similar tools) Enthusiasm to collaborate across multiple teams and stakeholders on abstract problems and to develop solutions ETL batch and workflow orchestration with AWS Data Pipeline/Airflow or similar Experience with designing and building large-scale data pipelines in distributed environments with technologies such as Hadoop, Cassandra, Kafka, Spark, HBase, etc. 
Backend development experience with Scala, Java, Python, Unix shell scripting and a zeal for writing well-designed, testable software Experience with Business Intelligence tools such as Data Studio, Amplitude, Tableau, or similar Demonstrated ability to create a vision, architecting scalable long-term solutions that apply technology to solve business problems Preferred Qualifications Degree in a relevant technical field (Computer Science, Mathematics or Statistics) Broad understanding of cloud contact center technologies and operations More About Hopper At Hopper, we are on a mission to become the world’s best — and most fun — place to book travel. By leveraging massive amounts of data and advanced machine learning algorithms, Hopper combines its world-class travel agency offering with proprietary fintech products to help customers spend less and travel better. Ranked the third largest online travel agency in North America, the app has been downloaded nearly 80 million times and continues to gain market share globally. Here are just a few stats that demonstrate the company’s recent growth: Hopper sold around $4 billion in travel and travel fintech in 2022, up nearly 3X over 2021. In 2022, Hopper increased its revenue 2.5X year-over-year. The company’s bespoke fintech products, such as Flight Disruption Guarantee and Price Freeze, now represent 30-40% of Hopper’s total app revenue. Given the success of its fintech products, Hopper launched a B2B initiative called Hopper Cloud in late 2021. Through this partnership program, any travel provider (airlines, hotels, banks, travel agencies, etc.) can integrate and seamlessly distribute Hopper’s fintech or travel inventory. As its first Hopper Cloud partnership, Hopper partnered with Capital One to co-develop Capital One Travel, a new travel portal designed specifically for cardholders. 
Recognized as one of the world’s most innovative companies by Fast Company four years in a row, Hopper has been downloaded over 80 million times and continues to have millions of new installs each month. Hopper has raised over $700 million USD of private capital and is backed by some of the largest institutional investors and banks in the world. Hopper is primed to continue its acceleration as the world’s fastest-growing mobile-first travel marketplace. Come take off with us!"," Associate "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer - Flights (100% Remote),https://www.linkedin.com/jobs/view/sr-data-engineer-at-experfy-3531423358?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=RZN7yU4zG7Vm87vo6te9nQ%3D%3D&position=25&pageNum=4&trk=public_jobs_jserp-result_search-card," Hopper ",https://ca.linkedin.com/company/hopper?trk=public_jobs_topcard-org-name," United States "," 6 days ago "," Over 200 applicants "," About The Job We are looking for an autonomous individual to join our Flights team as a Data Engineer to design and build data environments that are robust, high-quality, and fast to access. The ideal candidate will leverage their experience as an engineer writing code and proficiency with data-intensive systems to build data pipelines and solve customer problems. The data engineer will own the data infrastructure, tooling, and technical guidance to enable the Flights team to make data-driven decisions. The role will be critical to building and executing on a progressive data strategy for our Flights team. A successful candidate has prior experience building a data pipeline infrastructure from scratch, scaling resilient solutions, and can find stability in abstract work. 
This person will also help to reinforce a culture of ownership over data integrity across all Flights teams, enabling product managers in Flights to make data-driven decisions at scale. Responsibilities Enable data-driven insights and accurate reporting by ensuring consistent, accurate, and accessible data for internal analysis. Work backwards from customer needs with Product to set and achieve measurable goals regarding data integrity. Develop and distribute best practices for recording, storing, and processing product data to facilitate rapid development and self-service. Draft and review specifications and plans relating to data generation and consumption related to the team's products. Write, review, and deploy production code related to data storage and processing for the team's products. Promote a culture of ownership of data concerns within the team. Strong Candidates Will Have 3+ years of recent experience building and operating high-volume and high-reliability data processing systems A strong sense of ownership over outcomes, and a preference for results over process Hands-on experience working with the Google Cloud Platform suite of tools Experience with data warehousing, data infrastructure, and ETL (Terraform, Airflow, Dataflow, BigQuery or similar tools) Enthusiasm to collaborate across multiple teams and stakeholders on abstract problems and to develop solutions ETL batch and workflow orchestration with AWS Data Pipeline/Airflow or similar Experience with designing and building large-scale data pipelines in distributed environments with technologies such as Hadoop, Cassandra, Kafka, Spark, HBase, etc. Backend development experience with Scala, Java, Python, Unix shell scripting and a zeal for writing well-designed, testable software Experience with Business Intelligence tools such as Data Studio, Amplitude, Tableau, or similar Demonstrated ability to create a vision, architecting scalable long-term solutions that apply technology to solve business problems Preferred Qualifications Degree in a relevant technical field (Computer Science, Mathematics or Statistics) Broad understanding of cloud contact center technologies and operations More About Hopper At Hopper, we are on a mission to become the world’s best — and most fun — place to book travel. By leveraging massive amounts of data and advanced machine learning algorithms, Hopper combines its world-class travel agency offering with proprietary fintech products to help customers spend less and travel better. Ranked the third largest online travel agency in North America, the app has been downloaded nearly 80 million times and continues to gain market share globally. Here are just a few stats that demonstrate the company’s recent growth: Hopper sold around $4 billion in travel and travel fintech in 2022, up nearly 3X over 2021. In 2022, Hopper increased its revenue 2.5X year-over-year. The company’s bespoke fintech products, such as Flight Disruption Guarantee and Price Freeze, now represent 30-40% of Hopper’s total app revenue. Given the success of its fintech products, Hopper launched a B2B initiative called Hopper Cloud in late 2021. Through this partnership program, any travel provider (airlines, hotels, banks, travel agencies, etc.) can integrate and seamlessly distribute Hopper’s fintech or travel inventory. As its first Hopper Cloud partnership, Hopper partnered with Capital One to co-develop Capital One Travel, a new travel portal designed specifically for cardholders. Recognized as one of the world’s most innovative companies by Fast Company four years in a row, Hopper has been downloaded over 80 million times and continues to have millions of new installs each month. Hopper has raised over $700 million USD of private capital and is backed by some of the largest institutional investors and banks in the world. Hopper is primed to continue its acceleration as the world’s fastest-growing mobile-first travel marketplace. Come take off with us! 
"," Associate "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer II,https://www.linkedin.com/jobs/view/data-engineer-ii-at-the-venetian-resort-las-vegas-3482799870?refId=wWftybpyf1t1LXclHsm6uw%3D%3D&trackingId=0BkzAACy81TEU6azpJmh0Q%3D%3D&position=14&pageNum=2&trk=public_jobs_jserp-result_search-card," The Venetian Resort Las Vegas ",https://www.linkedin.com/company/the-venetian?trk=public_jobs_topcard-org-name," Las Vegas, NV "," 4 weeks ago "," Over 200 applicants ","Position Overview: The primary responsibility of the Data Engineer II – Enterprise Analytics is assisting in designing, developing, and deploying data-driven solutions as part of Enterprise Analytics data strategy and goals. Data Engineer II – Enterprise Analytics is responsible for creating reliable ETLs and scalable data pipelines to support the Analytics and BI environment (including modeling and machine learning, visualizations, reports, cubes & applications, etc.). Data Engineer II – Enterprise Analytics participates in data modeling and development of data models and data marts by interpreting business logic required to turn complex ideas into sustainable value-add processes. All duties are to be performed in accordance with departmental and The Venetian Resort’s policies, practices, and procedures. Essential Duties & Responsibilities: Collaborate with Enterprise Analytics BI Analysts and Data Scientists, and other business stakeholders to understand business problems and build/automate data structures ingested by analytics products (e.g.: reports, dashboards, cubes, etc.). Create BI solutions, including dashboards (Power BI) and reports (SSRS, Excel, and Cognos). Development of logic for KPIs requested by the business leadership. Troubleshoot existing and create new ETLs, SSIS packages, SQL stored procedures and jobs. Assist in Star and Snowflake modeling, creating dimensional models and ETL processes. 
Write efficient SQL code for use in data pipelines and data processing. Drive data quality processes like data profiling, data cleansing, etc. Develop best practices and approaches to support continuous process automation for data ingestion and data pipelines. Use innovative problem solving and critical thinking approaches to troubleshoot challenging data obstacles. Test, optimize, troubleshoot, and fine-tune queries for maximum efficiency. Maintain existing and create new Microsoft Power Apps solutions (including but not limited to Power Apps, Power Automate, Power BI, Flows, etc.) according to business needs. Perform QA and UAT processes to foster an agile development cycle. Create documentation on table design, mapping out steps and underlying logic within data marts to facilitate data adoption with minimum guidance from the Enterprise Analytics management. Identify areas of improvement not just in owned work, but also in other areas of the business. Mentor and train junior Data Engineers on best practices, query and ETL optimization techniques. Create and maintain daily, weekly, monthly, quarterly reports and dashboards. Consolidate fractured enterprise reporting into a standardized product for easy visualization and cross-departmental understanding. Create reporting structures that accurately link cross-departmental data, which allows for on-demand delivery of ad-hoc reports. Safety is an essential function of this job. Consistent and regular attendance is an essential function of this job. Performs other related duties as assigned. Company Standards of Conduct All The Venetian Resort Team Members are expected to conduct and carry themselves in a professional manner at all times. Team Members are required to observe the Company’s standards, work requirements and rules of conduct. Minimum Qualifications: 21 years of age. Proof of authorization/eligibility to work in the United States. 
Bachelor’s degree in Computer Science, Information Systems, Engineering, Analytics, or related field is required. Master’s degree in a related discipline is preferred. Must be able to obtain and maintain a Nevada Gaming Control Board registration and any other certification or license, as required by law or policy. 2+ years of experience in building data pipelines and ETL processes is required. 2+ years of experience creating visualizations and reports (Power BI, Tableau, MicroStrategy, Google Analytics) is required. 2+ years of experience in writing advanced SQL, data mining and working with traditional relational databases (tables, views, window functions, scalar and aggregate functions, primary/foreign keys, indexes, DML/DDL statements, joins and unions) and/or distributed systems (Hadoop, Big Query) is required. 1+ years of experience with programming/scripting languages such as Python, R or Big Query is required. Experience in either Microsoft Power Suite (Power Apps, Power Automate, Power BI, etc.), Microsoft Azure, Google Cloud Platform, or RPA tools is preferred. Excellent understanding of data types, data structures and database systems and their specific use cases is required. Strong understanding of data modeling principles including Dimensional modeling, and Data Normalization principles is required. Extensive knowledge of Microsoft Excel (Excel formulas, data wrangling, VBA macros, graphs, and pivot tables) is required. Excellent critical thinker and effective problem solver with creative solutions. Physical Requirements: Must be able to: Lift or carry 10 pounds, unassisted, in the performance of specific tasks, as assigned. Physically access all areas of the property and drive areas with or without a reasonable accommodation. Maintain composure under pressure and consistently meet deadlines with internal and external customers and contacts. 
Ability to interact appropriately and effectively with guests, management, other team members, and outside contacts. Ability to walk, stand, stretch, bend, and kneel for prolonged periods of time. Work in a fast-paced and busy environment. Work indoors and be exposed to various environmental factors such as, but not limited to, CRT, noise, dust, and cigarette smoke."," Associate "," Full-time "," Information Technology, Analyst, and Research "," IT Services and IT Consulting, Research Services, and Gambling Facilities and Casinos " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-pepsico-3485243688?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=fCcvFSci6khftjSJ2V%2F4IQ%3D%3D&position=8&pageNum=3&trk=public_jobs_jserp-result_search-card," PepsiCo ",https://www.linkedin.com/company/pepsico?trk=public_jobs_topcard-org-name," Plano, TX "," 1 week ago "," 179 applicants ","Overview PepsiCo operates in an environment undergoing immense and rapid change. Big-data and digital technologies are driving business transformation that is unlocking new capabilities and business innovations in areas like eCommerce, mobile experiences and IoT. The key to winning in these areas is being able to leverage enterprise data foundations built on PepsiCo’s global business scale to enable business insights, advanced analytics and new product development. PepsiCo’s Data Management and Operations team is tasked with the responsibility of developing quality data collection processes, maintaining the integrity of our data foundations and enabling business leaders and data scientists across the company to have rapid access to the data they need for decision-making and innovation. 
What PepsiCo Data Management and Operations does: Maintain a predictable, transparent, global operating rhythm that ensures always-on access to high-quality data for stakeholders across the company Responsible for day-to-day data collection, transportation, maintenance/curation and access to the PepsiCo corporate data asset Work cross-functionally across the enterprise to centralize data and standardize it for use by business, data science or other stakeholders Increase awareness about available data and democratize access to it across the company Job Description As a member of the data engineering team, you will be the key technical expert developing and overseeing PepsiCo's data product build & operations and drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help lead the development of very large and complex data applications into public cloud environments directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners and business users. You'll be working in a hybrid environment with in-house, on-premise data sources as well as cloud and remote systems. Responsibilities Active contributor to code development in projects and services. Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products. 
Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance. Responsible for implementing best practices around systems integration, security, performance and data management. Empower the business by creating value through increased adoption of the data, data science and business intelligence landscape. Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions. Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners. Develop and optimize procedures to “productionalize” data science models. Define and manage SLAs for data products and processes running in production. Support large-scale experimentation done by data scientists. Prototype new approaches and build solutions at scale. Research state-of-the-art methodologies. Create documentation for learnings and knowledge transfer. Create and audit reusable packages or libraries. COVID-19 vaccination is a condition of employment for this role. Please note that all such company vaccine requirements provide the opportunity to request an approved accommodation or exemption under applicable law. Qualifications BA/BS in Computer Science, Math, Physics, or other technical fields. Skills, Abilities, Knowledge Excellent communication skills, both verbal and written, along with the ability to influence and demonstrate confidence in communications with senior-level management. Proven track record of leading and mentoring data teams. Strong change manager. Comfortable with change, especially that which arises through company growth. Ability to understand and translate business requirements into data and technical requirements. High degree of organization and ability to manage multiple, competing projects and priorities simultaneously. 
Positive and flexible attitude to enable adjusting to different needs in an ever-changing environment. Strong leadership, organizational and interpersonal skills; comfortable managing trade-offs. Foster a team culture of accountability, communication, and self-management. Proactively drives impact and engagement while bringing others along. Consistently attain/exceed individual and team goals. Ability to lead others without direct authority in a matrixed environment. Competencies Highly influential, with the ability to educate challenging stakeholders on the role of data and its purpose in the business. Understands both the engineering and business side of the Data Products released. Places the user in the center of decision making. Teams up and collaborates for speed, agility, and innovation. Experience with and embraces agile methodologies. Strong negotiation and decision-making skills. Experience managing and working with globally distributed teams. EEO Statement All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. PepsiCo is an Equal Opportunity Employer: Female / Minority / Disability / Protected Veteran / Sexual Orientation / Gender Identity If you'd like more information about your EEO rights as an applicant under the law, please download the available EEO is the Law & EEO is the Law Supplement documents. View PepsiCo EEO Policy. 
Please view our Pay Transparency Statement"," Mid-Senior level "," Full-time "," Information Technology "," Food and Beverage Services and Manufacturing " Data Engineer,United States,Big Data Software Engineer,https://www.linkedin.com/jobs/view/big-data-software-engineer-at-apple-3522802169?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=hnUbOAqsg9lf4OYwx3e6ZA%3D%3D&position=23&pageNum=3&trk=public_jobs_jserp-result_search-card," Apple ",https://www.linkedin.com/company/apple?trk=public_jobs_topcard-org-name," Cupertino, CA "," 20 hours ago "," 122 applicants "," Summary At Apple, great ideas have a way of becoming phenomenal products, services, and customer experiences very quickly. Bring passion and dedication to your job; there's no telling what you could accomplish! Join the Data Analytics group as a Software Engineer. As a critical member of our team, you will design and build Apple's mission-critical applications for analytics pipelines and web services that help make our products better. Our tools support internal and external developers to surface data and diagnostics to improve software quality. In addition, you will collaborate with Engineering teams across the company to provide a complete, robust solution. If you thrive in an ambiguous and fast-paced environment, operating at both strategic and tactical levels, this is the role for you. Come join us in doing the best work of your life! Key Qualifications Three-plus years of programming expertise in Java/Scala/Python. Three-plus years of distributed systems and big data experience are required. Familiarity with NoSQL databases like Cassandra and distributed queuing systems like Kafka. Strong experience with multi-threading, Spring, and RESTful services. Proven knowledge of application performance improvement techniques and caching solutions. Knowledge and experience with technologies like Akka and Spark is preferred. Thrives in a collaborative environment and is comfortable working cross-functionally. 
Description This exciting software engineering position demands a strong background in technology, software engineering, encouraging partnerships, and strength in communication. You will be building tools that support developers at Apple and worldwide to build better software. We are looking for someone who has experience coding software that is efficient, scalable, debuggable, and stable, as we process multiple terabytes of data daily. You will work with various teams across Apple to identify requirements and implement efficient solutions for surfacing diagnostic data to developers. Must have extensive experience in software architecture, design, and development. Solid understanding of concurrency, scalability, and fault-tolerant techniques is desirable. Proven track record working with cross-functional teams with a focus on customer experience. Thrive in an ambiguous and fast-paced environment, operating at strategic and tactical levels. Education & Experience Bachelor's degree or equivalent work experience in Engineering, Computer Science, or Business Information Systems. Pay & Benefits At Apple, base pay is one part of our total compensation package and is determined within a range. This provides the opportunity to progress as you grow and develop within a role. The base pay range for this role is between $130,000 and $196,500, and your base pay will depend on your skills, qualifications, experience, and location. Apple employees also have the opportunity to become an Apple shareholder through participation in Apple’s discretionary employee stock programs. Apple employees are eligible for discretionary restricted stock unit awards, and can purchase Apple stock at a discount if voluntarily participating in Apple’s Employee Stock Purchase Plan. 
You’ll also receive benefits including: Comprehensive medical and dental coverage, retirement benefits, a range of discounted products and free services, and for formal education related to advancing your career at Apple, reimbursement for certain educational expenses — including tuition. Additionally, this role might be eligible for discretionary bonuses or commission payments as well as relocation. Learn more about Apple Benefits. Note: Apple benefit, compensation and employee stock programs are subject to eligibility requirements and other terms of the applicable plan or program. Role Number: 200456974 "," Not Applicable "," Full-time "," Engineering and Information Technology "," Computers and Electronics Manufacturing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-dr-martens-plc-3499257608?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=43jqaLjygiNjf0UTM4jSLA%3D%3D&position=24&pageNum=3&trk=public_jobs_jserp-result_search-card," Dr. Martens plc ",https://uk.linkedin.com/company/dr-martens-plc?trk=public_jobs_topcard-org-name," Portland, OR "," 2 weeks ago "," 153 applicants ","THE GIG The D&A Team has set the vision “To make every DM decision better with data” and is embarking on a transformational journey to “Execute faster in creating trust, access and value globally from all our data to empower our colleagues to stride forwards”. It is an exciting time to join the D&A team, with the opportunity to both set and guide the team and DM’s strategic direction. As our Data Engineer, you will be at the forefront of the team working on our data pipelines and data warehouse. You will be central to the success of the global Data Transformation initiative. We are at the beginning of the journey, so you will get the opportunity to define our data engineering approach, making fundamental changes through the design and implementation of a modern data ecosystem that underpins DM’s bold growth ambitions. 
You will work closely with the broader business, supporting, challenging, and championing data and analytics for better decision making. THE STUFF THAT SETS YOU APART Working independently while still part of a DE team, you will report to a lead based in the UK. You will need to work on assigned tasks with minimal help from your peers and to be proactive about issues you find during your working day. This position will also support APAC; flexibility in hours will be needed to cover that time zone. As the first Data Engineer in this office, you will be responsible for representing the team in all local data project meetings. Establishing the data warehouse as a trusted, stable, and reliable source of insight for the business Building and maintaining robust and efficient data pipelines to bring internal and external data sources into our cloud-based platform for processing and ingesting into the DW Building analytics datasets that utilize our pipelines to provide actionable insights into customer understanding, operational efficiency, and other key business performance metrics. You will possess strong organizational skills and attention to detail, including the ability to manage multiple tasks autonomously You strive for improvement in your work and that of others, proactively identifying issues and opportunities Provide hands-on support to users reporting data incidents, helping the wider team triage and respond to user queries promptly Embedding technical architecture and documentation standards, governance, policies, processes, and procedures. YOUR FUNDAMENTAL QUALITIES It’s never just a job at Dr. Martens. It’s a way of life. We live and breathe our Fundamentals - INTEGRITY. PROFESSIONAL. PASSIONATE. TEAM PLAYERS. They define who we are and how we get the job done. We believe each role is as unique as the person who does it. 
To join our team, you will also possess these qualities: Azure and its various services Advanced SQL skills Azure Data Factory and Logic Apps development and orchestration Cloud data warehouse platforms such as Azure Data Warehouse, Synapse or Snowflake Developing Spark/Databricks pipelines using Python Event-driven architecture and data streaming Implementing data lake standards and best practice Data warehouse design and modelling skills such as dimensional or Data Vault 2.0. Agile delivery methodologies, Azure DevOps and CI/CD pipelines. Supporting a BI Development team in maintaining data warehouse and pipeline development best practices Great relationship management that delivers results through effective teamwork You’ll be a proud custodian of our DM’s culture, embodying what we stand for and encouraging others to do the same. You will bring the outside-in; you’ll share best practice across the team / business and encourage idea sharing as well as collaborative problem solving International travel required, up to 5%. Ability to work at a standard computer set-up 40+ hours per week, with or without accommodations. Connection with our Brand, The Stuff that Sets Us Apart and our Fundamental Qualities. At DMs, technical capability will go hand in hand with the below: Great relationship management that delivers results through effective teamwork. Be a proud custodian of our DM’s culture, embodying what we stand for and encouraging others to do the same. Help build a highly engaged team – ensuring a collaborative culture and providing guidance & support to other team members. 
Take ownership of your own development, proactively seeking out feedback to build self-awareness. Bring the outside-in; share best practice across the team / business and encourage idea sharing as well as collaborative problem solving. Lead the way and be a role model on all things DE&I and wellbeing. At Dr. Martens, we are committed to creating an environment where we can all be proud to work and be our best. Part of this commitment is being an equal opportunity employer. All qualified applicants will be considered for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. WHAT'S IN IT FOR YOU? Welcome to the brand pair of Docs Employee discount of 65% off footwear and 50% on accessories Early Friday finish in the summertime Amazing Portland based office & rooftop Hybrid work schedule Affordable & comprehensive Medical, Dental & Vision packages Our Employee Assistance Program – for when times might get tough 401(k) Pre-Tax and Roth Retirement savings plans DM Foundation, supporting and empowering our communities around the world Paid volunteer hours Are you ready to fill your boots? Apply now!"," Mid-Senior level "," Full-time "," Information Technology "," Retail Apparel and Fashion and Retail " Data Engineer,United States,Data Engineer - Intern (Summer 2023),https://www.linkedin.com/jobs/view/data-engineer-intern-summer-2023-at-cloudflare-3515396678?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=s2mjTmOb3nc1eXzvxtfuZw%3D%3D&position=3&pageNum=4&trk=public_jobs_jserp-result_search-card," Cloudflare ",https://www.linkedin.com/company/cloudflare?trk=public_jobs_topcard-org-name," Austin, TX "," 1 week ago "," Over 200 applicants ","About Us At Cloudflare, we have our eyes set on an ambitious goal: to help build a better Internet. 
Today the company runs one of the world’s largest networks that powers approximately 25 million Internet properties, for customers ranging from individual bloggers to SMBs to Fortune 500 companies. Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks. Cloudflare was named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company. We realize people do not fit into neat boxes. We are looking for curious and empathetic individuals who are committed to developing themselves and learning new skills, and we are ready to help you do that. We cannot complete our mission without building a diverse and inclusive team. We hire the best people based on an evaluation of their potential and support them throughout their time at Cloudflare. Come join us! About The Department This internship is targeting students with experience and interest in Data Engineering. The Data Engineer Intern delivers full-stack data solutions across the entire data processing pipeline. This role relies on systems engineering principles to design and implement solutions that span the data lifecycle - collect, ingest, process, store, persist, access, and deliver data at scale and at speed. It includes knowledge of local, distributed, and cloud-based technologies, data virtualization, and all security and authentication mechanisms required to protect the data. What you'll do Work through all stages of a data solution lifecycle, e.g., analyze / profile data, create conceptual, logical and physical data model designs, architect and design ETL, reporting and analytics. 
Knowledge of modern enterprise data architectures, design patterns, and data toolsets and the ability to apply them. Identify key metrics and build exec-facing dashboards to track progress of the business and its highest priority initiatives. Identify key business levers, establish cause & effect, perform analyses, and communicate key findings to various stakeholders to facilitate data-driven decision-making. Work closely with business teams such as Finance, Sales, Marketing, Legal, Customer Support, Product, and Engineering. Examples Of Desirable Skills, Knowledge And Experience Pursuing an M.S. in Computer Science, Data Analytics, or a related field Proficiency in data modeling techniques and understanding of normalization Software engineering experience Strong problem solving, conceptualization, and communication skills Distributed data systems (e.g., Hadoop, Hive, Spark, Streaming) Data APIs (GraphQL) Database systems (SQL and NoSQL) Languages: SQL, Python, Scala, Golang, Shell Scripting, JavaScript Full-stack (frameworks such as React, AngularJS and NodeJS) Role will be located in Austin, TX or San Francisco, CA. What Makes Cloudflare Special? We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet. Project Galileo : We equip politically and artistically important organizations and journalists with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers, at no cost. Athenian Project : We created Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration. 
Path Forward Partnership : Since 2016, we have partnered with Path Forward, a nonprofit organization, to create 16-week positions for mid-career professionals who want to get back to the workplace after taking time off to care for a child, parent, or loved one. 1.1.1.1 : We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released. Here’s the deal: we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers. Sound like something you’d like to be a part of? We’d love to hear from you! This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license. Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person's, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer. Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. 
Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107."," Associate "," Full-time "," Information Technology "," IT Services and IT Consulting, Computer and Network Security, and Technology, Information and Internet " Data Engineer,United States,Data Engineer - Flights (100% Remote),https://www.linkedin.com/jobs/view/data-engineer-flights-100%25-remote-at-hopper-3483744323?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=u8CwhugLwqWOx2AdgkS5cQ%3D%3D&position=6&pageNum=4&trk=public_jobs_jserp-result_search-card," Hopper ",https://ca.linkedin.com/company/hopper?trk=public_jobs_topcard-org-name," United States "," 6 days ago "," Over 200 applicants ","About The Job We are looking for an autonomous individual to join our Flights team as a Data Engineer to design and build data environments that are robust, high-quality, and provide fast access. The ideal candidate will leverage their experience as an engineer writing code and proficiency with data-intensive systems to build data pipelines and solve customer problems. The data engineer will own the data infrastructure, tooling, and technical guidance to enable the Flights team to make data-driven decisions. The role will be critical to building and executing on a progressive data strategy for our Flights team. A successful candidate has prior experience building a data pipeline infrastructure from scratch, scaling resilient solutions, and can find stability in abstract work. This person will also help to reinforce a culture of ownership over data integrity across all Flights teams enabling product managers in Flights to make data driven decisions at scale. Responsibilities Enable data-driven insights, accurate reporting by ensuring consistent, accurate, and accessible data for internal analysis. Work backwards from customer needs with Product to set and achieve measurable goals regarding data integrity. Develop and distribute best practices for recording, storing, and processing product data to facilitate rapid development and self-service. 
Draft and review specifications and plans relating to data generation and consumption for the team's products. Write, review, and deploy production code related to data storage and processing for the team's products. Promote a culture of ownership of data concerns within the team. Strong Candidates Will Have 3+ years of recent experience building and operating high-volume and high-reliability data processing systems A strong sense of ownership over outcomes, and preference for results over process Hands-on experience working with the Google Cloud Platform suite of tools Experience with data warehousing, data infrastructure, and ETL (Terraform, Airflow, Dataflow, BigQuery or similar tools) Enthusiasm for collaborating across multiple teams and stakeholders on abstract problems and developing solutions ETL batch and workflow orchestration with AWS Data Pipeline/Airflow or similar Experience with designing and building large-scale data pipelines in distributed environments with technologies such as Hadoop, Cassandra, Kafka, Spark, HBase, etc. Backend development experience with Scala, Java, Python, Unix shell scripting and a zeal for writing well-designed, testable software Experience with Business Intelligence tools such as Data Studio, Amplitude, Tableau, or similar Demonstrated ability to create a vision, architecting scalable long-term solutions that apply technology to solve business problems Preferred Qualifications Degree in a relevant technical field (Computer Science, Mathematics or Statistics) Broad understanding of cloud contact center technologies and operations More About Hopper At Hopper, we are on a mission to become the world’s best — and most fun — place to book travel. By leveraging massive amounts of data and advanced machine learning algorithms, Hopper combines its world-class travel agency offering with proprietary fintech products to help customers spend less and travel better. 
Ranked the third largest online travel agency in North America, the app has been downloaded nearly 80 million times and continues to gain market share globally. Here are just a few stats that demonstrate the company’s recent growth: Hopper sold around $4 billion in travel and travel fintech in 2022, up nearly 3X over 2021. In 2022, Hopper increased its revenue 2.5X year-over-year. The company’s bespoke fintech products, such as Flight Disruption Guarantee and Price Freeze, now represent 30-40% of Hopper’s total app revenue. Given the success of its fintech products, Hopper launched a B2B initiative called Hopper Cloud in late 2021. Through this partnership program, any travel provider (airlines, hotels, banks, travel agencies, etc.) can integrate and seamlessly distribute Hopper’s fintech or travel inventory. As its first Hopper Cloud partnership, Hopper partnered with Capital One to co-develop Capital One Travel, a new travel portal designed specifically for cardholders. Recognized as one of the world’s most innovative companies by Fast Company four years in a row, Hopper has been downloaded over 80 million times and continues to have millions of new installs each month. Hopper has raised over $700 million USD of private capital and is backed by some of the largest institutional investors and banks in the world. Hopper is primed to continue its acceleration as the world’s fastest-growing mobile-first travel marketplace. 
Come take off with us!"," Associate "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer - Flights (100% Remote),https://www.linkedin.com/jobs/view/data-engineer-at-proactive-md-3495656844?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=%2FNRN%2Bgy0os14GtCO12TYxA%3D%3D&position=7&pageNum=4&trk=public_jobs_jserp-result_search-card," Hopper ",https://ca.linkedin.com/company/hopper?trk=public_jobs_topcard-org-name," United States "," 6 days ago "," Over 200 applicants "," About The JobWe are looking for an autonomous individual to join our Flights team as a Data Engineer to design and build data environments that are robust, high-quality, and provide fast access. The ideal candidate will leverage their experience as an engineer writing code and proficiency with data-intensive systems to build data pipelines and solve customer problems.The data engineer will own the data infrastructure, tooling, and technical guidance to enable the Flights team to make data-driven decisions. The role will be critical to building and executing on a progressive data strategy for our Flights team. A successful candidate has prior experience building a data pipeline infrastructure from scratch, scaling resilient solutions, and can find stability in abstract work. 
This person will also help to reinforce a culture of ownership over data integrity across all Flights teams enabling product managers in Flights to make data driven decisions at scale.ResponsibilitiesEnable data-driven insights, accurate reporting by ensuring consistent, accurate, and accessible data for internal analysis.Work backwards from customer needs with Product to set and achieve measurable goals regarding data integrity.Develop and distribute best practices for recording, storing, and processing product data to facilitate rapid development and self-service.Draft and review specifications and plans relating to data generation and consumption related to the team's products.Write, review, and deploy production code related to data storage and processing for the team's products.Promote a culture of ownership of data concerns within the team.Strong Candidates Will Have3+ years of recent experience building and operating high-volume and high-reliability data processing systemsA strong sense of ownership over outcomes, and preference for results over processHands-on experience working with the Google Cloud Platform suite of toolsExperience with data warehousing, data infrastructure, and ETL (Terraform, AirFlow, Dataflow, BigQuery or similar tools)Enthusiasm to collaborate across multiple teams and stakeholders on abstract problems and developing solutionsETL batch and workflow orchestration with AWS Data Pipeline/Airflow or similarExperience with designing and building large scale data pipelines in distributed environments with technologies such as Hadoop, Cassandra, Kafka, Spark, HBase, etc.Backend development experience with Scala, Java, Python, unix shell scripting and a zeal for writing well-designed, testable softwareExperience with Business Intelligence tools such or Data Studio, Amplitude, Tableau, or similarDemonstrated ability to create a vision, architecting scalable long term solutions that apply technology to solve business problemsPreferred 
QualificationsDegree in a relevant technical field (Computer Science, Mathematics or Statistics)Broad understanding of cloud contact center technologies and operations More About HopperAt Hopper, we are on a mission to become the world’s best — and most fun — place to book travel. By leveraging massive amounts of data, advanced machine learning algorithms, Hopper combines its world-class travel agency offering with proprietary fintech products to help customers spend less and travel better. Ranked the third largest online travel agency in North America, the app has been downloaded nearly 80 million times and continues to gain market share globally.Here are just a few stats that demonstrate the company’s recent growth: Hopper sold around $4 billion in travel and travel fintech in 2022, up nearly 3X over 2021. In 2022, Hopper increased its revenue 2.5X year-over year. The company’s bespoke fintech products, such as Flight Disruption Guarantee and Price Freeze, now represent 30-40% of Hopper’s total app revenue. Given the success of its fintech products, Hopper launched a B2B initiative called Hopper Cloud in late 2021. Through this partnership program, any travel provider (airlines, hotels, banks, travel agencies, etc.) can integrate and seamlessly distribute Hopper’s fintech or travel inventory. As its first Hopper Cloud partnership, Hopper partnered with Capital One to co-develop Capital One Travel, a new travel portal designed specifically for cardholders. Recognized as one of the world’s most innovative companies by Fast Company four years in a row, Hopper has been downloaded over 80 million times and continues to have millions of new installs each month. Hopper has raised over $700 million USD of private capital and is backed by some of the largest institutional investors and banks in the world. Hopper is primed to continue its acceleration as the world’s fastest-growing mobile-first travel marketplace.Come take off with us! 
"," Associate "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer - Flights (100% Remote),https://www.linkedin.com/jobs/view/data-engineer-data-pipeline-team-remote-at-constructor-3507406792?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=cgp6rzDKGDwmEGkZHQ9FYg%3D%3D&position=8&pageNum=4&trk=public_jobs_jserp-result_search-card," Hopper ",https://ca.linkedin.com/company/hopper?trk=public_jobs_topcard-org-name," United States "," 6 days ago "," Over 200 applicants "," About The JobWe are looking for an autonomous individual to join our Flights team as a Data Engineer to design and build data environments that are robust, high-quality, and provide fast access. The ideal candidate will leverage their experience as an engineer writing code and proficiency with data-intensive systems to build data pipelines and solve customer problems.The data engineer will own the data infrastructure, tooling, and technical guidance to enable the Flights team to make data-driven decisions. The role will be critical to building and executing on a progressive data strategy for our Flights team. A successful candidate has prior experience building a data pipeline infrastructure from scratch, scaling resilient solutions, and can find stability in abstract work. 
This person will also help to reinforce a culture of ownership over data integrity across all Flights teams, enabling product managers in Flights to make data-driven decisions at scale. Responsibilities Enable data-driven insights and accurate reporting by ensuring consistent, accurate, and accessible data for internal analysis. Work backwards from customer needs with Product to set and achieve measurable goals regarding data integrity. Develop and distribute best practices for recording, storing, and processing product data to facilitate rapid development and self-service. Draft and review specifications and plans relating to data generation and consumption related to the team's products. Write, review, and deploy production code related to data storage and processing for the team's products. Promote a culture of ownership of data concerns within the team. Strong Candidates Will Have 3+ years of recent experience building and operating high-volume and high-reliability data processing systems A strong sense of ownership over outcomes, and a preference for results over process Hands-on experience working with the Google Cloud Platform suite of tools Experience with data warehousing, data infrastructure, and ETL (Terraform, Airflow, Dataflow, BigQuery or similar tools) Enthusiasm to collaborate across multiple teams and stakeholders on abstract problems and developing solutions ETL batch and workflow orchestration with AWS Data Pipeline/Airflow or similar Experience with designing and building large scale data pipelines in distributed environments with technologies such as Hadoop, Cassandra, Kafka, Spark, HBase, etc. Backend development experience with Scala, Java, Python, unix shell scripting and a zeal for writing well-designed, testable software Experience with Business Intelligence tools such as Data Studio, Amplitude, Tableau, or similar Demonstrated ability to create a vision, architecting scalable long term solutions that apply technology to solve business problems Preferred 
Qualifications Degree in a relevant technical field (Computer Science, Mathematics or Statistics) Broad understanding of cloud contact center technologies and operations More About Hopper At Hopper, we are on a mission to become the world’s best — and most fun — place to book travel. By leveraging massive amounts of data and advanced machine learning algorithms, Hopper combines its world-class travel agency offering with proprietary fintech products to help customers spend less and travel better. Ranked the third largest online travel agency in North America, the app has been downloaded nearly 80 million times and continues to gain market share globally. Here are just a few stats that demonstrate the company’s recent growth: Hopper sold around $4 billion in travel and travel fintech in 2022, up nearly 3X over 2021. In 2022, Hopper increased its revenue 2.5X year-over-year. The company’s bespoke fintech products, such as Flight Disruption Guarantee and Price Freeze, now represent 30-40% of Hopper’s total app revenue. Given the success of its fintech products, Hopper launched a B2B initiative called Hopper Cloud in late 2021. Through this partnership program, any travel provider (airlines, hotels, banks, travel agencies, etc.) can integrate and seamlessly distribute Hopper’s fintech or travel inventory. As its first Hopper Cloud partnership, Hopper partnered with Capital One to co-develop Capital One Travel, a new travel portal designed specifically for cardholders. Recognized as one of the world’s most innovative companies by Fast Company four years in a row, Hopper has been downloaded over 80 million times and continues to have millions of new installs each month. Hopper has raised over $700 million USD of private capital and is backed by some of the largest institutional investors and banks in the world. Hopper is primed to continue its acceleration as the world’s fastest-growing mobile-first travel marketplace. Come take off with us! 
"," Associate "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-valiance-solutions-3502738324?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=hS6EYG6YqXegjPLxG%2FS6gQ%3D%3D&position=10&pageNum=4&trk=public_jobs_jserp-result_search-card," Valiance Solutions ",https://in.linkedin.com/company/valiance-solutions?trk=public_jobs_topcard-org-name," Denver, CO "," 2 weeks ago "," Over 200 applicants ","About this position Valiance Solutions is a preferred technology consulting partner with many Fortune 1000 companies. We are building the technology team for one of our clients in the satellite TV & telecommunications space with the requirement of a Principal Engineer / Tech Lead. We are searching for someone with a sharp analytical mindset, a strong “data sense”, and deep expertise with SQL. The objective for this position is to help the team analyze various enterprise data sources and develop specifications about how to transform / connect these data sources into customer behavioral characteristics. These specifications will then be implemented as distributed data processing pipelines. Please review the job responsibilities and required experience outlined below. If you feel you are a good match, we’d love to hear from you. Thanks very much! Location: Denver, CO Responsibilities: Must have: 2+ years’ experience building data delivery services to support critical operational processes, reporting, and analytical models. Demonstrated strength in SQL and Python, data modeling, data warehousing, ETL development, and process automation. Perform SQL-based exploratory analysis. Design and build ETL jobs using AWS services in a managed environment. Bachelor’s degree. Nice to have: Develop Python-based tools and processes that solve operational and business problems. Experience with AWS Athena. 
Experience with query optimization, performance monitoring, and troubleshooting pipeline failures."," Mid-Senior level "," Contract "," Information Technology, Strategy/Planning, and Engineering "," Broadcast Media Production and Distribution, Entertainment Providers, and Media Production " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-database-engineering-at-experfy-3516890589?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=sjeLqxnOO7vyXozC%2BEMyPA%3D%3D&position=11&pageNum=4&trk=public_jobs_jserp-result_search-card," Valiance Solutions ",https://in.linkedin.com/company/valiance-solutions?trk=public_jobs_topcard-org-name," Denver, CO "," 2 weeks ago "," Over 200 applicants "," About this position Valiance Solutions is a preferred technology consulting partner with many Fortune 1000 companies. We are building the technology team for one of our clients in the satellite TV & telecommunications space with the requirement of a Principal Engineer / Tech Lead. We are searching for someone with a sharp analytical mindset, a strong “data sense”, and deep expertise with SQL. The objective for this position is to help the team analyze various enterprise data sources and develop specifications about how to transform / connect these data sources into customer behavioral characteristics. These specifications will then be implemented as distributed data processing pipelines. Please review the job responsibilities and required experience outlined below. If you feel you are a good match, we’d love to hear from you. 
Thanks very much! Location: Denver, CO Responsibilities: Must have: 2+ years’ experience building data delivery services to support critical operational processes, reporting, and analytical models. Demonstrated strength in SQL and Python, data modeling, data warehousing, ETL development, and process automation. Perform SQL-based exploratory analysis. Design and build ETL jobs using AWS services in a managed environment. Bachelor’s degree. Nice to have: Develop Python-based tools and processes that solve operational and business problems. Experience with AWS Athena. Experience with query optimization, performance monitoring, and troubleshooting pipeline failures. "," Mid-Senior level "," Contract "," Information Technology, Strategy/Planning, and Engineering "," Broadcast Media Production and Distribution, Entertainment Providers, and Media Production " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-avalara-3480659857?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=xPn2sWsNGXk86xVdR8aNzQ%3D%3D&position=12&pageNum=4&trk=public_jobs_jserp-result_search-card," Valiance Solutions ",https://in.linkedin.com/company/valiance-solutions?trk=public_jobs_topcard-org-name," Denver, CO "," 2 weeks ago "," Over 200 applicants "," About this position Valiance Solutions is a preferred technology consulting partner with many Fortune 1000 companies. We are building the technology team for one of our clients in the satellite TV & telecommunications space with the requirement of a Principal Engineer / Tech Lead. We are searching for someone with a sharp analytical mindset, a strong “data sense”, and deep expertise with SQL. The objective for this position is to help the team analyze various enterprise data sources and develop specifications about how to transform / connect these data sources into customer behavioral characteristics. 
These specifications will then be implemented as distributed data processing pipelines. Please review the job responsibilities and required experience outlined below. If you feel you are a good match, we’d love to hear from you. Thanks very much! Location: Denver, CO Responsibilities: Must have: 2+ years’ experience building data delivery services to support critical operational processes, reporting, and analytical models. Demonstrated strength in SQL and Python, data modeling, data warehousing, ETL development, and process automation. Perform SQL-based exploratory analysis. Design and build ETL jobs using AWS services in a managed environment. Bachelor’s degree. Nice to have: Develop Python-based tools and processes that solve operational and business problems. Experience with AWS Athena. Experience with query optimization, performance monitoring, and troubleshooting pipeline failures. "," Mid-Senior level "," Contract "," Information Technology, Strategy/Planning, and Engineering "," Broadcast Media Production and Distribution, Entertainment Providers, and Media Production " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-tegria-3497480831?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=yxD3L1ukDiE4ffICnCJ1UQ%3D%3D&position=13&pageNum=4&trk=public_jobs_jserp-result_search-card," Tegria ",https://www.linkedin.com/company/tegria?trk=public_jobs_topcard-org-name," United States "," 3 weeks ago "," Over 200 applicants ","Data Engineer As a Data Engineer, your work at Tegria will center on strategic opportunities and implementation, process improvement, and growing Tegria as a company. This role will focus on developing, constructing, testing, and maintaining data structures, data pipelines, and architecture. You will recommend and implement ways to improve the readability, efficiency, and quality of data. A Data Engineer at Tegria will also be focused on delivering data sets for data modeling, data mining, and production reporting. 
An effective Data Engineer will help the organization as a whole achieve success through: · Create and maintain optimal data pipeline architecture while working within time and budget constraints (i.e. SSIS, Azure Data Factory, Apache Spark, Databricks, etc.) · Work with stakeholders to assist with data-related issues and support their data infrastructure needs · Consult, assess and provide recommendations for improvement in client’s current data architecture · Assemble large, complex, and disparate datasets to meet functional and non-functional business needs · Utilize multiple programming languages to develop the best solution for the client (i.e. SQL, R, Python, C++, etc.) · Research and develop processes for utilizing cloud-based services (i.e. Snowflake, Google Cloud Services, Amazon Web Services, and Microsoft Azure) · Create data tools for analytics and data science team members that will aid in the development and optimization of our current and future services into an industry leader in healthcare analytics Client Engagement Delivery · Working independently or as part of a project team on a client engagement. Could be full-time on a single customer engagement or part-time across customers · Serving as a liaison between diverse IT and operations groups · Facilitating meetings and owning meeting scheduling and coordination, preparation, documentation, and follow-up · Utilizing, reviewing, and creating project tools and templates for assigned projects · Creating and maintaining project plans · Evaluating and documenting current-state processes through discovery and analysis. 
Presenting recommendations for improvements based on industry experience and best-practices · Facilitating future-state workflow, policy, and process design and planning · Building, testing, training, converting and/or deploying new infrastructure, workflows, policies, and processes · Participating in major milestone reviews and decision gates · Presenting to a wide variety of audiences · Documenting measurable outcomes resulting from initiatives through KPI analysis and impact tracking · Effectively utilizing communication, decision-making, and escalation pathways · Executing effective project wrap-up through outcomes documentation, lessons-learned, and leave-behind materials allowing customers to sustain ongoing operations · Mentoring Associate(s) on project activities and deliverables and collaborating with others on the same · Mentoring customer counterparts for successful, long-term ownership and growth Internal Team Development · Contributing to personal and team development by participating in training activities and team events, while sharing your experience and expertise to help your team grow · Participating in internal projects for Tegria’s strategic growth · Planning and executing team and company-wide gatherings, such as retreats, inter-team meetings, etc. 
· Modeling and holding fellow team members accountable to company values · Referring new talent to Tegria’s Talent team to continue growing Tegria’s knowledge and capabilities Tegria Business Development · Developing marketing materials and/or blog posts to promote Tegria services and outcomes · Leveraging industry connections and knowledge to identify potential business development opportunities What we’re looking for We expect: · 5+ years of professional experience working as a Data Architect, Data Engineer, ETL Developer, Database Administrator, or similar positions · Experience working with SSIS and other data integration tools and a desire to stay current as the data engineering field advances · A deep conceptual knowledge and demonstrated practical understanding of data modeling techniques and best practices · In-depth understanding of data warehouse design with advanced knowledge in querying languages such as SQL · Experience working with unstructured and semi structured data (JSON, free-text entries, etc.) · Demonstrated ability in project management (waterfall and/or agile), and other organizational management such as risk management, or change management · Capable of and comfortable with working remotely · Capable of and comfortable with traveling to client sites as needed We’d love to see: · Prior consulting experience · Experience with interface engines such as Epic Bridges, HL7 or FHIR · Some experience implementing, supporting, optimizing, and upgrading Epic · Certification in one or more Epic data model and/or application(s) · Formal project management certification – either PMP or CSM · Formal process improvement certification – ex: Lean Six Sigma or ITIL Need a few more details? Status: Exempt Eligibility: Must be legally authorized to work in the US without sponsorship Work Location: This position is remote. Must work in a location within the US. Travel: Up to 25% Benefits Eligibility: Eligible Now, a little about us ... 
At Tegria, we bring bold ideas and breakthroughs to improve care, technology, revenue, and operations in ways that move healthcare organizations from patient-centered to human-centered. We are helping healthcare put people first—both patients and those who dedicate their lives to delivering care. And at the very core of this vital work is our incredibly talented people. People with different backgrounds who welcome challenge and change. People who listen first, ask hard questions, and make decisions to cultivate a culture of equity and inclusion. People who chase after goals, growth, and generosity. We’re real. We’re nimble, and we believe in our mission to humanize healthcare. Perks and benefits Top talent deserves top rewards. We’ve carefully curated a best-in-class benefits package, meant to meet you wherever you are in your life and career. · Your health, holistically. We offer a choice of multiple health and dental plans with nationally recognized networks, as well as vision benefits, a total wellness program, and an employee assistance program for you and your family. · Your financial well-being. We offer competitive wages, retirement savings plans, company-paid disability and life insurance, pre-tax savings opportunities (HSA and/or FSA), and more. · And everything in between. Our lifestyle benefits are unrivaled, including professional development offerings, opportunities for remote work, and our favorite: a generous paid-time-off program, giving you the flexibility to plan a vacation, take time away for illness (or life’s important events), and shift your schedule to accommodate those unexpected curve balls thrown your way. 
Tegria is an equal employment opportunity employer and provides equal employment opportunities (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. All qualified candidates are encouraged to apply."," Associate "," Full-time "," Information Technology and Engineering "," IT Services and IT Consulting and Hospitals and Health Care " Data Engineer,United States,Sr Data Engineer,https://www.linkedin.com/jobs/view/sr-data-engineer-at-volitiion-iit-putting-intelligence-in-it-3526771680?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=OjjYte0cpZ%2B8v9MXJENaNw%3D%3D&position=14&pageNum=4&trk=public_jobs_jserp-result_search-card," Volitiion IIT - Putting Intelligence in IT ",https://www.linkedin.com/company/volitiion-iit-inc?trk=public_jobs_topcard-org-name," United States "," 9 hours ago "," 136 applicants ","Volitiion IIT Inc. is an IT Service and Staffing firm based in Leesburg, VA. We are hiring top talent to add to our growing team of Technology Professionals. We are looking for a Sr Data Engineer to add to our team. Job Overview Due: EOD 3/23/2023 Justification Job Description The MyCity Portal will be a One-Stop portal allowing New Yorkers to find, check eligibility for, and apply for vital services. Today, City services are challenging to find and have cumbersome and confusing processes, and some continue to require long PDF application submissions. Data from applications and services are not managed centrally, resulting in no easy mechanism to share data amongst City agencies. 
For the initial phases, the Senior Data Engineer will assist the OTI Application Engineering team in building robust, secure, and modern data pipelines to ingest, process, and transform MyCity applications data using Informatica Intelligent Cloud Services or a comparable ETL/ELT tool employing modern data movement strategies and methodologies. The data needs to be stored in an Azure Data Lake and eventually brought into a cloud-based data store such as Snowflake or a similar data store. Then, the Analytics reporting solution needs to be built using cloud-based Google Looker, Microsoft Power BI, or another similar tool. Using the Cloud OTI Data Platform, the Senior Data Engineer will build highly available, robust, concrete data pipelines and reporting solutions following industry best practices, adhering to OTI security guidelines, modifying and running automated CI/CD pipelines used for releasing code, and modifying and executing Terraform modules for deploying infrastructure components, working collaboratively under the direction of OTI data engineering management and leads. The resource will be a person with integrity who is dependable and is fully focused on delivering optimal solutions with little to no maintenance and operations overhead. Assignment Name Senior Data Engineer Labor Category Specialist 3 Work Location: 2 MTC Brooklyn/Remote an option with onsite visits for critical meetings as needed Scheduled Work Hours Normal business hours Monday-Friday 35 hours/week (not including mandatory unpaid meal break after 6 hours of work). Assignment Start April 1, 2023 Assignment End Date March 31, 2024 SCOPE OF SERVICES Responsibilities Be experienced in Data Engineering best practices, technologies, tools and processes. Bring sound knowledge of Data Warehouses and LakeHouse concepts and practical implementation experience. Build a framework of repeatable solutions and playbooks enabling efficient and predictable data pipelines. 
Have hands-on development experience in the implementation of an agile, cloud-centric data warehousing and reporting platform with team members of various experience levels. Interact with clients, both technical and non-technical stakeholders. Handle relationships with end users. Interact regularly to gather feedback, listen to their issues and concerns, and recommend solutions. Meet critical deadlines and deliver in short sprints. Ensure successful delivery of new reports and dashboards as needed. Maintain and curate data documentation including Architectural Decision Records (ADR), how-to guides, data lineage and ownership using Azure DevOps or a similar tool. Maintain query performance and tuning to ensure cost optimization. Participate in joint application development sessions with co-engineers and end users and be willing to brainstorm. Complete technical documentation and be willing to transfer knowledge as needed. MANDATORY SKILLS/EXPERIENCE Note: Candidates who do not have the mandatory skills will not be considered 12+ years developing Data Pipelines / Flows using ETL/ELT tools and technologies. 10+ years of strong SQL fluency (query optimization, windowing functions, aggregation, etc.). 5+ years building complex Analytics and Reporting solutions. 3+ years' experience with a cloud data lake/warehouse solution (Snowflake, Redshift, GCP etc.). Hands-on experience working with data integration tools like Informatica Intelligent Cloud Services, Informatica Power Center, SSIS, or a similar tool. Extensive experience developing production grade, large scale data solutions. Experience performing conceptual, logical, and physical data modeling using data modeling tools in complex, large-scale environments. Experience working with Microsoft Azure cloud computing platform and services. Experience managing data orchestration at scale using tools such as Airflow, Prefect and Dagster. Experience with traditional RDBMS platforms (Oracle and SQL Server). 
Experience working with version control systems (e.g., Git) Good understanding of CI/CD principles. Experience developing dashboards and reports in applications such as Oracle Analytics Server (OAS), Microsoft Power BI and Google Looker. Desirable Skills/Experience Experience using Azure services for Security, Blob Storage, Data Lake, Databricks, Data Factory etc. Programming experience with Python or Java Experience with Azure Monitoring services Microsoft Certified Azure Solutions Architect Expert or a SnowPro Certification or a similar one 4 Reasons To Join Volitiion IIT, Inc. Our Commitment to You - We offer competitive pay, multi-year projects, and a list of exciting clients. Work-Life Balance - We work hard; we work smart and have quality time for family and ""life."" Our Mantra - We treat our consultants the way we want to be treated: with integrity, professionalism, and trust. Career Development - We help you meet your career goals and continuously support your efforts to build your skillset. Check out our Referral Program! Volitiion IIT Inc. will pay you up to $1000 for every qualified professional that you refer and we place. If you see a position posted by Volitiion IIT Inc. and know the perfect person for the job, please send us your referral. Volitiion IIT Inc. is an Equal Opportunity/Affirmative Action Employer."," Mid-Senior level "," Contract "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Sr Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-oloop-technology-solutions-3527045989?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=KMVhTM7UdfwfCGNADoM9Xw%3D%3D&position=15&pageNum=4&trk=public_jobs_jserp-result_search-card," Volitiion IIT - Putting Intelligence in IT ",https://www.linkedin.com/company/volitiion-iit-inc?trk=public_jobs_topcard-org-name," United States "," 9 hours ago "," 136 applicants "," Volitiion IIT Inc. is an IT Service and Staffing firm based in Leesburg, VA. 
We are hiring top talent to add to our growing team of Technology Professionals. We are looking for a Sr Data Engineer to add to our team. Job Overview Due: EOD 3/23/2023 Justification Job Description The MyCity Portal will be a One-Stop portal allowing New Yorkers to find, check eligibility for, and apply for vital services. Today, City services are challenging to find and have cumbersome and confusing processes, and some continue to require long PDF application submissions. Data from applications and services are not managed centrally, resulting in no easy mechanism to share data amongst City agencies. For the initial phases, the Senior Data Engineer will assist the OTI Application Engineering team in building robust, secure, and modern data pipelines to ingest, process, and transform MyCity applications data using Informatica Intelligent Cloud Services or a comparable ETL/ELT tool employing modern data movement strategies and methodologies. The data needs to be stored in an Azure Data Lake and eventually brought into a cloud-based data store such as Snowflake or a similar data store. Then, the Analytics reporting solution needs to be built using cloud-based Google Looker, Microsoft Power BI, or another similar tool. Using the Cloud OTI Data Platform, the Senior Data Engineer will build highly available, robust, concrete data pipelines and reporting solutions following industry best practices, adhering to OTI security guidelines, modifying and running automated CI/CD pipelines used for releasing code, and modifying and executing Terraform modules for deploying infrastructure components, working collaboratively under the direction of OTI data engineering management and leads. 
The resource will be a person with integrity who is dependable and is fully focused on delivering optimal solutions with little to no maintenance and operations overhead. Assignment Name Senior Data Engineer Labor Category Specialist 3 Work Location: 2 MTC Brooklyn/Remote an option with onsite visits for critical meetings as needed Scheduled Work Hours Normal business hours Monday-Friday 35 hours/week (not including mandatory unpaid meal break after 6 hours of work). Assignment Start April 1, 2023 Assignment End Date March 31, 2024 SCOPE OF SERVICES Responsibilities Be experienced in Data Engineering best practices, technologies, tools and processes. Bring sound knowledge of Data Warehouses and LakeHouse concepts and practical implementation experience. Build a framework of repeatable solutions and playbooks enabling efficient and predictable data pipelines. Have hands-on development experience in the implementation of an agile, cloud-centric data warehousing and reporting platform with team members of various experience levels. Interact with clients, both technical and non-technical stakeholders. Handle relationships with end users. 
Interact regularly to gather feedback, listen to their issues and concerns, and recommend solutions. Meet critical deadlines and deliver in short sprints. Ensure successful delivery of new reports and dashboards as needed. Maintain and curate data documentation including Architectural Decision Records (ADR), how-to guides, data lineage and ownership using Azure DevOps or a similar tool. Maintain query performance and tuning to ensure cost optimization. Participate in joint application development sessions with co-engineers and end users and be willing to brainstorm. Complete technical documentation and be willing to transfer knowledge as needed. MANDATORY SKILLS/EXPERIENCE Note: Candidates who do not have the mandatory skills will not be considered 12+ years developing Data Pipelines / Flows using ETL/ELT tools and technologies. 10+ years of strong SQL fluency (query optimization, windowing functions, aggregation, etc.). 5+ years building complex Analytics and Reporting solutions. 3+ years' experience with a cloud data lake/warehouse solution (Snowflake, Redshift, GCP etc.). Hands-on experience working with data integration tools like Informatica Intelligent Cloud Services, Informatica Power Center, SSIS, or a similar tool. Extensive experience developing production grade, large scale data solutions. Experience performing conceptual, logical, and physical data modeling using data modeling tools in complex, large-scale environments. Experience working with Microsoft Azure cloud computing platform and services. Experience managing data orchestration at scale using tools such as Airflow, Prefect and Dagster. Experience with traditional RDBMS platforms (Oracle and SQL Server). Experience working with version control systems (e.g., Git) Good understanding of CI/CD principles. 
Experience developing dashboards and reports in applications such as Oracle Analytics Server (OAS), Microsoft Power BI and Google Looker. Desirable Skills/Experience Experience using Azure services for Security, Blob Storage, Data Lake, Databricks, Data Factory etc. Programming experience with Python or Java Experience with Azure Monitoring services Microsoft Certified Azure Solutions Architect Expert or a SnowPro Certification or a similar one 4 Reasons To Join Volitiion IIT, Inc. Our Commitment to You - We offer competitive pay, multi-year projects, and a list of exciting clients. Work-Life Balance - We work hard; we work smart and have quality time for family and ""life."" Our Mantra - We treat our consultants the way we want to be treated: with integrity, professionalism, and trust. Career Development - We help you meet your career goals and continuously support your efforts to build your skillset. Check out our Referral Program! Volitiion IIT Inc. will pay you up to $1000 for every qualified professional that you refer and we place. If you see a position posted by Volitiion IIT Inc. and know the perfect person for the job, please send us your referral. Volitiion IIT Inc. is an Equal Opportunity/Affirmative Action Employer. "," Mid-Senior level "," Contract "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer - Flights (100% Remote),https://www.linkedin.com/jobs/view/data-engineer-flights-100%25-remote-at-hopper-3477021213?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=RKWp6CTFVGIU3qpf0WjyRw%3D%3D&position=22&pageNum=4&trk=public_jobs_jserp-result_search-card," Hopper ",https://ca.linkedin.com/company/hopper?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," Over 200 applicants ","About The Job We are looking for an autonomous individual to join our Flights team as a Data Engineer to design and build data environments that are robust, high-quality, and provide fast access. 
The ideal candidate will leverage their experience as an engineer writing code and proficiency with data-intensive systems to build data pipelines and solve customer problems. The data engineer will own the data infrastructure, tooling, and technical guidance to enable the Flights team to make data-driven decisions. The role will be critical to building and executing on a progressive data strategy for our Flights team. A successful candidate has prior experience building a data pipeline infrastructure from scratch, scaling resilient solutions, and can find stability in abstract work. This person will also help to reinforce a culture of ownership over data integrity across all Flights teams enabling product managers in Flights to make data driven decisions at scale. Responsibilities Enable data-driven insights, accurate reporting by ensuring consistent, accurate, and accessible data for internal analysis. Work backwards from customer needs with Product to set and achieve measurable goals regarding data integrity. Develop and distribute best practices for recording, storing, and processing product data to facilitate rapid development and self-service. Draft and review specifications and plans relating to data generation and consumption related to the team's products. Write, review, and deploy production code related to data storage and processing for the team's products. Promote a culture of ownership of data concerns within the team. 
Strong Candidates Will Have 3+ years of recent experience building and operating high-volume and high-reliability data processing systems A strong sense of ownership over outcomes, and a preference for results over process Hands-on experience working with the Google Cloud Platform suite of tools Experience with data warehousing, data infrastructure, and ETL (Terraform, Airflow, Dataflow, BigQuery or similar tools) Enthusiasm to collaborate across multiple teams and stakeholders on abstract problems and developing solutions ETL batch and workflow orchestration with AWS Data Pipeline/Airflow or similar Experience with designing and building large scale data pipelines in distributed environments with technologies such as Hadoop, Cassandra, Kafka, Spark, HBase, etc. Backend development experience with Scala, Java, Python, unix shell scripting and a zeal for writing well-designed, testable software Experience with Business Intelligence tools such as Data Studio, Amplitude, Tableau, or similar Demonstrated ability to create a vision, architecting scalable long term solutions that apply technology to solve business problems Preferred Qualifications Degree in a relevant technical field (Computer Science, Mathematics or Statistics) Broad understanding of cloud contact center technologies and operations More About Hopper At Hopper, we are on a mission to become the world’s best — and most fun — place to book travel. By leveraging massive amounts of data and advanced machine learning algorithms, Hopper combines its world-class travel agency offering with proprietary fintech products to help customers spend less and travel better. Ranked the third largest online travel agency in North America, the app has been downloaded nearly 80 million times and continues to gain market share globally. Here are just a few stats that demonstrate the company’s recent growth: Hopper sold around $4 billion in travel and travel fintech in 2022, up nearly 3X over 2021. 
In 2022, Hopper increased its revenue 2.5X year-over-year. The company’s bespoke fintech products, such as Flight Disruption Guarantee and Price Freeze, now represent 30-40% of Hopper’s total app revenue. Given the success of its fintech products, Hopper launched a B2B initiative called Hopper Cloud in late 2021. Through this partnership program, any travel provider (airlines, hotels, banks, travel agencies, etc.) can integrate and seamlessly distribute Hopper’s fintech or travel inventory. As its first Hopper Cloud partnership, Hopper partnered with Capital One to co-develop Capital One Travel, a new travel portal designed specifically for cardholders. Recognized as one of the world’s most innovative companies by Fast Company four years in a row, Hopper has been downloaded over 80 million times and continues to have millions of new installs each month. Hopper has raised over $700 million USD of private capital and is backed by some of the largest institutional investors and banks in the world. Hopper is primed to continue its acceleration as the world’s fastest-growing mobile-first travel marketplace. Come take off with us!"," Associate "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer - Snowflake expert,https://www.linkedin.com/jobs/view/data-engineer-snowflake-expert-at-experfy-3516887640?refId=wWftybpyf1t1LXclHsm6uw%3D%3D&trackingId=q9IzZ26SfApdZB%2BJxQ0KIA%3D%3D&position=24&pageNum=2&trk=public_jobs_jserp-result_search-card," Experfy ",https://www.linkedin.com/company/experfy?trk=public_jobs_topcard-org-name," Boston, MA "," 1 month ago "," Be among the first 25 applicants ","Opportunity Description We are looking for a Data Engineer to join our digital data team, within the data architecture, operation and governance team, to build and operationalize the data pipelines necessary for the enterprise data, analytics and insights initiatives, following industry-standard practices and tools. 
The bulk of the work would be in building, managing, and optimizing data pipelines and then moving them effectively into production for key data and analytics consumers like business/data analysts, data scientists or any persona that needs curated data for data and analytics use cases across the enterprise. In addition, the data engineer will guarantee compliance with data governance and data security requirements while creating, improving, and operationalizing these integrated and reusable data pipelines. The data engineer will be the key interface in operationalizing data and analytics on behalf of the business unit(s) and organizational outcomes. Tech Skills Knowledge of AWS. Knowledge of Azure or GCP is a plus Orchestration: Airflow Project management & support: JIRA projects & service desk, Confluence, Teams Expert in ELT and ETL Expert in relational database technologies and concepts: Snowflake is a must-have Perform SQL queries Create database models Maintain and improve query performance Working knowledge of Python and familiarity with other scripting languages Good knowledge of cloud computing Soft Skills Pragmatic and capable of solving complex issues Ability to understand business needs Good communication Push innovative solutions Service-oriented, flexible & team player Self-motivated, takes initiative Attention to detail & technical intuition Experience At least 5 years' experience in a data team as a Data Engineer Experience in the healthcare industry is a strong plus Snowflake certified Preferred Qualifications BS or MS in Computer Science Requirements Responsibilities Must work with the business team to understand requirements and translate them into technical needs Gather and organize large and complex data assets, perform relevant analysis Ensure the quality of the data in coordination with Data Analysts and Data Scientists (peer validation) Propose and implement relevant data models for each business case Optimize data models and workflows Communicate results and findings in a 
structured way. Partner with Product Owner and Data Analysts to prioritize the pipeline implementation plan. Partner with Data Analysts and Data Scientists to design pipelines relevant for business requirements. Leverage existing or create new “standard pipelines” to bring value through business use cases. Ensure best practices in data manipulation are enforced end-to-end. Actively contribute to the Data governance community"," Not Applicable "," Contract "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,BI Developer / Data Engineer,https://www.linkedin.com/jobs/view/bi-developer-data-engineer-at-tech-mahindra-3504241830?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=L62CLVIlXijr79YL2dZVaA%3D%3D&position=1&pageNum=3&trk=public_jobs_jserp-result_search-card," Tech Mahindra ",https://in.linkedin.com/company/tech-mahindra?trk=public_jobs_topcard-org-name," Redmond, WA "," 2 weeks ago "," Over 200 applicants "," Technical Skills: Power BI, DAX, SQL, Kusto, Azure Data Factory, Azure Data Lake. Job Description: Power BI Development & Maintenance; Building and maintaining ADF pipelines; Writing and maintaining Kusto queries/functions; COSMOS SCOPE Scripts; Ad hoc data requests at times; Good Communication Skills "," Mid-Senior level "," Full-time "," Information Technology, Engineering, and Consulting "," IT Services and IT Consulting, Information Services, and Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-wirescreen-3461085680?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=1ELQE%2BGBQCPQBfRFs6KbTg%3D%3D&position=3&pageNum=3&trk=public_jobs_jserp-result_search-card," WireScreen ",https://www.linkedin.com/company/wirescreen-ai?trk=public_jobs_topcard-org-name," New York, NY "," 1 month ago "," 36 applicants ","About Us Wirescreen.ai is a venture-backed startup with a mission to help global businesses make smarter investment decisions, track their supply 
chains, monitor cross-border transactions and respond to evolving global risks. Our leadership team includes a Pulitzer Prize winner and senior engineers who have worked at Google, Twitter and Oracle. We aim to build the world’s leading global supply chain platform, and our AI-driven data platform is already the industry leader in tracking Chinese businesses and entrepreneurs. We are looking for a junior engineer with outstanding programming skills and a good understanding of web technologies for a data engineer role. In this position, you will play an important part in acquiring, extracting, and processing data that powers Wirescreen’s platform and services. What You'll Do At WireScreen: As a startup we all wear multiple hats, so you’ll need to be comfortable with both a hands-on mandate and a broader portfolio of responsibilities. While there may be some ambiguity as we collectively learn and improve, this role offers a unique opportunity to develop a breadth of skills alongside a highly motivated, capable team. Understand the structure of data in websites and APIs. Find innovative ways to get data across multiple sources. Work on large-scale crawling and scraping applications, API data extractions, data integrity, and monitoring systems. Design, implement, and maintain various components in our data pipeline. Who You Are/You Have: Solid programming skills in Python. Familiarity with SQL. Good understanding of engineering and computer science fundamentals. Strong communication skills. Team player with a can-do attitude. Passionate about data engineering and data scraping. Fast Learner/Curious. Excited about working in a fast-paced startup environment. Ideally, You'll Also Have: Experience working on a public cloud platform, preferably AWS. Experience using Apache Spark and scientific Python. 
If you have a desire to be a team player, can collaborate and communicate well with different audiences, and bring a results-driven, focused, high-energy, confident, curious, quirky, and most of all fun sense of self, then we’d love to have you join us! Why WireScreen? Why Now? We are seeking candidates who aspire to excellence; colleagues who are curious, passionate and determined to build something innovative. We are eager to learn new things and collaborate on projects that aspire to change the way people understand the architecture of global business, and its role in shaping our society and the environment. The base pay range for this position, which excludes possible bonus, equity, commissions, enhancements, etc., is $85,000-$125,900+ per year and may vary depending on job-related knowledge, skills and experience "," Not Applicable "," Full-time "," Information Technology "," Information Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-montani-consulting-3520446935?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=ipN42R198qTd8pWNvCxXtw%3D%3D&position=4&pageNum=3&trk=public_jobs_jserp-result_search-card," WireScreen ",https://www.linkedin.com/company/wirescreen-ai?trk=public_jobs_topcard-org-name," New York, NY "," 1 month ago "," 36 applicants "," About Us Wirescreen.ai is a venture-backed startup with a mission to help global businesses make smarter investment decisions, track their supply chains, monitor cross-border transactions and respond to evolving global risks. Our leadership team includes a Pulitzer Prize winner and senior engineers who have worked at Google, Twitter and Oracle. We aim to build the world’s leading global supply chain platform, and our AI-driven data platform is already the industry leader in tracking Chinese businesses and entrepreneurs. We are looking for a junior engineer with outstanding programming skills and a good understanding of web technologies for a data engineer role. 
In this position, you will play an important part in acquiring, extracting, and processing data that powers Wirescreen’s platform and services. What You'll Do At WireScreen: As a startup we all wear multiple hats, so you’ll need to be comfortable with both a hands-on mandate and a broader portfolio of responsibilities. While there may be some ambiguity as we collectively learn and improve, this role offers a unique opportunity to develop a breadth of skills alongside a highly motivated, capable team. Understand the structure of data in websites and APIs. Find innovative ways to get data across multiple sources. Work on large-scale crawling and scraping applications, API data extractions, data integrity, and monitoring systems. Design, implement, and maintain various components in our data pipeline. Who You Are/You Have: Solid programming skills in Python. Familiarity with SQL. Good understanding of engineering and computer science fundamentals. Strong communication skills. Team player with a can-do attitude. Passionate about data engineering and data scraping. Fast Learner/Curious. Excited about working in a fast-paced startup environment. Ideally, You'll Also Have: Experience working on a public cloud platform, preferably AWS. Experience using Apache Spark and scientific Python. If you have a desire to be a team player, can collaborate and communicate well with different audiences, and bring a results-driven, focused, high-energy, confident, curious, quirky, and most of all fun sense of self, then we’d love to have you join us! Why WireScreen? Why Now? We are seeking candidates who aspire to excellence; colleagues who are curious, passionate and determined to build something innovative. 
We are eager to learn new things and collaborate on projects that aspire to change the way people understand the architecture of global business, and its role in shaping our society and the environment. The base pay range for this position, which excludes possible bonus, equity, commissions, enhancements, etc., is $85,000-$125,900+ per year and may vary depending on job-related knowledge, skills and experience "," Not Applicable "," Full-time "," Information Technology "," Information Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-stripe-3511633989?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=dQFhWQr6Ezr4o0kQ0FAb1g%3D%3D&position=5&pageNum=3&trk=public_jobs_jserp-result_search-card," WireScreen ",https://www.linkedin.com/company/wirescreen-ai?trk=public_jobs_topcard-org-name," New York, NY "," 1 month ago "," 36 applicants "," About Us Wirescreen.ai is a venture-backed startup with a mission to help global businesses make smarter investment decisions, track their supply chains, monitor cross-border transactions and respond to evolving global risks. Our leadership team includes a Pulitzer Prize winner and senior engineers who have worked at Google, Twitter and Oracle. We aim to build the world’s leading global supply chain platform, and our AI-driven data platform is already the industry leader in tracking Chinese businesses and entrepreneurs. We are looking for a junior engineer with outstanding programming skills and a good understanding of web technologies for a data engineer role. In this position, you will play an important part in acquiring, extracting, and processing data that powers Wirescreen’s platform and services. What You'll Do At WireScreen: As a startup we all wear multiple hats, so you’ll need to be comfortable with both a hands-on mandate and a broader portfolio of responsibilities. 
While there may be some ambiguity as we collectively learn and improve, this role offers a unique opportunity to develop a breadth of skills alongside a highly motivated, capable team. Understand the structure of data in websites and APIs. Find innovative ways to get data across multiple sources. Work on large-scale crawling and scraping applications, API data extractions, data integrity, and monitoring systems. Design, implement, and maintain various components in our data pipeline. Who You Are/You Have: Solid programming skills in Python. Familiarity with SQL. Good understanding of engineering and computer science fundamentals. Strong communication skills. Team player with a can-do attitude. Passionate about data engineering and data scraping. Fast Learner/Curious. Excited about working in a fast-paced startup environment. Ideally, You'll Also Have: Experience working on a public cloud platform, preferably AWS. Experience using Apache Spark and scientific Python. If you have a desire to be a team player, can collaborate and communicate well with different audiences, and bring a results-driven, focused, high-energy, confident, curious, quirky, and most of all fun sense of self, then we’d love to have you join us! Why WireScreen? Why Now? We are seeking candidates who aspire to excellence; colleagues who are curious, passionate and determined to build something innovative. 
We are eager to learn new things and collaborate on projects that aspire to change the way people understand the architecture of global business, and its role in shaping our society and the environment. The base pay range for this position, which excludes possible bonus, equity, commissions, enhancements, etc., is $85,000-$125,900+ per year and may vary depending on job-related knowledge, skills and experience "," Not Applicable "," Full-time "," Information Technology "," Information Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-database-engineering-at-experfy-3516891520?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=LSW5kUWG%2BaDjebJqMfXJdw%3D%3D&position=6&pageNum=3&trk=public_jobs_jserp-result_search-card," WireScreen ",https://www.linkedin.com/company/wirescreen-ai?trk=public_jobs_topcard-org-name," New York, NY "," 1 month ago "," 36 applicants "," About Us Wirescreen.ai is a venture-backed startup with a mission to help global businesses make smarter investment decisions, track their supply chains, monitor cross-border transactions and respond to evolving global risks. Our leadership team includes a Pulitzer Prize winner and senior engineers who have worked at Google, Twitter and Oracle. We aim to build the world’s leading global supply chain platform, and our AI-driven data platform is already the industry leader in tracking Chinese businesses and entrepreneurs. We are looking for a junior engineer with outstanding programming skills and a good understanding of web technologies for a data engineer role. In this position, you will play an important part in acquiring, extracting, and processing data that powers Wirescreen’s platform and services. What You'll Do At WireScreen: As a startup we all wear multiple hats, so you’ll need to be comfortable with both a hands-on mandate and a broader portfolio of responsibilities. 
While there may be some ambiguity as we collectively learn and improve, this role offers a unique opportunity to develop a breadth of skills alongside a highly motivated, capable team. Understand the structure of data in websites and APIs. Find innovative ways to get data across multiple sources. Work on large-scale crawling and scraping applications, API data extractions, data integrity, and monitoring systems. Design, implement, and maintain various components in our data pipeline. Who You Are/You Have: Solid programming skills in Python. Familiarity with SQL. Good understanding of engineering and computer science fundamentals. Strong communication skills. Team player with a can-do attitude. Passionate about data engineering and data scraping. Fast Learner/Curious. Excited about working in a fast-paced startup environment. Ideally, You'll Also Have: Experience working on a public cloud platform, preferably AWS. Experience using Apache Spark and scientific Python. If you have a desire to be a team player, can collaborate and communicate well with different audiences, and bring a results-driven, focused, high-energy, confident, curious, quirky, and most of all fun sense of self, then we’d love to have you join us! Why WireScreen? Why Now? We are seeking candidates who aspire to excellence; colleagues who are curious, passionate and determined to build something innovative. 
We are eager to learn new things and collaborate on projects that aspire to change the way people understand the architecture of global business, and its role in shaping our society and the environment. The base pay range for this position, which excludes possible bonus, equity, commissions, enhancements, etc., is $85,000-$125,900+ per year and may vary depending on job-related knowledge, skills and experience "," Not Applicable "," Full-time "," Information Technology "," Information Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-database-engineering-at-experfy-3516885995?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=%2FZt1cHNw6W3OPVHkXyq%2FsQ%3D%3D&position=7&pageNum=3&trk=public_jobs_jserp-result_search-card," WireScreen ",https://www.linkedin.com/company/wirescreen-ai?trk=public_jobs_topcard-org-name," New York, NY "," 1 month ago "," 36 applicants "," About Us Wirescreen.ai is a venture-backed startup with a mission to help global businesses make smarter investment decisions, track their supply chains, monitor cross-border transactions and respond to evolving global risks. Our leadership team includes a Pulitzer Prize winner and senior engineers who have worked at Google, Twitter and Oracle. We aim to build the world’s leading global supply chain platform, and our AI-driven data platform is already the industry leader in tracking Chinese businesses and entrepreneurs. We are looking for a junior engineer with outstanding programming skills and a good understanding of web technologies for a data engineer role. In this position, you will play an important part in acquiring, extracting, and processing data that powers Wirescreen’s platform and services. What You'll Do At WireScreen: As a startup we all wear multiple hats, so you’ll need to be comfortable with both a hands-on mandate and a broader portfolio of responsibilities. 
While there may be some ambiguity as we collectively learn and improve, this role offers a unique opportunity to develop a breadth of skills alongside a highly motivated, capable team. Understand the structure of data in websites and APIs. Find innovative ways to get data across multiple sources. Work on large-scale crawling and scraping applications, API data extractions, data integrity, and monitoring systems. Design, implement, and maintain various components in our data pipeline. Who You Are/You Have: Solid programming skills in Python. Familiarity with SQL. Good understanding of engineering and computer science fundamentals. Strong communication skills. Team player with a can-do attitude. Passionate about data engineering and data scraping. Fast Learner/Curious. Excited about working in a fast-paced startup environment. Ideally, You'll Also Have: Experience working on a public cloud platform, preferably AWS. Experience using Apache Spark and scientific Python. If you have a desire to be a team player, can collaborate and communicate well with different audiences, and bring a results-driven, focused, high-energy, confident, curious, quirky, and most of all fun sense of self, then we’d love to have you join us! Why WireScreen? Why Now? We are seeking candidates who aspire to excellence; colleagues who are curious, passionate and determined to build something innovative. 
We are eager to learn new things and collaborate on projects that aspire to change the way people understand the architecture of global business, and its role in shaping our society and the environment. The base pay range for this position, which excludes possible bonus, equity, commissions, enhancements, etc., is $85,000-$125,900+ per year and may vary depending on job-related knowledge, skills and experience "," Not Applicable "," Full-time "," Information Technology "," Information Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-los-angeles-rams-3511017511?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=IDYMEQMbaeRrEXhvwvfUTw%3D%3D&position=12&pageNum=3&trk=public_jobs_jserp-result_search-card," Los Angeles Rams ",https://www.linkedin.com/company/los-angeles-rams?trk=public_jobs_topcard-org-name," Agoura Hills, CA "," 1 week ago "," 95 applicants ","Every year, the Los Angeles Rams bring together millions of fans across the country and around the world, as we play for the chance to be named the best team in the NFL. But the Rams are about much more than just football. As an organization, we strive to make Los Angeles better by connecting people through sport, both on and off the field. We are about the legacy of fandom that brings together generations of families year after year, we are about the excitement of sport and live events - an experience that cannot be replicated, we are about our community and making a meaningful impact in the world around us, and we are about ensuring we embody and represent the diversity and uniqueness of our city, Los Angeles. Following the opening of SoFi Stadium, and more recently, winning Super Bowl LVI, the Los Angeles Rams are at an exciting stage of growth and are looking for passionate individuals who have the desire to be a part of something bigger.   
Job Responsibilities: Working with Executive, Product Data and Design teams to support their data infrastructure needs while assisting with data-related technical issues Working with data, design, product, and executive teams, assisting them with data-related technical issues Configuring servers or databases in development & production environments Assembling large, complex sets of data that meet non-functional and functional business requirements Identifying, designing, and implementing internal process improvements including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes  Building required infrastructure for optimal extraction, transformation and loading of data from various data sources using Azure, AWS, GCP and SQL technologies Building analytical tools to utilize the data pipeline, providing actionable insight into key business performance metrics including operational efficiency and customer acquisition Reviewing the average duration of backups; significant changes would need investigation Query optimization for speed and resource usage Extraction of data from different sources Monitoring & maintenance of systems Identifying problems, creating tickets and working to resolution Data modeling, translation of logical designs to physical storage structures   Required skills: Bachelor’s degree in computer science; 3-5 years of related experience required Ability to create cutting-edge data structures, stored procedures, and outputs in an ever-changing data environment Ability to build, develop, and maintain Azure data lake & Azure SQL, GCP & AWS environments Build data automation Strong MS SQL skills Scripting languages: Python, PowerShell, Perl, Bash Database integrity and auditing practices Comply with cybersecurity guidelines Maintain HIPAA compliance in data environments   Familiarity with: API ingestion and creation SQL replication and data distribution across data hubs / instances / servers ETL (Extract 
Transform, Load) best practices and comfortable operating in “Data Factory” Cloud data infrastructure experience AWS, GCP, Azure understanding   Salary Range: $90,000/yr. - $110,000/yr.   The Los Angeles Rams are proud to be an Equal Opportunity Employer.  We strive to create a sense of belonging for all employees by fostering a culture of respect and inclusion, empowering everyone to be their true selves. "," Mid-Senior level "," Full-time "," Information Technology "," Spectator Sports " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-software-technology-inc-3515944113?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=b7FtCsS5E0z0Pg5WMhdu8w%3D%3D&position=17&pageNum=3&trk=public_jobs_jserp-result_search-card," Software Technology Inc. ",https://www.linkedin.com/company/software-technology-inc?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," 172 applicants ","Hi, Hope you are doing well. We at Software Technology Inc. are hiring for a Data Engineer role; if you are interested and a good fit, feel free to reach me directly at ksuresh@stiorg.com or (609) 998-3431. Data Engineer Remote/Hybrid in Raleigh, NC / Westlake, TX Multi-year project starting at 12 months minimum Top 3 requirements: Hands-on experience writing SQL queries and debugging stored procedures in an Oracle environment (Oracle/PL/SQL) Hands-on Informatica experience (Control M, Unix, etc.) Some experience in AWS Skills The Expertise You Have and the Skills You Bring 5+ years of development experience in Database Development Writing SQL queries and debugging stored procedures within an Oracle environment. Strong hands-on working knowledge in Scripting Experience and/or certification with Amazon Web Services, Google Cloud Platform, or Microsoft Azure is a plus. 
Knowledge of Informatica and/or ETL tools Assist in identification, isolation, resolution, and communication of problems within the production and nonproduction environments and perform troubleshooting. Proficient in scripting with the ability to develop automation tools. Define, maintain, and support our enterprise products Solve and prioritize requirements in production and nonproduction environments Standout colleague, self-starter, collaborative, innovative and eager to learn every day. Excellent communication and documentation skills. Enjoy experimenting with development solutions Ability to multi-task within various initiatives if needed The Value You Deliver Accountable for consistent delivery of functional software sprint to sprint, release to release Participates in application-level architecture Develops original and creative technical solutions to ongoing development efforts Responsible for QA readiness of the software work you're doing (end-to-end tests, unit tests, automation) Responsible for supporting implementation of moderate-scope projects or major initiatives Works on complex assignments and often multiple phases of a project COVID Work Policy Safety is our top priority. Once we can be together in person with fewer safety measures, this role will follow our dynamic working approach. You'll be spending some of your time onsite depending on the nature and needs of your role. Dynamic Working Post Pandemic Our aim is to combine the best of working offsite with coming together in person. For most teams this means a consistent balance of working from home and office that supports the needs of your role, experience level, and working style. 
Your success and growth are important to us, so you'll want to enjoy the benefits of coming together in person: face-to-face learning and training, quality time with your manager and teammates, building your career network, making friends, and taking full advantage of the cultural and social experiences Fidelity provides for you. Regards, Suresh Reddy.k Technical Recruiter Software Technology Inc. Email: ksuresh@stiorg.com 609-998-3431"," Entry level "," Full-time "," Information Technology "," Information Technology & Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-database-engineering-at-experfy-3516891524?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=daf1Ls5dIXiVWW%2FVOUWhLw%3D%3D&position=18&pageNum=3&trk=public_jobs_jserp-result_search-card," Software Technology Inc. ",https://www.linkedin.com/company/software-technology-inc?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," 172 applicants "," Hi, Hope you are doing well. We at Software Technology Inc. are hiring for a Data Engineer role; if you are interested and a good fit, feel free to reach me directly at ksuresh@stiorg.com or (609) 998-3431. Data Engineer Remote/Hybrid in Raleigh, NC / Westlake, TX Multi-year project starting at 12 months minimum Top 3 requirements: Hands-on experience writing SQL queries and debugging stored procedures in an Oracle environment (Oracle/PL/SQL) Hands-on Informatica experience (Control M, Unix, etc.) 
Some experience in AWS Skills The Expertise You Have and the Skills You Bring 5+ years of development experience in Database Development. Writing SQL queries and debugging stored procedures within an Oracle environment. Strong hands-on working knowledge in Scripting. Experience and/or certification with Amazon Web Services, Google Cloud Platform, or Microsoft Azure is a plus. Knowledge of Informatica and/or ETL tools. Assist in identification, isolation, resolution, and communication of problems within the production and nonproduction environment and perform troubleshooting. Professional in scripting with the ability to develop automation tools. Define, maintain, and support our enterprise products. Solving and prioritizing requirements in production and nonproduction environment. Standout colleague, self-starter, collaborative, innovative and eager to learn every day. Excellent communication and documentation skills. Enjoy experimental development solutions. Ability to multi-task within various initiatives if needed. The Value You Deliver Accountable for consistent delivery of functional software sprint to sprint, release to release. Participates in application-level architecture. Develops original and creative technical solutions to ongoing development efforts. Responsible for QA readiness of software work you're doing (end-to-end tests, unit tests, automation). Responsible for supporting implementation of moderate-scope projects or major initiatives. Works on complex assignments and often multiple phases of a project. COVID Work Policy Safety is our top priority. Once we can be together in person with fewer safety measures, this role will follow our dynamic working approach. You'll be spending some of your time onsite depending on the nature and needs of your role. Dynamic Working Post Pandemic Our aim is to combine the best of working offsite with coming together in person. 
For most teams this means a consistent balance of working from home and office that supports the needs of your role, experience level, and working style. Your success and growth are important to us, so you'll want to enjoy the benefits of coming together in person: face-to-face learning and training, quality time with your manager and teammates, building your career network, making friends, and taking full advantage of the cultural and social experiences Fidelity provides for you. Regards, Suresh Reddy.k, Technical Recruiter, Software Technology Inc. Email: ksuresh@stiorg.com, 609-998-3431 "," Entry level "," Full-time "," Information Technology "," Information Technology & Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-spacex-3509157715?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=bvQLOe7DdttSU537C75YEQ%3D%3D&position=4&pageNum=4&trk=public_jobs_jserp-result_search-card," SpaceX ",https://www.linkedin.com/company/spacex?trk=public_jobs_topcard-org-name," Redmond, WA "," 1 month ago "," Be among the first 25 applicants ","SpaceX was founded under the belief that a future where humanity is out exploring the stars is fundamentally more exciting than one where we are not. Today SpaceX is actively developing the technologies to make this possible, with the ultimate goal of enabling human life on Mars. DATA ENGINEER At SpaceX we're leveraging our experience in building rockets and spacecraft to deploy Starlink, the world's most advanced broadband internet system. Starlink is the world's largest satellite constellation and is providing fast, reliable internet to 1M+ users worldwide. We design, build, test, and operate all parts of the system – thousands of satellites, consumer receivers that allow users to connect within minutes of unboxing, and the software that brings it all together. 
We've only begun to scratch the surface of Starlink's potential global impact and are looking for best-in-class engineers to help maximize Starlink's utility for communities and businesses around the globe. As a Data Engineer, you will be responsible for developing the strategy, key metrics, tools, software services and processes for assessing how well key aspects of Starlink are scaling and the effectiveness of the Starlink Network in serving millions of users around the globe. You will work with operators, subsystem responsible engineers, software engineers, and network engineers inside the Starlink organization as well as key contacts with various major external partners to help ensure the growth of this program. RESPONSIBILITIES: Build and maintain mission-critical infrastructure, tools, processes, and custom software to objectively assess growth areas for the Starlink program Automate the aggregation of metrics and detection of widespread application issues across Starlink Establish and maintain relationship with key third party application/content owners Lead technical investigations about chronic application-level issues Build ground-based software systems that ingest, transform, and store data Apply data analytics, models, and techniques to data products created by space vehicles  Create catalogs of data and tools that can be used by you and other teams to perform analytics  Fuse data from multiple sources to create usable information BASIC QUALIFICATIONS: Bachelor's degree in computer science, data science, physics, mathematics, or a STEM discipline; OR 2+ years of professional experience in data engineering in lieu of a degree Development experience in an object-oriented programming language (i.e. 
C, C++, Python) PREFERRED SKILLS AND EXPERIENCE: Professional experience in analytics, data science, or machine learning Experience using Spark, Presto, Flink, or Snowflake Experience building solutions with Parquet, or similar storage formats Knowledge of Kubernetes Experience building solutions with in-stream data processing of structured and semi-structured data   Experience building predictive models and machine learning pipelines (clustering analysis, prediction, anomaly detection)   Experience in custom ETL design, implementation and maintenance  Experience handling large (TB+) datasets  Experience with developing and deploying tools used for data analysis Ability to work effectively in a dynamic environment that includes working with changing needs and requirements Ability to take on projects that require taking initiative and developing new expertise ADDITIONAL REQUIREMENTS: Must be willing to work extended hours and weekends as needed COMPENSATION AND BENEFITS: Pay range: Data Engineer/Level I: $120,000.00 - $145,000.00/per year Data Engineer/Level II: $140,000.00 - $170,000.00/per year Your actual level and base salary will be determined on a case-by-case basis and may vary based on the following considerations: job-related knowledge and skills, education, and experience. Base salary is just one part of your total rewards package at SpaceX. You may also be eligible for long-term incentives, in the form of company stock, stock options, or long-term cash awards, as well as potential discretionary bonuses and the ability to purchase additional stock at a discount through an Employee Stock Purchase Plan. You will also receive access to comprehensive medical, vision, and dental coverage, access to a 401(k)-retirement plan, short & long-term disability insurance, life insurance, paid parental leave, and various other discounts and perks. You may also accrue 3 weeks of paid vacation & will be eligible for 10 or more paid holidays per year. 
Exempt employees are eligible for 5 days of sick leave per year. ITAR REQUIREMENTS: To conform to U.S. Government space technology export regulations, including the International Traffic in Arms Regulations (ITAR) you must be a U.S. citizen, lawful permanent resident of the U.S., protected individual as defined by 8 U.S.C. 1324b(a)(3), or eligible to obtain the required authorizations from the U.S. Department of State. Learn more about the ITAR here. SpaceX is an Equal Opportunity Employer; employment with SpaceX is governed on the basis of merit, competence and qualifications and will not be influenced in any manner by race, color, religion, gender, national origin/ethnicity, veteran status, disability status, age, sexual orientation, gender identity, marital status, mental or physical disability or any other legally protected status. Applicants wishing to view a copy of SpaceX's Affirmative Action Plan for veterans and individuals with disabilities, or applicants requiring reasonable accommodation to the application/interview process should notify the Human Resources Department at (310) 363-6000."," Entry level "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-proactive-md-3495656844?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=%2FNRN%2Bgy0os14GtCO12TYxA%3D%3D&position=7&pageNum=4&trk=public_jobs_jserp-result_search-card," Proactive MD ",https://www.linkedin.com/company/proactive-md?trk=public_jobs_topcard-org-name," Simpsonville, SC "," 3 weeks ago "," 55 applicants ","People are a company's greatest resource, which is why caring for employees and keeping them healthy is so important. Proactive MD offers a comprehensive health management solution that extends well beyond the clinic walls. Access to on-site physicians, full direct primary care services, and excellent client support are the hallmarks of our program. 
By engaging a workforce and offering them a personal relationship with a primary care physician, we can deliver measurably better outcomes, making people happier, healthier, and more productive while significantly lowering overall medical costs for employers. We put employees' health first because amazing care yields amazing results. We are the next generation of workplace health centers. Remote work available within the contiguous United States of America. JOB SUMMARY The Data Engineer is responsible for coordinating data across multiple software systems to improve patient care and client outcomes and for creating and maintaining the data pipelines and data architectures powering our Proactive IQ platform. ESSENTIAL DUTIES AND RESPONSIBILITIES • Manage and maintain data integrity and coordination across multiple core software systems and databases. • Identify opportunities for process automation and initiate designs for system integrations. • Manage bulk data import projects of historical medical records. • Build and maintain data pipelines for the extraction, transformation, and loading of clinical data, medical and pharmaceutical claims, and other key data feeds. • Deliver and present data management solutions to internal and external customers as required. • Perform other duties as assigned by EVP, Solutions Engineering, and/or executive leadership. • Act as a champion for our ""patient promise"" and mission, vision, and values and partner across the company to drive a high-performance work environment. REQUIRED KNOWLEDGE, SKILLS, & ABILITIES • Bachelor’s degree or higher from an institution recognized by the Council for Higher Education Accreditation, with relevant coursework in computer science and mathematics. • Experience using SQL for data management and querying. Experience building or maintaining data pipelines with Microsoft SQL Server / Azure preferred. • Experience in handling PHI/PII data transport. • Excellent verbal and written communication skills. 
• Excellent interpersonal, negotiation, and conflict resolution skills. • Excellent time management skills with the proven ability to meet deadlines. • Strong analytical and problem-solving skills. • Expert proficiency with Microsoft applications, including intermediate-to-advanced Microsoft Excel and Microsoft Access skills. POSITION TYPE & EXPECTED HOURS OF WORK This role will be expected to work a minimum of 40 hours/week as directed. Typical workdays are Monday through Friday, 8:00 am to 5:00 pm. This role is considered an exempt position. Evening and weekend work are infrequent but may occasionally be required as business needs dictate. This is a non-management role with professional growth opportunities within Proactive MD. TRAVEL Infrequent, domestic travel may be required and should be expected to be less than 5% of the position’s overall responsibilities. Proactive MD is firmly committed to creating a diverse workplace and is proud to provide equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, sex, gender identity and/or expression, sexual orientation, ethnicity, national origin, age, disability, genetics, marital status, amnesty status, or veteran status applicable to state and federal laws. 
"," Entry level "," Full-time "," Information Technology "," Hospitals and Health Care " Data Engineer,United States,Data Engineer: Data Pipeline Team (Remote),https://www.linkedin.com/jobs/view/data-engineer-data-pipeline-team-remote-at-constructor-3507406792?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=cgp6rzDKGDwmEGkZHQ9FYg%3D%3D&position=8&pageNum=4&trk=public_jobs_jserp-result_search-card," Constructor ",https://www.linkedin.com/company/constructor-io?trk=public_jobs_topcard-org-name," San Francisco, CA "," 1 month ago "," 74 applicants ","About Us Constructor.io powers product search and discovery for the largest retailers in the world, like Sephora and Backcountry, serving billions of requests every year: you have most likely used our product without knowing it. Each year we are growing in revenue and scale by several multiples, helping customers in every eCommerce vertical worldwide. We love working together to help each other succeed and are committed to maintaining an open, cooperative culture as we grow. We get to the correct answer with empathy, ownership, and passion for making an impact. The Data Pipeline team within Data Science and Engineering is an integral unit that serves internal stakeholders. It develops an easy-to-use platform for engineers to create, schedule, and run their data workloads. We mainly focus on the developer experience of ML and DS folks, the system's performance and robustness, and cost-effectiveness. Data Science and Engineering consist of a mix of data engineers & analysts owning & collaborating on multiple projects. As a Data Pipeline team member, you will use world-class analytical, engineering, and data processing techniques to build the foundational infrastructure, tooling, and analytical capabilities and enable the business to move forward. Challenges you will tackle Build a platform to enable data teams to iterate fast and write & execute their data workloads reliably and robustly. 
Ensure high data quality and integrity across multiple data sources, and automate data validation. Make the infrastructure costs observable, transparent, and effective. Requirements You are skilled at building and maintaining data ingestion pipelines. You excel at Apache Spark. You excel at Python, Scala, or Java. You have proficiency with any variant of SQL. You have deep experience with the big data stack (data query engines, metadata stores, schedulers, data queues, etc.) and are capable of designing and engineering a data platform and its processes. You have a basic knowledge of cloud services, like AWS EC2, S3, Glue, IAM, Athena, Lambda, ECS. You are knowledgeable when it comes to different types of data storage, their use cases, and trade-offs. Benefits Unlimited vacation time - we strongly encourage all of our employees to take at least 3 weeks per year. A competitive compensation package including stock options. Company-sponsored US health coverage (100% paid for employee). Fully remote team - choose where you live. Work-from-home stipend! We want you to have the resources you need to set up your home office. Apple laptops provided for new employees. Training and development budget for every employee, refreshed each year. Parental leave for qualified employees. Work with smart people who will help you grow and make a meaningful impact. Diversity, Equity, and Inclusion at Constructor At Constructor.io we are committed to cultivating a work environment that is diverse, equitable, and inclusive. 
As an equal opportunity employer, we welcome individuals of all backgrounds and provide equal opportunities to all applicants regardless of their education, diversity of opinion, race, color, religion, gender, gender expression, sexual orientation, national origin, genetics, disability, age, veteran status or affiliation in any other protected group."," Not Applicable "," Full-time "," Engineering "," Technology, Information and Internet " Data Engineer,United States,"Data Engineer, Database Engineering",https://www.linkedin.com/jobs/view/data-engineer-database-engineering-at-experfy-3516890589?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=sjeLqxnOO7vyXozC%2BEMyPA%3D%3D&position=11&pageNum=4&trk=public_jobs_jserp-result_search-card," Experfy ",https://www.linkedin.com/company/experfy?trk=public_jobs_topcard-org-name," San Diego, CA "," 1 month ago "," Be among the first 25 applicants ","As a Data Engineer for our Data Platform Engineering team you will join skilled Scala/ Spark engineers and core database developers responsible for developing hosted cloud analytics infrastructure (Apache Spark-based), distributed SQL processing frameworks, proprietary data science platforms, and core database optimization. This team is responsible for building the automated, intelligent, and highly performant query planner and execution engines, RPC calls between data warehouse clusters, shared secondary cold storage, etc. This includes building new SQL features and customer-facing functionality, developing novel query optimization techniques for industry-leading performance, and building a database system that's highly parallel, efficient and fault-tolerant. 
This is a vital role reporting to exec leadership and senior engineering leadership Requirements Responsibilities: Writing Scala code with tools like Apache Spark + Apache Arrow + Apache Kafka to build a hosted, multi-cluster data warehouse for Web3 Developing database optimizers, query planners, query and data routing mechanisms, cluster-to-cluster communication, and workload management techniques. Scaling up from proof of concept to “cluster scale” (and eventually hundreds of clusters with hundreds of terabytes each), in terms of both infrastructure/architecture and problem structure Codifying best practices for future reuse in the form of accessible, reusable patterns, templates, and code bases to facilitate meta data capturing and management Managing a team of software engineers writing new code to build a bigger, better, faster, more optimized HTAP database (using Apache Spark, Apache Arrow, Kafka, and a wealth of other open source data tools) Interacting with exec team and senior engineering leadership to define, prioritize, and ensure smooth deployments with other operational components Highly engaged with industry trends within analytics domain from a data acquisition processing, engineering, management perspective Understand data and analytics use cases across Web3 / blockchains Skills & Qualifications Bachelor’s degree in computer science or related technical field. Masters or PhD a plus. 
6+ years experience engineering software and data platforms / enterprise-scale data warehouses, preferably with knowledge of open source Apache stack (especially Apache Spark, Apache Arrow, Kafka, and others). 3+ years experience with Scala and Apache Spark (or Kafka). A track record of recruiting and leading technical teams in a demanding talent market. Rock solid engineering fundamentals; query planning, optimizing and distributed data warehouse systems experience is preferred but not required. Nice to have: Knowledge of blockchain indexing, web3 compute paradigms, Proofs and consensus mechanisms... is a strong plus but not required. Experience with rapid development cycles in a web-based environment. Strong scripting and test automation knowledge. Nice to have: Passionate about Web3, blockchain, decentralization, and a base understanding of how data/analytics plays into this. Apply for this job"," Not Applicable "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,"Data Engineer, Database Engineering",https://www.linkedin.com/jobs/view/data-engineer-at-avalara-3480659857?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=xPn2sWsNGXk86xVdR8aNzQ%3D%3D&position=12&pageNum=4&trk=public_jobs_jserp-result_search-card," Experfy ",https://www.linkedin.com/company/experfy?trk=public_jobs_topcard-org-name," San Diego, CA "," 1 month ago "," Be among the first 25 applicants "," As a Data Engineer for our Data Platform Engineering team you will join skilled Scala/Spark engineers and core database developers responsible for developing hosted cloud analytics infrastructure (Apache Spark-based), distributed SQL processing frameworks, proprietary data science platforms, and core database optimization. This team is responsible for building the automated, intelligent, and highly performant query planner and execution engines, RPC calls between data warehouse clusters, shared secondary cold storage, etc. 
This includes building new SQL features and customer-facing functionality, developing novel query optimization techniques for industry-leading performance, and building a database system that's highly parallel, efficient and fault-tolerant. This is a vital role reporting to exec leadership and senior engineering leadership. Requirements Responsibilities: Writing Scala code with tools like Apache Spark + Apache Arrow + Apache Kafka to build a hosted, multi-cluster data warehouse for Web3. Developing database optimizers, query planners, query and data routing mechanisms, cluster-to-cluster communication, and workload management techniques. Scaling up from proof of concept to “cluster scale” (and eventually hundreds of clusters with hundreds of terabytes each), in terms of both infrastructure/architecture and problem structure. Codifying best practices for future reuse in the form of accessible, reusable patterns, templates, and code bases to facilitate metadata capturing and management. Managing a team of software engineers writing new code to build a bigger, better, faster, more optimized HTAP database (using Apache Spark, Apache Arrow, Kafka, and a wealth of other open source data tools). Interacting with the exec team and senior engineering leadership to define, prioritize, and ensure smooth deployments with other operational components. Highly engaged with industry trends within the analytics domain from a data acquisition, processing, engineering, and management perspective. Understand data and analytics use cases across Web3 / blockchains. Skills & Qualifications: Bachelor’s degree in computer science or related technical field. 
Masters or PhD a plus. 6+ years experience engineering software and data platforms / enterprise-scale data warehouses, preferably with knowledge of open source Apache stack (especially Apache Spark, Apache Arrow, Kafka, and others). 3+ years experience with Scala and Apache Spark (or Kafka). A track record of recruiting and leading technical teams in a demanding talent market. Rock solid engineering fundamentals; query planning, optimizing and distributed data warehouse systems experience is preferred but not required. Nice to have: Knowledge of blockchain indexing, web3 compute paradigms, Proofs and consensus mechanisms... is a strong plus but not required. Experience with rapid development cycles in a web-based environment. Strong scripting and test automation knowledge. Nice to have: Passionate about Web3, blockchain, decentralization, and a base understanding of how data/analytics plays into this. Apply for this job "," Not Applicable "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-oloop-technology-solutions-3527045989?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=KMVhTM7UdfwfCGNADoM9Xw%3D%3D&position=15&pageNum=4&trk=public_jobs_jserp-result_search-card," Oloop Technology Solutions ",https://www.linkedin.com/company/oloop?trk=public_jobs_topcard-org-name," Pittsburgh, PA "," 7 hours ago "," Be among the first 25 applicants "," Location: Pittsburgh, PA (Hybrid). Job Duration: 12 Months CTH. Skills And Qualifications: Good knowledge of Relational Model, Dimensional Data Modeling, and Data Warehouse. Advanced working SQL knowledge and performance tuning. Experience working with relational databases, for example, Oracle and Vertica; familiar with database objects like Indexes, Views, Synonyms. Hands-on experience in database programming and design using PL/SQL, Stored Procedures, Functions, Triggers, and Views, using Oracle 12c or better. Knowledge of ETL, 
experience in related programming (Java, Python, etc.). Unix/Linux and scripting (Bash, Python, etc.). Experience using version control and DevOps tools like Git, Jenkins, etc. Comfortable working with large, complex datasets; performs data analysis required to troubleshoot data-related issues. Experience in Reference/Master Data is a plus. Knowledge of data security is a plus. A logical thinker with demonstrably strong analytical skills. Good written and verbal communication skills; contributes to engineering wiki and documents work. Establishes good working relationships with peers and teams. 5+ years of experience in architecture in a data engineer role. Master's or Bachelor's degree in Computer Science "," Entry level "," Full-time "," Information Technology "," Hospitals and Health Care " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-infosys-bpm-3504415419?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=2yfH84qvNn%2BLvNMIacCrmQ%3D%3D&position=18&pageNum=4&trk=public_jobs_jserp-result_search-card," Infosys BPM ",https://in.linkedin.com/company/infosys-bpm?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," Over 200 applicants ","Job Description: Infosys is seeking a GCP Data Engineer with experience working in a Big Data ecosystem. The position will primarily be responsible for interfacing with key stakeholders and applying your technical proficiency across different stages of the Software Development Life Cycle, including Requirements Elicitation and Design. You will play an important role in creating the high-level design artifacts. You will also deliver high-quality code deliverables for a module, lead validation for all types of testing, and support activities related to implementation, transition, and warranty. You will be part of a learning culture, where teamwork and collaboration are encouraged, excellence is rewarded, and diversity is respected and valued. 
Required Qualifications: Bachelor’s degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education. 4 years of experience with Information Technology. 2 years of hands-on experience working with technologies like GCP with data engineering – data flow / air flow, pub sub / Kafka, data proc / Hadoop, BigQuery. 3 years of ETL development experience with a strong SQL background, such as Python/R, Scala, Java, Hive, Spark, Kafka. 2 years of experience with any traditional RDBMS (e.g., Teradata, Oracle, DB2). Preferred Qualifications: GCP (Google Cloud Platform) experience. Python experience. Data analysis / data mapping skills. CI/CD exposure. Ability to work in a team in a diverse/multiple-stakeholder environment. Ability to communicate complex technology solutions to diverse teams, namely technical, business, and management teams."," Mid-Senior level "," Full-time "," Engineering and Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-akkodis-3505616141?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=0jhIwafV29zkONPc3Wxm8Q%3D%3D&position=19&pageNum=4&trk=public_jobs_jserp-result_search-card," Akkodis ",https://www.linkedin.com/company/akkodis?trk=public_jobs_topcard-org-name," United States "," 2 weeks ago "," Over 200 applicants ","Position Description: We’re seeking an experienced GCP Data Engineer who can build a cloud analytics platform to meet ever-expanding business requirements with speed and quality using lean Agile practices. This is an excellent opportunity in an advanced environment, offering an enjoyable fully-remote work environment, medical benefits, and for those who may require it, full sponsorship facilities including Green Card processing. 
In this role, you will work on analyzing and manipulating large datasets supporting the enterprise by activating data assets to support Enabling Platforms and Analytics in the Google Cloud Platform (GCP). You will be responsible for designing the transformation and modernization on GCP, as well as landing data from source applications to GCP. Experience with large scale solution and operationalization of data warehouses, data lakes and analytics platforms on Google Cloud Platform or other cloud environment is a must. We are looking for candidates who have a broad set of technology skills across these areas and who can demonstrate an ability to design right solutions with appropriate combination of GCP and 3rd party technologies for deploying on Google Cloud Platform. You will: Work in collaborative environment including pairing and mobbing with other cross-functional engineers. Work on a small Agile team to deliver working, tested software. Work effectively with fellow data engineers, product owners, data champions and other technical experts. Demonstrate technical knowledge/leadership skills and advocate for technical excellence. Develop exceptional Analytics data products using streaming, batch ingestion patterns in the Google Cloud Platform with solid Data warehouse principles. Be the Subject Matter Expert in Data Engineering and GCP tool technologies. Skills Required: Experience in working in an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment. Implement methods for automation of all parts of the pipeline to minimize labor in development and production. Very strong SQL skills. 2+ years of experience with Google Cloud Platform (GCP). Experience in analyzing complex data, organizing raw data and integrating massive datasets from multiple data sources to build subject areas and reusable data products. 
Experience in working with architects to evaluate and productionalize appropriate GCP tools for data ingestion, integration, presentation, and reporting. Experience in working with all stakeholders to formulate business problems as technical data requirement, identify and implement technical solutions while ensuring key business drivers are captured in collaboration with product management. Proficient in Machine Learning model architecture, data pipeline interaction and metrics interpretation. This includes designing and deploying a pipeline with automated data lineage. Identify, develop, evaluate and summarize Proof of Concepts to prove out solutions. Test and compare competing solutions and report out a point of view on the best solution. Integration between GCP Data Catalog and Informatica EDC. Design and build production data engineering solutions to deliver pipeline patterns using Google Cloud Platform (GCP) services: BigQuery, DataFlow, Pub/Sub, BigTable, Data Fusion, DataProc, Cloud Composer, Cloud SQL, Compute Engine, Cloud Functions, and App Engine. Skills Preferred: Experience building Machine Learning solutions using TensorFlow, BigQueryML, AutoML, Vertex AI. Experience in building solution architecture, provision infrastructure, secure and reliable data-centric services and application in GCP. Experience with DataPlex or Informatica EDC is preferred. Experience with development eco-system such as Git, Jenkins and CICD. Exceptional problem solving and communication skills. Experience in working with DBT/Dataform. Demonstrated commitment to quality and project timing. Demonstrated ability to document complex systems. Experience in creating and executing detailed test plans. Apply today! 
Equal Opportunity Employer/Veterans/Disabled To read our Candidate Privacy Information Statement, which explains how we will use your information, please visit https://www.modis.com/en-us/candidate-privacy/ The Company will consider qualified applicants with arrest and conviction records."," Mid-Senior level "," Full-time "," Engineering and Information Technology "," IT Services and IT Consulting and Motor Vehicle Manufacturing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-currance-3486695584?refId=wWftybpyf1t1LXclHsm6uw%3D%3D&trackingId=hHzVzbRMwxIhG1qxIwbqmw%3D%3D&position=6&pageNum=2&trk=public_jobs_jserp-result_search-card," Currance ",https://www.linkedin.com/company/currance?trk=public_jobs_topcard-org-name," Irvine, CA "," 4 weeks ago "," 124 applicants ","Must have healthcare background with focus on billing and account follow up. $80,000 - $105,000/year At Currance, employees make the difference for our customers. We are looking for people who are dedicated, consistent, organized, and proud of the work they produce. If this describes you, we want you to join our team. This is a remote job. Candidates who meet the job minimum qualifications must complete a prescreen video. Job Details Remote Paid every two weeks Employee referral program Advancement opportunities Benefits Medical Dental Vision 401K Life insurance Voluntary long-term disability Voluntary short-term disability Health savings account Job Overview Supports the integration of data extracts into the business platform. Deployment of proprietary software solutions to end-users. Develop and maintain data processing software like databases. Job Duties And Responsibilities Perform data integrity validation, code mappings, and client-specific parameterization of the features and functionalities available in our applications. 
Identify and implement changes to current data infrastructure, including staging tables, ETL procedures, data warehouses, and cubes to support the integration of new clients or to provide the foundation for new lines of business. Build required infrastructure for optimal extraction, transformation, and loading of data from various data sources using various technologies. Build analytical tools to utilize the data pipeline, providing actionable insight into key business performance metrics including operational efficiency and customer support. Work with stakeholders including data, design, product, and executive teams to assist with data-related technical issues. Develop, deploy, and maintain multi-level performance tracking reports. Safeguard the integrity, accuracy, and currency of the data made available to organizations. Develop customized reports based on clients’ needs. Perform other duties as assigned. Qualifications Bachelor’s degree in a technical field (preferably engineering, analytics, or information technology). Proven experience with relational databases and T-SQL, multidimensional schemas, and MDX. Healthcare background with a focus on billing and account follow-up, X12 data (837, 835, 276, 277). Knowledge, Skills, And Abilities: Knowledge of basic financial and accounting concepts. Knowledge of assimilating information from diverse sources. Skilled in analytics with a solid focus on accuracy and attention to detail. Skilled in verbal and written communications. Ability to manage multiple tasks at a time. Ability to complete deliverables based on prioritization and deadlines established. Ability to build and optimize data sets, data pipelines, and architectures. 
Ability to perform root cause analysis on external/internal data to identify opportunities for improvement. Ability to build processes that support data transformation, workload management, data structures, dependency, and metadata Powered by JazzHR Wp7xe2mIEN"," Mid-Senior level "," Full-time "," Information Technology "," Internet Publishing " Data Engineer,United States,Data Engineer (remote),https://www.linkedin.com/jobs/view/data-engineer-remote-at-carvana-3527090554?refId=wWftybpyf1t1LXclHsm6uw%3D%3D&trackingId=ledQjWEz1SRaiMiPw5BsAg%3D%3D&position=10&pageNum=2&trk=public_jobs_jserp-result_search-card," Carvana ",https://www.linkedin.com/company/carvana?trk=public_jobs_topcard-org-name," United States "," 3 weeks ago "," 42 applicants ","About ADESA... ADESA, a Carvana-owned company, currently operates 56 locations throughout the US. Our Vehicle Service & Logistics Centers, some up to 200 acres, provide a wide array of vehicle services including repair & reconditioning, and auction remarketing. Many of our sites serve as market hub distribution centers. Our inventory comprises hundreds of thousands of vehicles across North America from retail to commercial, OEM & more.  We’re excited about the future! As an industry leader, ADESA is poised for a multi-year expansion including huge investments in facilities, massive sales growth, and an ever-increasing inventory of vehicles! We are looking for great people who want to take this journey with us!   About The Position... We are looking for a data engineer with cloud data warehouse experience and a passion for turning data into information. You will develop data pipelines and data models that underpin our KPIs and business processes. This exciting, fast-paced role requires excellent organizational skills, critical thinking, problem-solving, and teamwork to enable business partners to make informed decisions. What You'll Be Doing... 
Implement data models and data engineering solutions for our cloud data warehouse Work with business stakeholders to discover ROI and key success metrics to guide our investments in technology and improve our online operations Analyze data to ensure quality and help assess whether business value can be achieved Develop SQL to prototype concepts and troubleshoot issues Create visual solutions to deliver to stakeholders What You Should Have… Experience engineering data ingestion and transformation solutions Demonstrated ability to understand and implement data models, particularly Data Warehouse use cases SQL and RDBMS experience (ex. Snowflake, RedShift, Oracle, SQL Server, MySQL, etc.) Experience with cloud warehousing and analytics (ex. Snowflake, BigQuery, etc.) OOP or functional programming experience (ex. Python, JavaScript, etc.) Excellent analytical and problem-solving skills with the ability to analyze and break down problems Drive to match data solutions to business goals Ability to visualize data (Tableau experience preferred) Strong oral and written communication skills Motivated self-starter, ability to initiate and drive work to completion Legal stuff… Hiring is contingent on passing a complete background check. This role is eligible for visa sponsorship. Carvana is an equal employment opportunity employer. All applicants receive consideration for employment without regard to race, color, religion, gender, sexual orientation, gender identity or expression, marital status, national origin, age, mental or physical disability, protected veteran status, or genetic information, or any other basis protected by applicable law. Carvana also prohibits harassment of applicants or employees based on any of these protected categories. Please note this job description is not designed to contain a comprehensive listing of activities, duties, or responsibilities that are required of the employee for this job. 
Duties, responsibilities, and activities may change at any time with or without notice."," Entry level "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-montani-consulting-3520446935?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=ipN42R198qTd8pWNvCxXtw%3D%3D&position=4&pageNum=3&trk=public_jobs_jserp-result_search-card," Montani Consulting ",https://www.linkedin.com/company/montani-consulting-services?trk=public_jobs_topcard-org-name," Wilmington, NC "," 1 week ago "," 113 applicants ","Pioneering fuel management software is our passion. We are constantly updating and refining our software and pride ourselves on delivering knowledge and power to our customers. As our company continues to grow, so does our team; this is where you come in! We are a tight-knit, dedicated team that works hard and has fun doing it. We’re looking to bring on a Data Engineer. This role will ensure data pipelines are scalable, repeatable, and secure and can serve multiple business units within the organization. The ideal candidate will have strong communication skills to train and educate fellow team members. Does this sound like you? Read more below! What we have to offer you: Medical, dental, and vision insurance with the company covering 100% of employee costs Short and long-term disability insurance 401K with up to 4% match Generous paid time off, including 10 paid holidays + an additional 10 days Opportunities for growth and promotion from within Competitive compensation package Sophisticated, spacious, modern office space less than three miles from Wrightsville Beach On an average day, you will: Work with other development teams to review and approve database changes according to database design standards and principles. Create and maintain SQL and/or Python jobs that import and export data between systems. 
Create data flow diagrams for data management systems. Code, test, and document new or modified data systems to create robust and scalable applications for data analytics. Provide detailed analysis, design, tuning, testing, implementation, and documentation of the production database systems. Translate business requirements into system requirements and assess where support is needed relative to existing technical system design. Participate in creating strategies and roadmaps for business intelligence and data platforms. Resolve conflicts between models, ensuring that data models are consistent with the ecosystem model (e.g., entity names, relationships, and definitions). To qualify for this job, you must have the following: A Bachelor’s degree in computer science, computer engineering, or similar technical discipline or equivalent work experience 2+ years of experience in MSSQL, writing complex stored procedures and functions Strong knowledge of relational databases, database structures and design, systems design, data management, and data warehouse Experience with performance tuning, building, and correcting indexes to optimize performance Experience using SQL CDC and triggers Experience with Kafka/Confluent Ability to learn and utilize programs to analyze data from different sources and produce meaningful reports and/or presentations We’d also love it if you have the following: Ability to understand Python, PHP, or other scripting languages 1+ years of experience supporting and maintaining complex SaaS application infrastructure EEO Statement: GE Software Inc. is an equal-opportunity employer committed to workplace diversity. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, national origin, age, gender identity, protected veteran status, status as a disabled individual, or any other protected group status or non-job characteristic as directed by law. 
"," Mid-Senior level "," Full-time "," Information Technology "," Internet Publishing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-stripe-3511633989?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=dQFhWQr6Ezr4o0kQ0FAb1g%3D%3D&position=5&pageNum=3&trk=public_jobs_jserp-result_search-card," Stripe ",https://www.linkedin.com/company/stripe?trk=public_jobs_topcard-org-name," United States "," 1 day ago "," Over 200 applicants ","Who we are About Stripe Stripe is a financial infrastructure platform for businesses. Millions of companies—from the world’s largest enterprises to the most ambitious startups—use Stripe to accept payments, grow their revenue, and accelerate new business opportunities. Our mission is to increase the GDP of the internet, and we have a staggering amount of work ahead. That means you have an unprecedented opportunity to put the global economy within everyone’s reach while doing the most important work of your career. About The Team The Data Science team builds data and intelligence into our product, sales, and operations. This spans across building data foundations and applying statistical techniques and machine learning to measure and optimize our product, build data-driven products, and conduct in-depth analysis to inform strategic decisions. What you’ll do We’re looking for people with a strong background in data engineering and analytics to help us scale while maintaining correct and complete data. You’ll be working with a variety of internal teams -- Engineering, Business -- to help them solve their data needs. Your work will provide teams with visibility into how Stripe’s products are being used and how we can better serve our customers. 
Responsibilities You’ll be working with a variety of internal teams -- Engineering, Business -- to help them solve their data needs Your work will provide teams with visibility into how Stripe’s products are being used and how we can better serve our customers Identify data needs for business and product teams, understand their specific requirements for metrics and analysis, and build efficient and scalable data pipelines to enable data-driven decisions across Stripe Design, develop, and own data pipelines and models that power internal analytics for product and business teams Help the Data Science team apply and generalize statistical and econometric models on large datasets Drive the collection of new data and the refinement of existing data sources, develop relationships with production engineering teams to manage our data structures as the Stripe product evolves Develop strong subject matter expertise and manage the SLAs for those data pipelines Who you are If you are data curious, excited about designing data pipelines, and motivated by having an impact on the business, we want to hear from you. Minimum Requirements Have a strong engineering background and are interested in data 5+ years of experience with writing and debugging data pipelines using a distributed data framework (Hadoop/Spark/Pig etc…) Have an inquisitive nature in diving into data inconsistencies to pinpoint issues Strong coding skills in Scala, Python, Java or another language for building performance data pipelines. 
Strong understanding and practical experience with systems such as Hadoop, Spark, Presto, Iceberg, and Airflow The ability to communicate cross-functionally with solid stakeholder management to derive requirements and architect scalable solutions."," Mid-Senior level "," Full-time "," Information Technology "," Software Development, Technology, Information and Internet, and Financial Services " Data Engineer,United States,Lead Data Engineer,https://www.linkedin.com/jobs/view/lead-data-engineer-at-amwell-3509404246?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=swcHD%2FSRNJ%2FJOsKpyDVLPg%3D%3D&position=11&pageNum=3&trk=public_jobs_jserp-result_search-card," Amwell ",https://www.linkedin.com/company/amwellcorp?trk=public_jobs_topcard-org-name," Boston, MA "," 1 month ago "," 34 applicants ","Company Description Amwell is a leading telehealth platform in the United States and globally, connecting and enabling providers, insurers, patients, and innovators to deliver greater access to more affordable, higher quality care. Amwell believes that digital care delivery will transform healthcare. We offer a single, comprehensive platform to support all telehealth needs from urgent to acute and post-acute care, as well as chronic care management and healthy living. With over a decade of experience, Amwell powers telehealth solutions for over 150 health systems comprised of 2,000 hospitals and 55 health plan partners with over 36,000 employers, covering over 80 million lives. Brief Overview: The Lead Data Engineer should have experience in end-to-end implementation of data-warehousing projects. This individual will manage, utilize, move, and transform data from our source system and applications data to the cloud to create reports for senior management and internal users. This individual will work both independently on assigned projects and collaboratively with other team members. The Lead Data Engineer will collaborate with architects, business users, and source (data) team members on discovering data sources and determine tech feasibility for fetching these data sources into the consolidated data environment/platform. 
This individual will iteratively design and build core components for our data platform, build various ETL pipelines among the various tools in play to surface data for consumption by our reporting tools, and prioritize competing requests from internal and external stakeholders in addition to keeping the reporting infrastructure on par with new product functionality and release cycles. The Lead Data Engineer will become a subject matter expert in data classification within the platform and utilize their expertise to identify the most efficient path to deliver data from source to target, as needed. Core Responsibilities: Design and write excellent, fully tested code to build ETL/ELT data pipelines and streams on a cloud platform. Have good communication skills, as well as the ability to work effectively across internal and external organizations and virtual teams. Take ownership of design & development processes, ensuring incorporation of best practices, the sanity of code and versioning into different environments through tools like Git, etc. Implement product features and refine specifications with our product manager and product owners. Continuously improve team processes to ensure information is of the highest quality, contributing to the overall effectiveness of the team. Stay familiar with industry changes, especially in the areas of cloud data and analytics technologies. Able to work on multiple areas, like data pipeline ETL, data modelling design, writing complex SQL queries, etc., and have a good understanding of BI/DWH principles. Plan and execute both short-term and long-term goals individually and leading the team. Provide best practices and direction for data engineering and design across multiple projects and functional areas. Understand SDLC (Software Development life cycle) and have knowledge of Scrum, Agile. Qualifications: 13+ years of development experience building data pipelines. Bachelor's Degree or equivalent experience is required. 
Preferred in Computer Science or related degree. Minimum of 5 years of experience in architecture of modern data warehousing platforms using technologies such as Big Data, Cloud, and Kafka. Cloud experience - any cloud, preferably BigQuery, Dataflow, Pub/Sub, and Data Fusion. Migration experience, experience utilizing GCP to move data from on-prem servers to the cloud. Strong Python development for data transfers and extractions (ELT or ETL). Experience developing and deploying ETL solutions like Informatica or similar tools. Experience working within an agile development process (Scrum, Kanban, etc). Familiarity with CI/CD concepts. Demonstrated proficiency in creating technical documentation. Understand modern concepts (how new-gen DB is implemented – like how BQ/Redshift works?). Airflow DAG development experience. Previous experience with Informatica or any ETL tool. Ability and experience in BI and Data Analysis, end-to-end development in data platform environments. Write excellent, fully tested code to build ETL/ELT data pipelines on Cloud. Provide in-depth and always-improving code reviews to your teammates. Build cloud data solutions and provide domain perspective on storage, big data platform services, serverless architectures, RDBMS, and DW/DM. Participate in deep architectural discussions to build confidence and ensure customer success when building new solutions on the GCP platform. Fix things before they break. Additional Information Your Team: Should you join Amwell and the Engineering team, you can expect: The development organization is a multi-disciplinary team of engineers dedicated to creating a state-of-the-art TeleHealth experience on every platform we can get our hands on. Our cross-functional teams follow a pragmatic Agile methodology as we balance feature requests, strategic initiatives, tech debt, and exciting partnerships on the path to delivering a market leading product to a quickly growing customer base. 
We work hand in hand with the whole Amwell organization to ensure that our product meets the needs of all of our users. Working at Amwell: Amwell is changing how care is delivered through online and mobile technology. We strive to make the hard work of healthcare look easy. In order to make this a reality, we look for people with a fast-paced, mission-driven mentality. We're a culture that prides itself on quality, efficiency, smarts, initiative, creative thinking, and a strong work ethic. Our Core Values include One Team, Customer First, and Deliver Awesome. Customer First and Deliver Awesome are all about our product and services and how we strive to serve. As part of One Team, we operate the Amwell Cares program, which brings needed assistance to our communities, whether that be free healthcare for the underserved or for people affected by natural disasters, support for equality, honoring doctors and nurses, or annual Amwell-matched donations to food banks. Amwell aims to be a force for good for our employees, our clients, and our communities. Amwell cares deeply about and supports Diversity, Equity and Inclusion. These initiatives are highlighted and reflected within our Three DE&I Pillars - our Workplace, our Workforce and our Community. Amwell is a ""virtual first"" workplace, which means you can work from anywhere, coming together physically for ideation, collaboration and client meetings. We enable our employees with the tools, resources and opportunities to do their jobs effectively wherever they are! Amwell has collaboration spaces in Boston, Tysons Corner, Portland, Woodland Hills, and Seattle. The typical base salary range for this position is $120,000 - $165,000. The actual salary offer will ultimately depend on multiple factors including, but not limited to, knowledge, skills, relevant education, experience, complexity or specialization of talent, and other objective factors. 
In addition to base salary, this role may be eligible for an annual bonus based on a combination of company performance and employee performance. Long-term incentive and short-term variable compensation may be offered as part of the compensation package dependent on the role. Some roles may be commission based, in which case the total compensation will be based on a commission and the above range may not be an accurate representation of total compensation. Further, the above range is subject to change based on market demands and operational needs and does not constitute a promise of a particular wage or a guarantee of employment. Your recruiter can share more during the hiring process about the specific salary range based on the above factors listed. Unlimited Personal Time Off (Vacation time) 401K match Competitive healthcare, dental and vision insurance plans Paid Parental Leave (Maternity and Paternity leave) Employee Stock Purchase Program Free access to Amwell's Telehealth Services, SilverCloud and The Clinic by Cleveland Clinic's second opinion program Free Subscription to the Calm App Tuition Assistance Program Pet Insurance"," Mid-Senior level "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer - Snowflake expert,https://www.linkedin.com/jobs/view/data-engineer-snowflake-expert-at-experfy-3514940838?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=QcTBK%2Fx5FpxPq8sxfQ5Q%2FQ%3D%3D&position=14&pageNum=3&trk=public_jobs_jserp-result_search-card," Experfy ",https://www.linkedin.com/company/experfy?trk=public_jobs_topcard-org-name," Boston, MA "," 1 month ago "," Be among the first 25 applicants ","Opportunity Description We are looking for a Data Engineer to join our digital data team in the data architecture operation and governance team to build and operationalize data pipelines necessary for the enterprise data and analytics and insights initiatives, following industry standard practices 
and tools. The bulk of the work would be in building, managing, and optimizing data pipelines and then moving them effectively into production for key data and analytics consumers like business/data analysts, data scientists or any persona that needs curated data for data and analytics use cases across the enterprise. In addition, guarantee compliance with data governance and data security requirements while creating, improving, and operationalizing these integrated and reusable data pipelines. The data engineer will be the key interface in operationalizing data and analytics on behalf of the business unit(s) and organizational outcomes. Tech Skills Knowledge of AWS. Knowledge of Azure or GCP is a plus Orchestration: Airflow Project management & support: JIRA projects & service desk, Confluence, Teams Expert in ELT and ETL Expert in Relational database technologies and concepts: Snowflake is a must have Perform SQL queries Create database models Maintain and improve query performance Working knowledge of Python and familiar with other scripting languages Good knowledge of cloud computing Soft Skills Pragmatic and capable of solving complex issues Ability to understand business needs Good communication Push innovative solutions Service-oriented, flexible & team player Self-motivated, take initiative Attention to detail & technical intuition Experience At least 5 years of experience in a data team as a Data Engineer Experience in the healthcare industry is a strong plus Snowflake certified Preferred Qualifications BS or MS in Computer Science Requirements Responsibilities Must work with business team to understand requirements, and translate them into technical needs Gather and organize large and complex data assets, perform relevant analysis Ensure the quality of the data in coordination with Data Analysts and Data Scientists (peer validation) Propose and implement relevant data models for each business case Optimize data models and workflows Communicate results and 
findings in a structured way Partner with Product Owner and Data Analysts to prioritize the pipeline implementation plan Partner with Data Analysts and Data scientists to design pipelines relevant for business requirements Leverage existing or create new “standard pipelines” within to bring value through business use cases Ensure best practices in data manipulation are enforced end-to-end Actively contribute to Data governance community"," Not Applicable "," Contract "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Associate Data Engineer,https://www.linkedin.com/jobs/view/associate-data-engineer-at-lowe-s-companies-inc-3492486840?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=4WrPzWkcY3vTzKRDbQr1KQ%3D%3D&position=2&pageNum=4&trk=public_jobs_jserp-result_search-card," Lowe's Companies, Inc. ",https://www.linkedin.com/company/lowe%27s-home-improvement?trk=public_jobs_topcard-org-name," Charlotte, NC "," 2 days ago "," Over 200 applicants ","Job Summary The primary purpose of this role is to assist with the translation of business requirements and functional specifications into modules and Data or Platform solutions. This includes assisting with the implementation and maintenance of business data solutions to ensure successful deployment of released applications. This role participates in all software development lifecycle phases and is critical to supporting software testing activities. 
Key Responsibilities: Assists with translating business requirements and specifications into modules and Data or Platform solutions with guidance from senior colleagues; provides insight into recommendations for technical solutions that meet design and functional needs Supports the development, configuration, or modification of integrated business and/or enterprise application solutions within various computing environments by leveraging various software development methodologies and programming languages Assists in the implementation and maintenance of business data solutions to ensure successful deployment of released applications with guidance from senior colleagues as appropriate Supports systems integration testing (SIT) and user acceptance testing (UAT) with guidance from senior colleagues to ensure quality software deployment Supports all software development end-to-end product lifecycle phases by applying an understanding of company methodologies, policies, standards, and controls Understands Computer Science and/or Computer Engineering fundamentals. 
Learning software architecture; actively seeks knowledge and applies to data solutions or platform applications Drives the adoption of new technologies by researching innovative technical trends and developments Solves technical problems; solutions may need refinement and/or feedback from more senior level engineers Data Engineering Responsibilities Supports the build, maintenance and enhancements of data lake development through a continuous learning of ingestion toolsets for DBMS and file system-based data ingestion Supports the build, maintenance and enhancements of data curation pipelines which are of simple to medium complexity Builds an understanding of policies, data, and resources to support projects Maintains the health and monitoring of assigned analytic capabilities for a specific analytic function BI Engineering Responsibilities Supports the build, maintenance and enhancements of BI solutions; creates basic reports, metrics, filters, and prompts Develops an understanding of the proper usage of attributes, facts, and transformations; develops an understanding of SQL generation from reports Provides daily monitoring of jobs/cubes, and overall health of the platform; resolves user access requests Platform Engineering Responsibilities Administers Hadoop clusters in DTQ and Production environments Performs application installs, upgrades, patching and troubleshooting efforts Performs cluster maintenance, user provisioning, automation of routine tasks, troubleshooting of failed jobs, configure and maintain security policies Helps team to create the conceptual, logical and physical design for hybrid cloud-based solutions for infrastructure and platforms Qualifications Minimum Qualifications High School or GED - General Studies Completion of coursework or program focused on software development or related skills (e.g., technology-related college coursework, bootcamp, certification program, coding academy) Preferred Qualifications Bachelor's degree in computer 
science, CIS, or related field 1 year of experience in software development or related field 1 year of experience developing and implementing business systems within an organization 1 year of experience working with defect or incident tracking software 1 year of experience writing technical documentation in a software development environment 1 year of experience with Web Services Experience with application and integration middleware Experience with database technologies 1 year of experience in Hadoop, NoSQL, RDBMS, Teradata, MicroStrategy or any Cloud Big Data components About Lowe’s Lowe’s Companies, Inc. (NYSE: LOW) is a FORTUNE® 50 home improvement company serving approximately 19 million customer transactions a week in the United States and Canada. With fiscal year 2021 sales of over $96 billion, Lowe’s and its related businesses operate or service nearly 2,200 home improvement and hardware stores and employ over 300,000 associates. Based in Mooresville, N.C., Lowe’s supports the communities it serves through programs focused on creating safe, affordable housing and helping to develop the next generation of skilled trade experts. For more information, visit Lowes.com. 
EEO Statement Lowe’s is an equal opportunity employer and administers all personnel practices without regard to race, color, religious creed, sex, gender, age, ancestry, national origin, mental or physical disability or medical condition, sexual orientation, gender identity or expression, marital status, military or veteran status, genetic information, or any other category protected under federal, state, or local law."," Associate "," Full-time "," Information Technology and Engineering "," Retail " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-pepsico-3517024138?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=hCQrgZxFCr6GFojQspzt0w%3D%3D&position=17&pageNum=4&trk=public_jobs_jserp-result_search-card," PepsiCo ",https://www.linkedin.com/company/pepsico?trk=public_jobs_topcard-org-name," Plano, TX "," 1 week ago "," 126 applicants ","Overview Job Title: Data Engineer - L06 PepsiCo operates in an environment undergoing immense and rapid change. Big-data and digital technologies are driving business transformation that is unlocking new capabilities and business innovations in areas like eCommerce, mobile experiences and IoT. The key to winning in these areas is being able to leverage enterprise data foundations built on PepsiCo’s global business scale to enable business insights, advanced analytics and new product development. PepsiCo’s Data Management and Operations team is tasked with the responsibility of developing quality data collection processes, maintaining the integrity of our data foundations and enabling business leaders and data scientists across the company to have rapid access to the data they need for decision-making and innovation. 
What PepsiCo Data Management and Operations does: Maintain a predictable, transparent, global operating rhythm that ensures always-on access to high-quality data for stakeholders across the company Responsible for day-to-day data collection, transportation, maintenance/curation and access to the PepsiCo corporate data asset Work cross-functionally across the enterprise to centralize data and standardize it for use by business, data science or other stakeholders Increase awareness about available data and democratize access to it across the company Job Description As a member of the data engineering team, you will be the key technical expert developing and overseeing PepsiCo's data product build & operations and drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help lead the development of very large and complex data applications into public cloud environments directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners and business users. You'll be working in a hybrid environment with in-house, on-premise data sources as well as cloud and remote systems. Responsibilities Active contributor to code development in projects and services. Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products. Understand and adapt existing frameworks for data engineering pipelines in the organization. 
Responsible for adopting best practices around systems integration, security, performance, and data management defined within the organization. Collaborate with the team and learn to build scalable data pipelines. Support data engineering pipelines and quickly respond to failures. Collaborate with the team to develop new approaches and build solutions at scale. Create documentation for learning and knowledge transfer. Learn and adapt automation skills/techniques in day-to-day activities. Qualifications 1+ years of overall technology experience, including at least 1+ years of hands-on software development and data engineering. 1+ years of development experience in programming languages like Python, PySpark, Scala, etc. Experience or knowledge in data modeling, SQL optimization, and performance tuning is a plus. 6+ months of cloud data engineering experience in Azure; certification is a plus. Experience with version control systems like GitHub and deployment & CI tools. Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines. Experience with data profiling and data quality tools is a plus. Experience in working with large data sets and scaling applications like Kubernetes is a plus. Experience with statistical/ML techniques is a plus. Experience with building solutions in the retail or supply chain space is a plus. Understanding of metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including DevOps and DataOps concepts. Familiarity with business intelligence tools (such as Power BI). Education BA/BS in Computer Science, Math, Physics, or other technical fields. Skills, Abilities, Knowledge Excellent communication skills, both verbal and written, and the ability to influence and demonstrate confidence in communications with senior-level management. Comfortable with change, especially that which arises through company growth. 
Ability to understand and translate business requirements into data and technical requirements. High degree of organization and ability to coordinate effectively with the team. Positive and flexible attitude and adjust to different needs in an ever-changing environment. Foster a team culture of accountability, communication, and self-management. Proactively drive impact and engagement while bringing others along. Consistently attain/exceed individual and team goals Ability to learn quickly and adapt to new skills. Competencies Highly influential and having the ability to educate challenging stakeholders on the role of data and its purpose in the business. Understands both the engineering and business side of the Data Products released. Places the user in the center of decision making. Teams up and collaborates for speed, agility, and innovation. Experience with and embraces agile methodologies. Strong negotiation and decision-making skill. Experience managing and working with globally distributed teams. COVID-19 vaccination is a condition of employment for this role. Please note that all such company vaccine requirements provide the opportunity to request an approved accommodation or exemption under applicable law. EEO Statement All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. PepsiCo is an Equal Opportunity Employer: Female / Minority / Disability / Protected Veteran / Sexual Orientation / Gender Identity If you'd like more information about your EEO rights as an applicant under the law, please download the available EEO is the Law & EEO is the Law Supplement documents. View PepsiCo EEO Policy. 
Please view our Pay Transparency Statement"," Mid-Senior level "," Full-time "," Information Technology "," Food and Beverage Services and Manufacturing " Data Engineer,United States,"Data Engineer, Database Engineering",https://www.linkedin.com/jobs/view/data-engineer-database-engineering-at-experfy-3501580751?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=RJYGK4TqH32FKQ4LowCd0A%3D%3D&position=21&pageNum=4&trk=public_jobs_jserp-result_search-card," Experfy ",https://www.linkedin.com/company/experfy?trk=public_jobs_topcard-org-name," Los Angeles, CA "," 1 month ago "," 26 applicants ","As a Data Engineer for our Data Platform Engineering team, you will join skilled Scala/Spark engineers and core database developers responsible for developing hosted cloud analytics infrastructure (Apache Spark-based), distributed SQL processing frameworks, proprietary data science platforms, and core database optimization. This team is responsible for building the automated, intelligent, and highly performant query planner and execution engines, RPC calls between data warehouse clusters, shared secondary cold storage, etc. This includes building new SQL features and customer-facing functionality, developing novel query optimization techniques for industry-leading performance, and building a database system that's highly parallel, efficient and fault-tolerant. This is a vital role reporting to exec leadership and senior engineering leadership. Responsibilities: Writing Scala code with tools like Apache Spark + Apache Arrow + Apache Kafka to build a hosted, multi-cluster data warehouse for Web3 Developing database optimizers, query planners, query and data routing mechanisms, cluster-to-cluster communication, and workload management techniques. 
Scaling up from proof of concept to “cluster scale” (and eventually hundreds of clusters with hundreds of terabytes each), in terms of both infrastructure/architecture and problem structure Codifying best practices for future reuse in the form of accessible, reusable patterns, templates, and code bases to facilitate metadata capture and management Managing a team of software engineers writing new code to build a bigger, better, faster, more optimized HTAP database (using Apache Spark, Apache Arrow, Kafka, and a wealth of other open source data tools) Interacting with exec team and senior engineering leadership to define, prioritize, and ensure smooth deployments with other operational components Highly engaged with industry trends within the analytics domain from a data acquisition, processing, engineering, and management perspective Understand data and analytics use cases across Web3 / blockchains Skills & Qualifications Bachelor’s degree in computer science or related technical field. Master's or PhD a plus. 6+ years' experience engineering software and data platforms / enterprise-scale data warehouses, preferably with knowledge of the open source Apache stack (especially Apache Spark, Apache Arrow, Kafka, and others) 3+ years' experience with Scala and Apache Spark (or Kafka) A track record of recruiting and leading technical teams in a demanding talent market Rock-solid engineering fundamentals; query planning, optimizing and distributed data warehouse systems experience is preferred but not required Nice to have: Knowledge of blockchain indexing, web3 compute paradigms, Proofs and consensus mechanisms... 
is a strong plus but not required Experience with rapid development cycles in a web-based environment Strong scripting and test automation knowledge Nice to have: Passionate about Web3, blockchain, decentralization, and a base understanding of how data/analytics plays into this"," Not Applicable "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Sr. Data Engineer,https://www.linkedin.com/jobs/view/sr-data-engineer-at-experfy-3531423358?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=RZN7yU4zG7Vm87vo6te9nQ%3D%3D&position=25&pageNum=4&trk=public_jobs_jserp-result_search-card," Experfy ",https://www.linkedin.com/company/experfy?trk=public_jobs_topcard-org-name," Los Angeles, CA "," 3 weeks ago "," Be among the first 25 applicants "," A Sr. Data Engineer is proficient in the development of all aspects of data processing including data warehouse architecture/modeling and ETL processing. The position focuses on the research, development, and delivery of analytical solutions using various tools including Confluent Kafka, Kinesis, Glue, Lambda, Snowflake and SQL Server. A Sr. 
Data Engineer must be able to work autonomously with little guidance or instruction to deliver business value. Position Responsibilities: Partner with business stakeholders to gather requirements and translate them into technical specifications and process documentation for IT counterparts (on-prem and offshore) Highly proficient in the architecture and development of an event-driven data warehouse; streaming, batch, data modeling, and storage Advanced database knowledge; creating/optimizing SQL queries, stored procedures, functions, partitioning data, indexing, and reading execution plans Skilled experience in writing and troubleshooting Python/PySpark scripts to generate extracts, cleanse, conform and deliver data for consumption Expert level of understanding and implementing ETL architecture; data profiling, process flow, metric logging and error handling Support continuous improvement by investigating and presenting alternatives to processes and technologies to an architectural review board Develop and ensure adherence to published system architectural decisions and development standards Multi-task across several ongoing projects and daily duties of varying priorities as required Interact with global technical teams to communicate business requirements and collaboratively build data solutions. Requirements: 8+ years of development experience Expert level in data warehouse design/architecture, dimensional data modeling and ETL process development Advanced level development in SQL/NoSQL scripting and complex stored procedures (Snowflake, SQL Server, DynamoDB, Neo4j a plus) Extremely proficient in Python, PySpark, and Java AWS Expertise – Kinesis, Glue (Spark), EMR, S3, Lambda, and Athena Streaming Services – Confluent Kafka and Kinesis (or equivalent) Hands-on experience in designing and developing applications using the Java Spring Framework (Spring Boot, Spring Cloud, Spring Data, etc.). Apply for this job "," Not Applicable "," Contract "," 
Marketing "," Technology, Information and Internet " Data Engineer,United States,Data Engineer - Flights (100% Remote),https://www.linkedin.com/jobs/view/data-engineer-flights-100%25-remote-at-hopper-3483746062?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=SL%2FBJm1Rc4fbv7YpxhJc9A%3D%3D&position=1&pageNum=5&trk=public_jobs_jserp-result_search-card," Hopper ",https://ca.linkedin.com/company/hopper?trk=public_jobs_topcard-org-name," United States "," 6 days ago "," Over 200 applicants ","About The Job We are looking for an autonomous individual to join our Flights team as a Data Engineer to design and build data environments that are robust, high-quality, and provide fast access. The ideal candidate will leverage their experience as an engineer writing code and proficiency with data-intensive systems to build data pipelines and solve customer problems. The data engineer will own the data infrastructure, tooling, and technical guidance to enable the Flights team to make data-driven decisions. The role will be critical to building and executing on a progressive data strategy for our Flights team. A successful candidate has prior experience building a data pipeline infrastructure from scratch, scaling resilient solutions, and can find stability in abstract work. This person will also help to reinforce a culture of ownership over data integrity across all Flights teams enabling product managers in Flights to make data driven decisions at scale. Responsibilities Enable data-driven insights, accurate reporting by ensuring consistent, accurate, and accessible data for internal analysis. Work backwards from customer needs with Product to set and achieve measurable goals regarding data integrity. Develop and distribute best practices for recording, storing, and processing product data to facilitate rapid development and self-service. Draft and review specifications and plans relating to data generation and consumption related to the team's products. 
Write, review, and deploy production code related to data storage and processing for the team's products. Promote a culture of ownership of data concerns within the team. Strong Candidates Will Have 3+ years of recent experience building and operating high-volume and high-reliability data processing systems A strong sense of ownership over outcomes, and preference for results over process Hands-on experience working with the Google Cloud Platform suite of tools Experience with data warehousing, data infrastructure, and ETL (Terraform, Airflow, Dataflow, BigQuery or similar tools) Enthusiasm to collaborate across multiple teams and stakeholders on abstract problems and to develop solutions ETL batch and workflow orchestration with AWS Data Pipeline/Airflow or similar Experience with designing and building large-scale data pipelines in distributed environments with technologies such as Hadoop, Cassandra, Kafka, Spark, HBase, etc. Backend development experience with Scala, Java, Python, Unix shell scripting and a zeal for writing well-designed, testable software Experience with Business Intelligence tools such as Data Studio, Amplitude, Tableau, or similar Demonstrated ability to create a vision, architecting scalable long-term solutions that apply technology to solve business problems Preferred Qualifications Degree in a relevant technical field (Computer Science, Mathematics or Statistics) Broad understanding of cloud contact center technologies and operations More About Hopper At Hopper, we are on a mission to become the world’s best — and most fun — place to book travel. By leveraging massive amounts of data and advanced machine learning algorithms, Hopper combines its world-class travel agency offering with proprietary fintech products to help customers spend less and travel better. Ranked the third largest online travel agency in North America, the app has been downloaded nearly 80 million times and continues to gain market share globally. 
Here are just a few stats that demonstrate the company’s recent growth: Hopper sold around $4 billion in travel and travel fintech in 2022, up nearly 3X over 2021. In 2022, Hopper increased its revenue 2.5X year-over-year. The company’s bespoke fintech products, such as Flight Disruption Guarantee and Price Freeze, now represent 30-40% of Hopper’s total app revenue. Given the success of its fintech products, Hopper launched a B2B initiative called Hopper Cloud in late 2021. Through this partnership program, any travel provider (airlines, hotels, banks, travel agencies, etc.) can integrate and seamlessly distribute Hopper’s fintech or travel inventory. As its first Hopper Cloud partnership, Hopper partnered with Capital One to co-develop Capital One Travel, a new travel portal designed specifically for cardholders. Recognized as one of the world’s most innovative companies by Fast Company four years in a row, Hopper has been downloaded over 80 million times and continues to have millions of new installs each month. Hopper has raised over $700 million USD of private capital and is backed by some of the largest institutional investors and banks in the world. Hopper is primed to continue its acceleration as the world’s fastest-growing mobile-first travel marketplace. Come take off with us!"," Associate "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Sr. Data Engineer,https://www.linkedin.com/jobs/view/sr-data-engineer-at-experfy-3531423358?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=E3CYhiKQaGRhcQ%2B7zKsx7Q%3D%3D&position=2&pageNum=5&trk=public_jobs_jserp-result_search-card," Experfy ",https://www.linkedin.com/company/experfy?trk=public_jobs_topcard-org-name," Los Angeles, CA "," 3 weeks ago "," Be among the first 25 applicants "," A Sr. Data Engineer is proficient in the development of all aspects of data processing including data warehouse architecture/modeling and ETL processing. 
The position focuses on the research, development, and delivery of analytical solutions using various tools including Confluent Kafka, Kinesis, Glue, Lambda, Snowflake and SQL Server. A Sr. Data Engineer must be able to work autonomously with little guidance or instruction to deliver business value. Position Responsibilities: Partner with business stakeholders to gather requirements and translate them into technical specifications and process documentation for IT counterparts (on-prem and offshore) Highly proficient in the architecture and development of an event-driven data warehouse; streaming, batch, data modeling, and storage Advanced database knowledge; creating/optimizing SQL queries, stored procedures, functions, partitioning data, indexing, and reading execution plans Skilled experience in writing and troubleshooting Python/PySpark scripts to generate extracts, cleanse, conform and deliver data for consumption Expert level of understanding and implementing ETL architecture; data profiling, process flow, metric logging and error handling Support continuous improvement by investigating and presenting alternatives to processes and technologies to an architectural review board Develop and ensure adherence to published system architectural decisions and development standards Multi-task across several ongoing projects and daily duties of varying priorities as required Interact with global technical teams to communicate business requirements and collaboratively build data solutions. Requirements: 8+ years of development experience Expert level in data warehouse design/architecture, dimensional data modeling and ETL process development Advanced level development in SQL/NoSQL scripting and complex stored procedures (Snowflake, SQL Server, DynamoDB, Neo4j a plus) Extremely proficient in Python, PySpark, and Java AWS Expertise – Kinesis, Glue (Spark), EMR, S3, Lambda, and Athena Streaming Services – Confluent Kafka and Kinesis (or equivalent) Hands-on 
experience in designing and developing applications using the Java Spring Framework (Spring Boot, Spring Cloud, Spring Data, etc.). Apply for this job "," Not Applicable "," Contract "," Marketing "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-allcloud-3519897865?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=NFJb%2FVxPndmyrT%2FV4LzS7A%3D%3D&position=3&pageNum=5&trk=public_jobs_jserp-result_search-card," AllCloud ",https://www.linkedin.com/company/allcloud?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," Over 200 applicants ","About AllCloud AllCloud is a global professional services company providing organizations with cloud enablement and transformation. Through a unique combination of expertise and agility, AllCloud accelerates cloud innovation and helps organizations fully unlock the value received from cloud technology and data and analytics. As an AWS Premier Consulting Partner and audited MSP, a Salesforce Platinum Partner and Snowflake Premier Partner, AllCloud helps clients connect their front office and back office by building a new operating model that allows them to harness the benefits of cloud technology. AllCloud is supported by a robust ecosystem of technology partners, proven methodologies, and well-documented best practices, thereby elevating customers by achieving operational excellence on the cloud, within a secure environment, at every milestone of the journey to becoming cloud first. With years of experience and a portfolio of thousands of successful cloud deployments, AllCloud serves clients across the globe. AllCloud has offices in Israel, Europe and North America. About The Position Are you passionate about data and delivering solutions for clients that turn data into valuable, actionable information for their business? We are hiring Sr. Data Engineers with strong experience across the entire Cloud Data stack. 
The ideal candidate will have extensive experience in data pipelines (ELT/ETL), data replication, data warehousing and dimensional modeling, and curation of data sets for Data Scientists and Business Intelligence users. This candidate will also have excellent problem-solving ability dealing with large volumes of data. Responsibilities Establishing Data Replications Using Oracle GoldenGate and Qlik Data Integration (must have) Building scalable Cloud data solutions using MPP Data Warehouses (Snowflake, Redshift, or Azure Data Warehouse/Synapse), data storage (S3, Azure Blob Storage, Delta Lakes, or AWS Lake Formation) and analytics platforms (i.e. Spark, Databricks, etc.) Creation of data pipelines and transformations ELT – Matillion, FiveTran, etc. ETL – Informatica, Talend, etc. Transformations – dbt Load historical data to a data warehouse Scripting in Python or Shell Workflow Orchestrations using Apache Airflow, AWS Step Functions, etc. Familiarity with automated promotions, SCM tools, and CICD best practices Modeling and curation of data for visualization and predictive modeling users Design and implementation of AWS and/or Azure services such as Lambda, SNS, etc. Creating data integrations with scripting languages such as Python Writing complex SQL queries, stored procedures, etc. Requirements Bachelor’s degree, or equivalent experience, in Computer Science, Engineering, Mathematics or a related field. Commensurate work experience will be considered in lieu of degree 3+ years' experience using Oracle Golden Gate 3+ years' experience using Qlik Data Integration 5+ years’ experience building scalable Cloud data solutions using MPP Data Warehouses (Snowflake, Redshift, or Azure Data Warehouse/Synapse), data storage (S3, Azure Blob Storage, Delta Lakes, or AWS Lake Formation) and analytics platforms (i.e. Spark, Databricks, etc.) 
7+ years with complex SQL queries and scripting 5+ years’ experience building data pipelines via Python, Spark, or GUI Based tools 5+ years’ experience loading historical data to data warehouses 5+ years’ experience with AWS and/or Azure Cloud 5+ years developing, and deploying scalable enterprise data solutions (Enterprise Data Warehouses, Data Marts, ETL/ELT workloads, etc.) 5+ years of supporting business intelligence and analytic projects 2+ years’ DevOps experience 2+ years’ experience working in an environment with automated promotions to production Good understanding of code repositories such as GIT Excellent written and oral communication skills Pluses: Experience with a Business Intelligence tool such as Tableau, PowerBI, etc. Experience working at a consulting company Experience with Data Vault architecture Snowflake SnowPro AWS Data & Analytics Specialty AWS Database Specialty AWS Solutions Architect Associate AWS Developer Associate Why Work For Us? Our team inspires progress in each other and in our customers through our relentless pursuit of excellence; you will work with leaders who promote learning and personal development. We offer competitive salaries, bonus incentives, benefits, flexible hours, and mentoring. Apply now to become part of the team. 
AllCloud is an Equal Opportunity Employer and considers applicants for employment without regard to race, color, religion, sex, orientation, national origin, age, disability, genetics or any other basis forbidden under federal, provincial, or local law."," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-elsdon-consulting-ltd-3493468069?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=x0ODtkb%2BDTjLGFfbk9pWmA%3D%3D&position=4&pageNum=5&trk=public_jobs_jserp-result_search-card," Elsdon Consulting ltd ",https://uk.linkedin.com/company/elsdon-consulting?trk=public_jobs_topcard-org-name," United States "," 3 weeks ago "," Over 200 applicants ","Are you a Data Engineer? Do you enjoy working within the Aerospace sector? Would you be excited at the opportunity to work on mission-critical projects? If so, this may be the opportunity you have been looking for... This first-tier supplier is looking for a dedicated and passionate Data Engineer to implement data management systems from the conceptual stage all the way through to deployment. This role would be suited to someone who enjoys being involved in the full project lifecycle! At this time, this role is only open to US Citizens / Green card holders and the role will be largely remote. The Data Engineer will need: 2 years’ experience as a data engineer or similar role such as data developer, data architect, ETL developer, integration specialist, etc. Bachelor’s degree in data engineering, computer science, or a related field Preferred to have at least one related certificate such as Certified Analytics Professional, Azure Data Engineer Associate, AWS Big Data Specialty, etc. 
The responsibilities of the Data Engineer will be: Integrate multi-plant/system data within the enterprise data lake Develop and maintain efficient ingestion and transformation pipelines Monitor data pipelines and ensure they meet quality and timeliness SLAs Partner with data stewards, analysts, and scientists to ensure data is fit-for-purpose So if you are a Data Engineer and have been looking for a new position, or this role simply caught your eye, please do apply today or contact me directly at max.morrell@elsdonconsulting.com"," Mid-Senior level "," Full-time "," Information Technology "," Airlines and Aviation " Data Engineer,United States,Data Engineer I,https://www.linkedin.com/jobs/view/data-engineer-i-at-bloom-insurance-3513563302?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=EmDr5Hm4Md7k4td%2FAWBvmg%3D%3D&position=5&pageNum=5&trk=public_jobs_jserp-result_search-card," Bloom Insurance ",https://www.linkedin.com/company/bloominsurance?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," 152 applicants ","To support internal and external clients via processing and handling of data. To generate data solutions for ongoing immediate day to day business needs. Essential Functions Day to day functions include the following: Design data models and develop database structures in Microsoft SQL server. Write various database objects like stored procedures, functions, views, triggers for various front end applications. Write SQL scripts, create SQL agent jobs to automate tasks like data importing, exporting, cleansing tasks. Create database deployment packages for deploying changes. Identify & repair inconsistencies in data, database tuning, query optimization. Able to generate ad hoc data on demand. Able to identify best practices, documentation, communicate all aspects of projects in a clear, concise manner Develop simple SSIS packages to perform various ETL functions including data cleansing, manipulating, importing, exporting. 
Develop & maintain client facing reports by using various data manipulation techniques in SSRS and Visual Studio. Documentation Optimization recommendations Day to day troubleshooting .NET Programming as needed Education/Experience BA, BS, or Masters in computer science/related field preferred or an equivalent combination of education and experience derived from at least 2 years of professional work experience Solid experience with various versions of MS SQL Server and TSQL programming Microsoft Certified DBA a plus Skills/Knowledge Strong experience in writing efficient SQL code Working knowledge of SQL Server Management Studio (SSMS) Knowledge of SQL Server Reporting Services (SSRS) Knowledge of SQL Server Integration Services (SSIS) Knowledge of Red Gate DBA Tool Belt (SQL Compare, SQL Data Compare, SQL Source Control) a plus Knowledge of data science technologies is a plus Clear, concise communication skills, excellent organizational skills Highly self-motivated and directed Keen attention to detail High level of work intensity in a team environment High integrity and values-driven Eager for professional development Experience and understanding of source control management a plus What We Offer At Bloom, we offer an engaging, supportive work environment, great benefits, and the opportunity to build the career you always wanted. Benefits of working for Bloom include: Competitive compensation Comprehensive health benefits Long-term career growth and mentoring About Bloom As an insurance services company licensed in 48 contiguous U.S. states, Bloom focuses on enabling health plans to increase membership and improve the enrollee experience while reducing costs. We concentrate on two areas of service: technology services and call center services and are committed to ensuring our state-of-the-art software products and services provide greater efficiency and cost savings to clients. 
Ascend Technology™ Bloom provides advanced sales and enrollment automation technology to the insurance industry through our Ascend™ platform. Our Ascend™ technology platform focuses on sales automation efficiencies and optimizing the member experience from the first moment a prospect considers a health plan membership. Bloom is proud to be an Equal Opportunity employer. We do not discriminate based upon race, religion, color, national origin, sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law."," Entry level "," Full-time "," Strategy/Planning and Information Technology "," Insurance " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-agrograph-3462221858?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=hEWqC%2BkLgugLXXUFgyHw%2FA%3D%3D&position=7&pageNum=5&trk=public_jobs_jserp-result_search-card," Agrograph ",https://www.linkedin.com/company/agrograph?trk=public_jobs_topcard-org-name," Madison, WI "," 1 month ago "," 42 applicants ","Agrograph Inc., a global agrifinance company, is seeking a Data Engineer. We are seeking a skilled Data Engineer with experience working with databases & object-oriented programming languages like Python, with an interest in working with a systems programming language like Rust. The ideal candidate will have a strong background in data architecture and management, as well as experience working with geospatial data. Responsibilities Design, develop, and maintain data pipelines and architectures to support the collection, storage, and analysis of large-scale data sets Work closely with data scientists and analysts to identify and implement data solutions that meet their needs Build and maintain data storage solutions, including databases and data lakes Write and maintain code in Python and Rust to support data processing and analysis Optimize data pipelines for performance and scalability Collaborate with other teams to ensure data is accessible, accurate, and secure Qualifications Strong knowledge of databases and data storage solutions Experience with multiple programming languages; Python & Rust a plus Exposure to geospatial data sources and processes Familiarity with data warehousing, ETL processes and data modeling Familiarity with cloud-based container deployments Strong analytical and problem-solving skills Excellent communication and collaboration skills Bachelor's degree in Computer Science, Data Science, or a related field If you are a self-motivated and experienced data 
engineer with a passion for working with large-scale data, please apply for this role. Benefits Contribution to Simple IRA retirement package with 3% matching Flexible schedule, Remote Position"," Entry level "," Full-time "," Engineering and Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data 
Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-adroit-software-inc-3515908328?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=CFsnJOGOTe2gAzkR9MnMXg%3D%3D&position=12&pageNum=5&trk=public_jobs_jserp-result_search-card," Adroit Software Inc. ",https://www.linkedin.com/company/adroit-software-inc.?trk=public_jobs_topcard-org-name," Smithfield, RI "," 1 week ago "," 33 applicants "," For a financial client we need a Data Engineer. This position is based in Westlake, TX; Smithfield, RI; or Durham, NC. We are primarily looking for W2 candidates and not third-party candidates. Must-have skills: Building pipelines from Oracle to AWS using Python Snowflake data lake hosted on AWS In-depth understanding of ETL & data warehousing systems Developing SQL and PL/SQL objects per business logic in Teradata, PostgreSQL and/or Oracle databases Working experience with Informatica and Unix programming/shell scripting "," Executive "," Full-time "," Information Technology "," Software Development " Data Engineer,United States,jr Data Engineer,https://www.linkedin.com/jobs/view/jr-data-engineer-at-princeton-it-services-inc-3488504801?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=PMBbTqtKwP4OfADulX9uXw%3D%3D&position=14&pageNum=5&trk=public_jobs_jserp-result_search-card," Princeton IT Services, Inc ",https://www.linkedin.com/company/princeton-it-services-inc?trk=public_jobs_topcard-org-name," Raleigh, NC "," 3 weeks ago "," Be among the first 25 applicants ","Position: Jr Data Engineer Location: Raleigh, NC or Boston, MA Job Length: Long term Position Type: C2C/W2 Qualifications 4 years of experience in Java, Python, Spark 4 years of experience in Snowflake, Data Pipelines, SQL Stays current with technology trends in order to provide the best options for solutions Self-directed and able to decompose work into problem sets for self and project team. Equally capable working as part of a team or independently Responsibilities Designs, develops, tests, and delivers software solutions using one or more commercial languages as well as open-source tools. Data processing and analysis using Snowflake. Data warehouse using Data Pipelines along with data transformation and optimization. Comfortable working within a culture of accountability and experimentation Work closely with internal stakeholders to implement solutions and generate reporting to meet business goals. 
Demonstrate critical thinking for potential roadblocks; comprehends the bigger picture of the business and effectively communicates these issues to the greater news digital organization. Collaborates with reporting teams and business owners to turn data into actionable business insights using self-service analytics and reporting tools. Skills Required: Snowflake, Data Pipelines, SQL"," Mid-Senior level "," Full-time "," Information Technology "," Information Technology & Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-atlantic-partners-corporation-3493469708?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=%2BEDDe81tl%2BgJXNiXtOKgwA%3D%3D&position=15&pageNum=5&trk=public_jobs_jserp-result_search-card," Atlantic Partners Corporation ",https://www.linkedin.com/company/atlantic-partners?trk=public_jobs_topcard-org-name," Chicago, IL "," 3 weeks ago "," 134 applicants ","This position requires the individual to be comfortable working in a team environment and multi-tasking across one or more initiatives. The candidate must be a self-starter, be resourceful, and have exceptional analytical and problem-solving skills, while being willing to adapt to new technologies. In addition, the candidate should have an inquisitive nature, a passion for excellent service and very strong development skills in relevant technologies. This hands-on role requires critical thinking, enterprise vision and strong technical skills to oversee the development and testing of application and data projects. 
Responsibilities include: Serve as subject matter expert on the Azure Data platform Design, develop, test and implement solutions in either Agile or Waterfall methodologies using SQL, C#, Python, PowerShell, Azure Data Factory, Logic Apps, Synapse Understand client needs to align the solution with business requirements and delivery schedule and manage client expectations throughout solution implementation Develop detailed requirements documentation including technical / application architecture design, use cases, design specifications, data flows, acceptance criteria and user guide Lead the development process, including planning and monitoring development efforts, coordinating activities with other groups, reviewing deliverables and communicating statuses to stakeholders Provide technical guidance and mentorship to other team members Perform code reviews to ensure adherence to standards; responsible for overall quality and accurate implementation of developed solutions utilizing Azure DevOps integrations Lead SLA compliance and own performance tuning and optimization efforts of the Data platform Maintain a broad and current understanding of data analytics and business intelligence technologies, methodologies and tools Manage and build client relationships during project execution, effectively becoming a trusted advisor to the client Confidently participate in meetings and manage frequent communications with all levels of the Firm including key stakeholders and senior management Be resourceful to proactively identify and engage available resources and subject matter experts in related areas to achieve your goals Ensure that all delivered solutions meet the highest quality standards and satisfy all specified business requirements through rigorous validation and testing, both independent of, and in collaboration with, business and technical users Demonstrate a strong ability to multitask, manage time and deadlines under pressure Demonstrate flexibility in adjusting 
priorities to respond to changing internal / external demands in a very fast paced, growth environment Triage support issues and provide maintenance break / fix support for application and data solutions Executes best practices in development to ensure the delivery of robust solutions which can scale to meet business needs and avoid unnecessary down time Candidate Requirements Qualifications & Experience: Bachelor's Degree (Computer Science, Business or related field preferred) with 7-10 years of experience in a similar role in Financial Services In-depth experience in architecting, designing, performance optimizing and implementing complex data and analytics solutions with the Azure data platform: experience in full data pipeline technologies including Azure SQL Database / Managed Instance, Data Factory, Azure Logic App, Azure Blob Storage, Azure Data Lake, Synapse and other related tools 5+ total years of hands-on working experience with Microsoft SQL Server database / Azure SQL DB / Azure SQL Managed Instance DB 2+ years of hands-on working experience building automated ETL / ELT or data integration pipelines utilizing multiple source systems and experience with DevOps source control and CI / CD pipelines Strong problem-solving skills, an inquisitive nature, a passion for excellent service and resourcefulness Proactive self-starter with a positive can-do and 'no job too small' attitude Results-oriented with a high level of personal accountability Motivated by a fast paced, growing, and complex environment Delivery-oriented with very high quality and customer service standards Working knowledge of all phases of the SDLC in a team-oriented structure Exceptional interpersonal, verbal, written and presentation skills, with an eagerness to explore and educate development teams on new technologies and best practices Strong data analysis skills are required; experience executing small to medium-sized projects and a general understanding of Project Management 
principles are desired COVID vaccinations required, subject to applicable local, state and federal law"," Mid-Senior level "," Full-time "," Information Technology "," Investment Banking and Investment Management " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-global-payments-usds-at-tiktok-3496170873?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=nxe2Y5hzEsieI63%2FW%2Bu9iA%3D%3D&position=16&pageNum=5&trk=public_jobs_jserp-result_search-card," Atlantic Partners Corporation ",https://www.linkedin.com/company/atlantic-partners?trk=public_jobs_topcard-org-name," Chicago, IL "," 3 weeks ago "," 134 applicants "," This position requires the individual to be comfortable working in a team environment and multi-tasking across one or more initiatives. The candidate must be a self-starter, resourceful, have exceptional analytical and problem-solving skills, while being willing to adapt to new technologies. In addition, the candidate should have an inquisitive nature, a passion for excellent service and very strong development skills in relevant technologies. 
This hands-on role requires critical thinking, enterprise vision and strong technical skills to oversee the development and testing of application and data projects.Responsibilities include:Serve as subject matter expert on the Azure Data platformDesign, develop, test and implement solutions in either Agile or Waterfall methodologies using SQL, C#, Python, PowerShell, Azure Data Factory, Logic Apps, SynapseUnderstand client needs to align the solution with business requirements and delivery schedule and manage client expectations throughout solution implementationDevelop detailed requirements documentation including technical / application architecture design, use cases, design specifications, data flows, acceptance criteria and user guideLead the development process, including planning and monitoring development efforts, coordinating activities with other groups, reviewing deliverables and communicating statuses to stakeholdersProvide technical guidance and mentorship to other team membersPerform code reviews to ensure adherence to standards; responsible for overall quality and accurate implementation of developed solutions utilizing Azure DevOps integrationsLead SLA compliance and own performance tuning and optimization efforts of the Data platformMaintain a broad and current understanding of data analytics and business intelligence technologies, methodologies and toolsManage and build client relationships during project execution, effectively becoming a trusted advisor to the clientConfidently participate in meetings and manage frequent communications with all levels of the Firm including key stakeholders and senior managementBe resourceful to proactively identify and engage available resources and subject matter experts in related areas to achieve your goalsEnsure that all delivered solutions meet the highest quality standards and satisfy all specified business requirements through rigorous validation and testing, both independent of, and in collaboration with, 
business and technical usersDemonstrate a strong ability to multitask, manage time and deadlines under pressureDemonstrate flexibility in adjusting priorities to respond to changing internal / external demands in a very fast paced, growth environmentTriage support issues and provide maintenance break / fix support for application and data solutionsExecutes best practices in development to ensure the delivery of robust solutions which can scale to meet business needs and avoid unnecessary down timeCandidate Requirements Qualifications & Experience: Bachelor's Degree (Computer Science, Business or related field preferred) with 7-10 years of experience in a similar role in Financial ServicesIn-depth experience in architecting, designing, performance optimizing and implementing complex data and analytics solutions with the Azure data platform:experience in full data pipeline technologies including Azure SQL Database / Managed Instance, Data Factory, Azure Logic App, Azure Blob Storage, Azure Data Lake, Synapse and other related tools5+ total years of hands-on working experience with Microsoft SQL Server database / Azure SQL DB / Azure SQL Managed Instance DB2+ years of hands-on working experience building automated ETL / ELT or data integration pipelines utilizing multiple source systems and (4) experience with DevOps source control and CI / CD pipelinesStrong problem-solving skills, an inquisitive nature, a passion for excellent service and resourcefulnessProactive self-starter with a positive can-do and 'no job too small' attitudeResults-oriented with a high level of personal accountabilityMotivated by a fast paced, growing, and complex environmentDelivery-oriented with very high quality and customer service standardsWorking knowledge of all phases of the SDLC in a team-oriented structureExceptional interpersonal, verbal, written and presentation skills, with an eagerness to explore and educate development teams on new technologies and best practicesStrong data 
analysis skills are required; experience executing small to medium-sized projects and a general understanding of Project Management principles are desired. COVID vaccinations required, subject to applicable local, state and federal law "," Mid-Senior level "," Full-time "," Information Technology "," Investment Banking and Investment Management " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-firstpro-inc-3516812070?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=g3eZ%2FlrPiM90t0umYQMfWg%3D%3D&position=17&pageNum=5&trk=public_jobs_jserp-result_search-card," firstPRO, Inc ",https://www.linkedin.com/company/firstpro?trk=public_jobs_topcard-org-name," Orlando, FL "," 6 days ago "," 82 applicants "," FirstPro is now accepting resumes for a Data Engineer position based in Orlando, FL. This role will focus on operationalizing urgent data and building and managing data pipelines, moving that data to production while ensuring compliance. This is a permanent, direct-hire role that can offer benefits, annual bonus and a hybrid remote/onsite schedule. Responsibilities: Serve as a key contributor to identify, evaluate, and execute the development and implementation of data infrastructure. Perform analysis on large datasets to make and implement recommendations for maximizing customer experience. Assists in the design and implementation of relational databases and structures as needed. Works collaboratively with Application development teams throughout the product development process, to ensure optimal usage of SQL Server for storage and transaction processing. Build data pipelines with Azure Data Factory (ADF) to feed Microsoft SQL Server Business Intelligence stack including relational databases, data cubes (tabular/multidimensional), SQL Reporting, Power BI, and other tools as needed. Writes, refines, and optimizes T-SQL code for maximum performance, reliability, and maintainability. 
Participates in developing cutting-edge storage design structures and data processing flows. Creates documentation for both new and existing code. Participate in ensuring compliance and governance during data use: It will be the responsibility of the Data Engineer to ensure that the data users and consumers use the data provisioned to them responsibly through data governance and compliance initiatives. Participate in logic and technical design, peer code reviews, unit testing, and documentation of code developed. Participate in agile development ceremonies, and interact with both business analysts and end-users to come up with well-performing and scalable solutions. Qualifications: Bachelor's degree in Computer Science, statistics, applied mathematics, data management, information systems, information science, or a related quantitative field or equivalent work experience is required. 5+ years of experience developing SQL/T-SQL including Single-row and Multi-row functions, complex joins, Common Table Expressions (CTEs), Procedures, Packages, ETL jobs, and data lineages in ADF. Strong ability to design, build and manage data pipelines for data structures encompassing data transformation, data models, schemas, metadata, and workload management. The ability to work with both IT and business in integrating analytics and data science output into business processes and workflows. Strong experience with popular database programming languages including SQL for relational databases and knowledge of upcoming NoSQL/Hadoop oriented databases like MongoDB, Cosmos DB, others for nonrelational databases. Strong experience in working with large, heterogeneous datasets in building and optimizing data pipelines, pipeline architectures, and integrated datasets using traditional data integration technologies. These should include ETL/ELT, data replication/CDC, message-oriented data movement, and API design. 
Strong experience in working with and optimizing existing ETL processes and data integration and data preparation flows and helping to move them into production. Experience working with popular data discovery, analytics, and BI software tools like Power BI, Tableau, Alteryx, and others. Experience with the Microsoft SQL Server Business Intelligence stack (SSAS, SSIS, SSRS), and Excel/Power Query. "," Mid-Senior level "," Full-time "," Information Technology "," Staffing and Recruiting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-blue-cross-nc-3526779212?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=%2BIIE6Xv%2Br%2B97x%2FC%2FyP102Q%3D%3D&position=18&pageNum=5&trk=public_jobs_jserp-result_search-card," firstPRO, Inc ",https://www.linkedin.com/company/firstpro?trk=public_jobs_topcard-org-name," Orlando, FL "," 6 days ago "," 82 applicants "," FirstPro is now accepting resumes for a Data Engineer position based in Orlando, FL. This role will focus on operationalizing urgent data and building and managing data pipelines, moving that data to production while ensuring compliance. This is a permanent, direct-hire role that can offer benefits, annual bonus and a hybrid remote/onsite schedule. Responsibilities: Serve as a key contributor to identify, evaluate, and execute the development and implementation of data infrastructure. Perform analysis on large datasets to make and implement recommendations for maximizing customer experience. Assists in the design and implementation of relational databases and structures as needed. Works collaboratively with Application development teams throughout the product development process, to ensure optimal usage of SQL Server for storage and transaction processing. 
Build data pipelines with Azure Data Factory (ADF) to feed Microsoft SQL Server Business Intelligence stack including relational databases, data cubes (tabular/multidimensional), SQL Reporting, Power BI, and other tools as needed. Writes, refines, and optimizes T-SQL code for maximum performance, reliability, and maintainability. Participates in developing cutting-edge storage design structures and data processing flows. Creates documentation for both new and existing code. Participate in ensuring compliance and governance during data use: It will be the responsibility of the Data Engineer to ensure that the data users and consumers use the data provisioned to them responsibly through data governance and compliance initiatives. Participate in logic and technical design, peer code reviews, unit testing, and documentation of code developed. Participate in agile development ceremonies, and interact with both business analysts and end-users to come up with well-performing and scalable solutions. Qualifications: Bachelor's degree in Computer Science, statistics, applied mathematics, data management, information systems, information science, or a related quantitative field or equivalent work experience is required. 5+ years of experience developing SQL/T-SQL including Single-row and Multi-row functions, complex joins, Common Table Expressions (CTEs), Procedures, Packages, ETL jobs, and data lineages in ADF. Strong ability to design, build and manage data pipelines for data structures encompassing data transformation, data models, schemas, metadata, and workload management. The ability to work with both IT and business in integrating analytics and data science output into business processes and workflows. Strong experience with popular database programming languages including SQL for relational databases and knowledge of upcoming NoSQL/Hadoop oriented databases like MongoDB, Cosmos DB, others for nonrelational databases. 
Strong experience in working with large, heterogeneous datasets in building and optimizing data pipelines, pipeline architectures, and integrated datasets using traditional data integration technologies. These should include ETL/ELT, data replication/CDC, message-oriented data movement, and API design. Strong experience in working with and optimizing existing ETL processes and data integration and data preparation flows and helping to move them into production. Experience working with popular data discovery, analytics, and BI software tools like Power BI, Tableau, Alteryx, and others. Experience with the Microsoft SQL Server Business Intelligence stack (SSAS, SSIS, SSRS), and Excel/Power Query. "," Mid-Senior level "," Full-time "," Information Technology "," Staffing and Recruiting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-kuali-inc-3511021430?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=VTmLFMt5PxhGCNYEeYkJEw%3D%3D&position=19&pageNum=5&trk=public_jobs_jserp-result_search-card," firstPRO, Inc ",https://www.linkedin.com/company/firstpro?trk=public_jobs_topcard-org-name," Orlando, FL "," 6 days ago "," 82 applicants "," FirstPro is now accepting resumes for a Data Engineer position based in Orlando, FL. This role will focus on operationalizing urgent data and building and managing data pipelines, moving that data to production while ensuring compliance. This is a permanent, direct-hire role that can offer benefits, annual bonus and a hybrid remote/onsite schedule. Responsibilities: Serve as a key contributor to identify, evaluate, and execute the development and implementation of data infrastructure. Perform analysis on large datasets to make and implement recommendations for maximizing customer experience. Assists in the design and implementation of relational databases and structures as needed. 
Works collaboratively with Application development teams throughout the product development process, to ensure optimal usage of SQL Server for storage and transaction processing. Build data pipelines with Azure Data Factory (ADF) to feed Microsoft SQL Server Business Intelligence stack including relational databases, data cubes (tabular/multidimensional), SQL Reporting, Power BI, and other tools as needed. Writes, refines, and optimizes T-SQL code for maximum performance, reliability, and maintainability. Participates in developing cutting-edge storage design structures and data processing flows. Creates documentation for both new and existing code. Participate in ensuring compliance and governance during data use: It will be the responsibility of the Data Engineer to ensure that the data users and consumers use the data provisioned to them responsibly through data governance and compliance initiatives. Participate in logic and technical design, peer code reviews, unit testing, and documentation of code developed. Participate in agile development ceremonies, and interact with both business analysts and end-users to come up with well-performing and scalable solutions. Qualifications: Bachelor's degree in Computer Science, statistics, applied mathematics, data management, information systems, information science, or a related quantitative field or equivalent work experience is required. 5+ years of experience developing SQL/T-SQL including Single-row and Multi-row functions, complex joins, Common Table Expressions (CTEs), Procedures, Packages, ETL jobs, and data lineages in ADF. Strong ability to design, build and manage data pipelines for data structures encompassing data transformation, data models, schemas, metadata, and workload management. The ability to work with both IT and business in integrating analytics and data science output into business processes and workflows. 
Strong experience with popular database programming languages including SQL for relational databases and knowledge of upcoming NoSQL/Hadoop oriented databases like MongoDB, Cosmos DB, others for nonrelational databases. Strong experience in working with large, heterogeneous datasets in building and optimizing data pipelines, pipeline architectures, and integrated datasets using traditional data integration technologies. These should include ETL/ELT, data replication/CDC, message-oriented data movement, and API design. Strong experience in working with and optimizing existing ETL processes and data integration and data preparation flows and helping to move them into production. Experience working with popular data discovery, analytics, and BI software tools like Power BI, Tableau, Alteryx, and others. Experience with the Microsoft SQL Server Business Intelligence stack (SSAS, SSIS, SSRS), and Excel/Power Query. "," Mid-Senior level "," Full-time "," Information Technology "," Staffing and Recruiting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-web-scraper-at-collectbase-inc-3495982411?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=2AoM0n4Tn%2BWBGZ7%2FWAXjGQ%3D%3D&position=20&pageNum=5&trk=public_jobs_jserp-result_search-card," firstPRO, Inc ",https://www.linkedin.com/company/firstpro?trk=public_jobs_topcard-org-name," Orlando, FL "," 6 days ago "," 82 applicants "," FirstPro is now accepting resumes for a Data Engineer position based in Orlando, FL. This role will focus on operationalizing urgent data and building and managing data pipelines, moving that data to production while ensuring compliance. This is a permanent, direct-hire role that can offer benefits, annual bonus and a hybrid remote/onsite schedule. Responsibilities: Serve as a key contributor to identify, evaluate, and execute the development and implementation of data infrastructure. 
Perform analysis on large datasets to make and implement recommendations for maximizing customer experience. Assists in the design and implementation of relational databases and structures as needed. Works collaboratively with Application development teams throughout the product development process, to ensure optimal usage of SQL Server for storage and transaction processing. Build data pipelines with Azure Data Factory (ADF) to feed Microsoft SQL Server Business Intelligence stack including relational databases, data cubes (tabular/multidimensional), SQL Reporting, Power BI, and other tools as needed. Writes, refines, and optimizes T-SQL code for maximum performance, reliability, and maintainability. Participates in developing cutting-edge storage design structures and data processing flows. Creates documentation for both new and existing code. Participate in ensuring compliance and governance during data use: It will be the responsibility of the Data Engineer to ensure that the data users and consumers use the data provisioned to them responsibly through data governance and compliance initiatives. Participate in logic and technical design, peer code reviews, unit testing, and documentation of code developed. Participate in agile development ceremonies, and interact with both business analysts and end-users to come up with well-performing and scalable solutions. Qualifications: Bachelor's degree in Computer Science, statistics, applied mathematics, data management, information systems, information science, or a related quantitative field or equivalent work experience is required. 5+ years of experience developing SQL/T-SQL including Single-row and Multi-row functions, complex joins, Common Table Expressions (CTEs), Procedures, Packages, ETL jobs, and data lineages in ADF. Strong ability to design, build and manage data pipelines for data structures encompassing data transformation, data models, schemas, metadata, and workload management. 
The ability to work with both IT and business in integrating analytics and data science output into business processes and workflows. Strong experience with popular database programming languages including SQL for relational databases and knowledge of upcoming NoSQL/Hadoop oriented databases like MongoDB, Cosmos DB, others for nonrelational databases. Strong experience in working with large, heterogeneous datasets in building and optimizing data pipelines, pipeline architectures, and integrated datasets using traditional data integration technologies. These should include ETL/ELT, data replication/CDC, message-oriented data movement, and API design. Strong experience in working with and optimizing existing ETL processes and data integration and data preparation flows and helping to move them into production. Experience working with popular data discovery, analytics, and BI software tools like Power BI, Tableau, Alteryx, and others. Experience with the Microsoft SQL Server Business Intelligence stack (SSAS, SSIS, SSRS), and Excel/Power Query. "," Mid-Senior level "," Full-time "," Information Technology "," Staffing and Recruiting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-agility-partners-3516435971?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=oAmgZIbDRFJXVYmgf1Y40Q%3D%3D&position=21&pageNum=5&trk=public_jobs_jserp-result_search-card," Agility Partners ",https://www.linkedin.com/company/agilitypartners?trk=public_jobs_topcard-org-name," Cincinnati, OH "," 1 week ago "," Over 200 applicants ","A Little About This Gig: Agility Partners is working with a leading financial institution in search of a talented Data Engineer to join the Enterprise Data Office. The data engineer designs and builds platforms, tools, and solutions that help the bank manage, secure, and generate value from its data. 
The person in this role creates scalable and reusable solutions for gathering, collecting, storing, processing, and serving data on both small and very large (i.e. Big Data) scales. These solutions can include on-premise and cloud-based data platforms, and solutions in any of the following domains: ETL, business intelligence, analytics, persistence (relational, NoSQL, data lakes), search, messaging, data warehousing, stream processing, and machine learning. The Ideal Candidate: Bachelor's degree in Computer Science/Information Systems or equivalent combination of education and experience. Must be able to communicate ideas both verbally and in writing to management, business and IT sponsors, and technical resources in language that is appropriate for each group. Knowledge of application and data security concepts, best practices, and common vulnerabilities. Conceptual understanding of ONE OR MORE of the following disciplines preferred: big data technologies and distributions; metadata management products; commercial ETL tools; BI and reporting tools & messaging systems; data warehousing; Java (language and runtime environment); major version control systems; continuous integration/delivery tools; infrastructure automation and virtualization tools; major cloud, or REST API design and development. Reasons to Love It: Work within a collaborative team environment where ideas and creativity are welcomed! Family and Work Life balance are important to this organization and valued for the employees. 
Working for an organization that focuses on company culture, inclusion and diversity. Paid holidays, 50% medical coverage for you and your entire family, short/long-term disability and life insurance options, and 401(k)."," Mid-Senior level "," Contract "," Marketing, Public Relations, and Writing/Editing "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/cloud-data-engineer-at-govx-3498721045?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=pr37s2L6F5T2w%2BjUt5FKNw%3D%3D&position=22&pageNum=5&trk=public_jobs_jserp-result_search-card," Agility Partners ",https://www.linkedin.com/company/agilitypartners?trk=public_jobs_topcard-org-name," Cincinnati, OH "," 1 week ago "," Over 200 applicants "," A Little About This Gig: Agility Partners is working with a leading financial institution in search of a talented Data Engineer to join the Enterprise Data Office. The data engineer designs and builds platforms, tools, and solutions that help the bank manage, secure, and generate value from its data. The person in this role creates scalable and reusable solutions for gathering, collecting, storing, processing, and serving data on both small and very large (i.e. Big Data) scales. 
These solutions can include on-premise and cloud-based data platforms, and solutions in any of the following domains: ETL, business intelligence, analytics, persistence (relational, NoSQL, data lakes), search, messaging, data warehousing, stream processing, and machine learning. The Ideal Candidate: Bachelor's degree in Computer Science/Information Systems or equivalent combination of education and experience. Must be able to communicate ideas both verbally and in writing to management, business and IT sponsors, and technical resources in language that is appropriate for each group. Knowledge of application and data security concepts, best practices, and common vulnerabilities. Conceptual understanding of ONE OR MORE of the following disciplines preferred: big data technologies and distributions; metadata management products; commercial ETL tools; BI and reporting tools & messaging systems; data warehousing; Java (language and runtime environment); major version control systems; continuous integration/delivery tools; infrastructure automation and virtualization tools; major cloud, or REST API design and development. Reasons to Love It: Work within a collaborative team environment where ideas and creativity are welcomed! Family and Work Life balance are important to this organization and valued for the employees. 
Working for an organization that focuses on company culture, inclusion and diversity. Paid holidays, 50% medical coverage for you and your entire family, short/long-term disability and life insurance options, and 401(k). "," Mid-Senior level "," Contract "," Marketing, Public Relations, and Writing/Editing "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-insight-global-3487289353?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=RWxmPeL0%2FfX4Ofi%2FZiQ9ug%3D%3D&position=23&pageNum=5&trk=public_jobs_jserp-result_search-card," Agility Partners ",https://www.linkedin.com/company/agilitypartners?trk=public_jobs_topcard-org-name," Cincinnati, OH "," 1 week ago "," Over 200 applicants "," A Little About This Gig: Agility Partners is working with a leading financial institution in search of a talented Data Engineer to join the Enterprise Data Office. The data engineer designs and builds platforms, tools, and solutions that help the bank manage, secure, and generate value from its data. The person in this role creates scalable and reusable solutions for gathering, collecting, storing, processing, and serving data on both small and very large (i.e. Big Data) scales. 
These solutions can include on-premise and cloud-based data platforms, and solutions in any of the following domains: ETL, business intelligence, analytics, persistence (relational, NoSQL, data lakes), search, messaging, data warehousing, stream processing, and machine learning. The Ideal Candidate: Bachelor's degree in Computer Science/Information Systems or equivalent combination of education and experience. Must be able to communicate ideas both verbally and in writing to management, business and IT sponsors, and technical resources in language that is appropriate for each group. Knowledge of application and data security concepts, best practices, and common vulnerabilities. Conceptual understanding of ONE OR MORE of the following disciplines preferred: big data technologies and distributions; metadata management products; commercial ETL tools; BI and reporting tools & messaging systems; data warehousing; Java (language and runtime environment); major version control systems; continuous integration/delivery tools; infrastructure automation and virtualization tools; major cloud, or REST API design and development. Reasons to Love It: Work within a collaborative team environment where ideas and creativity are welcomed! Family and Work Life balance are important to this organization and valued for the employees. 
Working for an organization that focuses on company culture, inclusion and diversity. Paid holidays, 50% medical coverage for you and your entire family, short/long-term disability and life insurance options, and 401(k). "," Mid-Senior level "," Contract "," Marketing, Public Relations, and Writing/Editing "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-modis-3488195421?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=SukY4EGaxTBGHNK5syXryg%3D%3D&position=24&pageNum=5&trk=public_jobs_jserp-result_search-card," Modis ",https://ch.linkedin.com/company/modis?trk=public_jobs_topcard-org-name," Dearborn, MI "," 3 weeks ago "," Over 200 applicants ","The Data Factory Enablement Team, as the name suggests, enables teams to build their solutions on the GCP Data Factory Platform by providing tools, guidelines, processes and support. We are looking for candidates who have a broad set of technology skills across areas and come from a background of DevOps, with exposure to infrastructure and solution monitoring. This person will be expected to provide consultative services to the Software Development and Database Engineering teams. 
Key responsibilities include: • Work as part of an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment of company's Data Platform • Implement methods for standardization of all parts of the pipeline to maximize data usability and consistency • Test and compare competing solutions and report out a point of view on the best solution • Design and build CI/CD pipelines for Google Cloud Platform (GCP) services: BigQuery, DataFlow, Pub/Sub, Data Fusion and others • Work with stakeholders including Analytics, Product, and Design teams to assist with data-related technical issues and support their data infrastructure needs • Develop IaC Tekton pipelines to execute pattern playbooks and templates • Design cloud performance and monitoring strategies • Design and implement workflows to automate the infrastructure release and upgrade process for applications in Dev, UAT and Production environments • Mentor and grow technical skills of engineers across multiple sprint teams by giving high-quality feedback in design and code reviews and providing training for new methods, tools, and patterns Skills Required: • Someone who understands Cloud as being a way to operate and not a place to host systems... • In-depth understanding of GCP product technology and underlying architectures. 
• Strong experience with development-ecosystem tools such as Git, Jenkins, Terraform and Tekton for CI/CD • Experience in working with Agile and Lean methodologies"," Mid-Senior level "," Contract "," Information Technology and Analyst "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-fortress-information-security-3489435646?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=NH2LAdIoZ4hgxKKlrgcMDg%3D%3D&position=25&pageNum=5&trk=public_jobs_jserp-result_search-card," Fortress Information Security ",https://www.linkedin.com/company/fortress-information-security?trk=public_jobs_topcard-org-name," Greater Orlando "," 12 hours ago "," Over 200 applicants "," Data Engineer What you can expect as a Data Engineer at Fortress: The Data Engineer will be responsible for defining data input rules and processes to ensure data quality and integrity. The data input comes from many stakeholders and the candidate will need to identify anomalies and develop programmatic solutions to solve problems. A successful candidate will be well-versed in the implementation of data management strategies and data technologies. Day-to-day activities include data cleanup, documentation and definition of solution architectures, and hands-on development activities. 
Responsibilities Include: Develop an in-depth knowledge of Fortress’s data, data flows, and processes. Create and document data management standards, policies, and best practices. Drive alignment on enterprise data models and definitions. Provide technical guidance. Develop with a focus on automation and consistency. Can work independently or as a member of a team. Anticipate, recognize, report, and help resolve issues. Ability to understand how technical requirements impact current processes. Ability to understand and facilitate coordination of data requirements across teams. Minimum Qualifications: 2+ years of experience in a data engineer or equivalent role. Expertise in structured and unstructured databases such as PostgreSQL, MongoDB, and ElasticSearch. Expertise in building data models and complex SQL queries. Expertise in data quality validation with the ability to conduct data analysis, investigation, and document resolution. Experience in data process design, implementation, and improvement. Ability to lead your own projects and operate with a high degree of autonomy in a remote working environment. Must have the ability to explain technical concepts and make decisions with non-technical team members. Develops clean and intuitive code. Excellent written and verbal communication skills. Preferred Experience: Experience with Jira. Experience with Python and data analysis libraries such as Pandas. Use of agile and DevOps practices for project and software management, including continuous integration and continuous delivery. Excellent time management skills and proven ability to multi-task competing priorities. Education: Bachelor's Degree in Information Technology, Computer Science, Data Engineering, Data Analytics or equivalent degree from an accredited University. Employee Benefits: Remote and Hybrid working environment. Competitive pay structure. Medical, dental, vision plans with employees covered up to 90% with highly progressive options for dependents and families. Company paid 
life, short- and long-term disability insurance. Employee Assistance Program. 401(k) match. Paid time off and holiday pay. Access to thousands of Learning & Development courses that range from mental health and wellbeing, stress, and time management to an array of technical and business-related courses. Employment Perks: We provide each employee with professional growth opportunities through succession planning, up-skilling, and certifications. Tuition and certification reimbursement. Employee Referral Programs. Company Sponsored Events. Fortress is proud to be an Equal Opportunity Employer. All employees and applicants will receive consideration for employment without regard to age, color, disability, gender, national origin, race, religion, sexual orientation, gender identity, protected veteran status, or any other classification protected by federal, state, or local law. Fortress Information Security takes part in the E-Verify process for all new hires. For positions located in the US, the following conditions apply. If you are made a conditional offer of employment, you will have to undergo a drug test. ADA Disclaimer: In developing this job description care was taken to include all competencies needed to successfully perform in this position. However, for Americans with Disabilities Act (ADA) purposes, the essential functions of the job may or may not have been described for purposes of ADA reasonable accommodation. All reasonable accommodation requests will be reviewed and evaluated on a case-by-case basis. 
"," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,"Data Engineer, Database Engineering",https://www.linkedin.com/jobs/view/data-engineer-database-engineering-at-experfy-3516891520?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=LSW5kUWG%2BaDjebJqMfXJdw%3D%3D&position=6&pageNum=3&trk=public_jobs_jserp-result_search-card," Experfy ",https://www.linkedin.com/company/experfy?trk=public_jobs_topcard-org-name," New York, United States "," 1 month ago "," Be among the first 25 applicants ","As a Data Engineer for our Data Platform Engineering team, you will join skilled Scala/Spark engineers and core database developers responsible for developing hosted cloud analytics infrastructure (Apache Spark-based), distributed SQL processing frameworks, proprietary data science platforms, and core database optimization. This team is responsible for building the automated, intelligent, and highly performant query planner and execution engines, RPC calls between data warehouse clusters, shared secondary cold storage, etc. This includes building new SQL features and customer-facing functionality, developing novel query optimization techniques for industry-leading performance, and building a database system that's highly parallel, efficient and fault-tolerant. This is a vital role reporting to exec leadership and senior engineering leadership. Requirements & Responsibilities: Writing Scala code with tools like Apache Spark + Apache Arrow + Apache Kafka to build a hosted, multi-cluster data warehouse for Web3. Developing database optimizers, query planners, query and data routing mechanisms, cluster-to-cluster communication, and workload management techniques. 
Scaling up from proof of concept to “cluster scale” (and eventually hundreds of clusters with hundreds of terabytes each), in terms of both infrastructure/architecture and problem structure Codifying best practices for future reuse in the form of accessible, reusable patterns, templates, and code bases to facilitate metadata capture and management Managing a team of software engineers writing new code to build a bigger, better, faster, more optimized HTAP database (using Apache Spark, Apache Arrow, Kafka, and a wealth of other open source data tools) Interacting with exec team and senior engineering leadership to define, prioritize, and ensure smooth deployments with other operational components Highly engaged with industry trends within the analytics domain from a data acquisition, processing, engineering, and management perspective Understand data and analytics use cases across Web3 / blockchains Skills & Qualifications Bachelor’s degree in computer science or related technical field. Master’s or PhD a plus. 
is a strong plus but not required Experience with rapid development cycles in a web-based environment Strong scripting and test automation knowledge Nice to have: Passionate about Web3, blockchain, decentralization, and a base understanding of how data/analytics plays into this Apply for this job"," Not Applicable "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,"Data Engineer, Database Engineering",https://www.linkedin.com/jobs/view/data-engineer-database-engineering-at-experfy-3516885995?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=%2FZt1cHNw6W3OPVHkXyq%2FsQ%3D%3D&position=7&pageNum=3&trk=public_jobs_jserp-result_search-card," Experfy ",https://www.linkedin.com/company/experfy?trk=public_jobs_topcard-org-name," Boston, MA "," 1 month ago "," Be among the first 25 applicants ","As a Data Engineer for our Data Platform Engineering team you will join skilled Scala/ Spark engineers and core database developers responsible for developing hosted cloud analytics infrastructure (Apache Spark-based), distributed SQL processing frameworks, proprietary data science platforms, and core database optimization. This team is responsible for building the automated, intelligent, and highly performant query planner and execution engines, RPC calls between data warehouse clusters, shared secondary cold storage, etc. This includes building new SQL features and customer-facing functionality, developing novel query optimization techniques for industry-leading performance, and building a database system that's highly parallel, efficient and fault-tolerant. 
This is a vital role reporting to exec leadership and senior engineering leadership Requirements Responsibilities: Writing Scala code with tools like Apache Spark + Apache Arrow + Apache Kafka to build a hosted, multi-cluster data warehouse for Web3 Developing database optimizers, query planners, query and data routing mechanisms, cluster-to-cluster communication, and workload management techniques. Scaling up from proof of concept to “cluster scale” (and eventually hundreds of clusters with hundreds of terabytes each), in terms of both infrastructure/architecture and problem structure Codifying best practices for future reuse in the form of accessible, reusable patterns, templates, and code bases to facilitate metadata capture and management Managing a team of software engineers writing new code to build a bigger, better, faster, more optimized HTAP database (using Apache Spark, Apache Arrow, Kafka, and a wealth of other open source data tools) Interacting with exec team and senior engineering leadership to define, prioritize, and ensure smooth deployments with other operational components Highly engaged with industry trends within the analytics domain from a data acquisition, processing, engineering, and management perspective Understand data and analytics use cases across Web3 / blockchains Skills & Qualifications Bachelor’s degree in computer science or related technical field. Master’s or PhD a plus. 
6+ years experience engineering software and data platforms / enterprise-scale data warehouses, preferably with knowledge of open source Apache stack (especially Apache Spark, Apache Arrow, Kafka, and others) 3+ years experience with Scala and Apache Spark (or Kafka) A track record of recruiting and leading technical teams in a demanding talent market Rock solid engineering fundamentals; query planning, optimizing and distributed data warehouse systems experience is preferred but not required Nice to have: Knowledge of blockchain indexing, web3 compute paradigms, Proofs and consensus mechanisms... is a strong plus but not required Experience with rapid development cycles in a web-based environment Strong scripting and test automation knowledge Nice to have: Passionate about Web3, blockchain, decentralization, and a base understanding of how data/analytics plays into this Apply for this job"," Not Applicable "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer 1,https://www.linkedin.com/jobs/view/data-engineer-1-at-above-lending-3490585771?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=GVwEYS8FkmY0Rw1FMQSMuQ%3D%3D&position=6&pageNum=5&trk=public_jobs_jserp-result_search-card," Above Lending ",https://www.linkedin.com/company/above-lending?trk=public_jobs_topcard-org-name," Chicago, IL "," 3 weeks ago "," 161 applicants ","Above Lending, Inc. is a next-generation financial services company. We provide simple and transparent products aimed to help our clients achieve their personal finance aspirations and take control of their debt. With competitive rates and personalized support from our loan specialists, our mission is to simplify the lending process and help borrowers attain financial well-being. We are passionate about making credit more affordable and accessible, and we're committed to helping our clients accomplish their goals. 
About The Role (Chicago/Remote Position) The Data Engineer I will report to the Principal Data Engineer and will support data ingestion and reporting for our loan products, marketing campaigns, and bank partners. The role will collaborate cross-functionally with Credit, Strategy & Operations, and Engineering teams to deliver high-quality reporting and novel, actionable insights that contribute to meeting company goals. The ideal candidate will be a results-driven, strategic thinker able to thrive in a dynamic, rapid-growth environment, with demonstrable skills in SQL, Python, and data manipulation. What You'll Do Create and monitor data connectors from raw data sources to our Snowflake data warehouse. Design and implement the logic for both ad-hoc reporting and generalized reporting tables and transformations. Create tests, troubleshoot failures, and ensure data quality for various data ingestion and transformation jobs. Support and troubleshoot automated deployments and testing where needed, in accordance with CI/CD best practices. Assist the governance process of ensuring all stakeholders are aligned on the meaning of the data. Analyze data and deliver actionable recommendations to improve customer acquisition, performance, and retention. What We Look For Bachelor's degree or relevant work experience in Computer Science, Mathematics, or a related technical discipline. 0-3 years of professional experience in a data engineering role. Prior experience in Strategy Consulting, Financial Services, or Start-up environments preferred. Proficiency in SQL is required. Competency with Python is preferred. Familiarity with Airflow, Tableau, Fivetran, Snowflake, and/or DBT is a plus. Willingness to learn and grow as the company expands. The base salary range represents the low and high end of the anticipated salary range for this position. 
The actual base salary offered will depend on numerous factors including the individual’s skills, experience, performance, and the location where work is performed. Base salary may also be only one component of the offered competitive total rewards for this position that may also include commission, bonus, health care benefits, or other incentives. Base Salary Range $75,000—$95,000 USD Why join us? We are looking for great people to join a fast-paced, growing, and innovative business. For eligible full-time employees, we offer: Considerable employer contributions for health, dental and vision programs Generous personal time-off 401(K) match Merit advancement opportunities Career development & training More importantly, our team spirit and culture are what really set us apart as a company. We're a world-class company that loves what we do…and we have fun doing it! Under the California Consumer Privacy Act (""CCPA""), Above Lending is informing California residents who are our job applicants, contractors or prospective employees (together ""job applicants"") about the categories of personal information we collect about you and the purposes for which we will use this information. This notice and our Privacy Policy contain important information relating to the CCPA and apply only to personal information that is subject to the CCPA. Please see our website for the full CCPA statement. *Above Lending is an equal opportunity Employer* Above Lending does not accept unsolicited resumes from individual recruiters or third-party recruiting agencies in response to job positions. No fee will be paid to third parties who submit unsolicited candidates directly to Above Lending employees or the Above Lending Finance and HR teams. 
No placement fee will be paid to any third party unless such a request has been made by the Above Lending HR team."," Entry level "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer - Data Analytics,https://www.linkedin.com/jobs/view/data-engineer-data-analytics-at-costco-wholesale-3512816976?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=VT8yIVNPDGznB16Ou01sjg%3D%3D&position=8&pageNum=5&trk=public_jobs_jserp-result_search-card," Costco Wholesale ",https://www.linkedin.com/company/costco-wholesale?trk=public_jobs_topcard-org-name," Seattle, WA "," 1 week ago "," 152 applicants ","This is an environment unlike anything in the high-tech world and the secret of Costco’s success is its culture. The value Costco puts on its employees is well documented in articles from a variety of publishers including Bloomberg and Forbes. Our employees and our members come FIRST. Costco is well known for its generosity and community service and has won many awards for its philanthropy. The company joins with its employees to take an active role in volunteering by sponsoring many opportunities to help others. In 2021, Costco contributed over $58 million to organizations such as United Way and Children's Miracle Network Hospitals. Costco IT is responsible for the technical future of Costco Wholesale, the third largest retailer in the world with wholesale operations in fourteen countries. Despite our size and explosive international expansion, we continue to provide a family, employee centric atmosphere in which our employees thrive and succeed. As proof, Costco ranks seventh in Forbes “World’s Best Employers”. The Data Engineer - Data Analytics is responsible for the end to end data pipelines to power analytics and data services. At Costco, we are on a mission to significantly leverage data to provide better products and services for our members. 
This role is focused on data engineering to build and deliver automated data pipelines from a plethora of internal and external data sources. The Data Engineer will partner with product owners, engineering, and data platform teams to design, build, test, and automate data pipelines that are relied upon across the company as the single source of truth. If you want to be a part of one of the worldwide BEST companies “to work for”, simply apply and let your career be reimagined. ROLE Develops and operationalizes data pipelines to make data available for consumption (BI, Advanced analytics, Services). Works in tandem with data architects and data/BI engineers to design data pipelines and recommends ongoing optimization of data storage, data ingestion, data quality, and orchestration. Designs, develops, and implements ETL/ELT processes using IICS (Informatica Cloud). Uses Azure services such as Azure SQL DW (Synapse), ADLS, Azure Event Hub, and Azure Data Factory to improve and speed up delivery of our data products and services. Implements big data and NoSQL solutions by developing scalable data processing platforms to drive high-value insights to the organization. Identifies, designs, and implements internal process improvements - automating manual processes, optimizing data delivery. Identifies ways to improve data reliability, efficiency, and quality of data management. Communicates technical concepts to non-technical audiences both in written and verbal form. Performs peer reviews for other data engineers’ work. Required 3+ years’ experience engineering and operationalizing data pipelines with large and complex datasets. 3+ years of hands-on experience with Informatica PowerCenter. 3+ years’ experience with Data Modeling, ETL, and Data Warehousing. 1+ years of hands-on experience with Informatica IICS. 2+ years’ experience working with Cloud technologies such as ADLS, Azure Databricks, Spark, Azure Synapse, Cosmos DB, and other big data technologies. 
Extensive experience working with various data sources (SQL, Oracle database, flat files (csv, delimited), Web API, XML). Advanced SQL skills. Solid understanding of relational databases and business data; ability to write complex SQL queries against a variety of data sources. Strong understanding of database storage concepts (data lake, relational databases, NoSQL, Graph, data warehousing). Able to work in a fast-paced agile development environment. Scheduling flexibility to meet the needs of the business including weekends, holidays, and 24/7 on-call responsibilities on a rotational basis. Recommended BA/BS in Computer Science, Engineering, or equivalent software/services experience. Azure Certifications. Experience implementing data integration techniques such as event/message-based integration (Kafka, Azure Event Hub), ETL. Experience with Git / Azure DevOps. Experience delivering data solutions through agile software development methodologies. Exposure to the retail industry. Excellent verbal and written communication skills. Experience with UC4 Job Scheduler. Required Documents Cover Letter Resume California applicants, please click here to review the Costco Applicant Privacy Notice. Pay Ranges Level 1 - $75,000 - $110,000 Level 2 - $100,000 - $135,000 Level 3 - $125,000 - $165,000 Level 4 - $155,000 - $195,000 - Potential Bonus and Restricted Stock Unit (RSU) eligible level We offer a comprehensive package of benefits including paid time off, health benefits — medical/dental/vision/hearing aid/pharmacy/behavioral health/employee assistance, health care reimbursement account, dependent care assistance plan, commuter benefits, short-term disability and long-term disability insurance, AD&D insurance, life insurance, 401(k), stock purchase plan, SmartDollar financial wellness program, to eligible employees. Costco is committed to a diverse and inclusive workplace. Costco is an equal opportunity employer. 
Qualified applicants will receive consideration for employment without regard to race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or any other legally protected status. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request to IT-Recruiting@costco.com. If hired, you will be required to provide proof of authorization to work in the United States. In some cases, applicants and employees for selected positions will not be sponsored for work authorization, including, but not limited to, H-1B visas."," Entry level "," Full-time "," Information Technology "," Retail " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-addison-group-3518297127?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=jt%2Fo4M47OIoE%2F0hms4qezQ%3D%3D&position=9&pageNum=5&trk=public_jobs_jserp-result_search-card," Addison Group ",https://www.linkedin.com/company/addisongroup?trk=public_jobs_topcard-org-name," Greater Chicago Area "," 1 week ago "," 73 applicants ","Title: Data Engineer Location: Hybrid 2x Onsite in Chicago- River North Type: Direct Hire Salary: $120k-$140k No sponsorship or visa transfer available Responsibilities: Own projects from ideation through execution that evolve our data architecture/integration patterns while also solving for real and current business needs Develop and maintain ETL processes to transform raw data into usable formats for data analysis and reporting Evolve our data pipeline infrastructure using cloud native tools such as Azure Data Factory, Azure Databricks, and Azure Synapse Analytics Collaborate with other data stewards to ensure that our data is accurate, reliable, and consistent Stay up-to-date with the latest trends and technologies in data engineering Helps deliver results through a comprehensive understanding of iterative Agile delivery Requirements: 
Bachelor’s Degree in Computer Science with approximately 2-5 years of experience in designing, developing, and deploying data pipelines Experience taking difficult problems and translating them into solutions Strong proficiency in Python, JavaScript, SQL, Power Shell, SSIS, SSRS Strong understanding of data modeling, data warehousing, and data architecture principles Experience with Data Warehouse (Azure Synapse/Snowflake), Data Lake (Azure Data Lake Storage, Databricks), and Business Intelligence (Tableau, Power BI) platforms a plus Knowledge of enterprise data management platforms (Collibra, Microsoft Azure Data Catalog, etc.) a plus Success Factors: Self-starter and results oriented individual with the ability to multitask under minimal supervision Outstanding team player with an ability to motivate others to share your vision and enthusiasm A hands-on problem solver with a passion for Financial Services and technology Intellectually curious, willing to develop new skills outside of your comfort zone Exceptional communication and customer service skills, including the ability to interact professionally with a diverse group of customers Ability to work effectively on tight deadlines, as necessary"," Mid-Senior level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-apt-3519817015?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=b%2FbghSuZKujiTmzGrKYs3A%3D%3D&position=10&pageNum=5&trk=public_jobs_jserp-result_search-card," Addison Group ",https://www.linkedin.com/company/addisongroup?trk=public_jobs_topcard-org-name," Greater Chicago Area "," 1 week ago "," 73 applicants "," Title: Data Engineer Location: Hybrid 2x Onsite in Chicago- River North Type: Direct Hire Salary: $120k-$140k No sponsorship or visa transfer available Responsibilities: Own projects from ideation through execution that evolve our data architecture/integration patterns while also solving for
real and current business needs Develop and maintain ETL processes to transform raw data into usable formats for data analysis and reporting Evolve our data pipeline infrastructure using cloud native tools such as Azure Data Factory, Azure Databricks, and Azure Synapse Analytics Collaborate with other data stewards to ensure that our data is accurate, reliable, and consistent Stay up-to-date with the latest trends and technologies in data engineering Helps deliver results through a comprehensive understanding of iterative Agile delivery Requirements: Bachelor’s Degree in Computer Science with approximately 2-5 years of experience in designing, developing, and deploying data pipelines Experience taking difficult problems and translating them into solutions Strong proficiency in Python, JavaScript, SQL, Power Shell, SSIS, SSRS Strong understanding of data modeling, data warehousing, and data architecture principles Experience with Data Warehouse (Azure Synapse/Snowflake), Data Lake (Azure Data Lake Storage, Databricks), and Business Intelligence (Tableau, Power BI) platforms a plus Knowledge of enterprise data management platforms (Collibra, Microsoft Azure Data Catalog, etc.)
a plus Success Factors: Self-starter and results oriented individual with the ability to multitask under minimal supervision Outstanding team player with an ability to motivate others to share your vision and enthusiasm A hands-on problem solver with a passion for Financial Services and technology Intellectually curious, willing to develop new skills outside of your comfort zone Exceptional communication and customer service skills, including the ability to interact professionally with a diverse group of customers Ability to work effectively on tight deadlines, as necessary "," Mid-Senior level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-robert-half-3490309645?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=%2FSurG8D3wZuNBG4kewmTTg%3D%3D&position=11&pageNum=5&trk=public_jobs_jserp-result_search-card," Robert Half ",https://www.linkedin.com/company/robert-half-international?trk=public_jobs_topcard-org-name," United States "," 3 weeks ago "," Over 200 applicants ","Robert Half is looking for a Data Warehouse Engineer to work with a client specialized in transforming the e-learning experience. In this role you will design, code, test, and deploy new data warehouse features that ensure established standards are followed for application architecture, development and deployment. This is a fully-remote, contract opportunity for a candidate with a background in Healthcare or Education. Primary Responsibilities: Actively participate in code walkthroughs/inspections to ensure consistency and quality. 
Follow the established software development methodology Collaborate with stakeholders and other departments to plan and deploy new data warehouse releases or product enhancements Analyze query performance and perform query tuning to assist development engineers in designing and optimizing queries Troubleshoot production support issues in the application environments Extract data from disparate sources and transform into internal formats for loading into our platform Perform technical analysis and requirements definition with our partners on service integrations Required Qualifications: Bachelor’s degree in computer science or related technical field Solid understanding of database troubleshooting, and background with industry standards for database operations Previous experience working as a Data Warehouse Developer or a related role SQL - Advanced Python – For data pipeline building Strong Data warehousing expertise – ETL Proficiency in T-SQL Demonstrated proficiency in ETL processes such as SSIS/SSRS Background working in Healthcare or Education"," Mid-Senior level "," Full-time "," Information Technology "," Staffing and Recruiting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-hobbs-madison-inc-3519175692?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=gkrIb6V99zaWWr9fKT3uCQ%3D%3D&position=13&pageNum=5&trk=public_jobs_jserp-result_search-card," Hobbs Madison, Inc. ",https://www.linkedin.com/company/hobbs-madison-inc-?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," Over 200 applicants ","Hobbs|Madison, Inc is a Business and Technology consulting firm focused on Financial Services. We are seeking a Data Engineer to join our growing Banking segment. Data sits at the center of all information and helps fuel insight and analytics to empower bankers and service client’s demanding needs. 
We’re seeking an experienced data engineer with a deep understanding of SQL and data technologies to assist in the modernization and migration of credit card systems. The role will be responsible for comparing data elements across the legacy system and the modernized system to ensure quality, completeness and consistency of data across the systems. To achieve this, the role will work with extracts from the legacy system and the modernized system and will build custom programs to compare data elements across different domains, including customer data and transactional data. The intent is to build a reusable set of tools / programs to perform data validation in an automated and repeatable way. Our ideal team member will have experience with database technologies, file processing, data migration and building data comparison tools. They enjoy the challenge of large and complex data sources and have a solid background in transactional system data. The right candidate will be excited by the prospect of building flexible, scalable & reusable tools and interacting with client counterparts to drive identified discrepancies to resolution. You will work closely with a team of technologists and business subject matter experts to help facilitate the modernization and migration of credit card systems. 
Objectives of this Role · Work with technologists and business subject matter experts to help execute the modernization and migration of credit card systems · Facilitate the comparison of data extracts between the legacy system and the modernized system to establish quality, completeness & consistency and to identify errors · Identify false positives and summarize discrepancies between data sets in a way that will facilitate resolution with subject matter experts · Create flexible, reusable & scalable data comparison tools for data extracts that can be leveraged multiple times over the course of the migration and can be used incrementally (e.g., data domain by data domain) Daily and Monthly Responsibilities · Work as the lead to identify, summarize, and manage data quality issues between the legacy system and the modernized system · Work with stakeholders including technology and business subject matter experts to assist in resolving identified data issues · Build flexible, reusable & scalable data comparison tools to compare data extracts from legacy and modernized systems to identify and flag discrepancies · Track data issues to resolution and maintain a log of identified issues and resolution over time, including any “known issues” that may not be resolved immediately · Summarize and synthesize data quality insights across both systems · Act as a subject matter expert for data comparison, data migration and data quality · Build data expertise and identify and support data quality across selected areas Skills and Qualifications · Bachelor’s degree in Computer Science, Computer Engineering, or related field from an accredited institution · 3+ years of experience in a data wrangler/engineer role · 3+ years working in Financial Services · 4+ years of experience with SQL and flat files to aggregate and compare data across different sources · Highly proficient in SQL and flat file processing
knowledge, combined with experience joining and summarizing data across tables and systems · Experience manipulating, comparing and cleansing large data sets (1M+ records with multiple attributes) · Experience with credit card or other financial services terms and concepts · Experience in system or data migration, including quality assurance of migrated data · Experience with data tools and formats such as MS SQL Database, Java, Python, CSV, etc. · Proficiency in data gathering, cleansing, and analyzing (internal and external) · Comfort working in a dynamic, research-oriented group with several ongoing concurrent projects · Conceptual understanding of quality assurance, error tracking and trends · Strong organizational skills with an inquisitive analytical mindset What Makes You Stand Out • Degree in Computer Science, Statistics, Applied Math, Data Engineering or related field • Direct experience in comparing customer data sets and transactional data sets for equivalency • Ability to summarize findings to socialize with subject matter experts and drive to resolution"," Mid-Senior level "," Full-time "," Consulting and Engineering "," Banking and Financial Services " Data Engineer,United States,"Data Engineer, Global Payments - USDS",https://www.linkedin.com/jobs/view/data-engineer-global-payments-usds-at-tiktok-3496170873?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=nxe2Y5hzEsieI63%2FW%2Bu9iA%3D%3D&position=16&pageNum=5&trk=public_jobs_jserp-result_search-card," TikTok ",https://www.linkedin.com/company/tiktok?trk=public_jobs_topcard-org-name," New York, NY "," 12 hours ago "," 198 applicants ","Responsibilities TikTok is the leading destination for short-form mobile video. Our mission is to inspire creativity and bring joy. TikTok has global offices including Los Angeles, New York, London, Paris, Berlin, Dubai, Mumbai, Singapore, Jakarta, Seoul and Tokyo. Why Join Us At TikTok, our people are humble, intelligent, compassionate and creative. 
We create to inspire - for you, for us, and for more than 1 billion users on our platform. We lead with curiosity and aim for the highest, never shying away from taking calculated risks and embracing ambiguity as it comes. Here, the opportunities are limitless for those who dare to pursue bold ideas that exist just beyond the boundary of possibility. Join us and make impact happen with a career at TikTok. About USDS At TikTok, we're committed to a process of continuous innovation and improvement in our user experience and safety controls. We're proud to be able to serve a global community of more than a billion people who use TikTok to creatively express themselves and be entertained, and we're dedicated to giving them a platform that builds opportunity and fosters connection. We also take our responsibility to safeguard our community seriously, both in how we address potentially harmful content and how we protect against unauthorized access to user data. U.S. Data Security (“USDS”) is a standalone department of TikTok in the U.S. This new security-first division was created to bring heightened focus and governance to our data protection policies and content assurance protocols to keep U.S. users safe. Our focus is on providing oversight and protection of the TikTok platform and user data in the U.S., so millions of Americans can continue turning to TikTok to learn something new, earn a living, express themselves creatively, or be entertained. The teams within USDS that deliver on this commitment daily span Trust & Safety, Security & Privacy, Engineering, User & Product Ops, Corporate Functions and more. 
About the Team The Global Payment team of the US Tech Service department of TikTok provides all-round payment solutions for the company's overseas products, overseas commercialization, and the company's overseas travel and procurement, including channel access, product order design, user interaction, capital management, tax and exchange optimization, settlement reconciliation, and so on. In this role, you'll have the opportunity to develop and manage the complex challenges of scale with your expertise in large-scale system design. Responsibilities - Build data pipelines to portray business status, based on a deep understanding of our fast-changing business and data-driven approach. - Extract information and signals from a broad range of data and build hierarchies to accomplish analytical and mining goals for “Packaged Business Capability” such as user-growth, gaming and searching. - Keep improving the integrity of data pipelines to provide a comprehensive data service. Qualifications - Bachelor's degree in Computer Science, Statistics, Data Science or a related field. - Skilled in SQL and an additional object-oriented programming language (e.g. Scala, Java, or Python). - Experience in issue tracking and problem solving on data pipelines. - Quick to build business understanding and collaborative in teamwork. - Experience working with user growth is a plus. TikTok is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At TikTok, our mission is to inspire creativity and bring joy. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too. TikTok is committed to providing reasonable accommodations during our recruitment process. 
If you need assistance or an accommodation, please reach out to us at lois.chen@tiktok.com. Job Information: 【For Pay Transparency】Compensation Description (annually) The base salary range for this position in the selected city is $102,400 - $221,760 annually. Compensation may vary outside of this range depending on a number of factors, including a candidate’s qualifications, skills, competencies and experience, and location. Base pay is one part of the Total Package that is provided to compensate and recognize employees for their work, and this role may be eligible for additional discretionary bonuses/incentives and restricted stock units. At ByteDance/TikTok our benefits are designed to convey company culture and values, to create an efficient and inspiring work environment, and to support ByteDancers to give their best in both work and life. We offer the following benefits to eligible employees: We cover 100% of the premium for employee medical insurance and approximately 75% of the premium for dependents, and offer a Health Savings Account (HSA) with a company match, as well as Dental, Vision, Short/Long Term Disability, Basic Life, Voluntary Life and AD&D insurance plans, in addition to Flexible Spending Account (FSA) options like Health Care, Limited Purpose and Dependent Care. Our time off and leave plans are: 10 paid holidays per year plus 17 days of Paid Personal Time Off (PPTO) (prorated upon hire and increased by tenure) and 10 paid sick days per year, as well as 12 weeks of paid Parental leave and 8 weeks of paid Supplemental Disability. We also provide generous benefits like mental and emotional health benefits through our EAP and Lyra, a 401K company match, and gym and cellphone service reimbursements. 
The Company reserves the right to modify or change these benefits programs at any time, with or without notice."," Not Applicable "," Full-time "," Research, Information Technology, and Engineering "," Entertainment Providers " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-blue-cross-nc-3526779212?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=%2BIIE6Xv%2Br%2B97x%2FC%2FyP102Q%3D%3D&position=18&pageNum=5&trk=public_jobs_jserp-result_search-card," Blue Cross NC ",https://www.linkedin.com/company/bluecrossnc?trk=public_jobs_topcard-org-name," Durham, NC "," 3 weeks ago "," Be among the first 25 applicants ","Additional Locations: Full Time Remote - Alabama, Full Time Remote - Arizona, Full Time Remote - Arkansas, Full Time Remote - Florida, Full Time Remote - Georgia, Full Time Remote - Idaho, Full Time Remote - Indiana, Full Time Remote - Iowa, Full Time Remote - Kansas, Full Time Remote - Kentucky, Full Time Remote - Louisiana, Full Time Remote - Maryland, Full Time Remote - Michigan, Full Time Remote - Mississippi, Full Time Remote - Missouri, Full Time Remote - Ohio, Full Time Remote - Oklahoma, Full Time Remote - Pennsylvania, Full Time Remote - South Carolina, Full Time Remote - South Dakota, Full Time Remote - Tennessee, Full Time Remote - Texas, Full Time Remote - Utah, Full Time Remote - Virginia {+ 1 more} Job Description IT is different here. Our work as technology specialists pushes the boundaries of what’s possible in health care. You will build solutions that make a real difference in people’s lives. Driven by the importance of their work, our team members innovate to elevate. We’re encouraged to be curious, collaborate, and turn ideas into solutions that transform this space. In this role you will work closely with senior engineers, data scientists and other stakeholders to design and maintain moderate to advanced data models. 
The Data Engineer is responsible for developing and supporting advanced reports that provide accurate and timely data for internal and external clients. The Data Engineer will design and grow a data infrastructure that powers our ability to make timely and data-driven decisions. If you are ready to make a career out of making a difference, then you are the person for this team. What You’ll Do Define and extract data from multiple sources, integrate disparate data into a common data model, and integrate data into a target database, application, or file using efficient programming processes Document and test moderate data systems that bring together data from disparate sources, making it available to data scientists and other users using scripting and/or programming languages Write and refine code to ensure performance and reliability of data extraction and processing Participate in requirements gathering sessions with business and technical staff to distill technical requirements from business requests Develop SQL queries to extract data for analysis and model construction Own delivery of moderately sized data engineering projects Design and develop scalable, efficient data pipeline processes to handle data ingestion, cleansing, transformation, integration, and validation required to provide access to prepared data sets to analysts and data scientists Ensure performance and reliability of data processes Document and test data processes, including performance of thorough data validation and verification Collaborate with cross-functional teams to resolve data quality and operational issues and ensure timely delivery of products Develop and implement scripts for database and data process maintenance, monitoring, and performance tuning Analyze and evaluate databases in order to identify and recommend improvements and optimization Design eye-catching visualizations to convey information to users What You Will Bring Bachelor’s degree and 3 years of experience with Oracle, 
Data Warehouses and Data Lakes, Big Data platforms and programming in Python, R or other related language. In lieu of degree, 5 years of experience as stated above. Experience with Snowflake & AWS or the desire to learn"," Entry level "," Full-time "," Information Technology "," Insurance " Data Engineer,United States,Data Engineer - Web Scraper,https://www.linkedin.com/jobs/view/data-engineer-web-scraper-at-collectbase-inc-3495982411?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=2AoM0n4Tn%2BWBGZ7%2FWAXjGQ%3D%3D&position=20&pageNum=5&trk=public_jobs_jserp-result_search-card," Collectbase Inc. ",https://www.linkedin.com/company/collectbase?trk=public_jobs_topcard-org-name," San Francisco, CA "," 3 weeks ago "," Over 200 applicants ","Collectibles.com (Collectbase, Inc.) is building the world’s first Web3 marketplace and community for the industry, integrating blockchain technology to create a new consumer experience and innovative business model. We are a U.S.-based company with offices in Germany, venture-backed by leading Web3 and marketplace investors & advisors. By offering a powerful asset management system, Collectibles.com will help collectors organize, dynamically value, and trade their physical + digital items — with transparent data, fair fees and more trusted transactions. Powered by the industry’s most comprehensive data and a more efficient marketplace model, Collectibles.com will deliver a superior solution and user experience. As a seed-stage startup with big vision and huge ambitions, we’re currently a small team of experienced entrepreneurs and passionate collectors working to build a new category-defining business and leading destination. In growing our team, we are looking selectively for exceptionally talented fellow travelers: passionate builders who share our desire to innovate and succeed. Opportunity Collectibles, whose returns have even beaten the S&P, are becoming a more widely recognized alternative financial asset on a worldwide scale. 
Massive potential for disruption exists in the collectibles industry and consumer experience, which has remained unchanged for decades. Currently valued at over $400B and with a projected 6% annual growth rate, the collectibles TAM is anticipated to reach nearly $500 billion by 2027. Requirements Your Role We are seeking a talented Data Engineer to join our company's growing team. As a Data Engineer, you will be responsible for designing, building, and maintaining the infrastructure required for optimal extraction, transformation, and loading (ETL) of data from a variety of sources, including web scraping. You will also be responsible for developing and implementing data pipelines to support our business needs. The ideal candidate should have a strong background in data architecture and engineering principles, as well as experience with web scraping. You should be able to work independently and as part of a team, have excellent communication skills, and possess a strong desire to learn and grow within the company. If you are passionate about data and want to make an impact on our company's success, we would love to hear from you. 
Your Role Design and implement the architecture of a large-scale crawling system (50+ crawlers) Design, implement, and maintain various components of our data acquisition infrastructure (building new crawlers, maintaining existing crawlers, data cleaners & loaders) Work on developing tools to facilitate scraping at scale, monitor the health of crawlers, and ensure data quality of the scraped items Collaborate with our product and business teams to understand and anticipate requirements to strive for greater functionality and impact in our data gathering systems Your Profile 2+ years of experience with Python for data wrangling and cleaning 2+ years of experience with data crawling & scraping at scale (50+ spiders) Production experience with Scrapy is mandatory Solid understanding of web technologies (HTML, JavaScript, CSS, JSON, Selenium, APIs, etc.) Familiarity with data pipelining to integrate scraped items into existing data pipelines Ability to maintain all aspects of a scraping pipeline end to end (building and maintaining spiders, avoiding bot prevention techniques, data cleaning and pipelining, monitoring spider health and performance) Experience using techniques to protect web scrapers against site bans, IP leaks, browser crashes, CAPTCHAs, and proxy failures Knowledge of MongoDB, Postgres and Redis is a big plus Benefits Healthcare (Medical/Dental/Vision) coverage Holiday Pay: All regular, full-time employees are eligible for paid holidays Flexible Schedule: We provide a working environment where you're in charge of your time and schedule. Fully remote culture: Work from home (or wherever!) 
Learning budget — Buy courses and books Hardware — Whatever you need to get things done"," Not Applicable "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Cloud Data Engineer,https://www.linkedin.com/jobs/view/cloud-data-engineer-at-govx-3498721045?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=pr37s2L6F5T2w%2BjUt5FKNw%3D%3D&position=22&pageNum=5&trk=public_jobs_jserp-result_search-card," GovX ",https://www.linkedin.com/company/govx-inc?trk=public_jobs_topcard-org-name," San Diego, CA "," 1 month ago "," Be among the first 25 applicants ","The Cloud Data Engineer is part of a Data Management team that is responsible for modernizing and transforming our data and reporting capabilities across our products by implementing a new modernized data architecture. The position will be responsible for day-to-day data collection, transportation, maintenance/curation, and access to the GovX corporate data asset. The Cloud Data Engineer will work cross-functionally across the enterprise to centralize data and standardize it for use by business reporting, machine learning, data science or other stakeholders. This position plays a critical role in increasing the awareness about available data and democratizing access to it across GovX and our data partners. This position will be under the supervision of the Chief Technology Officer. Responsibilities Crafting and maintaining efficient data pipeline architecture. Assembling large, complex data sets that meet functional / non-functional business requirements. Create and maintain optimal data pipeline/flow architecture. Identifying, crafting, and implementing internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Working with technical and non-technical stakeholders to assist with data-related technical issues and support their data infrastructure needs. 
Working with the team to strive for clean and meaningful data, and greater functionality and flexibility within the team’s data systems. Design processes supporting data transformation, data structures, metadata, dependency, and workload management. Requirements Advanced working knowledge of SQL and NoSQL query authoring. Experience working with streams such as Event Hubs and Event Driven Architectures. Experience with Microsoft Power Platform including Power BI and Power Apps. Proficiency with object-oriented/object function scripting languages: Python, C#, etc. Experience with big data tools: Databricks, Spark, etc. Experience building, maintaining, and optimizing ‘big data’ data pipelines, architectures, and data sets. Experience cleaning, testing, and evaluating data quality from a wide variety of ingestible data sources. Strong project management, communication, and organizational skills. Supervisory Responsibilities This position has no direct supervisory responsibilities but provides oversight and mentoring. Work Environment This job operates in a professional office environment. This role routinely uses standard office equipment such as computers, phones, photocopiers, filing cabinets and fax machines. This role occasionally must lift and carry office equipment. Work Location Due to state law and tax implications, remote work candidates must live and work in one of the following states: California, Texas, Washington, Florida, Tennessee, or New York. Physical/Mental Demands Physical - This is largely a sedentary role. Mental – Problem solving, making decisions, interpreting data, organizing, reading/writing. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. Travel No travel is expected for this position. Preferred Education And Experience Bachelor’s degree or equivalent experience. 4+ years of proven experience deploying and maintaining always-on data services. 
2+ years in building data engineering pipelines in Azure (ADF, Databricks, or Synapse). 2+ years in cloud data engineering experience in Azure. 2+ years of experience with SQL Server. Other Duties Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice. Benefits Work for a company with a greater social mission. We work to serve those who serve our country and communities every day. No boring days. This is a fast-paced environment where you will learn a lot and have the direct ability to leave your mark on a growing e-commerce company. Remote-first work environment with modern San Diego HQ office that is available as needed. Flexible paid time off, paid holidays and sick leave. Competitive insurance benefits to include: Medical, Dental, Vision, and Life Flexible Spend Account (FSA) and Health Savings Account (HSA) also available. 401(k) plan with discretionary match available. Employee discounts on the GovX website. You won't find a better team of people to work with! Salary Range $95,000.00 - $140,000.00 annually AAP/EEO Statement EOE. Veterans/Disabled Position will require successful completion of a background check and drug testing prior to starting employment. About GovX, Inc. Savings for Those Who Serve GovX was founded in 2011 to offer exclusive benefits to those who serve our country. The GovX membership is comprised of current and former members of the American military, law enforcement, firefighting, medical services, and government personnel. We are dedicated to supporting these communities and to offering unique value to our members, while delivering an authentic platform for brands to reach our growing customer base. 
As the largest and fastest growing digital platform serving this deserving audience, we are committed to stretching the limits of ecommerce to deliver the best assortment for our members’ on-duty and off-duty needs. 0123"," Not Applicable "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-avalara-3480659857?refId=%2BW5eTwAEySJGim0oPmH7fQ%3D%3D&trackingId=xPn2sWsNGXk86xVdR8aNzQ%3D%3D&position=12&pageNum=4&trk=public_jobs_jserp-result_search-card," Avalara ",https://www.linkedin.com/company/avalara?trk=public_jobs_topcard-org-name," United States "," 1 day ago "," Over 200 applicants ","THIS IS NOT A CONTRACT ROLE - NO C2C OR C2H. Job Duties: Design functional data models by understanding use cases and data sources Build scalable, complex DBT models Design and build scalable data orchestration and transformation Inject SDLC best practices into the data stack, and provide guidance to other data engineers Implement CI/CD pipelines with automated testing, code instrumentation, and real-time monitoring Practice/implement data security, encryption and masking policies across various data sets and data sources Work with business and engineering teams to identify scope, constraints, dependencies, and risks Identify risks and opportunities across the business and drive solutions Qualifications: Minimum of 5 years of work experience in the data engineering field Minimum of 3 years of work experience with cloud technologies Minimum of 2 years of work experience in data pipelines (ETL/ELT) Minimum of 1 year of work experience with Snowflake (or similar) Experience with DBT (CLI or Cloud) Bachelor's degree in Computer Science or Engineering Advanced SQL proficiency Advanced understanding of Git fundamentals Experience conducting thorough code reviews using Git Working knowledge of DevOps concepts and CI/CD pipelines Working knowledge of Agile frameworks 
and Jira Proven ability to communicate effectively with technical and non-technical stakeholders across multiple business units Excellent problem-solving skills Demonstrated ability to debug complex environments and data pipelines Preferred Qualifications: Experience with Data Visualization tools (e.g., Tableau and Power BI) Experience creating CI/CD pipelines Functional experience with AWS About Avalara: Avalara, Inc., (www.Avalara.com), is the leading provider of cloud-based software that delivers a broad array of compliance solutions related to sales tax and other transactional taxes. We're building cloud-based tax compliance solutions to handle every transaction in the world. Imagine every transaction you make — every tank of gas, cup of coffee, or pair of sneakers, every movie ticket, meal kit, or streamed song, every sensor-to-sensor ping. Nearly every time you make a purchase, physical or digital, there's an accompanying unique and nuanced tax compliance calculation. The logic behind calculating taxes — the rules, rates, and boundaries — is a global, layered, three-dimensional mess of complexity, with compliance dictated by governments and applied by every business, every day. Avalara works with businesses of all sizes, all over the world — from corner stores to gigantic global retailers — to calculate tax accurately and automatically, at speeds measured in milliseconds. That's a massive technical challenge, in terms of scale, reliability, and complexity, and we do it better than anyone. That's why we're growing fast. Headquartered in Seattle, Avalara has offices across the U.S., Canada, Brazil, UK, Europe and India. What is it like to work at Avalara? Come find out! We are committed to the following success traits that embody our culture and how we work together to accomplish great things: Fun. Passion. Adaptability. Urgency. Simplicity. Curiosity. Humility. Ownership. Optimism. Avalara is an Equal Opportunity Employer. 
All qualified candidates will receive consideration for employment without regard to race, color, creed, religion, age, gender, national origin, disability, sexual orientation, US Veteran status, or any other factor protected by law."," Mid-Senior level "," Full-time "," Engineering "," Software Development and Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-apt-3519817015?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=b%2FbghSuZKujiTmzGrKYs3A%3D%3D&position=10&pageNum=5&trk=public_jobs_jserp-result_search-card," Apt ",https://www.linkedin.com/company/apt-staffing-consulting?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," Over 200 applicants ","Our client is looking to grow their Global Technology team by bringing on a Data Engineer focusing on Informatica. You will be responsible for product management, solution design, and operations for our client's global IAM platform and processes. This position is at least a 1-year contract and has the potential to go full time. What you'll do: Perform data analytics and develop data integrations using a variety of tools such as MS Excel, MS SQL Server, Oracle SQL Developer, Active Directory/LDAP, Informatica, Tableau and others. Learn and assist with the configuration, administration and support of our client's IAM platform ecosystem, including SailPoint IdentityIQ, PingFederate, ServiceNow and Microsoft 365. Gather business requirements from stakeholders and leverage them to propose new solutions and/or coordinate adoption of existing services. Define, document and maintain business process flows, data flows, architecture diagrams and other process & technical design artifacts. Coordinate and perform a variety of testing efforts including unit, quality assurance, user acceptance, integration, performance, regression, and production validation testing. 
Assist with the development, distribution and demonstration of IAM change management & training materials. Skills needed for success: Hands-on experience with data tools such as Microsoft SQL Server, Oracle Database, Informatica PowerCenter (ETL), and AutoSys (or similar products). Experience with data integration through APIs, especially developing connectivity to cloud-based applications through REST APIs and testing connectivity through tools such as Postman. Experience with Unix and scripting knowledge in Python, shell scripts, etc. Ability to understand ETL design, source-to-target mapping (STTM) and creation of other ETL specification documents. Strong interpersonal and collaboration skills; clear and concise documentation and communications. Attention to detail and proficiency in dissecting issues into component parts in order to extract requirements, identify problems and recommend tactical or strategic solutions. Adaptability to work with an evolving portfolio of technologies and platforms. Aptitude to innovate and approach problems with a variety of different approaches. Self-motivated, willing to learn, and accountable for delivering assignments on time. Knowledge of the hospitality industry and common hotel-related business processes/functions."," Mid-Senior level "," Contract "," Engineering and Information Technology "," Staffing and Recruiting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-kuali-inc-3511021430?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=VTmLFMt5PxhGCNYEeYkJEw%3D%3D&position=19&pageNum=5&trk=public_jobs_jserp-result_search-card," Kuali, Inc. ",https://www.linkedin.com/company/kualico?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," Over 200 applicants ","Description Who are we? Kuali builds software solutions for higher education. 
We help our customers — colleges & universities — focus on providing a fantastic education to students by decreasing their administrative costs. We work in a competitive space, ripe for innovation, with users ready to be delighted. Our Culture As a company, we are guided by our cultural values: Iterate to evolve Cultivate openness Act with accountability Assume the best Practice humility Deliver amazing experiences As Kuali engineers, we learn from and teach each other, we practice transparency and empathy, and we delight in delivering value to our customers. We work remotely, and have for years. Distributed work is in our bones, with a history of institutions working across state lines on open-source software for more than ten years. Our employees each work in the environment where they’re happiest, from Pennsylvania to Hawaii. We work consciously to create a collaborative and healthy remote work culture, and we travel to meet in person a few times each year. Everyone should love their work. Kuali has been voted a top place to work for 5 years in a row by the Salt Lake Tribune. We also made Forbes' list of America's Best Startup Employers for 2020. Not too shabby. Your product team You will work closely with product, design, and our customers on the Kuali Financials product. Customers use our Financials product to efficiently and effectively manage the complex accounting needs of higher education. Requirements Who are you? We’re looking for curious, enthusiastic, empathetic engineers to solve problems, execute on ideas, advocate for the customer, and contribute to a team culture built on trust and mutual respect. As a data engineer here, you’ll have a significant impact on what we do and how we do it. We build and support data pipelines and reporting solutions for a range of business needs, including end user consumption. 
We are focusing heavily on greenfield development: you will build new analytics and data services including data pipelines, data warehousing solutions, ETL processes, end user reports, as well as the deployment mechanisms and the platform's environments. As a Data Engineer you will have the unique opportunity to influence major decisions from how our data platform's cloud-based infrastructure is architected, to what modern data-engineering frameworks and tooling are used, to how our CI/CD will operate. While building out the new platform, you will also maintain a light-weight, legacy solution, which will be deprecated and replaced. We believe that great developers can always learn new tools. Above any specific tech stack, we’re looking for versatile developers — those who know when to think big and when to act small, and who are comfortable in both greenfield and refactoring projects. We believe the best products are created by teams who represent a broad range of ideas and perspectives. We value employees with diverse backgrounds and experiences. You... Have 3+ years of Data Engineering experience or equivalent. Architecture-level experience conceptualizing and building infrastructure that processes, stores, and vends large data sets for analytics purposes at an enterprise scale in the cloud. Advanced SQL skills including database design best practices. CTEs, functions, stored procedures don't intimidate you. Have hands-on experience with ETL-as-code frameworks such as Airflow, Luigi, or Prefect or experience building ETL processes/services from scratch with generic languages and libraries. Experience developing and supporting reports with BI and reporting tools such as Tableau, Domo, Looker, or Sisense. Understand the software development lifecycle and are able to work alongside development teams. You’re excited to collaborate closely with Application Engineers, Product Managers, and Customer Success and use real-time feedback to solve problems iteratively. 
Are ready to help reformulate existing frameworks to improve and expand current offerings. Aren’t afraid to get your hands dirty on devops work. We'd be delighted if you bring experience with: Shipping Software as a Service (SaaS) solutions One or more of these technologies: Java, Python, Node.js, AWS One or more relational databases: MySQL, Oracle or Postgres One or more analytics databases: Redshift, Snowflake, or Teradata. Report and dashboard requirements analysis and design Front end development with React or Angular The Higher Education community Other things you should know: This team is (and has always been) fully remote. You’d be expected to have a suitable home working environment or alternative. We try to get together in person as a team or company 2-4 times a year. Benefits Top-of-the-line equipment of your choice to get your job done A truly exceptional benefits package including full premium coverage for employee and dependent medical and dental care 401(k) matching Paid Maternity/Parental leave All the paid time off you need (just work it out with your manager) Allowance for continuing education, conferences, and/or training Space to work on self-driven projects during “hack time” Employee resource groups and community events"," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-insight-global-3487289353?refId=%2FAz5JL6YjUeCQVoPwZaIvQ%3D%3D&trackingId=RWxmPeL0%2FfX4Ofi%2FZiQ9ug%3D%3D&position=23&pageNum=5&trk=public_jobs_jserp-result_search-card," Insight Global ",https://www.linkedin.com/company/insight-global?trk=public_jobs_topcard-org-name," Chicago, IL "," 3 weeks ago "," Over 200 applicants ","**This is a direct hire/permanent position and is 100% REMOTE work. 
Position: Data Engineer / Data Modeler Location: REMOTE - need to work CST Start Date: ASAP Musts: 5+ years' experience in Data warehousing and various ELT or ETL tools Experience in Databricks, Azure, ADF (Azure Data Factory) Experience with creating the data models and sets themselves Extensive experience with PySpark Experience with coding for data modeling A minimum of 5 years' experience building database tables and models. Ability to write complex SQL for DDL and DML operations fluently Strong understanding of enterprise integration patterns (EIP) and data warehouse modeling. Experience with development and data warehouse requirements gathering, analysis, and design. Plusses: Hands-on Python experience Experience with Designing and Building Data models – star schema, snowflake Day to Day: The Data Warehouse (DW) Data Engineer is responsible for developing batch integrations to our client's standards. The Data Engineer is expected to have deep knowledge of the EDW, data modeling, integration patterns (ETL, ELT, etc.) and may work with one or a range of tools depending on project deliverables and team resourcing. The Data Engineer will also be expected to understand traditional relational database systems and be able to assist in administering these systems. Candidates must be interested in working in a collaborative environment and possess great communication skills, experience working directly with all levels of a business, and the ability to work both in a team environment and individually. Responsibilities range from batch application/client integration, aggregating data from multiple sources into a data warehouse, automating integration solution generation using reusable patterns/scripting, prototyping integration solutions, and security. Develops batch integration solutions for our client which include traditional DW workloads and nightly large extracts that are scheduled. 
Design and Build Data models – star schema, snowflake Create ADF pipelines to bring new data from various sources Create Databricks notebooks for data transformation Documents all solutions as needed using the client's standard documentation. Plans, reviews, and performs the implementation of database changes for integrations/DW work. Maintain integration documentation and audit tools, including developing/updating the integration dashboard. Work with the BI team and PO to build required tables and transform data to load into Snowflake Provides support for databases/database servers as a member of the Data Management team. Works with project management and business analysis team to provide estimates and ensure documentation of all requirements. Provide logical layers (database views) for end-user access to data in database systems. Partners with functional support and help desk teams to ensure communication, collaboration and compliance with support process standards at ABC. Performs data management tasks as needed."," Mid-Senior level "," Full-time "," Information Technology "," Staffing and Recruiting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-omniskope-inc-3520444490?refId=uz94oW0FbASgTIEHEb9nNQ%3D%3D&trackingId=mwh1QS13vesJNEvrv9hQ6Q%3D%3D&position=1&pageNum=6&trk=public_jobs_jserp-result_search-card," Omniskope, Inc ",https://www.linkedin.com/company/omniskopeinc?trk=public_jobs_topcard-org-name," St Louis, MO "," 1 week ago "," Over 200 applicants ","About the company At Omniskope, our mission is to accelerate digital transformation, keeping people at the forefront of what we do and doing it well by doing good. Our people are our employees, our customers, and the communities around us. We are a growing company headquartered in Saint Louis, Missouri, USA, with teams across the USA. We’ve partnered with industry leaders Salesforce and Oracle to enhance our customer business experiences. 
Our expert services have made us a trusted technology partner to industry leaders across the United States. Our Values Customer Success & Excellence. Own & Inspire Never stop learning Collaborate & Communicate Empathic & People-First Be a Multiplier Foster Equality Have fun Learn more about Omniskope at https://www.omniskope.com/ Job Description: We are seeking a skilled Data Engineer with experience in Oracle GoldenGate to join our team. The ideal candidate will be responsible for designing and implementing data integration and replication solutions using Oracle GoldenGate to ensure the smooth flow of data across different systems and platforms. Must have extensive experience in data pipelines (ELT/ETL), data replication, data warehousing and dimensional modeling, and curation of data sets for Data Scientists and Business Intelligence users. Responsibilities: Establishing Data Replications Using Oracle GoldenGate and Qlik Data Integration (must have) Building scalable Cloud data solutions using MPP Data Warehouses (Snowflake, Redshift, or Azure Data Warehouse/Synapse), data storage (S3, Azure Blob Storage, Delta Lakes, or AWS Lake Formation), and analytics platforms (i.e. Spark, Databricks, etc.) Design and implement data integration and replication solutions using Oracle GoldenGate. Develop data pipelines to ensure data flow across different systems and platforms. Load historical data to a data warehouse Scripting in Python or Shell Workflow Orchestrations using Apache Airflow, AWS Step Functions, etc. Familiarity with automated promotions, SCM tools, and CICD best practices Modeling and curation of data for visualization and predictive modeling users Design and implementation of AWS and/or Azure services such as Lambda, SNS, etc. Creating data integrations with scripting languages such as Python Writing complex SQL queries, stored procedures, etc. Requirements: Minimum of 8 years of experience as a data engineer. 
3+ years' experience using Oracle GoldenGate and Qlik Data Integration Strong experience in building scalable Cloud data solutions using MPP Data Warehouses, data storage, and analytics platforms. Need to have solid experience with complex SQL queries and scripting 5+ years’ experience building data pipelines via Python, Spark, or GUI-based tools 5+ years’ experience loading historical data to data warehouses Experience with cloud-based data platforms such as AWS or Azure. 5+ years developing and deploying scalable enterprise data solutions (Enterprise Data Warehouses, Data Marts, ETL/ELT workloads, etc.) Knowledge of programming languages such as Java, Python, or Scala. Excellent written and oral communication skills Visa Sponsorship is available for the right candidate if required. Omniskope Inc is an Equal Employment Opportunity employer. This organization participates in E-Verify and conducts pre-employment criminal history background checks. Omniskope Inc is committed to employing a diverse workforce."," Mid-Senior level "," Contract "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Sr Data Engineer,https://www.linkedin.com/jobs/view/sr-data-engineer-at-nike-3527755764?refId=uz94oW0FbASgTIEHEb9nNQ%3D%3D&trackingId=3QTOJWtsujgnF4Uypdsu3A%3D%3D&position=2&pageNum=6&trk=public_jobs_jserp-result_search-card," Nike ",https://www.linkedin.com/company/nike?trk=public_jobs_topcard-org-name," Beaverton, OR "," 3 hours ago "," Be among the first 25 applicants ","Work options: Remote Must work PST Hours Title: Sr Data Engineer Location: Remote, US Duration: 3 Month Contract Responsibilities Design and build reusable components, frameworks and libraries at scale to support analytics products Design and implement product features in collaboration with business and Technology stakeholders Anticipate, identify and solve issues concerning data management to improve data quality Clean, prepare and optimize data at scale for ingestion 
and consumption Drive the implementation of new data management projects and restructuring of the current data architecture Implement complex automated workflows and routines using workflow scheduling tools Build continuous integration, test-driven development and production deployment frameworks Drive collaborative reviews of design, code, test plans and dataset implementation performed by other data engineers in support of maintaining data engineering standards Analyze and profile data for designing scalable solutions Troubleshoot complex data issues and perform root cause analysis to proactively resolve product and operational issues Mentor and develop other data engineers in adopting best practices Skills: Experience with the following: SQL (Snowflake), PySpark, AWS"," Mid-Senior level "," Contract "," Information Technology "," Retail " Data Engineer,United States,"Data Analyst / Engineer (Hybrid, Arlington VA)",https://www.linkedin.com/jobs/view/data-analyst-engineer-hybrid-arlington-va-at-heuristic-solutions-3509305215?refId=uz94oW0FbASgTIEHEb9nNQ%3D%3D&trackingId=aekWGDfYBa4yaX0epqWw5w%3D%3D&position=3&pageNum=6&trk=public_jobs_jserp-result_search-card," Heuristic Solutions ",https://www.linkedin.com/company/heuristic-solutions?trk=public_jobs_topcard-org-name," Arlington, VA "," 2 weeks ago "," Be among the first 25 applicants ","Do you speak data? If so, we would love to talk! Bring your data skills and join a growing technology team and an awesome customer community. Are you... extremely proficient at moving, manipulating, and managing data in relational databases? motivated by delighting customers and representing a team? able to plan your day, track your work, and keep your promises? Heuristic Solutions is looking for an extraordinary person to join our Implementation Team as a LearningBuilder Technical Analyst. Technical Analysts like to work with customers on requirements and wow them with the data. 
As an Analyst, you will use professional tools to manipulate data and achieve shared understanding. LearningBuilder is a low-code credentialing management system that allows strong Analysts to go from being the person who “gets it” to becoming the person that “gets it and gets it done.” LearningBuilder customers include small certification and licensure boards as well as large and complex organizations and the federal government. If you love working with customers and data to push an implementation to the finish line, this job is for you. What You’ll Do On a given day, a LearningBuilder Technical Analyst will complete any of the following tasks: Evaluate client requirements for data migration and customer ad hoc projects; Write SQL scripts, queries, and stored procedures to extract and import data; Translate, merge, scrub and map data into conceptual, logical and physical data models; Validate data migration and mappings; Facilitate client reviews and update routines as needed; Communicate project timelines internally and externally; Develop complex reports using SQL, Microsoft Office and other utilities or languages as necessary; Perform ad hoc and structured data analysis; Identify data issues proactively and flag them to respective stakeholders; Work with internal teams to ensure project continuity; Leave breadcrumbs of your work using clear documentation; Follow best practices to preserve the integrity, quality, and security of data at all times; and Use data to answer stakeholder questions as required. What We’ll Accomplish Together We are building a credentialing management system that is changing the way people think about certification, licensure, accreditation, and continuing competence. We are doing more than orchestrating a complex process. We give programs options they hadn’t thought of and ways to relate to their community to build relationships. 
People choose LearningBuilder because we help them build programs that show what a credentialing program can be. To be successful, we need to build a great product, a great team, and a great community of customers. You will be a LearningBuilder implementer and a LearningBuilder customer. You will influence the direction of the product because you will know your clients' needs and you will know how best to meet them. You will provide clear guidance and requirements to help the Product team build the right product. We will build a great Team by surrounding you with people who share your values and are harnessed to a common cause. You will bring your vision, skill, and drive to be part of something greater than yourself. You will do your “job” and you’ll always be thinking about how we could do something better. You’ll look around the field, find empty space, and go there to have an impact. We will build a great community of customers together as you become a valued resource for your clients. Customers will refer their industry colleagues to you because of how transformative you have been for them. Customers will know you look out for their best interest and will provide unmatched guidance. Not exactly what you are looking for? Do you like what you see but you aren't quite sure if you meet all the technical requirements? Let us know! We are also looking for superstars to join our Support and Professional Services Teams. If you have a knack for technical things but don't want data to be your sole focus, consider applying as a Support Specialist or Analyst. These are good roles for people that are smart and balance customer and technical focus but don't have enough technical experience to lead a project. On the Professional Services Team, we are also hiring Implementation Analysts and Consultants. Analysts get to the fine details of what needs to get done and manage projects from start to finish. 
Consultants take the ""unsolved"" problems and turn them into masterpieces. Let us know what inspires you! Requirements Key indicators of success in this role are attention to detail, structured problem solving, a documentation-first mindset, and commitment to continuous improvement. You will have proven teamwork and communication skills, passion for delighting the customer, and an awareness of when and how to ask for help when you need it. Who You Are Good Technical Analysts share some common characteristics: You learn quickly. You take in information like a sponge. You get things before other people. You grasp what people are saying even if they aren’t saying it quite right. You internalize information quickly so you can reflect it back to your collaborators efficiently. You’re the one who sees the patterns and can help others see them, too. You hunger for the “why.” You know that the more you understand the reasons behind decisions, the better you can help. You help others make decisions by helping them grapple with inconsistencies and details they hadn’t thought of. You love data. Data speaks to you and tells a story. You read the story and adjust the ending as needed to get the desired result. You are supremely well organized. You know what you have to accomplish and you give yourself the space to do things right. You know how to negotiate a new request with a smile so it gets done when it needs to get done. You rarely work late or on weekends. You’ve set realistic expectations for when things will get done and given yourself buffer for the unexpected. People count on you to keep your promises. You love working with people. You build relationships with people to get things done. You aren’t afraid to ask questions of senior people when you get stuck. You love helping more junior people think through how to do better. Clients know you will find a way to help. You know how to run a meeting that makes good use of everyone’s time. You are a lifelong learner. 
You have sources and sites that guide you to be a better professional. You love to read and are always looking for how others are doing things better. Benefits Parking, 401k contribution, health insurance benefits, paid time off, office dog."," Not Applicable "," Full-time "," Engineering "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-major-league-soccer-3459623965?refId=uz94oW0FbASgTIEHEb9nNQ%3D%3D&trackingId=2NuI25ffEGQe67koqlYlIQ%3D%3D&position=4&pageNum=6&trk=public_jobs_jserp-result_search-card," Major League Soccer ",https://www.linkedin.com/company/major-league-soccer?trk=public_jobs_topcard-org-name," New York, NY "," 13 hours ago "," Over 200 applicants ","Overview Our next generation data platform will enable fans to have the best experience with Major League Soccer through personalized content and offerings powered by sophisticated analytics. This position will contribute to developing the future of our data platform at MLS by using powerful cloud platform capabilities. This role is responsible for building scalable data solutions that cut across the data engineering, architecture, and software development fields. You will be accountable for the design and implementation of cloud-native solutions for data ingestion, processing and compute at scale. Responsibilities Design, implement, document and automate scalable production-grade end-to-end data pipelines including API and ingestion, transformation, processing, monitoring and analytics capabilities while adhering to best practices in software development. Build data-intensive application solutions on top of cloud platforms such as AWS, using innovative solutions including distributed compute, lakehouse, real-time streaming while implementing standard methodologies for data modeling and engineering. 
Work with the infrastructure engineering team to set up infrastructure required for efficient extraction, transformation, and loading of data from a wide variety of data sources using AWS technologies. Additional responsibilities as assigned. Qualifications Bachelor’s Degree in computer science or related field required. 8+ years of related experience with a track record of shipping production software. Hands-on experience building and delivering data solutions on the AWS platform (AWS certification is a plus) Proven CS fundamentals with experience across a range of fields, with one or more areas of deep knowledge and experience in a sophisticated programming language. Working experience with distributed processing systems including Apache Spark, RDBMS, NoSQL DBs and object storage systems. Hands-on experience with at least one of the following in each category: Infrastructure as code: AWS CDK, Terraform Orchestration and Transformation: AWS Step Functions, AWS Glue, Airflow, Dagster, Prefect, dbt Streaming: AWS MSK, Kafka Container: AWS ECS, EKS, Fargate, K8s Observability: AWS CloudWatch, Grafana, Prometheus, ELK stack Deep understanding of best software practices and application of them in data engineering and DevOps. Familiarity with data science and machine learning workflows and frameworks such as AWS SageMaker. Work independently and collaborate with multi-functional teams to complete projects. Lead integration of technical components with other teams as needed. 
High-level of commitment to a quality work product and organizational ethics, integrity and compliance Ability to work effectively in a fast paced, team environment Strong interpersonal skills and the ability to effectively communicate, both verbally and in writing Demonstrated decision making and problem-solving skills High attention to detail with the ability to multi-task and meet deadlines with minimal supervision Proficiency in Word, Excel, PowerPoint and Outlook Total Rewards Starting Base Salary: $115,000 – $165,000. MLS/SUM base salaries are contingent upon several factors including individual qualifications, market financials, and operational business needs. We are committed to providing a Total Rewards package that attracts, supports, engages, and retains talent through the following: Benefits – comprehensive and competitive medical, dental, and vision benefits, as well as a suite of programs to promote well-being including a $500 Wellness Reimbursement. A generous PTO offering, and hybrid Office/Remote Work Schedule are also offered to promote Work-Life balance. Career & Professional Development – on the job training, feedback, and on-going educational opportunities to continue your personal and professional development. Employee Engagement – office perks, discounts and employee events that go “beyond the traditional paycheck” to make you feel a part of our team and inspire you to elevate the Game! 
We are an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law."," Entry level "," Full-time "," Information Technology "," Spectator Sports " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-indotronix-avani-group-3526739461?refId=uz94oW0FbASgTIEHEb9nNQ%3D%3D&trackingId=PZdyjqwN3ua2DZT%2BYKpkZg%3D%3D&position=5&pageNum=6&trk=public_jobs_jserp-result_search-card," Major League Soccer ",https://www.linkedin.com/company/major-league-soccer?trk=public_jobs_topcard-org-name," New York, NY "," 13 hours ago "," Over 200 applicants ","Overview Our next generation data platform will enable fans to have the best experience with Major League Soccer through personalized content and offerings powered by sophisticated analytics. This position will contribute to developing the future of our data platform at MLS by using powerful cloud platform capabilities. This role is responsible for building scalable data solutions that cut across the data engineering, architecture, and software development fields. You will be accountable for the design and implementation of cloud-native solutions for data ingestion, processing and compute at scale. Responsibilities Design, implement, document and automate scalable production-grade end-to-end data pipelines including API and ingestion, transformation, processing, monitoring and analytics capabilities while adhering to best practices in software development. Build data-intensive application solutions on top of cloud platforms such as AWS, using innovative solutions including distributed compute, lakehouse, real-time streaming while implementing standard methodologies for data modeling and engineering. 
Work with the infrastructure engineering team to set up infrastructure required for efficient extraction, transformation, and loading of data from a wide variety of data sources using AWS technologies. Additional responsibilities as assigned. Qualifications Bachelor’s Degree in computer science or related field required. 8+ years of related experience with a track record of shipping production software. Hands-on experience building and delivering data solutions on the AWS platform (AWS certification is a plus) Proven CS fundamentals with experience across a range of fields, with one or more areas of deep knowledge and experience in a sophisticated programming language. Working experience with distributed processing systems including Apache Spark, RDBMS, NoSQL DBs and object storage systems. Hands-on experience with at least one of the following in each category: Infrastructure as code: AWS CDK, Terraform Orchestration and Transformation: AWS Step Functions, AWS Glue, Airflow, Dagster, Prefect, dbt Streaming: AWS MSK, Kafka Container: AWS ECS, EKS, Fargate, K8s Observability: AWS CloudWatch, Grafana, Prometheus, ELK stack Deep understanding of best software practices and application of them in data engineering and DevOps. Familiarity with data science and machine learning workflows and frameworks such as AWS SageMaker. Work independently and collaborate with multi-functional teams to complete projects. Lead integration of technical components with other teams as needed. 
High level of commitment to a quality work product and organizational ethics, integrity and compliance Ability to work effectively in a fast-paced team environment Strong interpersonal skills and the ability to effectively communicate, both verbally and in writing Demonstrated decision making and problem-solving skills High attention to detail with the ability to multi-task and meet deadlines with minimal supervision Proficiency in Word, Excel, PowerPoint and Outlook Total Rewards Starting Base Salary: $115,000 – $165,000. MLS/SUM base salaries are contingent upon several factors including individual qualifications, market financials, and operational business needs. We are committed to providing a Total Rewards package that attracts, supports, engages, and retains talent through the following: Benefits – comprehensive and competitive medical, dental, and vision benefits, as well as a suite of programs to promote well-being including a $500 Wellness Reimbursement. A generous PTO offering, and hybrid Office/Remote Work Schedule are also offered to promote Work-Life balance. Career & Professional Development – on the job training, feedback, and on-going educational opportunities to continue your personal and professional development. Employee Engagement – office perks, discounts and employee events that go “beyond the traditional paycheck” to make you feel a part of our team and inspire you to elevate the Game! We are an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law. 
"," Entry level "," Full-time "," Information Technology "," Spectator Sports " Data Engineer,United States,Data Engineer ( US Citizens),https://www.linkedin.com/jobs/view/data-engineer-us-citizens-at-rei-systems-3492911087?refId=uz94oW0FbASgTIEHEb9nNQ%3D%3D&trackingId=bqWpi9njQicjbilGqsPEXA%3D%3D&position=6&pageNum=6&trk=public_jobs_jserp-result_search-card," REI Systems ",https://www.linkedin.com/company/rei-systems?trk=public_jobs_topcard-org-name," Sterling, VA "," 3 weeks ago "," 70 applicants ","REI Systems provides reliable, effective, and innovative technology solutions that advance federal, state, local, and nonprofit missions. Our technologists and consultants are passionate about solving complex challenges that impact millions of lives. We take a Mindful Modernization approach in delivering our application modernization, grants management systems, government data analytics, and advisory services. Mindful Modernization is the REI Way of delivering mission impact by aligning our government customers’ strategic objectives to measurable outcomes through people, processes, and technology. Learn more at REIsystems.com. Employees voted REI Systems a Washington Post Top Workplace in 2015, 2016, 2018, 2020, 2021 and 2022! Responsibilities As a senior data engineer, you will/may: Monitor and troubleshoot operational or data issues in the data pipelines. Develop code based automated data pipelines able to process millions of data points. Improve database and data warehouse performance by tuning inefficient queries. Work collaboratively with Business Analysts, Data Scientists, and other internal partners to identify opportunities/problems. Provide assistance to the team with troubleshooting, researching the root cause, and thoroughly resolving defects in the event of a problem. Qualifications Required Qualifications: Expertise in Python. Experienced in Data Pipeline development and Data Cleansing. Can articulate the basic differences between datatypes (e.g. 
JSON/NoSQL, relational). Understand the basics of designing and implementing a data schema (e.g., normalization, relational model vs. dimensional model). 5 yr. experience with data mining and data transformation. 5 yr. experience with databases and/or data warehouses. 5 yr. experience with SQL. 5 yr. experience with Python, Spark (PySpark), Databricks, AWS, Azure. Preferred Qualifications: Experience building code-based data pipelines in production able to process big datasets. Knowledge of writing and optimizing SQL queries with large-scale, complex datasets. Industry certifications, including Databricks and AWS. Experience with Spark MLlib and applying existing machine learning algorithms against data lakehouses to drive insight and predictive capabilities. Experience with data mining and data transformation. Experience with databases and/or data warehouses. Experience building data pipelines or automated ETL processes. Experience with Tableau. Education: Bachelor’s degree in computer science, data analytics, business intelligence, economics, statistics, or mathematics. Clearance: US Citizen able to obtain Public Trust. Certification(s): AWS & Oracle certification is preferred. Location/Remote: Hybrid - Sterling, VA - Washington, DC. Covid Policy Disclosure: Should the essential functions of this position require that the employee performing this role work on-site at REI’s Sterling location, the following requirements will apply: the individual holding this position must be fully vaccinated, as defined in CDC guidance, as a condition of continued employment. REI will consider requests to be excused from this policy whenever necessary to comply with legal requirements and will consider any requests for reasonable accommodations due to a disability, religion, or other exemptions on an individual basis in accordance with applicable legal requirements. 
Employees and applicants requesting accommodations should request the accommodation in writing and should explain in detail the reasons why they are seeking an accommodation. REI will request additional information or documentation it deems necessary to inform its decision on an employee’s or applicant’s accommodation request. REI Systems is an Equal Opportunity Employer (Minority/Female/Disability/Vet)"," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer ( US Citizens),https://www.linkedin.com/jobs/view/data-engineer-at-planet-technology-3512699609?refId=uz94oW0FbASgTIEHEb9nNQ%3D%3D&trackingId=gHXe0oFl0Lr0Vy%2Bj6t1xSw%3D%3D&position=7&pageNum=6&trk=public_jobs_jserp-result_search-card," REI Systems ",https://www.linkedin.com/company/rei-systems?trk=public_jobs_topcard-org-name," Sterling, VA "," 3 weeks ago "," 70 applicants "," REI Systems provides reliable, effective, and innovative technology solutions that advance federal, state, local, and nonprofit missions. Our technologists and consultants are passionate about solving complex challenges that impact millions of lives. We take a Mindful Modernization approach in delivering our application modernization, grants management systems, government data analytics, and advisory services. Mindful Modernization is the REI Way of delivering mission impact by aligning our government customers’ strategic objectives to measurable outcomes through people, processes, and technology. Learn more at REIsystems.com. Employees voted REI Systems a Washington Post Top Workplace in 2015, 2016, 2018, 2020, 2021 and 2022! 
Responsibilities As a senior data engineer, you will/may: Monitor and troubleshoot operational or data issues in the data pipelines. Develop code based automated data pipelines able to process millions of data points. Improve database and data warehouse performance by tuning inefficient queries. Work collaboratively with Business Analysts, Data Scientists, and other internal partners to identify opportunities/problems. Provide assistance to the team with troubleshooting, researching the root cause, and thoroughly resolving defects in the event of a problem. Qualifications Required Qualifications: Expertise in Python. Experienced in Data Pipeline development and Data Cleansing. Can articulate the basic differences between datatypes (e.g. JSON/NoSQL, relational). Understand the basics of designing and implementing a data schema (e.g., normalization, relational model vs dimensional model). 5 yr. experience with data mining and data transformation. 5 yr. experience with database and/or data warehouse. 5 yr. experience with SQL. 5 yr. experience with Python, Spark (PySpark), Databricks, AWS, Azure Preferred Qualifications: Experience building code based data pipelines in production able to process big datasets. Knowledge of writing and optimizing SQL queries with large-scale, complex datasets. Industry certifications including Databricks and AWS Experience with Spark MLlib and applying existing machine learning algorithms against data lakehouses to drive insight and predictive capabilities Experience with data mining and data transformation. Experience with database and/or data warehouse Experience building data pipelines or automated ETL processes. Experience with Tableau Education: Bachelor’s degree in computer science, data analytics, business intelligence, economics, statistics, or mathematics Clearance: US Citizen able to obtain Public Trust Certification(s): AWS & Oracle certification is preferred. 
Location/Remote: Hybrid- Sterling, VA - Washington, DC Covid Policy Disclosure: Should the essential functions of this position require that the employee performing this role work on-site at REI’s Sterling location the following requirements will apply: the individual holding this position must be fully vaccinated, as defined in CDC guidance, as a condition of continued employment. REI will consider requests to be excused from this policy whenever necessary to comply with legal requirements and will consider any requests for reasonable accommodations due to a disability, religion, or other exemptions on an individual basis in accordance with applicable legal requirements. Employees and applicants requesting accommodations should request the accommodation in writing and should explain in detail the reasons why they are seeking an accommodation. REI will request additional information or documentation it deems necessary to inform its decision on an employee’s or applicant’s accommodation request. REI Systems is an Equal Opportunity Employer (Minority/Female/Disability/Vet) "," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-24-seven-talent-3500267637?refId=uz94oW0FbASgTIEHEb9nNQ%3D%3D&trackingId=MQQ4SGknmzcrNDO2p%2FIsFA%3D%3D&position=8&pageNum=6&trk=public_jobs_jserp-result_search-card," 24 Seven Talent ",https://www.linkedin.com/company/24seventalent?trk=public_jobs_topcard-org-name," United States "," 2 weeks ago "," Over 200 applicants ","Title: Data Engineer Pay 60-70/hr Location: Remote USA Duration: 6 months contract As a Staff Data Engineer on Foundation Analytics, you will be responsible for building many key parts of the foundation data services platform. 
You'll work in a collaborative Agile environment using the latest in engineering best practices with involvement in all aspects of the software development lifecycle. You will be responsible for ensuring the team makes sound design & configuration decisions to develop curated data products, applies standard architectural practices, and supports the Data Product Managers in evolving core data products. What you get to do: Publishing well-written and tested code to production daily using technologies such as Linux, Docker, Kubernetes, AWS, Kafka and Python Drive data architecture and integration design and development discussions with engineering and other teams Investigate production issues and fine-tune our data pipelines Build a platform that will be the foundation for our customer facing reporting features, our machine learning initiatives, and internal product analytics Perform rapid prototyping Participate in designing, developing key features and functionality of our data platform Continually improve the data platform development for high efficiency, throughput and quality of data Collaborate with team members on researching & brainstorming different solutions for technical challenges facing the team Develop standard methodologies and mentor other engineers on the team to help make technical decisions on our projects and roadmap. 
Skills: 7+ years of software development/data engineering experience 4+ years of hands-on experience of building scalable data platforms and/or reliable data pipelines Proficiency in at least one of the following programming languages: Java, Python, Scala Experience with AWS or related cloud technologies Experience in developing and operating high volume, high availability environments Working understanding of Kubernetes' infrastructure and security best practices Ability to work effectively in a dynamic, occasionally interrupt driven environment that includes geographically spread teams and customers BS degree in Engineering, CS, or equivalent Keywords: Education: Preferred Qualifications Experience writing ETL jobs to help address various data engineering challenges Strong understanding of Build tools and Deployment tools Familiarity with Kafka, Flink, Spark frameworks with validated understanding of at least one job scheduling tool: Airflow, Celery, AWS Step functions Our data pipelines are written in Java and Python based software stacks We utilize many open source technologies, including: Spark, Flink, Hudi, Airflow Our software runs on AWS services like EMR and in Kubernetes, and integrates with AWS services S3, Athena, and Glue for data access."," Associate "," Contract "," Information Technology "," Staffing and Recruiting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-fetch-3506658705?refId=uz94oW0FbASgTIEHEb9nNQ%3D%3D&trackingId=zVaxUEwJhdLf0mgBul97mg%3D%3D&position=9&pageNum=6&trk=public_jobs_jserp-result_search-card," 24 Seven Talent ",https://www.linkedin.com/company/24seventalent?trk=public_jobs_topcard-org-name," United States "," 2 weeks ago "," Over 200 applicants "," Title: Data Engineer Pay 60-70/hr Location: Remote USA Duration: 6 months contract As a Staff Data Engineer on Foundation Analytics, you will be responsible for building many key parts of the foundation data services platform. 
You'll work in a collaborative Agile environment using the latest in engineering best practices with involvement in all aspects of the software development lifecycle. You will be responsible for ensuring the team makes sound design & configuration decisions to develop curated data products, applies standard architectural practices, and supports the Data Product Managers in evolving core data products. What you get to do. Publishing well written and tested code to production daily using technologies such as Linux, Docker, Kubernetes, AWS, Kafka and Python Drive data architecture and integration design and development discussions with engineering and other teams Investigate production issues and fine-tune our data pipelines Build a platform that will be the foundation for our customer facing reporting features, our machine learning initiatives, and internal product analytics Perform rapid prototyping Participate in designing, developing key features and functionality of our data platform Continually improve the data platform development for high efficiency, throughput and quality of data Collaborate with team members with researching & brainstorming different solutions for technical challenges facing the team Develop standard methodologies and mentor other engineers on the team to help make technical decisions on our projects and roadmap. Skills: 7+ years of software development/data engineering experience 4+ years of hands-on experience of building scalable data platforms and/or reliable data pipelines Proficiency in at least one of the following programming languages: Java, Python, Scala Experience with AWS or related cloud technologies Experience in developing and operating high volume, high availability environments Working understanding of Kubernetes' infrastructure and security best practices Ability to work effectively in a dynamic, occasionally interrupt driven environment that includes geographically spread teams and customers BS degree in Engineering, CS, or 
equivalent Keywords: Education: Preferred Qualifications Experience writing ETL jobs to help address various data engineering challenges Strong understanding of Build tools and Deployment tools Familiarity with Kafka, Flink, Spark frameworks with validated understanding of at least one job scheduling tool: Airflow, Celery, AWS Step functions Our data pipelines are written in Java and Python based software stacks We utilize many open source technologies, including: Spark, Flink, Hudi, Airflow Our software runs on AWS services like EMR and in Kubernetes, and integrates with AWS services S3, Athena, and Glue for data access. "," Associate "," Contract "," Information Technology "," Staffing and Recruiting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-simplex-3480660437?refId=uz94oW0FbASgTIEHEb9nNQ%3D%3D&trackingId=St7dz1NdZHU2ixrah4%2BFUg%3D%3D&position=10&pageNum=6&trk=public_jobs_jserp-result_search-card," Simplex. ",https://www.linkedin.com/company/simplex-?trk=public_jobs_topcard-org-name," United States "," 6 hours ago "," 190 applicants ","This role is 100% remote and based anywhere in the USA. Our client is a well-funded B2B SaaS Construction Tech startup that is looking to scale from 1MM to 3-5MM ARR over the next year. They are hiring a Data Engineer to join their team immediately! 
Responsibilities Building data pipelines Reconciling missed data Acquire datasets that align with business needs Develop algorithms to transform data into useful, actionable information Build, test, and maintain database pipeline architectures Collaborate with management to understand company objectives Create new data validation methods and data analysis tools Ensure compliance with data governance and security policies Experience: 3+ years experience in Data Engineering SQL Programming Languages Comfortable speaking with customers Data Modeling Techniques ETL Data Storage Cloud Computing Benefits Competitive Salary + Equity Remote first work environment 1X/Quarter off-sites in locations like NY, SF, and Austin Unlimited PTO Fully covered medical, dental & vision Monthly Internet stipend WFH equipment reimbursement 401k Plan Life and AD&D Insurance Pet Insurance"," Mid-Senior level "," Full-time "," Accounting/Auditing and Finance "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-kids2-3510647972?refId=uz94oW0FbASgTIEHEb9nNQ%3D%3D&trackingId=Ehac%2FFYXaIyC48uNTZwftQ%3D%3D&position=11&pageNum=6&trk=public_jobs_jserp-result_search-card," Kids2 ",https://www.linkedin.com/company/kids-ii?trk=public_jobs_topcard-org-name," Atlanta, GA "," 1 week ago "," 151 applicants ","Role is onsite 3 days per week minimum and must be located in the Atlanta metro area: SUMMARY Kids2 is seeking a Data Engineer to assist our Enterprise Data Solutions team. The engineer will support current team objectives and projects, learning all aspects of the data solutions life cycle. The engineer collaborates with a variety of management levels on projects that contribute to the success of the team. This role uses discipline-specific knowledge, skills, and abilities to assist with various projects, presentations, and business improvement opportunities. 
PRIMARY RESPONSIBILITIES AND ESSENTIAL FUNCTIONS: Apply data modeling techniques to ensure development and implementation support efforts meet integration and performance expectations. Build processes supporting data transformation, data structures, metadata, dependency, and workload management. Work with the Enterprise Data Solutions team to serve as a subject-matter expert on various data engineering, data integration and data pipeline projects. Partner with the lead data engineer on database management and administration. Comfortable learning and using Power BI for analytics and reporting in the long run. QUALIFICATIONS & EXPERIENCE Experience working with relational databases and data warehouses such as Microsoft SQL Server. Familiar with cloud computing data management and architecture. Familiar with Dimensional Modeling or other Data Warehousing concepts. Experience in refining and automating regular processes, tracking issues, and documenting changes. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement. Experience supporting and working with cross-functional teams in a dynamic environment. Knowledge of Power BI or other data visualization platforms is a plus. Excellent organizational, analytical, and problem-solving skills. Demonstrates aptitude to learn new software and technical concepts. Interact with internal and external customers of all levels. EDUCATION & SKILLS Bachelor’s degree in computer science, analytics, engineering, math, statistics, information systems, data science or a related discipline. Working experience spanning at least two IT disciplines, including technical architecture, application development, database management, business analytics information systems, or operations. PHYSICAL DEMANDS While performing the duties of this job, the employee is regularly required to sit; have flexible use of hands; and talk or hear. 
The employee is occasionally required to stand, walk, and reach with hands and arms. The employee must occasionally lift and/or move up to 50 pounds. Specific vision abilities required by this job include close vision, and color vision. WORK ENVIRONMENT In our Buckhead office a min of 3 days per week. We offer competitive pay, flexible hours, and generous benefits. Plus, to keep things fun (because we are all kids at heart), we offer a host of team member activities and philanthropic efforts throughout the year and company-wide awards and recognition for a job well done! Check out our website at www.kids2.com and our social media pages on LinkedIn, Facebook and Instagram for more information and open positions in the career section."," Associate "," Full-time "," Analyst and Strategy/Planning "," Manufacturing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-securrency-3506579390?refId=uz94oW0FbASgTIEHEb9nNQ%3D%3D&trackingId=9Bl%2FIzfrdVFSJ7aU4qix9g%3D%3D&position=12&pageNum=6&trk=public_jobs_jserp-result_search-card," Securrency ",https://www.linkedin.com/company/securrency?trk=public_jobs_topcard-org-name," Raleigh, NC "," 2 weeks ago "," 37 applicants ","Securrency is a financial markets infrastructure technology company focused on enhancing capital formation and stimulating global liquidity. Securrency is driving change at the core of financial services via a patent-pending distributed identity and compliance framework and a state-of-the-art infrastructure designed to bridge legacy financial platforms to blockchain networks. One of the industry's most advanced regulatory technology providers, we have developed compliance tools that automate enforcement of the multi-jurisdictional regulatory policy. These tools provide transparency and consistency to strengthen investor confidence and provide regulators with increased oversight of the market activity. 
Securrency provides software-as-a-service (SaaS) and platform-as-a-service (PaaS) delivery models to offer blockchain-based financial services infrastructure to banks and other financial services providers. Our proprietary, patent-pending Compliance Aware Token™ technology provides multi-jurisdictional compliance and unprecedented convenience to financial services providers and market participants to facilitate the issuance, trading, and servicing of digital securities and other digital assets. Securrency's technology is blockchain-agnostic, and its compliance and policy-enforcement tools support ledger-to-ledger transactions across multiple blockchains. We have built a state-of-the-art blockchain-based financial service and compliance platform that will serve as the global rails along which all future value moves in a transparent and interoperable manner. We are well on our way to being a technology unicorn, but while we are growing rapidly, we still retain the spirit and camaraderie of a dynamic start-up. Job Purpose As a data engineer, your primary responsibility is to design, develop, maintain, and test data pipelines and infrastructure that enable efficient, secure, and scalable data processing and analysis. You will work closely with data scientists, analysts, and stakeholders to understand their data requirements and design and implement data solutions that meet those requirements. Additionally, you will develop and implement data architecture strategies and best practices, manage and maintain large and complex data sets and databases, and identify and resolve data pipeline and infrastructure issues. You will also stay up-to-date with emerging technologies and trends in data engineering and provide technical guidance and mentorship to junior data engineers and data analysts. 
Finally, you will participate in data governance initiatives, ensure compliance with relevant data regulations and standards, and collaborate with cross-functional teams to ensure alignment with broader organizational goals and initiatives. Responsibilities Work with large, complex data sets and high throughput data pipelines that meet business requirements. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources. Build data and analytics tools that utilize the data pipeline to provide actionable insights to operational efficiency and other key business performance metrics. Work with internal and external stakeholders to assist with data-related technical issues and support data infrastructure needs. Collaborate with data scientists and architects on several projects. Solve various complex problems. Requirements Previous experience as a data engineer or in a similar role 5+ years of Python development experience is necessary. Hands-on experience with database technologies (e.g. SQL and MongoDB) Technical expertise with distributed Spark or other distributed data processing technologies Experience with machine learning techniques Great numerical and analytical skills Ability to write reusable code components. Degree in Computer Science, IT, or similar field. 
Open-minded about new technologies and frameworks. Thorough business analysis skills. Would be a plus: understanding of blockchain system mechanisms. Benefits Amazing and accessible office locations in UAE and USA Competitive compensation package World-class benefits package Global company events Flexible working hours Employees may work remotely for a maximum of 40 days a year Eligible to work from alternative Securrency locations 65 days a year Exposure to industry thought leaders"," Not Applicable "," Full-time "," Engineering "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-nyc-hedge-fund-w-%245b-aum-at-averity-3500523522?refId=uz94oW0FbASgTIEHEb9nNQ%3D%3D&trackingId=hMIpg%2FHj%2BQspdjcKBeTSqQ%3D%3D&position=13&pageNum=6&trk=public_jobs_jserp-result_search-card," Securrency ",https://www.linkedin.com/company/securrency?trk=public_jobs_topcard-org-name," Raleigh, NC "," 2 weeks ago "," 37 applicants "," Securrency is a financial markets infrastructure technology company focused on enhancing capital formation and stimulating global liquidity. Securrency is driving change at the core of financial services via a patent-pending distributed identity and compliance framework and a state-of-the-art infrastructure designed to bridge legacy financial platforms to blockchain networks. One of the industry's most advanced regulatory technology providers, we have developed compliance tools that automate enforcement of the multi-jurisdictional regulatory policy. These tools provide transparency and consistency to strengthen investor confidence and provide regulators with increased oversight of the market activity. 
Securrency provides software-as-a-service (SaaS) and platform-as-a-service (PaaS) delivery models to offer blockchain-based financial services infrastructure to banks and other financial services providers. Our proprietary, patent-pending Compliance Aware Token™ technology provides multi-jurisdictional compliance and unprecedented convenience to financial services providers and market participants to facilitate the issuance, trading, and servicing of digital securities and other digital assets. Securrency's technology is blockchain-agnostic, and its compliance and policy-enforcement tools support ledger-to-ledger transactions across multiple blockchains. We have built a state-of-the-art blockchain-based financial service and compliance platform that will serve as the global rails along which all future value moves in a transparent and interoperable manner. Well, on its way to being a technology unicorn, but while we are growing rapidly, we still retain the spirit and camaraderie of a dynamic start-up. Job Purpose As a data engineer, your primary responsibility is to design, develop, maintain, and test data pipelines and infrastructure that enable efficient, secure, and scalable data processing and analysis. You will work closely with data scientists, analysts, and stakeholders to understand their data requirements and design and implement data solutions that meet those requirements. Additionally, you will develop and implement data architecture strategies and best practices, manage and maintain large and complex data sets and databases, and identify and resolve data pipeline and infrastructure issues. You will also stay up-to-date with emerging technologies and trends in data engineering and provide technical guidance and mentorship to junior data engineers and data analysts. 
Finally, you will participate in data governance initiatives, ensure compliance with relevant data regulations and standards, and collaborate with cross-functional teams to ensure alignment with broader organizational goals and initiatives. Responsibilities Work with large, complex data sets and high throughput data pipelines that meet business requirements. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources. Build data and analytics tools that utilize the data pipeline to provide actionable insights to operational efficiency and other key business performance metrics. Work with internal and external stakeholders to assist with data-related technical issues and support data infrastructure needs. Collaborate with data scientists and architects on several projects. Solve various complex problems. Requirements Previous experience as a data engineer or in a similar role 5+ years of Python development experience is necessary. Hands-on experience with database technologies (e.g. 
SQL and MongoDB) Technical expertise with distributed Spark or other distributed data processing technologies Experience with machine learning techniques Great numerical and analytical skills Ability to write reusable code components. Degree in Computer Science, IT, or similar field. Open-minded to the new technologies, frameworks Thorough business analysis skills Would be a plus: Understanding Blockchain system mechanism Benefits Amazing and accessible office locations in UAE and USA Competitive compensation package World-class benefits package Global company events Flexible working hours Employees may work remotely for a maximum of 40 days a year Eligible to work from alternative Securrency locations 65 days a year Exposure to industry thought leaders "," Not Applicable "," Full-time "," Engineering "," Technology, Information and Internet " Data Engineer,United States,"Data Engineer, Database Engineering",https://www.linkedin.com/jobs/view/data-engineer-database-engineering-at-experfy-3516892416?refId=uz94oW0FbASgTIEHEb9nNQ%3D%3D&trackingId=vp9p9i8IFRaWi8nV2mez3A%3D%3D&position=14&pageNum=6&trk=public_jobs_jserp-result_search-card," Experfy ",https://www.linkedin.com/company/experfy?trk=public_jobs_topcard-org-name," Riverside, CA "," 1 month ago "," Be among the first 25 applicants ","As a Data Engineer for our Data Platform Engineering team you will join skilled Scala/ Spark engineers and core database developers responsible for developing hosted cloud analytics infrastructure (Apache Spark-based), distributed SQL processing frameworks, proprietary data science platforms, and core database optimization. This team is responsible for building the automated, intelligent, and highly performant query planner and execution engines, RPC calls between data warehouse clusters, shared secondary cold storage, etc. 
This includes building new SQL features and customer-facing functionality, developing novel query optimization techniques for industry-leading performance, and building a database system that's highly parallel, efficient and fault-tolerant. This is a vital role reporting to exec leadership and senior engineering leadership. Requirements Responsibilities: Writing Scala code with tools like Apache Spark + Apache Arrow + Apache Kafka to build a hosted, multi-cluster data warehouse for Web3 Developing database optimizers, query planners, query and data routing mechanisms, cluster-to-cluster communication, and workload management techniques. Scaling up from proof of concept to “cluster scale” (and eventually hundreds of clusters with hundreds of terabytes each), in terms of both infrastructure/architecture and problem structure Codifying best practices for future reuse in the form of accessible, reusable patterns, templates, and code bases to facilitate metadata capturing and management Managing a team of software engineers writing new code to build a bigger, better, faster, more optimized HTAP database (using Apache Spark, Apache Arrow, Kafka, and a wealth of other open source data tools) Interacting with exec team and senior engineering leadership to define, prioritize, and ensure smooth deployments with other operational components Highly engaged with industry trends within analytics domain from a data acquisition, processing, engineering, and management perspective Understand data and analytics use cases across Web3 / blockchains Skills & Qualifications Bachelor’s degree in computer science or related technical field. Masters or PhD a plus. 
6+ years experience engineering software and data platforms / enterprise-scale data warehouses, preferably with knowledge of open source Apache stack (especially Apache Spark, Apache Arrow, Kafka, and others) 3+ years experience with Scala and Apache Spark (or Kafka) A track record of recruiting and leading technical teams in a demanding talent market Rock solid engineering fundamentals; query planning, optimizing and distributed data warehouse systems experience is preferred but not required Nice to have: Knowledge of blockchain indexing, web3 compute paradigms, Proofs and consensus mechanisms... is a strong plus but not required Experience with rapid development cycles in a web-based environment Strong scripting and test automation knowledge Nice to have: Passionate about Web3, blockchain, decentralization, and a base understanding of how data/analytics plays into this"," Not Applicable "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Principal Data Engineer,https://www.linkedin.com/jobs/view/principal-data-engineer-at-amwell-3509514313?refId=uz94oW0FbASgTIEHEb9nNQ%3D%3D&trackingId=H9kkAjQSYp8hKzis9%2FZBZQ%3D%3D&position=16&pageNum=6&trk=public_jobs_jserp-result_search-card," Amwell ",https://www.linkedin.com/company/amwellcorp?trk=public_jobs_topcard-org-name," Boston, MA "," 1 week ago "," 26 applicants ","Company Description Amwell is a leading telehealth platform in the United States and globally, connecting and enabling providers, insurers, patients, and innovators to deliver greater access to more affordable, higher quality care. Amwell believes that digital care delivery will transform healthcare. We offer a single, comprehensive platform to support all telehealth needs from urgent to acute and post-acute care, as well as chronic care management and healthy living.
With over a decade of experience, Amwell powers telehealth solutions for over 150 health systems comprised of 2,000 hospitals and 55 health plan partners with over 36,000 employers, covering over 80 million lives. Brief Overview: The Principal Data Engineer will play a key role in the Data Platform team within Engineering at an exciting time as we build out our new Converge platform and migrate from legacy solutions to cloud data solutions. They will assess data architecture maturity using industry-accepted principles and standards, identify gaps and drive standardization and adoption across the organization. They will identify Master Data Management needs in concert with the company-wide MDM initiative. The Principal Data Engineer will define and be a Subject Matter Expert in our data classification strategy and will ensure adherence across development squads. They will utilize their expertise to inform and guide others as we strive to capture, classify, secure and report on corporate data in our own product and other systems used across the business. 
Core Responsibilities: Evaluate and assess data and application integration framework  Identify master data management opportunities and needs  Define the target state and drive implementation of data catalog to enable effective data governance  Establish maps of business capability to data, services, users-roles, IT product model mapping  Develop and analyze various viewpoints, mappings to show relationships between data artifacts and business capability/attributes  Act as a liaison across Data, Engineering, Product, Hosting, Analytics and Delivery teams to drive standardization across data architecture principles  Analyze processes and provide guidance to streamline and optimize data enablement, operational, communication and training or literacy processes  Qualifications: 12+ years of hands-on experience within a data engineering organization 5+ years of demonstrated experience in data architecture, data management, data governance and analytics within growing, agile, global organizations. Healthcare experience is a plus. 5+ years of experience in multiple database technologies such as distributed processing big data platforms like AWS and GCP and tools like Spark, Kafka, Snowflake, Redshift, Athena, AWS Glue, Python/PySpark, etc.  5+ years of experience building architecture to transform legacy to modern data platforms (Oracle to cloud) Experience in BigQuery and Informatica IICS. Proficiency in data visualization tools (Tableau, Looker, etc.)  
Knowledge of agile software development process and familiarity with performance metric tools  Distinct customer focus and quality mindset  Strong analytical and problem-solving skills; ability to drive creative, efficient solutions  Ability to work at an abstract level and build consensus across multiple viewpoints  Ability to set priorities, guide your own learning and contribute to domain knowledge Excellent interpersonal, leadership, and communication skills Bachelor's degree in computer science or other technically focused degree; master's degree preferred  Additional Information Working at Amwell Amwell is changing how care is delivered through online and mobile technology. We strive to make the hard work of healthcare look easy. To make this a reality, we look for people with a fast-paced, mission-driven mentality. We're a culture that prides itself on quality, efficiency, smarts, initiative, creative thinking, and a strong work ethic. Our Core Values include One Team, Customer First, and Deliver Awesome. Customer First and Deliver Awesome are all about our product and services and how we strive to serve. As part of One Team, we operate the Amwell Cares program, which brings needed assistance to our communities, whether that be free healthcare for the underserved or for people affected by natural disasters, support for equality, honoring doctors and nurses, or annual Amwell-matched donations to food banks. Amwell aims to be a force for good for our employees, our clients, and our communities. Amwell cares deeply about and supports Diversity, Equity, and Inclusion. These initiatives are highlighted and reflected within our Three DE&I Pillars - our Workplace, our Workforce, and our community. Amwell is a ""virtual first"" workplace, which means you can work from anywhere, coming together physically for ideation, collaboration, and client meetings. 
We enable our employees with the tools, resources, and opportunities to do their jobs effectively wherever they are! Amwell has collaboration spaces in Boston, Tysons Corner, Portland, Woodland Hills, and Seattle. The typical base salary range for this position is $171,900 - $210,100. The actual salary offer will ultimately depend on multiple factors including, but not limited to, knowledge, skills, relevant education, experience, complexity or specialization of talent, and other objective factors. In addition to base salary, this role may be eligible for an annual bonus based on a combination of company performance and employee performance. Long-term incentive and short-term variable compensation may be offered as part of the compensation package dependent on the role. Some roles may be commission based, in which case the total compensation will be based on a commission and the above range may not be an accurate representation of total compensation. Further, the above range is subject to change based on market demands and operational needs and does not constitute a promise of a particular wage or a guarantee of employment. Your recruiter can share more during the hiring process about the specific salary range based on the above factors listed. 
Unlimited Personal Time Off (Vacation time) 401K match Competitive healthcare, dental and vision insurance plans Paid Parental Leave (Maternity and Paternity leave) Employee Stock Purchase Program Free access to Amwell's Telehealth Services, SilverCloud and The Clinic by Cleveland Clinic's second opinion program Free Subscription to the Calm App Tuition Assistance Program Pet Insurance"," Mid-Senior level "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-genspark-3496046897?refId=uz94oW0FbASgTIEHEb9nNQ%3D%3D&trackingId=5KaHwshRI0bXxYlh5RHkVw%3D%3D&position=18&pageNum=6&trk=public_jobs_jserp-result_search-card," GenSpark ",https://www.linkedin.com/company/genspark1?trk=public_jobs_topcard-org-name," Atlanta, GA "," 2 weeks ago "," 93 applicants ","Role: Data Engineer Duration: 12 Months (Fulltime) Location: Atlanta, GA Mandatory Skills Required: Python and SQL Requirements: 3+ years of professional working experience. Must be a USC or GC Holder. (We do not offer sponsorship currently) Role and Responsibility: Should have working experience on Python on Data side, NumPy, Pandas Should have working experience on SQL. What We Offer: Pay Range: 75k to 90K. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location). About GenSpark Consulting: GenSpark is a division of Pyramid Consulting, a $431M IT Consulting firm. Pyramid Consulting is among the Top 100 largest minority and privately owned IT Consulting firms in the U.S.
The success of our clients is facilitated through our ability to provide full-spectrum support via our development centers – from a single consultant under their management, at their site, to full turnkey solutions onsite and offshore. Pyramid Consulting, Inc. is an Equal Employment Opportunity Employer. All applicants hired will be subject to a background check and drug screening. We are on an unstoppable journey of growth and are looking for people who want to go beyond with us on what will be an incredibly exciting talent revolution!"," Associate "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer - Remote,https://www.linkedin.com/jobs/view/data-engineer-remote-at-auction-edge-inc-3503238347?refId=uz94oW0FbASgTIEHEb9nNQ%3D%3D&trackingId=4lZdrNUMvnnHAqL05LVDxg%3D%3D&position=20&pageNum=6&trk=public_jobs_jserp-result_search-card," Auction Edge, Inc. ",https://www.linkedin.com/company/auction-edge-inc-?trk=public_jobs_topcard-org-name," United States "," 2 weeks ago "," Over 200 applicants ","Auction Edge is the automotive remarketing industry’s leading provider of technology and services to independent auctions, dealers, and corporate remarketers. With 230 independent auction customers and millions of cars processed a year, Auction Edge is uniquely positioned to serve the competitive needs of the independent auction community. We currently have offices located in Franklin, TN, Pensacola, FL, and Statesville, NC. For more, visit www.auctionedge.com. We are currently seeking a Data Engineer to join our team in Franklin, TN or as a full-time remote employee to work in several aspects of our Cloud Data ecosystem. This position will design, build, document and maintain data pipelines stretching from multiple on-premise databases to multi-tenant, cloud-based data stores, including operational data stores and data warehouses.
In addition, we are seeking someone who can design, build, document and maintain “predictive models” for a variety of applications. The ideal candidate will have experience with programming, SQL, schema design, and techniques for creating predictive models. The company values broad technical experience along with excellent communication skills, both written and oral. Responsibilities Develop reliable and performant programs that migrate data collected in near-realtime from on-premise systems to a centralized cloud based data store. Develop reliable and performant programs that predict future behaviors, such as buyer behavior, based on past behavior. Accuracy of models versus cost to run is key. Monitor and maintain multiple data pipelines reaching from on-premise sites to the cloud. Maintain and enhance documentation on both on-premise and cloud-hosted data stores. Maintain and develop standards for how data is coded and described both in source systems as well as how it's transformed as it's stored in the target system. Ensure the security of critical data. Understand data schemas for multiple on-premise, line-of-business applications as well as multi-tenant, cloud-hosted data stores Required Qualifications 3+ years of database development experience Excellent knowledge of SQL, PL/pgSQL Advanced ability to perform exploratory data analysis Exceptional technical writing skills 2+ years of experience with PostgreSQL Ability to communicate complex data in a simple and actionable way. Analytical and problem solving skills. Excellent skills in high performance query optimization. Ability to work independently and with team members from different backgrounds Excellent attention to detail Preferred qualifications 2+ years of experience programming in JavaScript, Java, Python, Ruby, C# or other equivalent languages 1+ years of experience with ETL tools such as Talend, Informatica, Clover, Pentaho, etc. 
1+ years experience as a database administrator / system administrator a plus Experience with systems with high levels of concurrent transactions preferred Experience with source control systems such as Subversion or Git Experience with ElasticSearch / OpenSearch is a big plus Knowledge of test driven database development preferred Experience with continuous integration / deployment Knowledge of Database statistics when / where / why Knowledge of Database architecture Experience with AWS cloud environment RDS, Aurora, Lambda, DynamoDB etc. 1+ years of experience working with, developing, administering a data warehouse Auction Edge Benefits Medical, Dental, and Vision Insurance coverage 401k Retirement Plan 20 days of accrued PTO as well as 12 Flex Days per year (one three-day weekend per month) 8 paid holidays, 2 floating holidays, and 1 paid volunteer day per year Up to $100 Monthly Wellbeing Reimbursement Program (gym membership, personal training, massage therapy, therapy apps, and many other options) Education Reimbursement Program up to $4,000 per 12-month period Focus Fridays Relocation assistance is offered. Auction Edge offers competitive pay, excellent benefits, a culture of continuous improvement and opportunity for career advancement through continued company growth. Auction Edge is an Equal Opportunity Employer (EOE) and supports diversity in the workplace. Salary Description $85,000-$105,000"," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting, Software Development, and Motor Vehicle Manufacturing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-stytch-3515644963?refId=uz94oW0FbASgTIEHEb9nNQ%3D%3D&trackingId=gO87E8xADiJ%2BaiH%2BD5PAAQ%3D%3D&position=22&pageNum=6&trk=public_jobs_jserp-result_search-card," Stytch ",https://www.linkedin.com/company/stytch?trk=public_jobs_topcard-org-name," New York, NY "," 1 month ago "," 28 applicants ","What We're Looking For Stytch is the platform for user authentication. We build infrastructure that sits in the critical path of our customer's applications. As a data engineer, you'll work on designing and building event-driven architecture systems to drive analytics insights and observability tooling for our customers.
What Excites You Championing data-driven insights - you see data analytics and observability as a product critical to success Solving problems with pragmatic solutions — you know when to make trade-offs between completeness and utility and you know when to cut scope to ship something good enough quickly Building products that make developers' lives easier — as a data engineer for a developer infrastructure company, what you build will have an immediate impact on our customers Shaping the culture and growing the team through recruiting, mentorship, and establishing best practices Learning new skills and technologies in a fast-paced environment What Excites Us Comfort working in a modern data stack using tools like Snowflake, Redshift, DBT, Fivetran, ElasticSearch, and Kinesis Appreciation for schema design and architecture that balance flexibility and simplicity Experience designing and building highly reliable back-end and ETL systems 3+ years as a data or backend engineer What Success Looks Like Technical — build new, highly reliable services that our customers can depend on Ownership — advocate for projects and solutions that you believe in and ship them to production Leadership — level up your teammates by providing mentorship and guidance Our Tech Stack Data moves through Snowflake, ElasticSearch, MySQL, and Kinesis Go and Node for application services We run on AWS with Kubernetes for containerization gRPC and protobufs for internal service communication Expected base salary $150,000-$300,000. The anticipated base salary range is not inclusive of full benefits including equity, health care insurance, time off, paid parental leave, etc. This base salary is accurate based on information at the time of posting. Actual compensation for hired candidates will be determined using a number of factors including experience, skills, and qualifications. 
We're looking to hire a GREAT team and that means hiring people who are highly empathetic, ambitious, and excited about building the future of user authentication. You should feel empowered to apply for this role even if your experience doesn't exactly match up to our job description (our job descriptions are directional and not perfect recipes for exactly what we need). We are committed to building a diverse, inclusive, and equitable workspace where everyone (regardless of age, education, ethnicity, gender, sexual orientation, or any personal characteristics) feels like they belong. We look forward to hearing from you! Learn more about our team and culture here!"," Entry level "," Full-time "," Information Technology "," Software Development " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-aaa-texas-3531403407?refId=uz94oW0FbASgTIEHEb9nNQ%3D%3D&trackingId=hhPEn53a1feoTVnQmZ8Emg%3D%3D&position=24&pageNum=6&trk=public_jobs_jserp-result_search-card," AAA Texas ",https://www.linkedin.com/company/aaa-texas?trk=public_jobs_topcard-org-name," Coppell, TX "," 3 hours ago "," Be among the first 25 applicants ","As our Data Engineer, you will function as a consultant between our technology unit and other business units to understand their challenges. You’ll ask questions and present ideas that enable them to solve those data problems with code. Our team is 100% remote, but you must be willing to travel 2 days a month for team meetings. To thrive in this role, you must understand how to query and process large data sets of 200,000 rows or more, ideally in SQL. You must also have functional knowledge of at least 1 programming language like Python. 
Be prepared to share an example of how you have worked with multiple tables to develop a solution to a business problem (real or hypothetical for more junior candidates). What You’ll Do Every day you will start with a set of technical specifications and begin coding solutions to issues across the business. Our team’s success isn’t just implementing the code but seeing that those insights are implemented by the business, and that they have the necessary tools to accomplish the goal. More senior Data Scientists will write these technical specifications while more junior team members do the work of translating specs into functional code. Ask questions and do quality testing to understand if the solution meets the objectives of the business or the goal we set for the business. You will work side-by-side with the business to make sure it works as expected, not just as designed. What You’ll Need To thrive in this role, you must have a passion for problem-solving using data. Your experience querying high volumes of data and performing data analysis must inform how you design solutions that align with the business objectives. Being proficient with SQL and programming languages such as Python and Spark is a must. Experience with data integration from different sources into Big Data systems is preferable. Experience with quality testing and coding. These solutions will deploy across products that are important to our customers and the business. They must be high-quality and functional. A willingness to collaborate. Our best work is done when we work together - either with non-technical or technical leads. You should be interested in learning from others regardless of their role in the organization. You have worked previously with an Agile team or understand these concepts. 
You expect to participate in daily standup meetings and you’ll complete your projects or stories during our sprints. Remarkable benefits: Health coverage for medical, dental, and vision. 401(K) saving plan with company match AND Pension. Tuition assistance. PTO for community volunteer programs. Wellness program. Employee discounts. AAA Texas is part of the largest federation of AAA clubs in the nation. We have 14,000 employees in 21 states helping 17 million members. The strength of our organization is our employees. Bringing together and supporting different cultures, backgrounds, personalities, and strengths creates a team capable of delivering legendary, lifetime service to our members. When we embrace our diversity – we win. All of Us! With our national brand recognition, long-standing reputation since 1902, and constantly growing membership, we are seeking career-minded, service-driven professionals to join our team. ""Through dedicated employees we proudly deliver legendary service and beneficial products that provide members peace of mind and value.” AAA is an Equal Opportunity Employer "," Entry level "," Full-time "," Information Technology "," Insurance " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-flow-3487007515?refId=uz94oW0FbASgTIEHEb9nNQ%3D%3D&trackingId=8GOsLcqtNxBwxeEu9Q5CQw%3D%3D&position=25&pageNum=6&trk=public_jobs_jserp-result_search-card," Flow ",https://www.linkedin.com/company/live-life-in-flow?trk=public_jobs_topcard-org-name," New York, NY "," 2 days ago "," Over 200 applicants ","About The Company Flow aims to create a superior living environment that enhances the lives of our residents and communities by developing, acquiring, owning, and managing multifamily apartment buildings and the services and technology inside those buildings. Fulfilling our mission will require an exceptional group of people whose collective output is greater than the sum of its individual parts. 
Our team members are energized by the opportunity to impact our residents’ lives in meaningful ways. They are bold and creatively ambitious, driven by relentlessly high standards, act with a sense of urgency and accountability, and always, above all, operate with integrity, loyalty, and trust. About The Role We’re looking for a Data Engineer to join our team. This role will be responsible for building the systems and infrastructure for Flow to handle data at scale. The Data Engineer is a key member of the tech team, reporting to our BI Product Manager. As a Data Engineer, you will be designing and developing large-scale data systems (e.g., databases, data warehouses, big data systems), platforms, and infrastructure for analytics and business applications. You’re excited to solve data challenges across both digital and physical products as well as across multiple business verticals. Responsibilities Build and support our modern data stack (AWS, Snowflake, etc.) Architect, build, test, document, and launch highly scalable and reliable data pipelines for business intelligence analytics across the business Develop source of truth datasets and tools that encourage data-driven decisions and allow our teams to access and prepare data sets and reports easily and reliably Partner with stakeholders to translate complex business or technical problems into end-to-end data tools and solutions (e.g., pipelines, models, tables) Evaluate alternatives and make decisions on our data infrastructure Ideal Background A minimum of 5 years' experience in a data engineering role High proficiency in the ‘modern data stack’ (Snowflake / Fivetran / dbt / Sigma for example), SQL, Python, and AWS Experience designing and maintaining tools that support ETL pipelines and downstream business use cases of data Ability to collect, interpret, and synthesize inputs from various parts of the business into data model requirements Experience configuring databases and data warehouses to have optimal 
performance and reliability Deep understanding of the first and second order effects of reporting — you know the power of presenting the right data to the right people at the right time Inherent curiosity and analytical follow-through — you can’t help but ask “why?” and love using data and logic to explore potential solutions Overall understanding of data security and privacy best practices Highly collaborative and able to communicate effectively, both verbally and in writing. A team player who can easily adapt in a rapidly changing environment Salary: $140,000 - $200,000 Benefits Medical, dental, and vision insurance plans Paid Time Off Commuter benefits 401(k) Plan Flow is proud to be an equal opportunity workplace and is committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity and/or expression, pregnancy, Veteran status, or any other characteristic protected by federal, state or local law. In addition, we will provide reasonable accommodations for qualified individuals with disabilities."," Entry level "," Full-time "," Information Technology "," Real Estate " Data Engineer,United States,"Data Engineer, Database Engineering",https://www.linkedin.com/jobs/view/data-engineer-database-engineering-at-experfy-3516891524?refId=Ha6LzFrTZM0PukLkq1LxiQ%3D%3D&trackingId=daf1Ls5dIXiVWW%2FVOUWhLw%3D%3D&position=18&pageNum=3&trk=public_jobs_jserp-result_search-card," Experfy ",https://www.linkedin.com/company/experfy?trk=public_jobs_topcard-org-name," Seattle, WA "," 1 month ago "," Be among the first 25 applicants ","As a Data Engineer for our Data Platform Engineering team, you will join skilled Scala/Spark engineers and core database developers responsible for developing hosted cloud analytics infrastructure (Apache Spark-based), distributed SQL processing frameworks, proprietary data science platforms, and core database optimization. 
This team is responsible for building the automated, intelligent, and highly performant query planner and execution engines, RPC calls between data warehouse clusters, shared secondary cold storage, etc. This includes building new SQL features and customer-facing functionality, developing novel query optimization techniques for industry-leading performance, and building a database system that's highly parallel, efficient and fault-tolerant. This is a vital role reporting to exec leadership and senior engineering leadership. Requirements / Responsibilities: Writing Scala code with tools like Apache Spark + Apache Arrow + Apache Kafka to build a hosted, multi-cluster data warehouse for Web3 Developing database optimizers, query planners, query and data routing mechanisms, cluster-to-cluster communication, and workload management techniques. Scaling up from proof of concept to “cluster scale” (and eventually hundreds of clusters with hundreds of terabytes each), in terms of both infrastructure/architecture and problem structure Codifying best practices for future reuse in the form of accessible, reusable patterns, templates, and code bases to facilitate metadata capturing and management Managing a team of software engineers writing new code to build a bigger, better, faster, more optimized HTAP database (using Apache Spark, Apache Arrow, Kafka, and a wealth of other open source data tools) Interacting with exec team and senior engineering leadership to define, prioritize, and ensure smooth deployments with other operational components Highly engaged with industry trends within the analytics domain from a data acquisition, processing, engineering, and management perspective Understand data and analytics use cases across Web3 / blockchains Skills & Qualifications Bachelor’s degree in computer science or related technical field. Master's or PhD a plus. 
6+ years' experience engineering software and data platforms / enterprise-scale data warehouses, preferably with knowledge of the open source Apache stack (especially Apache Spark, Apache Arrow, Kafka, and others) 3+ years' experience with Scala and Apache Spark (or Kafka) A track record of recruiting and leading technical teams in a demanding talent market Rock-solid engineering fundamentals; query planning, optimizing and distributed data warehouse systems experience is preferred but not required Nice to have: Knowledge of blockchain indexing, web3 compute paradigms, Proofs and consensus mechanisms... is a strong plus but not required Experience with rapid development cycles in a web-based environment Strong scripting and test automation knowledge Nice to have: Passionate about Web3, blockchain, decentralization, and a base understanding of how data/analytics plays into this"," Not Applicable "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-planet-technology-3512699609?refId=uz94oW0FbASgTIEHEb9nNQ%3D%3D&trackingId=gHXe0oFl0Lr0Vy%2Bj6t1xSw%3D%3D&position=7&pageNum=6&trk=public_jobs_jserp-result_search-card," Planet Technology ",https://www.linkedin.com/company/the-planet-technology?trk=public_jobs_topcard-org-name," Greater Milwaukee "," 1 week ago "," 200 applicants ","Planet Technology’s direct end-user client is looking for a Data Engineer to join them on a long-term contract basis. You would be surrounded by some incredibly smart and creative technologists and also get weekly exposure to multiple teams with various IT backgrounds. This is a very energetic and collaborative environment. C2C Rates up to 70/hr MAX. W2 Rates up to 60/hr MAX. This role is Hybrid Onsite and requires 3 days a week in the office in Milwaukee. 
Top Requirements: 5 years of experience in a Data Engineer or Data Developer role 3 years of experience in building and optimizing ‘big data’ data pipelines, architectures and data sets. Production experience using cloud-based big data tools Production experience with multiple SQL databases, such as SQL Server, Oracle, Progress, etc. Production experience with one or more non-SQL databases, such as cloud technologies, ADLS, ADF, Azure Databricks, Spark, Azure Synapse, and other big data technologies Job Description: The Data Engineer is responsible for expanding and optimizing our data and data pipeline architecture as well as optimizing data flow and collection for cross-functional teams. Create and maintain optimal data pipeline architectures Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Azure ‘big data’ technologies. Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics. Assemble large, complex data sets that meet functional / non-functional business requirements. Build processes supporting data transformation, data structures, metadata, dependency and workload management. Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader. Identify, design, and implement solutions to keep our global data separated and secure across national boundaries. Planet Technology Enterprise Company Description Planet Technology Enterprise is an internationally recognized IT Consulting Services firm established in 2003 with a specialty in SAP, Cloud Solutions & Big Data. We understand the marketplace and pride ourselves on serving IT candidates as individuals, not commodities. 
We recognize a candidate’s personalized skills and can match them with both direct end clients and select consulting partners. Additional Information If you are interested, please respond to this ad with an updated resume and a summary of your skills. We look forward to hearing from you soon. All your information will be kept confidential according to EEO guidelines."," Mid-Senior level "," Contract "," Information Technology "," Staffing and Recruiting " Data Engineer,United States,Data Engineer (NYC Hedge-Fund w/ $5B AUM),https://www.linkedin.com/jobs/view/data-engineer-nyc-hedge-fund-w-%245b-aum-at-averity-3500523522?refId=uz94oW0FbASgTIEHEb9nNQ%3D%3D&trackingId=hMIpg%2FHj%2BQspdjcKBeTSqQ%3D%3D&position=13&pageNum=6&trk=public_jobs_jserp-result_search-card," Averity ",https://www.linkedin.com/company/averity?trk=public_jobs_topcard-org-name," New York, NY "," 2 weeks ago "," 180 applicants ","How would you like to be a Data Engineer at a NYC Hedge-Fund that has outperformed the market significantly even during this current economic downturn? What's The Job? As our new Data Engineer you'll be the first step in building out our new Data team. You'll be joined by a Senior Engineer with decades of experience and a Risk Analyst that will be your team's liaison to the traders / portfolio managers. You'll own our data and databases, and spearhead new integration projects on the roadmap. You need to know SQL and databases well, along with programming capabilities in Python or C#. This is a highly visible role. The company is 23 teammates strong, looking to grow to 30 by the end of the year. You'll get to interact closely with everyone, including the partners. This is a great position for someone who has a keen interest in the Asset Management space, and wants to be an integral part of a high-performing company. Who Are We? We have over a decade of success in Asset Management; currently we have approximately $5B AUM entrusted to us. We focus on liquid credit markets. 
Our current office space is in Midtown East, and we plan to move to a bigger office next year due to our growth. While our traders are in office 5 days a week, you and other engineers can operate on a hybrid schedule (2-3 days a week in office). There is no requirement for you to be in the office 5 days a week, but you are welcome to be if you prefer. What Skills Do You Need? Knowing SQL and databases (on prem) inside out Python or C# for programming Good communication skills so you can speak tech to non-technical teammates Minimum of 2 years' professional experience Compensation: $130,000 - $150,000 Base. Yearly Discretionary Bonus. 401(k). Full Benefits - 100% of Premiums Covered. What's In It For You? This is a great opportunity for a talented Data Engineer to make their mark at a high-performing company where your efforts will be rewarded significantly. If you want to be part of a small dynamic team, paving the Data journey for the entire company and seeing tangible business outcomes and rewards at the end of it, this is for you. 
While you don't need experience in the Asset Management world for this position, if you have a keen interest in financial markets this position will excite you as you'll get to interact closely with the traders and portfolio managers on a daily basis."," Associate "," Full-time "," Engineering, Information Technology, and Finance "," Technology, Information and Internet, Financial Services, and Investment Management " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-millennial-specialty-insurance-3496799450?refId=uz94oW0FbASgTIEHEb9nNQ%3D%3D&trackingId=psXKREsVYhGWw%2FQbqnaVyg%3D%3D&position=17&pageNum=6&trk=public_jobs_jserp-result_search-card," Millennial Specialty Insurance ",https://www.linkedin.com/company/millennial-specialty-insurance?trk=public_jobs_topcard-org-name," United States "," 2 weeks ago "," Over 200 applicants ","Position Summary: The Data Engineer will be responsible for building and optimizing our data and data pipeline architecture, as well as optimizing data collection and flow for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys data orchestration and optimizing data systems and building them from the ground up. The Data Engineer will support our business intelligence analysts, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of contributing to the design of our company’s data architecture to support our next generation of products and data initiatives. You will use various methods to transform raw data into useful data systems. Principal Responsibilities: Create and maintain optimal data pipeline architecture to support data orchestration. 
Assemble large, complex data sets that meet functional / non-functional business requirements. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Build the infrastructure required for optimal extraction, transformation, and ingestion of data from a wide variety of data sources using SQL, ETL and Azure Data Factory technologies. Support analytics tools that utilize the data pipeline to provide actionable insights into key business performance metrics. Work with stakeholders including the Executive, Product, Data and Technology teams to assist with data-related technical issues and support their data infrastructure needs. Create data tools for analytics and data scientist team members that assist them in executing and optimizing data projects. Work with data and analytics experts to strive for greater functionality in our data systems. Collaborate with colleagues to collect and structure data. Collect, audit, compile, and validate data from multiple sources. Communicate internally and with clients externally to collect and validate data as well as answer questions regarding data. Apply advanced knowledge and understanding of concepts, principles, and technical capabilities to manage a wide variety of projects. Recommend new practices, processes and procedures. Provide solutions that may set precedent or have significant impact. Build automation and additional efficiencies into manual efforts. Education, Experience, Skills and Abilities Requirements: Bachelor’s degree in related field preferred, equivalent years’ experience considered. At least five to seven years of data-related or analytical work experience in a Data Engineer role, with experience in data orchestration and pipeline creation in the Azure ecosystem. Experience building and optimizing ‘big data’ data pipelines, architectures and data sets. 
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement. Strong analytic skills related to working with unstructured datasets and blob storage. Build processes supporting data transformation, data structures, metadata, dependency and workload management. A successful history of manipulating, processing and extracting value from large disconnected datasets. Strong project management and organizational skills. Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases. Some knowledge of Python preferred. Experience supporting and working with cross-functional teams in a dynamic environment. Insurance industry experience preferred. Special Working Conditions: Fast-paced, multi-tasking environment. Important Notice: This position description is intended to describe the level of work required of the person performing in the role and is not a contract. The essential responsibilities are outlined; other duties may be assigned as needs arise or as required to support the organization. All requirements may be modified to reasonably accommodate physically or mentally challenged colleagues. Click here for some insight into our culture!"," Entry level "," Full-time "," Information Technology "," Insurance " Data Engineer,United States,Data Engineer - Cloud & Data Services,https://www.linkedin.com/jobs/view/data-engineer-cloud-data-services-at-american-express-3496400035?refId=uz94oW0FbASgTIEHEb9nNQ%3D%3D&trackingId=xfyMNsFYs%2BAVEnFe4rDkTg%3D%3D&position=21&pageNum=6&trk=public_jobs_jserp-result_search-card," American Express ",https://www.linkedin.com/company/american-express?trk=public_jobs_topcard-org-name," Florida, United States "," 2 weeks ago "," 117 applicants ","You Lead the Way. We’ve Got Your Back. 
At American Express, we know that with the right backing, people and businesses have the power to progress in incredible ways. Whether we’re supporting our customers’ financial confidence to move ahead, taking commerce to new heights, or encouraging people to explore the world, our colleagues are constantly redefining what’s possible — and we’re proud to back each other every step of the way. When you join Team Amex, you become part of a diverse community of over 60,000 colleagues, all with a common goal to deliver an exceptional customer experience every day. Here, you’ll learn and grow as we champion your meaningful career journey with programs, benefits, and flexibility to back you personally and professionally. Every colleague shares in the company’s success. Together, we’ll win as a team, striving to uphold our company values and powerful backing promise to our customers, communities, and each other every day. And we’ll do it with integrity and in an environment where everyone is seen, heard and feels like they truly belong. Join #TeamAmex and let’s lead the way together. From building next-generation apps and microservices in Kotlin to using AI to help protect our customers from fraud, you could be doing transformational work that brings our iconic, global brand into the future. As a part of our tech team, we could work together to bring ground-breaking and diverse ideas to life that power the digital systems, services, products and platforms that millions of customers around the world depend on. If you love to work with APIs, contribute to open source, or use the latest technologies, we’ll support you with an open environment and learning culture to grow your career.   
Focus: Designs, develops, solves problems, evaluates, modifies, deploys and documents all data components (data architecture, logical and physical data models, database objects and database administration) that meet the needs of customer-facing applications, business applications, and/or internal end-user applications.  Organizational Context: Member of a database management team or database support team reporting to a Senior Data Engineer, Senior Data Architect, Engineering Director or Director of Technical Delivery.   How will you make an impact in this role? Develops technical design documentation and shapes architecture in coordination with the lead engineer Develops deep understanding of tie-ins with other systems and platforms within the supported domains Demonstrates analytical thinking - recommends improvements and conducts experiments to prove/disprove them Communicates effectively with product and cross-functional teams to prioritize features for ongoing sprints and to manage a list of technical requirements based on industry trends, new technologies, known defects, and issues  Supports database automation efforts across multiple operational DBMS platforms, including relational and NoSQL, to drive agile software development. Develop tools and automate database processes. 
Works collaboratively with application development and database engineering teams. Responsibilities: Partners with the product teams to understand business data requirements, identify data needs and data sources to create data architecture Documents data requirements/data stories, in logical data models using data modeling tools – ErStudio, ErWin to ensure flawless integration into existing data architectures Documents processing requirements inclusive of data and transaction volumes, scalability, security, and performance Supports the management of data assets according to enterprise standards, guidelines, and policies Helps build and enhance the database design that supports our business portfolio and translates it into a physical database Works collaboratively with business, product teams and Senior Architects and Engineers   Basic Qualifications: BS or MS degree in computer science, computer engineering, or other technical discipline 3+ years of overall design and development experience Strong analytical skills with a validated ability to understand and document business data requirements in complete, accurate, extensible and flexible logical data models using data modeling tools – ErStudio, ErWin Demonstrated hands-on usage of XML/JSON and schema development/reuse, Open Source and NoSQL Expert in any programming language (Java, Python, etc.) Experience with design and development across one or more database management systems (e.g. Oracle, PostgreSQL, MongoDB, Couchbase) Experience with database query optimization and indexing Preferred Qualifications: Experience with automation tools and scripting is highly desired. A propensity to experiment with emerging technologies. Find opportunities to adopt innovative technologies. Salary Range: $67,900.00 to $129,800.00 annually + bonus + benefits The above represents the expected salary range for this job requisition. Ultimately, in determining your pay, we'll consider your location, experience, and other job-related factors. 
American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. We back our colleagues with the support they need to thrive, professionally and personally. That’s why we have Amex Flex, our enterprise working model that provides greater flexibility to colleagues while ensuring we preserve the important aspects of our unique in-person culture. Depending on role and business needs, colleagues will either work onsite, in a hybrid model (combination of in-office and virtual days) or fully virtually. US Job Seekers/Employees - Click here to view the “EEO is the Law” poster and supplement and the Pay Transparency Policy Statement. If the links do not work, please copy and paste the following URLs in a new browser window: https://www.dol.gov/agencies/ofccp/posters to access the three posters. Non-considerations for sponsorship: Employment eligibility to work with American Express in the U.S. is required as the company will not pursue visa sponsorship for these positions. Considerations for sponsorship: Depending on factors such as business unit requirements, the nature of the position, cost and applicable laws, American Express may provide visa sponsorship for certain positions."," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-sid-mashburn-and-ann-mashburn-3511445793?refId=uz94oW0FbASgTIEHEb9nNQ%3D%3D&trackingId=yugpgh3IZBZ3nd%2BFVGOmAw%3D%3D&position=23&pageNum=6&trk=public_jobs_jserp-result_search-card," Sid Mashburn and Ann Mashburn ",https://www.linkedin.com/company/sid-mashburn?trk=public_jobs_topcard-org-name," Atlanta Metropolitan Area "," 1 week ago "," 94 applicants ","WHAT IS THE BRAND? 
Mashburn is an Atlanta-based apparel and lifestyle brand launched in 2007 with a single passion: taking care of people. The company designs, manufactures, and markets a nationally-recognized assortment of menswear (Sid Mashburn) and womenswear (Ann Mashburn) alongside other high-quality, iconic brands. We want to be the world's go-to omnichannel lifestyle shop - a place that embodies service and style, accessibility and luxury, and, for us, the very best of everything. Most excitingly, our story is still unfolding, and incredible growth opportunities lie ahead... WHAT ARE THE RESPONSIBILITIES? Mashburn, LLC is seeking a Data Engineer to join its Technology Team. The data engineer is responsible for identifying, researching, and resolving complex technical problems that enhance the combined business decision-making capabilities. Your technical skills, business acumen, and creativity will be essential as you build tools to automate reporting and generate timely business insights. Develop a data strategy by partnering with business units to understand their data needs, create a road map, and deliver solutions that fit their evolving needs. Design and manage the data warehouse. Implement a business intelligence frontend such as Power BI, Tableau, or other. Act as the SQL expert within the team and organization. Leverage SQL and other analytic platforms to gather, clean, and prepare data; create dashboards/reports; develop KPIs; analyze trends; provide insights. Create, maintain, and orchestrate an optimal data pipeline architecture. Assemble large, complex data sets that meet functional / non-functional business requirements. Support team and assist with maintaining integrations between systems. WHAT ARE THE SKILLS? Bachelor's degree from accredited university and/or equivalent years of work experience. Five or more years of relevant work experience in similar roles. Advanced working SQL knowledge and experience working with relational databases. 
Experience building and optimizing 'big data' data pipelines, architectures, and data sets. Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management. Experience using data integration tools and writing ETL processes or scripts. Scripting experience with Python, PowerShell, or a similar language. Experience with business intelligence tools, creating reports and dashboards. Understands cloud-based technologies such as Google BigQuery and Azure technologies. Familiar with automation, workflow, and data flow concepts. Demonstrated experience understanding complex issues and explaining them in terms appropriate for technical and/or nontechnical audiences. Ability to research, problem solve, and be self-sufficient. Ability to solve complex problems with a sense of urgency, while maintaining a positive attitude. Must be tactful, detail-oriented, and have a passion for accuracy. Excellent verbal and written communication and documentation skills to document procedures and processes. Desire to provide exceptional customer service and satisfaction. Reflect our core values of hard work, honesty, humility, hopefulness, helpfulness, and honor."," Mid-Senior level "," Full-time "," Engineering "," Retail Apparel and Fashion " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-captivation-3499486557?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=ct5z04ibut%2FMGo8Th0pPRQ%3D%3D&position=1&pageNum=7&trk=public_jobs_jserp-result_search-card," Captivation ",https://www.linkedin.com/company/captivation-software?trk=public_jobs_topcard-org-name," Odenton, MD "," 2 weeks ago "," Be among the first 25 applicants ","Annual Salary: $175,000 - $250,000 (depends on experience level) Build Something to Be Proud Of. Captivation Software has built a reputation on providing customers exactly what is needed in a timely manner. 
Our team of engineers takes pride in what it develops and constantly innovates to provide the best solution. Come work with us and help provide the tools to solve DoD’s Big Data problems! Captivation Software is looking for a talented Data Engineer to support the acquisition of mission critical and mission support data sets. The preferred candidate will have a background in supporting cyber and/or network related missions within the military spaces, as either a developer, analyst or engineer. Work is performed mostly on customer site in Ft. Meade, MD. Essential Job Responsibilities: The ideal candidate will have worked with big data systems, complex structured and unstructured data sets, and have supported government data acquisition, analysis, and/or sharing efforts in the past. To excel in the position, the candidate shall have strong attention to detail, be able to understand technical complexities, and have the willingness to learn and adapt to the situation. The candidate will work both independently and as part of a large team to accomplish client objectives. Requirements Minimum Qualifications: Security Clearance - Must have a current TS/SCI with Polygraph level security clearance; therefore, all candidates must be U.S. citizens. 5 years' experience as a developer, analyst, or engineer with a Bachelor's in a related field; OR 3 years' relevant experience with a Master's in a related field; OR High School Diploma or equivalent and 9 years' relevant experience. Experience with programming languages such as Python and Java. Proficiency with acquisition and understanding of network data and the associated metadata. Fluency with data extraction, translation, and loading including data prep and labeling to enable data analytics. Experience with Kibana and Elasticsearch. Familiarity with various log formats such as JSON, XML, and others. Experience with data flow, management, and storage solutions (e.g. Kafka, NiFi, and AWS S3 and SQS solutions). 
Ability to decompose technical problems and troubleshoot system and dataflow issues. Must be able to work on customer site in Ft. Meade, MD most of the time. Preferred Requirements: Experience with NoSQL databases such as Accumulo is desired. Prior experience supporting cyber and/or network security operations within a large enterprise, as either an analyst, engineer, architect, or developer. Benefits Annual Salary: $175,000 - $250,000 (depends on experience level) Up to 20% 401k contribution (no matching required) Above market hourly rates $3,000 HSA Contribution 5 Weeks Paid Time Off Company Paid Employee Medical / Dental / Vision Insurance / Life Insurance / Short-Term & Long-Term Disability / AD&D"," Not Applicable "," Full-time "," Engineering "," Technology, Information and Internet " Data Engineer,United States,"Data Engineer, Jr.",https://www.linkedin.com/jobs/view/data-engineer-jr-at-altamira-technologies-corporation-3523764816?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=e9jzC6L%2BoRoKHEwEHiN%2Fpg%3D%3D&position=2&pageNum=7&trk=public_jobs_jserp-result_search-card," Altamira Technologies Corporation ",https://www.linkedin.com/company/altamira-corporation?trk=public_jobs_topcard-org-name," Tampa, FL "," 12 hours ago "," 124 applicants ","Data Engineer Altamira delivers a variety of analytic and engineering capabilities to the US National Security community, but the tech culture and the caliber of the individuals that bring these capabilities to fruition are what really set us apart. We’re a curious, responsive, dedicated bunch spread across many corporate cultures. 
Dayton, OH is highly focused on the Space-based mission set with a heavy emphasis on sensor exploitation and analysis; Tampa, FL focuses on ‘art-of-the-possible’ analytics of all kinds with an emphasis on graph technologies, NLP, and wrangling complex data sets; and all forces converge at our headquarters in the Northern Virginia/Washington DC area where we host our tech events and support engineering and analytic missions across several IC and DOD agencies. While our work occurs in different states and different mission domains, we’ve got analytics at the heart of every operation and genuine curiosity for new methods, techniques, and solutions. Our specialties are data science and analytics, data engineering, software engineering, and end-to-end analytic solutions architecture. We’ve also got some awesome benefits like the Altamira Healthy Living program, with ongoing competitions and a flexible spending stipend for health and wellness-related items. Location: Tampa, FL The Role: The Data Engineer will support data scientists and analysts through wrangling and readying of datasets for analysis. The Data Engineer will be versed in Extract, Transform, Load (ETL) techniques and technologies, and will also be familiar with a variety of database types, schemas, and ontologies for centralized data storage. You’ll be working on an interdisciplinary team serving as the data custodian, responsible for the data's provenance, accuracy, and location throughout project execution. The workflows that you design will support a variety of analytics and analytic applications. 
Your skills: Demonstrable expertise in software development or software engineering Working knowledge of 2 or more programming languages Experience with Agile software development practices and tools such as JIRA and Confluence Experience designing and delivering software solutions in cloud environments (AWS strongly preferred) Experience with multiple database types, and designing ontologies and schemas in support of various analytic or query-based applications (e.g., ElasticSearch, Cassandra, JanusGraph) Familiarity with producing Analysis of Alternatives (AoA) of data storage methods is strongly desired Familiarity with containerization tools and techniques, container orchestration, and workflow management, to include technologies such as Docker, Kubernetes, Jenkins, or Terraform is desired Your quals: Secret, TS, or TS/SCI clearance (TS/SCI strongly preferred) 1+ years in a software development or software engineering role supporting analytic applications Bachelor’s Degree (BS) or higher in a technical field related to software development Experience delivering software or technical solutions to DOD or IC customers, USAF, JAIC, NRO, NGA, or other intelligence organizations preferred Altamira is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, or protected veteran status. We focus on recruiting talented, self-motivated employees who find a way to get things done. 
Join our team of experts as we engineer national security!"," Entry level "," Full-time "," Information Technology "," Software Development " Data Engineer,United States,Data Engineer I,https://www.linkedin.com/jobs/view/data-engineer-i-at-greatschools-org-3491990325?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=PjkxeSrJsjBO8TWfBqK6tQ%3D%3D&position=3&pageNum=7&trk=public_jobs_jserp-result_search-card," GreatSchools.org ",https://www.linkedin.com/company/greatschools?trk=public_jobs_topcard-org-name," United States "," 2 weeks ago "," Over 200 applicants ","About GreatSchools: GreatSchools is the leading nonprofit providing high-quality information that supports parents pursuing a great education for their child, schools striving for excellence, and communities working to diminish inequities in education. We are the only national organization that collects and analyzes data from all 51 state departments of education and the federal government to provide analysis, insights, and school quality ratings for parents, partners, researchers, and policymakers. Over 49 million users visit GreatSchools’ award-winning website annually to learn about schools in their area, explore research insights, and access thousands of free, evidence-based parenting resources to support their child’s learning and well-being. We are a mission-driven team that believes all children — especially those who have been historically underserved by the education system — deserve an excellent education. Summary: We are looking for a colleague to join our Data Engineering Team in delivering the data and tools that allow GreatSchools to provide the most up-to-date, comprehensive, and informative lens on school quality possible. If you love working with data and want to use your talents to help tens of thousands of parents every day to find the right school for their child then this position is for you. This position reports to our Data Engineer II. 
It is a full-time, exempt position with headquarters located in Oakland, CA (remote work within US ok). Responsibilities include: Contribution to and maintenance of preprocessing and loading processes, tooling, and documentation, including shared Python libraries, automation services, and other key infrastructure. Contribution to loading efforts of K-12 education data into our data warehouse Contribution to improvements of the data warehouse architecture Contribution to the GreatSchools process and methodology documentation Performance of QA checks and/or pull request reviews of code submitted by other Data Engineers, using and building upon QA automation efforts/tools. Ability to perform size and difficulty assessment for acquired data loads Support of work to answer data-related internal and external stakeholder requests, including technical support on data-related issues from our Customer Service team The responsibilities of this role may include a number of other similar or related duties which may not be specifically included within this position description, but which are consistent with the general level of the job. 
You’ll likely find success in this role if you have: 1+ years of experience with SQL 1+ years of experience with a programming language (Python, Ruby, or R preferred) Experience working within an Agile Software Development framework The ability to apply a detail-oriented, critical eye to data quality The ability to prioritize work, manage multiple projects and work within tight deadlines Strong communication skills A self-starter mentality, with the ability to work independently and as a strong contributor to team projects Nice to haves: 1+ years of experience as a Data Engineer or in a similar role 2-4 years of experience with SQL, relational database management, and familiarity with working in a shell interface 2-4 years of experience with a programming language (Python preferred) Experience with git (or another version control system or feedback-facilitator) Experience with data processing and workflow management tools (such as Airflow, Spark, Luigi, Azkaban, etc.) Experience working with AWS data technologies (such as S3, Glue, Lambda, Redshift, etc.) Experience working with cross-functional teams in a dynamic environment Clear and concise communication, with the ability to articulate clearly with a diverse team Experience working with K-12 education data, working for non-profits, or a passion for the GreatSchools mission Salary & Benefits Compensation for this role ranges from $89-92K, based on location and experience. Candidates in the Bay Area or Pacific Time Zone preferred. If out-of-region, you must be willing to travel periodically. GreatSchools proudly offers competitive medical, dental, and vision benefits, as well as a retirement plan with employer match. Additionally, we provide generous PTO + sick leave, thirteen paid holidays annually, and a paid four-week sabbatical every five years. Application deadline: First application deadline is March 17, 2023. If needed, a second deadline of March 31st will be opened, with an anticipated start date in May 2023. 
(flexible based on candidate availability). GreatSchools team members are diverse in all ways. We are committed to hiring talented staff who reflect the diversity of the communities and audiences we serve and who believe in supporting all parents, especially those who have been historically underserved. As a proud Equal Opportunity Employer, we are committed to considering applicants regardless of race, color, sex, age, national origin, religion, sexual orientation, gender identity and/or expression, status as a veteran, and on the basis of disability or any other federal, state or local protected class."," Entry level "," Full-time "," Information Technology and Engineering "," IT Services and IT Consulting and Civic and Social Organizations " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-eliassen-group-3480284111?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=RYajwFWLxNgsK03BmKGMAQ%3D%3D&position=4&pageNum=7&trk=public_jobs_jserp-result_search-card," Eliassen Group ",https://www.linkedin.com/company/eliassen-group?trk=public_jobs_topcard-org-name," United States "," 4 weeks ago "," Over 200 applicants ","SQL BI Developer (SSIS/QlikView/QlikSense) Our client, a leader in their industry, has an excellent opportunity for a SQL BI Developer to work a 6-month+ contract-to-hire, 100% remote position. The pay rate for this position is $55/hour to $60/hour on a w-2. Available for w-2 only. This position is a contract consulting opportunity, offering a comprehensive benefits package for w-2 consultants that includes medical, dental, vision, disability, prescription drug coverage, life insurance, 401(k) with matching, weekly payment, and more. Responsibilities of the SQL BI Developer Design, build, and deploy data solutions that capture, explore, transform, and utilize data to support ETL, Data Warehouse, Artificial Intelligence, Machine Learning, and business intelligence/insights. 
Perform data acquisition, preparation, and analysis leveraging a variety of data programming and data persistence techniques. Incorporate core data management competencies including data governance, data security, and data quality. Support delivery and educate end users on data products/analytic environment. Requirements of the SQL BI Developer 4+ years of SQL / BI Experience with strong skills in Microsoft SSIS, SQL Server Database Development, SQL. BI tool experience with QlikView & QlikSense Bachelor’s Degree in STEM related field or equivalent training with data tools, techniques, and manipulation. Preferred Skills: CI/CD Tools (such as Jenkins and GIT) AWS, MicroStrategy Snowflake ETL (Python, Talend) MS Access / VBA"," Mid-Senior level "," Contract "," Information Technology and Engineering "," IT Services and IT Consulting and Staffing and Recruiting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-robert-half-3478299121?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=s1MGfqTevJXEX7CPF2sbUQ%3D%3D&position=6&pageNum=7&trk=public_jobs_jserp-result_search-card," Robert Half ",https://www.linkedin.com/company/robert-half-international?trk=public_jobs_topcard-org-name," Dallas, TX "," 4 weeks ago "," Over 200 applicants ","The role is Onsite (Will Shift For 3 Days In/2 Days Remote After The First Month) In Dallas, TX 75202 6 Month - 12 Month Contract (Possibility Of Conversion) Job Description: The Data Engineer will work on 
implementing data pipelines to make data available in Snowflake Cloud DW from systems of record and operational data stores. The Data Engineer will get to work on best-in-class cloud technologies (Nexla, Snowflake, Azure, Airflow, Python etc.) Essential Job Functions: Develop Data pipelines from various sources using Nexla, Python and Airflow Large scale file processing (csv,xml,json,pdf etc) experience Enterprise Data Lake/ Warehouse Development Schedule and Monitor Batch and near Realtime data pipelines Unit testing and troubleshooting Understanding technical design and executing technical requirements Team player, Work in a fast-paced agile environment Required Skills: 5+ Years’ experience in Data Engineering 1 year of experience with Nexla or similar data ingestion & engineering tools 4+ years’ experience in SQL/SnowSQL 2+ Years’ thorough experience in Python coding and packages 4+ years of overall Data Integration experience with 2+ years’ experience in Data Engineering. 2+ years’ experience in large volume File processing with different types and formats Experience in Orchestration Tools - Airflow Understanding of dimensional and relational data modeling. 
Preferred Skills: Insurance Industry work experience Understanding of Azure cloud services (Blob Storage, ADLS, Azure Functions, Key Vault etc) Experience in a SaaS environments – Snowflake Ecosystem Tools, Technology Partner Offerings Knowledge of BI tools - Power BI, Tableau Knowledge of DevOps tools - Github, Atlassian Tools(JIRA, Confluence), VS Code Good documentation skills"," Mid-Senior level "," Full-time "," Information Technology "," Software Development " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-elsdon-consulting-ltd-3486257519?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=iJ6GNvKwxKFgIt0ZhQIUiw%3D%3D&position=7&pageNum=7&trk=public_jobs_jserp-result_search-card," Robert Half ",https://www.linkedin.com/company/robert-half-international?trk=public_jobs_topcard-org-name," Dallas, TX "," 4 weeks ago "," Over 200 applicants "," The role is Onsite (Will Shift For 3 Days In/2 Days Remote After The First Month) In Dallas, TX 752026 Month - 12 Month Contract (Possibility Of Conversion)Job Description:The Data Engineer will work on implementing data pipelines to make data available in Snowflake Cloud DW from systems of records and operational data stores. 
The Data Engineer will get to work on best-in-class cloud technologies (Nexla, Snowflake, Azure, Airflow, Python etc.)Essential Job Functions: Develop Data pipelines from various sources using Nexla, Python and Airflow Large scale file processing (csv,xml,json,pdf etc) experience Enterprise Data Lake/ Warehouse Development Schedule and Monitor Batch and near Realtime data pipelines Unit testing and troubleshooting Understanding technical design and executing technical requirements Team player, Work in a fast-paced agile environment Required skills.5+ Years’ experience in Data Engineering1 year experience with Nexla or similar data ingestion & engineering tools4+ years’ experience in SQL/SnowSQL2+ Years’ thorough experience in Python coding and packages4+ years of overall Data Integration experience with 2+ experience in Data Engineering.2+ years’ experience in large volume File processing with different types and formatsExperience in Orchestration Tools - AirflowUnderstanding of dimensional and relational data modeling. 
Understanding of BI/DW development methodologies.Preferred Skills:Insurance Industry work experienceUnderstanding of Azure cloud services (Blob Storage, ADLS, Azure Functions, Key Vault etc)Experience in a SaaS environments – Snowflake Ecosystem Tools, Technology Partner OfferingsKnowledge of BI tools - Power BI, TableauKnowledge of DevOps tools - Github, Atlassian Tools(JIRA, Confluence), VS CodeGood documentation skills "," Mid-Senior level "," Full-time "," Information Technology "," Software Development " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-summit2sea-consulting-a-cbeyondata-company-3518256731?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=cHNZvizErLMprTvZIwBqfA%3D%3D&position=8&pageNum=7&trk=public_jobs_jserp-result_search-card," Robert Half ",https://www.linkedin.com/company/robert-half-international?trk=public_jobs_topcard-org-name," Dallas, TX "," 4 weeks ago "," Over 200 applicants "," The role is Onsite (Will Shift For 3 Days In/2 Days Remote After The First Month) In Dallas, TX 752026 Month - 12 Month Contract (Possibility Of Conversion)Job Description:The Data Engineer will work on implementing data pipelines to make data available in Snowflake Cloud DW from systems of records and operational data stores. 
The Data Engineer will get to work on best-in-class cloud technologies (Nexla, Snowflake, Azure, Airflow, Python etc.)Essential Job Functions: Develop Data pipelines from various sources using Nexla, Python and Airflow Large scale file processing (csv,xml,json,pdf etc) experience Enterprise Data Lake/ Warehouse Development Schedule and Monitor Batch and near Realtime data pipelines Unit testing and troubleshooting Understanding technical design and executing technical requirements Team player, Work in a fast-paced agile environment Required skills.5+ Years’ experience in Data Engineering1 year experience with Nexla or similar data ingestion & engineering tools4+ years’ experience in SQL/SnowSQL2+ Years’ thorough experience in Python coding and packages4+ years of overall Data Integration experience with 2+ experience in Data Engineering.2+ years’ experience in large volume File processing with different types and formatsExperience in Orchestration Tools - AirflowUnderstanding of dimensional and relational data modeling. 
Understanding of BI/DW development methodologies.Preferred Skills:Insurance Industry work experienceUnderstanding of Azure cloud services (Blob Storage, ADLS, Azure Functions, Key Vault etc)Experience in a SaaS environments – Snowflake Ecosystem Tools, Technology Partner OfferingsKnowledge of BI tools - Power BI, TableauKnowledge of DevOps tools - Github, Atlassian Tools(JIRA, Confluence), VS CodeGood documentation skills "," Mid-Senior level "," Full-time "," Information Technology "," Software Development " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-trepp-inc-3515235383?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=dg8QLdvKW1TKYPqwI%2Fn8mA%3D%3D&position=9&pageNum=7&trk=public_jobs_jserp-result_search-card," Robert Half ",https://www.linkedin.com/company/robert-half-international?trk=public_jobs_topcard-org-name," Dallas, TX "," 4 weeks ago "," Over 200 applicants "," The role is Onsite (Will Shift For 3 Days In/2 Days Remote After The First Month) In Dallas, TX 752026 Month - 12 Month Contract (Possibility Of Conversion)Job Description:The Data Engineer will work on implementing data pipelines to make data available in Snowflake Cloud DW from systems of records and operational data stores. 
The Data Engineer will get to work on best-in-class cloud technologies (Nexla, Snowflake, Azure, Airflow, Python, etc.). Essential Job Functions: Develop data pipelines from various sources using Nexla, Python, and Airflow; large-scale file processing (CSV, XML, JSON, PDF, etc.) experience; enterprise Data Lake/Warehouse development; schedule and monitor batch and near-real-time data pipelines; unit testing and troubleshooting; understanding technical design and executing technical requirements; team player who can work in a fast-paced agile environment. Required skills: 5+ years’ experience in Data Engineering; 1 year of experience with Nexla or similar data ingestion and engineering tools; 4+ years’ experience in SQL/SnowSQL; 2+ years’ thorough experience in Python coding and packages; 4+ years of overall data integration experience, with 2+ years in Data Engineering; 2+ years’ experience in large-volume file processing with different types and formats; experience with orchestration tools (Airflow); understanding of dimensional and relational data modeling.
Understanding of BI/DW development methodologies. Preferred Skills: Insurance industry work experience; understanding of Azure cloud services (Blob Storage, ADLS, Azure Functions, Key Vault, etc.); experience in SaaS environments – Snowflake Ecosystem Tools, Technology Partner Offerings; knowledge of BI tools - Power BI, Tableau; knowledge of DevOps tools - GitHub, Atlassian Tools (JIRA, Confluence), VS Code; good documentation skills "," Mid-Senior level "," Full-time "," Information Technology "," Software Development " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-rockitdata-3502737534?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=2Kw1cDkKP9D1k6QX9x3sZA%3D%3D&position=10&pageNum=7&trk=public_jobs_jserp-result_search-card," rockITdata ",https://www.linkedin.com/company/rockitdata?trk=public_jobs_topcard-org-name," Chicago, IL "," 2 weeks ago "," Over 200 applicants ","Currently seeking a Data Engineer with the following experience: Qualifications: 2-4 years of corporate experience in data analytics, data engineering, and business intelligence. Knowledge of SQL, Python, BI, and Snowflake. Experience working with cloud services such as Microsoft Azure and AWS. Bachelor's degree. Strong written and verbal communication. Responsibilities: Work individually and collaboratively on client projects to execute on data-driven tasks. Ability to work in a fast-paced environment. Learn new data solutions and grow within a solution-based company. Consult with clients to manage their data and offer suggestions on project scope. Report into the office up to two times per month. rockITdata is committed to a policy of Equal Employment Opportunity with respect to all employees, applicants, and interns for employment.
We recruit, hire, train, and promote without discrimination due to race, color, sex, age, disability, religion, citizenship, national origin, military or veteran status, marital status, gender identity and expression, sexual orientation, and any other status protected by applicable federal, state, or local law."," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-nike-3506680547?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=3XvPEMloYDsBwI3%2Bd6gpAw%3D%3D&position=11&pageNum=7&trk=public_jobs_jserp-result_search-card," rockITdata ",https://www.linkedin.com/company/rockitdata?trk=public_jobs_topcard-org-name," Chicago, IL "," 2 weeks ago "," Over 200 applicants "," Currently seeking a Data Engineer with the following experience: Qualifications: 2-4 years of corporate experience in data analytics, data engineering, and business intelligence. Knowledge of SQL, Python, BI, and Snowflake. Experience working with cloud services such as Microsoft Azure and AWS. Bachelor's degree. Strong written and verbal communication. Responsibilities: Work individually and collaboratively on client projects to execute on data-driven tasks. Ability to work in a fast-paced environment. Learn new data solutions and grow within a solution-based company. Consult with clients to manage their data and offer suggestions on project scope. Report into the office up to two times per month. rockITdata is committed to a policy of Equal Employment Opportunity with respect to all employees, applicants, and interns for employment. We recruit, hire, train, and promote without discrimination due to race, color, sex, age, disability, religion, citizenship, national origin, military or veteran status, marital status, gender identity and expression, sexual orientation, and any other status protected by applicable federal, state, or local law.
"," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-remote-at-georgia-it-inc-3523778195?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=8Kj1aphcapRX4F7%2F1i%2BCVw%3D%3D&position=12&pageNum=7&trk=public_jobs_jserp-result_search-card," rockITdata ",https://www.linkedin.com/company/rockitdata?trk=public_jobs_topcard-org-name," Chicago, IL "," 2 weeks ago "," Over 200 applicants "," Currently seeking a Data Engineer with the following experience: Qualifications: 2-4 years of corporate experience in data analytics, data engineering, and business intelligence. Knowledge of SQL, Python, BI, and Snowflake. Experience working with cloud services such as Microsoft Azure and AWS. Bachelor's degree. Strong written and verbal communication. Responsibilities: Work individually and collaboratively on client projects to execute on data-driven tasks. Ability to work in a fast-paced environment. Learn new data solutions and grow within a solution-based company. Consult with clients to manage their data and offer suggestions on project scope. Report into the office up to two times per month. rockITdata is committed to a policy of Equal Employment Opportunity with respect to all employees, applicants, and interns for employment. We recruit, hire, train, and promote without discrimination due to race, color, sex, age, disability, religion, citizenship, national origin, military or veteran status, marital status, gender identity and expression, sexual orientation, and any other status protected by applicable federal, state, or local law.
"," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-intelletec-3511711570?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=oOM2yVxLpv6YOd2xSQDK6g%3D%3D&position=13&pageNum=7&trk=public_jobs_jserp-result_search-card," Intelletec ",https://www.linkedin.com/company/intelletec-ltd?trk=public_jobs_topcard-org-name," New York City Metropolitan Area "," 2 days ago "," 133 applicants ","Company: Intelletec is partnered with a global credit rating agency that provides credit ratings, research, and analysis to help investors make informed investment decisions. The company evaluates the creditworthiness of governments, corporations, financial institutions, structured finance products, and other entities. Responsibilities: Build data pipelines and applications for processing datasets Develop infrastructure for data extraction, transformation, and loading using SQL, NoSQL, and Kafka Collaborate with team members to design and implement data solutions Build analytics tools to provide insights into business metrics Track data lineage, ensure data quality, and improve data discoverability Work in an Agile environment and interact with multi-functional teams Requirements: Strong Java & Python development experience 5+ years of data engineering experience building large data pipelines Strong SQL and NoSQL skills Experience with relational SQL and NoSQL databases Hands-on experience with distributed systems such as Spark and Hadoop Experience with message queuing and stream data processing using Kafka Streams Strong analytic skills for working with unstructured datasets Hands-on experience with AWS cloud services"," Mid-Senior level "," Full-time "," Engineering "," Financial Services " Data Engineer,United States,Data 
Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-w3r-consulting-3524204118?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=fLFIBEB3HVi6WIiyz0NYfg%3D%3D&position=14&pageNum=7&trk=public_jobs_jserp-result_search-card," W3R Consulting ",https://www.linkedin.com/company/w3r-consulting?trk=public_jobs_topcard-org-name," United States "," 9 hours ago "," 77 applicants ","Remote contract to hire opportunity for data scientist/engineer with prior healthcare experience. Description: Skilled in the design, development and execution of healthcare and health insurance related computational analysis, data discovery and reporting. Using a working knowledge of healthcare and health insurance data, operations and competitive / regulatory environment, works with key business staff and the Health Analytics Development Lead to construct and execute data selections and visualizations for healthcare studies. Using statistical methods, designs, codes and executes queries and programs to produce the data and reporting as required by healthcare studies. Supports internal, plan-based and external users by answering questions about and resolving issues with healthcare data selection. Works closely with other members of the Analytics and Insights team, council(s) and workgroup(s) to identify and address data issues and data needs. Key Responsibilities: Using various statistical methods, designs, codes and executes queries and programs to produce the data and reporting as required by healthcare studies Using a working knowledge of healthcare and health insurance data, operations and competitive / regulatory environment, works with key business staff and the Analytics Development Lead to construct and execute data selections and visualizations for healthcare studies. Supports internal, plan-based and external (non-BCBS) users by answering questions about and resolving issues with healthcare data selection. 
Works closely with other members of the Analytics and Insights team, council(s) and workgroup(s) to identify and address data issues and data needs. An ideal candidate will have: Bachelor’s degree in mathematics, statistics, information technology, healthcare or equivalent experience • 3 years of progressive health insurance industry experience • 5 years of work experience retrieving and analyzing healthcare and health insurance data • 5 years of work experience using SQL and various statistical and reporting software such as “R”, Stata, Tableau, SAS, etc. • 5 years of experience gathering requirements for, designing and executing healthcare computational analysis and research studies Excellent conversational and business English (written and oral) Exceptional IT literacy in Excel, PowerPoint, Word, Visio, SQL (R, Python, Tableau is a plus) An intellectual curiosity in and desire to learn about the constantly evolving healthcare industry Additional Skills Needed: Working knowledge of statistical techniques and concepts such as regression analysis, factor analysis, correlation and standard deviation as they relate to healthcare / health insurance data Knowledge of healthcare and health insurance data including commonly available open source data such as AHRQ, CDC and SAMHSA Working knowledge of database technologies from a user perspective such as data relationships and key structure Understanding of health insurance standard datasets including 837 I&P and 834 Understanding of Health Information Exchange – HL7 data Working knowledge of SQL and statistical and reporting software such as “R”, Stata, Tableau, SAS, etc. Ability to develop key business partnerships, both internal and external, and build peer relationships. Effective organization, presentation, negotiation and communications skills (oral and written).
Critical thinking relative to problem analysis/resolution. Demonstrated ability to gather requirements and information in the development of deliverables. Ability to build effective relationships with cross-functional teams; demonstrated interpersonal skills for building and fostering key relationships"," Associate "," Full-time "," Information Technology "," IT Services and IT Consulting, Insurance, and Hospitals and Health Care " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-smx-services-consulting-inc-3509502758?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=JA0k6SwUKQmmxhVKeI8sew%3D%3D&position=15&pageNum=7&trk=public_jobs_jserp-result_search-card," W3R Consulting ",https://www.linkedin.com/company/w3r-consulting?trk=public_jobs_topcard-org-name," United States "," 9 hours ago "," 77 applicants "," Remote contract-to-hire opportunity for a data scientist/engineer with prior healthcare experience. Description: Skilled in the design, development and execution of healthcare and health insurance related computational analysis, data discovery and reporting. Using a working knowledge of healthcare and health insurance data, operations and competitive / regulatory environment, works with key business staff and the Health Analytics Development Lead to construct and execute data selections and visualizations for healthcare studies. Using statistical methods, designs, codes and executes queries and programs to produce the data and reporting as required by healthcare studies. Supports internal, plan-based and external users by answering questions about and resolving issues with healthcare data selection. Works closely with other members of the Analytics and Insights team, council(s) and workgroup(s) to identify and address data issues and data needs.
Key Responsibilities: Using various statistical methods, designs, codes and executes queries and programs to produce the data and reporting as required by healthcare studies. Using a working knowledge of healthcare and health insurance data, operations and competitive / regulatory environment, works with key business staff and the Analytics Development Lead to construct and execute data selections and visualizations for healthcare studies. Supports internal, plan-based and external (non-BCBS) users by answering questions about and resolving issues with healthcare data selection. Works closely with other members of the Analytics and Insights team, council(s) and workgroup(s) to identify and address data issues and data needs. An ideal candidate will have: Bachelor’s degree in mathematics, statistics, information technology, healthcare or equivalent experience • 3 years of progressive health insurance industry experience • 5 years of work experience retrieving and analyzing healthcare and health insurance data • 5 years of work experience using SQL and various statistical and reporting software such as “R”, Stata, Tableau, SAS, etc.
• 5 years of experience gathering requirements for, designing and executing healthcare computational analysis and research studies. Excellent conversational and business English (written and oral). Exceptional IT literacy in Excel, PowerPoint, Word, Visio, SQL (R, Python, Tableau is a plus). An intellectual curiosity in and desire to learn about the constantly evolving healthcare industry. Additional Skills Needed: Working knowledge of statistical techniques and concepts such as regression analysis, factor analysis, correlation and standard deviation as they relate to healthcare / health insurance data. Knowledge of healthcare and health insurance data including commonly available open source data such as AHRQ, CDC and SAMHSA. Working knowledge of database technologies from a user perspective such as data relationships and key structure. Understanding of health insurance standard datasets including 837 I&P and 834. Understanding of Health Information Exchange – HL7 data. Working knowledge of SQL and statistical and reporting software such as “R”, Stata, Tableau, SAS, etc. Ability to develop key business partnerships, both internal and external, and build peer relationships. Effective organization, presentation, negotiation and communications skills (oral and written).
Critical thinking relative to problem analysis/resolution. Demonstrated ability to gather requirements and information in the development of deliverables. Ability to build effective relationships with cross-functional teams; demonstrated interpersonal skills for building and fostering key relationships "," Associate "," Full-time "," Information Technology "," IT Services and IT Consulting, Insurance, and Hospitals and Health Care " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-data-analytics-at-costco-wholesale-3515995135?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=v1tTSVNd3IuB%2FeL4gPOKdQ%3D%3D&position=16&pageNum=7&trk=public_jobs_jserp-result_search-card," W3R Consulting ",https://www.linkedin.com/company/w3r-consulting?trk=public_jobs_topcard-org-name," United States "," 9 hours ago "," 77 applicants "," Remote contract-to-hire opportunity for a data scientist/engineer with prior healthcare experience. Description: Skilled in the design, development and execution of healthcare and health insurance related computational analysis, data discovery and reporting. Using a working knowledge of healthcare and health insurance data, operations and competitive / regulatory environment, works with key business staff and the Health Analytics Development Lead to construct and execute data selections and visualizations for healthcare studies. Using statistical methods, designs, codes and executes queries and programs to produce the data and reporting as required by healthcare studies. Supports internal, plan-based and external users by answering questions about and resolving issues with healthcare data selection. Works closely with other members of the Analytics and Insights team, council(s) and workgroup(s) to identify and address data issues and data needs.
Key Responsibilities: Using various statistical methods, designs, codes and executes queries and programs to produce the data and reporting as required by healthcare studies. Using a working knowledge of healthcare and health insurance data, operations and competitive / regulatory environment, works with key business staff and the Analytics Development Lead to construct and execute data selections and visualizations for healthcare studies. Supports internal, plan-based and external (non-BCBS) users by answering questions about and resolving issues with healthcare data selection. Works closely with other members of the Analytics and Insights team, council(s) and workgroup(s) to identify and address data issues and data needs. An ideal candidate will have: Bachelor’s degree in mathematics, statistics, information technology, healthcare or equivalent experience • 3 years of progressive health insurance industry experience • 5 years of work experience retrieving and analyzing healthcare and health insurance data • 5 years of work experience using SQL and various statistical and reporting software such as “R”, Stata, Tableau, SAS, etc.
• 5 years of experience gathering requirements for, designing and executing healthcare computational analysis and research studies. Excellent conversational and business English (written and oral). Exceptional IT literacy in Excel, PowerPoint, Word, Visio, SQL (R, Python, Tableau is a plus). An intellectual curiosity in and desire to learn about the constantly evolving healthcare industry. Additional Skills Needed: Working knowledge of statistical techniques and concepts such as regression analysis, factor analysis, correlation and standard deviation as they relate to healthcare / health insurance data. Knowledge of healthcare and health insurance data including commonly available open source data such as AHRQ, CDC and SAMHSA. Working knowledge of database technologies from a user perspective such as data relationships and key structure. Understanding of health insurance standard datasets including 837 I&P and 834. Understanding of Health Information Exchange – HL7 data. Working knowledge of SQL and statistical and reporting software such as “R”, Stata, Tableau, SAS, etc. Ability to develop key business partnerships, both internal and external, and build peer relationships. Effective organization, presentation, negotiation and communications skills (oral and written).
Critical thinking relative to problem analysis/resolution. Demonstrated ability to gather requirements and information in the development of deliverables. Ability to build effective relationships with cross-functional teams; demonstrated interpersonal skills for building and fostering key relationships "," Associate "," Full-time "," Information Technology "," IT Services and IT Consulting, Insurance, and Hospitals and Health Care " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-agility-partners-3498768433?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=Lb3E1z6TNQ3NZ2b66wJHqw%3D%3D&position=17&pageNum=7&trk=public_jobs_jserp-result_search-card," Agility Partners ",https://www.linkedin.com/company/agilitypartners?trk=public_jobs_topcard-org-name," United States "," 2 weeks ago "," 183 applicants ","Position Summary Accountable for developing and delivering technological responses to targeted business outcomes. Analyze, design and develop enterprise data and information architecture deliverables, focusing on data as an asset for the enterprise. Understand and follow reusable standards, design patterns, guidelines, and configurations to deliver valuable data and information across the enterprise where needed. Demonstrate the company's core values of respect, honesty, integrity, diversity, inclusion and safety.
Essential Job Functions Utilize enterprise standards for data domains and data solutions, focusing on simplified integration and streamlined operational and analytical uses Ensure there is clarity between ongoing projects, escalating when necessary Leverage innovative new technologies and approaches to renovate, extend, and transform the existing core data assets, including SQL-based, NoSQL-based, and Cloud-based data platforms Define high-level migration plans to address the gaps between the current and future state Contribute to the development of cost/benefit analysis for leadership to shape sound architectural decisions Analyze technology environments to detect critical deficiencies and recommend solutions for improvement Promote the reuse of data assets, including the management of the data catalog for reference Draft architectural diagrams, interface specifications and other design documents Must be able to perform the essential job functions of this position with or without reasonable accommodation Minimum Position Qualifications Bachelor's Degree in computer science, software engineering, or related field 4+ years experience in the data development and principles including end-to-end design patterns 4+ years proven track record of delivering large scale, high quality operational or analytical data systems 4+ years successful and applicable experience building complex data solutions that have been successfully delivered to customers Any experience in a minimum of two of the following technical disciplines: data warehousing, big data management, analytics development, data science, application programming interfaces (APIs), data integration, cloud, servers and storage, and database management Desired Previous Experience/Education Any experience with Azure Data Platform stack: Azure Data Lake, Data Factory and Databricks Any experience with Python, Spark and SQL Any experience with streaming technologies like Kafka, IBM MQ and EventHub Any experience with data 
science solutions or platforms Any experience with a variety of SQL, NoSQL and Big Data platforms Any experience building solutions using elastic architectures (preferably Microsoft Azure and Google Cloud Platform)"," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting and Computer Hardware Manufacturing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-compunnel-inc-3496143797?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=hg3aFSzx2%2FsoZ4D4slhWvw%3D%3D&position=18&pageNum=7&trk=public_jobs_jserp-result_search-card," Compunnel Inc. ",https://www.linkedin.com/company/compunnel-software-group?trk=public_jobs_topcard-org-name," Bellevue, WA "," 3 weeks ago "," Be among the first 25 applicants ","Description: Candidates MUST have Data Engineer experience. At least 3 years of Data Engineer experience required, preferably in a cloud environment. You should have at least 4 years of coding experience in Python/Java/Scala and open-source packages, with at least 2 years of experience with databases (SQL/NoSQL, etc.). Experience with large-scale distributed databases like Redshift/Snowflake is a big plus. You should have experience with different aspects of data systems, including database design, data modeling, performance optimization, SQL, etc. Some experience with building data pipelines and orchestration (Airflow, ADF, Glue, etc.) is required. Strong communication skills (able to explain concepts to non-technical audiences as well as peers). Self-starter who is highly organized, communicative, a quick learner, and team-oriented. Priority will be given to candidates local to the Seattle/Oregon/Bellevue area since the team is based there. However, they will consider strong remote candidates. Possibility for further extension, though the manager would like to convert these hires if a good fit. Technology Requirements (i.e., programs, systems, etc.): Python/Java or Scala, SQL, and Airflow.
Cloud experience: AWS/Azure. Responsibilities: Developing, executing, monitoring and troubleshooting data pipelines and workflows in our cloud environment. Work on Data Lake/DW/DQ and other framework-related items. Team and cross-functional collaboration as needed. Preferred background/prior work experience: 3 years of DE expertise building data pipelines and working in a DW/data lake cloud-based environment. Priority soft skills: Strong communication and problem-solving skills; self-starter and highly organized; ability to learn and adapt. Education: Bachelor's degree"," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-cvs-health-3522843006?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=I0SGqQ%2F29igY8Lv0CJpZ6g%3D%3D&position=19&pageNum=7&trk=public_jobs_jserp-result_search-card," Compunnel Inc. ",https://www.linkedin.com/company/compunnel-software-group?trk=public_jobs_topcard-org-name," Bellevue, WA "," 3 weeks ago "," Be among the first 25 applicants "," Description: Candidates MUST have Data Engineer experience. At least 3 years of Data Engineer experience required, preferably in a cloud environment. You should have at least 4 years of coding experience in Python/Java/Scala and open-source packages, with at least 2 years of experience with databases (SQL/NoSQL, etc.). Experience with large-scale distributed databases like Redshift/Snowflake is a big plus. You should have experience with different aspects of data systems, including database design, data modeling, performance optimization, SQL, etc. Some experience with building data pipelines and orchestration (Airflow, ADF, Glue, etc.) is required. Strong communication skills (able to explain concepts to non-technical audiences as well as peers). Self-starter who is highly organized, communicative, a quick learner, and team-oriented. Priority will be given to candidates local to the Seattle/Oregon/Bellevue area 
since the team is based there. However, they will consider strong remote candidates. Possibility for further extension, though the manager would like to convert these hires if a good fit. Technology Requirements (i.e., programs, systems, etc.): Python/Java or Scala, SQL, and Airflow. Cloud experience: AWS/Azure. Responsibilities: Developing, executing, monitoring and troubleshooting data pipelines and workflows in our cloud environment. Work on Data Lake/DW/DQ and other framework-related items. Team and cross-functional collaboration as needed. Preferred background/prior work experience: 3 years of DE expertise building data pipelines and working in a DW/data lake cloud-based environment. Priority soft skills: Strong communication and problem-solving skills; self-starter and highly organized; ability to learn and adapt. Education: Bachelor's degree "," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/advanced-data-engineer-at-kroger-technology-digital-3531126682?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=K2FNmVDro1gHYkjCM9tX0g%3D%3D&position=20&pageNum=7&trk=public_jobs_jserp-result_search-card," Compunnel Inc. 
",https://www.linkedin.com/company/compunnel-software-group?trk=public_jobs_topcard-org-name," Bellevue, WA "," 3 weeks ago "," Be among the first 25 applicants "," Description: Candidates MUST have Data Engineer experience. At least 3 years of Data Engineer experience required, preferably in a cloud environment. You should have at least 4 years of coding experience in Python/Java/Scala and open-source packages, with at least 2 years of experience with databases (SQL/NoSQL, etc.). Experience with large-scale distributed databases like Redshift/Snowflake is a big plus. You should have experience with different aspects of data systems, including database design, data modeling, performance optimization, SQL, etc. Some experience with building data pipelines and orchestration (Airflow, ADF, Glue, etc.) is required. Strong communication skills (able to explain concepts to non-technical audiences as well as peers). Self-starter who is highly organized, communicative, a quick learner, and team-oriented. Priority will be given to candidates local to the Seattle/Oregon/Bellevue area since the team is based there. However, they will consider strong remote candidates. Possibility for further extension, though the manager would like to convert these hires if a good fit. Technology Requirements (i.e., programs, systems, etc.): Python/Java or Scala, SQL, and Airflow. Cloud experience: AWS/Azure. Responsibilities: Developing, executing, monitoring and troubleshooting data pipelines and workflows in our cloud environment. Work on Data Lake/DW/DQ and other framework-related items. Team and cross-functional collaboration as needed. Preferred background/prior work experience: 3 years of DE expertise building data pipelines and working in a DW/data lake cloud-based environment. Priority soft skills: Strong communication and problem-solving skills; self-starter and highly organized.
Ability to learn and adapt. Education: Bachelor's degree "," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-brooksource-3499055675?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=dxJd%2Fh8zJwyBngWeI4LT8Q%3D%3D&position=21&pageNum=7&trk=public_jobs_jserp-result_search-card," Brooksource ",https://www.linkedin.com/company/brooksource?trk=public_jobs_topcard-org-name," Michigan, United States "," 2 weeks ago "," Over 200 applicants ","Data Engineer 100% Remote Full-Time, 40 hours/week As a Data Engineer supporting tech modernization efforts for one of America's leading utilities, you will conduct data integration and analytics projects that automate data collection, transformation, storage, delivery, and reporting processes. Your work will ensure optimization of data retrieval and processing, including performance tuning, delivery design for downstream analytics, machine learning modeling, feature engineering, and reporting. You'll work across multiple areas/teams to develop data integration methods that advance enterprise data and reporting capabilities. If this sounds like you, please keep reading and apply today! What's In It For You...? Get your foot in the door with one of the nation's leading utilities and help drive exciting changes in the energy industry across the country! Career development and coaching from the Brooksource support team, including benefits like healthcare, 401k, and others Minimum Requirements: Bachelor’s degree in Computer Science or related field SQL database design and query optimization experience Intermediate-level proficiency in business intelligence tools and data blending tools (e.g., Microsoft Power Platform, Power BI, etc.)
Day-to-Day Responsibilities: Facilitates data engineering projects and collaborates with stakeholders to formulate end-to-end solutions, including data structure design to feed downstream analytics, machine learning modeling, feature engineering, prototype development, and reporting. Works with business units, data architects, server engineers and data scientists to identify relevant data, analyze data quality, design data requirements, and develop prototypes for proofs of concept. Develops data sets and automated pipelines that support data requirements for process improvement and operational efficiency metrics."," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-neteffects-3478646345?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=JSts4o%2BsFElBSj9%2FQMkmNw%3D%3D&position=22&pageNum=7&trk=public_jobs_jserp-result_search-card," neteffects ",https://www.linkedin.com/company/neteffects?trk=public_jobs_topcard-org-name," United States "," 4 weeks ago "," Over 200 applicants ","We at neteffects are looking for a Sr. Data Engineer for our direct enterprise client. Work will be on CST hours. No C2C. Sponsorship available for visa holders. A Data engineer should embrace the challenge of dealing with petabytes or even exabytes of data daily in a high-throughput API/microservice ecosystem. A Data engineer understands how to apply technologies to solve big data problems and to develop innovative big data solutions. The Data engineer should be able to develop prototypes and proofs of concept for the selected solutions. This role will drive the engineering and building of geospatial data assets to support our client’s Digital Platform and R&D product pipeline. Basic Requirements: • BSc degree in Computer Science or relevant job experience. • Minimum of 5 years of experience with Python/Java development languages. 
• Knowledge of different programming or scripting languages like Python, Scala, Go, Java, JavaScript, R, SQL, Bash. • Experience developing HTTP APIs (REST and/or GraphQL) that serve up data in an open-source technology, preferably in a cloud environment. • Ability to build and maintain modern cloud architecture, e.g. AWS, Google Cloud, etc. • Proven experience working with ETL concepts of data integration, consolidation, enrichment, and aggregation. Design, build and support stable, scalable data pipelines or ETL processes that cleanse, structure and integrate big data sets from multiple data sources and provision to integrated systems and Business Intelligence reporting. • Experience working with PostgreSQL/PostGIS. • Experience with streaming sensor/IoT data, e.g. Kafka. • Experience with code versioning and dependency management systems such as GitHub, SVT, and Maven. • Proven success utilizing Docker to build and deploy within a CI/CD environment, preferably using Kubernetes. Desirable qualifications: • MSc in Computer Science or related field. • Knowledge of the open-source geospatial tech stack, such as GeoServer. • Highly proficient (6 years) in Python. Location: Saint Louis, MO, or Remote from another location. All qualified applicants will receive consideration for employment without regard to age, race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran."," Mid-Senior level "," Contract "," Business Development and Sales "," Staffing and Recruiting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-brooksource-3487266424?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=GkZmjN0j4g0KI8rLn5QsJw%3D%3D&position=23&pageNum=7&trk=public_jobs_jserp-result_search-card," Brooksource ",https://www.linkedin.com/company/brooksource?trk=public_jobs_topcard-org-name," Greater St. Louis "," 1 week ago "," 136 applicants ","Title: Data Engineer Location: St. 
Louis OR Denver (hybrid work schedule) Engagement: Contract to Hire Summary Our Fortune 500 consumer goods client has an exciting opportunity to be part of their expanding MarTech Loyalty team on a mission to create best-in-class experiences for their loyalty customers on their new customer Loyalty platform, the foundation for multiple future initiatives. This is a Technology Engineering role focused on designing, developing, deploying, and maintaining digital products, features, and enhancements that meet business requirements with scalability, reliability, and performance. The ideal candidate will be able to work in a fast-paced environment, with a sense of urgency to deliver projects within marketing technology. Day-to-Day responsibilities: Meaningfully relate data for key products across the company Daily interactions with Product, Data Science and Software Engineering teams to iterate quickly and constantly change/update/add visualizations to our reporting products. Build data models using the latest cloud technologies that support fast, low-maintenance and scalable problem solving. Create sustainable pipelines and support Data Science and the Front End team on all data needs. Top Skills: Bachelor's or Master’s degree in Computer Science, or related engineering field. Relevant real-world experience developing scalable Data Assets Experience in designing user experiences and building data models on cloud technology. Experience with tuning databases. Experience with high-velocity CI/CD systems, Agile ways of working and A/B testing to accelerate iteration cycles. Technology Skills: SQL, Python, Databricks, 
Snowflake, Azure Synapse, Azure DevOps, Azure Analysis Services, Power BI Dataflows, Star schema"," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-dynpro-inc-3489129681?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=%2F4pU1u3wwYamRwOhS60xsQ%3D%3D&position=24&pageNum=7&trk=public_jobs_jserp-result_search-card," DynPro Inc. ",https://www.linkedin.com/company/dynpro-inc?trk=public_jobs_topcard-org-name," Dallas, TX "," 3 weeks ago "," 136 applicants "," Role: Data Engineers Location: Dallas, TX (US/Remote & India) Duration: 12 months 1. Experience in Data Architectures, ODSs, Data Warehouses and methodologies 2. Able to address issues of data migration (validation, cleanup and mapping), and understanding the importance of data dictionaries 3. Hands-on experience with Cloud SQL and Snowflake 4. Exposure to traditional and real-time ETL "," Entry level "," Contract "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-firstpro-inc-3516807580?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=2GhqXAAh1bPIsK9EmRFQ9A%3D%3D&position=25&pageNum=7&trk=public_jobs_jserp-result_search-card," firstPRO, Inc ",https://www.linkedin.com/company/firstpro?trk=public_jobs_topcard-org-name," Orlando, FL "," 2 days ago "," 90 applicants "," firstPRO is now accepting resumes for a Data Engineer position located in Orlando, FL. This role will operate on a hybrid model and require being onsite 3 days per week. This is a direct-hire role that comes with a salary, an excellent benefits package, and a generous yearly bonus. The Data Engineer will play a pivotal role in operationalizing the most urgent data and analytics initiatives for the business. 
The bulk of the data engineer’s work would be building, managing, and optimizing data pipelines and then moving these data pipelines effectively into production for key data and analytics consumers. Data engineers also need to guarantee compliance with data governance and data security requirements while creating, improving and operationalizing these integrated and reusable data pipelines. This would enable faster data access, integrated data reuse, and vastly improved time-to-solution for client’s data and analytics initiatives. This role will require both creative and collaborative working with IT and the wider business. It will involve evangelizing effective data management practices and promoting a better understanding of data and analytics. The data engineer will also be tasked with working with key business stakeholders, IT experts, and commercial real estate experts to plan and deliver optimal analytics and data science solutions. Responsibilities Serve as a key contributor to identify, evaluate, and execute the development and implementation of data infrastructure. Perform analysis on large datasets to make and implement recommendations for maximizing customer experience. Assists in the design and implementation of relational databases and structures as needed. Works collaboratively with Application development teams throughout the product development process, to ensure optimal usage of SQL Server for storage and transaction processing. Build data pipelines with Azure Data Factory (ADF) to feed Microsoft SQL Server Business Intelligence stack including relational databases, data cubes (tabular/multidimensional), SQL Reporting, Power BI, and other tools as needed. Writes, refines, and optimizes T-SQL code for maximum performance, reliability, and maintainability. Participates in developing cutting-edge storage design structures and data processing flows. Creates documentation for both new and existing code. 
Participate in ensuring compliance and governance during data use: It will be the responsibility of the Data Engineer to ensure that the data users and consumers use the data provisioned to them responsibly through data governance and compliance initiatives. Participate in logic and technical design, peer code reviews, unit testing, and documentation of code developed. Participate in agile development ceremonies and interact with both business analysts and end-users to come up with well-performing and scalable solutions. Requirements: Bachelor's degree in computer science, statistics, applied mathematics, data management, information systems, information science, or a related quantitative field or equivalent work experience is required. 5+ years of experience developing SQL/T-SQL, including single-row and multi-row functions, complex joins, Common Table Expressions (CTEs), Procedures, Packages, ETL jobs, and data lineages in ADF. Strong ability to design, build and manage data pipelines for data structures encompassing data transformation, data models, schemas, metadata, and workload management. The ability to work with both IT and business in integrating analytics and data science output into business processes and workflows. Strong experience with popular database programming languages including SQL for relational databases and knowledge of upcoming NoSQL/Hadoop-oriented databases like MongoDB, Cosmos DB, and others for nonrelational databases. Strong experience in working with large, heterogeneous datasets in building and optimizing data pipelines, pipeline architectures, and integrated datasets using traditional data integration technologies. These should include ETL/ELT, data replication/CDC, message-oriented data movement, and API design. Strong experience in working with and optimizing existing ETL processes and data integration and data preparation flows and helping to move them into production. 
Experience working with popular data discovery, analytics, and BI software tools like Power BI, Tableau, Alteryx, and others. Experience with the Microsoft SQL Server Business Intelligence stack (SSAS, SSIS, SSRS), and Excel/Power Query. Ability to apply DevOps principles to data pipelines to improve the communication, integration, reuse, and automation of data flows between data managers and consumers across an organization. Experience with agile and lean development methodologies (SCRUM/Lean). Must be a self-starter with excellent problem-solving skills and excellent written/verbal communication skills. Knowledge and experience with cloud data management and analytics with Microsoft Azure or Amazon AWS are strongly preferred. Excellent interpersonal and organizational skills. Commercial real estate industry knowledge or previous experience would be a plus. "," Mid-Senior level "," Full-time "," Information Technology "," Staffing and Recruiting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-indotronix-avani-group-3526739461?refId=uz94oW0FbASgTIEHEb9nNQ%3D%3D&trackingId=PZdyjqwN3ua2DZT%2BYKpkZg%3D%3D&position=5&pageNum=6&trk=public_jobs_jserp-result_search-card," Indotronix Avani Group ",https://www.linkedin.com/company/indotronix-avani-group?trk=public_jobs_topcard-org-name," Allen, TX "," 13 hours ago "," 67 applicants ","Remote* The consumer loyalty data science team consists of data scientists who create data science models on the topics of churn, expected lifetime, customer lifetime value, upgrade propensity, and customer satisfaction. Algorithmic approaches include binary classification, multi-class classification, time series forecasting, regressions and random forests. 
Basic Qualifications 2+ years of experience as a Data Engineer in a similar role Experience with data modeling, warehousing and building pipelines Proven experience in designing and implementing comprehensive data pipelines for a variety of flows (data integration across systems, ETL processes, machine learning infrastructures) Proficient in SQL and Python Preferred Qualifications The more experience, the better, when it comes to the AWS ecosystem (e.g. GLUE, Athena, S3, Lambda, IAM, SageMaker, CloudWatch, API Gateway), Delta Lake, PySpark, Apache Spark, Airflow, APIs (REST, SOAP, RPC), streaming event data Understanding of system architecture and experience with large distributed systems (familiarity with the Apache Spark ecosystem) Has prior experience working with Teradata SQL, MS SQL, and IBM DB2 or similar dialects Has prior experience in continuous integration and continuous deployment of large scalable data systems Has prior experience working with and supporting a data science team Familiarity with working on data involving telecoms, mobile providers, ISPs or cable companies Top 3 Must-Haves (Hard Skills) Intermediate experience building ETL pipelines in the AWS ecosystem (e.g., GLUE, Athena, S3, Lambda, IAM, SageMaker, CloudWatch, API Gateway). Basic experience writing PySpark. Basic experience in dashboard creation in Microsoft Power BI Nice-To-Haves (Hard Skills) Experience using APIs (REST, SOAP, RPC). Teradata and MS SQL. We would prefer those with the AWS technical certificates ""Solution Architect Associate"" and ""Data Analytics Specialty"". Must-Haves (Soft Skills) Strong verbal and written communication skills. 
Nice-To-Haves (Soft Skills) Experience navigating corporate bureaucracy"," Mid-Senior level "," Contract "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-fetch-3506658705?refId=uz94oW0FbASgTIEHEb9nNQ%3D%3D&trackingId=zVaxUEwJhdLf0mgBul97mg%3D%3D&position=9&pageNum=6&trk=public_jobs_jserp-result_search-card," Fetch ",https://www.linkedin.com/company/fetch-rewards?trk=public_jobs_topcard-org-name," Madison, WI "," 1 month ago "," 33 applicants ","What We're Building And Why We're Building It. There's a reason Fetch is ranked top 10 in Shopping in the App Store. Every day, millions of people earn Fetch Points buying brands they love. From the grocery aisle to the drive-through, Fetch makes saving money fun. We're more than just a build-first tech unicorn. We're a revolutionary shopping platform where brands and consumers come together for a loyalty-driving, points-exploding, money-saving party. Join a fast-growing, founder-led technology company that's still only in its early innings. Ranked one of America's Best Startup Employers by Forbes two years in a row, Fetch is building a people-first culture rooted in trust and accountability. How do we do it? By empowering employees to think big, challenge ideas, and find new ways to bring the fun to Fetch. So what are you waiting for? Apply to join our rocketship today! Fetch is an equal employment opportunity employer. The Role: The data engineering team is working to use all the latest technology to build a performant, reliable, and scalable platform for delivering data. The work of data engineers is to enable all stakeholders to access and use endless amounts of data that come from an ever-growing variety of data sources. At Fetch, our motto is to make data processing appear seamless and effortless for both producers and consumers of data. 
With a goal of having world class data availability with terabytes of daily data, data engineering is critical to Fetch's success. The ideal candidate: Python programming skills Solid SQL skills Familiarity with Unix systems, shell scripting, and Git Experience with relational (SQL), non-relational (NoSQL), and/or object data stores (e.g., Snowflake, MongoDB, S3, HDFS, Postgres, Redis, DynamoDB) Experience working with streaming data in Kafka and Flink Interest in building and experimenting with different tools and tech, and sharing your learnings with the broader organization The desire to work with other teams in the organization (e.g., Development, Business Intelligence, Data Science) to build tools and solutions that support and help manage data within the Fetch ecosystem Bachelor's degree in Computer Science (or equivalent) Bonus points for: Excellent written and verbal communication skills Familiarity with open source software and dependency management ETL process, data pipeline, and/or microservice development experience Cloud engineering and DevOps skills (e.g., AWS, CloudFormation, Docker) Familiarity with messaging and asynchronous technologies (e.g., SQS, Kinesis, RabbitMQ, Kafka) Big data development skills (e.g., Spark, Hadoop, MPP DW) Experience with visualization tools (e.g., Tableau) At Fetch, we'll give you the tools to feel healthy, happy and secure through: Stock Options for everyone 401k Match: Dollar-for-dollar match up to 4%. Benefits for humans and pets: We offer comprehensive medical, dental and vision plans for everyone including your pets. Continuing Education: Fetch provides $10,000 per year in education reimbursement. Employee Resource Groups: Take part in employee-led groups that are centered around fostering a diverse and inclusive workplace through events, dialogue and advocacy. The ERGs participate in our Inclusion Council with members of executive leadership. 
Paid Time Off: On top of our flexible PTO, Fetch observes 9 paid holidays, including Juneteenth and Indigenous People's Day, as well as our year-end week-long break. Robust Leave Policies: 18 weeks of paid parental leave for primary caregivers, 12 weeks for secondary caregivers, and a flexible return to work schedule. Hybrid Work Environment: Collaborate with your team in one of our stunning offices in Madison, Birmingham, or Chicago. We'll ensure you are equally equipped with the hardware and software you need to get your job done in the comfort of your home."," Entry level "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-elsdon-consulting-ltd-3486257519?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=iJ6GNvKwxKFgIt0ZhQIUiw%3D%3D&position=7&pageNum=7&trk=public_jobs_jserp-result_search-card," Elsdon Consulting ltd ",https://uk.linkedin.com/company/elsdon-consulting?trk=public_jobs_topcard-org-name," New Hampshire, United States "," 4 weeks ago "," 101 applicants ","Are you a Data Engineer? Do you enjoy working within the full project lifecycle? Would you be excited at the opportunity to work for a key manufacturer within the Aerospace supply chain? If so, this may be the opportunity you have been looking for... This consultancy is looking for a dedicated and passionate Data Engineer to help manage projects from the conceptual stage all the way through to deployment. This role would be suited to someone who enjoys being involved in the full project lifecycle and communicating with senior stakeholders! At this time, this role is only open to US Citizens / Green card holders and the role will be largely remote. 
The Data Engineer will need: Bachelor's degree Experience leading projects Strong understanding of AWS tech stack Experience of building ETL pipelines The responsibilities of the Data Engineer will be: Make data available to stakeholders using languages like R, Python Be a real technical problem solver, able to create solutions for complex requirements with a hands-on attitude Building out Data Warehouses, Lakes etc. So if you are a Data Engineer and have been looking for a new position, or this role simply caught your eye, please do apply today or contact me directly at max.morrell@elsdonconsulting.com"," Mid-Senior level "," Full-time "," Information Technology "," Aviation and Aerospace Component Manufacturing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-summit2sea-consulting-a-cbeyondata-company-3518256731?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=cHNZvizErLMprTvZIwBqfA%3D%3D&position=8&pageNum=7&trk=public_jobs_jserp-result_search-card," Summit2Sea Consulting, a cBEYONData Company ",https://www.linkedin.com/company/summit2sea-consulting?trk=public_jobs_topcard-org-name," Arlington, VA "," 1 month ago "," Be among the first 25 applicants ","Have you been looking to shift your career into high gear? This is your opportunity to take your ambitions and convert them into a solid career in a supportive and innovative environment! Summit2Sea is looking for a Data Engineer. Summit2Sea is a technology consulting firm run by hands on technologists that combines people, process and technology to deliver innovative solutions to our clients. We have been named on The Washington Post's list of top work places for the past 3 years! We invest in our biggest asset - our people! You can be a part of a winning team that will contribute to your career growth. 
Impactful Work You'll Do: Support data collection, ingestion, validation, and loading of optimized data in the appropriate data stores Work on a team made up of analyst(s), developer(s), data scientist(s), and a product lead Identify and implement solutions for the data requirements, including building pipelines to collect data from disparate, external sources, implementing rules to validate that expected data is received, cleansed, transformed, and massaged into an optimized output format for the data store Transform/clean data to create useful formats for data science, design and build data systems and process data in a way that allows data scientists to extract value from it Maintain the Big Data infrastructure and create reliable data pipelines Perform data acquisition, data transformation, and data modeling Perform validation and analytics in support of the client requirements and evolve solutions through automation, optimizing performance with minimal human involvement Monitor pipeline status and performance, and troubleshoot issues while working on improvements to ensure the solution is the very best version to address the customer need Focus specifically on the development and maintenance of scalable data stores that supply big data in forms needed for business analysis Apply advanced consulting skills, extensive technical expertise and full industry knowledge to develop innovative solutions to complex problems Requirements Must Have: Ability to represent and uphold our core values of integrity, client service, expertise, adaptability, communication, and teamwork Solid background developing solutions for high-volume, low-latency applications and can operate in a fast-paced, highly collaborative environment Distributed computing understanding and experience with SQL, Spark, ETL Appreciates the opportunity to be independent, creative, and challenged Curious mind, passionate about solving problems quickly and bringing innovative ideas to the table Able to work 
without considerable direction and may mentor or supervise other team members Experience with SQL Experience developing data pipelines using modern Big Data ETL technologies like NiFi or StreamSets Experience with a modern programming language such as Python or Java Experience working in a big data and cloud environment Ability to obtain a Secret Clearance or higher Nice to Have: Experience working in an agile development environment Ability to quickly learn technical concepts and communicate with multiple functional groups Ability to display a positive, can-do attitude to solve the challenges of tomorrow Preferred experience at the respective command with an understanding of analytical and data pain points and challenges across the J-Codes Benefits Upper Tier Compensation includes a base salary and bonuses based upon: performance, business development, employee referrals and knowledge sharing. We value the individual and share our company's success across our team. Summit2Sea is committed to offering our employees a benefits package that is competitive and comprehensive enough to meet their goals and needs. As a valued member of the S2S team, employees are provided with a collection of benefits to include paid holidays, health and dental care to name a few. Compensation Components Competitive Base Salary Quarterly Recruiting Bonus New Sales Bonus Knowledge Contribution Bonus Benefit Components Paid Holidays Vacation/Sick Leave/Personal Time Off Health Insurance Dental Insurance Life Insurance 401K Summit2Sea Consulting is an Equal Opportunity Employer (EOE) and E-Verify employer. Qualified applicants are considered for employment without regard to age, race, color, religion, sex, national origin, sexual orientation, disability, or veteran status. If you need assistance or an accommodation during the application process because of a disability, it is available upon request. 
The company is pleased to provide such assistance, and no applicant will be penalized as a result of such a request."," Not Applicable "," Full-time "," Analyst "," Technology, Information and Internet " Data Engineer,United States,Data Engineer ,https://www.linkedin.com/jobs/view/data-engineer-at-trepp-inc-3515235383?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=dg8QLdvKW1TKYPqwI%2Fn8mA%3D%3D&position=9&pageNum=7&trk=public_jobs_jserp-result_search-card," Trepp, Inc. ",https://www.linkedin.com/company/trepp?trk=public_jobs_topcard-org-name," New York, NY "," 13 hours ago "," Over 200 applicants ","The Data Engineer is responsible for leading the migration of legacy systems to the Data Lake and providing technical leadership in support of Trepp’s initiatives in data ingestion and pipeline automation, with a focus on the design and implementation of Spark/EMR and streaming platforms. The Data Engineer reports into the Data Engineering organization. The ideal candidate will have a bachelor’s degree in Computer Science or closely related subject; an advanced degree is preferred. In addition, 7 or more years’ experience in migrating complex IT systems in enterprise organizations. The position requires strong technical skills and must be able to collaborate effectively with a group of high performing individuals. The position requires an individual who can collaborate on setting mid/long term goals and objectives, then be self-directed to estimate timelines and take ownership of the work, while communicating with stakeholders on a regular basis. The data engineering role sets an example for junior team members through modelling best practices and provides guidance during weekly team calls, feedback through code reviews etc., and generally supports the knowledge and skills development of junior team members. 
Data Engineering, Design and Development Requirements: Demonstrate knowledge of batch and streaming pipeline technologies Be responsible for core analytics, data lake, and data pipeline products. Demonstrate knowledge of Data Quality and Data Governance. Build data applications and products, and integrate with third-party data and technology platforms. Demonstrate expertise in comprehension and construction of complex SQL queries Act as a Subject Matter Expert to the organization for Trepp’s ingestion frameworks, including AWS and future providers, networking, provisioning, and management Demonstrate leadership ability to back decisions with research and the “why,” and articulate several options, the pros and cons for each, and a recommendation Maintain overall industry knowledge of the latest trends, technology, etc. Qualifications: Bachelor’s degree in computer science, systems analysis or a related study, or equivalent experience 7+ years of experience with Python, Java, or Scala and AWS (Lambda, ECS, EMR) 7+ years of experience with SQL (DDL, DML, complex queries) Experience with SQL RDBMS implementations and RDS is a plus Experience with AWS enterprise implementations (Lambda, ECS, Spark, Kinesis, DMS) Experience with Spark/EMR (or similar) implementations Exposure to multiple, diverse technologies and processing environments Knowledge of components within a technical architecture Experience with Agile and SDLC, Git workflows, and CI/CD Strong preference for process and documentation Strong understanding of network architecture and application development methodologies. 
Salary Range: Base salary starting from $180k plus bonus eligible Benefits and Perks: Base + target bonus compensation structure Medical, Dental, Vision insurance 401K (with employer match) Life insurance, long-term disability, short-term disability all covered by the company Flexible paid time off (PTO) Sixteen (16) weeks paid primary caregiver leave (Biological, adoptive, and foster parents are all eligible) Four (4) weeks paid parental leave Pet insurance Laptop + WFH equipment Career progression plan Pre-tax commuter benefit with company subsidy (For NYC-office based employees only) Involvement in Diversity and Inclusion programs Fun company events and volunteering opportunities Workplace Policy: NYC, Dallas, PA, and London office-based positions: Trepp’s offices follow a 3-2 hybrid-working policy with the expectation of in-office work on Tuesday-Thursday and the option to work from home on Monday and Friday. Remote positions: Employees in remote roles have the option of working remotely and may occasionally travel to a Trepp office or elsewhere for required meetings or team-building events. Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Trepp (i.e., H1-B visa, F-1 visa (OPT), TN visa or any other non-immigrant status). About Trepp, Inc. Trepp, Inc., founded in 1979, is a leading provider of data, analytics, and technology solutions to the global securities and investment management industries. Trepp specifically serves three key sectors: structured finance, commercial real estate, and banking to help market participants meet their objectives for surveillance, credit risk management, and investment performance. Trusted by the industry for the accuracy of its proprietary data, Trepp provides clients sophisticated, comprehensive models and analytics. Trepp is wholly owned by Daily Mail and General Trust (DMGT). 
Trepp, Inc. is an equal opportunity / affirmative action employer, complying with all laws governing employment in each jurisdiction in which operating, and provides equal opportunity to all applicants and employees. All qualified applicants will be considered without regard to race, color, religion, gender, national origin, age, disability, marital or protected veteran status, sexual orientation, gender identity and other status protected by applicable laws."," Mid-Senior level "," Full-time "," Engineering and Information Technology "," Information Services and Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-nike-3506680547?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=3XvPEMloYDsBwI3%2Bd6gpAw%3D%3D&position=11&pageNum=7&trk=public_jobs_jserp-result_search-card," Nike ",https://www.linkedin.com/company/nike?trk=public_jobs_topcard-org-name," Beaverton, OR "," 1 month ago "," 43 applicants ","Work options: Remote Data Engineer contract role 4month+ contract Remote from US Pay Rate Range: $60-$70/hr Skills Specific application programming, development methods, techniques and standards Proficient in application development tools Software tools, which automate or assist part of the development process. Has a good knowledge of a wide area of IS concepts and practice, including the systems development life cycle, with a deep knowledge of at least one area of specialization. Familiar with database software Familiar with operating infrastructure Knowledge of the IS infrastructure (hardware, databases, operating systems, local area networks etc) used within the organization. Microsoft Certified Solution Developer (MCSD) (www.microsoft.com) covers Microsoft Windows operating systems and application development. 
Information Systems Examination Board (www.iseb.org.uk) Diploma in Rapid Application Development Proven ability to work in a Test Driven Development manner Responsible for development of application using object-oriented Java tools and methodologies. Responsible for customization and configuration of WC components and solutions. Proficient in Microsoft Windows Server technology including Windows NT4, Windows 2000 & Windows 2003, Exchange 5.5, Exchange 2000 and Exchange 2003, Active Directory, SQL Server, Terminal Server and Citrix Server. Working knowledge of FTP, Telnet, Ping, DNS, DHCP, WINS, SNMP, HTTP and LDAP. Frequent use and application of technical standards, principles, theories, concepts, and techniques. Provides solutions to a variety of technical problems of moderate scope and complexity. Responsible for understanding and documenting technical specifications at the javadoc and design/specification level. Responsible for code management and utilization of code management software. Responsible for understanding and utilizing tools such as Rational Rose, Eclipse, Clearcase, and memory/performance analysis tools (like NetBeans). Includes experience with Cloudera, Apache Hadoop, Hortonworks, mongoDB, Java, Apache Cassandra, Apache Hive, Hadoop Distributed File System (HDFS), Cloudera Impala, Apache Kafka, NoSQL Database, MapReduce, etc. Duties: same as the skills listed above."," Entry level "," Contract "," Information Technology "," Retail " Data Engineer,United States,Data Engineer - Remote,https://www.linkedin.com/jobs/view/data-engineer-remote-at-georgia-it-inc-3523778195?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=8Kj1aphcapRX4F7%2F1i%2BCVw%3D%3D&position=12&pageNum=7&trk=public_jobs_jserp-result_search-card," Georgia IT, Inc. 
",https://www.linkedin.com/company/georgia-it-inc-?trk=public_jobs_topcard-org-name," United States "," 2 days ago "," Be among the first 25 applicants ","Job Title - Data Engineer Location - Remote Duration - 12 Plus Months Rate - DOE U.S. Citizens and those authorized to work in the U.S. are encouraged to apply. We are unable to sponsor at this time. Job Description The role is for a data engineer who actively participates in scrum meetings and works on sprint items. The engineer also serves as an escalation point, supporting operations teams on any job/pipeline failures and publishing related issues. Top priorities: Azure (Databricks, Azure Synapse, Azure Databricks migration to Azure Synapse, Pipelines, PySpark), PowerShell scripting, Tabular cubes, DAX, MS SQL, SSIS, MS Cosmos, Power BI, Tableau"," Entry level "," Contract "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer ,https://www.linkedin.com/jobs/view/data-engineer-at-trepp-inc-3506787563?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=Pgc%2Fm8CYI4RLVzCvE%2BXuiw%3D%3D&position=1&pageNum=8&trk=public_jobs_jserp-result_search-card," Trepp, Inc. ",https://www.linkedin.com/company/trepp?trk=public_jobs_topcard-org-name," Dallas-Fort Worth Metroplex "," 13 hours ago "," Over 200 applicants ","The Data Engineer is responsible for leading the migration of legacy systems to the Data Lake and providing technical leadership in support of Trepp’s initiatives in data ingestion and pipeline automation, with a focus on the design and implementation of Spark/EMR and streaming platforms. The Data Engineer reports into the Data Engineering organization. The ideal candidate will have a bachelor’s degree in Computer Science or a closely related subject; an advanced degree is preferred. In addition, the role requires 7 or more years’ experience migrating complex IT systems in enterprise organizations. 
The position requires strong technical skills and the ability to collaborate effectively with a group of high-performing individuals. The position requires an individual who can collaborate on setting mid/long-term goals and objectives, then be self-directed to estimate timelines and take ownership of the work, while communicating with stakeholders on a regular basis. The data engineering role sets an example for junior team members through modelling best practices and provides guidance during weekly team calls, feedback through code reviews, etc., and generally supports the knowledge and skills development of junior team members. Data Engineering, Design and Development Requirements: Demonstrate knowledge of batch and streaming pipeline technologies Be responsible for core analytics, data lake and data pipeline products. Demonstrate knowledge of Data Quality and Data Governance. Build data applications and products, and integrate with third-party data and technology platforms. Demonstrate expertise in comprehension and construction of complex SQL queries Act as a Subject Matter Expert to the organization for Trepp’s ingestion frameworks, including AWS and future providers, networking, provisioning, and management Demonstrate leadership ability to back decisions with research and the “why,” and articulate several options, the pros and cons for each, and a recommendation Maintain overall industry knowledge on latest trends, technology, etc. 
Qualifications: Bachelor’s degree in computer science, systems analysis or a related study, or equivalent experience 7+ years of experience with Python or Java, or Scala and AWS (Lambda, ECS, EMR) 7+ years of experience with SQL (DDL, DML, complex queries) Experience with SQL RDBMS implementations and RDS is a plus Experience with AWS enterprise implementations (Lambda, ECS, Spark, Kinesis, DMS) Experience with Spark / EMR (or similar) implementations Exposure to multiple, diverse technologies and processing environments Knowledge of components within a technical architecture Experience with Agile and SDLC, Git workflows, and CI/CD Strong preference for process and documentation Strong understanding of network architecture and application development methodologies. Salary Range: Base salary starting from $180k, plus bonus eligibility Benefits and Perks: Base + target bonus compensation structure Medical, Dental, Vision insurance 401K (with employer match) Life insurance, long term disability, short term disability all covered by the company Flexible paid time off (PTO) Sixteen (16) weeks paid primary caregiver leave (Biological, adoptive, and foster parents are all eligible) Four (4) weeks paid parental leave Pet insurance Laptop + WFH equipment Career progression plan Pre-tax commuter benefit with company subsidy (For NYC-office based employees only) Involvement in Diversity and Inclusion programs Fun company events and volunteering opportunities Workplace Policy: NYC, Dallas, PA, and London office-based positions: Trepp’s offices follow a 3-2 hybrid-working policy with the expectation of in-office work on Tuesday-Thursday and the option to work from home on Monday and Friday. Remote positions: Employees in remote roles have the option of working remotely and may occasionally travel to a Trepp office or elsewhere for required meetings or team-building events. 
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Trepp (e.g., H-1B visa, F-1 visa (OPT), TN visa, or any other non-immigrant status). About Trepp, Inc. Trepp, Inc., founded in 1979, is a leading provider of data, analytics, and technology solutions to the global securities and investment management industries. Trepp specifically serves three key sectors: structured finance, commercial real estate, and banking to help market participants meet their objectives for surveillance, credit risk management, and investment performance. Trusted by the industry for the accuracy of its proprietary data, Trepp provides clients sophisticated, comprehensive models and analytics. Trepp is wholly owned by Daily Mail and General Trust (DMGT). Trepp, Inc. is an equal opportunity / affirmative action employer, complying with all laws governing employment in each jurisdiction in which it operates, and provides equal opportunity to all applicants and employees. All qualified applicants will be considered without regard to race, color, religion, gender, national origin, age, disability, marital or protected veteran status, sexual orientation, gender identity and other status protected by applicable laws."," Mid-Senior level "," Full-time "," Engineering and Information Technology "," Information Services and Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-firstpro-inc-3518312462?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=BTorpSctJKTW2L6xpBLucg%3D%3D&position=5&pageNum=8&trk=public_jobs_jserp-result_search-card," firstPRO, Inc ",https://www.linkedin.com/company/firstpro?trk=public_jobs_topcard-org-name," Orlando, FL "," 1 week ago "," 60 applicants ","firstPRO is now accepting resumes for a Data Engineer position located in Orlando, FL. This role will operate on a hybrid model and require being onsite 3 days per week. This is a direct hire role that comes with salary, excellent benefits package, and generous yearly bonus. Must reside in FL. The Data Engineer will play a pivotal role in operationalizing the most-urgent data and analytics initiatives for the business. The bulk of the data engineer’s work would be building, managing, and optimizing data pipelines and then moving these data pipelines effectively into production for key data and analytics consumers. Data engineers also need to guarantee compliance with data governance and data security requirements while creating, improving and operationalizing these integrated and reusable data pipelines. This would enable faster data access, integrated data reuse, and vastly improved time-to-solution for client’s data and analytics initiatives. 
This role will require both creative and collaborative working with IT and the wider business. It will involve evangelizing effective data management practices and promoting a better understanding of data and analytics. The data engineer will also be tasked with working with key business stakeholders, IT experts, and commercial real estate experts to plan and deliver optimal analytics and data science solutions. Responsibilities Serve as a key contributor to identify, evaluate, and execute the development and implementation of data infrastructure. Perform analysis on large datasets to make and implement recommendations for maximizing customer experience. Assists in the design and implementation of relational databases and structures as needed. Works collaboratively with Application development teams throughout the product development process, to ensure optimal usage of SQL Server for storage and transaction processing. Build data pipelines with Azure Data Factory (ADF) to feed Microsoft SQL Server Business Intelligence stack including relational databases, data cubes (tabular/multidimensional), SQL Reporting, Power BI, and other tools as needed. Writes, refines, and optimizes T-SQL code for maximum performance, reliability, and maintainability. Participates in developing cutting-edge storage design structures and data processing flows. Creates documentation for both new and existing code. Participate in ensuring compliance and governance during data use: It will be the responsibility of the Data Engineer to ensure that the data users and consumers use the data provisioned to them responsibly through data governance and compliance initiatives. Participate in logic and technical design, peer code reviews, unit testing, and documentation of code developed. Participate in agile development ceremonies and interact with both business analysts and end-users to come up with well-performing and scalable solutions. 
Requirements Bachelor's degree in computer science, statistics, applied mathematics, data management, information systems, information science, or a related quantitative field or equivalent work experience is required. 5+ years of experience developing SQL/T-SQL, including single-row and multi-row functions, complex joins, Common Table Expressions (CTEs), procedures, packages, ETL jobs, and data lineages in ADF. Strong ability to design, build and manage data pipelines for data structures encompassing data transformation, data models, schemas, metadata, and workload management. The ability to work with both IT and business in integrating analytics and data science output into business processes and workflows. Strong experience with popular database programming languages including SQL for relational databases and knowledge of upcoming NoSQL/Hadoop-oriented databases like MongoDB, Cosmos DB, and others for nonrelational databases. Strong experience in working with large, heterogeneous datasets in building and optimizing data pipelines, pipeline architectures, and integrated datasets using traditional data integration technologies. These should include ETL/ELT, data replication/CDC, message-oriented data movement, and API design. Strong experience in working with and optimizing existing ETL processes and data integration and data preparation flows and helping to move them into production. Experience working with popular data discovery, analytics, and BI software tools like Power BI, Tableau, Alteryx, and others. Experience with the Microsoft SQL Server Business Intelligence stack (SSAS, SSIS, SSRS), and Excel/Power Query. Ability to apply DevOps principles to data pipelines to improve the communication, integration, reuse, and automation of data flows between data managers and consumers across an organization. Experience with agile and lean development methodologies (SCRUM/Lean). 
Must be a self-starter with excellent problem-solving skills and excellent written/verbal communication skills. Knowledge and experience with cloud data management and analytics with Microsoft Azure or Amazon AWS are strongly preferred. Excellent interpersonal and organizational skills. Commercial real estate industry knowledge or previous experience would be a plus."," Mid-Senior level "," Full-time "," Information Technology "," Staffing and Recruiting " Data Engineer,United States,Data Engineer - ETL/BI Developer,https://www.linkedin.com/jobs/view/data-engineer-etl-bi-developer-at-avalara-3511432938?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=Qh0XTVlDY9qP3xyEjxSGew%3D%3D&position=9&pageNum=8&trk=public_jobs_jserp-result_search-card," Avalara ",https://www.linkedin.com/company/avalara?trk=public_jobs_topcard-org-name," North Carolina, United States "," 1 week ago "," Over 200 applicants ","Designs, builds and oversees the deployment and operation of technology architecture, solutions and software to capture, manage, store and utilize structured and unstructured data from internal and external sources. 
Establishes and builds processes and structures based on business and technical requirements to channel data from multiple inputs, route appropriately and store using any combination of distributed (cloud) structures, local databases, and other applicable storage forms as required. Develops technical tools and programming that leverage artificial intelligence, machine learning and big-data techniques to cleanse, organize and transform data and to maintain, defend and update data structures and integrity on an automated basis. Creates and establishes design standards and assurance processes for software, systems and applications development to ensure compatibility and operability of data connections, flows and storage requirements. Reviews internal and external business and product requirements for data operations and activity and suggests changes and upgrades to systems and storage to accommodate ongoing needs. Essential Duties and Responsibilities: Translate business requirements into specifications that will be used to drive data store/data warehouse/data mart design and configuration. Use ETL tools to load data stores/data warehouses. Provide support as required to ensure the viability and performance of enterprise data and BI environments for both internal and external users. Ensure proper configuration management and change controls are implemented. Must be able to perform duties with moderate to low supervision. Design and implement technology best practices, guidelines, and repeatable processes. Maintain JIRA, wiki, and project documentation as needed. Technical Skills: 5 years of relevant experience in data management, ETL, data warehousing and BI reporting. Minimum of 2 years' experience with data pipelines (ETL/ELT). Demonstrated understanding of the data lifecycle. Knowledge of data modeling, data ingestion and ETL design. Advanced SQL proficiency. Experience with the Talend Data Integration tool. 
Experience with data visualization tools (Tableau and Power BI a plus). Exposure to source control, CI/CD, and DevOps. Knowledge of AWS technologies (EC2, S3, RDS, Redshift, etc.). Working knowledge of Agile frameworks and Jira. Knowledge of integration with systems like Salesforce, relational databases, REST APIs, and FTP/SFTP. Knowledge of SSRS is a plus. Ability to learn and use new technologies quickly and effectively. Proven ability to communicate effectively with technical and non-technical stakeholders across multiple business units. Excellent analytical and problem-solving skills. Preferred Qualifications: Advanced SQL proficiency. Functional experience with Talend or dbt. Advanced experience with data visualization tools (Tableau and Power BI). Experience with AWS and Snowflake. About Avalara Avalara helps businesses of all sizes achieve compliance with transaction taxes, including sales and use, VAT, excise, communications, and other tax types. The company delivers comprehensive, automated, cloud-based solutions designed to be fast, accurate, and easy to use. The Avalara Compliance Cloud® platform helps customers manage complicated and burdensome tax compliance obligations imposed by state, local, and other taxing authorities throughout the world. Avalara offers more than 700 pre-built connectors into leading accounting, ERP, ecommerce and other business applications, making the integration of tax and compliance solutions easy for customers. Each year, the company processes billions of indirect tax transactions for customers and users, files more than a million tax returns, and manages millions of tax exemption certificates and other compliance documents. Headquartered in Seattle, Avalara has offices across the U.S. and overseas in the U.K., Belgium, Brazil, and India. More information at www.avalara.com. The perks of working at Avalara go beyond amazing physical spaces and a Tiki Bar. We're committed to continued progress in diversity and inclusion. 
As an employee at Avalara, you'll have the opportunity to join resource groups focused on diversity of thought, engage with your local or global community about topics that matter to you and the organization, and receive continued education around inclusion and development. As Avalara grows, so do the voices within it. It's time to hear your voice. Avalara is an Equal Opportunity Employer. All qualified candidates will receive consideration for employment without regard to race, color, creed, religion, age, gender, national origin, disability, sexual orientation, US Veteran status, or any other factor protected by law."," Mid-Senior level "," Full-time "," Engineering "," Software Development " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-aaa-texas-3531403407?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=3R86tCBZixScXylqnHq97g%3D%3D&position=11&pageNum=8&trk=public_jobs_jserp-result_search-card," AAA Texas ",https://www.linkedin.com/company/aaa-texas?trk=public_jobs_topcard-org-name," Coppell, TX "," 3 hours ago "," Be among the first 25 applicants ","As our Data Engineer, you will function as a consultant between our technology unit and other business units to understand their challenges. You’ll ask questions and present ideas that enable them to solve those data problems with code. Our team is 100% remote, but you must be willing to travel 2 days a month for team meetings. To thrive in this role, you must understand how to query and process large data sets of 200,000 rows or more, ideally in SQL. You must also have functional knowledge of at least 1 programming language like Python. Be prepared to share an example of how you have worked with multiple tables to develop a solution to a business problem (real or hypothetical for more junior candidates). What You’ll Do Every day you will begin with a set of technical specifications to begin coding solutions to issues across the business. 
Our team’s success isn’t just implementing the code but seeing that those insights are adopted by the business and that they have the necessary tools to accomplish the goal. More senior data scientists write these technical specifications, while more junior team members translate the specs into functional code. Ask questions and do quality testing to understand if the solution meets the objectives of the business or the goal we set for the business. You will work side-by-side with the business to make sure it works as expected, not just as designed. What You’ll Need To thrive in this role, you must have a passion for problem-solving using data. Your experience querying high volumes of data and performing data analysis must inform how you design solutions that align with the business objectives. Being proficient with SQL and programming languages such as Python and Spark is a must. Experience with data integration from different sources into Big Data systems is preferable. Experience with quality testing and coding. These solutions will deploy across products that are important to our customers and the business. They must be high-quality and functional. A willingness to collaborate. Our best work is done when we work together, either with non-technical or technical leads. You should be interested in learning from others regardless of their role in the organization. You have worked previously with an Agile team or understand these concepts. You expect to participate in daily standup meetings and to complete your projects or stories during our sprints. Remarkable benefits: Health coverage for medical, dental, and vision; 401(k) savings plan with company match AND pension; tuition assistance; PTO for community volunteer programs; wellness program; employee discounts. AAA Texas is part of the largest federation of AAA clubs in the nation. We have 14,000 employees in 21 states helping 17 million members. The strength of our organization is our employees. 
Bringing together and supporting different cultures, backgrounds, personalities, and strengths creates a team capable of delivering legendary, lifetime service to our members. When we embrace our diversity – we win. All of Us! With our national brand recognition, long-standing reputation since 1902, and constantly growing membership, we are seeking career-minded, service-driven professionals to join our team ""Through dedicated employees we proudly deliver legendary service and beneficial products that provide members peace of mind and value.” AAA is an Equal Opportunity Employer "," Entry level "," Full-time "," Information Technology "," Insurance " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-the-judge-group-3491751605?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=uUErrE9iXoxR2zVhx31vZQ%3D%3D&position=12&pageNum=8&trk=public_jobs_jserp-result_search-card," AAA Texas ",https://www.linkedin.com/company/aaa-texas?trk=public_jobs_topcard-org-name," Coppell, TX "," 3 hours ago "," Be among the first 25 applicants "," As our Data Engineer, you will function as a consultant between our technology unit and other business units to understand their challenges. You’ll ask questions and present ideas that enable them to solve those data problems with code. Our team is 100% remote, but you must be willing to travel 2 days a month for team meetings.To thrive in this role, you must understand how to query and process large data sets of 200,000 rows or more, ideally in SQL. You must also have functional knowledge of at least 1 programming language like Python. Be prepared to share an example of how you have worked with multiple tables to develop a solution to a business problem (real or hypothetical for more junior candidates).What You’ll DoEvery day you will begin with a set of technical specifications to begin coding solutions to issues across the business. 
Our team’s success isn’t just implementing the code but seeing that those insights are implemented by the business, and that they have the necessary tools to accomplish the goal. More senior Data Scientists will write these technical specifications while junior applicants do the work of translating specs into functional code. Ask questions and do quality testing to understand if the solution meets the objectives of the business or the goal we set for the business. You will work side-by-side with the business to make sure it works as expected, not just as designed. What You’ll Need To thrive in this role, you must have a passion for problem-solving using data. Your experience querying high volumes of data and doing data analysis must inform how you design solutions that align with the business objectives. Being proficient with SQL and programming languages such as Python and Spark is a must. Experience with data integration from different sources into Big Data systems is preferable. Experience with quality testing and coding. These solutions will deploy across products that are important to our customers and the business. They must be high-quality and functional. A willingness to collaborate. Our best work is done when we work together - either with non-technical or technical leads. You should be interested in learning from others regardless of their role in the organization. You have worked previously with an Agile team or understand these concepts. You expect to participate in daily standup meetings, you’ll complete your projects or stories during our sprints, and Remarkable benefits: Health coverage for medical, dental, vision 401(K) saving plan with company match AND Pension Tuition assistance PTO for community volunteer programs Wellness program Employee discounts AAA Texas is part of the largest federation of AAA clubs in the nation. We have 14,000 employees in 21 states helping 17 million members. The strength of our organization is our employees. Bringing together and supporting different cultures, backgrounds, personalities, and strengths creates a team capable of delivering legendary, lifetime service to our members. When we embrace our diversity – we win. All of Us! With our national brand recognition, long-standing reputation since 1902, and constantly growing membership, we are seeking career-minded, service-driven professionals to join our team ""Through dedicated employees we proudly deliver legendary service and beneficial products that provide members peace of mind and value.” AAA is an Equal Opportunity Employer "," Entry level "," Full-time "," Information Technology "," Insurance " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-pepsico-3485241845?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=J6wiqPbvHVMThRf6w7gF4w%3D%3D&position=13&pageNum=8&trk=public_jobs_jserp-result_search-card," PepsiCo ",https://www.linkedin.com/company/pepsico?trk=public_jobs_topcard-org-name," Plano, TX "," 1 week ago "," 73 applicants ","Overview PepsiCo operates in an environment undergoing immense and rapid change. Big-data and digital technologies are driving business transformation that is unlocking new capabilities and business innovations in areas like eCommerce, mobile experiences and IoT. The key to winning in these areas is being able to leverage enterprise data foundations built on PepsiCo’s global business scale to enable business insights, advanced analytics and new product development. PepsiCo’s Data Management and Operations team is tasked with developing quality data collection processes, maintaining the integrity of our data foundations and enabling business leaders and data scientists across the company to have rapid access to the data they need for decision-making and innovation. 
What PepsiCo Data Management and Operations does: Maintain a predictable, transparent, global operating rhythm that ensures always-on access to high-quality data for stakeholders across the company. Responsible for day-to-day data collection, transportation, maintenance/curation and access to the PepsiCo corporate data asset. Work cross-functionally across the enterprise to centralize data and standardize it for use by business, data science or other stakeholders. Increase awareness about available data and democratize access to it across the company. Job Description As a member of the data engineering team, you will be the key technical expert developing and overseeing PepsiCo's data product build and operations, and you will drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help lead the development of very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners and business users. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems. Responsibilities Active contributor to code development in projects and services. Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products. 
Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance. Responsible for implementing best practices around systems integration, security, performance and data management. Empower the business by creating value through the increased adoption of data, data science and the business intelligence landscape. Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions. Develop and optimize procedures to “productionalize” data science models. Define and manage SLAs for data products and processes running in production. Support large-scale experimentation done by data scientists. Prototype new approaches and build solutions at scale. Research state-of-the-art methodologies. Create documentation for learnings and knowledge transfer. Create and audit reusable packages or libraries. Qualifications 4+ years of overall technology experience that includes at least 3+ years of hands-on software development, data engineering, and systems architecture. 3+ years of experience with Data Lake Infrastructure, Data Warehousing, and Data Analytics tools. 3+ years of experience in SQL optimization and performance tuning, and development experience in programming languages such as Python, PySpark, and Scala. 2+ years of cloud data engineering experience. Fluent with Azure cloud services. Azure Certification is a plus. Experience with integration of multi-cloud services with on-premises technologies. Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines. Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations. Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets. Experience with at least one MPP database technology such as Redshift, Synapse or Snowflake. 
Experience running and scaling applications on cloud infrastructure and containerized services like Kubernetes. Experience with version control systems like GitHub and deployment & CI tools. Experience with Azure Data Factory, Azure Databricks and Azure Machine Learning tools is a plus. Experience with Statistical/ML techniques is a plus. Experience building solutions in the retail or supply chain space is a plus. Understanding of metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including DevOps and DataOps concepts. Familiarity with business intelligence tools (such as Power BI). Education BA/BS in Computer Science, Math, Physics, or other technical fields. Skills, Abilities, Knowledge Excellent communication skills, both verbal and written, along with the ability to influence and demonstrate confidence in communications with senior-level management. Proven track record of leading and mentoring data teams. Strong change manager. Comfortable with change, especially that which arises through company growth. Able to lead a team effectively through times of change. Ability to understand and translate business requirements into data and technical requirements. High degree of organization and ability to manage multiple, competing projects and priorities simultaneously. Positive and flexible attitude to enable adjusting to different needs in an ever-changing environment. Strong leadership, organizational and interpersonal skills; comfortable managing trade-offs. Foster a team culture of accountability, communication, and self-management. Proactively drives impact and engagement while bringing others along. Consistently attain/exceed individual and team goals. Ability to lead others without direct authority in a matrixed environment. Competencies Highly influential and having the ability to educate challenging stakeholders on the role of data and its purpose in the business. 
Understands both the engineering and business side of the Data Products released. Places the user in the center of decision making. Teams up and collaborates for speed, agility, and innovation. Experience with and embraces agile methodologies. Strong negotiation and decision-making skill. Experience managing and working with globally distributed teams. COVID-19 vaccination is a condition of employment for this role. Please note that all such company vaccine requirements provide the opportunity to request an approved accommodation or exemption under applicable law. EEO Statement All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. PepsiCo is an Equal Opportunity Employer: Female / Minority / Disability / Protected Veteran / Sexual Orientation / Gender Identity If you'd like more information about your EEO rights as an applicant under the law, please download the available EEO is the Law & EEO is the Law Supplement documents. View PepsiCo EEO Policy. Please view our Pay Transparency Statement"," Mid-Senior level "," Full-time "," Information Technology "," Food and Beverage Services and Manufacturing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-lowe-s-companies-inc-3518317253?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=ZZ5Lh59UN2FfFM4pWqrhyg%3D%3D&position=14&pageNum=8&trk=public_jobs_jserp-result_search-card," PepsiCo ",https://www.linkedin.com/company/pepsico?trk=public_jobs_topcard-org-name," Plano, TX "," 1 week ago "," 73 applicants "," OverviewPepsiCo operates in an environment undergoing immense and rapid change. Big-data and digital technologies are driving business transformation that is unlocking new capabilities and business innovations in areas like eCommerce, mobile experiences and IoT. 
The key to winning in these areas is being able to leverage enterprise data foundations built on PepsiCo’s global business scale to enable business insights, advanced analytics and new product development.PepsiCo’s Data Management and Operations team is tasked with the responsibility of developing quality data collection processes, maintaining the integrity of our data foundations and enabling business leaders and data scientists across the company to have rapid access to the data they need for decision-making and innovation.What PepsiCo Data Management and Operations does:Maintain a predictable, transparent, global operating rhythm that ensures always-on access to high-quality data for stakeholders across the companyResponsible for day-to-day data collection, transportation, maintenance/curation and access to the PepsiCo corporate data asset Work cross-functionally across the enterprise to centralize data and standardize it for use by business, data science or other stakeholdersIncrease awareness about available data and democratize access to it across the company Job DescriptionAs a member of the data engineering team, you will be the key technical expert developing and overseeing PepsiCo's data product build & operations and drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help lead the development of very large and complex data applications into public cloud environments directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. 
You will work closely with process owners, product owners and business users. You'll be working in a hybrid environment with in-house, on-premise data sources as well as cloud and remote systems.ResponsibilitiesActive contributor to code development in projects and services.Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products.Build and own the automation and monitoring frameworks that captures metrics and operational KPIs for data pipeline quality and performance.Responsible for implementing best practices around systems integration, security, performance and data management.Empower the business by creating value through the increased adoption of data, data science and business intelligence landscape.Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions.Develop and optimize procedures to “productionalize” data science models.Define and manage SLA’s for data products and processes running in production.Support large-scale experimentation done by data scientists.Prototype new approaches and build solutions at scale.Research in state-of-the-art methodologies.Create documentation for learnings and knowledge transfer.Create and audit reusable packages or libraries.Qualifications4+ years of overall technology experience that includes at least 3+ years of hands-on software development, data engineering, and systems architecture. 3+ years of experience with Data Lake Infrastructure, Data Warehousing, and Data Analytics tools.3+ years of experience in SQL optimization and performance tuning, and development experience in programming languages like Python, PySpark, Scala etc.).2+ years in cloud data engineering experience in.Fluent with Azure cloud services. 
Azure Certification is a plus.Experience with integration of multi cloud services with on-premises technologies.Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines.Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations.Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets.Experience with at least one MPP database technology such as Redshift, Synapse or SnowFlake.Experience with running and scaling applications on the cloud infrastructure and containerized services like Kubernetes.Experience with version control systems like Github and deployment & CI tools.Experience with Azure Data Factory, Azure Databricks and Azure Machine learning tools is a plus.Experience with Statistical/ML techniques is a plus.Experience with building solutions in the retail or in the supply chain space is a plus.Understanding of metadata management, data lineage, and data glossaries is a plus.Working knowledge of agile development, including DevOps and DataOps concepts.Familiarity with business intelligence tools (such as PowerBI).EducationBA/BS in Computer Science, Math, Physics, or other technical fields. Skills, Abilities, KnowledgeExcellent communication skills, both verbal and written, along with the ability to influence and demonstrate confidence in communications with senior level management.Proven track record of leading, mentoring data teams.Strong change manager. Comfortable with change, especially that which arises through company growth. Able to lead a team effectively through times of change.Ability to understand and translate business requirements into data and technical requirements.High degree of organization and ability to manage multiple, competing projects and priorities simultaneously.Positive and flexible attitude to enable adjusting to different needs in an ever-changing environment. 
Strong leadership, organizational and interpersonal skills; comfortable managing trade-offs.Foster a team culture of accountability, communication, and self-management.Proactively drives impact and engagement while bringing others along.Consistently attain/exceed individual and team goals.Ability to lead others without direct authority in a matrixed environment.CompetenciesHighly influential and having the ability to educate challenging stakeholders on the role of data and its purpose in the business. Understands both the engineering and business side of the Data Products released. Places the user in the center of decision making. Teams up and collaborates for speed, agility, and innovation. Experience with and embraces agile methodologies. Strong negotiation and decision-making skill. Experience managing and working with globally distributed teams.COVID-19 vaccination is a condition of employment for this role. Please note that all such company vaccine requirements provide the opportunity to request an approved accommodation or exemption under applicable law.EEO StatementAll qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status.PepsiCo is an Equal Opportunity Employer: Female / Minority / Disability / Protected Veteran / Sexual Orientation / Gender IdentityIf you'd like more information about your EEO rights as an applicant under the law, please download the available EEO is the Law & EEO is the Law Supplement documents. 
View PepsiCo EEO Policy.Please view our Pay Transparency Statement "," Mid-Senior level "," Full-time "," Information Technology "," Food and Beverage Services and Manufacturing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-stem-it-3509830986?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=KLklYt1ZyH1WljrYRJNj%2Bg%3D%3D&position=15&pageNum=8&trk=public_jobs_jserp-result_search-card," PepsiCo ",https://www.linkedin.com/company/pepsico?trk=public_jobs_topcard-org-name," Plano, TX "," 1 week ago "," 73 applicants "," OverviewPepsiCo operates in an environment undergoing immense and rapid change. Big-data and digital technologies are driving business transformation that is unlocking new capabilities and business innovations in areas like eCommerce, mobile experiences and IoT. The key to winning in these areas is being able to leverage enterprise data foundations built on PepsiCo’s global business scale to enable business insights, advanced analytics and new product development.PepsiCo’s Data Management and Operations team is tasked with the responsibility of developing quality data collection processes, maintaining the integrity of our data foundations and enabling business leaders and data scientists across the company to have rapid access to the data they need for decision-making and innovation.What PepsiCo Data Management and Operations does:Maintain a predictable, transparent, global operating rhythm that ensures always-on access to high-quality data for stakeholders across the companyResponsible for day-to-day data collection, transportation, maintenance/curation and access to the PepsiCo corporate data asset Work cross-functionally across the enterprise to centralize data and standardize it for use by business, data science or other stakeholdersIncrease awareness about available data and democratize access to it across the company Job DescriptionAs a member of the data engineering team, you will be the key 
technical expert developing and overseeing PepsiCo's data product build & operations and drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help lead the development of very large and complex data applications into public cloud environments directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners and business users. You'll be working in a hybrid environment with in-house, on-premise data sources as well as cloud and remote systems.ResponsibilitiesActive contributor to code development in projects and services.Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products.Build and own the automation and monitoring frameworks that captures metrics and operational KPIs for data pipeline quality and performance.Responsible for implementing best practices around systems integration, security, performance and data management.Empower the business by creating value through the increased adoption of data, data science and business intelligence landscape.Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions.Develop and optimize procedures to “productionalize” data science models.Define and manage SLA’s for data products and processes running in production.Support large-scale experimentation done by data scientists.Prototype new approaches and build solutions at scale.Research 
in state-of-the-art methodologies.Create documentation for learnings and knowledge transfer.Create and audit reusable packages or libraries.Qualifications4+ years of overall technology experience that includes at least 3+ years of hands-on software development, data engineering, and systems architecture. 3+ years of experience with Data Lake Infrastructure, Data Warehousing, and Data Analytics tools.3+ years of experience in SQL optimization and performance tuning, and development experience in programming languages like Python, PySpark, Scala etc.).2+ years in cloud data engineering experience in.Fluent with Azure cloud services. Azure Certification is a plus.Experience with integration of multi cloud services with on-premises technologies.Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines.Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations.Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets.Experience with at least one MPP database technology such as Redshift, Synapse or SnowFlake.Experience with running and scaling applications on the cloud infrastructure and containerized services like Kubernetes.Experience with version control systems like Github and deployment & CI tools.Experience with Azure Data Factory, Azure Databricks and Azure Machine learning tools is a plus.Experience with Statistical/ML techniques is a plus.Experience with building solutions in the retail or in the supply chain space is a plus.Understanding of metadata management, data lineage, and data glossaries is a plus.Working knowledge of agile development, including DevOps and DataOps concepts.Familiarity with business intelligence tools (such as PowerBI).EducationBA/BS in Computer Science, Math, Physics, or other technical fields. 
Skills, Abilities, KnowledgeExcellent communication skills, both verbal and written, along with the ability to influence and demonstrate confidence in communications with senior level management.Proven track record of leading, mentoring data teams.Strong change manager. Comfortable with change, especially that which arises through company growth. Able to lead a team effectively through times of change.Ability to understand and translate business requirements into data and technical requirements.High degree of organization and ability to manage multiple, competing projects and priorities simultaneously.Positive and flexible attitude to enable adjusting to different needs in an ever-changing environment. Strong leadership, organizational and interpersonal skills; comfortable managing trade-offs.Foster a team culture of accountability, communication, and self-management.Proactively drives impact and engagement while bringing others along.Consistently attain/exceed individual and team goals.Ability to lead others without direct authority in a matrixed environment.CompetenciesHighly influential and having the ability to educate challenging stakeholders on the role of data and its purpose in the business. Understands both the engineering and business side of the Data Products released. Places the user in the center of decision making. Teams up and collaborates for speed, agility, and innovation. Experience with and embraces agile methodologies. Strong negotiation and decision-making skill. Experience managing and working with globally distributed teams.COVID-19 vaccination is a condition of employment for this role. 
Please note that all such company vaccine requirements provide the opportunity to request an approved accommodation or exemption under applicable law.EEO StatementAll qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status.PepsiCo is an Equal Opportunity Employer: Female / Minority / Disability / Protected Veteran / Sexual Orientation / Gender IdentityIf you'd like more information about your EEO rights as an applicant under the law, please download the available EEO is the Law & EEO is the Law Supplement documents. View PepsiCo EEO Policy.Please view our Pay Transparency Statement "," Mid-Senior level "," Full-time "," Information Technology "," Food and Beverage Services and Manufacturing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-merchant-intelligence-remote-at-constructor-3511345422?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=3sh33mazXeEdzAq4W6lMMw%3D%3D&position=16&pageNum=8&trk=public_jobs_jserp-result_search-card," PepsiCo ",https://www.linkedin.com/company/pepsico?trk=public_jobs_topcard-org-name," Plano, TX "," 1 week ago "," 73 applicants "," OverviewPepsiCo operates in an environment undergoing immense and rapid change. Big-data and digital technologies are driving business transformation that is unlocking new capabilities and business innovations in areas like eCommerce, mobile experiences and IoT. 
The key to winning in these areas is being able to leverage enterprise data foundations built on PepsiCo’s global business scale to enable business insights, advanced analytics and new product development.PepsiCo’s Data Management and Operations team is tasked with the responsibility of developing quality data collection processes, maintaining the integrity of our data foundations and enabling business leaders and data scientists across the company to have rapid access to the data they need for decision-making and innovation.What PepsiCo Data Management and Operations does:Maintain a predictable, transparent, global operating rhythm that ensures always-on access to high-quality data for stakeholders across the companyResponsible for day-to-day data collection, transportation, maintenance/curation and access to the PepsiCo corporate data asset Work cross-functionally across the enterprise to centralize data and standardize it for use by business, data science or other stakeholdersIncrease awareness about available data and democratize access to it across the company Job DescriptionAs a member of the data engineering team, you will be the key technical expert developing and overseeing PepsiCo's data product build & operations and drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help lead the development of very large and complex data applications into public cloud environments directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. 
You will work closely with process owners, product owners and business users. You'll be working in a hybrid environment with in-house, on-premise data sources as well as cloud and remote systems.ResponsibilitiesActive contributor to code development in projects and services.Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products.Build and own the automation and monitoring frameworks that captures metrics and operational KPIs for data pipeline quality and performance.Responsible for implementing best practices around systems integration, security, performance and data management.Empower the business by creating value through the increased adoption of data, data science and business intelligence landscape.Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions.Develop and optimize procedures to “productionalize” data science models.Define and manage SLA’s for data products and processes running in production.Support large-scale experimentation done by data scientists.Prototype new approaches and build solutions at scale.Research in state-of-the-art methodologies.Create documentation for learnings and knowledge transfer.Create and audit reusable packages or libraries.Qualifications4+ years of overall technology experience that includes at least 3+ years of hands-on software development, data engineering, and systems architecture. 3+ years of experience with Data Lake Infrastructure, Data Warehousing, and Data Analytics tools.3+ years of experience in SQL optimization and performance tuning, and development experience in programming languages like Python, PySpark, Scala etc.).2+ years in cloud data engineering experience in.Fluent with Azure cloud services. 
Azure Certification is a plus.Experience with integration of multi cloud services with on-premises technologies.Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines.Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations.Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets.Experience with at least one MPP database technology such as Redshift, Synapse or SnowFlake.Experience with running and scaling applications on the cloud infrastructure and containerized services like Kubernetes.Experience with version control systems like Github and deployment & CI tools.Experience with Azure Data Factory, Azure Databricks and Azure Machine learning tools is a plus.Experience with Statistical/ML techniques is a plus.Experience with building solutions in the retail or in the supply chain space is a plus.Understanding of metadata management, data lineage, and data glossaries is a plus.Working knowledge of agile development, including DevOps and DataOps concepts.Familiarity with business intelligence tools (such as PowerBI).EducationBA/BS in Computer Science, Math, Physics, or other technical fields. Skills, Abilities, KnowledgeExcellent communication skills, both verbal and written, along with the ability to influence and demonstrate confidence in communications with senior level management.Proven track record of leading, mentoring data teams.Strong change manager. Comfortable with change, especially that which arises through company growth. Able to lead a team effectively through times of change.Ability to understand and translate business requirements into data and technical requirements.High degree of organization and ability to manage multiple, competing projects and priorities simultaneously.Positive and flexible attitude to enable adjusting to different needs in an ever-changing environment. 
Strong leadership, organizational and interpersonal skills; comfortable managing trade-offs. Foster a team culture of accountability, communication, and self-management. Proactively drives impact and engagement while bringing others along. Consistently attains/exceeds individual and team goals. Ability to lead others without direct authority in a matrixed environment. Competencies: Highly influential, with the ability to educate challenging stakeholders on the role of data and its purpose in the business. Understands both the engineering and business side of the data products released. Places the user at the center of decision making. Teams up and collaborates for speed, agility, and innovation. Experience with and embraces agile methodologies. Strong negotiation and decision-making skills. Experience managing and working with globally distributed teams. COVID-19 vaccination is a condition of employment for this role. Please note that all such company vaccine requirements provide the opportunity to request an approved accommodation or exemption under applicable law. EEO Statement: All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. PepsiCo is an Equal Opportunity Employer: Female / Minority / Disability / Protected Veteran / Sexual Orientation / Gender Identity. If you'd like more information about your EEO rights as an applicant under the law, please download the available EEO is the Law & EEO is the Law Supplement documents. 
View PepsiCo EEO Policy. Please view our Pay Transparency Statement "," Mid-Senior level "," Full-time "," Information Technology "," Food and Beverage Services and Manufacturing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-emids-3513230165?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=o9Itz%2Fia3TXN6tLWI%2Fqt%2FA%3D%3D&position=17&pageNum=8&trk=public_jobs_jserp-result_search-card," Emids ",https://www.linkedin.com/company/emids?trk=public_jobs_topcard-org-name," Nashville, TN "," 1 week ago "," Over 200 applicants ","About Us: Emids is healthcare's digital transformation leader, delivering business and tech solutions that help payers, providers and tech-enablers maximize technology to deliver care better since 1999. As a global partner headquartered in Nashville, TN, emids helps bridge the critical gaps in accessible, affordable, high-quality healthcare by providing advisory consulting services, custom application development, and data solutions. Services include EHR application deployment and management, analytics, data integration and governance, software development and testing, and business intelligence. 
Responsibilities: • Export data from the Hadoop ecosystem to ORC or Parquet files • Build scripts to move data from on-prem to GCP • Build Python/PySpark pipelines • Transform the data as per the outlined data model • Proactively improve pipeline performance and efficiency ‘Must Have’ Experience: • 4+ years of Data Engineering work experience • 2+ years of building Python/PySpark pipelines • 2+ years working with Hadoop/Hive • 4+ years of experience with SQL • Any cloud experience – AWS, Azure, GCP (GCP Desired) • Experience with Data Warehousing & Data Lake • Understanding of Data Modeling • Understanding of data file formats like ORC, Parquet, Avro ‘Nice to Have’ Experience: • Google experience – Cloud Storage, Cloud Composer, Dataproc & BigQuery • Experience using Cloud Warehouses like BigQuery (preferred), Amazon Redshift, Snowflake, etc. • Working knowledge of Distributed file systems like GCS, S3, HDFS, etc. • Understanding of Airflow / Cloud Composer • CI/CD and DevOps experience • ETL tools, e.g., Informatica (IICS), Ab Initio, Infoworks, SSIS Desired Qualifications: Bachelor's degree in Computer Science, IT, or Systems Engineering. Experience in developing Healthcare applications. Excellent oral and written communication skills. Able to quickly learn new systems and technology. Microsoft Azure certifications would be preferable but not mandatory. Here at Emids we're not scared of differences. It's how we break new ground. As we scale and elevate the experience of our clients in the Healthcare & Life Sciences space and ultimately have an impact on every patient from every walk of life, the team we build must be reflective of the diversity that we serve. Together, we've built, and will continue to grow, a diverse and inclusive culture where everyone has a seat at the table and the space to be their most authentic self. Emids believes in being an Equal Opportunity Employer and we support, celebrate, and cherish all the things that make our teammates who they are. 
What can we offer you? You will be part of a team that offers you a fulfilling career, great results through an amazing team, strong relationships and a high-performance culture. We are using the latest technologies, high-security mediums, and service platforms from top market providers. We strongly promote an agile mindset and ways of working, backed by agile methods used in practice. We offer proper guidance, take care of our people, and offer top-notch services including flexible work timings and required training. We also offer: Benefits and leave management. A great learning platform. A challenging environment where 2 days never look the same. A high-performing team and a positive atmosphere where mistakes are welcome as part of the learning"," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-all-levels-at-fedex-dataworks-3509329010?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=W0KDy49j0D80CdFmqFA9FA%3D%3D&position=18&pageNum=8&trk=public_jobs_jserp-result_search-card," Emids ",https://www.linkedin.com/company/emids?trk=public_jobs_topcard-org-name," Nashville, TN "," 1 week ago "," Over 200 applicants "," About Us: Emids is healthcare's digital transformation leader, delivering business and tech solutions that help payers, providers and tech-enablers maximize technology to deliver care better since 1999. As a global partner headquartered in Nashville, TN, emids helps bridge the critical gaps in accessible, affordable, high-quality healthcare by providing advisory consulting services, custom application development, and data solutions. 
Services include EHR application deployment and management, analytics, data integration and governance, software development and testing, and business intelligence. Responsibilities: • Export data from the Hadoop ecosystem to ORC or Parquet files • Build scripts to move data from on-prem to GCP • Build Python/PySpark pipelines • Transform the data as per the outlined data model • Proactively improve pipeline performance and efficiency ‘Must Have’ Experience: • 4+ years of Data Engineering work experience • 2+ years of building Python/PySpark pipelines • 2+ years working with Hadoop/Hive • 4+ years of experience with SQL • Any cloud experience – AWS, Azure, GCP (GCP Desired) • Experience with Data Warehousing & Data Lake • Understanding of Data Modeling • Understanding of data file formats like ORC, Parquet, Avro ‘Nice to Have’ Experience: • Google experience – Cloud Storage, Cloud Composer, Dataproc & BigQuery • Experience using Cloud Warehouses like BigQuery (preferred), Amazon Redshift, Snowflake, etc. • Working knowledge of Distributed file systems like GCS, S3, HDFS, etc. • Understanding of Airflow / Cloud Composer • CI/CD and DevOps experience • ETL tools, e.g., Informatica (IICS), Ab Initio, Infoworks, SSIS Desired Qualifications: Bachelor's degree in Computer Science, IT, or Systems Engineering. Experience in developing Healthcare applications. Excellent oral and written communication skills. Able to quickly learn new systems and technology. Microsoft Azure certifications would be preferable but not mandatory. Here at Emids we're not scared of differences. It's how we break new ground. As we scale and elevate the experience of our clients in the Healthcare & Life Sciences space and ultimately have an impact on every patient from every walk of life, the team we build must be reflective of the diversity that we serve. Together, we've built, and will continue to grow, a diverse and inclusive culture where everyone has a seat at the table and the space to be their most authentic self. 
Emids believes in being an Equal Opportunity Employer and we support, celebrate, and cherish all the things that make our teammates who they are. What can we offer you? You will be part of a team that offers you a fulfilling career, great results through an amazing team, strong relationships and a high-performance culture. We are using the latest technologies, high-security mediums, and service platforms from top market providers. We strongly promote an agile mindset and ways of working, backed by agile methods used in practice. We offer proper guidance, take care of our people, and offer top-notch services including flexible work timings and required training. We also offer: Benefits and leave management. A great learning platform. A challenging environment where 2 days never look the same. A high-performing team and a positive atmosphere where mistakes are welcome as part of the learning "," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/finance-data-engineer-at-roblox-3510923487?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=NUQELTXW9GHxaozrMFe3rQ%3D%3D&position=19&pageNum=8&trk=public_jobs_jserp-result_search-card," Emids ",https://www.linkedin.com/company/emids?trk=public_jobs_topcard-org-name," Nashville, TN "," 1 week ago "," Over 200 applicants "," About Us: Emids is healthcare's digital transformation leader, delivering business and tech solutions that help payers, providers and tech-enablers maximize technology to deliver care better since 1999. As a global partner headquartered in Nashville, TN, emids helps bridge the critical gaps in accessible, affordable, high-quality healthcare by providing advisory consulting services, custom application development, and data solutions. 
Services include EHR application deployment and management, analytics, data integration and governance, software development and testing, and business intelligence. Responsibilities: • Export data from the Hadoop ecosystem to ORC or Parquet files • Build scripts to move data from on-prem to GCP • Build Python/PySpark pipelines • Transform the data as per the outlined data model • Proactively improve pipeline performance and efficiency ‘Must Have’ Experience: • 4+ years of Data Engineering work experience • 2+ years of building Python/PySpark pipelines • 2+ years working with Hadoop/Hive • 4+ years of experience with SQL • Any cloud experience – AWS, Azure, GCP (GCP Desired) • Experience with Data Warehousing & Data Lake • Understanding of Data Modeling • Understanding of data file formats like ORC, Parquet, Avro ‘Nice to Have’ Experience: • Google experience – Cloud Storage, Cloud Composer, Dataproc & BigQuery • Experience using Cloud Warehouses like BigQuery (preferred), Amazon Redshift, Snowflake, etc. • Working knowledge of Distributed file systems like GCS, S3, HDFS, etc. • Understanding of Airflow / Cloud Composer • CI/CD and DevOps experience • ETL tools, e.g., Informatica (IICS), Ab Initio, Infoworks, SSIS Desired Qualifications: Bachelor's degree in Computer Science, IT, or Systems Engineering. Experience in developing Healthcare applications. Excellent oral and written communication skills. Able to quickly learn new systems and technology. Microsoft Azure certifications would be preferable but not mandatory. Here at Emids we're not scared of differences. It's how we break new ground. As we scale and elevate the experience of our clients in the Healthcare & Life Sciences space and ultimately have an impact on every patient from every walk of life, the team we build must be reflective of the diversity that we serve. Together, we've built, and will continue to grow, a diverse and inclusive culture where everyone has a seat at the table and the space to be their most authentic self. 
Emids believes in being an Equal Opportunity Employer and we support, celebrate, and cherish all the things that make our teammates who they are. What can we offer you? You will be part of a team that offers you a fulfilling career, great results through an amazing team, strong relationships and a high-performance culture. We are using the latest technologies, high-security mediums, and service platforms from top market providers. We strongly promote an agile mindset and ways of working, backed by agile methods used in practice. We offer proper guidance, take care of our people, and offer top-notch services including flexible work timings and required training. We also offer: Benefits and leave management. A great learning platform. A challenging environment where 2 days never look the same. A high-performing team and a positive atmosphere where mistakes are welcome as part of the learning "," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-lhh-3478667902?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=IGks68%2F1IrMH5K72FkV0Uw%3D%3D&position=20&pageNum=8&trk=public_jobs_jserp-result_search-card," LHH ",https://www.linkedin.com/company/lee-hecht-harrison?trk=public_jobs_topcard-org-name," Dallas, TX "," 4 weeks ago "," Over 200 applicants ","Sr. 
Data Engineer * Full-Time * W2 * Direct-hire Based out of Dallas, TX, or REMOTELY from another state SQL Server Snowflake - pipelining Coding tools - DBT, Python, Alteryx, Tableau PowerShell Understanding Data Engineering Nice to have: Docker Container experience -running those in AWS Strong work ethic, with integrity, desire to help others, team player katrin.jeggle@lhh.com"," Mid-Senior level "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer-Fintech,https://www.linkedin.com/jobs/view/data-engineer-fintech-at-applepie-capital-3516636162?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=AC2lteX3yxSu7KqQfLvW5A%3D%3D&position=21&pageNum=8&trk=public_jobs_jserp-result_search-card," ApplePie Capital ",https://www.linkedin.com/company/applepie-capital?trk=public_jobs_topcard-org-name," California, United States "," 1 week ago "," 174 applicants ","ApplePie Capital is a fast-growing online lender focused exclusively on the franchise industry. Our channel is differentiated and our momentum is real - we forge partnerships with high quality franchise brands, who deliver highly-qualified franchisee borrowers seeking capital to grow their franchise empire and escape the growth limitations of traditional financing. We are seeking a highly motivated and experienced Data Engineer to join our growing team. The person appointed will be an integral member of the Platform Team and will be responsible for the design and implementation of the enterprise wide data strategy, ensuring the strategy supports the current and future business needs. The role will involve collaborating with Business and IT stakeholders at all levels to ensure the enterprise data strategy and associated implementation is adding value to the business. 
Responsibilities: Develop and evolve the enterprise-wide data strategy to support delivery of corporate objectives. Be a key stakeholder and advisor in all new strategic data initiatives and ensure alignment to the enterprise-wide data strategy. Build a framework of principles to ensure data integrity across the business (including but not limited to CRM, BI, Data warehouse, external interfaces etc.). Participate in the data management process with stakeholders. Ensure that the Data Architecture strategy and roadmap is aligned to the business and technology strategies. Build and maintain appropriate Enterprise Architecture artifacts including: Entity Relationship Models, Data dictionary, and taxonomy to aid data traceability. Drive data transformation initiatives in collaboration with analytic teams to support ease of end-user data access and limit iterations for data analytics resources to ‘cleanse’ data for each initiative they are assigned. Provide technical oversight to solution delivery in creating business-driven solutions adhering to the enterprise architecture and data governance standards. Review and provide impact analysis to data for development initiatives and releases. Be an advocate of data security principles and ensure appropriate security practices are embedded in any data strategy. Be an active contributor to how the company evolves Data Governance practices and influence the adoption of data standards. Develop key performance measures for data integration and quality. Support third-party data suppliers in developing specifications that are congruent with the Enterprise data architecture. Required skills and qualifications: Advanced experience with Postgres, Salesforce, Amazon RDS data environments. Proven experience in architecting and implementing Business Intelligence and Data warehouse platforms, Master Data Management, data integration and OLTP database solutions. 
Possess in-depth knowledge of and able to consult on various technologies. Strong knowledge of industry best practices around data architecture in cloud-based solutions. Strong analytical and numerical skills are essential, enabling easy interpretation and analysis of large volumes of data. A comprehensive understanding of the principles of and best practices behind data engineering, and the supporting technologies such as RDBMS, NoSQL, Cache & In-memory stores. Experience of architecting data solutions across cloud data platforms. A comprehensive understanding of data warehousing and data transformation (extract, transform and load) processes and the supporting technologies such as AWS Lambda, EC2, CloudWatch, AppFlow, S3. Experience implementing data solutions. Excellent problem solving and data modeling skills (logical, physical, semantic and integration models) including normalization, OLAP / OLTP principles and entity relationship analysis. Experience of mapping key Enterprise data entities to business capabilities and applications. A strong knowledge of horizontal data lineage from source to output. Excellent communication and presentational skills, confident and methodical approach, and able to work within a team environment. ApplePie Capital is an equal opportunity employer. 
For more information about ApplePie Capital, visit www.applepiecapital.com."," Associate "," Full-time "," Finance, Analyst, and Engineering "," Financial Services " Data Engineer,United States,Data Engineer-Fintech,https://www.linkedin.com/jobs/view/data-engineer-at-planet-technology-3506286769?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=IZnu75mqN1mge9C806PzOA%3D%3D&position=22&pageNum=8&trk=public_jobs_jserp-result_search-card," ApplePie Capital ",https://www.linkedin.com/company/applepie-capital?trk=public_jobs_topcard-org-name," California, United States "," 1 week ago "," 174 applicants "," ApplePie Capital is a fast-growing online lender focused exclusively on the franchise industry. Our channel is differentiated and our momentum is real - we forge partnerships with high quality franchise brands, who deliver highly-qualified franchisee borrowers seeking capital to grow their franchise empire and escape the growth limitations of traditional financing. We are seeking a highly motivated and experienced Data Engineer to join our growing team. The person appointed will be an integral member of the Platform Team and will be responsible for the design and implementation of the enterprise wide data strategy, ensuring the strategy supports the current and future business needs. The role will involve collaborating with Business and IT stakeholders at all levels to ensure the enterprise data strategy and associated implementation is adding value to the business. 
Responsibilities: Develop and evolve the enterprise-wide data strategy to support delivery of corporate objectives. Be a key stakeholder and advisor in all new strategic data initiatives and ensure alignment to the enterprise-wide data strategy. Build a framework of principles to ensure data integrity across the business (including but not limited to CRM, BI, Data warehouse, external interfaces etc.). Participate in the data management process with stakeholders. Ensure that the Data Architecture strategy and roadmap is aligned to the business and technology strategies. Build and maintain appropriate Enterprise Architecture artifacts including: Entity Relationship Models, Data dictionary, and taxonomy to aid data traceability. Drive data transformation initiatives in collaboration with analytic teams to support ease of end-user data access and limit iterations for data analytics resources to ‘cleanse’ data for each initiative they are assigned. Provide technical oversight to solution delivery in creating business-driven solutions adhering to the enterprise architecture and data governance standards. Review and provide impact analysis to data for development initiatives and releases. Be an advocate of data security principles and ensure appropriate security practices are embedded in any data strategy. Be an active contributor to how the company evolves Data Governance practices and influence the adoption of data standards. Develop key performance measures for data integration and quality. Support third-party data suppliers in developing specifications that are congruent with the Enterprise data architecture. Required skills and qualifications: Advanced experience with Postgres, Salesforce, Amazon RDS data environments. Proven experience in architecting and implementing Business Intelligence and Data warehouse platforms, Master Data Management, data integration and OLTP database solutions. 
Possess in-depth knowledge of and able to consult on various technologies. Strong knowledge of industry best practices around data architecture in cloud-based solutions. Strong analytical and numerical skills are essential, enabling easy interpretation and analysis of large volumes of data. A comprehensive understanding of the principles of and best practices behind data engineering, and the supporting technologies such as RDBMS, NoSQL, Cache & In-memory stores. Experience of architecting data solutions across cloud data platforms. A comprehensive understanding of data warehousing and data transformation (extract, transform and load) processes and the supporting technologies such as AWS Lambda, EC2, CloudWatch, AppFlow, S3. Experience implementing data solutions. Excellent problem solving and data modeling skills (logical, physical, semantic and integration models) including normalization, OLAP / OLTP principles and entity relationship analysis. Experience of mapping key Enterprise data entities to business capabilities and applications. A strong knowledge of horizontal data lineage from source to output. Excellent communication and presentational skills, confident and methodical approach, and able to work within a team environment. ApplePie Capital is an equal opportunity employer. For more information about ApplePie Capital, visit www.applepiecapital.com. "," Associate "," Full-time "," Finance, Analyst, and Engineering "," Financial Services " Data Engineer,United States,Data Engineer-Fintech,https://www.linkedin.com/jobs/view/data-engineer-at-maven-3500561395?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=q%2Fojw643c6HjaqH2VW6bkw%3D%3D&position=23&pageNum=8&trk=public_jobs_jserp-result_search-card," ApplePie Capital ",https://www.linkedin.com/company/applepie-capital?trk=public_jobs_topcard-org-name," California, United States "," 1 week ago "," 174 applicants "," ApplePie Capital is a fast-growing online lender focused exclusively on the franchise industry. 
Our channel is differentiated and our momentum is real - we forge partnerships with high quality franchise brands, who deliver highly-qualified franchisee borrowers seeking capital to grow their franchise empire and escape the growth limitations of traditional financing. We are seeking a highly motivated and experienced Data Engineer to join our growing team. The person appointed will be an integral member of the Platform Team and will be responsible for the design and implementation of the enterprise-wide data strategy, ensuring the strategy supports the current and future business needs. The role will involve collaborating with Business and IT stakeholders at all levels to ensure the enterprise data strategy and associated implementation is adding value to the business. Responsibilities: Develop and evolve the enterprise-wide data strategy to support delivery of corporate objectives. Be a key stakeholder and advisor in all new strategic data initiatives and ensure alignment to the enterprise-wide data strategy. Build a framework of principles to ensure data integrity across the business (including but not limited to CRM, BI, Data warehouse, external interfaces etc.). Participate in the data management process with stakeholders. Ensure that the Data Architecture strategy and roadmap is aligned to the business and technology strategies. Build and maintain appropriate Enterprise Architecture artifacts including: Entity Relationship Models, Data dictionary, and taxonomy to aid data traceability. Drive data transformation initiatives in collaboration with analytic teams to support ease of end-user data access and limit iterations for data analytics resources to ‘cleanse’ data for each initiative they are assigned. Provide technical oversight to solution delivery in creating business-driven solutions adhering to the enterprise architecture and data governance standards. Review and provide impact analysis to data for development initiatives and releases. Be an advocate of data security 
principles and ensure appropriate security practices are embedded in any data strategy. Be an active contributor to how the company evolves Data Governance practices and influence the adoption of data standards. Develop key performance measures for data integration and quality. Support third-party data suppliers in developing specifications that are congruent with the Enterprise data architecture. Required skills and qualifications: Advanced experience with Postgres, Salesforce, Amazon RDS data environments. Proven experience in architecting and implementing Business Intelligence and Data warehouse platforms, Master Data Management, data integration and OLTP database solutions. Possess in-depth knowledge of and able to consult on various technologies. Strong knowledge of industry best practices around data architecture in cloud-based solutions. Strong analytical and numerical skills are essential, enabling easy interpretation and analysis of large volumes of data. A comprehensive understanding of the principles of and best practices behind data engineering, and the supporting technologies such as RDBMS, NoSQL, Cache & In-memory stores. Experience of architecting data solutions across cloud data platforms. 
A comprehensive understanding of data warehousing and data transformation (extract, transform and load) processes and the supporting technologies such as AWS Lambda, EC2, CloudWatch, AppFlow, S3. Experience implementing data solutions. Excellent problem solving and data modeling skills (logical, physical, semantic and integration models) including normalization, OLAP / OLTP principles and entity relationship analysis. Experience of mapping key Enterprise data entities to business capabilities and applications. A strong knowledge of horizontal data lineage from source to output. Excellent communication and presentational skills, confident and methodical approach, and able to work within a team environment. ApplePie Capital is an equal opportunity employer. For more information about ApplePie Capital, visit www.applepiecapital.com. "," Associate "," Full-time "," Finance, Analyst, and Engineering "," Financial Services " Data Engineer,United States,Data Services Engineer,https://www.linkedin.com/jobs/view/data-services-engineer-at-ninjacat-3486710860?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=My5Y1Txe1NjTaQvYdaLYVQ%3D%3D&position=24&pageNum=8&trk=public_jobs_jserp-result_search-card," NinjaCat ",https://www.linkedin.com/company/ninjacat?trk=public_jobs_topcard-org-name," Denver, CO "," 4 weeks ago "," 56 applicants ","NinjaCat is a digital marketing performance management platform that provides brands, agencies and media companies tools to collect, connect, analyze, and present marketing data in a meaningful way. Our marketing data management and reporting solutions empower teams to communicate quickly and insightfully about the effectiveness of their marketing efforts at scale. Our mission is to build a company that everyone wishes they were a part of and the proof is in the pudding: we were featured by Inc. Magazine as one of the best places to work (2 years in a row!) and 2020 AdAge best places to work. 
If that weren’t enough: we work remotely (work from anywhere!), offer great perks and have a unique culture built on our core values of compassion, action and trust. We recently raised a significant amount of capital, built a best-in-class leadership team, and we’re executing on a product vision that will transform the marketing analytics industry. We would love to have you be part of it. Learn more at www.ninjacat.io. Interested but not sure you are qualified? If you are up for the challenge we want you to apply. We believe skills are transferable and we want everyone who wants to be part of NinjaCat to have a conversation with us. About This Role A NinjaCat Data Services Engineer’s primary responsibility is to build and maintain efficient and reliable data ingestion solutions for our customers. These solutions typically utilize SQL, Big Query, S3 and/or Snowflake. The ideal candidate should have experience working with large datasets and should be familiar with various data storage and processing systems. The Data Services Engineer works within NinjaCat’s technical services team and reports to the VP of Technical Services. Requirements 3+ years of experience building data ingestion solutions using SQL, Big Query, S3, and Snowflake. 2+ years of experience with a relational database (PostgreSQL, MySQL, etc) Experience working with large datasets and understanding of various data storage and processing systems. Strong understanding of data warehousing concepts and ETL processes. Experience working with cloud-based data platforms such as AWS, GCP, or Azure. Excellent problem-solving skills and the ability to troubleshoot and resolve complex technical issues. Strong communication skills and the ability to work in a collaborative team environment. Experience solving large, complex data management problems through analytical and creative thinking, research, and collaboration with your customer. 
Strong consulting and project management skills, with proven results working as a trusted advisor to drive business value for customers, including the ability to interact with client teams at various levels of technical and non-technical depth Self-starter who takes the initiative to get things done, has superior emotional intelligence and interpersonal skills and thrives in a fast-paced environment where priorities shift regularly. Responsibilities Design, build and maintain data ingestion solutions for our customers using SQL, Big Query, S3, and Snowflake. Work closely with the customer and internal teams to understand data requirements, design and implement optimal data ingestion solutions. Develop and maintain ETL pipelines to ensure smooth and reliable data ingestion from various data sources. Troubleshoot and resolve data ingestion issues, and work closely with customers to ensure their data is loaded accurately and in a timely manner. Ensure data quality, accuracy and completeness by building validation checks and data quality rules. Monitor data ingestion jobs and identify any issues or bottlenecks in the process and provide solutions to improve performance and efficiency. Work with the customer and internal teams to create and maintain documentation for the data ingestion solutions Nice to Haves Experience in digital advertising and marketing Experience in data warehouse or data processing systems (Snowflake, BigQuery, Redshift, etc) What You Bring A willingness to learn and grow, and a collaborative mindset The passion and perseverance to help NinjaCat’s engineering team be the best it can be Benefits Cash compensation for this role includes a base salary in the range of $85,000 to $115,000, but may vary based on job-related knowledge, skills and experience. Other Benefits Include: Work from home (We are 100% remote!) 
4-Day Work Week Unlimited Vacation 401k Health, Dental, Vision and Life Insurance An awesome place to work (Inc Magazine - Best Place To Work, and Glassdoor 4.7 Star Rating) Free books supported by NinjaCat’s reading program Personal learning and development stipend Monthly health and wellness reimbursement Yearly All Company in-person meetup Ability to have a huge impact on a growing company Work alongside an incredible CEO, and a fantastic team Ability to use “cat” puns and memes all day long Equal Opportunity NinjaCat is an equal opportunity employer that is committed to diversity and inclusion in the workplace. We prohibit discrimination and harassment of any kind based on race, color, sex, religion, sexual orientation, national origin, disability, genetic information, pregnancy, or any other protected characteristic as outlined by federal, state, or local laws. This policy applies to all employment practices within our organization, including hiring, recruiting, promotion, termination, layoff, recall, leave of absence, compensation, benefits, training, and apprenticeship. NinjaCat makes hiring decisions based solely on qualifications, merit, and business needs at the time. Applicants must be located and authorized in the US or Canada. At this time, NinjaCat does not offer visa sponsorship or transfers."," Not Applicable "," Full-time "," Engineering "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-maven-3500559550?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=FXQa78TOQd5gKC3w1phJOQ%3D%3D&position=25&pageNum=8&trk=public_jobs_jserp-result_search-card," maven ",https://www.linkedin.com/company/maven-alpha?trk=public_jobs_topcard-org-name," Greater Chicago Area "," 2 weeks ago "," 55 applicants ","Multi-strategy hedge fund is looking for an experienced Data Developer / Engineer to join its quantitative trading team. 
Your core focus will be to build sophisticated data pipelines and analytics used to perform advanced quantitative research to enhance existing and create new and profitable systematic trading strategies. Skills & Experience: > Strong academic background in a STEM field. > 5-15 years of experience in researching and building data pipelines and analytics. > Financial markets experience is welcome but not required. > Expert programming skills in C++ and/or Python."," Mid-Senior level "," Full-time "," Engineering, Information Technology, and Research "," Software Development, Technology, Information and Internet, and Financial Services " Data Engineer,United States,Advanced Data Engineer,https://www.linkedin.com/jobs/view/advanced-data-engineer-at-kroger-technology-digital-3531126682?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=K2FNmVDro1gHYkjCM9tX0g%3D%3D&position=20&pageNum=7&trk=public_jobs_jserp-result_search-card," Kroger Technology & Digital ",https://www.linkedin.com/company/kroger-technology-and-digital?trk=public_jobs_topcard-org-name," Cincinnati, OH "," 10 hours ago "," 71 applicants ","Accountable for leading activities that create deliverables to guide the direction, development, and delivery of technological responses to targeted business outcomes. Provide facilitation, analysis and design tasks required for the development of an enterprise's data and information architecture, focusing on data as an asset for the enterprise. Develop reusable standards, design patterns, guidelines, and configurations to evolve the technical infrastructure related to data and information across the enterprise, including direct collaboration with 84.51. Demonstrate the company’s core values of respect, honesty, integrity, diversity, inclusion and safety. 
Minimum Position Qualifications: Bachelor's Degree in computer science, or software engineering, or related field 7+ years successful and applicable hands-on experience in the data development and principles including end-to-end design patterns. 7+ years proven track record of designing and delivering large scale, high quality operational or analytical data systems. 7+ years successful and applicable experience taking a lead role in building complex data solutions that have been successfully delivered to customers. Any experience defining evolutionary data solutions and underlying technologies. Demonstrated written and oral communication skills Basic understanding of network and data security architecture. Strong knowledge of industry trends and industry competition Knowledge in a minimum of two of the following technical disciplines: data warehousing, data management, analytics development, data science, application programming interfaces (APIs), data integration, cloud, servers and storage, and database management Requirements: Experience in Data modeling and advanced SQL techniques Experience working on cloud migration methodologies and processes including tools like Databricks, Azure Data Factory, Azure Functions, and other Azure data services Expert in SQL, Python, Spark, Databricks Experience working with varied data file formats (Avro, json, csv) using PySpark for ingesting and transformation Experience with DevOps process and understanding of Terraform scripting Understanding the benefits of data warehousing, data architecture, data quality processes, data warehousing design and implementation, table structure, fact and dimension tables, logical and physical database design Experience designing and implementing ingestion processes for unstructured and structured data sets Experience designing and developing data cleansing routines utilizing standard data operations Knowledge of data, master data, metadata related standards, and processes Experience working 
with multi-Terabyte data sets, troubleshooting issues, performance tuning of Spark and SQL queries Experience using Azure DevOps/Github actions CI/CD pipelines to deploy code Microsoft Azure certifications are a plus Company Overview: Kroger Family of Companies employs nearly half a million associates who serve over 11 million customers daily through a seamless shopping experience under a variety of banner names. At The Kroger Co., we are Fresh for Everyone™ and dedicated to our Purpose: To Feed the Human Spirit®. We are committed to creating #ZeroHungerZeroWaste communities by 2025. Careers with The Kroger Co. and our family of companies offer competitive wages, flexible schedules, benefits and room for advancement."," Mid-Senior level "," Full-time "," Design, Engineering, and Other "," Retail " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-ascendion-3487736116?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=EsGTFDPtGjTVLSYx8nSUvQ%3D%3D&position=2&pageNum=8&trk=public_jobs_jserp-result_search-card," Ascendion ",https://www.linkedin.com/company/ascendion?trk=public_jobs_topcard-org-name," Texas, United States "," 3 weeks ago "," Over 200 applicants ","100% Remote position!!! Title:- Data Engineer/ETL Developer Duration:- 6-12 Months Contract Location:- 100% Remote Must-Have:- 3+ years working with ETL Developer (Talend). Experience with MongoDB. Experience with microservice architecture. Experience with data models, data mapping, etc. Experience working with data migrations. Integrating data, transforming data, moving data between systems, data warehouses, etc. Strong experience with SQL & Oracle DB's. 
Agile background."," Mid-Senior level "," Full-time "," Information Technology "," Banking " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-advantis-global-3490307445?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=WapyYzffDYz6SOjX4gU1iw%3D%3D&position=3&pageNum=8&trk=public_jobs_jserp-result_search-card," Ascendion ",https://www.linkedin.com/company/ascendion?trk=public_jobs_topcard-org-name," Texas, United States "," 3 weeks ago "," Over 200 applicants "," 100% Remote position!!!Title:- Data Engineer/ETL DeveloperDuration:- 6-12 Months ContractLocation:- 100% RemoteMust-Have:-3+ years working with ETL Developer (Talend).Experience with MongoDB.Experience with microservice architecture.Experience with data models, data mapping, etc.Experience working with data migrations.Integrating data, transforming data, moving data between systems, data warehouses, etc.Strong experience with SQL & Oracle DB's.Agile background. "," Mid-Senior level "," Full-time "," Information Technology "," Banking " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-zenapse-3516479400?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=cf0RAvwN3yKG0op1Fcjz4g%3D%3D&position=4&pageNum=8&trk=public_jobs_jserp-result_search-card," Ascendion ",https://www.linkedin.com/company/ascendion?trk=public_jobs_topcard-org-name," Texas, United States "," 3 weeks ago "," Over 200 applicants "," 100% Remote position!!!Title:- Data Engineer/ETL DeveloperDuration:- 6-12 Months ContractLocation:- 100% RemoteMust-Have:-3+ years working with ETL Developer (Talend).Experience with MongoDB.Experience with microservice architecture.Experience with data models, data mapping, etc.Experience working with data migrations.Integrating data, transforming data, moving data between systems, data warehouses, etc.Strong experience with SQL & Oracle DB's.Agile background. 
"," Mid-Senior level "," Full-time "," Information Technology "," Banking " Data Engineer,United States,Business Intelligence Data Engineer,https://www.linkedin.com/jobs/view/business-intelligence-data-engineer-at-archimed-3508496542?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=PsC0newObdS3kCanSjOyvg%3D%3D&position=7&pageNum=8&trk=public_jobs_jserp-result_search-card," ARCHIMED ",https://fr.linkedin.com/company/archimed---investors-in-healthcare?trk=public_jobs_topcard-org-name," Phoenix, AZ "," 2 weeks ago "," Be among the first 25 applicants ","Title21 Health Solutions, a leading provider of Health and Life Science data management solutions, is seeking a Report Developer II at its Phoenix, AZ location. The Report Developer is responsible for developing reports for Manufacturing Execution (MES) and Enterprise Quality Management Systems (EQMS) in the health and life science domains. These domains include Cell and Gene Therapy (CGT), immunotherapy, and regenerative medicine spaces in the context of biologic-drug development, cGMP manufacturing, and clinical/healthcare processing and delivery. Report Developer Candidate Background This position seeks an experienced professional with technical expertise in developing complex static and interactive reports in support of software implementations and analytics. Responsibilities Create, implement and maintain Crystal Reports for Title21 projects for both internal and external customers, on-time and within budgets. Required Experience 5-8 years’ relevant work experience working with Crystal Reports Developer version XI up to current version. Minimum 3 years professional experience working with SQL Server (SSMS) in support of data reporting and analytics. 3-5 years’ experience interacting and working with internal and external customers in the report development process. 2-4 years professional experience with scripting or development languages (SAS, R, Python, VBA, PowerShell, etc.) 
Requirements Desirable Experience Professional experience with other Analytics and Reporting Tools such as Tableau, Power BI, SSRS, Cognos, Logi Analytics, etc. Experience in the Health and Life Science domains, and regulatory environments, is highly desirable. Experience training internal and external customers in using Crystal Reports Developer. Minimum Required Skill Set Highly skilled in using Microsoft Excel including pivot tables, macros, formulas, etc. Strong technical skills and familiarity with software technology. Demonstrated ability to learn quickly and add value immediately in a fast-paced environment. Demonstrated ability to work as both a team member and as an individual contributor. Excellent time management skills. Excellent verbal, public speaking, and written communication skills. Ability to effectively document results. Demonstrated ability to deliver clear, concise instructions, communicates difficult concepts simply and effectively, and maintains professional presentation skills. Highly detailed oriented with extremely good follow-up. Excellent interpersonal skills with a genuine enthusiasm in the aforementioned fields. Strong familiarity with Windows/PCs environments. Strong initiative and ability to thrive in self-directed work teams. Strong critical thinker with the ability to learn new systems, synthesize information and formulate recommendations. Personal Traits Ability to work independently and self-starting. Enjoy working on a team with enthusiastic, talented professionals; being a team player and enjoying a collaborative environment is essential. Ability to be agile in a high growth, fast-paced environment. Willingness to learn, grow and take on more responsibilities. Desire to learn the Title21 software systems. Self-starter, reliable, conscientious, customer-focused team member. Education Requirements Relevant BS 4-year degree in the Computer or Information Sciences. Position Type Full-Time Employment; Non-Exempt. 
Sponsorships considered for highly qualified candidates. Travel Potential; approximately 10% Location This position is located in Phoenix, AZ. Benefits Compensation Title21 Health Solutions provides a highly competitive compensation and benefits package including: Medical Plan/Dental/Vision plan Health Savings Account (HSA) 401(k) with company match Paid Holiday/Vacation/Birthday/Personal days Life Insurance Plan"," Not Applicable "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States, BI Data Engineer-Analyst ,https://www.linkedin.com/jobs/view/bi-data-engineer-analyst-at-lakefield-veterinary-group-3505586653?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=1XzNbDfq9Q4hghVFuMW2%2BA%3D%3D&position=8&pageNum=8&trk=public_jobs_jserp-result_search-card," Lakefield Veterinary Group ",https://www.linkedin.com/company/lakefield-veterinary-group?trk=public_jobs_topcard-org-name," Kent, WA "," 2 weeks ago "," 155 applicants ","Lakefield Veterinary Group, Inc. is a national network of veterinary hospitals. Our services are designed to enhance the well-being of pets through all stages of their lives. For years, Lakefield has delivered the very best in pet care, including veterinary and wellness services. The Total Pet Care (TPC) team leads and partners with our hospitals to enhance the experience of our pets, pet owners, and team members. Our office environment fosters our core values of doing what is best for our clients and their pets, providing “WOW” service, being humble and treating others with respect, continued growth and learning, and cultivating a fun, positive, family spirit. We are without a doubt a pet-friendly workplace. We aim to do well by doing good, leveraging our know-how to provide incredible pet experiences, and growing the organization. Be a part of something meaningful! 
Overview The BI Data Engineer-Analyst position is responsible for the development and implementation of scalable, stable, and secure solutions for data acquisition, data distribution, and workflow orchestration within the data warehouse. The role will also be responsible for the maintenance and optimization of existing data pipelines. They will work closely with team members, end users, and other stakeholders to provide solutions that support and empower internal business partners and drive enterprise analytics and data science objectives. Successful candidates will have strong engineering, communication, and knowledge in on-prem SQL Server and Azure data systems. They will also have the desire to dive into the complexity of our business and navigate both relationships and processes. Familiarity with common cloud data tools, best practices, and experience with PowerApps is a plus. Responsibilities Design, implement, and support ETL processes sourcing data from various internal applications and external data sources. Use Azure services such as Azure ADLS, Event Hub, Data Factory, and Purview to improve and speed up the delivery of our data products and services. Identify, design, and implement internal process improvements: automating manual processes, and optimizing data delivery and reliability. Communicate technical concepts to non-technical audiences both in written and verbal form. Develop and refine dashboards and data visualizations in Power BI to measure KPIs. Analyze and document business requirements and data specifications. Identify and analyze issues and problems, determine root causes, and report on problem status and resolution. Support the development of end-user training and reference materials. Develop technical solution documentation reflecting data design and mapping. Required Qualifications 4+ years’ experience working in MS SQL with a proven ability to write efficient queries. 
3+ years’ experience creating reports and dashboards in PowerBI leveraging DAX calculations. Ability to assess performance and tune performance of queries and dashboards. Desired Qualifications Experience programming in additional languages such as Python and C#. Experience creating Microsoft PowerApps. Solid understanding of data architecture and BI best practice principles. We are unable to sponsor or take over the sponsorship of an employment Visa at this time. Compensation: Salary $120,000-$135,000 This position is Full-time and can be remote Benefits: Medical Dental Vision 401k Employee Assistance Program Long Term Care Short Term Disability Long Term Disability Life Paid Holidays 2 Weeks of Paid time off"," Mid-Senior level "," Full-time "," Health Care Provider "," Non-profit Organizations " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-hybrid-at-captivation-3499487550?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=PLp1yY4Q8gwX%2Fax%2BuuFMvA%3D%3D&position=1&pageNum=9&trk=public_jobs_jserp-result_search-card," maven ",https://www.linkedin.com/company/maven-alpha?trk=public_jobs_topcard-org-name," Greater Chicago Area "," 2 weeks ago "," 55 applicants "," Multi-strategy hedge fund is looking for an experienced Data Developer / Engineer to join its quantitative trading team. Your core focus will be to build sophisticated data pipelines and analytics used to perform advanced quantitative research to enhance existing and create new and profitable systematic trading strategies. Skills & Experience: > Strong academic background in a STEM field. > 5-15 years of experience in researching and building data pipelines and analytics. > Financial markets experience is welcome but not required. > Expert programming skills in C++ and/or Python. 
"," Mid-Senior level "," Full-time "," Engineering, Information Technology, and Research "," Software Development, Technology, Information and Internet, and Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-pepsico-3485241841?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=8mBlNo9BIU1ZAW%2B3tj6QCA%3D%3D&position=2&pageNum=9&trk=public_jobs_jserp-result_search-card," maven ",https://www.linkedin.com/company/maven-alpha?trk=public_jobs_topcard-org-name," Greater Chicago Area "," 2 weeks ago "," 55 applicants "," Multi-strategy hedge fund is looking for an experienced Data Developer / Engineer to join its quantitative trading team.Your core focus will be to build sophisticated data pipelines and analytics used to perform advanced quantitative research to enhance existing and create new and profitable systematic trading strategies.Skills & Experience:> Strong academic background in a STEM field.> 5 -15 years of experience in researching and building data pipelines and analytics.>Financial markets experience is welcome but not required.> Expert programming skills in C++ and or Python. 
"," Mid-Senior level "," Full-time "," Engineering, Information Technology, and Research "," Software Development, Technology, Information and Internet, and Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-american-residential-services-3506293195?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=FnDjy%2BPbTrLpldND8DD35w%3D%3D&position=3&pageNum=9&trk=public_jobs_jserp-result_search-card," maven ",https://www.linkedin.com/company/maven-alpha?trk=public_jobs_topcard-org-name," Greater Chicago Area "," 2 weeks ago "," 55 applicants "," Multi-strategy hedge fund is looking for an experienced Data Developer / Engineer to join its quantitative trading team.Your core focus will be to build sophisticated data pipelines and analytics used to perform advanced quantitative research to enhance existing and create new and profitable systematic trading strategies.Skills & Experience:> Strong academic background in a STEM field.> 5 -15 years of experience in researching and building data pipelines and analytics.>Financial markets experience is welcome but not required.> Expert programming skills in C++ and or Python. "," Mid-Senior level "," Full-time "," Engineering, Information Technology, and Research "," Software Development, Technology, Information and Internet, and Financial Services " Data Engineer,United States,Data Visualization Engineer ,https://www.linkedin.com/jobs/view/data-visualization-engineer-at-ishare-inc-3511430203?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=6KfHnCM7YXf011i9ZFn0dQ%3D%3D&position=4&pageNum=9&trk=public_jobs_jserp-result_search-card," iShare Inc. 
",https://www.linkedin.com/company/ishareinc?trk=public_jobs_topcard-org-name," San Francisco Bay Area "," 1 week ago "," 50 applicants ","Hiring on behalf of a client or the role of Data Visualization Engineer / Hybrid / Full time hire / W2/ , San Francisco Bay Area, CA Details of the application, domain, as appropriate: Responsible for delivering Executive dashboards end to end with actionable insights Be a storyteller who can show how to get business insights from the Dashboard Responsible for engaging Executives to gather requirements and use agile methodology to deliver dashboards Conduct deep analysis to discover actionable business insights to grow client business top line Act as Coach/Mentor for other Enterprise BI developers and influence Visualization best practices Develop analytics across different personas which includes C-Level executives, product leaders and DevOps engineers Key responsibilities and expected “output”: Solve data and analytics problems using design thinking principles Deep knowledge and understanding of Sales, Marketing and Customer success business domain. Excellent oral and written communications skills and ability to interact with and present to all levels of management Self-starter with sharp decision making skills, ability to multitask, work independently and prioritize in a fast-paced and changing environment ​​“Required” tech stack and other related details: 10+ yrs. 
of experience in BI/UI/UX domain 5+ years of experience working with Tableau Ability to create actionable dashboards, passionate about data and good data evangelist Strong experience in building dashboards in any of the BI Platforms Should have experience in writing SQL queries to understand or build dashboards Any other information considered “critical” / “Useful"": Prior experience in high tech and software industries is beneficial Prior experience in building cloud spend dashboards, unit economics, marginal analysis dashboard is beneficial Prior experience in working with google big query and tableau security is added advantage"," Mid-Senior level "," Full-time "," Engineering and Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Visualization Engineer ,https://www.linkedin.com/jobs/view/sr-data-engineer-at-experfy-3530755289?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=qT5%2FrXXFsoXwjS407aUSsA%3D%3D&position=5&pageNum=9&trk=public_jobs_jserp-result_search-card," iShare Inc. 
",https://www.linkedin.com/company/ishareinc?trk=public_jobs_topcard-org-name," San Francisco Bay Area "," 1 week ago "," 50 applicants "," Hiring on behalf of a client or the role ofData Visualization Engineer / Hybrid / Full time hire / W2/ , San Francisco Bay Area, CADetails of the application, domain, as appropriate:Responsible for delivering Executive dashboards end to end with actionable insights Be a storyteller who can show how to get business insights from the DashboardResponsible for engaging Executives to gather requirements and use agile methodology to deliver dashboards Conduct deep analysis to discover actionable business insights to grow client business top lineAct as Coach/Mentor for other Enterprise BI developers and influence Visualization best practicesDevelop analytics across different personas which includes C-Level executives, product leaders and DevOps engineers Key responsibilities and expected “output”:Solve data and analytics problems using design thinking principlesDeep knowledge and understanding of Sales, Marketing and Customer success business domain.Excellent oral and written communications skills and ability to interact with and present to all levels of managementSelf-starter with sharp decision making skills, ability to multitask, work independently and prioritize in a fast-paced and changing environment ​​“Required” tech stack and other related details:10+ yrs. 
of experience in BI/UI/UX domain5+ years of experience working with TableauAbility to create actionable dashboards, passionate about data and good data evangelist Strong experience in building dashboards in any of the BI Platforms Should have experience in writing SQL queries to understand or build dashboards Any other information considered “critical” / “Useful"": Prior experience in high tech and software industries is beneficialPrior experience in building cloud spend dashboards, unit economics, marginal analysis dashboard is beneficial Prior experience in working with google big query and tableau security is added advantage "," Mid-Senior level "," Full-time "," Engineering and Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Visualization Engineer ,https://www.linkedin.com/jobs/view/senior-data-engineer-at-razor-3527036512?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=ikOit4YL6zfmTcuO1wKr7A%3D%3D&position=6&pageNum=9&trk=public_jobs_jserp-result_search-card," iShare Inc. 
",https://www.linkedin.com/company/ishareinc?trk=public_jobs_topcard-org-name," San Francisco Bay Area "," 1 week ago "," 50 applicants "," Hiring on behalf of a client or the role ofData Visualization Engineer / Hybrid / Full time hire / W2/ , San Francisco Bay Area, CADetails of the application, domain, as appropriate:Responsible for delivering Executive dashboards end to end with actionable insights Be a storyteller who can show how to get business insights from the DashboardResponsible for engaging Executives to gather requirements and use agile methodology to deliver dashboards Conduct deep analysis to discover actionable business insights to grow client business top lineAct as Coach/Mentor for other Enterprise BI developers and influence Visualization best practicesDevelop analytics across different personas which includes C-Level executives, product leaders and DevOps engineers Key responsibilities and expected “output”:Solve data and analytics problems using design thinking principlesDeep knowledge and understanding of Sales, Marketing and Customer success business domain.Excellent oral and written communications skills and ability to interact with and present to all levels of managementSelf-starter with sharp decision making skills, ability to multitask, work independently and prioritize in a fast-paced and changing environment ​​“Required” tech stack and other related details:10+ yrs. 
of experience in BI/UI/UX domain5+ years of experience working with TableauAbility to create actionable dashboards, passionate about data and good data evangelist Strong experience in building dashboards in any of the BI Platforms Should have experience in writing SQL queries to understand or build dashboards Any other information considered “critical” / “Useful"": Prior experience in high tech and software industries is beneficialPrior experience in building cloud spend dashboards, unit economics, marginal analysis dashboard is beneficial Prior experience in working with google big query and tableau security is added advantage "," Mid-Senior level "," Full-time "," Engineering and Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Commercial Operations Data Engineer,https://www.linkedin.com/jobs/view/commercial-operations-data-engineer-at-kate-farms-3513821279?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=AwxOle5LEsa17nDR4bZNrw%3D%3D&position=7&pageNum=9&trk=public_jobs_jserp-result_search-card," Kate Farms ",https://www.linkedin.com/company/kate-farms-inc-?trk=public_jobs_topcard-org-name," Santa Barbara, CA "," 1 month ago "," Be among the first 25 applicants ","About Kate Farms Kate Farms is a company with heart. Our company was founded on the belief that good nutrition leads to good health, and good health opens the door to endless possibilities. That’s why our mission is to make nutrition the cornerstone of healthcare, so people can live their best lives. We are a medical food company that makes complete nutrition formulas for people who have a medical need for liquid nutrition. Position Overview We are looking for a talented data engineer to join our team at Kate Farms! Here, you'll bring advanced knowledge and experience to solve complex business issues. We'll look to your data engineering subject-matter expertise, and you will frequently contribute to the development of new ideas and methods. 
You’ll also get to work on complicated, interesting problems where analysis of situations and data requires an in-depth evaluation of multiple factors. The Commercial Operations Data Engineer position will be a key enterprise role that will be responsible for managing, optimizing, overseeing, and monitoring data retrieval, storage, and distribution throughout the organization. This technology expert will build and manage data pipelines that combine information from multiple and varied source systems, both internal and external. They will integrate, consolidate, and convert raw data into usable information for analytics and business decision-making. As part of the Commercial Operations and Insights team, this role will be the bridge between understanding the business requirements and building a scalable solution that addresses those requirements. This role requires a significant set of technical skills, including deep knowledge of SQL database design and programming languages. They will work cross-functionally, so will need strong communication skills to work across departments (i.e.: IT, sales, marketing, finance, supply chain, etc.) to understand what business leaders want to gain from the data. Essential Job Duties And Responsibilities Develop, construct, test, and maintain architectures. Design, develop, automate, and support complex applications to extract, transform, and load data. Manage the entire back-end development life cycle for the company's data warehouse, including ETL procedures, cube building for database and performance management, and dimensional design of the table structures. Work closely across the business units to gain an in-depth understanding of business processes and requirements. Identify ways to improve data reliability, efficiency, and quality. Align architecture with business requirements and translate them into detailed technical specifications to be implemented. 
Support business units with evaluation and plan for any external data acquisition. Lead support efforts for collecting, parsing, managing, and analyzing large sets of data. Ensure data quality and cleanup, validate, and verify as needed. Create and update documentation for ETL processes and data flow implemented for building the database. Provide production support and root cause analysis of ETL failures and reported data bugs. Prepare data for predictive and prescriptive modeling, as well as, visualization BI tools. Use data to discover tasks that can be automated. Deliver updates to stakeholders based on analytics. Analyze, design, and determine coding, programming, and integration activities required based on specific objectives and established project guidelines. Collaborate and communicate with the project team regarding project progress and issue resolutions. Minimum Job Requirements Bachelor's or master's degree in computer science, information systems, engineering, or equivalent. 2–4 years’ data engineering experience in a fast-paced environment. This role requires experience in developing solutions with a cloud provider, preferably Azure. Report design and development skills with a BI Tool (Tableau, Power BI) is a definite Plus. Software development experience in Python and SQL are mandatory. Knowledge of AWS services—Redshift, Athena, EMR, DocumentDB, S3. Basic knowledge of AI and data science. Experience in ETL, Data Lake, and data warehouse pipelines. Fluent in structured and unstructured data, its management, and modern data transformation methodologies. Ability to define and create complex models to pull insights, predictions, and innovations from data is a plus. Effectively and creatively tell stories and create visualizations to describe and communicate data insights. Experience with distributor tracings, claims data and/or Nielsen retail data is a plus. 
Exceptional analytical and quantitative skills with the ability to interpret and summarize complex data into actionable plans that add value to the business. Ability to influence at all levels, as well as a demonstrated ability to work effectively within a team. Proven ability to structure complex problems, develop solutions, and craft recommendations and results into easily digestible presentations. Critical thinking ability to connect inputs from various cross-functional stakeholders to create all-inclusive organizational strategies and initiatives. Ability to coordinate and manage simultaneous data projects of varying size and scope. Excellent communication skills, both written and verbal, with the ability to convey complex information to a broad audience. Ability to navigate ambiguity and a fast-moving environment. Process-oriented with a focus on continuous improvement. Motivated self-starter with a proactive attitude, initiative, and drive. PHYSICAL DEMANDS: The physical demands described here are representative of those that must be met by the employee to successfully perform the essential functions of this job. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. Ability to sit at a computer for extended periods of time. WORK ENVIRONMENT: The work environment characteristics described here are representative of those an employee encounters while performing the essential functions of the job. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. This position will work in a home/office environment with occasional trips to the corporate office. 
It is Kate Farms' policy that any position requiring regular interaction with health care professionals requires that, if hired, you be vaccinated against COVID-19 unless you need a reasonable accommodation due to sincerely held religious beliefs, medical needs, or another reason protected by applicable federal, state, and local law. In compliance with Colorado’s Equal Pay for Equal Work Act (EPEWA), New York City’s Human Rights Law (NYCHRL), and California’s pay transparency bill (SB 1162), we are disclosing the compensation, or a range thereof, for roles that will be, or could be, performed in Colorado, New York City, or California. If the position applied to is not located in Colorado, NYC, or California, the following information may not apply. Salary Minimum: $87,000; Salary Maximum: $109,849. The base salary range above represents the low and high end of the Kate Farms salary range for this position. This range will vary and may be above or below the range based on various factors including, but not limited to, location, experience, internal pay alignment, and performance. The range listed is just one component of Kate Farms’ total compensation package for employees. Other rewards may include annual bonuses and short- and long-term incentives. Kate Farms cares well for its employees, and it shows through standout benefits such as Medical, Dental, Vision, Life Insurance, FSA, 401(k) Retirement Plan, Paid Time Off, Remote Work options, and many more outstanding company perks. NOTE: This job description is not intended to be all-inclusive. 
Employee may perform other related duties as negotiated to meet the ongoing needs of the organization as directed by the management of the company."," Not Applicable "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Commercial Operations Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-one-3499587063?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=ISk9FjVFCR%2B%2BEb2VRaFIcA%3D%3D&position=8&pageNum=9&trk=public_jobs_jserp-result_search-card," Kate Farms ",https://www.linkedin.com/company/kate-farms-inc-?trk=public_jobs_topcard-org-name," Santa Barbara, CA "," 1 month ago "," Be among the first 25 applicants "," About Kate Farms: Kate Farms is a company with heart. Our company was founded on the belief that good nutrition leads to good health, and good health opens the door to endless possibilities. That’s why our mission is to make nutrition the cornerstone of healthcare, so people can live their best lives. We are a medical food company that makes complete nutrition formulas for people who have a medical need for liquid nutrition. Position Overview: We are looking for a talented data engineer to join our team at Kate Farms! Here, you'll bring advanced knowledge and experience to solve complex business issues. We'll look to your data engineering subject-matter expertise, and you will frequently contribute to the development of new ideas and methods. You’ll also get to work on complicated, interesting problems where analysis of situations and data requires an in-depth evaluation of multiple factors. The Commercial Operations Data Engineer position will be a key enterprise role responsible for managing, optimizing, overseeing, and monitoring data retrieval, storage, and distribution throughout the organization. This technology expert will build and manage data pipelines that combine information from multiple and varied source systems, both internal and external. 
They will integrate, consolidate, and convert raw data into usable information for analytics and business decision-making. As part of the Commercial Operations and Insights team, this role will be the bridge between understanding the business requirements and building a scalable solution that addresses those requirements. This role requires a significant set of technical skills, including deep knowledge of SQL database design and programming languages. They will work cross-functionally, so they will need strong communication skills to work across departments (e.g., IT, sales, marketing, finance, supply chain) to understand what business leaders want to gain from the data. Essential Job Duties and Responsibilities: Develop, construct, test, and maintain architectures. Design, develop, automate, and support complex applications to extract, transform, and load data. Manage the entire back-end development life cycle for the company's data warehouse, including ETL procedures, cube building for database and performance management, and dimensional design of the table structures. Work closely across the business units to gain an in-depth understanding of business processes and requirements. Identify ways to improve data reliability, efficiency, and quality. Align architecture with business requirements and translate them into detailed technical specifications to be implemented. 
Support business units with the evaluation of and planning for any external data acquisition. Lead support efforts for collecting, parsing, managing, and analyzing large sets of data. Ensure data quality; clean up, validate, and verify as needed. Create and update documentation for ETL processes and data flows implemented for building the database. Provide production support and root cause analysis of ETL failures and reported data bugs. Prepare data for predictive and prescriptive modeling, as well as visualization and BI tools. Use data to discover tasks that can be automated. Deliver updates to stakeholders based on analytics. Analyze, design, and determine coding, programming, and integration activities required based on specific objectives and established project guidelines. Collaborate and communicate with the project team regarding project progress and issue resolutions. Minimum Job Requirements: Bachelor's or master's degree in computer science, information systems, engineering, or equivalent. 2–4 years’ data engineering experience in a fast-paced environment. This role requires experience in developing solutions with a cloud provider, preferably Azure. Report design and development skills with a BI tool (Tableau, Power BI) are a definite plus. Software development experience in Python and SQL is mandatory. 
Knowledge of AWS services—Redshift, Athena, EMR, DocumentDB, S3. Basic knowledge of AI and data science. Experience in ETL, data lake, and data warehouse pipelines. Fluency in structured and unstructured data, their management, and modern data transformation methodologies. Ability to define and create complex models to pull insights, predictions, and innovations from data is a plus. Effectively and creatively tell stories and create visualizations to describe and communicate data insights. Experience with distributor tracings, claims data, and/or Nielsen retail data is a plus. Exceptional analytical and quantitative skills with the ability to interpret and summarize complex data into actionable plans that add value to the business. Ability to influence at all levels, as well as a demonstrated ability to work effectively within a team. Proven ability to structure complex problems, develop solutions, and craft recommendations and results into easily digestible presentations. Critical thinking ability to connect inputs from various cross-functional stakeholders to create all-inclusive organizational strategies and initiatives. Ability to coordinate and manage simultaneous data projects of varying size and scope. Excellent communication skills, both written and verbal, with the ability to convey complex information to a broad audience. Ability to navigate ambiguity and a fast-moving environment. Process-oriented with a focus on continuous improvement. Motivated self-starter with a proactive attitude, initiative, and drive. PHYSICAL DEMANDS: The physical demands described here are representative of those that must be met by the employee to successfully perform the essential functions of this job. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. Ability to sit at a computer for extended periods of time. 
WORK ENVIRONMENT: The work environment characteristics described here are representative of those an employee encounters while performing the essential functions of the job. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. This position will work in a home/office environment with occasional trips to the corporate office. It is Kate Farms' policy that any position requiring regular interaction with health care professionals requires that, if hired, you be vaccinated against COVID-19 unless you need a reasonable accommodation due to sincerely held religious beliefs, medical needs, or another reason protected by applicable federal, state, and local law. In compliance with Colorado’s Equal Pay for Equal Work Act (EPEWA), New York City’s Human Rights Law (NYCHRL), and California’s pay transparency bill (SB 1162), we are disclosing the compensation, or a range thereof, for roles that will be, or could be, performed in Colorado, New York City, or California. If the position applied to is not located in Colorado, NYC, or California, the following information may not apply. Salary Minimum: $87,000; Salary Maximum: $109,849. The base salary range above represents the low and high end of the Kate Farms salary range for this position. This range will vary and may be above or below the range based on various factors including, but not limited to, location, experience, internal pay alignment, and performance. The range listed is just one component of Kate Farms’ total compensation package for employees. Other rewards may include annual bonuses and short- and long-term incentives. Kate Farms cares well for its employees, and it shows through standout benefits such as Medical, Dental, Vision, Life Insurance, FSA, 401(k) Retirement Plan, Paid Time Off, Remote Work options, and many more outstanding company perks. NOTE: This job description is not intended to be all-inclusive. 
Employee may perform other related duties as negotiated to meet the ongoing needs of the organization as directed by the management of the company. "," Not Applicable "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-on-demand-group-3500595169?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=G9ZizQ%2FF8ZZim%2F%2Fy7g7AQA%3D%3D&position=11&pageNum=9&trk=public_jobs_jserp-result_search-card," On-Demand Group ",https://www.linkedin.com/company/on-demand-group?trk=public_jobs_topcard-org-name," Minneapolis, MN "," 2 weeks ago "," Over 200 applicants ","Data Ingest Engineer 6-month contract-for-hire Remote or Minneapolis based This role develops enterprise data solutions that support the organization in achieving its strategic goals. Your work on cloud data pipelines will advance our enterprise data capabilities, while you get immersed in our collaborative, fun, and engaging culture. The position reports to our IT Data Engineering department and is a part of projects that align with key strategic initiatives to meet business objectives. In your role, you will be part of a team that is responsible for the design, development, testing, deployment, and support of cloud-based (GCP) data, analytical, and reporting applications. Essential Duties & Responsibilities: • Develop and maintain custom ELT data pipelines with Python and SQL-based transformations running on the Google Cloud Platform. • Collaborate on and implement event- and batch-based data science scoring pipelines. • Develop data access APIs to facilitate cross-application data sharing. • Conduct and/or participate in requirements analysis sessions with internal customers, external vendors, and project teams. • Translate business requirements into technical designs. • Follow engineering best practices to ensure robust, tested, and reliable data pipelines. 
• Support data governance and security practices. • Follow agile development methodologies and actively participate in sprint planning sessions. • Support downstream users and resolve production issues with excellent customer service. • Conduct other duties as assigned. Job Skills: • Hands-on experience in creating API-based data ingestion pipelines. • Good design skills in data pipeline, enrichment, and API patterns. • Good understanding of object-oriented software engineering patterns. • Good relationship-building, customer service, and problem resolution skills. • Knowledge of software engineering, version control, and testing practices. • Knowledge of Agile software development methodologies. • Works effectively in a dynamic work environment with competing priorities. Work Experience: • 2+ years in Python object-oriented programming. Multi-language experience preferred. • 2+ years in API-based custom data ingestion, particularly working with third-party vendor APIs, including reading API documentation, authentication, and bulk data staging strategies. • 2+ years in cloud-based development. GCP preferred. • 1+ years in SQL. • Experience with development in a version-control, CI/CD environment. Education: • A Bachelor's degree from an accredited institution is required. Other: • Must be able to lift 20 lbs. • Remote and/or typical office setting. • Mobility within the office includes movement from floor to floor. • Must be able to work more than 40 hours per week when business needs warrant. • Access information using a computer. • Effectively communicate, both up and down the management chain. • Effectively cope with stressful situations. • Strong mental acuity. • Regular, dependable attendance and punctuality are essential functions of this job. 
• Other essential functions and marginal job functions are subject to modification."," Associate "," Full-time "," Information Technology and Business Development "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-one-3499583531?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=bP1jUAeF0H6NknHYG7rCbw%3D%3D&position=15&pageNum=9&trk=public_jobs_jserp-result_search-card," ONE ",https://www.linkedin.com/company/oneapp?trk=public_jobs_topcard-org-name," New York, NY "," 2 weeks ago "," 46 applicants ","About ONE ONE is on a mission to help people live healthier financial lives. We’re doing this by creating simple solutions to help our customers save, spend, and grow their money – all in one place. The U.S. consumer today deserves better. Millions of Americans today can’t access credit, build savings or wealth, and are left to manage their financial lives through multiple disconnected apps. Almost a quarter of U.S. adults are unbanked or underbanked and roughly 80% of fintech users rely on multiple accounts to manage their finances. What makes us unique? We are backed by a preeminent fintech investor (Ribbit) and the world’s largest retailer (Walmart), maintain the speed and independence of a startup, and employ a strong (and growing) collection of world-class talent. There’s never been a better moment to build a business that helps people live healthier financial lives. Come build with us! 
The role As an Engineer, Data, your mandate is to build the data transformation pipelines & reporting infrastructure that serve our members and run our company. This role will impact ONE’s vision by writing, reviewing, and shipping code and collaborating with others across the company. You will work closely with the Engineering, Product, and Design teams, as well as Sales, Compliance, and Customer Support, engaging stakeholders as a highly technical, communicative, and emotionally intelligent partner. This role reports to the Manager, Data Engineering. This role is responsible for: Building and maintaining ONE's data infrastructure. Developing and owning ONE's streaming data transformation pipeline. Managing and supporting reporting and analytics tools. Collaborating closely with our Data Analysts and Data Scientists. Tracking and defining metrics around performance. Additional duties as assigned by your manager. You bring Mid Career (5-10 Years) Experience in Apache Spark, Scala, Python, SQL. Experience designing and implementing low-latency data pipelines using Spark structured streaming. Experience optimizing SQL queries and lake / warehouse data structures. Cost-conscious creative problem solving. Proficient in AWS cloud services and technologies. Experience building, shipping — and growing — non-trivial products & services. Passion for your craft, people, and our mission. You value quality across code, communication, and culture. A desire to keep growing your skills, and an ability to learn quickly. We hire for slope, not just y-intercept. Pay Transparency The estimated annual base salary for this position ranges from $175,000 to $190,000. Pay is generally based upon the level, complexity, responsibility, and job duties/requirements of the specific position. We then source candidates with the requisite skills, expertise, education, training, and experience. 
If you are selected for an interview, please feel welcome to speak to a Talent Partner about our compensation philosophy and other available benefits. Leveling Philosophy In order to thoughtfully scale the company and avoid downstream inequities, we’ve adopted a flat titling structure at ONE. Though we may occasionally post a role externally with a prefix such as “Senior” to reflect the external level of the position, we do not use such prefixes in titles internally unless the position manages a team. Internal titles typically include your specific functional responsibility, such as engineering, product management, or sales, and often include additional descriptors to ensure clarity of role and placement within our organization (e.g., “Engineer, Platform”, “Sales, Business Development”, or “Manager, Talent”). Employees are paid commensurate with their experience and the internal level within ONE. What it's like working @ ONE Our teams collaborate remotely and in our workspaces in New York and Sacramento. Competitive cash. Benefits effective on day one. Early access to a high-potential, high-growth fintech. Generous stock option packages in an early-stage startup. Remote friendly (anywhere in the US) and office friendly - you pick the schedule. Flexible time off programs - vacation, sick, paid parental leave, and paid caregiver leave. 401(k) plan with match. Inclusion & Belonging To build technology and products that are used and loved by people and solve real-world problems, we need to build a team with many different perspectives and experiences. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We encourage candidates from all backgrounds to apply. 
Applicants in need of special assistance or accommodation during the interview process or in accessing our website may contact us. Email talent@one.app with any questions."," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Senior Data Engineer,https://www.linkedin.com/jobs/view/senior-data-engineer-at-fractal-3524227614?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=CPRRPchzuR9B8TUR5U1zWQ%3D%3D&position=19&pageNum=9&trk=public_jobs_jserp-result_search-card," Fractal ",https://www.linkedin.com/company/fractal-analytics?trk=public_jobs_topcard-org-name," Charlotte, NC "," 6 hours ago "," Be among the first 25 applicants "," Senior Data Engineer Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets. An ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is the one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work® Institute and recognized as a ‘Cool Vendor’ and a ‘Vendor to Watch’ by Gartner. Please visit Fractal | Intelligence for Imagination for more information about Fractal. Project work includes ETL/ELT from various data sources and other data platforms, remodeling the data to fit a series of enterprise standards, and landing the data in a conformed set of facts and dimensions within Azure. Potential destinations include Azure Data Lake, Synapse/SQL, and others. You will work as a part of a large team working to unify and standardize data across various legacy systems into a centralized data model for a large Fortune 500 enterprise. You will use your expertise to: Drive and support the development, delivery, operations, and sustainment of the data pipelines and data engineering efforts. 
Create and maintain automated methods of orchestrating data flows and products while monitoring data quality and the status of related infrastructure. Enable and support data and analytics capabilities and technologies, ensuring functional specifications meet or exceed established business and functional requirements, technical standards, quality measures and continuous improvement goals. Provide technical perspective on traditional and cloud data platform services, vendor products and architectural designs. Participate in architectural discussions to build confidence and ensure customer success when building new solutions and migrating existing data applications on the Azure platform. Manage business and product team in-take requests; recommended best practices (technology / process / security); creation and maintenance of resource groups and the associated services. Conduct technical discovery, identify pain points, business, and technical requirements, “as is” and “to be” scenarios. Translate complex functional and technical requirements into detailed design, and then rapidly prototype those designs into solutions in an agile environment. Collaborate with product teams to assess and establish the required pipeline to support business objectives; identify opportunities to streamline delivery processes. Design, manage and maintain tools to automate operational and business processes. Must-haves (minimum requirements): Experience in dimensional data modelling, Kimball methodologies Four or more years of hands-on pipeline development/engineering experience in Azure and cloud data engineering technologies such as Azure Data Factory and Databricks A Bachelor’s degree or a technical diploma in Computer Science, Engineering, MIS, Business, or a related field Experience supporting and / or operating data and analytics capabilities and tooling; Azure is required, multi cloud engineering experience is a plus. 
Solid understanding of product management, agile principles and development methodologies and capability of supporting agile teams by providing advice and guidance on opportunities, impact, and risks, taking account of technical and architectural debt Proven ability to learn and adopt new technologies in a fast-evolving environment. Proven communication and interpersonal skills to enable the development of high value, fit for customer’s purpose data solutions. The ability to communicate and present technical information to diverse groups as well as senior leaders. A collaborative, self-starter attitude, with an ability to multi-task and make critical decisions in a timely fashion under little supervision · Alignment with our values: safety above all else, stronger together, operational discipline, curiosity and lifelong learning, and act with integrity. Preference for: Experience with Synapse Experience working in the banking domain Azure certifications Pay: The wage range for this role takes into account the wide range of factors that are considered in making compensation decisions including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The disclosed range estimate has not been adjusted for the applicable geographic differential associated with the location at which the position may be filled. At Fractal, it is not typical for an individual to be hired at or near the top of the range for their role and compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is: $125,000 – $155,000. In addition, for the current performance period, you may be eligible for a discretionary bonus. 
Benefits: As a full-time employee of the company or as an hourly employee working more than 30 hours per week, you will be eligible to participate in the health, dental, vision, life insurance, and disability plans in accordance with the plan documents, which may be amended from time to time. You will be eligible for benefits on the first day of employment with the Company. In addition, you are eligible to participate in the Company 401(k) Plan after 30 days of employment, in accordance with the applicable plan terms. The Company provides for 11 paid holidays and 12 weeks of Parental Leave. We also follow a “free time” PTO policy, allowing you the flexibility to take time needed for either sick time or vacation. Fractal provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. "," Mid-Senior level "," Full-time "," Information Technology "," Business Consulting and Services " Data Engineer,United States,Data ETL Engineer,https://www.linkedin.com/jobs/view/data-etl-engineer-at-htc-global-services-3482514263?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=Uafv0aOmErwX67Zngo%2F6xQ%3D%3D&position=23&pageNum=9&trk=public_jobs_jserp-result_search-card," HTC Global Services ",https://www.linkedin.com/company/htc-global-services?trk=public_jobs_topcard-org-name," Washington DC-Baltimore Area "," 3 weeks ago "," Over 200 applicants ","At HTC Global Services our consultants have access to a comprehensive benefits package. Benefits can include Paid-Time-Off, Paid Holidays, 401K matching, Life and Accidental Death Insurance, Short & Long Term Disability Insurance, and a variety of other perks. 
Skills : Data ETL / Data Engineering (Batch) Key Technology : ADF, Databricks, dbt, Azure Cloud Services Supporting Technology : MS SQL Server ADW/DB, Python, PySpark, DevOps Proven development experience in building complex data pipeline for lakehouse/data warehouses using Agile methodology Experience: 8-10 years Find a purpose Help clients embrace emerging technologies. Create inventive solutions and meet intriguing client challenges. Solve, fix, design and innovate. Be a part of something bigger by helping clients go digital, create engaging customer experiences and transform their business. Move ahead Our success as a company is built on practicing inclusion and embracing diversity. HTC Global Services is committed to providing a work environment free from discrimination and harassment, where all employees are treated with respect and dignity. Together we work to create and maintain an environment where everyone feels valued, included, and respected. At HTC Global Services, our differences are embraced and celebrated. HTC is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce. HTC is proud to be recognized as a National Minority Supplier and an equal opportunity employer of protected veterans. About HTC Global Services Shaping careers since 1990 - our long tenured employees are a testimony of the work culture. 
Join our global employee base of 12,000 and help us bring human expertise to tech in order to deliver purposeful solutions that amplify value."," Director "," Contract "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data ETL Engineer,https://www.linkedin.com/jobs/view/remote-data-engineer-at-state-farm-3487713210?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=cRCaXmCE7x3ySuYV6MVWHQ%3D%3D&position=24&pageNum=9&trk=public_jobs_jserp-result_search-card," HTC Global Services ",https://www.linkedin.com/company/htc-global-services?trk=public_jobs_topcard-org-name," Washington DC-Baltimore Area "," 3 weeks ago "," Over 200 applicants "," At HTC Global Services our consultants have access to a comprehensive benefits package. Benefits can include Paid-Time-Off, Paid Holidays, 401K matching, Life and Accidental Death Insurance, Short & Long Term Disability Insurance, and a variety of other perks.Skills : Data ETL / Data Engineering (Batch) Key Technology : ADF, Databricks, dbt, Azure Cloud ServicesSupporting Technology : MS SQL Server ADW/DB, Python, PySpark, DevOpsProven development experience in building complex data pipeline for lakehouse/data warehouses using Agile methodologyExperience: 8-10 yearsFind a purposeHelp clients embrace emerging technologies. Create inventive solutions and meet intriguing client challenges. Solve, fix, design and innovate. Be a part of something bigger by helping clients go digital, create engaging customer experiences and transform their business.Move aheadOur success as a company is built on practicing inclusion and embracing diversity. HTC Global Services is committed to providing a work environment free from discrimination and harassment, where all employees are treated with respect and dignity. Together we work to create and maintain an environment where everyone feels valued, included, and respected. At HTC Global Services, our differences are embraced and celebrated. 
HTC is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce. HTC is proud to be recognized as a National Minority Supplier and an equal opportunity employer of protected veterans.About HTC Global ServicesShaping careers since 1990 - our long tenured employees are a testimony of the work culture. Join our global employee base of 12,000 and help us bring human expertise to tech in order to deliver purposeful solutions that amplify value. "," Director "," Contract "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data ETL Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-softstandard-solutions-3502753289?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=sOczkp8%2F26N7bgeoZ0u7Eg%3D%3D&position=25&pageNum=9&trk=public_jobs_jserp-result_search-card," HTC Global Services ",https://www.linkedin.com/company/htc-global-services?trk=public_jobs_topcard-org-name," Washington DC-Baltimore Area "," 3 weeks ago "," Over 200 applicants "," At HTC Global Services our consultants have access to a comprehensive benefits package. Benefits can include Paid-Time-Off, Paid Holidays, 401K matching, Life and Accidental Death Insurance, Short & Long Term Disability Insurance, and a variety of other perks.Skills : Data ETL / Data Engineering (Batch) Key Technology : ADF, Databricks, dbt, Azure Cloud ServicesSupporting Technology : MS SQL Server ADW/DB, Python, PySpark, DevOpsProven development experience in building complex data pipeline for lakehouse/data warehouses using Agile methodologyExperience: 8-10 yearsFind a purposeHelp clients embrace emerging technologies. Create inventive solutions and meet intriguing client challenges. Solve, fix, design and innovate. 
Be a part of something bigger by helping clients go digital, create engaging customer experiences and transform their business. Move ahead Our success as a company is built on practicing inclusion and embracing diversity. HTC Global Services is committed to providing a work environment free from discrimination and harassment, where all employees are treated with respect and dignity. Together we work to create and maintain an environment where everyone feels valued, included, and respected. At HTC Global Services, our differences are embraced and celebrated. HTC is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce. HTC is proud to be recognized as a National Minority Supplier and an equal opportunity employer of protected veterans. About HTC Global Services Shaping careers since 1990 - our long-tenured employees are a testimony to the work culture. Join our global employee base of 12,000 and help us bring human expertise to tech in order to deliver purposeful solutions that amplify value. "," Director "," Contract "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-aacsb-3492995535?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=kGAW%2B4pHzHcyigxyO7xU1w%3D%3D&position=5&pageNum=7&trk=public_jobs_jserp-result_search-card," AACSB ",https://www.linkedin.com/company/aacsb-international?trk=public_jobs_topcard-org-name," Tampa, FL "," 15 hours ago "," Over 200 applicants ","Description AACSB is the world’s leading voice in business education, providing quality assurance (accreditation), intelligence and thought leadership, and learning and development (e.g., conferences, seminars, digital learning) opportunities to over 1,800 member organizations and more than 900 accredited business schools in over 100 countries and territories. 
AACSB’s core values of quality, community, social responsibility, diversity and inclusion, and ethics are all viewed through a global lens in our collective commitment to transform business education for positive societal impact. Synonymous with the highest standards of excellence since 1916, AACSB connects educators, students, and business to develop the next generation of great leaders. Do you want to join a fun, innovation-focused team that will push you to think creatively and will encourage your professional growth? Join AACSB. We are seeking a Data Engineer who is excited to join forces with a talented core of data and business analysts to build insightful and useful tools for a variety of stakeholders. As the Data Engineer, you will be responsible for maintaining the pipelines for AACSB’s data warehouse and will collaborate with a variety of stakeholders to develop solutions. How You Will Contribute Design and develop solutions to enhance AACSB’s data infrastructure. Design and implement effective processes to store, retrieve, and transform data. Teach these methods to users and developers. Document systems, keeping the backend technology, service, and API documentation up to date, including the building of workflow documentation to help explain systems as needed to other stakeholders. Perform code reviews to provide support and quality control before changes are released. Monitor AACSB’s data assets and perform troubleshooting and testing to ensure workflows are functioning as required. 
Requirements Fluency in the English language Proficiency in Microsoft Office Experience with Python, R, or similar programming languages Proficiency with SQL Three (3) or more years of backend data service development or data engineering Three (3) or more years of data center design, development and management Preferred Qualifications Bachelor's Degree in Data or Computer Science, Applied Mathematics or Stats, MIS, Data Analytics, Engineering, or a related field obtained through an accredited college or university Five (5) years of relevant experience Experience with Airflow Mastery of Python and SQL Why join AACSB? We take pride in providing our employees with an inclusive work environment that promotes individual development. Our employees say our benefits, location, flexible work environment, and their colleagues are the primary drivers that attract and keep them with AACSB. Benefits We offer a competitive benefit package, including generous vacation, sick and holiday paid time off, health/dental/vision insurance, 403B, short and long-term disability, life insurance, wellness allowance, tuition reimbursement, and a hybrid work environment."," Entry level "," Full-time "," Information Technology "," Education Administration Programs " Data Engineer,United States,Data Engineer - Data Analytics,https://www.linkedin.com/jobs/view/data-engineer-data-analytics-at-costco-wholesale-3515995135?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=v1tTSVNd3IuB%2FeL4gPOKdQ%3D%3D&position=16&pageNum=7&trk=public_jobs_jserp-result_search-card," Costco Wholesale ",https://www.linkedin.com/company/costco-wholesale?trk=public_jobs_topcard-org-name," Seattle, WA "," 6 days ago "," 121 applicants ","This is an environment unlike anything in the high-tech world and the secret of Costco’s success is its culture. The value Costco puts on its employees is well documented in articles from a variety of publishers including Bloomberg and Forbes. Our employees and our members come FIRST. 
Costco is well known for its generosity and community service and has won many awards for its philanthropy. The company joins with its employees to take an active role in volunteering by sponsoring many opportunities to help others. In 2021, Costco contributed over $58 million to organizations such as United Way and Children's Miracle Network Hospitals. Costco IT is responsible for the technical future of Costco Wholesale, the third-largest retailer in the world with wholesale operations in fourteen countries. Despite our size and explosive international expansion, we continue to provide a family, employee-centric atmosphere in which our employees thrive and succeed. As proof, Costco ranks seventh in Forbes “World’s Best Employers”. The Data Engineer - Data Analytics is responsible for the end-to-end data pipelines to power analytics and data services. This role is focused on data engineering to build and deliver automated data pipelines from a plethora of internal and external data sources. The Data Engineer will partner with product owners, engineering and data platform teams to design, build, test and automate data pipelines that are relied upon across the company as the single source of truth. If you want to be a part of one of the worldwide BEST companies “to work for”, simply apply and let your career be reimagined. ROLE Develops and operationalizes data pipelines to make data available for consumption (BI, Advanced analytics, Services). Works in tandem with data architects and data/BI engineers to design data pipelines and recommends ongoing optimization of data storage, data ingestion, data quality and orchestration. Designs, develops and implements ETL/ELT processes using IICS (Informatica cloud). Uses Azure services such as Azure SQL DW (Synapse), ADLS, Azure Event Hub, Azure Data Factory to improve and speed up delivery of our data products and services. 
Implements big data and NoSQL solutions by developing scalable data processing platforms to drive high-value insights to the organization. Identifies, designs, and implements internal process improvements: automating manual processes, optimizing data delivery. Identifies ways to improve data reliability, efficiency and quality of data management. Communicates technical concepts to non-technical audiences both in written and verbal form. Performs peer reviews for other data engineers’ work. Required 5+ years’ experience engineering and operationalizing data pipelines with large and complex datasets. 5+ years of hands-on experience with Informatica PowerCenter. 2+ years of hands-on experience with Informatica IICS. 3+ years’ experience working with Cloud technologies such as ADLS, Azure Databricks, Spark, Azure Synapse, Cosmos DB and other big data technologies. Extensive experience working with various data sources (SQL, Oracle database, flat files (csv, delimited), Web API, XML). Advanced SQL skills required. Solid understanding of relational databases and business data; ability to write complex SQL queries against a variety of data sources. 5+ years’ experience with Data Modeling, ETL, and Data Warehousing. Strong understanding of database storage concepts (data lake, relational databases, NoSQL, Graph, data warehousing). Scheduling flexibility to meet the needs of the business including weekends, holidays, and 24/7 on-call responsibilities on a rotational basis. Able to work in a fast-paced agile development environment. Recommended BA/BS in Computer Science, Engineering, or equivalent software/services experience. Azure Certifications. Experience implementing data integration techniques such as event / message based integration (Kafka, Azure Event Hub), ETL. Experience with Git / Azure DevOps. Experience delivering data solutions through agile software development methodologies. Exposure to the retail industry. Excellent verbal and written communication skills. 
Experience working with SAP integration tools including BODS. Experience with UC4 Job Scheduler. Required Documents Cover Letter Resume California applicants, please click here to review the Costco Applicant Privacy Notice. Pay Ranges Level 2 - $100,000 - $135,000 Level 3 - $125,000 - $165,000 Level 4 - $155,000 - $195,000 - Potential Bonus and Restricted Stock Unit (RSU) eligible level We offer a comprehensive package of benefits including paid time off, health benefits — medical/dental/vision/hearing aid/pharmacy/behavioral health/employee assistance, health care reimbursement account, dependent care assistance plan, commuter benefits, short-term disability and long-term disability insurance, AD&D insurance, life insurance, 401(k), stock purchase plan, SmartDollar financial wellness program, to eligible employees. Costco is committed to a diverse and inclusive workplace. Costco is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or any other legally protected status. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request to IT-Recruiting@costco.com. If hired, you will be required to provide proof of authorization to work in the United States. 
In some cases, applicants and employees for selected positions will not be sponsored for work authorization, including, but not limited to H-1B visas."," Entry level "," Full-time "," Information Technology "," Retail " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-the-judge-group-3491751605?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=uUErrE9iXoxR2zVhx31vZQ%3D%3D&position=12&pageNum=8&trk=public_jobs_jserp-result_search-card," The Judge Group ",https://www.linkedin.com/company/the-judge-group?trk=public_jobs_topcard-org-name," New York City Metropolitan Area "," 3 weeks ago "," 169 applicants ","Position Summary We are seeking an Analytics Data Engineer with outstanding Python / Spark knowledge to help the Securities Services business identify opportunities to improve efficiency, increase revenue, optimize the operating model and risk framework and create value-added, data-driven products for our clients. This is a software engineering role. The successful candidate will work closely with multiple teams including Product Development to develop applications within our AWS Analytics platform. This is a hands-on role requiring the design and development of applications. The successful candidate will also help formulate the analytics strategy and ensure cross-application data & logic consistency and optimization. Requirements Experience in building data pipelines using Python / PySpark Excellent Python knowledge with a strong track record of delivering with impact Experience in Python development including Web application frameworks such as Flask, Bottle and Tornado Strong database skills with a thorough understanding of relational database and data modelling concepts. Well-rounded in Object Oriented Programming and Design Patterns Working knowledge of AWS and relevant cloud tools like Glue, Lambda Understanding of CI/CD Pipeline builds Experience in developing user interfaces and good visualization skills. 
Knowledge of JavaScript, HTML5 and core frameworks such as React.js is desirable An ability to be self-sufficient and proactive whilst working in a broader team Bachelor’s Degree in Computer Science or a related discipline. About Us We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. In accordance with applicable law, we make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as any mental health or physical disability needs. We offer a competitive total rewards package including base salary determined based on the role, experience, skill set, and location. For those in eligible roles, discretionary incentive compensation which may be awarded in recognition of individual achievements and contributions. We also offer a range of benefits and programs to meet employee needs, based on eligibility. These benefits include comprehensive health care coverage, on-site health and wellness centers, a retirement savings plan, backup childcare, tuition reimbursement, mental health support, financial coaching and more. Additional details about total compensation and benefits will be provided during the hiring process. About the Team Our Corporate & Investment Bank relies on innovators like you to build and maintain the technology that helps us safely service the world’s important corporations, governments and institutions. 
You'll develop solutions that help the bank provide strategic advice, raise capital, manage risk, and extend liquidity in markets spanning over 100 countries around the world."," Mid-Senior level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer : Merchant Intelligence (Remote),https://www.linkedin.com/jobs/view/data-engineer-merchant-intelligence-remote-at-constructor-3511345422?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=3sh33mazXeEdzAq4W6lMMw%3D%3D&position=16&pageNum=8&trk=public_jobs_jserp-result_search-card," Constructor ",https://www.linkedin.com/company/constructor-io?trk=public_jobs_topcard-org-name," San Francisco, CA "," 1 month ago "," 33 applicants ","About Us Constructor.io powers product search and discovery for the largest retailers in the world, like Sephora and Backcountry, serving billions of requests every year: you have most likely used our product without knowing it. Each year we are growing in revenue and scale by several multiples, helping customers in every eCommerce vertical around the world. We love working together to help each other succeed, and are committed to maintaining an open, cooperative culture as we grow. We get to the right answer with empathy, ownership and passion for making an impact. Merchant Intelligence Team An important part of our product is the Customer dashboard that helps merchandizers to analyze and impact user behavior. We provide a number of tools for them to influence which products and product attributes should and will receive more attention to create value for their business. The goal of the Merchant Intelligence team is to: Help merchandizers achieve their e-commerce goals, increasing satisfaction with their sites and retention of customers. Provide insights to merchants they can't get anywhere else. Become a critical point of merchant team planning, decision making, and evaluation so we become a sticky part of their organization. 
Challenges you will tackle Deliver new reports and tools for merchandizers and analysts from e-commerce companies. Improve the existing dashboard experience by building analytics that provide insights to improve KPIs. Perform data exploration and research user behavior. Implement end-to-end data pipelines to support real-time analytics for important business metrics. Take part in product research and development, iterate with prototypes and customer product interviews. Requirements You are proficient in BI tools (data analysis, building dashboards for engineers and non-technical folks). You are an excellent communicator with the ability to translate business asks into a technical language and vice versa. You are excited to leverage massive amounts of data to drive product innovation & deliver business value. You are familiar with mathematical statistics (A/B tests). You are proficient at SQL (any variant), well-versed in exploratory data analysis with Python (pandas & numpy, data visualization libraries). A big plus is practical familiarity with the big data stack (Spark, Presto/Athena, Hive). You are adept at fast prototyping and providing analytical support for initiatives in the e-commerce space by identifying & focusing on relevant features & metrics. You are willing to develop and maintain effective communication tools to report business performance and inform decision-making at a cross-functional level. Stack: python, numpy, pandas, SQL, pyspark, flask, docker, git Benefits Unlimited vacation time - we strongly encourage all of our employees to take at least 3 weeks per year A competitive compensation package including stock options Company-sponsored US health coverage (100% paid for employee) Fully remote team - choose where you live Work from home stipend! 
We want you to have the resources you need to set up your home office Apple laptops provided for new employees Training and development budget for every employee, refreshed each year Parental leave for qualified employees Work with smart people who will help you grow and make a meaningful impact Diversity, Equity, and Inclusion at Constructor At Constructor.io we are committed to cultivating a work environment that is diverse, equitable, and inclusive. As an equal opportunity employer, we welcome individuals of all backgrounds and provide equal opportunities to all applicants regardless of their education, diversity of opinion, race, color, religion, gender, gender expression, sexual orientation, national origin, genetics, disability, age, veteran status or affiliation in any other protected group."," Not Applicable "," Full-time "," Engineering "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-american-residential-services-3506293195?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=FnDjy%2BPbTrLpldND8DD35w%3D%3D&position=3&pageNum=9&trk=public_jobs_jserp-result_search-card," American Residential Services ",https://www.linkedin.com/company/american-residential-services?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," Over 200 applicants ","SUMMARY: This position is responsible for design, development, and implementation of dimensionally modeled data storage, retrieval, and ETL solutions to support the Strategic Solutions Department to provide insightful, timely, and accurate Business Intelligence to the Business. RESPONSIBILITIES: Designs and implements complex/shared Dimensional Models in an Azure Data Warehouse environment to support multifaceted analytics and reporting objectives Designs and develops complex Tables, Views, and Stored Procedures to support the Data Warehousing, Reporting, and analytics initiatives of the Business Intelligence team. 
Investigates, develops, and implements effective ETL Processes using a variety of methods to support the company Data Warehouse. Develops efficient and effective data extraction procedures. Helps maintain the integrity and security of the company Azure Data Warehouse. Creates, maintains, and validates dimensionally modeled data sets for a variety of needs. Takes initiative in identifying issues pertaining to data management and communicates these issues to management. Accurately completes all tasks within given time constraints. Is capable of working on multiple tasks at the same time. Communicates effectively with Team Members and Leadership. REQUIREMENTS: Bachelor's Degree in Software Engineering or related field required (Applicable work experience may be substituted) 5+ years' experience in data management and/or development Ability to work in distributed systems Proven proficiency with MS SQL required Must be able to develop creative solutions to problems in a timely manner. Proficiency utilizing ETL tools including but not limited to ADF, BCP, SSIS, PowerShell Experience retrieving data from vendor APIs via ADF, PowerShell and SSIS Experience with Azure SQL DB, Data Factories, and Data Lake Experience developing Dimensionally Modeled Data/Star Schema preferred Proven ability to communicate effectively on both business and technical levels ARS-Rescue Rooter is an Equal Opportunity Employer AA/EOE/M/F/V/D. 
In compliance with the Americans with Disabilities Act, ARS-Rescue Rooter may provide reasonable accommodations to qualified individuals with disabilities and encourages both prospective and current employees to discuss potential accommodations with the employer."," Associate "," Full-time "," Information Technology "," Consumer Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-cvs-health-3522843008?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=I83wMrRKZnK6M7jIeBuF%2BQ%3D%3D&position=13&pageNum=9&trk=public_jobs_jserp-result_search-card," CVS Health ",https://www.linkedin.com/company/cvs-health?trk=public_jobs_topcard-org-name," Texas, United States "," 19 hours ago "," 87 applicants ","Job Description Assists in the development of large-scale data structures and pipelines to organize, collect and standardize data that helps generate insights and addresses reporting needs Applies understanding of key business drivers to accomplish own work Uses expertise, judgment and precedents to contribute to the resolution of moderately complex problems Leads portions of initiatives of limited scope, with guidance and direction Writes ETL (Extract / Transform / Load) processes, designs database systems and develops tools for real-time and offline analytic processing Collaborates with client team to transform data and integrate algorithms and models into automated processes Uses knowledge in Hadoop architecture, HDFS commands and experience designing & optimizing queries to build data pipelines Uses programming skills in Python, Java or any of the major languages to build robust data pipelines and dynamic systems Builds data marts and data models to support clients and other internal customers Integrates data from a variety of sources, assuring that they adhere to data quality and accessibility standards Pay Range The typical pay range for this role is: Minimum: $ 70,000 Maximum: $ 140,000 Please keep in mind that this range 
represents the pay range for all positions in the job grade within which this position falls. The actual salary offer will take into account a wide range of factors, including location. Required Qualifications 1+ years of progressively complex related experience Experience with bash shell scripts, UNIX utilities & UNIX Commands Preferred Qualifications Ability to leverage multiple tools and programming languages to analyze and manipulate data sets from disparate data sources Ability to understand complex systems and solve challenging analytical problems Strong problem-solving skills and critical thinking ability Strong collaboration and communication skills within and across teams Knowledge in Java, Python, Hive, Cassandra, Pig, MySQL or NoSQL or similar Knowledge in Hadoop architecture, HDFS commands and experience designing & optimizing queries against data in the HDFS environment Experience building data transformation and processing solutions Has strong knowledge of large-scale search applications and building high volume data pipelines Education Bachelor's degree or equivalent work experience in Computer Science, Engineering, Machine Learning, or related discipline Master’s degree or PhD preferred Business Overview Bring your heart to CVS Health Every one of us at CVS Health shares a single, clear purpose: Bringing our heart to every moment of your health. This purpose guides our commitment to deliver enhanced human-centric health care for a rapidly changing world. Anchored in our brand — with heart at its center — our purpose sends a personal message that how we deliver our services is just as important as what we deliver. Our Heart At Work Behaviors™ support this purpose. We want everyone who works at CVS Health to feel empowered by the role they play in transforming our culture and accelerating our ability to innovate and deliver solutions to make health care more personal, convenient and affordable. 
We strive to promote and sustain a culture of diversity, inclusion and belonging every day. CVS Health is an affirmative action employer, and is an equal opportunity employer, as are the physician-owned businesses for which CVS Health provides management services. We do not discriminate in recruiting, hiring, promotion, or any other personnel action based on race, ethnicity, color, national origin, sex/gender, sexual orientation, gender identity or expression, religion, age, disability, protected veteran status, or any other characteristic protected by applicable federal, state, or local law. We proudly support and encourage people with military experience (active, veterans, reservists and National Guard) as well as military spouses to apply for CVS Health job opportunities."," Entry level "," Full-time "," Information Technology "," Wellness and Fitness Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-versar-inc-3531152322?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=Vn1leopihNVJtV6q5cT9zw%3D%3D&position=14&pageNum=9&trk=public_jobs_jserp-result_search-card," CVS Health ",https://www.linkedin.com/company/cvs-health?trk=public_jobs_topcard-org-name," Texas, United States "," 19 hours ago "," 87 applicants "," Job Description Assists in the development of large-scale data structures and pipelines to organize, collect and standardize data that helps generate insights and addresses reporting needs Applies understanding of key business drivers to accomplish own work Uses expertise, judgment and precedents to contribute to the resolution of moderately complex problems Leads portions of initiatives of limited scope, with guidance and direction Writes ETL (Extract / Transform / Load) processes, designs database systems and develops tools for real-time and offline analytic processing Collaborates with client team to transform data and integrate algorithms and models into automated processes Uses knowledge in Hadoop 
architecture, HDFS commands and experience designing & optimizing queries to build data pipelines Uses programming skills in Python, Java or any of the major languages to build robust data pipelines and dynamic systems Builds data marts and data models to support clients and other internal customers Integrates data from a variety of sources, assuring that they adhere to data quality and accessibility standards Pay Range The typical pay range for this role is: Minimum: $ 70,000 Maximum: $ 140,000 Please keep in mind that this range represents the pay range for all positions in the job grade within which this position falls. The actual salary offer will take into account a wide range of factors, including location. Required Qualifications 1+ years of progressively complex related experience Experience with bash shell scripts, UNIX utilities & UNIX Commands Preferred Qualifications Ability to leverage multiple tools and programming languages to analyze and manipulate data sets from disparate data sources Ability to understand complex systems and solve challenging analytical problems Strong problem-solving skills and critical thinking ability Strong collaboration and communication skills within and across teams Knowledge in Java, Python, Hive, Cassandra, Pig, MySQL or NoSQL or similar Knowledge in Hadoop architecture, HDFS commands and experience designing & optimizing queries against data in the HDFS environment Experience building data transformation and processing solutions Has strong knowledge of large-scale search applications and building high volume data pipelines Education Bachelor's degree or equivalent work experience in Computer Science, Engineering, Machine Learning, or related discipline Master’s degree or PhD preferred Business Overview Bring your heart to CVS Health Every one of us at CVS Health shares a single, clear purpose: Bringing our heart to every moment of your health. 
This purpose guides our commitment to deliver enhanced human-centric health care for a rapidly changing world. Anchored in our brand — with heart at its center — our purpose sends a personal message that how we deliver our services is just as important as what we deliver. Our Heart At Work Behaviors™ support this purpose. We want everyone who works at CVS Health to feel empowered by the role they play in transforming our culture and accelerating our ability to innovate and deliver solutions to make health care more personal, convenient and affordable. We strive to promote and sustain a culture of diversity, inclusion and belonging every day. CVS Health is an affirmative action employer, and is an equal opportunity employer, as are the physician-owned businesses for which CVS Health provides management services. We do not discriminate in recruiting, hiring, promotion, or any other personnel action based on race, ethnicity, color, national origin, sex/gender, sexual orientation, gender identity or expression, religion, age, disability, protected veteran status, or any other characteristic protected by applicable federal, state, or local law. We proudly support and encourage people with military experience (active, veterans, reservists and National Guard) as well as military spouses to apply for CVS Health job opportunities. 
"," Entry level "," Full-time "," Information Technology "," Wellness and Fitness Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-savion-llc-3509229732?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=lqAvP1ebtGGAN63F1get3w%3D%3D&position=16&pageNum=9&trk=public_jobs_jserp-result_search-card," Savion, LLC ",https://www.linkedin.com/company/savion-llc?trk=public_jobs_topcard-org-name," Kansas City Metropolitan Area "," 1 week ago "," 109 applicants ","Savion, a Shell Group portfolio company operating on a stand-alone basis, is one of the largest, most technologically advanced utility-scale solar and energy storage project development companies in the United States. With a growing portfolio of more than 23 GW, Savion’s diverse team provides comprehensive services at each phase of renewable energy project development, from conception through construction. As part of this full-service model, Savion manages all aspects of development for customers, partners, and project host communities. Savion is committed to helping decarbonize the energy grid by replacing electric power generation with renewable sources and delivering cost-competitive electricity to the marketplace. Our workplace culture attracts competitive, smart, and fun people. We are committed to the health, fitness, and work-life balance of our employees. Our employees are highly motivated, technically proficient, team-oriented, and committed to renewable energy. We recognize our employees are our most valuable asset and offer highly competitive pay as well as exceptional above-market employee benefits. Balancing work and play is met with generous PTO allowances and 14.5 paid holidays per year. 
Check out more about Savion culture: https://vimeo.com/652520363 Position Details The data engineer, which is an emerging role in Savion’s data and analytics team, plays a pivotal role in operationalizing data and analytics initiatives for Savion’s business initiatives. The bulk of the data engineer’s work is building, managing and optimizing data pipelines. The data engineer also needs to ensure compliance with data governance and data security requirements while creating, improving and operationalizing these integrated and reusable data pipelines. This enables faster data access, integrated data reuse and vastly improved time-to-solution for Savion’s data and analytics initiatives. This new data engineer position at Savion is responsible for operationalizing data and analytics on behalf of the business and organizational outcomes. This role requires both creative and collaborative working within IT and across all business units. It involves promoting effective data management practices and improved organizational leverage of data and analytics. The data engineer also works with key business stakeholders, IT experts and subject-matter experts to plan and deliver optimal analytics and data science solutions. Core Responsibilities Gather and analyze business and customer requirements to identify and prioritize opportunities to improve efficiencies and processes through data integration. Use user and stakeholder feedback to guide the development of new products and data integration enhancements. Prepare and manage technical documentation and self-service resources on data integrations. Architecting, creating and maintaining data pipelines, including APIs and/or file-based integrations. Proactively monitor data integration performance and troubleshoot, resolve, and report issues to impacted teams and stakeholders. Participate in vendor and tool selection to meet business needs and support development team workflows. 
Promote the available data and analytics capabilities and expertise to business unit leaders Promote a collaborative team environment and work closely with colleagues and stakeholders to achieve goals. Education and Experience At least 5 years or more of work experience in data management disciplines including data integration, modeling, optimization and data quality, and/or other areas directly relevant to data engineering responsibilities and tasks. A bachelor’s or master’s degree in computer science, statistics, applied mathematics, data management, information systems, information science or a related quantitative field is required. An advanced degree or certificate in computer science (MS), statistics, applied mathematics, information science (MIS), data management, information systems, information science (postgraduate diploma or related) or a related quantitative field is preferred. The ideal candidate will have a combination of IT skills, data governance skills and analytics skills with a technical or computer science degree. Skills Strong experience with advanced analytics tools for Object-oriented/object function scripting using languages such as [R, Python, Matlab]. Strong ability to design, build and manage data pipelines for data structures encompassing data transformation, data models, schemas, metadata and workload management. The ability to work with both IT and business in integrating analytics and data science output into business processes and workflows. Strong experience with SQL for relational databases. Strong experience in working with large, heterogeneous datasets in building and optimizing data pipelines, pipeline architectures and integrated datasets using traditional data integration technologies. These should include ETL/ELT, API design and access. Strong experience in working with and optimizing existing ETL processes and data integration and data preparation flows and helping to move them in production. 
Basic experience working with popular data discovery, analytics and BI software tools like PowerBI and others for semantic-layer-based data discovery. Strong understanding of popular open-source and commercial data science platforms such as Python, R, Alteryx, others is a strong plus but not required/compulsory. Adept with agile methodologies. Familiar with DevOps and DataOps principles to data pipelines for improving the communication, integration, reuse and automation of data flows between data managers and consumers across the organization. Compensation and Benefits Base, bonus and compensation commensurate with experience Vacation & Holidays: Generous paid time off plus 14.5 paid holidays Benefits: Health & dental insurance, long-term disability and life insurance, 401(k) match Awesome culture and opportunities for interaction across multiple departments with fun and exciting events and challenges Savion offers a culture that embraces a flex-time schedule so as to maximize both productivity and work-life balance Contact Information Please send a resume and cover letter to: jobs@savionenergy.com Only direct applicants need apply. No recruiters please. This is a KC based position. 
Hybrid schedules are available."," Associate "," Full-time "," Information Technology "," Renewable Energy Semiconductor Manufacturing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-for-eoir-programs-at-acacia-center-for-justice-3505391042?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=m5to%2B4WhybstH1lpbU8PJQ%3D%3D&position=17&pageNum=9&trk=public_jobs_jserp-result_search-card," Savion, LLC ",https://www.linkedin.com/company/savion-llc?trk=public_jobs_topcard-org-name," Kansas City Metropolitan Area "," 1 week ago "," 109 applicants "," Savion, a Shell Group portfolio company operating on a stand-alone basis, is one of the largest, most technologically advanced utility-scale solar and energy storage project development companies in the United States. With a growing portfolio of more than 23 GW, Savion’s diverse team provides comprehensive services at each phase of renewable energy project development, from conception through construction. As part of this full-service model, Savion manages all aspects of development for customers, partners, and project host communities. Savion is committed to helping decarbonize the energy grid by replacing electric power generation with renewable sources and delivering cost-competitive electricity to the marketplace. Our workplace culture attracts competitive, smart, and fun people. We are committed to the health, fitness, and work-life balance of our employees. Our employees are highly motivated, technically proficient, team-oriented, and committed to renewable energy. We recognize our employees are our most valuable asset and offer highly competitive pay as well as exceptional above-market employee benefits. Balancing work and play is met with generous PTO allowances and 14.5 paid holidays per year. 
Check out more about Savion culture: https://vimeo.com/652520363 Position Details The data engineer, which is an emerging role in Savion’s data and analytics team, plays a pivotal role in operationalizing data and analytics initiatives for Savion’s business initiatives. The bulk of the data engineer’s work is building, managing and optimizing data pipelines. The data engineer also needs to ensure compliance with data governance and data security requirements while creating, improving and operationalizing these integrated and reusable data pipelines. This enables faster data access, integrated data reuse and vastly improved time-to-solution for Savion’s data and analytics initiatives. This new data engineer position at Savion is responsible for operationalizing data and analytics on behalf of the business and organizational outcomes. This role requires both creative and collaborative working within IT and across all business units. It involves promoting effective data management practices and improved organizational leverage of data and analytics. 
The data engineer also works with key business stakeholders, IT experts and subject-matter experts to plan and deliver optimal analytics and data science solutions. Core Responsibilities Gather and analyze business and customer requirements to identify and prioritize opportunities to improve efficiencies and processes through data integration. Use user and stakeholder feedback to guide the development of new products and data integration enhancements. Prepare and manage technical documentation and self-service resources on data integrations. Architecting, creating and maintaining data pipelines, including APIs and/or file-based integrations. Proactively monitor data integration performance and troubleshoot, resolve, and report issues to impacted teams and stakeholders. Participate in vendor and tool selection to meet business needs and support development team workflows. Promote the available data and analytics capabilities and expertise to business unit leaders Promote a collaborative team environment and work closely with colleagues and stakeholders to achieve goals. Education and Experience At least 5 years or more of work experience in data management disciplines including data integration, modeling, optimization and data quality, and/or other areas directly relevant to data engineering responsibilities and tasks. A bachelor’s or master’s degree in computer science, statistics, applied mathematics, data management, information systems, information science or a related quantitative field is required. An advanced degree or certificate in computer science (MS), statistics, applied mathematics, information science (MIS), data management, information systems, information science (postgraduate diploma or related) or a related quantitative field is preferred. The ideal candidate will have a combination of IT skills, data governance skills and analytics skills with a technical or computer science degree. Skills Strong experience with advanced analytics tools for Object-oriented/object 
function scripting using languages such as [R, Python, Matlab]. Strong ability to design, build and manage data pipelines for data structures encompassing data transformation, data models, schemas, metadata and workload management. The ability to work with both IT and business in integrating analytics and data science output into business processes and workflows. Strong experience with SQL for relational databases. Strong experience in working with large, heterogeneous datasets in building and optimizing data pipelines, pipeline architectures and integrated datasets using traditional data integration technologies. These should include ETL/ELT, API design and access. Strong experience in working with and optimizing existing ETL processes and data integration and data preparation flows and helping to move them in production. Basic experience working with popular data discovery, analytics and BI software tools like PowerBI and others for semantic-layer-based data discovery. Strong understanding of popular open-source and commercial data science platforms such as Python, R, Alteryx, others is a strong plus but not required/compulsory. Adept with agile methodologies. Familiar with DevOps and DataOps principles to data pipelines for improving the communication, integration, reuse and automation of data flows between data managers and consumers across the organization. Compensation and Benefits Base, bonus and compensation commensurate with experience Vacation & Holidays: Generous paid time off plus 14.5 paid holidays Benefits: Health & dental insurance, long-term disability and life insurance, 401(k) match Awesome culture and opportunities for interaction across multiple departments with fun and exciting events and challenges Savion offers a culture that embraces a flex-time schedule so as to maximize both productivity and work-life balance Contact Information Please send a resume and cover letter to: jobs@savionenergy.com Only direct applicants need apply. 
No recruiters please. This is a KC based position. Hybrid schedules are available. "," Associate "," Full-time "," Information Technology "," Renewable Energy Semiconductor Manufacturing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-eliassen-group-3511763941?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=bPKAM3L3y8xUncfN4xx9Aw%3D%3D&position=18&pageNum=9&trk=public_jobs_jserp-result_search-card," Savion, LLC ",https://www.linkedin.com/company/savion-llc?trk=public_jobs_topcard-org-name," Kansas City Metropolitan Area "," 1 week ago "," 109 applicants "," Savion, a Shell Group portfolio company operating on a stand-alone basis, is one of the largest, most technologically advanced utility-scale solar and energy storage project development companies in the United States. With a growing portfolio of more than 23 GW, Savion’s diverse team provides comprehensive services at each phase of renewable energy project development, from conception through construction. As part of this full-service model, Savion manages all aspects of development for customers, partners, and project host communities. Savion is committed to helping decarbonize the energy grid by replacing electric power generation with renewable sources and delivering cost-competitive electricity to the marketplace. Our workplace culture attracts competitive, smart, and fun people. We are committed to the health, fitness, and work-life balance of our employees. Our employees are highly motivated, technically proficient, team-oriented, and committed to renewable energy. We recognize our employees are our most valuable asset and offer highly competitive pay as well as exceptional above-market employee benefits. Balancing work and play is met with generous PTO allowances and 14.5 paid holidays per year. 
Check out more about Savion culture: https://vimeo.com/652520363 Position Details The data engineer, which is an emerging role in Savion’s data and analytics team, plays a pivotal role in operationalizing data and analytics initiatives for Savion’s business initiatives. The bulk of the data engineer’s work is building, managing and optimizing data pipelines. The data engineer also needs to ensure compliance with data governance and data security requirements while creating, improving and operationalizing these integrated and reusable data pipelines. This enables faster data access, integrated data reuse and vastly improved time-to-solution for Savion’s data and analytics initiatives. This new data engineer position at Savion is responsible for operationalizing data and analytics on behalf of the business and organizational outcomes. This role requires both creative and collaborative working within IT and across all business units. It involves promoting effective data management practices and improved organizational leverage of data and analytics. 
The data engineer also works with key business stakeholders, IT experts and subject-matter experts to plan and deliver optimal analytics and data science solutions. Core Responsibilities Gather and analyze business and customer requirements to identify and prioritize opportunities to improve efficiencies and processes through data integration. Use user and stakeholder feedback to guide the development of new products and data integration enhancements. Prepare and manage technical documentation and self-service resources on data integrations. Architecting, creating and maintaining data pipelines, including APIs and/or file-based integrations. Proactively monitor data integration performance and troubleshoot, resolve, and report issues to impacted teams and stakeholders. Participate in vendor and tool selection to meet business needs and support development team workflows. Promote the available data and analytics capabilities and expertise to business unit leaders Promote a collaborative team environment and work closely with colleagues and stakeholders to achieve goals. Education and Experience At least 5 years or more of work experience in data management disciplines including data integration, modeling, optimization and data quality, and/or other areas directly relevant to data engineering responsibilities and tasks. A bachelor’s or master’s degree in computer science, statistics, applied mathematics, data management, information systems, information science or a related quantitative field is required. An advanced degree or certificate in computer science (MS), statistics, applied mathematics, information science (MIS), data management, information systems, information science (postgraduate diploma or related) or a related quantitative field is preferred. The ideal candidate will have a combination of IT skills, data governance skills and analytics skills with a technical or computer science degree. Skills Strong experience with advanced analytics tools for Object-oriented/object 
function scripting using languages such as [R, Python, Matlab]. Strong ability to design, build and manage data pipelines for data structures encompassing data transformation, data models, schemas, metadata and workload management. The ability to work with both IT and business in integrating analytics and data science output into business processes and workflows. Strong experience with SQL for relational databases. Strong experience in working with large, heterogeneous datasets in building and optimizing data pipelines, pipeline architectures and integrated datasets using traditional data integration technologies. These should include ETL/ELT, API design and access. Strong experience in working with and optimizing existing ETL processes and data integration and data preparation flows and helping to move them in production. Basic experience working with popular data discovery, analytics and BI software tools like PowerBI and others for semantic-layer-based data discovery. Strong understanding of popular open-source and commercial data science platforms such as Python, R, Alteryx, others is a strong plus but not required/compulsory. Adept with agile methodologies. Familiar with DevOps and DataOps principles to data pipelines for improving the communication, integration, reuse and automation of data flows between data managers and consumers across the organization. Compensation and Benefits Base, bonus and compensation commensurate with experience Vacation & Holidays: Generous paid time off plus 14.5 paid holidays Benefits: Health & dental insurance, long-term disability and life insurance, 401(k) match Awesome culture and opportunities for interaction across multiple departments with fun and exciting events and challenges Savion offers a culture that embraces a flex-time schedule so as to maximize both productivity and work-life balance Contact Information Please send a resume and cover letter to: jobs@savionenergy.com Only direct applicants need apply. 
No recruiters please. This is a KC based position. Hybrid schedules are available. "," Associate "," Full-time "," Information Technology "," Renewable Energy Semiconductor Manufacturing " Data Engineer,United States,REMOTE Data Engineer,https://www.linkedin.com/jobs/view/remote-data-engineer-at-state-farm-3487709894?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=K1cYv8js5fNR0N4iYZa6DA%3D%3D&position=20&pageNum=9&trk=public_jobs_jserp-result_search-card," State Farm ",https://www.linkedin.com/company/state_farm?trk=public_jobs_topcard-org-name," Tempe, AZ "," 1 day ago "," Over 200 applicants ","Overview We are not just offering a job but a meaningful career! Come join our passionate team! As a Fortune 50 company, we hire the best employees to serve our customers, making us a leader in the insurance and financial services industry. State Farm embraces diversity and inclusion to ensure a workforce that is engaged, builds on the strengths and talents of all associates, and creates a Good Neighbor culture. We offer competitive benefits and pay with the potential for an annual financial award based on both individual and enterprise performance. Our employees have an opportunity to participate in volunteer events within the community and engage in a learning culture. We offer programs to assist with tuition reimbursement, professional designations, employee development, wellness initiatives, and more! Visit our Careers page for more information on our benefits, locations and the process of joining the State Farm team! REMOTE: Qualified candidates (outside of hub locations listed below) may be considered for 100% remote work arrangements based on where a candidate currently resides or is currently located. HYBRID: Qualified candidates (in or near hub locations listed below) should plan to spend time working from home and some time working in the office as part of our hybrid work environment. 
HUB LOCATIONS: Dunwoody, GA; Richardson, TX; Tempe, AZ; or Bloomington, IL Check out our Enterprise Technology department! Responsibilities The Data Visualization team is seeking a talented and creative Data Engineer to evaluate and enable data technologies that transform data into meaningful insights across the Enterprise. To be successful in this role, the engineer must be a strategic thinker who can bring a data-driven approach to solving complex business problems. We need an exceptional communicator, passionate about data, collaborative, analytical, and a problem-solver who has expertise related to business intelligence (BI) tooling. As a Data Engineer in this role you will get to: Position data and perform data analysis for use in visualizations that will provide insights into business opportunities. Interface with the business areas that are sourcing the data for the various analytical insights. Qualifications Highly desired skills: At least 5 years of experience in data engineering Strong proficiency in Python and SQL Experience with data warehousing and ETL tools Familiarity with cloud computing platforms, such as AWS Knowledge of data modeling and data visualization techniques Excellent problem-solving, analytical, communication, and interpersonal skills SPONSORSHIP: Applicants are required to be eligible to lawfully work in the U.S. immediately; employer will not sponsor applicants for U.S. work authorization (e.g. H-1B visa) for this opportunity For Los Angeles candidates: Pursuant to the Los Angeles Fair Chance Initiative for Hiring, we will consider for employment qualified applicants with criminal histories. For San Francisco candidates: Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records. 
For Colorado and Washington State candidates: Salary Range: $84,620.00-$169,250.00 For California, NYC, and CT candidates: Potential salary range: $84,620.00-$169,250.00 Potential yearly incentive pay: up to 15% of base salary Competitive Benefits including: 401k Plan Health Insurance Dental/Vision plans Life Insurance Paid Time Off Annual Merit Increases Tuition Reimbursement Health Initiatives For more details visit our benefits summary page SFARM "," Entry level "," Full-time "," Analyst, Information Technology, and Engineering "," Insurance " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-one-3499587056?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=tZakwI5WSN7Yw%2B7mg7DlAw%3D%3D&position=21&pageNum=9&trk=public_jobs_jserp-result_search-card," ONE ",https://www.linkedin.com/company/oneapp?trk=public_jobs_topcard-org-name," Seattle, WA "," 2 weeks ago "," 49 applicants ","About ONE ONE is on a mission to help people live healthier financial lives. We’re doing this by creating simple solutions to help our customers save, spend, and grow their money – all in one place. The U.S. consumer today deserves better. Millions of Americans today can’t access credit, build savings or wealth, and are left to manage their financial lives through multiple disconnected apps. Almost a quarter of U.S. adults are unbanked or underbanked and roughly 80% of fintech users rely on multiple accounts to manage their finances. What makes us unique? We are backed by a preeminent fintech investor (Ribbit) and the world’s largest retailer (Walmart), maintain the speed and independence of a startup, and employ a strong (and growing) collection of world-class talent. There’s never been a better moment to build a business that helps people live healthier financial lives. Come build with us! The role As an Engineer, Data, your mandate is to build the data transformation pipelines & reporting infrastructure that serve our members and run our company. 
This role will impact ONE’s vision by writing, reviewing, and shipping code and collaborating with others across the company. You will work closely across the Engineering, Product, Design as well as Sales, Compliance and Customer Support team(s) working with stakeholders as highly technical, communicative, and emotionally intelligent partners. This role reports to Manager, Data Engineering. This role is responsible for: Building and maintaining ONE's data infrastructure. Developing and owning ONE's streaming data transformation pipeline. Managing and supporting reporting and analytics tools. Collaborating closely with our Data Analysts and Data Scientists. Tracking and defining metrics around performance. Additional duties as assigned by your manager. You bring Mid Career (5-10 Years) Experience in Apache Spark, Scala, Python, SQL. Experience designing and implementing low latency data pipelines using Spark structured streaming. Experience optimizing SQL queries and lake / warehouse data structures. Cost-conscious creative problem solving. Proficient in AWS cloud services and technologies. Experience building, shipping — and growing — non-trivial products & services. Passion for your craft, people, and our mission. You value quality across code, communication, and culture. A desire to keep growing your skills, and an ability to learn quickly. We hire for slope, not just y-intercept. Pay Transparency The estimated annual base salary for this position ranges from $175,000 to $190,000. Pay is generally based upon the level, complexity, responsibility, and job duties/requirements of the specific position. We then source candidates with the requisite skills, expertise, education, training, and experience. If you are selected for an interview, please feel welcome to speak to a Talent Partner about our compensation philosophy and other available benefits. 
Leveling Philosophy In order to thoughtfully scale the company and avoid downstream inequities, we’ve adopted a flat titling structure at ONE. Though we may occasionally post a role externally with a prefix such as “Senior” to reflect the external level of the position, we do not use prefixes in titles like that internally unless in a position which manages a team. Internal titles typically include your specific functional responsibility, such as engineering, product management or sales, and often include additional descriptors to ensure clarity of role and placement within our organization (i.e. “Engineer, Platform”, “Sales, Business Development” or “Manager, Talent”). Employees are paid commensurate with their experience and the internal level within ONE. What it's like working @ ONE Our teams collaborate remotely and in our work spaces in New York and Sacramento. Competitive cash Benefits effective on day one Early access to a high potential, high growth fintech Generous stock option packages in an early-stage startup Remote friendly (anywhere in the US) and office friendly - you pick the schedule Flexible time off programs - vacation, sick, paid parental leave, and paid caregiver leave 401(k) plan with match Inclusion & Belonging To build technology and products that are used and loved by people and solve real-world problems, we need to build a team with many different perspectives and experiences. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We encourage candidates from all backgrounds to apply. Applicants in need of special assistance or accommodation during the interview process or in accessing our website may contact us. 
Email talent@one.app with any questions."," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-private-energy-partners-3501298892?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=ATpLaklJTZB2poOD4zjFVg%3D%3D&position=22&pageNum=9&trk=public_jobs_jserp-result_search-card," ONE ",https://www.linkedin.com/company/oneapp?trk=public_jobs_topcard-org-name," Seattle, WA "," 2 weeks ago "," 49 applicants "," About ONE ONE is on a mission to help people live healthier financial lives. We’re doing this by creating simple solutions to help our customers save, spend, and grow their money – all in one place. The U.S. consumer today deserves better. Millions of Americans today can’t access credit, build savings or wealth, and are left to manage their financial lives through multiple disconnected apps. Almost a quarter of U.S. adults are unbanked or underbanked and roughly 80% of fintech users rely on multiple accounts to manage their finances. What makes us unique? We are backed by a preeminent fintech investor (Ribbit) and the world’s largest retailer (Walmart), maintain the speed and independence of a startup, and employ a strong (and growing) collection of world-class talent. There’s never been a better moment to build a business that helps people live healthier financial lives. Come build with us! The role As an Engineer, Data, your mandate is to build the data transformation pipelines & reporting infrastructure that serve our members and run our company. This role will impact ONE’s vision by writing, reviewing, and shipping code and collaborating with others across the company. You will work closely across the Engineering, Product, Design as well as Sales, Compliance and Customer Support team(s) working with stakeholders as highly technical, communicative, and emotionally intelligent partners. 
This role reports to Manager, Data Engineering. This role is responsible for: Building and maintaining ONE's data infrastructure. Developing and owning ONE's streaming data transformation pipeline. Managing and supporting reporting and analytics tools. Collaborating closely with our Data Analysts and Data Scientists. Tracking and defining metrics around performance. Additional duties as assigned by your manager. You bring Mid Career (5-10 Years) Experience in Apache Spark, Scala, Python, SQL. Experience designing and implementing low latency data pipelines using Spark structured streaming. Experience optimizing SQL queries and lake / warehouse data structures. Cost-conscious creative problem solving. Proficient in AWS cloud services and technologies. Experience building, shipping — and growing — non-trivial products & services. Passion for your craft, people, and our mission. You value quality across code, communication, and culture. A desire to keep growing your skills, and an ability to learn quickly. We hire for slope, not just y-intercept. Pay Transparency The estimated annual base salary for this position ranges from $175,000 to $190,000. Pay is generally based upon the level, complexity, responsibility, and job duties/requirements of the specific position. We then source candidates with the requisite skills, expertise, education, training, and experience. If you are selected for an interview, please feel welcome to speak to a Talent Partner about our compensation philosophy and other available benefits. Leveling Philosophy In order to thoughtfully scale the company and avoid downstream inequities, we’ve adopted a flat titling structure at ONE. Though we may occasionally post a role externally with a prefix such as “Senior” to reflect the external level of the position, we do not use prefixes in titles like that internally unless in a position which manages a team. 
Internal titles typically include your specific functional responsibility, such as engineering, product management or sales, and often include additional descriptors to ensure clarity of role and placement within our organization (e.g. “Engineer, Platform”, “Sales, Business Development” or “Manager, Talent”). Employees are paid commensurate with their experience and the internal level within ONE. What it's like working @ ONE Our teams collaborate remotely and in our work spaces in New York and Sacramento. Competitive cash Benefits effective on day one Early access to a high potential, high growth fintech Generous stock option packages in an early-stage startup Remote friendly (anywhere in the US) and office friendly - you pick the schedule Flexible time off programs - vacation, sick, paid parental leave, and paid caregiver leave 401(k) plan with match Inclusion & Belonging To build technology and products that are used and loved by people and solve real-world problems, we need to build a team with many different perspectives and experiences. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We encourage candidates from all backgrounds to apply. Applicants in need of special assistance or accommodation during the interview process or in accessing our website may contact us. Email talent@one.app with any questions. 
"," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,REMOTE Data Engineer,https://www.linkedin.com/jobs/view/remote-data-engineer-at-state-farm-3487713210?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=cRCaXmCE7x3ySuYV6MVWHQ%3D%3D&position=24&pageNum=9&trk=public_jobs_jserp-result_search-card," State Farm ",https://www.linkedin.com/company/state_farm?trk=public_jobs_topcard-org-name," Richardson, TX "," 1 day ago "," Over 200 applicants ","Overview We are not just offering a job but a meaningful career! Come join our passionate team! As a Fortune 50 company, we hire the best employees to serve our customers, making us a leader in the insurance and financial services industry. State Farm embraces diversity and inclusion to ensure a workforce that is engaged, builds on the strengths and talents of all associates, and creates a Good Neighbor culture. We offer competitive benefits and pay with the potential for an annual financial award based on both individual and enterprise performance. Our employees have an opportunity to participate in volunteer events within the community and engage in a learning culture. We offer programs to assist with tuition reimbursement, professional designations, employee development, wellness initiatives, and more! Visit our Careers page for more information on our benefits, locations and the process of joining the State Farm team! REMOTE: Qualified candidates (outside of hub locations listed below) may be considered for 100% remote work arrangements based on where a candidate currently resides or is currently located. HYBRID: Qualified candidates (in or near hub locations listed below) should plan to spend time working from home and some time working in the office as part of our hybrid work environment. HUB LOCATIONS: Dunwoody, GA; Richardson, TX; Tempe, AZ; or Bloomington, IL Check out our Enterprise Technology department! 
Responsibilities The Data Visualization team is seeking a talented and creative Data Engineer to evaluate and enable data technologies that transform data into meaningful insights across the Enterprise. To be successful in this role, the engineer must be a strategic thinker who can bring a data-driven approach to solving complex business problems. We need an exceptional communicator, passionate about data, collaborative, analytical, and a problem-solver who has expertise related to business intelligence (BI) tooling. As a Data Engineer in this role, you will get to: Position data and perform data analysis for use in visualizations that will provide insights into business opportunities. Interface with the business areas that are sourcing the data for the various analytical insights. Qualifications Highly desired skills: At least 5 years of experience in data engineering Strong proficiency in Python and SQL Experience with data warehousing and ETL tools Familiarity with cloud computing platforms, such as AWS Knowledge of data modeling and data visualization techniques Excellent problem-solving, analytical, communication, and interpersonal skills SPONSORSHIP: Applicants are required to be eligible to lawfully work in the U.S. immediately; employer will not sponsor applicants for U.S. work authorization (e.g. H-1B visa) for this opportunity For Los Angeles candidates: Pursuant to the Los Angeles Fair Chance Initiative for Hiring, we will consider for employment qualified applicants with criminal histories. For San Francisco candidates: Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records. 
For Colorado and Washington State candidates: Salary Range: $84,620.00-$169,250.00 For California, NYC, and CT candidates: Potential salary range: $84,620.00-$169,250.00 Potential yearly incentive pay: up to 15% of base salary Competitive Benefits including: 401k Plan Health Insurance Dental/Vision plans Life Insurance Paid Time Off Annual Merit Increases Tuition Reimbursement Health Initiatives For more details visit our benefits summary page SFARM "," Entry level "," Full-time "," Analyst, Information Technology, and Engineering "," Insurance " Data Engineer,United States,Data
Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-otr-solutions-3491478397?refId=aTk8xGxdA9BBUHvODl8GpA%3D%3D&trackingId=LddHB6HCIt%2Bz7y01uPm2wg%3D%3D&position=3&pageNum=10&trk=public_jobs_jserp-result_search-card," OTR Solutions ",https://www.linkedin.com/company/otrsolutions?trk=public_jobs_topcard-org-name," Roswell, GA "," 3 weeks ago "," Over 200 applicants ","OTR Solutions is an innovator in the transportation industry providing a suite of factoring, fuel, and business management focused solutions. We help new and established companies get fast access to the funds they need for daily operations. As a Private Equity backed FinTech company, we are looking to grow our best-in-class financial organization. We are at the beginning of the development of a new Cloud-Native platform that will drive the next wave of innovation in the industry and fuel OTR’s growth. We are looking for growth-minded, collaborative technologists who love to create, innovate, and learn cutting-edge solutions on the latest and greatest technology. OTR has been recognized as a “Top Workplace” by the Atlanta Journal-Constitution since 2016! The Data Engineer will be joining a team of data engineers and data scientists building a data platform from the ground up. As a core technical contributor to OTR Solutions' data modernization, the Data Engineer will have expertise in developing, constructing, testing and maintaining modern data architectures. The Data Engineer will also work closely with data scientists in preparing data for ML modeling and analytics dashboards. 
Please note: We do not sponsor work-related visas. Responsibilities: Develop, construct, test and maintain modern data architectures Prepare data for use in predictive and prescriptive modeling Provide support for Data Analytics team using Tableau and other reporting tools Perform requirements analysis, detail design, troubleshooting, source code construction, system testing, integration testing and implementation for reports. Create, update, and maintain systems based on incoming change requests and break-fixes. Interact with team members when developing interfaces and researching and troubleshooting technical issues. What we look for: Experience designing and building cloud-native data platforms Experience developing a secure integration strategy with external partners Experience building ETLs and data pipelines Experience with cloud native data stores on Azure Experience with Azure Synapse Experience coding with Python Expert knowledge of SQL & SQL Server Experience with ML Modeling preferred Experience in C# a plus Translate business requirements into technical requirements Able to manage multiple projects and deliver high quality deliverables by deadlines Flexible and adaptable with the ability to learn quickly Strong communication skills Excellent teaming skills Benefits: OTR provides a competitive, comprehensive compensation package for our full-time employees: Eligibility for Individual and Company bonus programs Medical, Dental, Vision, Life/ AD&D Insurance, Short-Term Disability Pet Insurance, Paid Family Leave, Employee Assistance Program Fully Paid Maternity Leave 401(k) with Company Matching 12 days of Paid Time Off, 4 Sick/Mental Health days, 7 Paid Holidays, 2 Flex Holidays Weekly Catered Lunches Work from Home Flexibility Company Paid Fitness Membership Volunteer Days and Opportunities with Company-Partnered Charities Internal Inclusion programs OTR’s mission is to create exceptional value for our clients by providing industry leading financing and 
back-office solutions. Three pillars that are crucial to supporting that mission are outstanding customer service, technology that creates efficiency for ourselves and our customers, and a culture that provides the opportunity for employees to achieve greatness. OTR Solutions is an Equal Opportunity Employer"," Entry level "," Full-time "," Engineering "," Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-one-3499582546?refId=aTk8xGxdA9BBUHvODl8GpA%3D%3D&trackingId=m%2BES827a%2Fv0iAainMg308A%3D%3D&position=4&pageNum=10&trk=public_jobs_jserp-result_search-card," ONE ",https://www.linkedin.com/company/oneapp?trk=public_jobs_topcard-org-name," Washington, United States "," 1 week ago "," 41 applicants ","About ONE ONE is on a mission to help people live healthier financial lives. We’re doing this by creating simple solutions to help our customers save, spend, and grow their money – all in one place. The U.S. consumer today deserves better. Millions of Americans today can’t access credit, build savings or wealth, and are left to manage their financial lives through multiple disconnected apps. Almost a quarter of U.S. adults are unbanked or underbanked and roughly 80% of fintech users rely on multiple accounts to manage their finances. What makes us unique? We are backed by a preeminent fintech investor (Ribbit) and the world’s largest retailer (Walmart), maintain the speed and independence of a startup, and employ a strong (and growing) collection of world-class talent. There’s never been a better moment to build a business that helps people live healthier financial lives. Come build with us! The role As an Engineer, Data, your mandate is to build the data transformation pipelines & reporting infrastructure that serve our members and run our company. This role will impact ONE’s vision by writing, reviewing, and shipping code and collaborating with others across the company. 
You will work closely across the Engineering, Product, Design as well as Sales, Compliance and Customer Support team(s) working with stakeholders as highly technical, communicative, and emotionally intelligent partners. This role reports to Manager, Data Engineering. This role is responsible for: Building and maintaining ONE's data infrastructure. Developing and owning ONE's streaming data transformation pipeline. Managing and supporting reporting and analytics tools. Collaborating closely with our Data Analysts and Data Scientists. Tracking and defining metrics around performance. Additional duties as assigned by your manager. You bring Mid Career (5-10 Years) Experience in Apache Spark, Scala, Python, SQL. Experience designing and implementing low latency data pipelines using Spark structured streaming. Experience optimizing SQL queries and lake / warehouse data structures. Cost-conscious creative problem solving. Proficient in AWS cloud services and technologies. Experience building, shipping — and growing — non-trivial products & services. Passion for your craft, people, and our mission. You value quality across code, communication, and culture. A desire to keep growing your skills, and an ability to learn quickly. We hire for slope, not just y-intercept. Pay Transparency The estimated annual base salary for this position ranges from $175,000 to $190,000. Pay is generally based upon the level, complexity, responsibility, and job duties/requirements of the specific position. We then source candidates with the requisite skills, expertise, education, training, and experience. If you are selected for an interview, please feel welcome to speak to a Talent Partner about our compensation philosophy and other available benefits. Leveling Philosophy In order to thoughtfully scale the company and avoid downstream inequities, we’ve adopted a flat titling structure at ONE. 
Though we may occasionally post a role externally with a prefix such as “Senior” to reflect the external level of the position, we do not use prefixes in titles like that internally unless in a position which manages a team. Internal titles typically include your specific functional responsibility, such as engineering, product management or sales, and often include additional descriptors to ensure clarity of role and placement within our organization (e.g. “Engineer, Platform”, “Sales, Business Development” or “Manager, Talent”). Employees are paid commensurate with their experience and the internal level within ONE. What it's like working @ ONE Our teams collaborate remotely and in our work spaces in New York and Sacramento. Competitive cash Benefits effective on day one Early access to a high potential, high growth fintech Generous stock option packages in an early-stage startup Remote friendly (anywhere in the US) and office friendly - you pick the schedule Flexible time off programs - vacation, sick, paid parental leave, and paid caregiver leave 401(k) plan with match Inclusion & Belonging To build technology and products that are used and loved by people and solve real-world problems, we need to build a team with many different perspectives and experiences. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We encourage candidates from all backgrounds to apply. Applicants in need of special assistance or accommodation during the interview process or in accessing our website may contact us. 
Email talent@one.app with any questions."," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-starschema-3479481781?refId=aTk8xGxdA9BBUHvODl8GpA%3D%3D&trackingId=WdbQ3LRDqD88iCWvE5WNeA%3D%3D&position=6&pageNum=10&trk=public_jobs_jserp-result_search-card," ONE ",https://www.linkedin.com/company/oneapp?trk=public_jobs_topcard-org-name," Washington, United States "," 1 week ago "," 41 applicants "," About ONE ONE is on a mission to help people live healthier financial lives. We’re doing this by creating simple solutions to help our customers save, spend, and grow their money – all in one place. The U.S. consumer today deserves better. Millions of Americans today can’t access credit, build savings or wealth, and are left to manage their financial lives through multiple disconnected apps. Almost a quarter of U.S. adults are unbanked or underbanked and roughly 80% of fintech users rely on multiple accounts to manage their finances. What makes us unique? We are backed by a preeminent fintech investor (Ribbit) and the world’s largest retailer (Walmart), maintain the speed and independence of a startup, and employ a strong (and growing) collection of world-class talent. There’s never been a better moment to build a business that helps people live healthier financial lives. Come build with us! The role As an Engineer, Data, your mandate is to build the data transformation pipelines & reporting infrastructure that serve our members and run our company. This role will impact ONE’s vision by writing, reviewing, and shipping code and collaborating with others across the company. You will work closely across the Engineering, Product, Design as well as Sales, Compliance and Customer Support team(s) working with stakeholders as highly technical, communicative, and emotionally intelligent partners. 
This role reports to Manager, Data Engineering. This role is responsible for: Building and maintaining ONE's data infrastructure. Developing and owning ONE's streaming data transformation pipeline. Managing and supporting reporting and analytics tools. Collaborating closely with our Data Analysts and Data Scientists. Tracking and defining metrics around performance. Additional duties as assigned by your manager. You bring Mid Career (5-10 Years) Experience in Apache Spark, Scala, Python, SQL. Experience designing and implementing low latency data pipelines using Spark structured streaming. Experience optimizing SQL queries and lake / warehouse data structures. Cost-conscious creative problem solving. Proficient in AWS cloud services and technologies. Experience building, shipping — and growing — non-trivial products & services. Passion for your craft, people, and our mission. You value quality across code, communication, and culture. A desire to keep growing your skills, and an ability to learn quickly. We hire for slope, not just y-intercept. Pay Transparency The estimated annual base salary for this position ranges from $175,000 to $190,000. Pay is generally based upon the level, complexity, responsibility, and job duties/requirements of the specific position. We then source candidates with the requisite skills, expertise, education, training, and experience. If you are selected for an interview, please feel welcome to speak to a Talent Partner about our compensation philosophy and other available benefits. Leveling Philosophy In order to thoughtfully scale the company and avoid downstream inequities, we’ve adopted a flat titling structure at ONE. Though we may occasionally post a role externally with a prefix such as “Senior” to reflect the external level of the position, we do not use prefixes in titles like that internally unless in a position which manages a team. 
Internal titles typically include your specific functional responsibility, such as engineering, product management or sales, and often include additional descriptors to ensure clarity of role and placement within our organization (i.e. “Engineer, Platform”, “Sales, Business Development” or “Manager, Talent”). Employees are paid commensurate with their experience and the internal level within ONE. What it's like working @ ONE Our teams collaborate remotely and in our work spaces in New York and Sacramento. Competitive cash Benefits effective on day one Early access to a high potential, high growth fintech Generous stock option packages in an early-stage startup Remote friendly (anywhere in the US) and office friendly - you pick the schedule Flexible time off programs - vacation, sick, paid parental leave, and paid caregiver leave 401(k) plan with match Inclusion & Belonging To build technology and products that are used and loved by people and solve real-world problems, we need to build a team with many different perspectives and experiences. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We encourage candidates from all backgrounds to apply. Applicants in need of special assistance or accommodation during the interview process or in accessing our website may contact us. Email talent@one.app with any questions. 
"," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer - Data Analytics,https://www.linkedin.com/jobs/view/data-engineer-data-analytics-at-costco-wholesale-3507365886?refId=aTk8xGxdA9BBUHvODl8GpA%3D%3D&trackingId=%2F5as09sio%2FKFYif25c9ZoA%3D%3D&position=7&pageNum=10&trk=public_jobs_jserp-result_search-card," Costco Wholesale ",https://www.linkedin.com/company/costco-wholesale?trk=public_jobs_topcard-org-name," Dallas, TX "," 1 week ago "," 187 applicants ","This is an environment unlike anything in the high-tech world and the secret of Costco’s success is its culture. The value Costco puts on its employees is well documented in articles from a variety of publishers including Bloomberg and Forbes. Our employees and our members come FIRST. Costco is well known for its generosity and community service and has won many awards for its philanthropy. The company joins with its employees to take an active role in volunteering by sponsoring many opportunities to help others. In 2021, Costco contributed over $58 million to organizations such as United Way and Children's Miracle Network Hospitals. Costco IT is responsible for the technical future of Costco Wholesale, the third largest retailer in the world with wholesale operations in fourteen countries. Despite our size and explosive international expansion, we continue to provide a family, employee centric atmosphere in which our employees thrive and succeed. As proof, Costco ranks seventh in Forbes “World’s Best Employers”. The Data Engineer - Data Analytics is responsible for the end to end data pipelines to power analytics and data services. This role is focused on data engineering to build and deliver automated data pipelines from a plethora of internal and external data sources. 
The Data Engineer will partner with product owners, engineering and data platform teams to design, build, test and automate data pipelines that are relied upon across the company as the single source of truth. If you want to be a part of one of the worldwide BEST companies “to work for”, simply apply and let your career be reimagined. ROLE Develops and operationalizes data pipelines to make data available for consumption (BI, Advanced analytics, Services). Works in tandem with data architects and data/BI engineers to design data pipelines and recommends ongoing optimization of data storage, data ingestion, data quality and orchestration. Designs, develops and implements ETL/ELT processes using IICS (Informatica cloud). Uses Azure services such as Azure SQL DW (Synapse), ADLS, Azure Event Hub, Azure Data Factory to improve and speed up delivery of our data products and services. Implements big data and NoSQL solutions by developing scalable data processing platforms to drive high-value insights to the organization. Identifies, designs, and implements internal process improvements: automating manual processes, optimizing data delivery. Identifies ways to improve data reliability, efficiency and quality of data management. Communicates technical concepts to non-technical audiences both in written and verbal form. Performs peer reviews for other data engineers’ work. Required 5+ years’ experience engineering and operationalizing data pipelines with large and complex datasets. 5+ years of hands-on experience with Informatica PowerCenter. 2+ years of hands-on experience with Informatica IICS. 3+ years’ experience working with Cloud technologies such as ADLS, Azure Databricks, Spark, Azure Synapse, Cosmos DB and other big data technologies. Extensive experience working with various data sources (SQL, Oracle database, flat files (csv, delimited), Web API, XML). Advanced SQL skills required. 
Solid understanding of relational databases and business data; ability to write complex SQL queries against a variety of data sources. 5+ years’ experience with Data Modeling, ETL, and Data Warehousing. Strong understanding of database storage concepts (data lake, relational databases, NoSQL, Graph, data warehousing). Scheduling flexibility to meet the needs of the business including weekends, holidays, and 24/7 on-call responsibilities on a rotational basis. Able to work in a fast-paced agile development environment. Recommended BA/BS in Computer Science, Engineering, or equivalent software/services experience. Azure Certifications Experience implementing data integration techniques such as event / message based integration (Kafka, Azure Event Hub), ETL. Experience with Git / Azure DevOps Experience delivering data solutions through agile software development methodologies. Exposure to the retail industry. Excellent verbal and written communication skills. Experience working with SAP integration tools including BODS. Experience with UC4 Job Scheduler Required Documents Cover Letter Resume California applicants, please click here to review the Costco Applicant Privacy Notice. Pay Ranges Level 2 - $100,000 - $135,000 Level 3 - $125,000 - $165,000 Level 4 - $155,000 - $195,000 - Potential Bonus and Restricted Stock Unit (RSU) eligible level We offer a comprehensive package of benefits including paid time off, health benefits — medical/dental/vision/hearing aid/pharmacy/behavioral health/employee assistance, health care reimbursement account, dependent care assistance plan, commuter benefits, short-term disability and long-term disability insurance, AD&D insurance, life insurance, 401(k), stock purchase plan, SmartDollar financial wellness program, to eligible employees. Costco is committed to a diverse and inclusive workplace. Costco is an equal opportunity employer. 
Qualified applicants will receive consideration for employment without regard to race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or any other legally protected status. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request to IT-Recruiting@costco.com. If hired, you will be required to provide proof of authorization to work in the United States. In some cases, applicants and employees for selected positions will not be sponsored for work authorization, including, but not limited to H1-B visas."," Entry level "," Full-time "," Information Technology "," Retail " Data Engineer,United States,Data Engineer - Advanced,https://www.linkedin.com/jobs/view/data-engineer-advanced-at-federal-reserve-bank-of-new-york-3498362495?refId=aTk8xGxdA9BBUHvODl8GpA%3D%3D&trackingId=EgzEcgCr381IMZ9eBw41dA%3D%3D&position=8&pageNum=10&trk=public_jobs_jserp-result_search-card," Federal Reserve Bank of New York ",https://www.linkedin.com/company/federal-reserve-bank-of-new-york?trk=public_jobs_topcard-org-name," New York, NY "," 2 weeks ago "," 43 applicants ","Company Federal Reserve Bank of New York Working at the Federal Reserve Bank of New York positions you at the center of the financial world with a unique perspective on national and international markets and economies. You will work in an environment with a diverse group of experienced professionals to foster and support the safety, soundness, and vitality of our economic and financial systems. The Bank believes in work flexibility to balance the demands of work and life while also connecting and collaborating with our colleagues in person. Employees can expect to be in the office a couple of days per week as needed for meetings and team collaboration and should live within a commutable distance. 
What we do: The Data and Analytics chapter in the Technology Group builds data products that provide the organization with analytical capabilities in support of its mission. Reporting to the chapter lead for Data and Analytics, you will be part of a diverse, dynamic, and agile squad that is responsible for data pipelines, data integration, data quality, data visualization, self-service analytics and data catalog for the enterprise. Your role as Data Engineer: Quickly learn about the business domain and the associated data and analytics products that the team works on Execute on the cloud migration using data platforms identified Support and maintain existing data pipelines using legacy technology such as Informatica Power Center and Oracle while the cloud migration is underway Develop new data pipelines as needed with wide-ranging source and target configurations in a customer facing role Migrate on-premises data management products to AWS cloud as well as support hybrid configurations Research, troubleshoot and recommend solutions to data integration and quality problems. What we are looking for: Technologist with background in data engineering and data integration with hands on experience Experience in DW SaaS products such as Snowflake and AWS Data Services Experience with ETL concepts and RDBMS Alteryx and Tableau experience is preferred but not a must have Experienced working in Agile product teams Experience with Python in data engineering or application development Understanding of fixed income products Expertise in ETL and data integration techniques and practices Knowledge of data architecture and data management best practices Collaborative working style to support larger team goals and outcomes Experience with data catalog tools like Collibra Salary Range: $96000 - $120000 /year We believe in transparency at the NY Fed. This salary range reflects a variety of skills and experiences candidates may bring to the job. 
We pay individuals along this range based on their unique backgrounds. Whether you’re stretching into the job or are a more seasoned candidate, we aim to pay competitively for your contributions. Touchstone Behaviors set clear expectations for leading with impact at every stage of our careers and aspire to achieve in our continued growth and development. Communicate Authentically: Empathetically engage one another with direct and transparent dialogue and listening. Actively discuss viewpoints with respect and compassion in a timely and candid manner, taking into account verbal and nonverbal cues. Ask questions, learn from each other, and share information widely to move the Bank's work forward. Collaborate Inclusively: Inspire a diverse and inclusive environment that empowers others to contribute meaningfully. Intentionally bring a diverse set of people together to achieve positive business results. Drive Progress: Grow and adapt to changing priorities in the Bank. Experiment with new concepts and take appropriate risk to drive innovation. Remain curious and action oriented, navigating through ambiguity and uncertainty to drive outcomes. Develop Others: Equitably champion, mentor, and develop others to grow professionally. Demonstrate vulnerability and empathy to create a trusted environment. Take Ownership: Establish an environment of action and excellence by holding self and others accountable to execute to the highest standard. 
Benefits: Our organization offers benefits that are the best fit for you at every stage of your career: Fully paid Pension plan and 401k with Generous Match Comprehensive Insurance Plans (Medical, Dental and Vision including Flexible Spending Accounts and HSA) Subsidized Public Transportation Program Tuition Assistance Program Onsite Fitness & Wellness Center And more Please note that the position requires access to confidential supervisory information and/or FOMC information, which is limited to ""Protected Individuals"" as defined in the U.S. federal immigration law. Protected Individuals include, but are not limited to, U.S. citizens, U.S. nationals, and U.S. permanent residents who either are not yet eligible to apply for naturalization or who have applied for naturalization within the requisite timeframe. Candidates who are permanent residents may be eligible for the information access required for this position if they sign a declaration of intent to become a U.S. citizen and pursue a path to citizenship and meet other eligibility requirements. In addition, all candidates must undergo an enhanced background check, comply with all applicable information handling rules, and will be tested for all controlled substances prohibited by federal law, to include marijuana. The Federal Reserve Bank of New York is committed to a diverse workforce and to providing equal employment opportunity to all persons without regard to race, color, religion, national origin, sex, sexual orientation, gender identity, age, genetic information, disability, or military service. This is not necessarily an exhaustive list of all responsibilities, duties, performance standards or requirements, efforts, skills or working conditions associated with the job. While this is intended to be an accurate reflection of the current job, management reserves the right to revise the job or to require that other or different tasks be performed when circumstances change. 
Full Time / Part Time Full time Regular / Temporary Regular Job Exempt (Yes / No) No Job Category Information Technology Work Shift First (United States of America) The Federal Reserve Banks believe that diversity and inclusion among our employees is critical to our success as an organization, and we seek to recruit, develop and retain the most talented people from a diverse candidate pool. The Federal Reserve Banks are committed to equal employment opportunity for employees and job applicants in compliance with applicable law and to an environment where employees are valued for their differences. Privacy Notice"," Mid-Senior level "," Full-time "," Information Technology "," Capital Markets, Banking, and Financial Services " Data Engineer,United States,Data Engineer - Advanced,https://www.linkedin.com/jobs/view/python-data-engineer-at-ust-3478642218?refId=aTk8xGxdA9BBUHvODl8GpA%3D%3D&trackingId=0QvHUyBNimG0ik0lK5fo5w%3D%3D&position=9&pageNum=10&trk=public_jobs_jserp-result_search-card," Federal Reserve Bank of New York ",https://www.linkedin.com/company/federal-reserve-bank-of-new-york?trk=public_jobs_topcard-org-name," New York, NY "," 2 weeks ago "," 43 applicants "," Company Federal Reserve Bank of New York Working at the Federal Reserve Bank of New York positions you at the center of the financial world with a unique perspective on national and international markets and economies. You will work in an environment with a diverse group of experienced professionals to foster and support the safety, soundness, and vitality of our economic and financial systems. The Bank believes in work flexibility to balance the demands of work and life while also connecting and collaborating with our colleagues in person. 
Employees can expect to be in the office a couple of days per week as needed for meetings and team collaboration and should live within a commutable distance. What we do: The Data and Analytics chapter in the Technology Group builds data products that provide the organization with analytical capabilities in support of its mission. Reporting to the chapter lead for Data and Analytics, you will be part of a diverse, dynamic, and agile squad that is responsible for data pipelines, data integration, data quality, data visualization, self-service analytics and data catalog for the enterprise. Your role as Data Engineer: Quickly learn about the business domain and the associated data and analytics products that the team works on Execute on the cloud migration using data platforms identified Support and maintain existing data pipelines using legacy technology such as Informatica Power Center and Oracle while the cloud migration is underway Develop new data pipelines as needed with wide-ranging source and target configurations in a customer facing role Migrate on-premises data management products to AWS cloud as well as support hybrid configurations Research, troubleshoot and recommend solutions to data integration and quality problems. What we are looking for: Technologist with background in data engineering and data integration with hands on experience Experience in DW SaaS products such as Snowflake and AWS Data Services Experience with ETL concepts and RDBMS Alteryx and Tableau experience is preferred but not a must have Experienced working in Agile product teams Experience with Python in data engineering or application development Understanding of fixed income products Expertise in ETL and data integration techniques and practices Knowledge of data architecture and data management best practices Collaborative working style to support larger team goals and outcomes Experience with data catalog tools like Collibra Salary Range: $96000 - $120000 /year We believe in transparency at the NY 
Fed. This salary range reflects a variety of skills and experiences candidates may bring to the job. We pay individuals along this range based on their unique backgrounds. Whether you’re stretching into the job or are a more seasoned candidate, we aim to pay competitively for your contributions. Touchstone Behaviors set clear expectations for leading with impact at every stage of our careers and aspire to achieve in our continued growth and development. Communicate Authentically: Empathetically engage one another with direct and transparent dialogue and listening. Actively discuss viewpoints with respect and compassion in a timely and candid manner, taking into account verbal and nonverbal cues. Ask questions, learn from each other, and share information widely to move the Bank's work forward. Collaborate Inclusively: Inspire a diverse and inclusive environment that empowers others to contribute meaningfully. Intentionally bring a diverse set of people together to achieve positive business results. Drive Progress: Grow and adapt to changing priorities in the Bank. Experiment with new concepts and take appropriate risk to drive innovation. Remain curious and action oriented, navigating through ambiguity and uncertainty to drive outcomes. Develop Others: Equitably champion, mentor, and develop others to grow professionally. 
Demonstrate vulnerability and empathy to create a trusted environment. Take Ownership: Establish an environment of action and excellence by holding self and others accountable to execute to the highest standard. Benefits: Our organization offers benefits that are the best fit for you at every stage of your career: Fully paid Pension plan and 401k with Generous Match Comprehensive Insurance Plans (Medical, Dental and Vision including Flexible Spending Accounts and HSA) Subsidized Public Transportation Program Tuition Assistance Program Onsite Fitness & Wellness Center And more Please note that the position requires access to confidential supervisory information and/or FOMC information, which is limited to ""Protected Individuals"" as defined in the U.S. federal immigration law. Protected Individuals include, but are not limited to, U.S. citizens, U.S. nationals, and U.S. permanent residents who either are not yet eligible to apply for naturalization or who have applied for naturalization within the requisite timeframe. Candidates who are permanent residents may be eligible for the information access required for this position if they sign a declaration of intent to become a U.S. citizen and pursue a path to citizenship and meet other eligibility requirements. In addition, all candidates must undergo an enhanced background check, comply with all applicable information handling rules, and will be tested for all controlled substances prohibited by federal law, to include marijuana. The Federal Reserve Bank of New York is committed to a diverse workforce and to providing equal employment opportunity to all persons without regard to race, color, religion, national origin, sex, sexual orientation, gender identity, age, genetic information, disability, or military service. This is not necessarily an exhaustive list of all responsibilities, duties, performance standards or requirements, efforts, skills or working conditions associated with the job. 
While this is intended to be an accurate reflection of the current job, management reserves the right to revise the job or to require that other or different tasks be performed when circumstances change. Full Time / Part Time Full time Regular / Temporary Regular Job Exempt (Yes / No) No Job Category Information Technology Work Shift First (United States of America) The Federal Reserve Banks believe that diversity and inclusion among our employees is critical to our success as an organization, and we seek to recruit, develop and retain the most talented people from a diverse candidate pool. The Federal Reserve Banks are committed to equal employment opportunity for employees and job applicants in compliance with applicable law and to an environment where employees are valued for their differences. Privacy Notice "," Mid-Senior level "," Full-time "," Information Technology "," Capital Markets, Banking, and Financial Services " Data Engineer,United States,Data Engineer - Advanced,https://www.linkedin.com/jobs/view/data-engineer-at-eliassen-group-3511423862?refId=aTk8xGxdA9BBUHvODl8GpA%3D%3D&trackingId=eRv5zNwPPvF6kxn4TPqvDQ%3D%3D&position=10&pageNum=10&trk=public_jobs_jserp-result_search-card," Federal Reserve Bank of New York ",https://www.linkedin.com/company/federal-reserve-bank-of-new-york?trk=public_jobs_topcard-org-name," New York, NY "," 2 weeks ago "," 43 applicants "," Company Federal Reserve Bank of New York Working at the Federal Reserve Bank of New York positions you at the center of the financial world with a unique perspective on national and international markets and economies. You will work in an environment with a diverse group of experienced professionals to foster and support the safety, soundness, and vitality of our economic and financial systems. The Bank believes in work flexibility to balance the demands of work and life while also connecting and collaborating with our colleagues in person. 
Employees can expect to be in the office a couple of days per week as needed for meetings and team collaboration and should live within a commutable distance.What we do: The Data and Analytics chapter in the Technology Group builds data products that provide the organization with analytical capabilities in support of its mission. Reporting to the chapter lead for Data and Analytics, you will be part of a diverse, dynamic, and agile squad that is responsible for data pipelines, data integration, data quality, data visualization, self-service analytics and data catalog for the enterprise.Your role as Data Engineer:Quickly learn about the business domain and the associated data and analytics products that the team works onExecute on the cloud migration using data platforms identifiedSupport and maintain existing data pipelines using legacy technology such as Informatica Power Center and Oracle while the cloud migration is underwayDevelop new data pipelines as needed with wide-ranging source and target configurations in a customer facing roleMigrate on-premises data management products to AWS cloud as well as support hybrid configurationsResearch, troubleshoot and recommend solutions to data integration and quality problems.What we are looking for:Technologist with background in data engineering and data integration with hands on experienceExperience in DW SaaS products such as Snowflake and AWS Data ServicesExperience with ETL concepts and RDBMSAlteryx and Tableau experience is preferred but not a must haveExperienced working in Agile product teamsExperience with Python in data engineering or application development Understanding of fixed income productsExpertise in ETL and data integration techniques and practicesKnowledge of data architecture and data management best practicesCollaborative working style to support larger team goals and outcomesExperience with data catalog tools like CollibraSalary Range: $96000 - $120000 /yearWe believe in transparency at the NY 
Fed. This salary range reflects a variety of skills and experiences candidates may bring to the job. We pay individuals along this range based on their unique backgrounds. Whether you’re stretching into the job or are a more seasoned candidate, we aim to pay competitively for your contributions.Touchstone Behaviors set clear expectations for leading with impact at every stage of our careers and aspire to achieve in our continued growth and development.Communicate Authentically: Empathetically engage one another with direct and transparent dialogue and listening. Actively discuss viewpoints with respect and compassion in a timely and candid manner, taking into account verbal and nonverbal cues. Ask questions, learn from each other, and share information widely to move the Bank's work forward.Collaborate Inclusively: Inspire a diverse and inclusive environment that empowers others to contribute meaningfully. Intentionally bring a diverse set of people together to achieve positive business results.Drive Progress: Grow and adapt to changing priorities in the Bank. Experiment with new concepts and take appropriate risk to drive innovation. Remain curious and action oriented, navigating through ambiguity and uncertainty to drive outcomes.Develop Others: Equitably champion, mentor, and develop others to grow professionally. 
Demonstrate vulnerability and empathy to create a trusted environment.Take Ownership: Establish an environment of action and excellence by holding self and others accountable to execute to the highest standard.Benefits:Our organization offers benefits that are the best fit for you at every stage of your career:Fully paid Pension plan and 401k with Generous MatchComprehensive Insurance Plans (Medical, Dental and Vision including Flexible Spending Accounts and HSA)Subsidized Public Transportation ProgramTuition Assistance ProgramOnsite Fitness & Wellness CenterAnd morePlease note that the position requires access to confidential supervisory information and/or FOMC information, which is limited to ""Protected Individuals"" as defined in the U.S. federal immigration law. Protected Individuals include, but are not limited to, U.S. citizens, U.S. nationals, and U.S. permanent residents who either are not yet eligible to apply for naturalization or who have applied for naturalization within the requisite timeframe. Candidates who are permanent residents may be eligible for the information access required for this position if they sign a declaration of intent to become a U.S. citizen and pursue a path to citizenship and meet other eligibility requirements. In addition, all candidates must undergo an enhanced background check, comply with all applicable information handling rules, and will be tested for all controlled substances prohibited by federal law, to include marijuana.The Federal Reserve Bank of New York is committed to a diverse workforce and to providing equal employment opportunity to all persons without regard to race, color, religion, national origin, sex, sexual orientation, gender identity, age, genetic information, disability, or military service.This is not necessarily an exhaustive list of all responsibilities, duties, performance standards or requirements, efforts, skills or working conditions associated with the job. 
While this is intended to be an accurate reflection of the current job, management reserves the right to revise the job or to require that other or different tasks be performed when circumstances change. Full Time / Part Time: Full time. Regular / Temporary: Regular. Job Exempt (Yes / No): No. Job Category: Information Technology. Work Shift: First (United States of America). The Federal Reserve Banks believe that diversity and inclusion among our employees is critical to our success as an organization, and we seek to recruit, develop and retain the most talented people from a diverse candidate pool. The Federal Reserve Banks are committed to equal employment opportunity for employees and job applicants in compliance with applicable law and to an environment where employees are valued for their differences. Privacy Notice "," Mid-Senior level "," Full-time "," Information Technology "," Capital Markets, Banking, and Financial Services " Data Engineer,United States,Data Engineer – FLO Solutions,https://www.linkedin.com/jobs/view/data-engineer-%E2%80%93-flo-solutions-at-otg-management-3506965337?refId=aTk8xGxdA9BBUHvODl8GpA%3D%3D&trackingId=pyDuCN4BtoGacz4hCtmoCQ%3D%3D&position=11&pageNum=10&trk=public_jobs_jserp-result_search-card," OTG Management ",https://www.linkedin.com/company/otg-management?trk=public_jobs_topcard-org-name," New York City Metropolitan Area "," 2 weeks ago "," 134 applicants ","YOUR NEXT OPPORTUNITY IS NOW BOARDING: Join OTG as the Data Engineer – FLO Solutions now and drive a new type of hospitality. Explore career opportunities in a unique hospitality environment with some of the industry's best compensation and benefits, including PTO, Healthcare, and a competitive 401k match. WHAT IS OTG? OTG has revolutionized the hospitality industry by pushing the boundaries of excellence. With more than 300 in-terminal dining and retail locations across 10 airports, OTG and its 5,000+ Crewmembers serve millions of travelers each year. WHY OTG? 
By joining our team, you’ll discover endless opportunities to explore, learn and realize your greatest potential in some of the most exciting hospitality environments around. Our people drive our experiences, so we offer our crewmembers some of the best compensation and benefits in the industry. We transform airport experiences. You drive it. ROLE AND RESPONSIBILITIES Position Summary: The Data Engineer – FLO Solutions will be directly responsible for developing, integrating, and maintaining all data pipelines and data consumption streams in OTG data platforms, and will be accountable for the on-time, on-budget delivery of software to support the OTG product roadmap. We focus on utilizing our data to provide business value. This is an exciting opportunity because of the wealth of data available and the countless ways in which it can be applied. The role will own data science projects, such as a recommendation engine, from end to end. Responsibilities: Lead the in-house development of ETL/ELT data pipelines and data science projects. Create and own data products, such as a recommendation engine or predictive model. Provide key insights and analyses to senior stakeholders and executives. Work with the Data Engineering team to design and develop data models. Collaborate with business users and decision-makers on their data needs. Manage external software development partners to ensure consistent coding standards and an in-house body of knowledge. QUALIFICATIONS AND REQUIREMENTS 5+ years of relevant professional experience in building AWS big data pipelines using Apache Spark. Strong hands-on experience with programming in Python. Expertise in SQL and analytical data modeling. Hands-on experience in pipeline orchestration tools, like Apache Airflow. Experience in PostgreSQL, Redshift, S3, AWS Lambda, Kinesis, and Athena. Previous examples of working with “real-world” data and building data products and/or analyses. 
Previous work with cloud-based BI tools (such as Looker and QuickSight) is a plus. Experience with containerization, Docker, and Kubernetes is nice to have. Strong organizational, problem-solving, and communication skills (must be able to do basic technical writing). Ability to work in a rapidly changing environment and exhibit grace under pressure. Ability to work effectively across teams and disciplines within the organization. Desired Education: Bachelor of Science in Computer Engineering, Computer Information Systems, Computer Science, Engineering, or Software Engineering, or related experience. OTG Concessions Management, LLC and its subsidiaries and affiliates are proud to be an equal-opportunity workplace and employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability status, gender identity, veteran status, or any other basis protected by law."," Mid-Senior level "," Full-time "," Engineering "," Food and Beverage Services, Retail, and Hospitality " Data Engineer,United States,Associate Data Engineer - Remote,https://www.linkedin.com/jobs/view/associate-data-engineer-remote-at-american-cancer-society-3482315326?refId=aTk8xGxdA9BBUHvODl8GpA%3D%3D&trackingId=XSiP5O6TaQJL%2FITQTI%2Bweg%3D%3D&position=12&pageNum=10&trk=public_jobs_jserp-result_search-card," American Cancer Society ",https://www.linkedin.com/company/american-cancer-society?trk=public_jobs_topcard-org-name," Atlanta, GA "," 1 week ago "," 167 applicants ","Position Description This position is a remote role, open anywhere throughout the United States. Job Summary The American Cancer Society (ACS) is seeking a passionate data professional to join the fight against cancer. This position sits within the Digital Solutions’ Discovery Data Science team, where we work with a wide range of data sources and types, from research grants, Salesforce, and survey data to genomics data. 
The associate data engineer will work in tandem with our lead data engineer to continue to develop and implement a modern data pipeline for ACS. Major Responsibilities Complete data integration from a multitude of sources into our cloud infrastructure. Document the architecture and solutions for business continuity. Work with our analytics engineer and business intelligence analyst to build out best-in-class data models for data visualization. Define and execute database and data movement standards, design reviews, pipeline CI/CD processes, and data container policies to ensure high-quality data management. Collaborate closely with data governance to identify gaps in documentation and standards. Position Requirements FORMAL KNOWLEDGE Bachelor’s degree in Computer Science, Engineering, or equivalent experience. Two or more years of relevant work experience. Skills Required Qualifications: 2+ years of building a modern data pipeline, including ETL, cloud storage, reporting, and deployment. Experience with cloud services such as Azure Cloud, AWS, or Google Cloud. Expertise with SQL and YAML. Experience with workflow orchestration (Prefect.io, Airflow, etc.). 2+ years of experience with Python. Experience with Git – Azure DevOps, GitHub, GitLab, etc. Preferred Qualifications Snowflake experience, particularly data integration from multiple sources. Exposure to BI tools such as Power BI, Tableau, and Looker. Ability to understand the data needs of researchers, such as epidemiologists and cancer researchers, guiding them to the best practices for their data. Experience with healthcare data, such as claims, electronic health records, biospecimen, medical imaging, and genomic data. SPECIALIZED TRAINING OR KNOWLEDGE Proven experience in data engineering for a modern data pipeline. The estimated starting rate is $75,000-$100,000 annually. The final candidate's relevant experience/skills will be considered before an offer is extended. 
Actual starting pay will vary based on non-discriminatory factors including, but not limited to, geographic location, experience, skills, specialty, and education. The American Cancer Society has adopted a vaccination policy that requires all staff, regardless of position or work location, to be fully vaccinated against COVID-19 (except where prohibited by state law). ACS provides staff a generous paid time off policy; medical, dental, retirement benefits, wellness programs, and professional development programs to enhance staff skills. Further details on our benefits can be found on our careers site at: jobs.cancer.org/benefits. We are a proud equal opportunity employer. See our commitment to a policy of Equal Employment Opportunity to continually ensure equal opportunity to our employees and to our applicants."," Mid-Senior level "," Full-time "," Information Technology "," Non-profit Organizations and Hospitals and Health Care " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-rapport-it-services-inc-3528101410?refId=aTk8xGxdA9BBUHvODl8GpA%3D%3D&trackingId=4%2FC1AYXPdeHEhIzHDvrT%2FA%3D%3D&position=15&pageNum=10&trk=public_jobs_jserp-result_search-card," Rapport IT Services Inc ",https://www.linkedin.com/company/rapportit?trk=public_jobs_topcard-org-name," United States "," 3 weeks ago "," Be among the first 25 applicants ","Reporting to the Director of Information Technology, as a Data Engineer you will implement assigned data related stories within the Demand Sciences portfolio. You will work with senior team members, partners and data scientists to further clarify requirements and assignment priority. The Data Engineer will become a data domain expert for Demand Sciences. You Will Gather, structure, and prepare data for usability including sourcing, filtering, tagging, joining, parsing, and normalizing data sets for use in analytical models. 
Give input to and implement data model designs provided by senior team members and architects. Create and contribute to technical design documentation. Build automations to validate source-to-target data accuracy during build, testing, and support phases. Identify and resolve (if applicable) drivers of data-related issues, either proactively or based on input from end-users. Take user requirements and create SQL data transformations and views as part of end-user transactions, batch processes, or report/application usage. Implement optimizations and perform performance tuning based on recommendations from senior team members. Implement automated data test cases to ensure values are within expected ranges, complete, and within quality parameters. Create job schedules to chain together daily, weekly, monthly, or ad hoc data flows with predecessors and successors. Observe established DevOps standards and procedures. You Have Bachelor's degree from an accredited university preferred, or 5 years of experience in an IT data-related role. Ability to work within a team, taking direction from senior members. Demonstrated knowledge of business concepts and processes as they relate to the data domains supported. Experience (2+ years) in writing and debugging SQL on RDBMS platforms like Snowflake, SQL Server, or Oracle. Preferred: Experience with object-oriented scripting languages like Python to assist the data science team in debugging data-related issues. Experience operating within an Agile framework to break down, estimate, assign, and complete work preferred. 
At this time, we require applicants for this role to be legally authorized to work in the United States without requiring employer sponsorship either now or in the future."," Entry level "," Full-time "," Other "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-one-3499586065?refId=aTk8xGxdA9BBUHvODl8GpA%3D%3D&trackingId=7YOA4NMQurbkFh0mwNfAUg%3D%3D&position=18&pageNum=10&trk=public_jobs_jserp-result_search-card," ONE ",https://www.linkedin.com/company/oneapp?trk=public_jobs_topcard-org-name," Boston, MA "," 2 weeks ago "," 40 applicants ","About ONE ONE is on a mission to help people live healthier financial lives. We’re doing this by creating simple solutions to help our customers save, spend, and grow their money – all in one place. The U.S. consumer today deserves better. Millions of Americans today can’t access credit, build savings or wealth, and are left to manage their financial lives through multiple disconnected apps. Almost a quarter of U.S. adults are unbanked or underbanked and roughly 80% of fintech users rely on multiple accounts to manage their finances. What makes us unique? We are backed by a preeminent fintech investor (Ribbit) and the world’s largest retailer (Walmart), maintain the speed and independence of a startup, and employ a strong (and growing) collection of world-class talent. There’s never been a better moment to build a business that helps people live healthier financial lives. Come build with us! The role As an Engineer, Data, your mandate is to build the data transformation pipelines & reporting infrastructure that serve our members and run our company. This role will impact ONE’s vision by writing, reviewing, and shipping code and collaborating with others across the company. You will work closely across the Engineering, Product, Design as well as Sales, Compliance and Customer Support team(s) working with stakeholders as highly technical, communicative, and emotionally intelligent partners. This role reports to Manager, Data Engineering. This role is responsible for: Building and maintaining ONE's data infrastructure. 
Developing and owning ONE's streaming data transformation pipeline. Managing and supporting reporting and analytics tools. Collaborating closely with our Data Analysts and Data Scientists. Tracking and defining metrics around performance. Additional duties as assigned by your manager. You bring Mid-career (5-10 years) experience in Apache Spark, Scala, Python, and SQL. Experience designing and implementing low-latency data pipelines using Spark structured streaming. Experience optimizing SQL queries and lake/warehouse data structures. Cost-conscious creative problem solving. Proficient in AWS cloud services and technologies. Experience building, shipping — and growing — non-trivial products & services. Passion for your craft, people, and our mission. You value quality across code, communication, and culture. A desire to keep growing your skills, and an ability to learn quickly. We hire for slope, not just y-intercept. Pay Transparency The estimated annual base salary for this position ranges from $175,000 to $190,000. Pay is generally based upon the level, complexity, responsibility, and job duties/requirements of the specific position. We then source candidates with the requisite skills, expertise, education, training, and experience. If you are selected for an interview, please feel welcome to speak to a Talent Partner about our compensation philosophy and other available benefits. Leveling Philosophy In order to thoughtfully scale the company and avoid downstream inequities, we’ve adopted a flat titling structure at ONE. Though we may occasionally post a role externally with a prefix such as “Senior” to reflect the external level of the position, we do not use prefixes in titles like that internally unless in a position which manages a team. 
Internal titles typically include your specific functional responsibility, such as engineering, product management or sales, and often include additional descriptors to ensure clarity of role and placement within our organization (e.g., “Engineer, Platform”, “Sales, Business Development”, or “Manager, Talent”). Employees are paid commensurate with their experience and the internal level within ONE. What it's like working @ ONE Our teams collaborate remotely and in our work spaces in New York and Sacramento. Competitive cash Benefits effective on day one Early access to a high potential, high growth fintech Generous stock option packages in an early-stage startup Remote friendly (anywhere in the US) and office friendly - you pick the schedule Flexible time off programs - vacation, sick, paid parental leave, and paid caregiver leave 401(k) plan with match Inclusion & Belonging To build technology and products that are used and loved by people and solve real-world problems, we need to build a team with many different perspectives and experiences. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We encourage candidates from all backgrounds to apply. Applicants in need of special assistance or accommodation during the interview process or in accessing our website may contact us. Email talent@one.app with any questions."," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer- Hybrid,https://www.linkedin.com/jobs/view/data-engineer-hybrid-at-diversified-services-network-inc-3510785098?refId=aTk8xGxdA9BBUHvODl8GpA%3D%3D&trackingId=ayjIrkxi5oZms4%2BdhX%2FPyA%3D%3D&position=19&pageNum=10&trk=public_jobs_jserp-result_search-card," Diversified Services Network, Inc. 
",https://www.linkedin.com/company/diversified-services-network?trk=public_jobs_topcard-org-name," Peoria, IL "," 1 week ago "," Be among the first 25 applicants ","Data Engineer Diversified Services Network is seeking a Data Engineer to work on a hybrid basis out of our Peoria, IL location. Benefits Medical Insurance Dental Insurance Vision Insurance Life Insurance Short-term and Long-term Disability Paid Time Off Paid Holidays 401K Options Please follow the link to our website for a list of job openings in Engineering, IT, Project Management, and more! https://www.dsnworldwide.com Position Summary The PSLD Transformation Analytics group engages with various stakeholders across the organization to help solve their business problems. The individual will run the entire project end to end, so strong skills in gathering and understanding customer requirements, creating and maintaining optimal data pipeline architecture, choosing appropriate tools/techniques, and delivering actionable insights are required. Typical Day Extract large, complex data sets that meet business requirements. Work to build the on-prem/cloud infrastructure for optimal extraction, transformation, and loading of a wide variety of complex business data from on-prem/cloud databases. Identify ways to improve data reliability, efficiency, and quality. Work with internal and external stakeholders to assist with data-related technical issues and support data needs. Own the design and development of ongoing business metrics/KPIs, reports, and dashboards to drive key business decisions. Prepare data for predictive and prescriptive modeling. Education Requirements Minimum BS in information technology, computer science, mechanical engineering, applied math, statistics, data science, etc., with 5-7 years of relevant experience required. MS preferred, with 3-5 years of relevant experience required. 
Technical Skills Familiarity with databases such as Snowflake, DB2, SQL Server, Oracle (2-3 of these are required) Programming languages - SQL, Python, and SAS Experience working with large data sets, preferably several GB or millions of transactions. Visualization – Power BI, Tableau (preferred) Experience working with a platform integration tool like SnapLogic is preferred Experience working with AWS Founded in 1989, DSN provides public sector consulting, IT, engineering and project management services to Fortune 500 companies and state/federal government agencies. DSN is a certified Woman Business Enterprise (WBE) by both the Women’s Business Enterprise National Council (WBENC) and the State of Illinois, as well as a certified Disadvantaged Business Enterprise (DBE)."," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-stellent-it-3527797182?refId=aTk8xGxdA9BBUHvODl8GpA%3D%3D&trackingId=g%2BXwvY2aP6ELcPrI93YGtw%3D%3D&position=22&pageNum=10&trk=public_jobs_jserp-result_search-card," Stellent IT ",https://www.linkedin.com/company/stellent-it?trk=public_jobs_topcard-org-name," Houston, TX "," 3 weeks ago "," Be among the first 25 applicants ","Houston, Texas Long term Phone + Skype Job Description Here is the skill set and experience I am looking for: 5+ years Data Engineering experience Python experience, not just as a web app Pandas Apache Airflow Proficient in SQL as a language Algorithm experience Excellent communication This needs to be someone who is comfortable and has experience working with complex calculations and mathematical formulas. A strong statistics background would be nice."," Entry level "," Full-time "," Other "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-elsdon-consulting-ltd-3493474111?refId=uz94oW0FbASgTIEHEb9nNQ%3D%3D&trackingId=9qOFxKDE%2FDIEAethgg78ZQ%3D%3D&position=15&pageNum=6&trk=public_jobs_jserp-result_search-card," Elsdon Consulting ltd ",https://uk.linkedin.com/company/elsdon-consulting?trk=public_jobs_topcard-org-name," Georgia, United States "," 3 weeks ago "," Over 200 applicants ","Are you a Data Engineer? Do you enjoy working within the Defence sector? Would you be excited at the opportunity to provide top arms solutions to military and law enforcement communities? If so, this may be the opportunity you have been looking for... This dedicated arms manufacturer is looking for a skilled and passionate Data Engineer to help with the architecture, development and maintenance of digital commerce databases, warehouses and lakes. At this time this role is only open to US Citizens / Green Card holders and the role will be largely remote. The Data Engineer will need: Bachelor’s degree in data science, computer science; or equivalent combination of education and experience. 
DTC Ecommerce experience preferred SQL, Snowflake, Python or R required Data management including data governance experience The responsibilities of the Data Engineer will be: Scope, develop, deploy, and maintain microservices and/or APIs to collect and/or serve Digital Commerce-relevant data to Digital Delivery, Digital Marketing, IT and BI Analyst teams Scope, develop, and deploy data models for demand forecasting, customer segmentation, etc. Participate in the data governance and compliance steering committee So if you are a Data Engineer and have been looking for a new position, or this role simply caught your eye, please do apply today or contact me directly at max.morrell@elsdonconsulting.com"," Mid-Senior level "," Full-time "," Information Technology "," Airlines and Aviation " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-coinlist-3506628176?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=BK8KGOV2rjHkzpEWPYPD3g%3D%3D&position=6&pageNum=8&trk=public_jobs_jserp-result_search-card," CoinList ",https://www.linkedin.com/company/coinlist?trk=public_jobs_topcard-org-name," San Francisco, CA "," 1 month ago "," Be among the first 25 applicants ","CoinList is where the world's best crypto projects build their communities and early adopters can invest in and trade top-tier digital assets. Our mission is to accelerate the advancement of blockchain technology by finding the best emerging blockchain projects and helping them succeed. CoinList has become the global leader in new token issuance, helping blue-chip projects like Solana, Filecoin, Celo, Dapper Labs, and others raise over $1.1 billion and connect them with hundreds of thousands of new token holders. And we now support the full lifecycle of crypto investment, from token sales through token distribution, trading, and crypto-specific services such as staking and access to decentralized-finance opportunities. 
CoinList users trade and store Bitcoin, Ether, and many other popular crypto assets through CoinList.co, CoinList Pro (our full-service exchange), and mobile apps, while also getting exclusive access to the best new tokens before they list on other exchanges. Our team is growing, and we're looking for someone to help understand and present our data so we can build and grow more effectively. Who you are: You are an excellent programmer with 3-5 years of experience in data engineering You are proficient in Python You get excited about understanding different kinds of data and their impact You have experience building a data platform used by internal users You are excited about the crypto space and the future of blockchain technology You have experience working with distributed systems (e.g. Spark, Ray) and streaming systems (e.g. Kafka, Apache Beam, Flink) (nice to have) Experience with AWS cloud infrastructure using Terraform (nice to have) What you will do: Build and own part of the CoinList data platform, including setting up and integrating large parts of the data infrastructure Work with our internal users and teams to help understand their problems and how we can help them do their job more efficiently Enable our organization as a whole to make accurate, data-driven business decisions Produce quality Python code that meets our high technical bar As an early employee at CoinList, you will be a critical part of our core team and have a huge influence over the direction of the company. We will compensate you well, invest deeply in your development, and do everything we can to make sure this is the single best work experience of your life. At CoinList, we are proud to be an Equal Opportunity Employer. We celebrate diversity, value our differences, and are committed to creating an inclusive environment for all employees. Base salary range: $145k - $185k + equity + bonus. We are open to a range of backgrounds and experience levels for this role. 
"," Entry level "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-evolution-3523754743?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=yQ%2Bkv%2BWl4K2n8Q4n8bcjFQ%3D%3D&position=10&pageNum=8&trk=public_jobs_jserp-result_search-card," Evolution ",https://uk.linkedin.com/company/evolution-recruitment?trk=public_jobs_topcard-org-name," Raleigh, NC "," 14 hours ago "," 106 applicants ","Data Engineer, Raleigh, North Carolina, $130,000-$150,000 Attention all ambitious Data Engineers! Are you looking for an exciting career opportunity that offers endless growth potential and the chance to make a real impact? Keep reading... We're partnering with a well-known company in the finance space that is looking for a Data Engineer to join its growing team. You'll be reporting directly to the Senior Manager of Data and Analytics Engineering, and working on a large data modernization project, where you'll be building a cloud-first data lake and warehouse. This role will be responsible for working on various solution components like AWS, Talend, and Snowflake as part of end-to-end data engineering solutions. You'll be: Building and maintaining the necessary frameworks and technology architecture for data pipelines (ETL / ELT) and machine learning pipelines Contributing actively to processes needed to achieve operational excellence in all areas, including project management and system reliability. Designing, building, and launching new data pipelines in production in partnership with the business stakeholders. Designing and supporting new machine learning pipelines as well as dashboards and reports in production. Essential: BA/BS in Computer Science, Math, Physics, or other technical fields. 3+ years of experience in Data Engineering, BI, or Data Warehousing. 
3+ years of strong experience in SQL and Data Analysis 3+ years of strong experience in Big Data technologies such as Python, Hive, and Spark 3+ years in development of data pipelines/ETL for data ingestion, data preparation, data integration, data aggregation, feature engineering, etc. If you would like to hear more about this role, please contact Aimee Clemson at Evolution Recruitment"," Entry level "," Full-time "," Information Technology "," Software Development, Insurance, and Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-lowe-s-companies-inc-3518317253?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=ZZ5Lh59UN2FfFM4pWqrhyg%3D%3D&position=14&pageNum=8&trk=public_jobs_jserp-result_search-card," Lowe's Companies, Inc. ",https://www.linkedin.com/company/lowe%27s-home-improvement?trk=public_jobs_topcard-org-name," Charlotte, NC "," 2 days ago "," 68 applicants ","Job Summary: The primary purpose of this role is to translate business requirements and functional specifications into logical program designs and to deliver modules, stable application systems, and Data or Platform solutions. This includes developing, configuring, or modifying integrated business and/or enterprise infrastructure or application solutions within various computing environments. This role facilitates the implementation and maintenance of business and enterprise Data or Platform solutions to ensure the successful deployment of released applications. 
Key Responsibilities: Translates business requirements and specifications into logical program designs, modules, stable application systems, and data solutions with occasional guidance from senior colleagues; partners with Product Team to understand business needs and functional specifications Develops, configures, or modifies integrated business and/or enterprise application solutions within various computing environments by designing and coding component-based applications using various programming languages Conducts the implementation and maintenance of complex business and enterprise data solutions to ensure successful deployment of released applications Supports systems integration testing (SIT) and user acceptance testing (UAT), provides insight into defining test plans, and ensures quality software deployment Participates in the end-to-end product lifecycle by applying and sharing an in-depth understanding of the company and industry methodologies, policies, standards, and controls Understands Computer Science and/or Computer Engineering fundamentals; knows software architecture and readily applies this to Data or Platform solutions Automates and simplifies team development, test, and operations processes; develops conceptual, logical, and physical architectures consisting of one or more viewpoints (business, application, data, and infrastructure) required for business solution delivery Solves difficult technical problems; solutions are testable, maintainable, and efficient Supports the build, maintenance, and enhancements of data lake development; supports simple- to medium-complexity API, unstructured data parsing, and streaming data ingestion Excels in one or more domains; understands pipelines and business metrics Builds, tests, and enhances data curation pipelines integrating data from a wide variety of sources like DBMSs, file systems, and APIs for various KPIs and metrics development with high data quality and integrity Supports the development of 
features/inputs for data models in an Agile manner Works with the Data Science team to understand mathematical models and algorithms; participates in continuous improvement activities including training opportunities; continuously strives to learn analytic best practices and apply them to daily activities Handles data manipulation (extract, load, transform), data visualization, and administration of data and systems securely and in accordance with enterprise data governance standards Maintains the health and monitoring of assigned analytic capabilities for a specific data engineering solution; ensures high availability of the platform; monitors workload demands; works with Technology Infrastructure Engineering teams to maintain the data platform; serves as an SME of one or more applications Supports the build, maintenance, and enhancements of BI solutions; creates standard and ad hoc reports; uses basic report formatting like sorting, totaling, and exporting Minimum Qualifications: Bachelor's degree in Engineering, Computer Science, CIS, or related field (or equivalent work experience in a related field) 2 years of experience in Data, BI or Platform Engineering, Data Warehousing/ETL, or Software Engineering 1 year of experience working on project(s) involving the implementation of solutions applying development life cycles (SDLC) Preferred Qualifications: Master's degree in Computer Science, CIS, or related field 2 years of IT experience developing and implementing business systems within an organization 4 years of experience working with defect or incident tracking software 4 years of experience with technical documentation in a software development environment 2 years of experience working with an IT Infrastructure Library (ITIL) framework 2 years of experience leading teams, with or without direct reports Experience with application and integration middleware Experience with database technologies 2 years of experience in Hadoop or any Cloud 
Big Data components Expertise in Java/Scala/Python, SQL, Scripting, Teradata, Hadoop (Sqoop, Hive, Pig, MapReduce), Spark (Spark Streaming, MLlib), Kafka or equivalent Cloud Big Data components Expertise in MicroStrategy/Power BI/SQL, Scripting, Teradata or equivalent RDBMS, Hadoop (OLAP on Hadoop), Dashboard development, Mobile development 2 years of experience in Hadoop, NoSQL, RDBMS or any Cloud Big Data components, Teradata, MicroStrategy Expertise in Python, SQL, Scripting, Teradata, Hadoop utilities like Sqoop, Hive, Pig, MapReduce, Spark, Ambari, Ranger, Kafka, or equivalent Cloud Big Data components About Lowe’s: Lowe’s Companies, Inc. (NYSE: LOW) is a FORTUNE® 50 home improvement company serving approximately 19 million customer transactions a week in the United States and Canada. With fiscal year 2021 sales of over $96 billion, Lowe’s and its related businesses operate or service nearly 2,200 home improvement and hardware stores and employ over 300,000 associates. Based in Mooresville, N.C., Lowe’s supports the communities it serves through programs focused on creating safe, affordable housing and helping to develop the next generation of skilled trade experts. For more information, visit Lowes.com. EEO Statement Lowe’s is an equal opportunity employer and administers all personnel practices without regard to race, color, religious creed, sex, gender, age, ancestry, national origin, mental or physical disability or medical condition, sexual orientation, gender identity or expression, marital status, military or veteran status, genetic information, or any other category protected under federal, state, or local law. Pay Range for CA, CO, NJ, NY, WA: $55,600.00 - $140,000.00 annually Starting rate of pay may vary based on factors including, but not limited to, position offered, location, education, training, and/or experience. For information regarding our benefit programs and eligibility, please visit https://talent.lowes.com/us/en/benefits. 
"," Entry level "," Full-time "," Information Technology and Engineering "," Retail " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-stem-it-3509830986?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=KLklYt1ZyH1WljrYRJNj%2Bg%3D%3D&position=15&pageNum=8&trk=public_jobs_jserp-result_search-card," Stem IT ",https://www.linkedin.com/company/stemitstaffing?trk=public_jobs_topcard-org-name," Arlington, VA "," 1 week ago "," 105 applicants ","The R&D division of a prominent media conglomerate is hiring a Data Engineer for its Arlington-based operation (hybrid). This person will join an exclusive 12-person group of software engineers, data scientists, and analysts who are collaborating to maximize large amounts of worldwide data. This position will feature: data ingestion, data wrangling, building cutting-edge web applications, leading client application and data demos, as well as using modern cloud-based data warehouse tools (Redshift, Snowflake). 
Must Haves: - Bachelor's degree in a related field - Proficient with Python - Background with SQL - Experience with ETL and data wrangling tools - Active Secret clearance or above Great to have: - Master's degree/PhD in a related field - Experience with NoSQL databases - Experience with AWS - Experience with Snowflake and Redshift - Experience with Kafka, NiFi, and/or Spark - Background with Elasticsearch and/or Lucene - Containerization experience with Docker and/or Kubernetes Package: - Full-time direct hire with amazing benefits - Education/tuition reimbursement plans - Plethora of healthcare options - Employee retirement matching plans - Yearly bonus plans - Many more"," Mid-Senior level "," Full-time "," Engineering, Information Technology, and Science "," IT Services and IT Consulting and Computer and Network Security " Data Engineer,United States,Data Engineer - All Levels,https://www.linkedin.com/jobs/view/data-engineer-all-levels-at-fedex-dataworks-3509329010?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=W0KDy49j0D80CdFmqFA9FA%3D%3D&position=18&pageNum=8&trk=public_jobs_jserp-result_search-card," FedEx Dataworks ",https://www.linkedin.com/company/fedex-dataworks?trk=public_jobs_topcard-org-name," United States "," 2 weeks ago "," Over 200 applicants ","Job Description Duties for this role include, but are not limited to: supporting the design, building, testing, and maintenance of data pipelines at big data scale. Assists with updating data from multiple data sources. Works on batch processing of collected data and matches its format to the stored data, making sure that the data is ready to be processed and analyzed. Assists with keeping the ecosystem and the pipeline optimized and efficient, troubleshooting standard performance and data-related problems, and providing L3 support. Implements parsers, validators, transformers and correlators to reformat, update and enhance the data. Provides recommendations to highly complex problems. 
Providing guidance to those in less senior positions. Additional Job Description: Data Engineers play a pivotal role within Dataworks, focused on creating and driving engineering innovation and facilitating the delivery of key business initiatives. Acting as a “universal translator” between IT, business, software engineers and data scientists, data engineers collaborate across multi-disciplinary teams to deliver value. Data Engineers will work on those aspects of the Dataworks platform that govern the ingestion, transformation, and pipelining of data assets, both to end users within FedEx and into data products and services that may be externally facing. Day-to-day, they will be deeply involved in code reviews and large-scale deployments. Essential Job Duties & Responsibilities Understanding in depth both the business and technical problems Dataworks aims to solve Building tools, platforms and pipelines to enable teams to clearly and cleanly analyze data, build models and drive decisions Scaling up from “laptop-scale” to “cluster scale” problems, in terms of both infrastructure and problem structure and technique Collaborating across teams to drive the generation of data driven operational insights that translate to high value optimized solutions. Delivering tangible value very rapidly, collaborating with diverse teams of varying backgrounds and disciplines Codifying best practices for future reuse in the form of accessible, reusable patterns, templates, and code bases Interacting with senior technologists from the broader enterprise and outside of FedEx (partner ecosystems and customers) to create synergies and ensure smooth deployments to downstream operational systems Skill/Knowledge Considered a Plus Technical background in computer science, software engineering, database systems, and distributed systems. 
Keen understanding of the transportation and logistics domain with the ability to identify high-opportunity areas and design approaches and solutions that generate and capture value. Development experience in and familiarity with Microsoft Azure. Knowledge of and demonstrated experience in building CI/CD pipelines. Developing and operationalizing capabilities and solutions including under near real-time high-volume streaming conditions. Hands-on development skills with the ability to work at the code level and help debug hard-to-resolve issues. Demonstrated ability to deliver technical projects, often working under tight time constraints to deliver value. An ‘engineering’ mindset, willing to make rapid, pragmatic decisions to improve performance and accelerate progress. Comfort with working with distributed teams on code-based deliverables, using version control systems and code reviews. DevOps: must be knowledgeable in coding, scripting, databases, YAML, JSON, and IaC. Demonstrated expertise working with some of the following common languages and tools: Spark (Scala and PySpark), HDFS, Kafka and other high-volume data tools. SQL and NoSQL storage tools, such as MySQL, Postgres, Cassandra, MongoDB and Elasticsearch. Pandas, Scikit-Learn, Matplotlib, TensorFlow, Jupyter and other Python data tools. Minimum Qualifications: Data Engineer II: Bachelor's Degree in Computer Science, Information Systems, a related quantitative field such as Engineering or Mathematics or equivalent formal training or work experience. Two (2) years equivalent work experience in measurement and analysis, quantitative business problem solving, simulation development and/or predictive analytics. Strong knowledge in data engineering and machine learning frameworks including design, development and implementation of highly complex systems and data pipelines. 
Strong knowledge in Information Systems including design, development and implementation of large batch or online transaction-based systems. Strong understanding of the transportation industry, competitors, and evolving technologies. Experience as a member of multi-functional project teams. Strong oral and written communication skills. A related advanced degree may offset the related experience requirements. Sponsorship is not available for Data Engineer II role. Data Engineer III: Bachelor’s Degree in Information Systems, Computer Science or a quantitative discipline such as Mathematics or Engineering and/or equivalent formal training or work experience. Five (5) years equivalent work experience in measurement and analysis, quantitative business problem solving, simulation development and/or predictive analytics. Extensive knowledge in data engineering and machine learning frameworks including design, development and implementation of highly complex systems and data pipelines. Extensive knowledge in Information Systems including design, development and implementation of large batch or online transaction-based systems. Strong understanding of the transportation industry, competitors, and evolving technologies. Experience providing leadership in a general planning or consulting setting. Experience as a senior member of multi-functional project teams. Strong oral and written communication skills. A related advanced degree may offset the related experience requirements. Data Engineer Lead: Bachelor’s Degree in Information Systems, Computer Science, or a quantitative discipline such as Mathematics or Engineering and/or equivalent formal training or work experience. Seven (7) years equivalent work experience in measurement and analysis, quantitative business problem solving, simulation development and/or predictive analytics. 
Extensive knowledge in data engineering and machine learning frameworks including design, development and implementation of highly complex systems and data pipelines. Extensive knowledge in Information Systems including design, development and implementation of large batch or online transaction-based systems. Strong understanding of the transportation industry, competitors, and evolving technologies. Experience providing leadership in a general planning or consulting setting. Experience as a leader or a senior member of multi-functional project teams. Strong oral and written communication skills. A related advanced degree may offset the related experience requirements. Domicile / Relocation Information: This position can be domiciled anywhere in the United States. The ability to work remotely within the United States may be available based on business need. Application Criteria: Upload current copy of Resume (Microsoft Word or PDF format only) and answer job screening questionnaire. Additional Information: Colorado Residents Only – Compensation: Monthly Salary: $6,201.00 - $14,777.00. The estimate displayed represents the typical salary range or starting rate of candidates hired in Colorado. Factors that may be used to determine your actual salary may include your specific skills, your work location, how many years of experience you have, and comparison to other employees already in this role. This information is provided to applicants in accordance with the Colorado Equal Pay for Equal Work Act. Born out of FedEx, a pioneer that ships nearly 20 million packages a day and manages endless threads of information, FedEx Dataworks is an organization rooted in connecting the physical and digital sides of our network to meet today's needs and address tomorrow's challenges. 
We are creating opportunities for FedEx, our customers, and the world at large by: Exploring and harnessing data to define and solve true problems; Removing barriers between data sets to create new avenues of insight; Building and iterating on solutions that generate value; Acting as a change agent to advance curiosity and performance. At FedEx Dataworks, we are making supply chains work smarter for everyone. Employee Benefits: medical, dental, and vision insurance; paid Life and AD&D insurance; tuition reimbursement; paid sick leave; paid parental leave, paid vacation, paid military leave, and additional paid time off; geographic pay ranges; 401k with Company match and incentive bonus potential; sales Incentive compensation for selling roles. Dataworks does not discriminate against qualified individuals with disabilities in regard to job application procedures, hiring, and other terms and conditions of employment. Further, Dataworks is prepared to make reasonable accommodations for the known physical or mental limitations of an otherwise qualified applicant or employee to enable the applicant or employee to be considered for the desired position, to perform the essential functions of the position in question, or to enjoy equal benefits and privileges of employment as are enjoyed by other similarly situated employees without disabilities, unless the accommodation will impose an undue hardship. 
If a reasonable accommodation is needed, please contact DataworksTalentAcquisition@corp.ds.fedex.com."," Not Applicable "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Finance Data Engineer,https://www.linkedin.com/jobs/view/finance-data-engineer-at-roblox-3510923487?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=NUQELTXW9GHxaozrMFe3rQ%3D%3D&position=19&pageNum=8&trk=public_jobs_jserp-result_search-card," Roblox ",https://www.linkedin.com/company/roblox?trk=public_jobs_topcard-org-name," San Mateo, CA "," 1 week ago "," 86 applicants ","Every day, tens of millions of people come to Roblox to explore, create, play, learn, and connect with friends in 3D immersive digital experiences– all created by our global community of developers and creators. At Roblox, we’re building the tools and platform that empower our community to bring any experience that they can imagine to life. Our vision is to reimagine the way people come together, from anywhere in the world, and on any device. We’re on a mission to connect a billion people with optimism and civility, and looking for amazing talent to help us get there. A career at Roblox means you’ll be working to shape the future of human interaction, solving unique technical challenges at scale, and helping to create safer, more civil shared experiences for everyone. As the company continues to grow so do the needs of the Finance Data Engineering team. We are working on designing a platform that is robust, scalable and SOX compliant. We are looking for an experienced data engineer who is open to learning, collaborating and will help build and automate applications and solutions at Roblox. You Will Build and maintain robust data pipelines for finance use cases, and ensure high quality data Work with structured, unstructured and real-time data from API sources, cloud or database. 
Design and implement internal process improvements such as automating manual processes, optimizing data delivery, etc. Collaborate with business and engineering teams to ensure reliable, scalable, robust solutions. You Have 5+ years of industry experience in data modeling, data analysis, building data pipelines and data visualization. Demonstrated experience working with relational database technologies like Amazon Redshift, MS-SQL, etc. Strong ad-hoc analysis skills utilizing SQL on relational databases or Hadoop ecosystem processing engines. Developed ETL utilizing a programmatic interface or framework, preferably in Python, Java, or Scala. Experience working with large datasets, automation and data visualization (Tableau or Superset). For roles that are based at our headquarters in San Mateo, CA: The starting base pay for this position is as shown below. The actual base pay is dependent upon a variety of job-related factors such as professional background, training, work experience, location, business needs and market demand. Therefore, in some circumstances, the actual salary could fall outside of this expected range. This pay range is subject to change and may be modified in the future. All full-time employees are also eligible for equity compensation and for benefits. 
Annual Salary Range $200,740—$253,430 USD You’ll Love Industry-leading compensation package Excellent medical, dental, and vision coverage A rewarding 401k program Flexible vacation policy Roflex - Flexible and supportive work policy Roblox Admin badge for your avatar At Roblox HQ: Free catered lunches five times a week and several fully stocked kitchens with unlimited snacks Onsite fitness center and fitness program credit Annual CalTrain Go Pass Roblox provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training."," Entry level "," Full-time "," Information Technology "," Software Development " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-one-3499587063?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=ISk9FjVFCR%2B%2BEb2VRaFIcA%3D%3D&position=8&pageNum=9&trk=public_jobs_jserp-result_search-card," ONE ",https://www.linkedin.com/company/oneapp?trk=public_jobs_topcard-org-name," California, United States "," 2 weeks ago "," 59 applicants ","About ONE ONE is on a mission to help people live healthier financial lives. We’re doing this by creating simple solutions to help our customers save, spend, and grow their money – all in one place. The U.S. consumer today deserves better. Millions of Americans today can’t access credit, build savings or wealth, and are left to manage their financial lives through multiple disconnected apps. Almost a quarter of U.S. 
adults are unbanked or underbanked and roughly 80% of fintech users rely on multiple accounts to manage their finances. What makes us unique? We are backed by a preeminent fintech investor (Ribbit) and the world’s largest retailer (Walmart), maintain the speed and independence of a startup, and employ a strong (and growing) collection of world-class talent. There’s never been a better moment to build a business that helps people live healthier financial lives. Come build with us! The role As an Engineer, Data, your mandate is to build the data transformation pipelines & reporting infrastructure that serve our members and run our company. This role will impact ONE’s vision by writing, reviewing, and shipping code and collaborating with others across the company. You will work closely with the Engineering, Product, and Design teams, as well as the Sales, Compliance, and Customer Support teams, working with stakeholders as highly technical, communicative, and emotionally intelligent partners. This role reports to Manager, Data Engineering. This role is responsible for: Building and maintaining ONE's data infrastructure. Developing and owning ONE's streaming data transformation pipeline. Managing and supporting reporting and analytics tools. Collaborating closely with our Data Analysts and Data Scientists. Tracking and defining metrics around performance. Additional duties as assigned by your manager. You bring Mid Career (5-10 Years) Experience in Apache Spark, Scala, Python, SQL. Experience designing and implementing low-latency data pipelines using Spark structured streaming. Experience optimizing SQL queries and lake / warehouse data structures. Cost-conscious creative problem solving. Proficient in AWS cloud services and technologies. Experience building, shipping — and growing — non-trivial products & services. Passion for your craft, people, and our mission. You value quality across code, communication, and culture. 
A desire to keep growing your skills, and an ability to learn quickly. We hire for slope, not just y-intercept. Pay Transparency The estimated annual base salary for this position ranges from $175,000 to $190,000. Pay is generally based upon the level, complexity, responsibility, and job duties/requirements of the specific position. We then source candidates with the requisite skills, expertise, education, training, and experience. If you are selected for an interview, please feel welcome to speak to a Talent Partner about our compensation philosophy and other available benefits. Leveling Philosophy In order to thoughtfully scale the company and avoid downstream inequities, we’ve adopted a flat titling structure at ONE. Though we may occasionally post a role externally with a prefix such as “Senior” to reflect the external level of the position, we do not use prefixes in titles like that internally unless in a position which manages a team. Internal titles typically include your specific functional responsibility, such as engineering, product management or sales, and often include additional descriptors to ensure clarity of role and placement within our organization (i.e. “Engineer, Platform”, “Sales, Business Development” or “Manager, Talent”). Employees are paid commensurate with their experience and the internal level within ONE. What it's like working @ ONE Our teams collaborate remotely and in our work spaces in New York and Sacramento. 
Competitive cash Benefits effective on day one Early access to a high potential, high growth fintech Generous stock option packages in an early-stage startup Remote friendly (anywhere in the US) and office friendly - you pick the schedule Flexible time off programs - vacation, sick, paid parental leave, and paid caregiver leave 401(k) plan with match Inclusion & Belonging To build technology and products that are used and loved by people and solve real-world problems, we need to build a team with many different perspectives and experiences. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We encourage candidates from all backgrounds to apply. Applicants in need of special assistance or accommodation during the interview process or in accessing our website may contact us. Email talent@one.app with any questions."," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,"Data Engineer, Analytics",https://www.linkedin.com/jobs/view/data-engineer-analytics-at-meta-3503786346?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=2lgn0%2FskEj6EikbbZSRu6w%3D%3D&position=9&pageNum=9&trk=public_jobs_jserp-result_search-card," Meta ",https://www.linkedin.com/company/meta?trk=public_jobs_topcard-org-name," Durham, NC "," 1 month ago "," 70 applicants ","Every month, billions of people leverage Meta products to connect with friends and loved ones from across the world. On the Data Engineering Team, our mission is to support these products both internally and externally by delivering the best data foundation that drives impact through informed decision making. 
As a highly collaborative organization, our data engineers work cross-functionally with software engineering, data science, and product management to optimize growth, strategy, and experience for our 3 billion plus users, as well as our internal employee community. In this role, you will see a direct correlation between your work, company growth, and user satisfaction. Beyond this, you will work with some of the brightest minds in the industry, and you'll have a unique opportunity to solve some of the most interesting data challenges with efficiency and integrity, at a scale few companies can match. As we continue to expand and create, we have a lot of exciting work ahead of us! Data Engineer, Analytics Responsibilities: Conceptualize and own the data architecture for multiple large-scale projects, while evaluating design and operational cost-benefit tradeoffs within systems Create and contribute to frameworks that improve the efficacy of logging data, while working with data infrastructure to triage issues and resolve Collaborate with engineers, product managers, and data scientists to understand data needs, representing key data insights in a meaningful way Define and manage SLA for all data sets in allocated areas of ownership Determine and implement the security model based on privacy requirements, confirm safeguards are followed, address data quality issues, and evolve governance processes within allocated areas of ownership Design, build, and launch collections of sophisticated data models and visualizations that support multiple use cases across different products or domains Solve our most challenging data integration problems, utilizing optimal ETL patterns, frameworks, query techniques, sourcing from structured and unstructured data sources Assist in owning existing processes running in production, optimizing complex code through advanced algorithmic concepts Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data 
artifacts Influence product and cross-functional teams to identify data opportunities to drive impact Mentor team members by giving/receiving actionable feedback Minimum Qualifications: Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience. 4+ years of work experience in data engineering (a minimum of 2+ years with a Ph.D) Experience with SQL, ETL, data modeling, and at least one programming language (e.g., Python, C++, C#, Scala, etc.) Preferred Qualifications: Master's or Ph.D degree in a STEM field Experience with one or more of the following: data processing automation, data quality, data warehousing, data governance, business intelligence, data visualization, data privacy Experience working with terabyte to petabyte scale data Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics. You may view our Equal Employment Opportunity notice here. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. We may use your information to maintain the safety and security of Meta, its employees, and others as required or permitted by law. You may view Meta's Pay Transparency Policy, Equal Employment Opportunity is the Law notice, and Notice to Applicants for Employment and Employees by clicking on their corresponding links. 
Additionally, Meta participates in the E-Verify program in certain locations, as required by law"," Not Applicable "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,"Data Engineer, Analytics",https://www.linkedin.com/jobs/view/data-engineer-at-brooksource-3510607974?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=%2BasiNJw0KNokKTyGD%2FiTng%3D%3D&position=10&pageNum=9&trk=public_jobs_jserp-result_search-card," Meta ",https://www.linkedin.com/company/meta?trk=public_jobs_topcard-org-name," Durham, NC "," 1 month ago "," 70 applicants "," Every month, billions of people leverage Meta products to connect with friends and loved ones from across the world. On the Data Engineering Team, our mission is to support these products both internally and externally by delivering the best data foundation that drives impact through informed decision making. As a highly collaborative organization, our data engineers work cross-functionally with software engineering, data science, and product management to optimize growth, strategy, and experience for our 3 billion plus users, as well as our internal employee community. In this role, you will see a direct correlation between your work, company growth, and user satisfaction. Beyond this, you will work with some of the brightest minds in the industry, and you'll have a unique opportunity to solve some of the most interesting data challenges with efficiency and integrity, at a scale few companies can match. 
As we continue to expand and create, we have a lot of exciting work ahead of us! Data Engineer, Analytics Responsibilities: Conceptualize and own the data architecture for multiple large-scale projects, while evaluating design and operational cost-benefit tradeoffs within systems Create and contribute to frameworks that improve the efficacy of logging data, while working with data infrastructure to triage issues and resolve Collaborate with engineers, product managers, and data scientists to understand data needs, representing key data insights in a meaningful way Define and manage SLA for all data sets in allocated areas of ownership Determine and implement the security model based on privacy requirements, confirm safeguards are followed, address data quality issues, and evolve governance processes within allocated areas of ownership Design, build, and launch collections of sophisticated data models and visualizations that support multiple use cases across different products or domains Solve our most challenging data integration problems, utilizing optimal ETL patterns, frameworks, query techniques, sourcing from structured and unstructured data sources Assist in owning existing processes running in production, optimizing complex code through advanced algorithmic concepts Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts Influence product and cross-functional teams to identify data opportunities to drive impact Mentor team members by giving/receiving actionable feedback Minimum Qualifications: Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience. 4+ years of work experience in data engineering (a minimum of 2+ years with a Ph.D) Experience with SQL, ETL, data modeling, and at least one programming language (e.g., Python, C++, C#, Scala, etc.) Preferred Qualifications: Master's or Ph.D degree in a STEM field Experience with one or more of the following: data 
processing automation, data quality, data warehousing, data governance, business intelligence, data visualization, data privacy Experience working with terabyte to petabyte scale data Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics. You may view our Equal Employment Opportunity notice here. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. We may use your information to maintain the safety and security of Meta, its employees, and others as required or permitted by law. You may view Meta's Pay Transparency Policy, Equal Employment Opportunity is the Law notice, and Notice to Applicants for Employment and Employees by clicking on their corresponding links. 
Additionally, Meta participates in the E-Verify program in certain locations, as required by law "," Not Applicable "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer for EOIR Programs,https://www.linkedin.com/jobs/view/data-engineer-for-eoir-programs-at-acacia-center-for-justice-3505391042?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=m5to%2B4WhybstH1lpbU8PJQ%3D%3D&position=17&pageNum=9&trk=public_jobs_jserp-result_search-card," Acacia Center for Justice ",https://www.linkedin.com/company/acacia-center-for-justice?trk=public_jobs_topcard-org-name," Washington, DC "," 2 weeks ago "," Be among the first 25 applicants ","About Acacia Center for Justice: The objective of the Acacia Center for Justice (“Acacia”) is to expand on Vera’s work over the past twenty years in providing legal support and representation to immigrants facing deportation through the development, coordination, and management of national networks of legal services providers serving immigrants across the country. Acacia’s goals are twofold: to support immigrant legal services and defense networks to provide exceptional legal services to immigrants and to advocate for the expansion of these programs and the infrastructure critical to guaranteeing immigrants access to justice, fairness, and freedom. Acacia will focus the collective power of both Vera and CAIR on delivering accountable, independent, zealous, and person-centered legal services and representation to protect the rights of all immigrants at risk of deportation. Job Summary: The Data Engineer will be a member of Acacia’s Research Evaluation and Data Analytics (REDA) team. They will provide technical and analytical support to the Director of Data Systems and Analytics and the Director of Research and Evaluation for Acacia’s evolving reporting and research agenda. 
To achieve this, the Data Engineer will be responsible for building out the data infrastructure for the suite of six immigration legal orientation and representation programs that Acacia administers in partnership with the Legal Access Program of the Executive Office of Immigration Review (EOIR). This work involves managing and enhancing data inputs and architecture for EOIR program databases, as well as the AWS analytic environment that houses the databases, and helping to streamline and improve the ingestion and transformation of data from both internal and external sources. The Data Engineer will also work to develop solutions that allow Acacia staff and subcontractors to securely access, manipulate, and transmit data in support of program delivery, administration, mandatory reporting, and research and evaluation work. The Data Engineer will also collaborate with the Managing Director of IT on data security solutions and work with the external vendors responsible for developing and maintaining our database. Primary Duties/Responsibilities: Create and support systems and processes for collecting, compiling, manipulating, and analyzing data to support Acacia’s program and research, which includes: Working with the Acacia Research Evaluation and Data Analytics (REDA) team and program management staff to identify and solve data ingestion, migration, management, and integration challenges. Designing data models and setting up data environments to support reporting and analysis. Building and testing data ingestion, migration, and ETL/ELT processes. Automating processes and scheduling jobs within the data environment. Writing documentation of systems and processes for collaboration within REDA and with program management staff. Partner with REDA team members to provide coding support, conduct code reviews, and apply software engineering best practices for data security. Partner with REDA team members to generate mandatory and ad hoc reports for EOIR program work. 
Provide technical support for EOIR program databases (including data quality management, working with IT subcontractors to improve the system logic of database functions, and assisting with database trainings and database technical support for end-users). Ensure data ecosystems are security compliant and properly integrated with Acacia’s IT systems where applicable. Work closely with IT to ensure compliance and integration of systems. Required Skills, Knowledge, Abilities: Bachelor’s Degree or equivalent and/or certification in data engineering, analytics, data science or another relevant field (machine learning, IT systems, etc.) 2+ years of full-time experience in a data engineering capacity. 2+ years of data analytics experience (including data cleaning and framing, generating reports, data visualizations, writing code, and code review). Experience collaborating directly with data scientists and data analysts who develop analyses in any combination of R, Python, SQL, Tableau, Stata, etc. preferred. Working knowledge of the AWS data product ecosystem. Prior experience designing and developing data models, building out and testing ETL data pipelines, and automating scheduled workflows using technologies including GitHub Actions, SQL & Python. Fluency in collaborating with Git & GitHub, with dedication to using these tools to conduct peer code reviews and uphold coding standards. High comfort level with ingesting messy source datasets that are prone to manual data entry errors and integrating these into a live database. Ability to build repeatable and well-documented processes and tools that can be used by other research & analytics team members, regardless of the languages they use to perform their analyses. Excellent oral and written communication skills, including ability to present and teach the use of data infrastructure to a range of audiences in a variety of formats, and work effectively on a large team to advance shared priorities. 
Strong social and emotional awareness with your team and external partners. The following security checks are required for this position: a National Crime Information Center (NCIC) check, and an Electronic Questionnaire for Investigations Processing (Tier 1, e-QIP) security clearance. Preferred Skills, Knowledge, Abilities: Experience with Linux and bash scripting. Experience building dashboards with tools such as Tableau, Power BI, Looker, etc. Experience with automating bespoke tasks (e.g. basic web scraping, using 3rd party APIs). Familiarity with government security compliance standards. Graduate-level education and/or certification in data engineering, analytics, data science or another relevant field (machine learning, IT systems, etc.) Compensation and Benefits: Acacia has established an internal compensation philosophy that centers equity and pay transparency. The salary for this position is set at $96,000. The salary listed is just one component of Acacia’s total compensation package for employees. Supporting Acacia staff—both personally and professionally—is our priority. Medical/Dental/Vision - Some plans at $0 cost to the employee Employee Assistance Program 20 days per year of vacation time 12 days per year of sick time 5 personal days 4 organization-wide Wellness Days 11 observed holidays, including the last week of December. $2000 Professional Development Stipend Home office set-up stipend Internet Stipend 401k with 5% employer contribution, no employee participation required. Student loan repayment assistance. Gym Reimbursement People of color and those who have been impacted by the criminal justice system are strongly urged to apply. 
To Apply: Please upload a resume and cover letter at the link provided or email hiring@acaciajustice.org with Subject: ATTN: Human Resources / [Job Title], Acacia Center for Justice Equal Opportunity Employment: Acacia is an equal opportunity employer and seeks to recruit persons of diverse backgrounds and support their retention and advancement within the organization. We are committed to fostering a workplace culture inclusive of people with respect to their race, ethnicity, national origin, gender/gender identity, sexual orientation, socio-economic status, veteran status, marital status, age, disabilities, political affiliation, religious beliefs, or any other characteristic. Our commitment to justice and diversity also means providing a work environment that is welcoming, respectful, and engaging. As a federal contractor, and in order to ensure a healthy and safe work environment, Acacia Center for Justice is requiring all employees to be fully vaccinated and provide proof of their COVID-19 vaccine before their start date. Employees who cannot receive the vaccine because of a disability/medical contraindication or sincerely-held religious belief may request an accommodation (e.g., an exemption) to this requirement. This job description is not meant to be an all-inclusive list of duties, responsibilities and requirements but constitutes a general definition of the position's scope and function within our organization. Acacia Center for Justice is an equal opportunity/affirmative action employer. All qualified applicants will be considered for employment without unlawful discrimination based on race, color, creed, national origin, sex, age, disability, marital status, sexual orientation, military status, prior record of arrest or conviction, citizenship status, current employment status, or caregiver status. 
Powered by JazzHR IzuMBpe7XX"," Mid-Senior level "," Full-time "," Information Technology "," Internet Publishing " Data Engineer,United States,Data Engineer for EOIR Programs,https://www.linkedin.com/jobs/view/data-engineer-at-eliassen-group-3511763941?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=bPKAM3L3y8xUncfN4xx9Aw%3D%3D&position=18&pageNum=9&trk=public_jobs_jserp-result_search-card," Acacia Center for Justice ",https://www.linkedin.com/company/acacia-center-for-justice?trk=public_jobs_topcard-org-name," Washington, DC "," 2 weeks ago "," Be among the first 25 applicants "," About Acacia Center for Justice: The objective of the Acacia Center for Justice (“Acacia”) is to expand on Vera’s work over the past twenty years in providing legal support and representation to immigrants facing deportation through the development, coordination, and management of national networks of legal services providers serving immigrants across the country. Acacia’s goals are twofold: to support immigrant legal services and defense networks to provide exceptional legal services to immigrants and to advocate for the expansion of these programs and the infrastructure critical to guaranteeing immigrants access to justice, fairness, and freedom. Acacia will focus the collective power of both Vera and CAIR on delivering accountable, independent, zealous, and person-centered legal services and representation to protect the rights of all immigrants at risk of deportation. Job Summary: The Data Engineer will be a member of Acacia’s Research Evaluation and Data Analytics (REDA) team. They will provide technical and analytical support to the Director of Data Systems and Analytics and the Director of Research and Evaluation for Acacia’s evolving reporting and research agenda. 
To achieve this, the Data Engineer will be responsible for building out the data infrastructure for the suite of six immigration legal orientation and representation programs that Acacia administers in partnership with the Legal Access Program of the Executive Office of Immigration Review (EOIR). This work involves managing and enhancing data inputs and architecture for EOIR program databases, as well as the AWS analytic environment that houses the databases, and helping to streamline and improve the ingestion and transformation of data from both internal and external sources. The Data Engineer will also work to develop solutions that allow Acacia staff and subcontractors to securely access, manipulate, and transmit data in support of program delivery, administration, mandatory reporting, and research and evaluation work. The Data Engineer will also collaborate with the Managing Director of IT on data security solutions and work with the external vendors responsible for developing and maintaining our database. Primary Duties/Responsibilities: Create and support systems and processes for collecting, compiling, manipulating, and analyzing data to support Acacia’s program and research, which includes: Working with the Acacia Research Evaluation and Data Analytics (REDA) team and program management staff to identify and solve data ingestion, migration, management, and integration challenges. Designing data models and setting up data environments to support reporting and analysis. Building and testing data ingestion, migration, and ETL/ELT processes. Automating processes and scheduling jobs within the data environment. Writing documentation of systems and processes for collaboration within REDA and with program management staff. Partner with REDA team members to provide coding support, conduct code reviews, and apply software engineering best practices for data security. Partner with REDA team members to generate mandatory and ad hoc reports for EOIR program work. Provide 
technical support for EOIR program databases (including data quality management, working with IT subcontractors to improve the system logic of database functions, and assisting with database trainings and database technical support for end-users). Ensure data ecosystems are security compliant and properly integrated with Acacia’s IT systems where applicable. Work closely with IT to ensure compliance and integration of systems. Required Skills, Knowledge, Abilities: Bachelor’s Degree or equivalent and/or certification in data engineering, analytics, data science or another relevant field (machine learning, IT systems, etc.) 2+ years of full-time experience in a data engineering capacity. 2+ years data analytics experience (including data cleaning and framing, generating reports, data visualizations, writing code and code review) Experience collaborating directly with data scientists and data analysts who develop analyses in any combination of R, Python, SQL, Tableau, Stata, etc. preferred. Working knowledge of AWS data product ecosystem. Prior experience designing and developing data models, building out and testing ETL data pipelines, and automating scheduled workflows using technologies including Github Actions, SQL & Python. Fluency in collaborating with Git & Github, with dedication to using these tools to conduct peer code reviews and uphold coding standards. High comfort level with ingesting messy source datasets that are prone to manual data entry errors and integrating these into a live database. Ability to build repeatable and well-documented processes and tools that can be used by other research & analytics team members, regardless of the languages they use to perform their analyses. Excellent oral and written communication skills, including ability to present and teach the use of data infrastructure to a range of audiences in a variety of formats, and work effectively on a large team to advance shared priorities. Strong social and emotional awareness with your team and 
external partners. The following security checks are required for this position: a National Crime Information Center (NCIC) check, and an Electronic Questionnaire for Investigations Processing (Tier 1, e-QIP) security clearance. Preferred Skills, Knowledge, Abilities: Experience with Linux and bash scripting. Experience building dashboards with tools such as Tableau, Power BI, Looker, etc. Experience with automating bespoke tasks (e.g. basic web scraping, using 3rd party APIs). Familiarity with government security compliance standards. Graduate level education and/or certification in data engineering, analytics, data science or another relevant field (machine learning, IT systems, etc.) Compensation and Benefits: Acacia has established an internal compensation philosophy that centers equity and pay transparency. The salary for this position is set at $96,000. The salary listed is just one component of Acacia’s total compensation package for employees. Supporting Acacia staff—both personally and professionally—is our priority. Medical/Dental/Vision- Some plans at $0 cost to the employee Employee Assistance Program 20 days per year of vacation time 12 days per year of sick time 5 personal days 4 organization-wide Wellness Days 11 observed holidays, including the last week of December. $2000 Professional Development Stipend Home office set-up stipend Internet Stipend 401k with 5% employer contribution, no employee participation required. Student loan repayment assistance. Gym Reimbursement People of color and those who have been impacted by the criminal justice system are strongly urged to apply. To Apply: Please upload a resume and cover letter at the link provided or email hiring@acaciajustice.org with Subject: ATTN: Human Resources / [Job Title], Acacia Center for Justice Equal Opportunity Employment: Acacia is an equal opportunity employer and seeks to recruit persons of diverse backgrounds and support their retention and advancement within the organization. 
We are committed to fostering a workplace culture inclusive of people with respect to their race, ethnicity, national origin, gender/gender identity, sexual orientation, socio-economic status, veteran status, marital status, age, disabilities, political affiliation, religious beliefs, or any other characteristic. Our commitment to justice and diversity also means providing a work environment that is welcoming, respectful, and engaging. As a federal contractor, and in order to ensure a healthy and safe work environment, Acacia Center for Justice is requiring all employees to be fully vaccinated and provide proof of their COVID-19 vaccine before their start date. Employees who cannot receive the vaccine because of a disability/medical contraindication or sincerely-held religious belief may request an accommodation (e.g., an exemption) to this requirement. This job description is not meant to be an all-inclusive list of duties, responsibilities and requirements but constitutes a general definition of the position's scope and function within our organization. Acacia Center for Justice is an equal opportunity/affirmative action employer. 
All qualified applicants will be considered for employment without unlawful discrimination based on race, color, creed, national origin, sex, age, disability, marital status, sexual orientation, military status, prior record of arrest or conviction, citizenship status, current employment status, or caregiver status. Powered by JazzHR IzuMBpe7XX "," Mid-Senior level "," Full-time "," Information Technology "," Internet Publishing " Data Engineer,United States,Junior Data Scientist/Database Engineer - Technology,https://www.linkedin.com/jobs/view/junior-data-scientist-database-engineer-technology-at-arena-investors-lp-3507702618?refId=aTk8xGxdA9BBUHvODl8GpA%3D%3D&trackingId=tacPE9WrCf4bWnjJc%2FqNmQ%3D%3D&position=5&pageNum=10&trk=public_jobs_jserp-result_search-card," Arena Investors, LP ",https://www.linkedin.com/company/arena-investors-lp?trk=public_jobs_topcard-org-name," Jacksonville, FL "," 1 month ago "," Be among the first 25 applicants ","Please Note: This job is advertised for our office in Jacksonville Fl, but open to remote in the Southeast Region. (Florida, Georgia, North/South Carolina, Virginia, Alabama, Texas) Arena Investors, LP (""Arena"") is a global investment management firm that seeks to generate attractive risk adjusted, consistent and uncorrelated returns by employing a fundamentals based, asset-oriented financing and investing strategy across the entire credit spectrum in areas where conventional sources of capital are scarce. Arena specializes in off-the-run, stressed, distressed, illiquid and esoteric special situation transactions through originations and acquisitions of asset-oriented investments across a wide array of asset types (including but not limited to private direct corporate credit, commercial real estate bridge lending, and commercial and consumer assets). 
Quaestor Advisors, LLC (“Quaestor”) is an affiliated Special Servicer, which provides mid and back office services, including asset management, to Arena Investors and external clients. Quaestor is looking to expand the Technology team, through the addition of a junior-to-mid-level database developer/data scientist who has a passion for coding, troubleshooting data and solving problems. The right candidate would be a key contributor to a proprietary financial data warehouse that is crucial to running its business. Responsibilities would include analyzing data, troubleshooting logic problems, monitoring data health and developing data transformations/stored procedures while learning to interface with business-side sponsors from asset management, accounting, and operations. Ideal candidates will be organized, analytical, self-motivated, resourceful and able to work/communicate effectively with all internal functional groups. This is a great opportunity to be a part of a large-scale fintech platform while honing one’s technical skills and learning the private credit business. Responsibilities Analyze/Troubleshoot financial data of Arena’s Data warehouse. Assist with projects/tasks related to data analysis, data visualization, and reporting for our Operations, Finance & Asset Management departments. Develop stored procedures and data transformations. Develop PowerBI visualizations. Requirements Bachelor’s degree in computer science from a top university. 3-5 years experience in technology – tech pure play, startup, fintech, etc. Internships/Co-ops experience is a plus. Strong Relational Database knowledge (MS SQL Server, etc). Fluency in SQL, T-SQL, Stored Procedures. Familiarity with PowerBI (or Tableau) for data visualizations. Familiarity with SSRS for reporting. Experience with accounting, fund-accounting, trading operations-related data a plus. Familiarity with web development (HTML5, Javascript, jQuery, CSS, REST, etc) a plus. 
Familiarity with OOP and at least one language (C#, Java, etc) a plus. Comfortable with Agile/Scrum SDLC. A positive attitude, strong work ethic and a desire to work collaboratively across the organization. Strong attention to detail. COVID Vaccinated Benefits Health Care Plan (Medical, Dental & Vision) Retirement Plan (401k, IRA) Life Insurance (Basic, Voluntary & AD&D) Paid Time Off (Vacation, Sick & Public Holidays) Family Leave (Maternity, Paternity) Short Term & Long Term Disability Training & Development Work From Home Free Food & Snacks Wellness Resources"," Associate "," Full-time "," Analyst "," Technology, Information and Internet " Data Engineer,United States,Junior Data Scientist/Database Engineer - Technology,https://www.linkedin.com/jobs/view/data-engineer-at-starschema-3479481781?refId=aTk8xGxdA9BBUHvODl8GpA%3D%3D&trackingId=WdbQ3LRDqD88iCWvE5WNeA%3D%3D&position=6&pageNum=10&trk=public_jobs_jserp-result_search-card," Arena Investors, LP ",https://www.linkedin.com/company/arena-investors-lp?trk=public_jobs_topcard-org-name," Jacksonville, FL "," 1 month ago "," Be among the first 25 applicants "," Please Note: This job is advertised for our office in Jacksonville Fl, but open to remote in the Southeast Region. (Florida, Georgia, North/South Carolina, Virginia, Alabama, Texas)Arena Investors, LP (""Arena"") is a global investment management firm that seeks to generate attractive risk adjusted, consistent and uncorrelated returns by employing a fundamentals based, asset-oriented financing and investing strategy across the entire credit spectrum in areas where conventional sources of capital are scarce. 
Arena specializes in off-the-run, stressed, distressed, illiquid and esoteric special situation transactions through originations and acquisitions of asset-oriented investments across a wide array of asset types (including but not limited to private direct corporate credit, commercial real estate bridge lending, and commercial and consumer assets). Quaestor Advisors, LLC (“Quaestor”) is an affiliated Special Servicer, which provides mid and back office services, including asset management, to Arena Investors and external clients. Quaestor is looking to expand the Technology team, through the addition of a junior-to-mid-level database developer/data scientist who has a passion for coding, troubleshooting data and solving problems. The right candidate would be a key contributor to a proprietary financial data warehouse that is crucial to running its business. Responsibilities would include analyzing data, troubleshooting logic problems, monitoring data health and developing data transformations/stored procedures while learning to interface with business-side sponsors from asset management, accounting, and operations. Ideal candidates will be organized, analytical, self-motivated, resourceful and able to work/communicate effectively with all internal functional groups. This is a great opportunity to be a part of a large-scale fintech platform while honing one’s technical skills and learning the private credit business. Responsibilities Analyze/Troubleshoot financial data of Arena’s Data warehouse. Assist with projects/tasks related to data analysis, data visualization, and reporting for our Operations, Finance & Asset Management departments. Develop stored procedures and data transformations. Develop PowerBI visualizations. Requirements Bachelor’s degree in computer science from a top university. 3-5 years experience in technology – tech pure play, startup, fintech, etc. Internships/Co-ops experience is a plus. Strong Relational Database knowledge (MS SQL Server, etc). 
Fluency in SQL, T-SQL, Stored Procedures. Familiarity with PowerBI (or Tableau) for data visualizations. Familiarity with SSRS for reporting. Experience with accounting, fund-accounting, trading operations-related data a plus. Familiarity with web development (HTML5, Javascript, jQuery, CSS, REST, etc) a plus. Familiarity with OOP and at least one language (C#, Java, etc) a plus. Comfortable with Agile/Scrum SDLC. A positive attitude, strong work ethic and a desire to work collaboratively across the organization. Strong attention to detail. COVID Vaccinated Benefits Health Care Plan (Medical, Dental & Vision) Retirement Plan (401k, IRA) Life Insurance (Basic, Voluntary & AD&D) Paid Time Off (Vacation, Sick & Public Holidays) Family Leave (Maternity, Paternity) Short Term & Long Term Disability Training & Development Work From Home Free Food & Snacks Wellness Resources "," Associate "," Full-time "," Analyst "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-chisel-analytics-3510057807?refId=aTk8xGxdA9BBUHvODl8GpA%3D%3D&trackingId=nmb%2F3GoxA%2FK1Fg2c3vJn7g%3D%3D&position=21&pageNum=10&trk=public_jobs_jserp-result_search-card," Chisel Analytics ",https://www.linkedin.com/company/chiselanalytics?trk=public_jobs_topcard-org-name," Dallas-Fort Worth Metroplex "," 1 week ago "," Over 200 applicants ","GENERAL PURPOSE The candidate will be responsible for designing ETL processes in the cloud with the objective of obtaining both external and internal data sources to feed the team's data repository following guidelines and best practice standards. The role will work together with the Data Scientists and Data Translators team as well as contact the IT team for the development of analytical projects and models. They will also be responsible for automating, optimizing and monitoring production implementations. 
DUTIES AND RESPONSIBILITIES Design, create and operate robust pipelines from internal / external sources to the Data Lake. Transform data to meet the requirements of the Data Science team. Automate and optimize project analytical solutions (End-to-end) Creation of unit tests. Support of productive projects. Work under Agile methodology with an advanced analytics team (Data Scientists, Data Translators) QUALIFICATIONS Education: Bachelor’s degree in computer science, information systems, or closely related field. Advanced degree (Masters or PhD) preferred Experience: Experience in the development of ETLs (3 years) Experience with databases (3 years) Python / Scala (2 years) At least have experience in 2 different programming languages Experience in cloud platforms (desirable Azure) Basic knowledge of best programming practices. Teamwork. Autodidact, ability to learn on their own. Orientation to results Desirable: Azure Data Factory, Azure Databricks, Azure DevOps CI / CD SAP Data Services Scrum"," Contract ",,, Data Engineer,United States,Data Engineer -272,https://www.linkedin.com/jobs/view/data-engineer-272-at-surge-technology-solutions-inc-3526766970?refId=aTk8xGxdA9BBUHvODl8GpA%3D%3D&trackingId=yO3fyMrTxd%2BulW7qxcbMkQ%3D%3D&position=23&pageNum=10&trk=public_jobs_jserp-result_search-card," Surge Technology Solutions Inc ",https://www.linkedin.com/company/surge-technology-solutions?trk=public_jobs_topcard-org-name," Peoria, IL "," 11 hours ago "," Be among the first 25 applicants ","Emp Type: W2 or 1099 (No C2C) Visa: H4EAD, GCEAD, L2, Green Card, US Citizens Location: Peoria IL Workplace Type: Hybrid (2 days a week ) Peoria IL , Chicago IL (Only USA Applicants) Experience: 8+ years experience required (USA Applicants only) Job Description: Summary: The main function of an analyst/developer is to develop and design web applications and web sites. A typical analyst/developer is responsible for directing web site content creation, enhancement and maintenance. 
Job Responsibilities Basic design, build or maintenance of web sites, using authoring or scripting languages, content creation tools, management tools and digital media. Identify problems uncovered by testing or customer feedback and correct problems. Evaluate code to ensure it is valid, meets industry standards and is compatible with devices or operating systems. Skills Verbal and written communication skills, problem solving skills, customer service and interpersonal skills. Basic ability to work independently and manage one's time. Basic knowledge of circuit boards, processors, electronic equipment and computer hardware and software. Basic knowledge of design techniques and principles involved in production of drawings and models. Basic knowledge of computer software, such as Adobe, Java, SQL, etc. Education/Experience Bachelor's degree in computer science or equivalent training required. 8-10 Years Experience Required. Education & Experience Required: Requires a Bachelor's degree in Computer Science or related field and more than 8 years of experience in development and support work related to Web development, API, Database Design & Development, and/or Data Analytics. Applicable project/internship work will be considered if durations are listed on resumes. Technical Skills (Required) MySQL, Microsoft SQL Server, Oracle, IBM DB2 - Azure Cloud - ASP.NET, C# - Power BI (experience working with large data sets) (Desired) Snowflake Database - Demonstrable experience with software/web development with emphasis on Quality/internship work will be considered if durations are listed on resumes."," Entry level "," Full-time "," Information Technology "," Software Development " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-idr-inc-3523756808?refId=aTk8xGxdA9BBUHvODl8GpA%3D%3D&trackingId=u36LP8ixS372AMc3HtQjyQ%3D%3D&position=24&pageNum=10&trk=public_jobs_jserp-result_search-card," IDR, Inc. 
",https://www.linkedin.com/company/idrinc?trk=public_jobs_topcard-org-name," Tennessee, United States "," 13 hours ago "," Over 200 applicants ","IDR is seeking a Data Engineer to join one of our top clients in the healthcare industry out of Franklin, TN. This opportunity is 100% remote, a long-term opportunity, and offers the chance to work for an ever-growing team!! If this sounds like a fit for you, please apply TODAY!! *NOT open to C2C / Sponsorship* Position Overview for the Data Engineer: Design and build data platforms and pipelines that increase analytic capabilities Identifying, designing, and implementing process improvements including optimizing data delivery and automating manual processes. Using Azure Data Factory and SSIS for data extraction and transformation Collaborate with Design, Data, and Product reams to assist and support their data infrastructure needs Required Skills for the Data Engineer: 2+ years of Data Engineer experience 2+ years of experience with SQL, queries, and relational databases 2+ years of ETL experience Experience with ""Big Data"" pipelines, architectures, and data sets Bachelor's Degree in IT or equivalent work experience What’s in it for you? Competitive compensation package Full Benefits; Medical, Vision, Dental, and more! Opportunity to get in with an industry leading organization Close-knit and team-oriented culture Why IDR? 
20+ Years of Proven Industry Experience in 4 major markets Employee Stock Ownership Program Dedicated Engagement Manager who is committed to you and your success Medical, Dental, Vision, and Life Insurance ClearlyRated’s Best of Staffing® Client and Talent Award winner 9 years in a row"," Mid-Senior level "," Contract "," Information Technology and Engineering "," Hospitals and Health Care " Data Engineer,United States,Data Engineer I - REMOTE,https://www.linkedin.com/jobs/view/data-engineer-i-remote-at-core-group-resources-3511755938?refId=aTk8xGxdA9BBUHvODl8GpA%3D%3D&trackingId=glbjVZAoXWP%2Fw26%2BmllOuw%3D%3D&position=25&pageNum=10&trk=public_jobs_jserp-result_search-card," Core Group Resources ",https://www.linkedin.com/company/core-group-resources?trk=public_jobs_topcard-org-name," Phoenix, AZ "," 1 month ago "," Be among the first 25 applicants ","We Are Currently In The Market For The Following Core Group Resources (www.coregroupresources.com) is Americas leading recruitment company. Founded by a service academy graduate who has offshore experience, Core Group Resources expertise is unmatched in the marine offshore market, finance, IT, renewables, & non-profit for executive search, staffing, and expertise identification. For more information contact us at 281 347 4700. Data Engineer I – REMOTE Job Summary This position will work in cross-functional, geographically distributed agile teams of highly skilled data engineers, software/machine learning engineers, data scientists, DevOps engineers, designers, product managers, technical delivery teams, and others to continuously innovate analytic solutions. You will work in close collaboration with mining operations, subject matter experts, data scientists, and software engineers to develop advanced, highly automated data products. 
Responsibilities Design, develop, and review real-time/bulk data pipelines Design patterns for ingest, transformation, and egress Develop documentation of Data Lineage, and Data Dictionaries Utilize modern cloud technologies and employ best practices from DevOps/DataOps to produce production Python and SQL code Requirements Bachelor’s degree in Engineering, Computer Science, Analytics, or related field At least 3 years of work experience Knowledgeable practitioner in SQL and Python development, data engineering, data modeling, software engineering, and ML systems architecture Experience in data science, wrangling data, etc. Preferred knowledge in Azure Stream Architectures, Distributed Parallel Processing Learning Environment, problem solving/root cause analysis, Agile, Scrum, and Kanban #Dice"," Entry level "," Full-time "," Information Technology "," Staffing and Recruiting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-stellent-it-3527797182?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=yf4tfNcmuPqELbPx8PVy5g%3D%3D&position=1&pageNum=11&trk=public_jobs_jserp-result_search-card," Stellent IT ",https://www.linkedin.com/company/stellent-it?trk=public_jobs_topcard-org-name," Houston, TX "," 3 weeks ago "," Be among the first 25 applicants ","Houston, Texas long term Phone + skype Job Description Here is the skill set and experience I am looking for: 5+ years Data Engineering experience Python experience (not just as a web app) Pandas Apache Airflow Proficient in SQL as a language Algorithm Experience Excellent communication This needs to be someone that is comfortable and has experience working with complex calculations and mathematical formulas. 
A strong statistics background would be nice."," Entry level "," Full-time "," Other "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer -272,https://www.linkedin.com/jobs/view/data-engineer-272-at-surge-technology-solutions-inc-3526766970?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=xrDoPCAM7XFgj7Xw3ZKjUw%3D%3D&position=2&pageNum=11&trk=public_jobs_jserp-result_search-card," Surge Technology Solutions Inc ",https://www.linkedin.com/company/surge-technology-solutions?trk=public_jobs_topcard-org-name," Peoria, IL "," 11 hours ago "," Be among the first 25 applicants ","Emp Type: W2 or 1099 (No C2C) Visa: H4EAD, GCEAD, L2, Green Card, US Citizens Location: Peoria IL Workplace Type: Hybrid (2 days a week ) Peoria IL , Chicago IL (Only USA Applicants) Experience: 8+ years experience required (USA Applicants only) Job Description: Summary: The main function of an analyst/developer is to develop and design web applications and web sites. A typical analyst/developer is responsible for directing web site content creation, enhancement and maintenance. Job Responsibilities Basic design, build or maintenance of web sites, using authoring or scripting languages, content creation tools, management tools and digital media. Identify problems uncovered by testing or customer feedback and correct problems. Evaluate code to ensure it is valid, meets industry standards and is compatible with devices or operating systems. Skills Verbal and written communication skills, problem solving skills, customer service and interpersonal skills. Basic ability to work independently and manage one's time. Basic knowledge of circuit boards, processors, electronic equipment and computer hardware and software. Basic knowledge of design techniques and principles involved in production of drawings and models. Basic knowledge of computer software, such as Adobe, Java, SQL, etc. Education/Experience Bachelor's degree in computer science or equivalent training required. 
8-10 Years Experience Required. Education & Experience Required: Requires a Bachelor's degree in Computer Science or related field and more than 8 years of experience in development and support work related to Web development, API, Database Design & Development, and/or Data Analytics. Applicable project/internship work will be considered if durations are listed on resumes. Technical Skills (Required) MySQL, Microsoft SQL Server, Oracle, IBM DB2 - Azure Cloud - ASP.NET, C# - Power BI (experience working with large data sets) (Desired) Snowflake Database - Demonstrable experience with software/web development with emphasis on Quality/internship work will be considered if durations are listed on resumes."," Entry level "," Full-time "," Information Technology "," Software Development " Data Engineer,United States,Data Engineer I - REMOTE,https://www.linkedin.com/jobs/view/data-engineer-i-remote-at-core-group-resources-3511755938?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=oninYz8cFTfC1D8uSAHb%2Bw%3D%3D&position=3&pageNum=11&trk=public_jobs_jserp-result_search-card," Core Group Resources ",https://www.linkedin.com/company/core-group-resources?trk=public_jobs_topcard-org-name," Phoenix, AZ "," 1 month ago "," Be among the first 25 applicants ","We Are Currently In The Market For The Following Core Group Resources (www.coregroupresources.com) is Americas leading recruitment company. Founded by a service academy graduate who has offshore experience, Core Group Resources expertise is unmatched in the marine offshore market, finance, IT, renewables, & non-profit for executive search, staffing, and expertise identification. For more information contact us at 281 347 4700. 
Data Engineer I – REMOTE Job Summary This position will work in cross-functional, geographically distributed agile teams of highly skilled data engineers, software/machine learning engineers, data scientists, DevOps engineers, designers, product managers, technical delivery teams, and others to continuously innovate analytic solutions. You will work in close collaboration with mining operations, subject matter experts, data scientists, and software engineers to develop advanced, highly automated data products. Responsibilities Design, develop, and review real-time/bulk data pipelines Design patterns for ingest, transformation, and egress Develop documentation of Data Lineage, and Data Dictionaries Utilize modern cloud technologies and employ best practices from DevOps/DataOps to produce production Python and SQL code Requirements Bachelor’s degree in Engineering, Computer Science, Analytics, or related field At least 3 years of work experience Knowledgeable practitioner in SQL and Python development, data engineering, data modeling, software engineering, and ML systems architecture Experience in data science, wrangling data, etc. Preferred knowledge in Azure Stream Architectures, Distributed Parallel Processing Learning Environment, problem solving/root cause analysis, Agile, Scrum, and Kanban #Dice"," Entry level "," Full-time "," Information Technology "," Staffing and Recruiting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-idr-inc-3523756808?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=YuYfsjQhuXizzqljIZ2DZw%3D%3D&position=4&pageNum=11&trk=public_jobs_jserp-result_search-card," IDR, Inc. ",https://www.linkedin.com/company/idrinc?trk=public_jobs_topcard-org-name," Tennessee, United States "," 13 hours ago "," Over 200 applicants ","IDR is seeking a Data Engineer to join one of our top clients in the healthcare industry out of Franklin, TN. 
This opportunity is 100% remote, a long-term opportunity, and offers the chance to work for an ever-growing team!! If this sounds like a fit for you, please apply TODAY!! *NOT open to C2C / Sponsorship* Position Overview for the Data Engineer: Design and build data platforms and pipelines that increase analytic capabilities Identify, design, and implement process improvements, including optimizing data delivery and automating manual processes Use Azure Data Factory and SSIS for data extraction and transformation Collaborate with Design, Data, and Product teams to assist and support their data infrastructure needs Required Skills for the Data Engineer: 2+ years of Data Engineer experience 2+ years of experience with SQL, queries, and relational databases 2+ years of ETL experience Experience with ""Big Data"" pipelines, architectures, and data sets Bachelor's Degree in IT or equivalent work experience What’s in it for you? Competitive compensation package Full Benefits; Medical, Vision, Dental, and more! Opportunity to get in with an industry leading organization Close-knit and team-oriented culture Why IDR? 
20+ Years of Proven Industry Experience in 4 major markets Employee Stock Ownership Program Dedicated Engagement Manager who is committed to you and your success Medical, Dental, Vision, and Life Insurance ClearlyRated’s Best of Staffing® Client and Talent Award winner 9 years in a row"," Mid-Senior level "," Contract "," Information Technology and Engineering "," Hospitals and Health Care " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-flywheel-digital-3513140667?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=OeaEwYnQzklyIro2MpKOeg%3D%3D&position=5&pageNum=11&trk=public_jobs_jserp-result_search-card," Flywheel Digital ",https://www.linkedin.com/company/flywheel-digital?trk=public_jobs_topcard-org-name," Baltimore, MD "," 1 week ago "," 113 applicants ","Flywheel Digital Hybrid - Office located in Baltimore, MD Data Engineer About Flywheel Flywheel Digital is a diverse collection of practitioners who have solved the most challenging problems for numerous Fortune 500 companies on Amazon. We love rolling up our sleeves to figure out the root cause of issues and implement structural fixes to get and keep our clients' business on track. Our team of business managers, search managers, analysts, and software developers works together to provide industry-leading support to the best brands on Amazon. Flywheel is headquartered in Baltimore in the United States and has recently set up a European hub in London. Role Overview We're looking for a Data Engineer to join our team as part of our Product Development function. The best candidates will hit the ground running and contribute to our data team as we develop and maintain necessary data automation, reports, ETL/ELT, and quality controls using leading-edge cloud technologies. You will have deep knowledge and understanding of all stages in the software development life cycle. 
The ability to self-start, a desire to learn new technology, the capacity to manage multiple priorities, and strong communication are all in your wheelhouse! Key Responsibilities Be a driving force in assuring quality, timely, accurate data across several business areas Build data pipelines that range from simple to complex, using technologies like Apache Airflow and AWS Lambda, Step Functions, and CloudWatch Write code in Python, PostgreSQL and MySQL. You must have deep experience with SQL views and stored procedures, and you understand data modelling and the value of an ERD Extract data from REST API endpoints Be comfortable being called upon to engage directly with technical analysts to help build concise technical requirements Have familiarity with version control concepts, have used GitHub in a team setting, and have experience collaborating with other engineers in a paired development environment Be reliable, accountable, and able to work without supervision, independently and as part of a team Have a high level of integrity, be action-oriented and an assertive communicator Be flexible, and able to handle multiple priorities as needed Strive to do things the right way, embrace change in a dynamic, rapidly evolving environment Love to learn and use cutting-edge technology Your Experience 4 years of experience developing with Python 2 years of experience developing with MySQL and PostgreSQL (AWS Redshift would be ideal) Experience working in an agile development environment Experience with data modelling Experience with data pipelines/batch automation concepts (Apache Airflow would be ideal) Familiarity with Jira Familiarity with GitHub Experience with AWS S3 Experience with AWS Lambda and CloudWatch Experience with other AWS technologies: EC2, Step Functions, Glue, Athena, Data Pipeline What We Offer Our benefits package incorporates what we’re passionate about – unlocking your future, overall well-being and sustainability – whilst giving you control over your benefits. 
Unlimited PTO 401K – Saving Incentive plan Very Generous Medical, Vision, and Dental Insurance plans Flexible Spending Accounts Great learning and development opportunities Life Assurance and Disability insurance Option to opt into the Ascential Shares Scheme Inclusive Workforce At Ascential, our goal is to create a culture where individuals of all backgrounds feel comfortable in bringing their authentic selves to work. We want all Ascential people to feel included and truly empowered to contribute fully to our vision and goals. Everyone who applies will receive fair consideration for employment. We do not discriminate based upon race, colour, religion, sex, sexual orientation, age, marital status, gender identity, national origin, disability, or any other applicable legally protected characteristics in the location in which the candidate is applying. If you have any accessibility requirements that would make you more comfortable during the application and interview process, please let us know so that we can support you."," Associate "," Full-time "," Information Technology, Engineering, and Product Management "," IT Services and IT Consulting, Advertising Services, and Business Consulting and Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-one-3499583542?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=psknzYsJ0G76ixIhkyxHhA%3D%3D&position=6&pageNum=11&trk=public_jobs_jserp-result_search-card," ONE ",https://www.linkedin.com/company/oneapp?trk=public_jobs_topcard-org-name," Illinois, United States "," 1 week ago "," 33 applicants ","About ONE ONE is on a mission to help people live healthier financial lives. We’re doing this by creating simple solutions to help our customers save, spend, and grow their money – all in one place. The U.S. consumer today deserves better. 
Millions of Americans today can’t access credit, build savings or wealth, and are left to manage their financial lives through multiple disconnected apps. Almost a quarter of U.S. adults are unbanked or underbanked and roughly 80% of fintech users rely on multiple accounts to manage their finances. What makes us unique? We are backed by a preeminent fintech investor (Ribbit) and the world’s largest retailer (Walmart), maintain the speed and independence of a startup, and employ a strong (and growing) collection of world-class talent. There’s never been a better moment to build a business that helps people live healthier financial lives. Come build with us! The role As an Engineer, Data, your mandate is to build the data transformation pipelines & reporting infrastructure that serve our members and run our company. This role will impact ONE’s vision by writing, reviewing, and shipping code and collaborating with others across the company. You will work closely with the Engineering, Product, and Design teams, as well as the Sales, Compliance, and Customer Support teams, engaging stakeholders as a highly technical, communicative, and emotionally intelligent partner. This role reports to the Manager, Data Engineering. This role is responsible for: Building and maintaining ONE's data infrastructure. Developing and owning ONE's streaming data transformation pipeline. Managing and supporting reporting and analytics tools. Collaborating closely with our Data Analysts and Data Scientists. Tracking and defining metrics around performance. Additional duties as assigned by your manager. You bring Mid Career (5-10 Years) Experience in Apache Spark, Scala, Python, SQL. Experience designing and implementing low latency data pipelines using Spark structured streaming. Experience optimizing SQL queries and lake / warehouse data structures. Cost-conscious creative problem solving. Proficient in AWS cloud services and technologies. 
Experience building, shipping — and growing — non-trivial products & services. Passion for your craft, people, and our mission. You value quality across code, communication, and culture. A desire to keep growing your skills, and an ability to learn quickly. We hire for slope, not just y-intercept. Pay Transparency The estimated annual base salary for this position ranges from $175,000 to $190,000. Pay is generally based upon the level, complexity, responsibility, and job duties/requirements of the specific position. We then source candidates with the requisite skills, expertise, education, training, and experience. If you are selected for an interview, please feel welcome to speak to a Talent Partner about our compensation philosophy and other available benefits. Leveling Philosophy In order to thoughtfully scale the company and avoid downstream inequities, we’ve adopted a flat titling structure at ONE. Though we may occasionally post a role externally with a prefix such as “Senior” to reflect the external level of the position, we do not use prefixes in titles like that internally unless in a position which manages a team. Internal titles typically include your specific functional responsibility, such as engineering, product management or sales, and often include additional descriptors to ensure clarity of role and placement within our organization (i.e. “Engineer, Platform”, “Sales, Business Development” or “Manager, Talent”). Employees are paid commensurate with their experience and the internal level within ONE. What it's like working @ ONE Our teams collaborate remotely and in our work spaces in New York and Sacramento. 
Competitive cash Benefits effective on day one Early access to a high potential, high growth fintech Generous stock option packages in an early-stage startup Remote friendly (anywhere in the US) and office friendly - you pick the schedule Flexible time off programs - vacation, sick, paid parental leave, and paid caregiver leave 401(k) plan with match Inclusion & Belonging To build technology and products that are used and loved by people and solve real-world problems, we need to build a team with many different perspectives and experiences. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We encourage candidates from all backgrounds to apply. Applicants in need of special assistance or accommodation during the interview process or in accessing our website may contact us. Email talent@one.app with any questions."," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-perficient-3511789789?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=OPV668iS8REfgem3YFScYQ%3D%3D&position=9&pageNum=11&trk=public_jobs_jserp-result_search-card," Perficient ",https://www.linkedin.com/company/perficient?trk=public_jobs_topcard-org-name," New York, NY "," 1 week ago "," 186 applicants ","Overview We currently have a career opportunity for a Data Engineer to join our Financial Services team. This role is located in Pittsburgh, PA. As a Data Engineer you will participate in all aspects of the software development lifecycle, which includes estimating, technical design, implementation, documentation, testing, deployment, and support of applications developed for our clients. As a member working in a team environment you will take direction from Solution Architects and Leads on development activities. Perficient is always looking for the best and brightest talent and we need you! We’re a quickly-growing, global digital consulting leader, and we’re transforming the world’s largest enterprises and biggest brands. You’ll work with the latest technologies, expand your skills, and become a part of our global community of talented, diverse, and knowledgeable colleagues. Responsibilities Participate in technical planning and requirements gathering phases, including design, code, test, troubleshoot, and document engineering software applications. Ensure that the technical software development process is followed on the project; stay familiar with industry best practices for software development. Lead definition of data requirements and work with the Technology Manager and team to implement. Design and implement data processing workflows to cleanse, standardize, deduplicate, and match data from Data Provider data submissions. 
Assess data model success against user requirements and testing, and implement improvements. Apply advanced Snowflake concepts such as resource monitors, virtual warehouse sizing, query performance tuning, zero copy clone, and time travel. Provide guidance on moving data across different environments in Snowflake. Demonstrate the ability to adapt and work with team members of various experience levels. Qualifications Passionate coder with 3 years of application development experience. Breadth of experience with Java, Python and Snowflake. Strong debugging, problem solving and investigative skills. Ability to assimilate disparate information (log files, error messages etc.) and pursue leads to find the root cause of problems. Experience with Agile/Scrum methodology. Self-starter who can work independently. Bachelor’s Degree in MIS, Computer Science, Math, Engineering or comparable major. Strong consulting and communication skills. Ability to work effectively with various organizations in pursuit of problem solutions. Perficient full-time employees receive complete and competitive benefits. We offer a collaborative work environment, competitive compensation, generous work/life opportunities and an outstanding benefits package that includes paid time off plus holidays. In addition, all colleagues are eligible for a number of rewards and recognition programs including billable bonus opportunities. Encouraging a healthy work/life balance and providing our colleagues great benefits are just part of what makes Perficient a great place to work. Who We Are Perficient is a leading global digital consultancy. We imagine, create, engineer, and run digital transformation solutions that help our clients exceed customers’ expectations, outpace competition, and grow their business. 
With unparalleled strategy, creative, and technology capabilities, our colleagues bring big thinking and innovative ideas, along with a practical approach to help our clients – the world’s largest enterprises and biggest brands – succeed. What We Believe At Perficient, we promise to challenge, champion, and celebrate our people. You will experience a unique and collaborative culture that values every voice. Join our team, and you’ll become part of something truly special. We believe in developing a workforce that is as diverse and inclusive as the clients we work with. We’re committed to actively listening, learning, and acting to further advance our organization, our communities, and our future leaders… and we’re not done yet. Perficient, Inc. proudly provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, gender, sexual orientation, national origin, age, disability, genetic information, marital status, amnesty, or status as a protected veteran in accordance with applicable federal, state and local laws. Perficient, Inc. complies with applicable state and local laws governing non-discrimination in employment in every location in which the company has facilities. This policy applies to all terms and conditions of employment, including, but not limited to, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training. Perficient, Inc. expressly prohibits any form of unlawful employee harassment based on race, color, religion, gender, sexual orientation, national origin, age, genetic information, disability, or covered veteran status. Improper interference with the ability of Perficient, Inc. employees to perform their expected job duties is absolutely not tolerated. 
Disability Accommodations Perficient is committed to providing a barrier-free employment process with reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or accommodation due to a disability, please contact us. Disclaimer: The above statements are not intended to be a complete statement of job content, rather to act as a guide to the essential functions performed by the employee assigned to this classification. Management retains the discretion to add or change the duties of the position at any time. "," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Azure Data Engineer,https://www.linkedin.com/jobs/view/azure-data-engineer-at-fusion-alliance-3512348961?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=uL1X%2B1phez8OAfDqm5KDMQ%3D%3D&position=10&pageNum=11&trk=public_jobs_jserp-result_search-card," Fusion Alliance ",https://www.linkedin.com/company/fusion-alliance?trk=public_jobs_topcard-org-name," Cincinnati, OH "," 3 weeks ago "," Be among the first 25 applicants ","Data Engineer – Azure Data Services 6-8 years of data integration/ETL experience using one or more platforms 3+ years of strong experience with implementing data warehousing or reporting solutions, especially in a role with data integration design & development Solid SQL skills 3+ years of dedicated development experience using one or more of the following components: Azure Data Factory, Azure Synapse, Azure Blob or Azure Data Lake, DataBricks, Azure event hub/grid Hands-on Azure data pipelining experience with different input formats (structured, unstructured, semi-structured); understands how to design for 
streaming and event based ingestion patterns Nice plus - demonstrated experience working with Snowflake as target platform Nice plus – demonstrated experience using Informatica PowerCenter Established in 1994 in Indianapolis, Indiana, Fusion Alliance is highly regarded as an enterprise solution provider, delivering practical insights, engaging customer experiences, and human-driven technologies that transform the way our clients do business. That’s why over 450 clients across more than 100 companies trust us. They know that the solutions we build alongside them are robust, scalable, usable, and secure – even in the most challenging, dynamic, and highly regulated environments. We have deep experience in delivering solutions to companies within life sciences and healthcare, banking and insurance, manufacturing, energy and utilities, and more. Copy and paste the link to learn a little more about our consultants’ experience so far! https://fusionalliance.com/careers/spotlight/"," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Azure Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-diverse-lynx-3499123168?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=a11d38Py%2Bk6QTGfchf5lnA%3D%3D&position=11&pageNum=11&trk=public_jobs_jserp-result_search-card," Fusion Alliance ",https://www.linkedin.com/company/fusion-alliance?trk=public_jobs_topcard-org-name," Cincinnati, OH "," 3 weeks ago "," Be among the first 25 applicants "," Data Engineer – Azure Data Services6-8 years of data integration/ETL experience using one or more platforms3+ years of strong experience with implementing data warehousing or reporting solutions, especially in a role with data integration design & developmentSolid SQL skills3+ years of dedicated development experience using one or more of the following components: Azure Data Factory, Azure Synapse, Azure Blob or Azure Data Lake, DataBricks, Azure event hub/gridHands-on 
Azure data pipelining experience with different input formats (structured, unstructured, semi-structured); understands how to design for streaming and event based ingestion patternsNice plus - demonstrated experience working with Snowflake as target platformNice plus – demonstrated experience using Informatica PowerCenterEstablished in 1994 in Indianapolis, Indiana, Fusion Alliance is highly regarded as an enterprise solution provider, delivering practical insights, engaging customer experiences, and human-driven technologies that transform the way our clients do business. That’s why over 450 clients across more than 100 companies trust us. They know that the solutions we build alongside them are robust, scalable, usable, and secure – even in the most challenging, dynamic, and highly regulated environments. We have deep experience in delivering solutions to companies within life sciences and healthcare, banking and insurance, manufacturing, energy and utilities, and more. Copy and paste the link to learn a little more about our consultants’ experience so far! 
https://fusionalliance.com/careers/spotlight/ "," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Azure Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-oshi-health-3523707600?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=ICZzm0itW6KF5Z1LYlJQFQ%3D%3D&position=12&pageNum=11&trk=public_jobs_jserp-result_search-card," Fusion Alliance ",https://www.linkedin.com/company/fusion-alliance?trk=public_jobs_topcard-org-name," Cincinnati, OH "," 3 weeks ago "," Be among the first 25 applicants "," Data Engineer – Azure Data Services 6-8 years of data integration/ETL experience using one or more platforms 3+ years of strong experience with implementing data warehousing or reporting solutions, especially in a role with data integration design & development Solid SQL skills 3+ years of dedicated development experience using one or more of the following components: Azure Data Factory, Azure Synapse, Azure Blob or Azure Data Lake, DataBricks, Azure event hub/grid Hands-on Azure data pipelining experience with different input formats (structured, unstructured, semi-structured); understands how to design for streaming and event based ingestion patterns Nice plus - demonstrated experience working with Snowflake as target platform Nice plus – demonstrated experience using Informatica PowerCenter Established in 1994 in Indianapolis, Indiana, Fusion Alliance is highly regarded as an enterprise solution provider, delivering practical insights, engaging customer experiences, and human-driven technologies that transform the way our clients do business. That’s why over 450 clients across more than 100 companies trust us. They know that the solutions we build alongside them are robust, scalable, usable, and secure – even in the most challenging, dynamic, and highly regulated environments. 
We have deep experience in delivering solutions to companies within life sciences and healthcare, banking and insurance, manufacturing, energy and utilities, and more. Copy and paste the link to learn a little more about our consultants’ experience so far! https://fusionalliance.com/careers/spotlight/ "," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-one-3499585212?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=Ui1Msi8rRL2ElP7x%2Bxt%2Fjw%3D%3D&position=13&pageNum=11&trk=public_jobs_jserp-result_search-card," ONE ",https://www.linkedin.com/company/oneapp?trk=public_jobs_topcard-org-name," Chicago, IL "," 2 weeks ago "," Be among the first 25 applicants ","About ONE ONE is on a mission to help people live healthier financial lives. We’re doing this by creating simple solutions to help our customers save, spend, and grow their money – all in one place. The U.S. consumer today deserves better. Millions of Americans today can’t access credit, build savings or wealth, and are left to manage their financial lives through multiple disconnected apps. Almost a quarter of U.S. adults are unbanked or underbanked and roughly 80% of fintech users rely on multiple accounts to manage their finances. What makes us unique? We are backed by a preeminent fintech investor (Ribbit) and the world’s largest retailer (Walmart), maintain the speed and independence of a startup, and employ a strong (and growing) collection of world-class talent. There’s never been a better moment to build a business that helps people live healthier financial lives. Come build with us! The role As an Engineer, Data, your mandate is to build the data transformation pipelines & reporting infrastructure that serve our members and run our company. This role will impact ONE’s vision by writing, reviewing, and shipping code and collaborating with others across the company. 
You will work closely with the Engineering, Product, and Design teams, as well as Sales, Compliance, and Customer Support, working with stakeholders as highly technical, communicative, and emotionally intelligent partners. This role reports to Manager, Data Engineering. This role is responsible for: Building and maintaining ONE's data infrastructure. Developing and owning ONE's streaming data transformation pipeline. Managing and supporting reporting and analytics tools. Collaborating closely with our Data Analysts and Data Scientists. Tracking and defining metrics around performance. Additional duties as assigned by your manager. You bring Mid Career (5-10 Years) Experience in Apache Spark, Scala, Python, SQL. Experience designing and implementing low latency data pipelines using Spark structured streaming. Experience optimizing SQL queries and lake / warehouse data structures. Cost-conscious creative problem solving. Proficient in AWS cloud services and technologies. Experience building, shipping — and growing — non-trivial products & services. Passion for your craft, people, and our mission. You value quality across code, communication, and culture. A desire to keep growing your skills, and an ability to learn quickly. We hire for slope, not just y-intercept. Pay Transparency The estimated annual base salary for this position ranges from $175,000 to $190,000. Pay is generally based upon the level, complexity, responsibility, and job duties/requirements of the specific position. We then source candidates with the requisite skills, expertise, education, training, and experience. If you are selected for an interview, please feel welcome to speak to a Talent Partner about our compensation philosophy and other available benefits. Leveling Philosophy In order to thoughtfully scale the company and avoid downstream inequities, we’ve adopted a flat titling structure at ONE. 
Though we may occasionally post a role externally with a prefix such as “Senior” to reflect the external level of the position, we do not use prefixes in titles like that internally unless in a position which manages a team. Internal titles typically include your specific functional responsibility, such as engineering, product management or sales, and often include additional descriptors to ensure clarity of role and placement within our organization (e.g. “Engineer, Platform”, “Sales, Business Development” or “Manager, Talent”). Employees are paid commensurate with their experience and the internal level within ONE. What it's like working @ ONE Our teams collaborate remotely and in our work spaces in New York and Sacramento. Competitive cash Benefits effective on day one Early access to a high potential, high growth fintech Generous stock option packages in an early-stage startup Remote friendly (anywhere in the US) and office friendly - you pick the schedule Flexible time off programs - vacation, sick, paid parental leave, and paid caregiver leave 401(k) plan with match Inclusion & Belonging To build technology and products that are used and loved by people and solve real-world problems, we need to build a team with many different perspectives and experiences. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We encourage candidates from all backgrounds to apply. Applicants in need of special assistance or accommodation during the interview process or in accessing our website may contact us. 
Email talent@one.app with any questions."," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-traject-3504147945?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=VWevA%2FZGP4smOCdkiGdGjQ%3D%3D&position=14&pageNum=11&trk=public_jobs_jserp-result_search-card," Traject ",https://www.linkedin.com/company/bytraject?trk=public_jobs_topcard-org-name," Austin, TX "," 1 month ago "," Be among the first 25 applicants ","PLANOLY is the industry-leading social marketing platform trusted by over 5 million users to visually plan, schedule and measure performance across Instagram and Pinterest. PLANOLY is beautifully crafted to be simple, clean and easy to use. PLANOLY believes firmly in inclusivity and is thrilled to pave the way for brands, businesses and individuals of all backgrounds to carry out their digital marketing strategies seamlessly. PLANOLY is looking for a thoughtful, well-rounded Data Engineer to join a rapidly growing startup and work on building data management, data tools and data analytics services that help power business decisions. Our software platform that influencers, brands, agencies and marketing firms use daily gives us an incredibly rich and diverse dataset that we need to collect, transform, and analyze in order to improve effectiveness of our products as well as impact business decisions. You will have the opportunity to take a leading role in our engineering team, and help solve some challenging problems in the social media marketing space. Passion for social products and building great software is a must. Tools We Use Amazon Web Services Google Cloud Google BigQuery Google Data Studio dbt Python, SQL What You Will Do Data Modeling / Architecting via designing data models and implementing appropriate abstractions for immediate requirements. 
Understanding data lineage and dependencies, and develop and maintain existing ETL processes. Work with leadership, product, engineering, and marketing to productize answers to business questions. Communicate results using reproducible analysis methods and data visualizations. Monitor the quality of data and information, report on results, identify, and recommend system application changes required to improve the quality of data in all applications. Manipulate and analyze complex data from multiple sources and design, develop and generate ad hoc and operational reports in support of other teams and objectives. Design and build scalable micro services hosted in AWS using Python 3.7 and/or NodeJS TypeScript. Design, implement and manage data warehouse plans for our group of products. Support existing data services and processes running in Production. Collaborate closely and autonomously with a small team of engineers, designers and cross-functional users to understand data needs. Who You Are Bachelor’s degree in Computer Science or equivalent STEM field, or 3+ years of relevant work experience. Experience writing SQL queries and creating SQL based data models. Bonus: using dbt Experience communicating data analysis to business and peers, using data visualizations and reproducible analysis. 3+ years of professional experience building scalable software. Ability to work on green field projects with relatively minimal guidance. Ability to collaborate with other engineers, QA, and non technical people. Strong foundation in database systems (relational and non-relational). Experience with data warehousing solutions (Redshift, Big Query, etc). Proficiency with Python (2+ years). NodeJS experience is a plus. Strong understanding of serverless micro service architecture in the cloud (AWS, GCP). Using version control (e.g. Git). Bonus Skills and Experience Experience in Linux command line and writing shell scripts. 
Working knowledge of DevOps tools including Jenkins, Docker, etc. Experience with AWS technology: SES, SNS, SQS, EC2, Elasticache, KMS, S3, etc. NodeJS DBT Who We Are We are social media experts and first and foremost users of our tools to enhance our social media strategies. Planoly is built by influencers for influencers. We’re growing super fast and have been profitable since inception. We offer an open work environment where highly motivated engineers take full ownership of the products and help steer the firm. We are a huge advocate of work-life balance, which is seen in our open vacation and work-from-any-coffee-shops policies. We’ll provide you with lunch, snacks, drinks, regular team outings. Learn more about PLANOLY at https://www.planoly.com and https://www.planoly.com/blog U.S. Equal Employment Opportunity/Affirmative Action Information Planoly is proud to be an equal opportunity employer and will consider all qualified individuals seeking employment without regard to race, color, creed, religion, gender, gender identity, national origin, citizenship, age, sex, marital status, ancestry, physical or mental disability, veteran status, sexual orientation, or any other protected classification. Powered by JazzHR"," Entry level "," Full-time "," Information Technology "," Software Development " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-sag-aftra-health-plan-sag-producers-pension-plan-3514101180?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=TAILMJf7tYnA1I21NPCesw%3D%3D&position=15&pageNum=11&trk=public_jobs_jserp-result_search-card," Traject ",https://www.linkedin.com/company/bytraject?trk=public_jobs_topcard-org-name," Austin, TX "," 1 month ago "," Be among the first 25 applicants "," PLANOLY is the industry-leading social marketing platform trusted by over 5 million users to visually plan, schedule and measure performance across Instagram and Pinterest. 
PLANOLY is beautifully crafted to be simple, clean and easy to use. PLANOLY believes firmly in inclusivity and is thrilled to pave the way for brands, businesses and individuals of all backgrounds to carry out their digital marketing strategies seamlessly. PLANOLY is looking for a thoughtful, well-rounded Data Engineer to join a rapidly growing startup and work on building data management, data tools and data analytics services that help power business decisions. Our software platform that influencers, brands, agencies and marketing firms use daily gives us an incredibly rich and diverse dataset that we need to collect, transform, and analyze in order to improve effectiveness of our products as well as impact business decisions. You will have the opportunity to take a leading role in our engineering team, and help solve some challenging problems in the social media marketing space. Passion for social products and building great software is a must. Tools We Use Amazon Web Services Google Cloud Google BigQuery Google Data Studio dbt Python, SQL What You Will Do Data Modeling / Architecting via designing data models and implementing appropriate abstractions for immediate requirements. Understanding data lineage and dependencies, and develop and maintain existing ETL processes. Work with leadership, product, engineering, and marketing to productize answers to business questions. Communicate results using reproducible analysis methods and data visualizations. Monitor the quality of data and information, report on results, identify, and recommend system application changes required to improve the quality of data in all applications. Manipulate and analyze complex data from multiple sources and design, develop and generate ad hoc and operational reports in support of other teams and objectives. Design and build scalable micro services hosted in AWS using Python 3.7 and/or NodeJS TypeScript. Design, implement and manage data warehouse plans for our group of products. 
Support existing data services and processes running in Production. Collaborate closely and autonomously with a small team of engineers, designers and cross-functional users to understand data needs. Who You Are Bachelor’s degree in Computer Science or equivalent STEM field, or 3+ years of relevant work experience. Experience writing SQL queries and creating SQL based data models. Bonus: using dbt Experience communicating data analysis to business and peers, using data visualizations and reproducible analysis. 3+ years of professional experience building scalable software. Ability to work on green field projects with relatively minimal guidance. Ability to collaborate with other engineers, QA, and non technical people. Strong foundation in database systems (relational and non-relational). Experience with data warehousing solutions (Redshift, Big Query, etc). Proficiency with Python (2+ years). NodeJS experience is a plus. Strong understanding of serverless micro service architecture in the cloud (AWS, GCP). Using version control (e.g. Git). Bonus Skills and Experience Experience in Linux command line and writing shell scripts. Working knowledge of DevOps tools including Jenkins, Docker, etc. Experience with AWS technology: SES, SNS, SQS, EC2, Elasticache, KMS, S3, etc. NodeJS DBT Who We Are We are social media experts and first and foremost users of our tools to enhance our social media strategies. Planoly is built by influencers for influencers. We’re growing super fast and have been profitable since inception. We offer an open work environment where highly motivated engineers take full ownership of the products and help steer the firm. We are a huge advocate of work-life balance, which is seen in our open vacation and work-from-any-coffee-shops policies. We’ll provide you with lunch, snacks, drinks, regular team outings. Learn more about PLANOLY at https://www.planoly.com and https://www.planoly.com/blog U.S. 
Equal Employment Opportunity/Affirmative Action Information Planoly is proud to be an equal opportunity employer and will consider all qualified individuals seeking employment without regard to race, color, creed, religion, gender, gender identity, national origin, citizenship, age, sex, marital status, ancestry, physical or mental disability, veteran status, sexual orientation, or any other protected classification. Powered by JazzHR "," Entry level "," Full-time "," Information Technology "," Software Development " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-massachusetts-health-connector-3520241202?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=iu%2Fx%2FYrXQ1IZ9xCzipkDeg%3D%3D&position=16&pageNum=11&trk=public_jobs_jserp-result_search-card," Traject ",https://www.linkedin.com/company/bytraject?trk=public_jobs_topcard-org-name," Austin, TX "," 1 month ago "," Be among the first 25 applicants "," PLANOLY is the industry-leading social marketing platform trusted by over 5 million users to visually plan, schedule and measure performance across Instagram and Pinterest. PLANOLY is beautifully crafted to be simple, clean and easy to use. PLANOLY believes firmly in inclusivity and is thrilled to pave the way for brands, businesses and individuals of all backgrounds to carry out their digital marketing strategies seamlessly. PLANOLY is looking for a thoughtful, well-rounded Data Engineer to join a rapidly growing startup and work on building data management, data tools and data analytics services that help power business decisions. Our software platform that influencers, brands, agencies and marketing firms use daily gives us an incredibly rich and diverse dataset that we need to collect, transform, and analyze in order to improve effectiveness of our products as well as impact business decisions. 
You will have the opportunity to take a leading role in our engineering team, and help solve some challenging problems in the social media marketing space. Passion for social products and building great software is a must. Tools We Use Amazon Web Services Google Cloud Google BigQuery Google Data Studio dbt Python, SQL What You Will Do Data Modeling / Architecting via designing data models and implementing appropriate abstractions for immediate requirements. Understanding data lineage and dependencies, and develop and maintain existing ETL processes. Work with leadership, product, engineering, and marketing to productize answers to business questions. Communicate results using reproducible analysis methods and data visualizations. Monitor the quality of data and information, report on results, identify, and recommend system application changes required to improve the quality of data in all applications. Manipulate and analyze complex data from multiple sources and design, develop and generate ad hoc and operational reports in support of other teams and objectives. Design and build scalable micro services hosted in AWS using Python 3.7 and/or NodeJS TypeScript. Design, implement and manage data warehouse plans for our group of products. Support existing data services and processes running in Production. Collaborate closely and autonomously with a small team of engineers, designers and cross-functional users to understand data needs. Who You Are Bachelor’s degree in Computer Science or equivalent STEM field, or 3+ years of relevant work experience. Experience writing SQL queries and creating SQL based data models. Bonus: using dbt Experience communicating data analysis to business and peers, using data visualizations and reproducible analysis. 3+ years of professional experience building scalable software. Ability to work on green field projects with relatively minimal guidance. Ability to collaborate with other engineers, QA, and non technical people. 
Strong foundation in database systems (relational and non-relational). Experience with data warehousing solutions (Redshift, Big Query, etc). Proficiency with Python (2+ years). NodeJS experience is a plus. Strong understanding of serverless micro service architecture in the cloud (AWS, GCP). Using version control (e.g. Git). Bonus Skills and Experience Experience in Linux command line and writing shell scripts. Working knowledge of DevOps tools including Jenkins, Docker, etc. Experience with AWS technology: SES, SNS, SQS, EC2, Elasticache, KMS, S3, etc. NodeJS DBT Who We Are We are social media experts and first and foremost users of our tools to enhance our social media strategies. Planoly is built by influencers for influencers. We’re growing super fast and have been profitable since inception. We offer an open work environment where highly motivated engineers take full ownership of the products and help steer the firm. We are a huge advocate of work-life balance, which is seen in our open vacation and work-from-any-coffee-shops policies. We’ll provide you with lunch, snacks, drinks, regular team outings. Learn more about PLANOLY at https://www.planoly.com and https://www.planoly.com/blog U.S. Equal Employment Opportunity/Affirmative Action Information Planoly is proud to be an equal opportunity employer and will consider all qualified individuals seeking employment without regard to race, color, creed, religion, gender, gender identity, national origin, citizenship, age, sex, marital status, ancestry, physical or mental disability, veteran status, sexual orientation, or any other protected classification. 
Powered by JazzHR "," Entry level "," Full-time "," Information Technology "," Software Development " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-adt-3506624977?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=7wwyyGfkmqUymhEKZEI%2F4w%3D%3D&position=17&pageNum=11&trk=public_jobs_jserp-result_search-card," ADT ",https://www.linkedin.com/company/adt?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," 49 applicants ","Company Overview: ADT has been in the business of helping save lives since 1874. As the #1 smart home security provider in the U.S., we help protect and connect families, businesses and larger commercial customers every day. Our continuous innovation, advanced technology and strategic partnerships deliver products and services that help protect life and valuables, whether at home, your business or on the go. And as times change, so do we. Above all, our mission is clear: we help save lives for a living. Looking for a career where you can make a real impact? Join our team today and put purpose behind your paycheck. #WeAreADT Check out more about life at ADT here. What You’ll Do: As the Data Engineer, you’ll be responsible for developing and governing our data and information strategy to drive business decisions and growth. You will develop data procedures and policies and work closely with various departments to collect, prepare, organize, protect, and analyze data while ensuring that the company meets industry best practices. Designing and implementing data strategies and systems. Oversee the collection, storage, management, quality, and protection of data. Implementing data privacy policies and complying with data protection regulations. Determine where to cut costs and increase revenue based on data-derived insights. Effectively communicate the status, value, and importance of data collection to leadership and staff. Knowledge of relevant applications, big data solutions and tools. 
Thorough understanding of the business and data strategy. Improving and streamlining data systems within ADT and driving innovation. Recognize and act on opportunities. Display orientation to profitability. Understands business implications of decisions. Adapting strategy to changing conditions. Develops strategies to achieve organizational goals. Identifies external threats and opportunities. Understands organization's strengths & weaknesses. Analyzes and demonstrates knowledge of market and competition. What You’ll Need: Bachelor’s degree in Information Technology or related field. Master’s degree preferred. 5+ years of experience in a data management role. Ability to apply advanced concepts such as exponents, logarithms, quadratic equations and permutations. Apply operations to such tasks as frequency distribution, test reliability/validity, variance analysis, correlation technique, sampling theory and factor analysis. Ability to define problems, collect data, establish facts, and draw valid conclusions. Interpret an extensive variety of technical instructions in mathematical or diagram form and deal with several abstract and concrete variables. Compensation & Benefits: The salary range for this role is $73,066 - $146,131 and is based on experience and qualifications. Certain roles are eligible for annual bonus and may include equity. These awards are allocated based on company and individual performance. We offer employees access to healthcare benefits, a 401(k) plan and company match, short-term and long-term disability coverage, life insurance, wellbeing benefits and paid time off among others. Employees accrue up to 120 hours in their first year. Your accrual rate increases after your first year. We also offer 6 paid holidays. ADT is an Equal Employment Opportunity (EEO) Employer. We celebrate diversity and are committed to building an inclusive team that represents a variety of backgrounds, perspectives, and skills. 
ADT strives to ensure every employee and applicant feels valued. Visit us at jobs.adt.com/diversity to learn more."," Associate "," Full-time "," Information Technology, General Business, and Other "," Consumer Services " Data Engineer,United States,Junior Data Engineer,https://www.linkedin.com/jobs/view/junior-data-engineer-at-ekodus-inc-3459613927?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=q0DvMCnKXD3rFezbkW3NHg%3D%3D&position=18&pageNum=11&trk=public_jobs_jserp-result_search-card," Ekodus INC. ",https://www.linkedin.com/company/ekodus-inc?trk=public_jobs_topcard-org-name," Minneapolis, MN "," 1 month ago "," 33 applicants ","Job Title :- Junior Data Engineer (Hybrid) Location :- Minneapolis, Minnesota Duration :- 6+ Months Job Description Number of Years' Experience Preferred: 3+ years Top 3-5 Requirements 1-3 years of relevant experience required Knowledge of advanced statistical concepts and techniques; skilled in linear algebra. Experience in data visualization tools such as Power BI, QuickSight, etc. Experience with AI/ML platforms like Dataiku, SageMaker, others. Experience with statistical programming (SAS, R, Python, SQL etc.) & data visualization software in a data-rich environment. Please share resume at career@ekodusinc.com."," Mid-Senior level "," Contract "," Information Technology "," IT Services and IT Consulting, Information Services, and Software Development " Data Engineer,United States,Data Engineer - Data Science & Analytics,https://www.linkedin.com/jobs/view/data-engineer-data-science-analytics-at-costco-wholesale-3513511695?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=mBCoEWo%2F4AXgTWlsd3uyrA%3D%3D&position=19&pageNum=11&trk=public_jobs_jserp-result_search-card," Costco Wholesale ",https://www.linkedin.com/company/costco-wholesale?trk=public_jobs_topcard-org-name," Seattle, WA "," 1 week ago "," 112 applicants ","This is an environment unlike anything in the high-tech world and the secret of Costco’s success is its culture. 
The value Costco puts on its employees is well documented in articles from a variety of publishers including Bloomberg and Forbes. Our employees and our members come FIRST. Costco is well known for its generosity and community service and has won many awards for its philanthropy. The company joins with its employees to take an active role in volunteering by sponsoring many opportunities to help others. In 2021, Costco contributed over $58 million to organizations such as United Way and Children's Miracle Network Hospitals. Costco IT is responsible for the technical future of Costco Wholesale, the third largest retailer in the world with wholesale operations in fourteen countries. Despite our size and explosive international expansion, we continue to provide a family, employee centric atmosphere in which our employees thrive and succeed. As proof, Costco ranks seventh in Forbes “World’s Best Employers”. The Data Engineer - Data Analytics is responsible for the end-to-end data pipelines to power analytics and data services. This role is focused on data engineering to build and deliver automated data pipelines from a plethora of internal and external data sources. The Data Engineer will partner with product owners, engineering and data platform teams to design, build, test, and automate data pipelines that are relied upon across the company as the single source of truth. If you want to be a part of one of the worldwide BEST companies “to work for”, simply apply and let your career be reimagined. ROLE Develops and operationalizes data pipelines to make data available for consumption (BI, Advanced analytics, Services). Works with data architects and data/BI engineers to design data pipelines and recommends ongoing optimization of data storage, data ingestion, data quality, and orchestration. Designs, develops, and implements ETL/ELT processes using IICS (Informatica Cloud). 
Uses Azure services such as Azure SQL DW (Synapse), ADLS, Azure Event Hub, Azure Data Factory to improve and speed up delivery of our data products and services. Implements big data and NoSQL solutions by developing scalable data processing platforms to drive high-value insights to the organization. Identifies, designs, and implements internal process improvements: automating manual processes, optimizing data delivery. Identifies ways to improve data reliability, efficiency, and quality of data management. Communicates technical concepts to non-technical audiences both written and verbal. Performs peer reviews for other data engineers’ work. Required 5+ years’ experience engineering and operationalizing data pipelines with large and complex datasets. 5+ years of hands-on experience with Informatica PowerCenter. 2+ years of hands-on experience with Informatica IICS. 3+ years’ experience working with Cloud technologies; such as ADLS, Azure Databricks, Spark, Azure Synapse, Cosmos DB, and other big data technologies. 5+ years’ experience with Data Modeling, ETL, and Data Warehousing. 2+ years’ hands-on experience implementing data integration techniques such as event / message based integration (Kafka, Azure Event Hub), ETL. 3+ years’ hands-on experience with Git / Azure DevOps. Extensive experience working with various data sources; SQL, Oracle database, flat files (csv, delimited), Web API, XML. Advanced SQL skills; Understanding of relational databases, business data, and the ability to write complex SQL queries against a variety of data sources. Strong understanding of database storage concepts; Data Lake, Relational Databases, NoSQL, Graph, Data Warehousing. Able to work in a fast-paced agile development environment. Recommended Microsoft Azure/similar certifications. Experience delivering data solutions through agile software development methodologies. Exposure to the retail industry. Excellent verbal and written communication skills. 
Experience working with SAP integration tools including BODS. Experience with UC4 Job Scheduler. BA/BS in Computer Science, Engineering, or equivalent software/services experience. Required Documents Cover Letter Resume California applicants, please click here to review the Costco Applicant Privacy Notice. Apart from any religious or disability considerations, open availability is needed to meet the needs of the business. If hired, you will be required to provide proof of authorization to work in the United States. Applicants and employees for this position will not be sponsored for work authorization, including, but not limited to, H-1B visas. Pay Ranges Level 2 - $100,000 - $135,000, Level 3 - $125,000 - $165,000, Level 4 - $155,000 - $195,000, Bonus and Restricted Stock Unit (RSU) eligible We offer a comprehensive package of benefits including paid time off, health benefits — medical/dental/vision/hearing aid/pharmacy/behavioral health/employee assistance, health care reimbursement account, dependent care assistance plan, commuter benefits, short-term disability and long-term disability insurance, AD&D insurance, life insurance, 401(k), stock purchase plan, and the SmartDollar financial wellness program to eligible employees. Costco is committed to a diverse and inclusive workplace. Costco is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or any other legally protected status. 
If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request to IT-Recruiting@costco.com. If hired, you will be required to provide proof of authorization to work in the United States."," Entry level "," Full-time "," Information Technology "," Retail " Data Engineer,United States,Python Data Engineer,https://www.linkedin.com/jobs/view/python-data-engineer-at-hexaware-technologies-3455245317?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=4s9%2B9TCI6y73Dct7Au5c3g%3D%3D&position=20&pageNum=11&trk=public_jobs_jserp-result_search-card," Hexaware Technologies ",https://in.linkedin.com/company/hexaware-technologies?trk=public_jobs_topcard-org-name," Santa Clara, CA "," 2 weeks ago "," Over 200 applicants ","Hexaware is hiring Python Data Engineer @ Santa Clara, CA with minimum 8+ years' experience. Interested candidates may apply here. Python Data Engineer Job Location: Santa Clara, CA Job Type: Full-time Interview: 1st Level – Video Interview and 2nd Level – In-Person Work Type: Hybrid Model (3 days remote and 2 days onsite) Only candidates local to Santa Clara, CA who can work 2 days onsite every week. 
Required skills: Python (Core Python, Pandas, NumPy, Matplotlib) SQL Knowledge about data extraction and processing from different sources Good to have: API frameworks (Flask, FastAPI, Tornado)"," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-harnham-3486255924?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=52alUtnnRe1NR3%2Fkyjf4gQ%3D%3D&position=21&pageNum=11&trk=public_jobs_jserp-result_search-card," Harnham ",https://uk.linkedin.com/company/harnham?trk=public_jobs_topcard-org-name," Tempe, AZ "," 4 weeks ago "," Over 200 applicants ","Data Engineer Full-Time / Permanent 100% REMOTE ROLE $125,000-$175,000 Are you a creative individual who loves to work in collaborative, fast-paced environments? Does problem-solving in tech pique your interest? Are you interested in the marketing and advertising side of engineering? This is a great opportunity for an experienced Data Engineer to fuse interest in market tech with understanding and defining data for strategy. THE COMPANY The Data Engineer will be joining one of the most collaborative, innovative ad agencies in their industry. Their commitment to delivering the most creative campaigns to well-known clients is what makes this agency so special. This opportunity is great for an individual who is looking for personal autonomy in problem-solving & collaborating with teammates - someone who is looking to not only deliver but personally create solutions that matter. DAY-TO-DAY RESPONSIBILITIES: As a Data Engineer, you will be responsible for the overall data engineering for various projects. 
You will be required to: Work in Google Analytics and/or Adobe Analytics Work with large marketing data sets Build ETL pipelines on the back end with SQL Help build ad-hoc, front-end SQL queries Work and communicate with clients and other teams YOUR SKILLS AND EXPERIENCE: 3-4+ years of: Marketing/Advertising Agency experience GCP or AWS experience Strong SQL experience Python experience DBT experience Data warehouse experience THE BENEFITS: $125,000 - $175,000 plus full benefits & great company culture! HOW TO APPLY: Please register your interest by sending your CV to Davis Caspers via the Apply link on this page."," Mid-Senior level "," Full-time "," Information Technology, Advertising, and Engineering "," Advertising Services and Staffing and Recruiting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-it-minds-llc-3528108307?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=9Z7cT2r3zRoeitHV8LFggQ%3D%3D&position=24&pageNum=11&trk=public_jobs_jserp-result_search-card," IT Minds LLC ",https://www.linkedin.com/company/itminds-llc?trk=public_jobs_topcard-org-name," Bellevue, WA "," 3 weeks ago "," Be among the first 25 applicants ","Data Engineer @ Bellevue, WA Qualifications And Skills 3-5 years of experience in large-scale software development (preferably Agile) with emphasis on data modeling and database development 3-5 years of experience with data modeling tools (Erwin, ER/Studio, PowerDesigner) 3-5 years of experience with relational DBMSs and SQL coding (SQL Server, Oracle, Teradata, Snowflake) Ability to communicate effectively (both orally and in writing) with business users, project team leaders and application developers Experience participating in Agile/Scrum projects in a highly collaborative, multi-discipline team environment Proficiency with ETL tools and techniques (SSIS, Attunity, Informatica) 2+ years of experience with AWS and related services (EC2, S3, DynamoDB, ElasticSearch, SQS, SNS, Lambda, Airflow, Snowflake, etc.) 
Experience with functional/object-oriented scripting (Python, Java, C++, Scala) Experience in R programming Thanks & Regards Krishna | IT Minds LLC | Phone: (949) 534-3939 Ext 406 | Direct: 949-200-7533 | Email: krishna@itminds.net | 9070 Irvine Centre DR, Suite 220 | Irvine, CA 92618 | 44075 Pipeline Plaza, Suite 305 | Ashburn, VA 20147 | 102, Manjeera Trinity Corporate, Kukatpally, Hyderabad 500072 | www.itminds.net"," Entry level "," Full-time "," Other "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer with SQL,https://www.linkedin.com/jobs/view/data-engineer-with-sql-at-extend-information-systems-inc-3528100339?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=c%2Bax43yP63UEkAysIwG15A%3D%3D&position=25&pageNum=11&trk=public_jobs_jserp-result_search-card," Extend Information Systems Inc. ",https://www.linkedin.com/company/extendinfosys?trk=public_jobs_topcard-org-name," Cary, NC "," 3 weeks ago "," Be among the first 25 applicants ","Job Title: Data Engineer with SQL skills Location: Cary, NC (Should be comfortable relocating within 1 month of selection) Duration: Full-time Job Description Proficiency in understanding data and writing queries - in-depth understanding of joins, complex queries, subqueries, data analysis and data quality testing. Understanding of query optimization techniques Data profiling on popular databases (Oracle, MySQL, Hive, Impala, etc.) Ability to capture requirements and communicate with users Strong interpersonal communication Client interaction for problem solving and staff support related to BI tools Ability to capture big data requirements from stakeholders and propose an approach to deliver a solution Ability to debug Tableau issues in production systems and propose an approach to fix those Ability to debug big data issues in production systems and propose an approach to fix those Ability to model data, gather required data Understand optimization of data sources e.g. 
consolidation of multiple data sources into one scalable solution Understand Tableau security related to limiting access to dashboards for users or groups of users Best practices for dashboard performance (not necessarily data related) Experience with extract refresh schedules e.g. after ETL finishes, the extract should refresh automatically; ask how many workloads a project consists of Basic data visualization (visualization expertise alone is not expected) Understanding of advanced Tableau concepts e.g. access-level control Site administration experience Server administration experience Configuration of Tableau Server to support best performance Thanks & Regards Rajiv Ranjan Rai Extend Information System Inc Email: rajiv@extendinfosys.com Address: 44355 Premier Plaza UNIT 220, Ashburn, VA, USA - 20147 Web: www.extendinfosys.com"," Entry level "," Full-time "," Other "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-lowe-s-companies-inc-3509360317?refId=uz94oW0FbASgTIEHEb9nNQ%3D%3D&trackingId=kKytprCJzuFvbazSjJc8%2FQ%3D%3D&position=19&pageNum=6&trk=public_jobs_jserp-result_search-card," Lowe's Companies, Inc. ",https://www.linkedin.com/company/lowe%27s-home-improvement?trk=public_jobs_topcard-org-name," Texas, United States "," 2 days ago "," 140 applicants ","Job Summary: The primary purpose of this role is to translate business requirements and functional specifications into logical program designs and to deliver modules, stable application systems, and Data or Platform solutions. This includes developing, configuring, or modifying integrated business and/or enterprise infrastructure or application solutions within various computing environments. This role facilitates the implementation and maintenance of business and enterprise Data or Platform solutions to ensure the successful deployment of released applications. 
Key Responsibilities: Translates business requirements and specifications into logical program designs, modules, stable application systems, and data solutions with occasional guidance from senior colleagues; partners with Product Team to understand business needs and functional specifications Develops, configures, or modifies integrated business and/or enterprise application solutions within various computing environments by designing and coding component-based applications using various programming languages Conducts the implementation and maintenance of complex business and enterprise data solutions to ensure successful deployment of released applications Supports systems integration testing (SIT) and user acceptance testing (UAT), provides insight into defining test plans, and ensures quality software deployment Participates in the end-to-end product lifecycle by applying and sharing an in-depth understanding of the company and industry methodologies, policies, standards, and controls Understands Computer Science and/or Computer Engineering fundamentals; knows software architecture and readily applies this to Data or Platform solutions Automates and simplifies team development, test, and operations processes; develops conceptual, logical, and physical architectures consisting of one or more viewpoints (business, application, data, and infrastructure) required for business solution delivery Solves difficult technical problems; solutions are testable, maintainable, and efficient Supports the build, maintenance, and enhancements of data lake development; supports simple to medium complexity API, unstructured data parsing, and streaming data ingestion Excels in one or more domains; understands pipelines and business metrics Builds, tests, and enhances data curation pipelines integrating data from a wide variety of sources like DBMS, File systems, and APIs for various KPIs and metrics development with high data quality and integrity Supports the development of 
feature/inputs for data models in an Agile manner Works with Data Science team to understand mathematical models and algorithms; participates in continuous improvement activities including training opportunities; continuously strives to learn analytic best practices and apply them to daily activities Handles data manipulation (extract, load, transform), data visualization, and administration of data and systems securely and in accordance with enterprise data governance standards Maintains the health and monitoring of assigned analytic capabilities for a specific data engineering solution; ensures high availability of the platform; monitors workload demands; works with Technology Infrastructure Engineering teams to maintain the data platform; serves as an SME for one or more applications Supports the build, maintenance, and enhancements of BI solutions; creates standard and ad hoc reports; uses basic report formatting like sorting, totaling, and exporting Minimum Qualifications: Bachelor's degree in Engineering, Computer Science, CIS, or related field (or equivalent work experience in a related field) 2 years of experience in Data, BI or Platform Engineering, Data Warehousing/ETL, or Software Engineering 1 year of experience working on project(s) involving the implementation of solutions applying development life cycles (SDLC) Preferred Qualifications: Master's degree in Computer Science, CIS, or related field 2 years of IT experience developing and implementing business systems within an organization 4 years of experience working with defect or incident tracking software 4 years of experience with technical documentation in a software development environment 2 years of experience working with an IT Infrastructure Library (ITIL) framework 2 years of experience leading teams, with or without direct reports Experience with application and integration middleware Experience with database technologies 2 years of experience in Hadoop or any Cloud Bigdata components 
Expertise in Java/Scala/Python, SQL, Scripting, Teradata, Hadoop (Sqoop, Hive, Pig, MapReduce), Spark (Spark Streaming, MLlib), Kafka or equivalent Cloud Bigdata components Expertise in MicroStrategy/Power BI/SQL, Scripting, Teradata or equivalent RDBMS, Hadoop (OLAP on Hadoop), Dashboard development, Mobile development 2 years of experience in Hadoop, NoSQL, RDBMS or any Cloud Bigdata components, Teradata, MicroStrategy Expertise in Python, SQL, Scripting, Teradata, Hadoop utilities like Sqoop, Hive, Pig, MapReduce, Spark, Ambari, Ranger, Kafka, or equivalent Cloud Bigdata components About Lowe’s: Lowe’s Companies, Inc. (NYSE: LOW) is a FORTUNE® 50 home improvement company serving approximately 19 million customer transactions a week in the United States and Canada. With fiscal year 2021 sales of over $96 billion, Lowe’s and its related businesses operate or service nearly 2,200 home improvement and hardware stores and employ over 300,000 associates. Based in Mooresville, N.C., Lowe’s supports the communities it serves through programs focused on creating safe, affordable housing and helping to develop the next generation of skilled trade experts. For more information, visit Lowes.com. 
EEO Statement Lowe’s is an equal opportunity employer and administers all personnel practices without regard to race, color, religious creed, sex, gender, age, ancestry, national origin, mental or physical disability or medical condition, sexual orientation, gender identity or expression, marital status, military or veteran status, genetic information, or any other category protected under federal, state, or local law."," Entry level "," Full-time "," Information Technology and Engineering "," Retail " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-planet-technology-3506286769?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=IZnu75mqN1mge9C806PzOA%3D%3D&position=22&pageNum=8&trk=public_jobs_jserp-result_search-card," Planet Technology ",https://www.linkedin.com/company/the-planet-technology?trk=public_jobs_topcard-org-name," Orlando, FL "," 1 week ago "," 78 applicants ","Responsibilities -Key contributor to the identification, evaluation, and implementation of data infrastructure. -Analysis of large datasets -Design and implementation of relational databases and structures. -Build data pipelines with ADF for SQL Server BI. Requirements -5+ years of SQL/T-SQL. -Experience with ADF -Experience with SQL Server -Experience with SQL and NoSQL databases. 
-BS in Computer Science or a related field."," Mid-Senior level "," Full-time "," Information Technology "," Entertainment Providers, Travel Arrangements, and Hospitality " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-maven-3500561395?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=q%2Fojw643c6HjaqH2VW6bkw%3D%3D&position=23&pageNum=8&trk=public_jobs_jserp-result_search-card," maven ",https://www.linkedin.com/company/maven-alpha?trk=public_jobs_topcard-org-name," New York City Metropolitan Area "," 2 weeks ago "," 78 applicants ","Multi-strategy hedge fund is looking for an experienced Data Developer / Engineer to join its quantitative trading team. Your core focus will be to build sophisticated data pipelines and analytics used to perform advanced quantitative research to enhance existing and create new and profitable systematic trading strategies. Skills & Experience: > Strong academic background in a STEM field. > 5-15 years of experience in researching and building data pipelines and analytics. > Financial markets experience is welcome but not required. > Expert programming skills in C++ and/or Python."," Mid-Senior level "," Full-time "," Engineering, Information Technology, and Research "," Software Development, Technology, Information and Internet, and Financial Services " Data Engineer,United States,Data Engineer (Hybrid),https://www.linkedin.com/jobs/view/data-engineer-hybrid-at-captivation-3499487550?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=PLp1yY4Q8gwX%2Fax%2BuuFMvA%3D%3D&position=1&pageNum=9&trk=public_jobs_jserp-result_search-card," Captivation ",https://www.linkedin.com/company/captivation-software?trk=public_jobs_topcard-org-name," Columbia County, GA "," 2 weeks ago "," Be among the first 25 applicants ","Annual Salary: $175,000 - $260,000 (depends on experience level) Build Something to Be Proud Of. 
Captivation Software has built a reputation on providing customers exactly what is needed in a timely manner. Our team of engineers takes pride in what they develop and constantly innovates to provide the best solution. Come work with us and help provide the tools to solve DoD’s Big Data problems! Captivation Software is looking for a talented Data Engineer to support the acquisition of mission-critical and mission support data sets. The preferred candidate will have a background in supporting cyber and/or network-related missions within the military spaces, as either a developer, analyst or engineer. Work is performed in a hybrid role with some on-site work in Washington, DC. Essential Job Responsibilities: The ideal candidate will have worked with big data systems, complex structured and unstructured data sets, and have supported government data acquisition, analysis, and/or sharing efforts in the past. To excel in the position, the candidate shall have a strong attention to detail, be able to understand technical complexities, and have the willingness to learn and adapt to the situation. The candidate will work both independently and as part of a large team to accomplish client objectives. Requirements Minimum Qualifications: Security Clearance - Must have a current Secret Security Clearance or higher, and therefore all candidates must be U.S. citizens. 5 years of experience as a developer, analyst, or engineer with a Bachelor's in a related field; OR 3 years of relevant experience with a Master's in a related field; OR High School Diploma or equivalent and 9 years of relevant experience. Experience with programming languages such as Python and Java. Proficiency with acquisition and understanding of network data and the associated metadata. Fluency with data extraction, translation, and loading including data prep and labeling to enable data analytics. Experience with Kibana and Elasticsearch. Familiarity with various log formats such as JSON, XML, and others. 
Experience with data flow, management, and storage solutions (e.g., Kafka, NiFi, and AWS S3 and SQS solutions). Ability to decompose technical problems and troubleshoot system and dataflow issues. Must be able to work a hybrid schedule with the local team members. Benefits Annual Salary: $175,000 - $260,000 (depends on experience level) Up to 20% 401(k) contribution (no matching required) Above-market hourly rates $3,000 HSA Contribution 5 Weeks Paid Time Off Company Paid Employee Medical / Dental / Vision Insurance / Life Insurance / Short-Term & Long-Term Disability / AD&D"," Not Applicable "," Full-time "," Engineering "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-pepsico-3485241841?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=8mBlNo9BIU1ZAW%2B3tj6QCA%3D%3D&position=2&pageNum=9&trk=public_jobs_jserp-result_search-card," PepsiCo ",https://www.linkedin.com/company/pepsico?trk=public_jobs_topcard-org-name," Plano, TX "," 1 week ago "," 68 applicants ","Overview PepsiCo operates in an environment undergoing immense and rapid change. Big data and digital technologies are driving business transformation that is unlocking new capabilities and business innovations in areas like eCommerce, mobile experiences and IoT. The key to winning in these areas is being able to leverage enterprise data foundations built on PepsiCo’s global business scale to enable business insights, advanced analytics and new product development. PepsiCo’s Data Management and Operations team is tasked with the responsibility of developing quality data collection processes, maintaining the integrity of our data foundations and enabling business leaders and data scientists across the company to have rapid access to the data they need for decision-making and innovation. 
What PepsiCo Data Management and Operations does: Maintain a predictable, transparent, global operating rhythm that ensures always-on access to high-quality data for stakeholders across the company Responsible for day-to-day data collection, transportation, maintenance/curation and access to the PepsiCo corporate data asset Work cross-functionally across the enterprise to centralize data and standardize it for use by business, data science or other stakeholders Increase awareness about available data and democratize access to it across the company Job Description: As a member of the data engineering team, you will be the key technical expert developing and overseeing PepsiCo's data product build & operations, driving a strong vision for how data engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help lead the development of very large and complex data applications into public cloud environments directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners and business users. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems. Responsibilities Active contributor to code development in projects and services. Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products. 
Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance. Responsible for implementing best practices around systems integration, security, performance and data management. Empower the business by creating value through the increased adoption of data, data science and the business intelligence landscape. Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions. Develop and optimize procedures to “productionalize” data science models. Define and manage SLAs for data products and processes running in production. Support large-scale experimentation done by data scientists. Prototype new approaches and build solutions at scale. Research state-of-the-art methodologies. Create documentation for learnings and knowledge transfer. Create and audit reusable packages or libraries. COVID-19 vaccination is a condition of employment for this role. Please note that all such company vaccine requirements provide the opportunity to request an approved accommodation or exemption under applicable law. Qualifications 4+ years of overall technology experience that includes at least 3+ years of hands-on software development, data engineering, and systems architecture. 3+ years of experience with Data Lake Infrastructure, Data Warehousing, and Data Analytics tools. 3+ years of experience in SQL optimization and performance tuning, and development experience in programming languages like Python, PySpark, Scala, etc. 2+ years of cloud data engineering experience; fluent with Azure cloud services. Azure Certification is a plus. Experience with integration of multi-cloud services with on-premises technologies. Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines. Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations. 
Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets. Experience with at least one MPP database technology such as Redshift, Synapse or Snowflake. Experience with running and scaling applications on cloud infrastructure and containerized services like Kubernetes. Experience with version control systems like GitHub and deployment & CI tools. Experience with Azure Data Factory, Azure Databricks and Azure Machine Learning tools is a plus. Experience with Statistical/ML techniques is a plus. Experience with building solutions in the retail or in the supply chain space is a plus. Understanding of metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including DevOps and DataOps concepts. Familiarity with business intelligence tools (such as Power BI). Education BA/BS in Computer Science, Math, Physics, or other technical fields. Skills, Abilities, Knowledge Excellent communication skills, both verbal and written, along with the ability to influence and demonstrate confidence in communications with senior-level management. Proven track record of leading and mentoring data teams. Strong change manager. Comfortable with change, especially that which arises through company growth. Able to lead a team effectively through times of change. Ability to understand and translate business requirements into data and technical requirements. High degree of organization and ability to manage multiple, competing projects and priorities simultaneously. Positive and flexible attitude to enable adjusting to different needs in an ever-changing environment. Strong leadership, organizational and interpersonal skills; comfortable managing trade-offs. Foster a team culture of accountability, communication, and self-management. Proactively drives impact and engagement while bringing others along. 
Consistently attain/exceed individual and team goals. Ability to lead others without direct authority in a matrixed environment. Competencies: Highly influential, with the ability to educate challenging stakeholders on the role of data and its purpose in the business. Understands both the engineering and business side of the Data Products released. Places the user at the center of decision making. Teams up and collaborates for speed, agility, and innovation. Experienced with and embraces agile methodologies. Strong negotiation and decision-making skills. Experience managing and working with globally distributed teams. EEO Statement: All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. PepsiCo is an Equal Opportunity Employer: Female / Minority / Disability / Protected Veteran / Sexual Orientation / Gender Identity. If you'd like more information about your EEO rights as an applicant under the law, please download the available EEO is the Law & EEO is the Law Supplement documents. View PepsiCo EEO Policy. Please view our Pay Transparency Statement"," Mid-Senior level "," Full-time "," Information Technology "," Food and Beverage Services and Manufacturing " Data Engineer,United States,Sr. Data Engineer,https://www.linkedin.com/jobs/view/sr-data-engineer-at-experfy-3530755289?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=qT5%2FrXXFsoXwjS407aUSsA%3D%3D&position=5&pageNum=9&trk=public_jobs_jserp-result_search-card," Experfy ",https://www.linkedin.com/company/experfy?trk=public_jobs_topcard-org-name," Austin, TX "," 3 weeks ago "," Be among the first 25 applicants "," A Sr. Data Engineer is proficient in the development of all aspects of data processing, including data warehouse architecture/modeling and ETL processing. 
The position focuses on research, development, and delivery of analytical solutions using various tools including Confluent Kafka, Kinesis, Glue, Lambda, Snowflake and SQL Server. A Sr. Data Engineer must be able to work autonomously with little guidance or instruction to deliver business value. Position Responsibilities: Partner with business stakeholders to gather requirements and translate them into technical specifications and process documentation for IT counterparts (on-prem and offshore). Highly proficient in the architecture and development of an event-driven data warehouse; streaming, batch, data modeling, and storage. Advanced database knowledge; creating/optimizing SQL queries, stored procedures, functions, partitioning data, indexing, and reading execution plans. Skilled experience in writing and troubleshooting Python/PySpark scripts to generate extracts and cleanse, conform and deliver data for consumption. Expert-level understanding and implementation of ETL architecture; data profiling, process flow, metric logging and error handling. Support continuous improvement by investigating and presenting alternatives to processes and technologies to an architectural review board. Develop and ensure adherence to published system architectural decisions and development standards. Multi-task across several ongoing projects and daily duties of varying priorities as required. Interact with global technical teams to communicate business requirements and collaboratively build data solutions. Requirements: 8+ years of development experience. Expert level in data warehouse design/architecture, dimensional data modeling and ETL process development. Advanced-level development in SQL/NoSQL scripting and complex stored procedures (Snowflake, SQL Server, DynamoDB, Neo4j a plus). Extremely proficient in Python, PySpark, and Java. AWS Expertise – Kinesis, Glue (Spark), EMR, S3, Lambda, and Athena. Streaming Services – Confluent Kafka and Kinesis (or equivalent). Hands-on experience 
in designing and developing applications using Java Spring Framework (Spring Boot, Spring Cloud, Spring Data, etc.). Apply for this job "," Not Applicable "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Sr. Data Engineer,https://www.linkedin.com/jobs/view/senior-data-engineer-at-razor-3527036512?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=ikOit4YL6zfmTcuO1wKr7A%3D%3D&position=6&pageNum=9&trk=public_jobs_jserp-result_search-card," Experfy ",https://www.linkedin.com/company/experfy?trk=public_jobs_topcard-org-name," Austin, TX "," 3 weeks ago "," Be among the first 25 applicants "," A Sr. Data Engineer is proficient in the development of all aspects of data processing, including data warehouse architecture/modeling and ETL processing. The position focuses on research, development, and delivery of analytical solutions using various tools including Confluent Kafka, Kinesis, Glue, Lambda, Snowflake and SQL Server. A Sr. Data Engineer must be able to work autonomously with little guidance or instruction to deliver business value. Position Responsibilities: Partner with business stakeholders to gather requirements and translate them into technical specifications and process documentation for IT counterparts (on-prem and offshore). Highly proficient in the architecture and development of an event-driven data warehouse; streaming, batch, data modeling, and storage. Advanced database knowledge; creating/optimizing SQL queries, stored procedures, functions, partitioning data, indexing, and reading execution plans. Skilled experience in writing and troubleshooting Python/PySpark scripts to generate extracts and cleanse, conform and deliver data for consumption. Expert-level understanding and implementation of ETL architecture; data profiling, process flow, metric logging and error handling. Support continuous improvement by investigating and presenting alternatives to processes and technologies to an architectural 
review board. Develop and ensure adherence to published system architectural decisions and development standards. Multi-task across several ongoing projects and daily duties of varying priorities as required. Interact with global technical teams to communicate business requirements and collaboratively build data solutions. Requirements: 8+ years of development experience. Expert level in data warehouse design/architecture, dimensional data modeling and ETL process development. Advanced-level development in SQL/NoSQL scripting and complex stored procedures (Snowflake, SQL Server, DynamoDB, Neo4j a plus). Extremely proficient in Python, PySpark, and Java. AWS Expertise – Kinesis, Glue (Spark), EMR, S3, Lambda, and Athena. Streaming Services – Confluent Kafka and Kinesis (or equivalent). Hands-on experience in designing and developing applications using Java Spring Framework (Spring Boot, Spring Cloud, Spring Data, etc.). Apply for this job "," Not Applicable "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-concurrency-inc-3485960102?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=y4%2F5DIIdNfqC3MObQ3JXCw%3D%3D&position=12&pageNum=9&trk=public_jobs_jserp-result_search-card," Concurrency, Inc. ",https://www.linkedin.com/company/concurrency?trk=public_jobs_topcard-org-name," United States "," 5 days ago "," Over 200 applicants ","Who We Are We are change agents. We are inspired technologists. We are unlike any other technology consulting firm. Our team fearlessly challenges the status quo, relentlessly pursues what’s next and pushes the limits of what’s possible. A Microsoft Gold Partner and multiple-time Partner of the Year award recipient, Concurrency is renowned for its ability to turn unmatched technology expertise into client outcomes. Have we inspired the technologist in you? Come be a change agent at Concurrency. 
Who We’re Looking For We’re excited to add a Data Engineer to our Data & AI team. In this role, you’ll work with a team of customer-focused professionals who are committed to defining technical strategy, architecting, designing, and delivering end-to-end digital transformation. You'll demonstrate strong technical competence and business acumen through engaging in senior-level technology decision-making discussions related to agility, business value, data warehousing, and cloud-oriented data solutions. You’ll empower other consultants by sharing subject matter expertise in large enterprise implementations, as well as overseeing the delivery of large, complex, and strategic projects for enterprise customers. Position Responsibilities: Data Engineers for various and unanticipated worksites throughout the U.S. (HQ: Brookfield, WI). Lead requirements and design sessions with customers and internal teams. Author functional requirements and technical design documentation. Build, automate, and modify ADF pipelines. Create or modify ELT/ETL procedures and scripts in T-SQL. Create or modify Python, Scala, and SQL programs. Develop Power BI Tabular Models, Reports and Dashboards. Work with the solution team to help set standard architectures, processes, and best practices. Technical Environment: Data Analysis, Data Migration, Data Mining, Machine Learning, Data Modeling, ETL, Power BI, MS Azure ML, Azure SQL Database, SQL Server, R Studio, Python (NumPy, Pandas). POSITION REQUIREMENTS: Bachelor’s degree in Computer Science, Management Information Systems, or a related field plus 3 years of experience in the job offered or in data analytics required. Required skills: Data Analysis, Data Migration, Data Mining, Machine Learning, Data Modeling, ETL, Power BI, MS Azure ML, Azure SQL Database, SQL Server, R Studio, Python (NumPy, Pandas). 100% telecommuting permitted. 
Concurrency takes pride in bringing a different mindset to consulting—that takes a diversity of thought, collaboration and resilience. We are an innovation-obsessed yet fun and progressive place to work. We offer flexible work schedules, competitive compensation, and great benefits for our people and their families. In addition, all employees are eligible for several rewards and recognition programs, excellent training programs, and bonus opportunities to encourage our people to be the best versions of themselves in and out of work."," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-eliassen-group-3511423862?refId=aTk8xGxdA9BBUHvODl8GpA%3D%3D&trackingId=eRv5zNwPPvF6kxn4TPqvDQ%3D%3D&position=10&pageNum=10&trk=public_jobs_jserp-result_search-card," Eliassen Group ",https://www.linkedin.com/company/eliassen-group?trk=public_jobs_topcard-org-name," Greater St. Louis "," 2 days ago "," 96 applicants ","We are looking for a Data Engineer for a cutting-edge cloud-based big data analytics platform. You will report to a Manager and be part of an agile cloud engineering team responsible for developing complex cloud-native data processing capabilities as part of an AWS-based data analytics platform. You will also work with data scientists, as users of the platform, to analyze and visualize data and develop machine learning/AI models. Responsibilities: Develop, enhance, and troubleshoot complex data engineering and data integration capabilities using Python, R, Lambda, Glue, Redshift, EMR, SAS, SageMaker and related AWS data processing services. Collaborate with other software developers, database architects, data analysts and data scientists on projects to ensure data delivery and align data processing architecture and services across multiple ongoing projects. 
Perform other team contribution tasks such as peer code reviews, database defect support, and occasional backup production support. Work with the DevOps team to build and release software, ensuring the process follows appropriate change management guidelines. Qualifications: Bachelor's degree with a major or specialized courses in Information Technology, or commensurate experience. 3+ years related experience with a combination of the following: Experience designing and building data processing pipelines and streaming. Experience with big data and common tools (Hadoop, Spark, etc.). Experience with relational SQL databases, especially PostgreSQL. Experience with UNIX / Linux operating systems is preferred. Experience with IaC tools like Terraform, Ansible, CDK is preferred. Experience with AWS cloud services: EC2, S3, RDS, Redshift, Glue, Lambda, Step Functions. US Citizen. Travel: 5%. Candidates with less experience may be considered at a lower pay grade and salary range."," Associate "," Full-time "," Information Technology, Engineering, and General Business "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-stytch-3515646590?refId=aTk8xGxdA9BBUHvODl8GpA%3D%3D&trackingId=kG7spJaqJlmh97gLIyBplg%3D%3D&position=14&pageNum=10&trk=public_jobs_jserp-result_search-card," Stytch ",https://www.linkedin.com/company/stytch?trk=public_jobs_topcard-org-name," San Francisco, CA "," 1 month ago "," Be among the first 25 applicants ","What We're Looking For Stytch is the platform for user authentication. We build infrastructure that sits in the critical path of our customers' applications. As a data engineer, you'll work on designing and building event-driven architecture systems to drive analytics insights and observability tooling for our customers. 
What Excites You Championing data-driven insights - you see data analytics and observability as a product critical to success. Solving problems with pragmatic solutions — you know when to make trade-offs between completeness and utility, and you know when to cut scope to ship something good enough quickly. Building products that make developers' lives easier — as a data engineer for a developer infrastructure company, what you build will have an immediate impact on our customers. Shaping the culture and growing the team through recruiting, mentorship, and establishing best practices. Learning new skills and technologies in a fast-paced environment. What Excites Us Comfort working in a modern data stack using tools like Snowflake, Redshift, DBT, Fivetran, ElasticSearch, and Kinesis. Appreciation for schema design and architecture that balance flexibility and simplicity. Experience designing and building highly reliable back-end and ETL systems. 3+ years as a data or backend engineer. What Success Looks Like Technical — build new, highly reliable services that our customers can depend on. Ownership — advocate for projects and solutions that you believe in and ship them to production. Leadership — level up your teammates by providing mentorship and guidance. Our Tech Stack Data moves through Snowflake, ElasticSearch, MySQL, and Kinesis. Go and Node for application services. We run on AWS with Kubernetes for containerization. gRPC and protobufs for internal service communication. Expected base salary $150,000-$300,000. The anticipated base salary range is not inclusive of full benefits including equity, health care insurance, time off, paid parental leave, etc. This base salary is accurate based on information at the time of posting. Actual compensation for hired candidates will be determined using a number of factors including experience, skills, and qualifications. 
We're looking to hire a GREAT team and that means hiring people who are highly empathetic, ambitious, and excited about building the future of user authentication. You should feel empowered to apply for this role even if your experience doesn't exactly match up to our job description (our job descriptions are directional and not perfect recipes for exactly what we need). We are committed to building a diverse, inclusive, and equitable workspace where everyone (regardless of age, education, ethnicity, gender, sexual orientation, or any personal characteristics) feels like they belong. We look forward to hearing from you! Learn more about our team and culture here!"," Entry level "," Full-time "," Information Technology "," Software Development " Data Engineer,United States,"ETL Data Engineer, Remote",https://www.linkedin.com/jobs/view/etl-data-engineer-remote-at-stellent-it-3527796131?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=it9cYjyAGa7Yf8OM%2F%2F0CCw%3D%3D&position=7&pageNum=11&trk=public_jobs_jserp-result_search-card," Stellent IT ",https://www.linkedin.com/company/stellent-it?trk=public_jobs_topcard-org-name," United States "," 3 weeks ago "," Be among the first 25 applicants ","ETL Data Engineer. Remote. Phone + Skype. Critical Skills: Design, develop, and maintain scaled ETL processes to deliver meaningful insights from large and complicated data sets. Support existing ETL processes written in SQL; troubleshoot and resolve production issues. Additional Responsibilities: Work as part of a team to build out and support a Data Lake; implement solutions using Python to process structured and unstructured data. Partner with business users, architects and cloud engineers to develop, implement, and automate data pipelines. Collaborate with Engineering teams to discover and leverage new data being introduced into the environment. Create and maintain report specifications and process documentation as part of the required data deliverables. 
Serve as liaison with business and technical teams to achieve project objectives, delivering cross-functional reporting solutions. Communicate with business partners, other technical teams and management to collect requirements, articulate data deliverables, and provide technical designs. Ability to multitask and prioritize an evolving workload in a fast-paced environment."," Entry level "," Full-time "," Other "," Staffing and Recruiting " Data Engineer,United States,"ETL Data Engineer, Remote",https://www.linkedin.com/jobs/view/data-engineer-at-rightclick-3467037317?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=uPTU9z3fzWSuzP2rnIXpxQ%3D%3D&position=8&pageNum=11&trk=public_jobs_jserp-result_search-card," Stellent IT ",https://www.linkedin.com/company/stellent-it?trk=public_jobs_topcard-org-name," United States "," 3 weeks ago "," Be among the first 25 applicants "," ETL Data Engineer. Remote. Phone + Skype. Critical Skills: Design, develop, and maintain scaled ETL processes to deliver meaningful insights from large and complicated data sets. Support existing ETL processes written in SQL; troubleshoot and resolve production issues. Additional Responsibilities: Work as part of a team to build out and support a Data Lake; implement solutions using Python to process structured and unstructured data. Partner with business users, architects and cloud engineers to develop, implement, and automate data pipelines. Collaborate with Engineering teams to discover and leverage new data being introduced into the environment. Create and maintain report specifications and process documentation as part of the required data deliverables. Serve as liaison with business and technical teams to achieve project objectives, delivering cross-functional reporting solutions. Communicate with business partners, other technical teams and management to collect requirements, articulate data deliverables, and provide technical designs. Ability to multitask and prioritize an evolving workload in a fast-paced 
environment. "," Entry level "," Full-time "," Other "," Staffing and Recruiting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-massachusetts-health-connector-3520241202?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=iu%2Fx%2FYrXQ1IZ9xCzipkDeg%3D%3D&position=16&pageNum=11&trk=public_jobs_jserp-result_search-card," Massachusetts Health Connector ",https://www.linkedin.com/company/healthconnector?trk=public_jobs_topcard-org-name," Boston, MA "," 1 week ago "," Over 200 applicants ","As a key member of the Connector’s Technology Team, the Data Engineer will support the group’s data management and development activities, utilizing appropriate technologies. They will interact directly with business owners and stakeholders to understand their information needs and deliver solutions that effectively meet their needs. Additionally, they will provide technical assistance while working directly with technical teams to implement and support data related solutions. 
This position requires experience with database and reporting technologies and administration, and reports to the Chief Architect.   Key Responsibilities: Work directly with project stakeholders to identify, understand and document data and information needs and requirements. Participate and ensure data and research requests are properly documented, qualified, and vetted according to policy and requirements. Determine feasibility and data security issues. Maintain ERD and data dictionaries for Enterprise Data Warehouse (EDW) data sets. Develop and implement data analyses, data collections, ETL and other data-related needs. Assess and document both internal and external data requests for need, quality, effort, and delivery requirements, through delivery and implementation. Assist the data analytics team by performing data collection, data analysis, and reporting. Provide technical assistance as needed. Participate in hands-on requirements and development of data and reporting solutions to ensure alignment with requirements and business expectations. Work with internal analysts and other subject matter experts as necessary. Support reporting requests and engage appropriate technical resources as necessary.   Qualifications: BS degree in Computer Science or Information Management. Minimum 5 years of work experience in a production environment. Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information, with a strong attention to detail and accuracy. Proven working experience as a data or business analyst. Demonstrable technical expertise regarding data models, database designs, data mining, ETL and normalization techniques. Experience working with fulfillment of 3rd party/external data requests. Experience with common data analysis tools such as Excel, Tableau, MicroStrategy, SSAS, SAS, etc. 
Knowledge of HIPAA requirements and experience working in regulated environments under requirements such as IRS a plus Outstanding organizational and time management skills – able to manage and prioritize multiple assignments Self-starter with a passion for new technologies Excellent oral and written communication skills Experience with business intelligence, data warehouse, and ETL solutions   If interested:   Send cover letter and resume to Connector-jobs@state.ma.us.   Salary:   Salary range is competitive; salary will be commensurate with experience.   Please note:   Due to the requirement of 268A, please complete the disclosure form and return with your application. Link - https://www.mahealthconnector.org/wp-content/uploads/ApplicantsDisclosureForm.pdf All Health Connector employees are required to provide satisfactory proof of eligibility to work in the United States All Health Connector employees are required to provide satisfactory proof of full COVID vaccination. The Health Connector is operating on a hybrid work arrangement with 2 days in the downtown Boston office and 3 days working from home.   About the Connector: The Commonwealth Health Insurance Connector Authority (Health Connector) is an independent public authority serving as the Affordable Care Act (ACA)-compliant marketplace for the Commonwealth. The organization is charged with providing subsidized and unsubsidized health insurance to individuals and small employers. The Health Connector also oversees policy development related to health care reform under both state and federal laws, as well as conducting public education and outreach about health care reform and coverage opportunities. More information about the Connector and its programs is available at www.MaHealthConnector.org.   The Health Connector is an equal opportunity employer that values diversity as a vital characteristic of its work force. 
We consider qualified applicants without regard to race, color, religion, gender, sexual identity, gender identity, national origin, or disability.    "," Full-time ",,, Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-ascendion-3511003521?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=duFVkpfCv%2B776Sz5zBGhfw%3D%3D&position=23&pageNum=11&trk=public_jobs_jserp-result_search-card," Ascendion ",https://www.linkedin.com/company/ascendion?trk=public_jobs_topcard-org-name," Chicago, IL "," 3 weeks ago "," Be among the first 25 applicants "," 100% Remote position!!! JD Title: Data Engineer. Duration: 6-12 Months Contract. Location: 100% Remote. Must-Have: 2+ years of Python development experience. Strong familiarity with Azure data services: HIGHLY Preferred - Azure SQL, Synapse, Data Lake, Data Factory, Databricks, Azure Functions, Service Bus, etc. Another cloud provider (GCP/AWS) will work if they are strong / they will have to pick Azure up. Solid understanding of real-time & batch data processing. WFH, Remote, Data Engineer, Python "," Entry level "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-smx-services-consulting-inc-3509502758?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=JA0k6SwUKQmmxhVKeI8sew%3D%3D&position=15&pageNum=7&trk=public_jobs_jserp-result_search-card," SMX Services & Consulting, Inc. ",https://www.linkedin.com/company/smx-services-&-consulting-inc-?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," 191 applicants "," Job Description: SMX Services & Consulting is looking for a Data Engineer. Principal Responsibilities: Create and maintain optimal data pipeline architecture to support data orchestration. Assemble large, complex data sets that meet functional / non-functional business requirements. 
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Build the infrastructure required for optimal extraction, transformation, and ingestion of data from a wide variety of data sources using SQL, ETL and Azure Data Factory technologies. Support analytics tools that utilize the data pipeline to provide actionable insights into key business performance metrics. Work with stakeholders including the Executive, Product, Data and Technology teams to assist with data-related technical issues and support their data infrastructure needs. Create data tools for analytics and data scientist team members that assist them in executing and optimizing data projects. Work with data and analytics experts to strive for greater functionality in our data systems. Collaborating with colleagues for the purpose of collecting and structuring data. Collect, audit, compile, and validate data from multiple sources. Communicate internally and with clients externally to collect and validate data as well as answer questions regarding data. Apply advanced knowledge and understanding of concepts, principals, and technical capabilities to manage a wide variety of projects. Recommends new practices, processes, and procedures. Provides solutions that may set precedent or have significant impact. Build automation and additional efficiencies into manual efforts. Education: Bachelor's degree in related field preferred, equivalent years' experience considered. Experience, Skills, and Abilities Requirements: At least five to seven years of data related or analytical work experience in a Data Engineer role, who has experience with data orchestration and pipeline creation in the Azure ecosystem. Experience building and optimizing 'big data' data pipelines, architectures and data sets. 
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement. Strong analytic skills related to working with unstructured datasets and blob storage. Build processes supporting data transformation, data structures, metadata, dependency, and workload management. "," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Senior Data Engineer,https://www.linkedin.com/jobs/view/senior-data-engineer-at-razor-3527036512?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=ikOit4YL6zfmTcuO1wKr7A%3D%3D&position=6&pageNum=9&trk=public_jobs_jserp-result_search-card," RAZOR ",https://www.linkedin.com/company/razor-talent-llc?trk=public_jobs_topcard-org-name," United States "," 10 hours ago "," 57 applicants ","Razor is looking for a highly motivated and skilled Data Engineer to join our team. In this role, you will be responsible for managing and analyzing large and complex data sets to identify business insights and support data-driven decision-making. You will work closely with cross-functional teams to design and implement data models, ETL pipelines, and data visualizations using Python, PySpark, and AWS cloud technologies. Key Responsibilities: Design, develop, and maintain ETL pipelines to extract, transform, and load large and complex data sets. Collaborate with cross-functional teams to gather requirements, design data models, and implement scalable solutions. Develop and maintain data visualizations and dashboards to present insights and key performance indicators. Perform data analysis, develop algorithms, and create predictive models to support business decision-making. Work closely with stakeholders to understand their needs and provide recommendations for data-driven solutions. Monitor and troubleshoot data pipelines and ensure data accuracy, integrity, and security. 
Keep up-to-date with emerging data technologies and best practices. Requirements: Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related field. 8+ years of experience in data engineering. Strong programming skills in Python and experience with PySpark. Experience with AWS cloud technologies, including EC2, S3, EMR, and Redshift. Strong SQL skills and experience with data warehousing concepts. Experience with data visualization tools such as Tableau, Power BI, or Looker. Strong problem-solving and analytical skills. Strong communication and collaboration skills. Ability to work independently and as part of a team. Preferred Qualifications: Experience with machine learning and data science libraries such as Scikit-learn, TensorFlow, or PyTorch. Experience with data streaming technologies such as Kafka, Kinesis, or Spark Streaming. Experience with NoSQL databases such as MongoDB or Cassandra. Knowledge of Agile/Scrum development methodologies."," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-ubs-3499279586?refId=aTk8xGxdA9BBUHvODl8GpA%3D%3D&trackingId=fk6rgt5MLJhex2ZioJUGiw%3D%3D&position=2&pageNum=10&trk=public_jobs_jserp-result_search-card," UBS ",https://ch.linkedin.com/company/ubs?trk=public_jobs_topcard-org-name," Nashville, TN "," 2 weeks ago "," 72 applicants ","Job Reference # 272093BR Job Type Full Time Your role Are you an expert when it comes to tools like Databricks, Kafka and Azure Data Factory? Do you have a track record of influencing IT stakeholders and business partners? Do you have proven ability to solve complex issues, covering both technical and business needs? 
We’re looking for a Data Engineer to: develop data pipelines utilizing Kafka, Cribl and Azure Data Factory; process data using modern ETL techniques utilizing Databricks and related technologies; contribute expertise to the design and architecture of the platform; work in a large data environment with hundreds of millions of daily records. Your team: You’ll be working in our End User Dataverse team in Nashville, TN. We provide the underlying data platform. This benefits our end user services teams and beyond. As a Data Engineer, you’ll play an important role in providing stable and reliable data streams. Diversity helps us grow, together. That’s why we are committed to fostering and advancing diversity, equity, and inclusion. It strengthens our business and brings value to our clients. Your expertise: ideally 2+ years of hands-on experience with Azure Data Lake; proficient with software development tools, such as Python, Azure Data Factory, Kafka, Cribl; good knowledge of Databricks and ETL pipelines; ability to solve complex issues, good at problem statement analysis and solution design thinking; track record of influencing key IT stakeholders and business partners; confident communicator who can explain technology to non-technical audiences; capable of understanding client needs and translating them into products and services. About Us: UBS is the world’s largest and only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. With more than 70,000 employees, we have a presence in all major financial centers in more than 50 countries. Do you want to be one of us? How We Hire: This role requires an assessment on application. 
Learn more about how we hire: www.ubs.com/global/en/careers/experienced-professionals.html Join us At UBS, we embrace flexible ways of working when the role permits. We offer different working arrangements like part-time, job-sharing and hybrid (office and home) working. Our purpose-led culture and global infrastructure help us connect, collaborate, and work together in agile ways to meet all our business needs. From gaining new experiences in different roles to acquiring fresh knowledge and skills, we know that great work is never done alone. We know that it's our people, with their unique backgrounds, skills, experience levels and interests, who drive our ongoing success. Together we’re more than ourselves. Ready to be part of #teamUBS and make an impact? Disclaimer / Policy Statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce. "," Not Applicable "," Full-time "," Information Technology and Engineering "," Banking, Financial Services, and Investment Banking " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-starschema-3479481781?refId=aTk8xGxdA9BBUHvODl8GpA%3D%3D&trackingId=WdbQ3LRDqD88iCWvE5WNeA%3D%3D&position=6&pageNum=10&trk=public_jobs_jserp-result_search-card," Starschema ",https://hu.linkedin.com/company/starschema?trk=public_jobs_topcard-org-name," Arlington, VA "," 1 month ago "," Be among the first 25 applicants ","Company Description About Starschema At Starschema we believe that data has the power to change the world and data-driven organizations are leading the way. We help organizations use data to make better business decisions, build smarter products, and deliver more value for their customers, employees and investors. We dig into our customers’ toughest business problems, design solutions and build the technology needed to address today’s unique challenges. 
What you can expect as a Starschema team member As a member of the Starschema team, you will be on the front lines of digital transformation, working with some of the most innovative Fortune 500 companies to drive innovation and realize the promise of data-driven cultures. You will learn and use the latest data-centric technologies along with the core industry technologies. Our team is inclusive and fun. While we take our work seriously, we know how to have a good time while doing so. We encourage everyone to share their opinions and ideas, and our leadership wants to hear everyone’s input no matter what role they play in the organization. Job Description As a Data Engineer at Starschema, you will bring business value to our clients through end-to-end development, optimization and operation of automated reporting, data lakes and related software platforms. You will use the latest technologies, such as Apache Airflow, Apache Kafka, Apache Spark and AWS. We are seeking experienced mid-level and senior professionals for our open position. What will you do: Build and maintain database/big data clusters; Build dashboards for infrastructure management and reporting; Design and deploy infrastructure management strategies to meet uptime and monitoring SLAs; Deploy code releases in QA and PROD; Participate in building unit/performance/integration tests working with database developers; Participate in database SQL optimization plans; Deploy configuration and automation tools to remove manual steps in deploying, upgrading, and scaling systems and software across all environments. 
Qualifications We want to hear from you if you have: At least 3 years of experience in the data engineering field; Solid background in Python and SQL; Experience building data solutions using big data tools: Airflow, Spark, Kafka, AWS; Experience with data pipeline and workflow management tools; Hands-on experience with requirements analysis, design, coding and testing patterns; Experience engineering (commercial and open source) software platforms and large-scale data infrastructures; Experience working with cloud computing environments; Excellent communication skills in English (both written and oral); Intelligent, communicative team-player personality, interested in and willing to learn new skills and technologies. Additional Information What's In It For You: Remote work: You can work remotely from anywhere within the USA. It is a plus if you are based in the Washington, D.C. area. Eligibility: We are unable to sponsor work visas for this position, so we can only accept applications from candidates who are eligible to work in the USA. Benefits & Community: A healthy lifestyle and the feeling of belonging are important to us, for both body and mind. 
We provide: 401K with matching; insurance; an Employee Assistance Program (EAP); technical/professional training; and a remote work / home office opportunity. Start date: The sooner the better, but if you currently work somewhere and have a notice period, that is still fine; we will wait for the right person!"," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Python Data Engineer,https://www.linkedin.com/jobs/view/python-data-engineer-at-ust-3478642218?refId=aTk8xGxdA9BBUHvODl8GpA%3D%3D&trackingId=0QvHUyBNimG0ik0lK5fo5w%3D%3D&position=9&pageNum=10&trk=public_jobs_jserp-result_search-card," UST ",https://www.linkedin.com/company/ustglobal?trk=public_jobs_topcard-org-name," Santa Clara, CA "," 4 weeks ago "," Over 200 applicants ","Who we are: At UST, we help the world’s best organizations grow and succeed through transformation. Bringing together the right talent, tools, and ideas, we work with our clients to co-create lasting change. Together, with over 30,000 employees in 25 countries, we build for boundless impact—touching billions of lives in the process. Visit us at UST.com. The Opportunity: Title: Python Data Engineer FT Position Location: Santa Clara CA Onsite Primary Skills: • Looking for a Software Engineer • Design, build and maintain data processing pipelines. • Define APIs and microservices consumed by other applications/teams. • Strong Python skills • Understanding of databases, data warehouses, data lakes. Good to have: • Manage ETL and machine learning model deployment; monitor, maintain, and track models in production • Understand and maintain ETL pipelines and tools; deploy and monitor jobs. • Build, test and maintain tools and infrastructure to support data science initiatives • Good communication and cross-functional skills. • Experience deploying machine learning models into production environments. 
• Strong DevOps, Data Engineering skill and ML background • Knowledge of containerization and orchestration (such as Docker, Kubernetes) • Experience with CI/CD • Experience with ML training/retraining, Model Registry, ML model performance measurement What we believe: We’re proud to embrace the same values that have shaped UST since the beginning. Since day one, we’ve been building enduring relationships and a culture of integrity. And today, it's those same values that are inspiring us to encourage innovation from everyone, to champion diversity and inclusion and to place people at the centre of everything we do. Humility: We will listen, learn, be empathetic and help selflessly in our interactions with everyone. Humanity: Through business, we will better the lives of those less fortunate than ourselves. Integrity: We honour our commitments and act with responsibility in all our relationships. Equal Employment Opportunity Statement UST is an Equal Opportunity Employer. We believe that no one should be discriminated against because of their differences, such as age, disability, ethnicity, gender, gender identity and expression, religion, or sexual orientation. All employment decisions shall be made without regard to age, race, creed, colour, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. UST reserves the right to periodically redefine your roles and responsibilities based on the requirements of the organization and/or your performance. • To support and promote the values of UST. 
• Comply with all Company policies and procedures"," Mid-Senior level "," Full-time "," Engineering, Other, and Information Technology "," IT Services and IT Consulting, Computer Hardware Manufacturing, and Semiconductor Manufacturing " Data Engineer,United States,Data Engineer I,https://www.linkedin.com/jobs/view/data-engineer-i-at-medical-solutions-3488030757?refId=aTk8xGxdA9BBUHvODl8GpA%3D%3D&trackingId=ymJ6j58tX5LTEsbCuWqsnw%3D%3D&position=13&pageNum=10&trk=public_jobs_jserp-result_search-card," Medical Solutions ",https://www.linkedin.com/company/medical-solutions?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," Over 200 applicants ","At Medical Solutions, we’re people who care, helping people who care. No matter how you look at it, there’s a whole lot of care going on in our world and that’s just the way we like it. What do we do? Medical Solutions is one of the nation’s largest providers of total workforce solutions in the healthcare industry, connecting nurses and allied health clinicians with hospitals and healthcare systems across the country and around the corner. Through our family of brands, we also serve a segment of clients outside of the healthcare space. And we’re the very best at what we do. You’ll love our culture that’s filled with heart and soul. As a company and employer, we’re sincerely and unabashedly us. We lead as humans first and believe the unique qualities of each team member make us better together. We share a purpose for helping others and the drive to make a difference. And we offer endless opportunities for personal and professional growth, throughout your career. At Medical Solutions, you’ll find a great place to work and a career home. We’ve received Best Places to Work awards, landed top industry awards, and received accolades for the impact we’ve made in business and within our community. But the only way to really get to know us, is to join us. We think you’ll fit right in. 
Data Engineer I - Job Description Our team is currently looking for a Data Engineer who is passionate about creating awesome analytics solutions for our customers and our teammates. You’ll be hands-on with solution delivery, and be flexible, proactive, and solution focused. The ideal candidate will identify, build, and implement solutions that enable data scientists to create robust machine learning models. Together, the team will create end-to-end advanced analytic solutions and products. We value open and honest communication, and we strive to create a positive and fun team-based environment. We want teammates who challenge themselves, as well as those around them, to create high quality analytical products and services. Job Responsibilities: Develop, maintain, and support modern data pipelines for use in machine learning Design, build, and test solutions for data transformation from a variety of data sources while modeling data in an efficient and performant manner Work directly with customers to understand reporting and analytics needs Collaborate with customers, BI analysts, and data scientists to design and build data solutions to enable robust machine learning Be a committed agile teammate who helps the team to reach its goals Participate in all agile ceremonies, including planning, pointing, demos, and retros Stay up to date on data analytic trends and employ industry (and company) best practices for data warehousing, ETL, ELT and Data Analytics in general Prioritize unit tests, both manual and automated, to ensure the highest code quality Participate in code reviews and pull requests Document solutions, metadata, and processes to the required company standard Assist in problem management and root cause analysis Job Qualifications: Bachelor’s degree in Data Analytics, or a closely related discipline in a Computer Science field (or equivalent) Bachelor’s degree and 0-2 years of data warehouse design experience, and 0-2 years of data 
modeling experience; or, Master’s Degree, and 0-1 years of data warehouse design experience, and 0-1 years of data modeling experience. Working knowledge of data transformation (ETL and ELT) design patterns, optimization, and tools such as Azure Data Factory (ADF preferred) or SSIS Experience building and designing BI semantic models in SSAS tabular preferred Working knowledge of data visualization tools, with preference to PowerBI Comfortable using work management tools such as Jira or Azure DevOps Experience using version control such as Git, TFS, or SVN Understanding of web services and other Application Programming Interfaces Experience with Microsoft Office Suite with emphasis on Excel (pivot tables, v-lookup, working with data from BI semantic models) Robust knowledge of SQL and tuning SQL including analysis of query plans Experience and expertise in Python for data transformation Experience developing and operationalizing data pipelines for consumption in machine learning solutions Some of the benefits we offer… Insurance: Day 1 benefits (health, dental, vision, 401(k) + employer match after 6 months and 500 hours of employment and company-paid life insurance; short and long-term disability; supplemental life insurance for yourself, spouse & child(ren); and multiple voluntary benefits Remote work option - we’re where you are! Flexible PTO (PT-Oh!) Flexible schedules Award-winning training program Connectivity stipend Competitive compensation as part of our total rewards package Opportunity for additional/bonus compensation through individual and company performance targets determined by the Company at its discretion (8) paid Holidays Paid parental leave Employee Assistance Program (EAP) Why us? We live our Values in all we do Commitment to diversity, equity, and inclusion Focus on total wellbeing Employee Experience Team that provides perks in-office and virtually Relaxed culture and casual dress (t-shirts and flip-flops welcome!) 
Learn more about Medical Solutions and what it’s like to be part of our team. Check out our Careers website, https://www.thebestjobieverhad.com. Equal Opportunity Employer/Protected Veterans/Individuals with Disabilities The contractor will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay or the pay of another employee or applicant. However, employees who have access to the compensation information of other employees or applicants as a part of their essential job functions cannot disclose the pay of other employees or applicants to individuals who do not otherwise have access to compensation information, unless the disclosure is (a) in response to a formal complaint or charge, (b) in furtherance of an investigation, proceeding, hearing, or action, including an investigation conducted by the employer, or (c) consistent with the contractor’s legal duty to furnish information. 41 CFR 60-1.35(c)"," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,REMOTE Data Engineer,https://www.linkedin.com/jobs/view/remote-data-engineer-at-state-farm-3487710806?refId=aTk8xGxdA9BBUHvODl8GpA%3D%3D&trackingId=Hc2fMHU7%2F%2BHuZpwDxWPV4A%3D%3D&position=16&pageNum=10&trk=public_jobs_jserp-result_search-card," State Farm ",https://www.linkedin.com/company/state_farm?trk=public_jobs_topcard-org-name," Dunwoody, GA "," 1 day ago "," Over 200 applicants ","Overview We are not just offering a job but a meaningful career! Come join our passionate team! As a Fortune 50 company, we hire the best employees to serve our customers, making us a leader in the insurance and financial services industry. State Farm embraces diversity and inclusion to ensure a workforce that is engaged, builds on the strengths and talents of all associates, and creates a Good Neighbor culture. 
We offer competitive benefits and pay with the potential for an annual financial award based on both individual and enterprise performance. Our employees have an opportunity to participate in volunteer events within the community and engage in a learning culture. We offer programs to assist with tuition reimbursement, professional designations, employee development, wellness initiatives, and more! Visit our Careers page for more information on our benefits, locations and the process of joining the State Farm team! REMOTE: Qualified candidates (outside of hub locations listed below) may be considered for 100% remote work arrangements based on where a candidate currently resides or is currently located. HYBRID: Qualified candidates (in or near hub locations listed below) should plan to spend time working from home and some time working in the office as part of our hybrid work environment. HUB LOCATIONS: Dunwoody, GA; Richardson, TX; Tempe, AZ; or Bloomington, IL Check out our Enterprise Technology department! Responsibilities The Data Visualization team is seeking a talented and creative Data Engineer to evaluate and enable data technologies that transform data into meaningful insights across the Enterprise. To be successful in this role, the engineer must be a strategic thinker and can bring a data-driven approach to solving complex business problems. We need an exceptional communicator, passionate about data, collaborative, analytical, and a problem-solver who has expertise related to business intelligence (BI) tooling. As a Data Engineer in this role you will get to: Position data and perform data analysis for use in visualizations that will provide insights into business opportunities. Interface with the business areas that are sourcing the data for the various analytical insights. 
Qualifications Highly desired skills: At least 5 years of experience in data engineering Strong proficiency in Python and SQL Experience with data warehousing and ETL tools Familiarity with cloud computing platforms, such as AWS Knowledge of data modeling and data visualization techniques Excellent problem-solving, analytical, communication, and interpersonal skills SPONSORSHIP: Applicants are required to be eligible to lawfully work in the U.S. immediately; employer will not sponsor applicants for U.S. work authorization (e.g. H-1B visa) for this opportunity For Los Angeles candidates: Pursuant to the Los Angeles Fair Chance Initiative for Hiring, we will consider for employment qualified applicants with criminal histories. For San Francisco candidates: Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records. For Colorado and Washington State candidates: Salary Range: $84,620.00-$169,250.00 For California, NYC, and CT candidates: Potential salary range: $84,620.00-$169,250.00 Potential yearly incentive pay: up to 15% of base salary Competitive Benefits including: 401k Plan Health Insurance Dental/Vision plans Life Insurance Paid Time Off Annual Merit Increases Tuition Reimbursement Health Initiatives For more details visit our benefits summary page SFARM "," Entry level "," Full-time "," Analyst, Information Technology, and Engineering "," Insurance " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-surge-technology-solutions-inc-3504908751?refId=aTk8xGxdA9BBUHvODl8GpA%3D%3D&trackingId=LVoJOHWGQ488dNGJPLkiWg%3D%3D&position=17&pageNum=10&trk=public_jobs_jserp-result_search-card," Surge Technology Solutions Inc ",https://www.linkedin.com/company/surge-technology-solutions?trk=public_jobs_topcard-org-name," Chicago, IL "," 1 week ago "," Be among the first 25 applicants ","Emp Type: W2 or 1099 (NO C2C) Visa: H1B, H4EAD, GCEAD, L2, Green 
Card, US Citizens Location: Chicago, IL / Remote -USA ( NO out of Country Applications ) Workplace Type: Hybrid Job Responsibilities Test programs or databases, correct errors and make necessary modifications. Modify existing databases and database management systems or direct programmers and analysts to make changes. Write and code logical and physical database descriptions and specify identifiers of database to management system or direct others in coding descriptions. Skills Verbal and written communication skills, problem solving skills, customer service and interpersonal skills. Basic ability to work independently and manage one's time. Basic knowledge of logical data modeling and physical data modeling. Basic knowledge of computer software, such as SQL, Visual Basic, Oracle, etc. Education/Experience Associate degree in computer programming or a relevant field required. Bachelor's degree preferred. 2-4 Years Experience Required. Typical task breakdown: Test Airflow, correct errors and make necessary modifications during our migration from Saas. Maintaining and enhancing database in Saas Provide support to business partners that have questions about data in CVA's in relation to numbers coming out of Saas. Technical Skills (Required) 2+ years exp in Saas 2+ years exp with SQL 2+ years exp in Snowflake (Desired) Project management exp specifically Agile methodology Airflow exp"," Mid-Senior level "," Full-time "," Information Technology "," Software Development " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-ameri100-3523780012?refId=aTk8xGxdA9BBUHvODl8GpA%3D%3D&trackingId=uN57nZYWYSNoA7D%2BfwYkzQ%3D%3D&position=20&pageNum=10&trk=public_jobs_jserp-result_search-card," Ameri100 ",https://www.linkedin.com/company/ameri100?trk=public_jobs_topcard-org-name," Houston, TX "," 2 days ago "," Be among the first 25 applicants "," Need to ingest, transform and manipulate data to the needs of the model. 
Background in oil and gas, preferably midstream pipeline experience. Python, DASK, Kubernetes are the technologies used. 6-10 years' experience in these technologies. Now100 is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity, sexual orientation, national origin, genetics, disability, age, veteran status, or any other status protected by federal, state, or local law. About Now100 Part of 100 Holdings Group, Now100 is headquartered in Atlanta, Georgia, and offers the highest quality professionals and solutions to help clients with their complex technology needs. Now100 is committed to understanding our clients’ needs and providing solutions that not only meet but exceed their expectations. "," Entry level "," Contract "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-diverse-lynx-3499123168?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=a11d38Py%2Bk6QTGfchf5lnA%3D%3D&position=11&pageNum=11&trk=public_jobs_jserp-result_search-card," Diverse Lynx ",https://www.linkedin.com/company/diverselynx?trk=public_jobs_topcard-org-name," United States "," 1 month ago "," Be among the first 25 applicants ","Job Description Role: Data engineer Location: (Remote) Experience: 8-10 Years Duration: 12 Months Key Skills Strong experience building robust and scalable data processes and pipelines for modeling, analysis, and reporting Experience with Enterprise Data Warehousing systems including Snowflake and Teradata (or equivalent) Fluency in SQL and Python, including data wrangling and schema design Experience working with pipeline tools like Airflow and dbt Experience with BI processes and some experience with dashboard tools like Tableau Some experience with CI/CD and containerization tools (e.g. 
Jenkins, Docker, Kubernetes) Ability to initiate, refine, and complete projects with minimal guidance and some experience working in a scrum or release cycle environment Ability to clearly communicate technical concepts, definitions, logic, and processes to a non-technical audience Ability to think critically and collaborate cross-functionally with other data engineering, data science and analytics stakeholders distilling business requirements into clear data products Experience designing data ingestion processes and working with unstructured data a plus Project Descriptions Design, create, refine, and maintain data processes and pipelines used for modeling, analysis, and reporting Operationalize data products with detailed documentation, automated data quality checks and change alerts Support data access through various sharing platforms, including dashboard tools Troubleshoot failures in data processes, pipelines, and products Communicate and educate consumers on data access and usage, managing transparency in metric and logic definitions Collaborate with other data scientists, analysts, and engineers to build full-service data solutions Develop and communicate architectures, code patterns and data structure design choices to team of data scientists, analysts and engineers laying out tradeoffs Support codebase compatibility with Snowflake by designing, creating, and driving adoption of templates, packages, and best practices Optimize query and database performance through designing, creating, refining, and maintaining performance management system Work with cross-functional business partners and vendors to acquire and transform raw data sources Design, create, refine, and maintain data ingestion process Provide weekly updates to the team on progress and status of planned work Diverse Lynx LLC is an Equal Employment Opportunity employer. All qualified applicants will receive due consideration for employment without any discrimination. 
All applicants will be evaluated solely on the basis of their ability, competence and their proven capability to perform the functions outlined in the corresponding role. We promote and support a diverse workforce across all levels in the company."," Entry level "," Contract "," Information Technology "," Software Development " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-oshi-health-3523707600?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=ICZzm0itW6KF5Z1LYlJQFQ%3D%3D&position=12&pageNum=11&trk=public_jobs_jserp-result_search-card," Diverse Lynx ",https://www.linkedin.com/company/diverselynx?trk=public_jobs_topcard-org-name," United States "," 1 month ago "," Be among the first 25 applicants "," Job Description Role: Data engineer Location: (Remote) Experience: 8-10 Years Duration: 12 Months Key Skills Strong experience building robust and scalable data processes and pipelines for modeling, analysis, and reporting Experience with Enterprise Data Warehousing systems including Snowflake and Teradata (or equivalent) Fluency in SQL and Python, including data wrangling and schema design Experience working with pipeline tools like Airflow and dbt Experience with BI processes and some experience with dashboard tools like Tableau Some experience with CI/CD and containerization tools (e.g. 
Jenkins, Docker, Kubernetes) Ability to initiate, refine, and complete projects with minimal guidance and some experience working in a scrum or release cycle environment Ability to clearly communicate technical concepts, definitions, logic, and processes to a non-technical audience Ability to think critically and collaborate cross-functionally with other data engineering, data science and analytics stakeholders distilling business requirements into clear data products Experience designing data ingestion processes and working with unstructured data a plus Project Descriptions Design, create, refine, and maintain data processes and pipelines used for modeling, analysis, and reporting Operationalize data products with detailed documentation, automated data quality checks and change alerts Support data access through various sharing platforms, including dashboard tools Troubleshoot failures in data processes, pipelines, and products Communicate and educate consumers on data access and usage, managing transparency in metric and logic definitions Collaborate with other data scientists, analysts, and engineers to build full-service data solutions Develop and communicate architectures, code patterns and data structure design choices to team of data scientists, analysts and engineers laying out tradeoffs Support codebase compatibility with Snowflake by designing, creating, and driving adoption of templates, packages, and best practices Optimize query and database performance through designing, creating, refining, and maintaining performance management system Work with cross-functional business partners and vendors to acquire and transform raw data sources Design, create, refine, and maintain data ingestion process Provide weekly updates to the team on progress and status of planned work Diverse Lynx LLC is an Equal Employment Opportunity employer. All qualified applicants will receive due consideration for employment without any discrimination. 
All applicants will be evaluated solely on the basis of their ability, competence and their proven capability to perform the functions outlined in the corresponding role. We promote and support a diverse workforce across all levels in the company. "," Entry level "," Contract "," Information Technology "," Software Development " Data Engineer,United States,Data Engineer with SQL,https://www.linkedin.com/jobs/view/data-engineer-at-corecivic-3508969456?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=4bWUkr%2F8TYdkzja8BnXjhQ%3D%3D&position=1&pageNum=12&trk=public_jobs_jserp-result_search-card," Extend Information Systems Inc. ",https://www.linkedin.com/company/extendinfosys?trk=public_jobs_topcard-org-name," Cary, NC "," 3 weeks ago "," Be among the first 25 applicants "," Job Title: Data Engineer with SQL skills Location: Cary, NC (Should be comfortable relocating within 1 month of selection) Duration: Fulltime Job Description Proficiency in understanding data and writing queries - In-depth understanding of joins, complex queries, subqueries, data analysis and data quality testing. Understanding of query optimization techniques Data profiling on popular databases (Oracle, MySQL, Hive, Impala, etc.) Ability to capture requirements and communicate with users Strong interpersonal communication Client interaction for problem solving and staff support related to BI tools Ability to capture big data requirements from stakeholders and propose an approach to deliver a solution Ability to debug Tableau issues in production systems and propose an approach to fix those Ability to debug big data issues in production systems and propose an approach to fix those Ability to model data, gather required data Understand optimization of data sources e.g. 
consolidation of multiple data sources into one scalable solution Understand Tableau security related to limiting access to dashboards for users or groups of users Best practices for dashboard performance (not necessarily data related) Experience with extract refresh schedules e.g. finishing of ETL; extracts should refresh automatically; ask questions about how many workloads the project consists of Basic data visualization (only visualization expertise is not expected) Understanding of advanced Tableau concepts e.g. access level control Site administration experience Server administration experience Configuration of Tableau Server to support best performance Thanks & Regards Rajiv Ranjan Rai Extend Information System Inc Email: rajiv@extendinfosys.com Address: 44355 Premier Plaza UNIT 220, Ashburn, VA, USA - 20147 Web: www.extendinfosys.com "," Entry level "," Full-time "," Other "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer with SQL,https://www.linkedin.com/jobs/view/data-engineer-at-ubs-3474040000?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=rP1ij0SirzqB%2FJfZsR6VMw%3D%3D&position=2&pageNum=12&trk=public_jobs_jserp-result_search-card," Extend Information Systems Inc. 
",https://www.linkedin.com/company/extendinfosys?trk=public_jobs_topcard-org-name," Cary, NC "," 3 weeks ago "," Be among the first 25 applicants "," Job Title: Data Engineer with SQL skills. Location: Cary, NC (should be comfortable relocating within 1 month of selection). Duration: Full-time. Job Description: Proficiency in understanding data and writing queries; in-depth understanding of joins, complex queries, subqueries, data analysis and data quality testing. Understanding of query optimization techniques. Data profiling on popular databases (Oracle, MySQL, Hive, Impala, etc.). Ability to capture requirements and communicate with users. Strong interpersonal communication. Client interaction for problem solving and staff support related to BI tools. Ability to capture big data requirements from stakeholders and propose an approach to deliver a solution. Ability to debug Tableau issues in production systems and propose an approach to fix those. Ability to debug big data issues in production systems and propose an approach to fix those. Ability to model data and gather required data. Understand optimization of data sources, e.g. consolidation of multiple data sources into one scalable solution. Understand Tableau security related to limiting access to dashboards for users or groups of users. Best practices for dashboard performance (not necessarily data related). Experience with extract refresh schedules, e.g. finishing of ETL; extracts should refresh automatically; ask questions around how many workloads the project consists of. Basic data visualization (visualization expertise alone is not expected). Understanding of advanced Tableau concepts, e.g.
access-level control. Site administration experience. Server administration experience. Configuration of Tableau Server to support best performance. Thanks & Regards, Rajiv Ranjan Rai, Extend Information System Inc. Email: rajiv@extendinfosys.com. Address: 44355 Premier Plaza UNIT 220, Ashburn, VA, USA - 20147. Web: WWW.extendinfosys.com "," Entry level "," Full-time "," Other "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer with SQL,https://www.linkedin.com/jobs/view/data-engineer-at-motor-information-systems-3475923720?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=icE1jKZL4w%2Bb%2FtRLnEl1DQ%3D%3D&position=3&pageNum=12&trk=public_jobs_jserp-result_search-card," Extend Information Systems Inc. ",https://www.linkedin.com/company/extendinfosys?trk=public_jobs_topcard-org-name," Cary, NC "," 3 weeks ago "," Be among the first 25 applicants "," Job Title: Data Engineer with SQL skills. Location: Cary, NC (should be comfortable relocating within 1 month of selection). Duration: Full-time. Job Description: Proficiency in understanding data and writing queries; in-depth understanding of joins, complex queries, subqueries, data analysis and data quality testing. Understanding of query optimization techniques. Data profiling on popular databases (Oracle, MySQL, Hive, Impala, etc.). Ability to capture requirements and communicate with users. Strong interpersonal communication. Client interaction for problem solving and staff support related to BI tools. Ability to capture big data requirements from stakeholders and propose an approach to deliver a solution. Ability to debug Tableau issues in production systems and propose an approach to fix those. Ability to debug big data issues in production systems and propose an approach to fix those. Ability to model data and gather required data. Understand optimization of data sources, e.g.
consolidation of multiple data sources into one scalable solution. Understand Tableau security related to limiting access to dashboards for users or groups of users. Best practices for dashboard performance (not necessarily data related). Experience with extract refresh schedules, e.g. finishing of ETL; extracts should refresh automatically; ask questions around how many workloads the project consists of. Basic data visualization (visualization expertise alone is not expected). Understanding of advanced Tableau concepts, e.g. access-level control. Site administration experience. Server administration experience. Configuration of Tableau Server to support best performance. Thanks & Regards, Rajiv Ranjan Rai, Extend Information System Inc. Email: rajiv@extendinfosys.com. Address: 44355 Premier Plaza UNIT 220, Ashburn, VA, USA - 20147. Web: WWW.extendinfosys.com "," Entry level "," Full-time "," Other "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-optomi-3510094886?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=2kxnHQA2U2A9mSM5%2BRd9xA%3D%3D&position=4&pageNum=12&trk=public_jobs_jserp-result_search-card," Optomi ",https://www.linkedin.com/company/optomi?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," Over 200 applicants ","YOU MUST BE A US CITIZEN OR HAVE A GREEN CARD TO APPLY. THIS ROLE IS NOT AVAILABLE ON A C2C BASIS and DOES NOT offer H1B Sponsorship. Optomi, in partnership with a leader in Retail/Restaurant, is looking for a Data Engineer to join our team! This Data Engineer will be responsible for helping build the pipeline that feeds our new data warehouse solution and modernize our existing tech stack. What You’ll Do: Lead the design of scalable system architectures to support cross-functional interfaces based on interactions with the user community and knowledge of enterprise architecture practices.
Assure solutions meet all user requirements, as well as to provide for future growth and expansion. Work with the business to identify high-level functional and technical requirements, develop ETL processes between multiple endpoints (including SOAP/REST APIs, SQL Server, S3, Redshift, and others), and support expansion of future “big data” projects. Work hands-on with a talented team of engineers to design, develop, test, and document (high-level system design diagrams and workflows) backend systems. Preferred Skills: Strong experience writing SQL queries for SQL Server or another Relational Database is required. Experienced with ETL tools such as Informatica, Talend, Boomi, SSIS etc is a plus. Data integrations between cloud and SaaS solutions Experience with Linux, Python, XML, Redshift, JavaScript, Groovy, JSON Understanding of cloud-based technology, preferably AWS. Successful track record of developing quality Enterprise scale software products and shipping production ready software for cloud/SaaS Solutions. Experience debugging distributed systems with high data loads. Experience with logging, monitoring, and alerting. Knowledge of Development IDEs and ability to use version control software such as GIT. Understanding of Web Services protocols such as REST, SOAP and API design. Data Modelling experience, Experience as an applications programmer on large-scale database management systems."," Mid-Senior level "," Full-time "," Information Technology "," Restaurants, IT Services and IT Consulting, and Retail " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-merck-3531357999?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=UMU%2FD2Hb%2FBB4eKQqAmVnEQ%3D%3D&position=5&pageNum=12&trk=public_jobs_jserp-result_search-card," Optomi ",https://www.linkedin.com/company/optomi?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," Over 200 applicants "," YOU MUST BE A US CITIZEN OR HAVE A GREEN CARD TO APPLY. 
THIS ROLE IS NOT AVAILABLE ON A C2C BASIS and DOES NOT offer H1B Sponsorship. Optomi, in partnership with a leader in Retail/Restaurant, is looking for a Data Engineer to join our team! This Data Engineer will be responsible for helping build the pipeline that feeds our new data warehouse solution and modernize our existing tech stack. What You’ll Do: Lead the design of scalable system architectures to support cross-functional interfaces based on interactions with the user community and knowledge of enterprise architecture practices. Assure solutions meet all user requirements, as well as provide for future growth and expansion. Work with the business to identify high-level functional and technical requirements, develop ETL processes between multiple endpoints (including SOAP/REST APIs, SQL Server, S3, Redshift, and others), and support expansion of future “big data” projects. Work hands-on with a talented team of engineers to design, develop, test, and document (high-level system design diagrams and workflows) backend systems. Preferred Skills: Strong experience writing SQL queries for SQL Server or another relational database is required. Experience with ETL tools such as Informatica, Talend, Boomi, SSIS, etc. is a plus. Data integrations between cloud and SaaS solutions. Experience with Linux, Python, XML, Redshift, JavaScript, Groovy, JSON. Understanding of cloud-based technology, preferably AWS. Successful track record of developing quality enterprise-scale software products and shipping production-ready software for cloud/SaaS solutions. Experience debugging distributed systems with high data loads. Experience with logging, monitoring, and alerting. Knowledge of development IDEs and ability to use version control software such as Git. Understanding of web services protocols such as REST, SOAP and API design. Data modelling experience. Experience as an applications programmer on large-scale database management systems.
"," Mid-Senior level "," Full-time "," Information Technology "," Restaurants, IT Services and IT Consulting, and Retail " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-ohi-3512759684?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=zzvUvjw9njNcvThPCv1bRA%3D%3D&position=6&pageNum=12&trk=public_jobs_jserp-result_search-card," Optomi ",https://www.linkedin.com/company/optomi?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," Over 200 applicants "," YOU MUST BE A US CITIZEN OR HAVE A GREEN CARD TO APPLY. THIS ROLE IS NOT AVAILABLE ON A C2C BASIS and DOES NOT offer H1B Sponsorship. Optomi, in partnership with a leader in Retail/Restaurant, is looking for a Data Engineer to join our team! This Data Engineer will be responsible for helping build the pipeline that feeds our new data warehouse solution and modernize our existing tech stack. What You’ll Do: Lead the design of scalable system architectures to support cross-functional interfaces based on interactions with the user community and knowledge of enterprise architecture practices. Assure solutions meet all user requirements, as well as provide for future growth and expansion. Work with the business to identify high-level functional and technical requirements, develop ETL processes between multiple endpoints (including SOAP/REST APIs, SQL Server, S3, Redshift, and others), and support expansion of future “big data” projects. Work hands-on with a talented team of engineers to design, develop, test, and document (high-level system design diagrams and workflows) backend systems.
Preferred Skills: Strong experience writing SQL queries for SQL Server or another relational database is required. Experience with ETL tools such as Informatica, Talend, Boomi, SSIS, etc. is a plus. Data integrations between cloud and SaaS solutions. Experience with Linux, Python, XML, Redshift, JavaScript, Groovy, JSON. Understanding of cloud-based technology, preferably AWS. Successful track record of developing quality enterprise-scale software products and shipping production-ready software for cloud/SaaS solutions. Experience debugging distributed systems with high data loads. Experience with logging, monitoring, and alerting. Knowledge of development IDEs and ability to use version control software such as Git. Understanding of web services protocols such as REST, SOAP and API design. Data modelling experience. Experience as an applications programmer on large-scale database management systems. "," Mid-Senior level "," Full-time "," Information Technology "," Restaurants, IT Services and IT Consulting, and Retail " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-stand-8-technology-services-3518335857?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=eXHu%2BL4SPSF96D9H6eHmIw%3D%3D&position=7&pageNum=12&trk=public_jobs_jserp-result_search-card," STAND 8 Technology Services ",https://www.linkedin.com/company/westand8?trk=public_jobs_topcard-org-name," United States "," 2 days ago "," Over 200 applicants ","STAND 8 is a global leader providing end-to-end IT Solutions. We solve business problems through PEOPLE, PROCESS, and TECHNOLOGY and are looking for individuals to help us scale software projects designed to change the world! We are hiring a Data Engineer for a cutting-edge media entertainment company. You'll be joining a passionate team of data scientists and Python developers to build and implement solutions and utilize the best practices to keep us ahead of the industry.
Highly motivated, inquisitive, willing to learn, and interested in the technology + subject matter (sports media). Build, test, and deploy cloud-native solutions used by our Enterprise Customer, supporting billions of calls per month with hundreds of millions of TV viewers. Work on an empowered cross-functional team with Product, Engineering, Design and DevOps to deliver on your team’s core mission. Learn best practices to optimize your team’s applications for maximum speed, scale, security, and maintainability. Participate in Pull/Merge Request reviews. Be a trusted member of your team, fostering a strong culture of collaboration and best practices in the software development lifecycle. Skills and Qualifications: 2+ years of experience in professional software development. Experience building services and features in a production environment. Extremely comfortable coding in at least one language. Knowledge of Computer Science fundamentals. Experience utilizing SQL and/or NoSQL databases. Experience working in GitLab/GitHub. Proven success working in Agile environments (Scrum, Kanban, etc.). Experience with cloud-native architectures (AWS serverless and/or containers preferred) at scale in production (or a strong desire to learn this and gain these skills). The US base salary range for this contract position is $50-$60/hour. Our salary ranges are determined by role, level, and location. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position across all US locations.
Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training"," Entry level "," Contract "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-one-3499583532?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=TVaMm5EaRUcn0dD%2FsfSGxQ%3D%3D&position=8&pageNum=12&trk=public_jobs_jserp-result_search-card," ONE ",https://www.linkedin.com/company/oneapp?trk=public_jobs_topcard-org-name," Austin, TX "," 2 weeks ago "," 41 applicants ","About ONE ONE is on a mission to help people live healthier financial lives. We’re doing this by creating simple solutions to help our customers save, spend, and grow their money – all in one place. The U.S. consumer today deserves better. Millions of Americans today can’t access credit, build savings or wealth, and are left to manage their financial lives through multiple disconnected apps. Almost a quarter of U.S. adults are unbanked or underbanked and roughly 80% of fintech users rely on multiple accounts to manage their finances. What makes us unique? We are backed by a preeminent fintech investor (Ribbit) and the world’s largest retailer (Walmart), maintain the speed and independence of a startup, and employ a strong (and growing) collection of world-class talent. There’s never been a better moment to build a business that helps people live healthier financial lives. Come build with us! The role As an Engineer, Data, your mandate is to build the data transformation pipelines & reporting infrastructure that serve our members and run our company. This role will impact ONE’s vision by writing, reviewing, and shipping code and collaborating with others across the company. 
You will work closely with the Engineering, Product, and Design teams, as well as Sales, Compliance and Customer Support, working with stakeholders as highly technical, communicative, and emotionally intelligent partners. This role reports to Manager, Data Engineering. This role is responsible for: Building and maintaining ONE's data infrastructure. Developing and owning ONE's streaming data transformation pipeline. Managing and supporting reporting and analytics tools. Collaborating closely with our Data Analysts and Data Scientists. Tracking and defining metrics around performance. Additional duties as assigned by your manager. You bring: Mid Career (5-10 Years). Experience in Apache Spark, Scala, Python, SQL. Experience designing and implementing low-latency data pipelines using Spark structured streaming. Experience optimizing SQL queries and lake/warehouse data structures. Cost-conscious creative problem solving. Proficient in AWS cloud services and technologies. Experience building, shipping, and growing non-trivial products & services. Passion for your craft, people, and our mission. You value quality across code, communication, and culture. A desire to keep growing your skills, and an ability to learn quickly. We hire for slope, not just y-intercept. Pay Transparency The estimated annual base salary for this position ranges from $175,000 to $190,000. Pay is generally based upon the level, complexity, responsibility, and job duties/requirements of the specific position. We then source candidates with the requisite skills, expertise, education, training, and experience. If you are selected for an interview, please feel welcome to speak to a Talent Partner about our compensation philosophy and other available benefits. Leveling Philosophy In order to thoughtfully scale the company and avoid downstream inequities, we’ve adopted a flat titling structure at ONE.
Though we may occasionally post a role externally with a prefix such as “Senior” to reflect the external level of the position, we do not use prefixes in titles like that internally unless in a position which manages a team. Internal titles typically include your specific functional responsibility, such as engineering, product management or sales, and often include additional descriptors to ensure clarity of role and placement within our organization (i.e. “Engineer, Platform”, “Sales, Business Development” or “Manager, Talent”). Employees are paid commensurate with their experience and the internal level within ONE. What it's like working @ ONE Our teams collaborate remotely and in our work spaces in New York and Sacramento. Competitive cash Benefits effective on day one Early access to a high potential, high growth fintech Generous stock option packages in an early-stage startup Remote friendly (anywhere in the US) and office friendly - you pick the schedule Flexible time off programs - vacation, sick, paid parental leave, and paid caregiver leave 401(k) plan with match Inclusion & Belonging To build technology and products that are used and loved by people and solve real-world problems, we need to build a team with many different perspectives and experiences. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We encourage candidates from all backgrounds to apply. Applicants in need of special assistance or accommodation during the interview process or in accessing our website may contact us. 
Email talent@one.app with any questions."," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer With Python & Pyspark,https://www.linkedin.com/jobs/view/data-engineer-with-python-pyspark-at-iquest-solutions-corporation-3500205514?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=Q8bI8ztYdLNinOeWMUhR3w%3D%3D&position=9&pageNum=12&trk=public_jobs_jserp-result_search-card," IQuest Solutions Corporation ",https://www.linkedin.com/company/iquest-solutions-corporation?trk=public_jobs_topcard-org-name," United States "," 2 weeks ago "," Over 200 applicants ","Required Skills: Data Engineering Skills, Big Data Technologies, Spark, Any Language - Python or Scala or Java, AWS. Qualifications: Experience working with unstructured datasets. Experience in programming using Python or Java or Scala. Experience in Pyspark or Spark. Experience creating and executing data mappings and scripts to clean, compile and analyze data. Ability to assess and maintain data pipelines, data quality in the database, and address data reporting issues. Experience developing and implementing testing protocols for data and system quality. Experience in R and Stata. Experience with data visualization and software such as Tableau and Power BI. Excellent oral, written, analytical, and communication skills with the ability to lead a technical discussion with both technical and non-technical staff.
Excellent analytical, verbal, and conflict-resolution skills."," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-op3n-3527099523?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=kvlCRQm0TkVcr0vLoIyLmA%3D%3D&position=10&pageNum=12&trk=public_jobs_jserp-result_search-card," OP3N ",https://www.linkedin.com/company/op3n?trk=public_jobs_topcard-org-name," Los Angeles, CA "," 3 weeks ago "," Be among the first 25 applicants ","Founded in 2021 as a subsidiary of EST Media Holdings, OP3N imagines a world where every human can create, own, and connect their ideas to community. Our mission is to be a launchpad for ideas and communities to create meaningful experiences together by consolidating the tools needed to mint, share and engage with NFTs and digital tokens into one vertical stack. OP3N leverages its cross-industry expertise from the entertainment, gaming and tech ecosystems to lay the foundations for a new era of community-driven, inclusive entertainment while bringing everyone together on a journey into Web3. We're looking for an experienced Senior Data Engineer to own and scale our data infrastructure as we continue to build a first-of-its-kind web3 super app. This is a unique opportunity to join a small team of engineering and product leaders at the ground level, building scalable data solutions for what could be hundreds of millions of active users globally. Key Responsibilities, Immediate: Audit current data infrastructure and availability in Cloud Firestore. Spec and implement a new data warehouse (i.e.
Snowflake) Partner with Product to develop a scalable data infrastructure and analytics framework with ever increasing user data Partner with engineering to ensure data flows into an easily queryable solution Ongoing Create and audit data infrastructure as the company, users, and product grows Build well-designed scalable systems to extract, transform, and load data into warehouses from a variety of sources Own data for new products/features: define data sources and architecture, set up data pipeline, design warehouse & reporting, etc. Explore available technologies and design solutions to continuously improve our data quality, workflow reliability, and scalability while reporting performance and capabilities Ideal Background & Skillset 5+ years of professional experience in data engineering 3-4 years of deep data pipeline experience including data warehouse setup Proven builder - has created & maintained warehouses Excellent written and verbal communication skills - able to clearly communicate needs and directions asynchronously across key stakeholders Experience partnering with Product and Engineering teams on large scale data projects Technical expertise: Fluency in Python Knowledge of Kafka & streaming technologies Machine learning + data modeling expertise a plus"," Entry level "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,"Software Engineer, Data Infrastructure",https://www.linkedin.com/jobs/view/software-engineer-data-infrastructure-at-reddit-inc-3503717389?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=W1ldQEKOFZeXuxEuRZX0Zw%3D%3D&position=11&pageNum=12&trk=public_jobs_jserp-result_search-card," Reddit, Inc. ",https://www.linkedin.com/company/reddit-com?trk=public_jobs_topcard-org-name," United States "," 2 weeks ago "," Over 200 applicants ","Reddit is a community of communities where people can dive into anything through experiences built around their interests, hobbies, and passions. 
Our mission is to bring community, belonging, and empowerment to everyone in the world. Reddit users submit, vote, and comment on content, stories, and discussions about the topics they care about the most. From pets to parenting, there’s a community for everybody on Reddit and with over 50 million daily active users, it is home to the most open and authentic conversations on the internet. For more information, visit redditinc.com. Our mission is to bring community and belonging to everyone in the world. Reddit is a community of communities where people can dive into anything through experiences built around their interests, hobbies, and passions. With more than 50 million people visiting 100,000+ communities daily, it is home to the most open and authentic conversations on the internet. From pets to parenting, skincare to stocks, there’s a community for everybody on Reddit. For more information, visit redditinc.com. Our community of users generates over 65B analytics events per day, each of which is ingested by the Data Infrastructure team into a data warehouse that sees 55,000+ daily queries. As a data infrastructure engineer, you will build and maintain the systems and tools used across Reddit to generate, ingest, and store petabytes of raw data. You will collaborate with your team and partner teams like Machine Learning and Ads to create and improve scalable, fault-tolerant, self-serve systems. You will also develop standards and frameworks to ensure a high level of data quality to help shape the data culture across all of Reddit! How You Will Contribute: Refine and maintain our data infrastructure technologies to support real-time analysis of hundreds of millions of users. Own the data pipeline that surfaces 65B+ daily events to all teams, and the tools we use for ingestion and storage and to improve data quality. Support warehousing, analytics and ML customers that rely on our data pipeline for analysis, modeling, and reporting.
Build data pipelines with distributed streaming tools such as Kafka, Kinesis, Flink, or Spark Ship quality code to enable scalable, fault-tolerant and resilient services in a multi-cloud architecture Qualifications 3+ years of coding experience in a production setting writing clean, maintainable, and well-tested code. Experience with object-oriented programming languages such as Scala, Python, Go, or Java. Degree in Computer Science or equivalent technical field. Experience working with Terraform, Helm, Prometheus, Docker, Kubernetes, and CI/CD. Excellent communication skills to collaborate with stakeholders in engineering, data science, machine learning, and product. Benefits Comprehensive Health benefits 401k Matching Workspace benefits for your home office Personal & Professional development funds Family Planning Support Flexible Vacation & Reddit Global Days Off 4+ months paid Parental Leave Paid Volunteer time off Pay Transparency This job posting may span more than one career level. In addition to base salary, this job is eligible to receive equity in the form of restricted stock units, and depending on the position offered, it may also be eligible to receive a commission. Additionally, Reddit offers a wide range of benefits to U.S.-based employees, including medical, dental, and vision insurance, 401(k) program with employer match, generous time off for vacation, and parental leave. To learn more, please visit https://www.redditinc.com/careers/. To provide greater transparency to candidates, we share base pay ranges for all US-based job postings regardless of state. We set standard base pay ranges for all roles based on function, level, and country location, benchmarked against similar stage growth companies. Final offer amounts are determined by multiple factors including, skills, depth of work experience and relevant licenses/credentials, and may vary from the amounts listed below. The base pay range for this position is: $157,400 - $236,100. 
Reddit is committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, please contact us at ApplicationAssistance@Reddit.com."," Not Applicable "," Full-time "," Engineering and Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer Intern,https://www.linkedin.com/jobs/view/data-engineer-intern-at-covetrus-3523792790?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=c6sk84xGsqbhv40eqyTbJA%3D%3D&position=12&pageNum=12&trk=public_jobs_jserp-result_search-card," Covetrus ",https://www.linkedin.com/company/covetrus?trk=public_jobs_topcard-org-name," Portland, ME "," 11 hours ago "," 83 applicants ","Covetrus is a global animal-health technology and services company dedicated to empowering veterinary practice partners to drive improved health and financial outcomes. We’re bringing together products, services, and technology into a single platform that connects our customers to the solutions and insights they need to work best. Our passion for the well-being of animals and those who care for them drives us to advance the world of veterinary medicine. Covetrus has more than 5,000 employees, serving over 100,000 customers around the globe. Come explore the possibilities in our exciting, fast paced, high volume, work environment! Summary The Engineering Internship provides an immersive 3-month experience working directly within one of Covetrus’ Data Engineering teams: Data Pipeline Development Data Science Reporting & Analytics Interns will have the opportunity to learn the day-to-day functions of their role while gaining exposure to the Covetrus organization, leadership, and each other. Interns may also have the opportunity to apply to transition into the Engineering Department after graduation. 
Responsibilities: Working with a mentor to learn and understand requirements and translating them to specifications. Assist with developing data pipelines for our Data Lake. Unit testing the code you will develop. Perform code management into a shared code repository. Promote code into a Production environment. Required Knowledge And Skills: Relevant coursework in Software Engineering. Highly motivated self-starter with a desire to learn. Proficient in Java or Python programming languages. Basic experience with SQL and relational databases. Demonstrates a can-do attitude, meeting challenges and taking on unfamiliar tasks. Shows personal resilience; maintains energy and optimism, and rebounds quickly from challenges. Education: Currently pursuing a bachelor’s or master's degree in Computer Science or a related field. Covetrus is an equal opportunity/affirmative action employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law. Jobs that are in Colorado: If you are a Colorado applicant, you are eligible to receive information about the salary range and benefits for this role. Please contact recruitment@covetrus.com"," Internship "," Full-time "," Information Technology "," Veterinary Services " Data Engineer,United States,Data Engineer Intern,https://www.linkedin.com/jobs/view/data-engineer-at-one-3499585219?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=fGqLMsaTGtuq%2B51zBb1S%2Fw%3D%3D&position=13&pageNum=12&trk=public_jobs_jserp-result_search-card," Covetrus ",https://www.linkedin.com/company/covetrus?trk=public_jobs_topcard-org-name," Portland, ME "," 11 hours ago "," 83 applicants "," Covetrus is a global animal-health technology and services company dedicated to empowering veterinary practice partners to drive improved health and financial outcomes.
We’re bringing together products, services, and technology into a single platform that connects our customers to the solutions and insights they need to work best. Our passion for the well-being of animals and those who care for them drives us to advance the world of veterinary medicine. Covetrus has more than 5,000 employees, serving over 100,000 customers around the globe. Come explore the possibilities in our exciting, fast paced, high volume, work environment! Summary The Engineering Internship provides an immersive 3-month experience working directly within one of Covetrus’ Data Engineering teams: Data Pipeline Development Data Science Reporting & Analytics Interns will have the opportunity to learn the day-to-day functions of their role while gaining exposure to the Covetrus organization, leadership, and each other. Interns may also have the opportunity to apply to transition into the Engineering Department after graduation. Responsibilities Working with a mentor to learn and understand requirements and translating them to specifications Assist with developing data pipelines for our Data Lake Unit testing the code you will develop Perform code management into a shared code repository Promote code into a Production environment Required Knowledge And Skills Relevant coursework in Software Engineering Highly motivated self-starter with a desire to learn Proficient in Java or Python programming languages Basic experience with SQL and relational databases Demonstrates a can-do attitude meeting challenge, and to take on unfamiliar tasks Shows personal resilience; maintains energy and optimism, and rebounds quickly from challenges Education Currently pursuing a bachelor’s or master's degree in Computer Science or a related field. Covetrus is an equal opportunity/affirmative action employer. 
All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law. Jobs that are in Colorado: If you are a Colorado applicant, you are eligible to receive information about the salary range and benefits for this role. Please contact recruitment@covetrus.com "," Internship "," Full-time "," Information Technology "," Veterinary Services " Data Engineer,United States,Data Engineer Intern,https://www.linkedin.com/jobs/view/data-engineer-at-audacy-inc-3531423986?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=tuhr61MZA8P2TfltRz3QZA%3D%3D&position=14&pageNum=12&trk=public_jobs_jserp-result_search-card," Covetrus ",https://www.linkedin.com/company/covetrus?trk=public_jobs_topcard-org-name," Portland, ME "," 11 hours ago "," 83 applicants "," Covetrus is a global animal-health technology and services company dedicated to empowering veterinary practice partners to drive improved health and financial outcomes. We’re bringing together products, services, and technology into a single platform that connects our customers to the solutions and insights they need to work best. Our passion for the well-being of animals and those who care for them drives us to advance the world of veterinary medicine. Covetrus has more than 5,000 employees, serving over 100,000 customers around the globe. Come explore the possibilities in our exciting, fast paced, high volume, work environment! Summary The Engineering Internship provides an immersive 3-month experience working directly within one of Covetrus’ Data Engineering teams: Data Pipeline Development Data Science Reporting & Analytics Interns will have the opportunity to learn the day-to-day functions of their role while gaining exposure to the Covetrus organization, leadership, and each other. 
Interns may also have the opportunity to apply to transition into the Engineering Department after graduation. Responsibilities Working with a mentor to learn and understand requirements and translating them to specifications Assist with developing data pipelines for our Data Lake Unit testing the code you will develop Perform code management into a shared code repository Promote code into a Production environment Required Knowledge And Skills Relevant coursework in Software Engineering Highly motivated self-starter with a desire to learn Proficient in Java or Python programming languages Basic experience with SQL and relational databases Demonstrates a can-do attitude meeting challenge, and to take on unfamiliar tasks Shows personal resilience; maintains energy and optimism, and rebounds quickly from challenges Education Currently pursuing a bachelor’s or master's degree in Computer Science or a related field. Covetrus is an equal opportunity/affirmative action employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law. Jobs that are in Colorado: If you are a Colorado applicant, you are eligible to receive information about the salary range and benefits for this role. Please contact recruitment@covetrus.com "," Internship "," Full-time "," Information Technology "," Veterinary Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-one-3499580721?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=4SgEHKtGYjD8orkGknYrgg%3D%3D&position=15&pageNum=12&trk=public_jobs_jserp-result_search-card," ONE ",https://www.linkedin.com/company/oneapp?trk=public_jobs_topcard-org-name," Maryland, United States "," 2 weeks ago "," Be among the first 25 applicants ","About ONE ONE is on a mission to help people live healthier financial lives. 
We’re doing this by creating simple solutions to help our customers save, spend, and grow their money – all in one place. The U.S. consumer today deserves better. Millions of Americans today can’t access credit, build savings or wealth, and are left to manage their financial lives through multiple disconnected apps. Almost a quarter of U.S. adults are unbanked or underbanked and roughly 80% of fintech users rely on multiple accounts to manage their finances. What makes us unique? We are backed by a preeminent fintech investor (Ribbit) and the world’s largest retailer (Walmart), maintain the speed and independence of a startup, and employ a strong (and growing) collection of world-class talent. There’s never been a better moment to build a business that helps people live healthier financial lives. Come build with us! The role As an Engineer, Data, your mandate is to build the data transformation pipelines & reporting infrastructure that serve our members and run our company. This role will impact ONE’s vision by writing, reviewing, and shipping code and collaborating with others across the company. You will work closely across the Engineering, Product, Design as well as Sales, Compliance and Customer Support team(s) working with stakeholders as highly technical, communicative, and emotionally intelligent partners. This role reports to Manager, Data Engineering. This role is responsible for: Building and maintaining ONE's data infrastructure. Developing and owning ONE's streaming data transformation pipeline. Managing and supporting reporting and analytics tools. Collaborating closely with our Data Analysts and Data Scientists. Tracking and defining metrics around performance. Additional duties as assigned by your manager. You bring Mid Career (5-10 Years) Experience in Apache Spark, Scala, Python, SQL. Experience designing and implementing low latency data pipelines using Spark structured streaming. 
Experience optimizing SQL queries and lake / warehouse data structures. Cost-conscious creative problem solving. Proficient in AWS cloud services and technologies. Experience building, shipping — and growing — non-trivial products & services. Passion for your craft, people, and our mission. You value quality across code, communication, and culture. A desire to keep growing your skills, and an ability to learn quickly. We hire for slope, not just y-intercept. Pay Transparency The estimated annual base salary for this position ranges from $175,000 to $190,000. Pay is generally based upon the level, complexity, responsibility, and job duties/requirements of the specific position. We then source candidates with the requisite skills, expertise, education, training, and experience. If you are selected for an interview, please feel welcome to speak to a Talent Partner about our compensation philosophy and other available benefits. Leveling Philosophy In order to thoughtfully scale the company and avoid downstream inequities, we’ve adopted a flat titling structure at ONE. Though we may occasionally post a role externally with a prefix such as “Senior” to reflect the external level of the position, we do not use prefixes in titles like that internally unless in a position which manages a team. Internal titles typically include your specific functional responsibility, such as engineering, product management or sales, and often include additional descriptors to ensure clarity of role and placement within our organization (e.g. “Engineer, Platform”, “Sales, Business Development” or “Manager, Talent”). Employees are paid commensurate with their experience and the internal level within ONE. What it's like working @ ONE Our teams collaborate remotely and in our work spaces in New York and Sacramento. 
Competitive cash Benefits effective on day one Early access to a high potential, high growth fintech Generous stock option packages in an early-stage startup Remote friendly (anywhere in the US) and office friendly - you pick the schedule Flexible time off programs - vacation, sick, paid parental leave, and paid caregiver leave 401(k) plan with match Inclusion & Belonging To build technology and products that are used and loved by people and solve real-world problems, we need to build a team with many different perspectives and experiences. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We encourage candidates from all backgrounds to apply. Applicants in need of special assistance or accommodation during the interview process or in accessing our website may contact us. Email talent@one.app with any questions."," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Lead Data Engineer,https://www.linkedin.com/jobs/view/lead-data-engineer-at-cvs-health-3493931467?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=Gp%2B8fRj0VcLQ7MnffZhvIw%3D%3D&position=16&pageNum=12&trk=public_jobs_jserp-result_search-card," CVS Health ",https://www.linkedin.com/company/cvs-health?trk=public_jobs_topcard-org-name," New York, NY "," 3 weeks ago "," 188 applicants ","HEALTH IS THE REAL WIN Join our team in Analytics & Behavior Change and help us pioneer bold new solutions that make it possible for our CVS Members to meet their health goals today and tomorrow. We are seeking a highly skilled Lead Data Engineer to join our innovative, agile, and fast-paced team. The team provides platform modernization, data delivery, and operations support to dozens of Analytics teams aligned to Healthcare, Retail, and Pharmacy initiatives. 
The ideal candidate will have a minimum of 5 years experience delivering creative automation, methodology, and infrastructure solutions to on-prem and cloud-based big-data customers. The Lead Data Engineer will collaborate with business leaders, support teams, and junior engineers to modernize workflows, automate routine operations, and solve complex business requirements. Collaborate with Analytics Engineering and supporting infrastructure teams to support and implement Linux-based workflow automations controlling HDFS, Hive, BigQuery, FTP, and other Big Data ecosystem components to present datasets to our customers. Use troubleshooting skills to identify and correct root cause of workflow failures based on error log outputs and environmental conditions. Participate in monthly Oncall support to cover after-hours, weekend, and holiday support needs. ETL data to and from GCP, Hadoop, and AWS Use SQL to examine, filter, and aggregate data in Hive, BigQuery, MySQL, SQL Server, DB2, and Oracle. Use programming skills including Python and Spark to automate workflow inventory, workflow execution, and publish pipeline health status for broad visibility. Use your experience and curiosity to deliver innovative results that boost productivity and efficiency Accept, engage, and complete multiple, limited scope assignments on time and autonomously Pay Range The typical pay range for this role is: Minimum: 115,000 Maximum: 230,000 Please keep in mind that this range represents the pay range for all positions in the job grade within which this position falls. The actual salary offer will take into account a wide range of factors, including location. 
Required Qualifications 5+ years interoperating with Big Data and Cloud Native ecosystems 5+ years building on the Linux platform using schedulers, scripts, and shell utilities 5+ years programming in Python, Spark, or similar 5+ years of writing SQL and DDL for Hive, BQ, and mainstream DBMS Preferred Qualifications Google Cloud Associate or Professional certification Demonstrated ability to take requirements from leadership and collaborate with engineering support Track record of innovating and delivering complex solutions Excellent collaboration and communication skills Experience working with data transformation processing Experience with Incident Management and Ticketing systems like ServiceNow and JIRA Education Bachelor’s degree or equivalent work experience in Computer Science, Engineering, Machine Learning, or related discipline. Master’s degree or PhD preferred Business Overview Bring your heart to CVS Health Every one of us at CVS Health shares a single, clear purpose: Bringing our heart to every moment of your health. This purpose guides our commitment to deliver enhanced human-centric health care for a rapidly changing world. Anchored in our brand — with heart at its center — our purpose sends a personal message that how we deliver our services is just as important as what we deliver. Our Heart At Work Behaviors™ support this purpose. We want everyone who works at CVS Health to feel empowered by the role they play in transforming our culture and accelerating our ability to innovate and deliver solutions to make health care more personal, convenient and affordable. We strive to promote and sustain a culture of diversity, inclusion and belonging every day. CVS Health is an affirmative action employer, and is an equal opportunity employer, as are the physician-owned businesses for which CVS Health provides management services. 
We do not discriminate in recruiting, hiring, promotion, or any other personnel action based on race, ethnicity, color, national origin, sex/gender, sexual orientation, gender identity or expression, religion, age, disability, protected veteran status, or any other characteristic protected by applicable federal, state, or local law. We proudly support and encourage people with military experience (active, veterans, reservists and National Guard) as well as military spouses to apply for CVS Health job opportunities."," Mid-Senior level "," Full-time "," Information Technology and Engineering "," Wellness and Fitness Services and Hospitals and Health Care " Data Engineer,United States,Lead Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-wimmer-solutions-3468515434?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=55gpuVE89er28dwXlThZ5A%3D%3D&position=17&pageNum=12&trk=public_jobs_jserp-result_search-card," CVS Health ",https://www.linkedin.com/company/cvs-health?trk=public_jobs_topcard-org-name," New York, NY "," 3 weeks ago "," 188 applicants "," HEALTH IS THE REAL WIN Join our team in Analytics & Behavior Change and help us pioneer bold new solutions that make it possible for our CVS Members to meet their health goals today and tomorrow. We are seeking a highly skilled Lead Data Engineer to join our innovative, agile, and fast-paced team. The team provides platform modernization, data delivery, and operations support to dozens of Analytics teams aligned to Healthcare, Retail, and Pharmacy initiatives. The ideal candidate will have a minimum of 5 years experience delivering creative automation, methodology, and infrastructure solutions to on-prem and cloud-based big-data customers. 
The Lead Data Engineer will collaborate with business leaders, support teams, and junior engineers to modernize workflows, automate routine operations, and solve complex business requirements. Collaborate with Analytics Engineering and supporting infrastructure teams to support and implement Linux-based workflow automations controlling HDFS, Hive, BigQuery, FTP, and other Big Data ecosystem components to present datasets to our customers. Use troubleshooting skills to identify and correct root cause of workflow failures based on error log outputs and environmental conditions. Participate in monthly Oncall support to cover after-hours, weekend, and holiday support needs. ETL data to and from GCP, Hadoop, and AWS Use SQL to examine, filter, and aggregate data in Hive, BigQuery, MySQL, SQL Server, DB2, and Oracle. Use programming skills including Python and Spark to automate workflow inventory, workflow execution, and publish pipeline health status for broad visibility. Use your experience and curiosity to deliver innovative results that boost productivity and efficiency Accept, engage, and complete multiple, limited scope assignments on time and autonomously Pay Range The typical pay range for this role is: Minimum: 115,000 Maximum: 230,000 Please keep in mind that this range represents the pay range for all positions in the job grade within which this position falls. 
The actual salary offer will take into account a wide range of factors, including location. Required Qualifications 5+ years interoperating with Big Data and Cloud Native ecosystems 5+ years building on the Linux platform using schedulers, scripts, and shell utilities 5+ years programming in Python, Spark, or similar 5+ years of writing SQL and DDL for Hive, BQ, and mainstream DBMS Preferred Qualifications Google Cloud Associate or Professional certification Demonstrated ability to take requirements from leadership and collaborate with engineering support Track record of innovating and delivering complex solutions Excellent collaboration and communication skills Experience working with data transformation processing Experience with Incident Management and Ticketing systems like ServiceNow and JIRA Education Bachelor’s degree or equivalent work experience in Computer Science, Engineering, Machine Learning, or related discipline. Master’s degree or PhD preferred Business Overview Bring your heart to CVS Health Every one of us at CVS Health shares a single, clear purpose: Bringing our heart to every moment of your health. This purpose guides our commitment to deliver enhanced human-centric health care for a rapidly changing world. Anchored in our brand — with heart at its center — our purpose sends a personal message that how we deliver our services is just as important as what we deliver. Our Heart At Work Behaviors™ support this purpose. We want everyone who works at CVS Health to feel empowered by the role they play in transforming our culture and accelerating our ability to innovate and deliver solutions to make health care more personal, convenient and affordable. We strive to promote and sustain a culture of diversity, inclusion and belonging every day. CVS Health is an affirmative action employer, and is an equal opportunity employer, as are the physician-owned businesses for which CVS Health provides management services. 
We do not discriminate in recruiting, hiring, promotion, or any other personnel action based on race, ethnicity, color, national origin, sex/gender, sexual orientation, gender identity or expression, religion, age, disability, protected veteran status, or any other characteristic protected by applicable federal, state, or local law. We proudly support and encourage people with military experience (active, veterans, reservists and National Guard) as well as military spouses to apply for CVS Health job opportunities. "," Mid-Senior level "," Full-time "," Information Technology and Engineering "," Wellness and Fitness Services and Hospitals and Health Care " Data Engineer,United States,Lead Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-propel-solutions-inc-3510658391?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=7c7rorC%2BFvZxR78Be8KISg%3D%3D&position=18&pageNum=12&trk=public_jobs_jserp-result_search-card," CVS Health ",https://www.linkedin.com/company/cvs-health?trk=public_jobs_topcard-org-name," New York, NY "," 3 weeks ago "," 188 applicants "," HEALTH IS THE REAL WIN Join our team in Analytics & Behavior Change and help us pioneer bold new solutions that make it possible for our CVS Members to meet their health goals today and tomorrow. We are seeking a highly skilled Lead Data Engineer to join our innovative, agile, and fast-paced team. The team provides platform modernization, data delivery, and operations support to dozens of Analytics teams aligned to Healthcare, Retail, and Pharmacy initiatives. The ideal candidate will have a minimum of 5 years experience delivering creative automation, methodology, and infrastructure solutions to on-prem and cloud-based big-data customers. 
The Lead Data Engineer will collaborate with business leaders, support teams, and junior engineers to modernize workflows, automate routine operations, and solve complex business requirements. Collaborate with Analytics Engineering and supporting infrastructure teams to support and implement Linux-based workflow automations controlling HDFS, Hive, BigQuery, FTP, and other Big Data ecosystem components to present datasets to our customers. Use troubleshooting skills to identify and correct root cause of workflow failures based on error log outputs and environmental conditions. Participate in monthly Oncall support to cover after-hours, weekend, and holiday support needs. ETL data to and from GCP, Hadoop, and AWS Use SQL to examine, filter, and aggregate data in Hive, BigQuery, MySQL, SQL Server, DB2, and Oracle. Use programming skills including Python and Spark to automate workflow inventory, workflow execution, and publish pipeline health status for broad visibility. Use your experience and curiosity to deliver innovative results that boost productivity and efficiency Accept, engage, and complete multiple, limited scope assignments on time and autonomously Pay Range The typical pay range for this role is: Minimum: 115,000 Maximum: 230,000 Please keep in mind that this range represents the pay range for all positions in the job grade within which this position falls. 
The actual salary offer will take into account a wide range of factors, including location. Required Qualifications 5+ years interoperating with Big Data and Cloud Native ecosystems 5+ years building on the Linux platform using schedulers, scripts, and shell utilities 5+ years programming in Python, Spark, or similar 5+ years of writing SQL and DDL for Hive, BQ, and mainstream DBMS Preferred Qualifications Google Cloud Associate or Professional certification Demonstrated ability to take requirements from leadership and collaborate with engineering support Track record of innovating and delivering complex solutions Excellent collaboration and communication skills Experience working with data transformation processing Experience with Incident Management and Ticketing systems like ServiceNow and JIRA Education Bachelor’s degree or equivalent work experience in Computer Science, Engineering, Machine Learning, or related discipline. Master’s degree or PhD preferred Business Overview Bring your heart to CVS Health Every one of us at CVS Health shares a single, clear purpose: Bringing our heart to every moment of your health. This purpose guides our commitment to deliver enhanced human-centric health care for a rapidly changing world. Anchored in our brand — with heart at its center — our purpose sends a personal message that how we deliver our services is just as important as what we deliver. Our Heart At Work Behaviors™ support this purpose. We want everyone who works at CVS Health to feel empowered by the role they play in transforming our culture and accelerating our ability to innovate and deliver solutions to make health care more personal, convenient and affordable. We strive to promote and sustain a culture of diversity, inclusion and belonging every day. CVS Health is an affirmative action employer, and is an equal opportunity employer, as are the physician-owned businesses for which CVS Health provides management services. 
We do not discriminate in recruiting, hiring, promotion, or any other personnel action based on race, ethnicity, color, national origin, sex/gender, sexual orientation, gender identity or expression, religion, age, disability, protected veteran status, or any other characteristic protected by applicable federal, state, or local law. We proudly support and encourage people with military experience (active, veterans, reservists and National Guard) as well as military spouses to apply for CVS Health job opportunities."," Mid-Senior level "," Full-time "," Information Technology and Engineering "," Wellness and Fitness Services and Hospitals and Health Care " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-ennuviz-3488119247?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=IKd5%2BCtYogxDh3z67TrZaA%3D%3D&position=19&pageNum=12&trk=public_jobs_jserp-result_search-card," Ennuviz ",https://www.linkedin.com/company/ennuviz?trk=public_jobs_topcard-org-name," Minneapolis, MN "," 3 weeks ago "," Be among the first 25 applicants ","At Ennuviz, you'll deliver mission-critical technology and business solutions to Fortune 500 companies and some of the most recognised brands on the planet. And you'll do it with cutting-edge technologies thanks to our close partnership with the world's most prominent vendors. We are looking to hire a Data Engineer. This is a hard-core development role. Requirements 7+ years experience in Spark Scala Programming. 
Design and development of data ingestion and transformation pipelines using Spark, Python, PySpark Hands on experience on Databricks and knowledge of AWS cloud Experience with Airflow Experience with SQL (Complex queries, Analytics and Data models) Good analytical skills and experience in agile team environment Preferred Knowledge of Snowflake and Kafka will be a plus Banking domain knowledge About Ennuviz Ennuviz offers Digital Transformation solutions to businesses to Streamline, Optimize Operating costs, and Deliver better Customer Experience and Employee Engagement. Our unique combination of skills, methodologies, frameworks, tools & governance, with well-proven accelerators and assets backed by our 30 years of technology and industry experience, help organisations to accomplish transformation goals like Operational excellence, Data-Centricity & Value of Data, Digital Twin. At Ennuviz, we are proud to be redefining the future of the way we work and live. We are creating a unique community, one of four strategic tech-enabled hubs that will redefine opportunity for everyone who works here."," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-avmed-3475918746?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=fq9YD%2FFW6pQJojOXWLHgGA%3D%3D&position=20&pageNum=12&trk=public_jobs_jserp-result_search-card," Ennuviz ",https://www.linkedin.com/company/ennuviz?trk=public_jobs_topcard-org-name," Minneapolis, MN "," 3 weeks ago "," Be among the first 25 applicants "," At Ennuviz, you'll deliver mission-critical technology and business solutions to Fortune 500 companies and some of the most recognised brands on the planet. 
And you'll do it with cutting-edge technologies thanks to our close partnership with the world's most prominent vendors. We are looking to hire a Data Engineer. This is a hard-core development role. Requirements 7+ years experience in Spark Scala Programming. Design and development of data ingestion and transformation pipelines using Spark, Python, PySpark Hands on experience on Databricks and knowledge of AWS cloud Experience with Airflow Experience with SQL (Complex queries, Analytics and Data models) Good analytical skills and experience in agile team environment Preferred Knowledge of Snowflake and Kafka will be a plus Banking domain knowledge About Ennuviz Ennuviz offers Digital Transformation solutions to businesses to Streamline, Optimize Operating costs, and Deliver better Customer Experience and Employee Engagement. Our unique combination of skills, methodologies, frameworks, tools & governance, with well-proven accelerators and assets backed by our 30 years of technology and industry experience, help organisations to accomplish transformation goals like Operational excellence, Data-Centricity & Value of Data, Digital Twin. At Ennuviz, we are proud to be redefining the future of the way we work and live. We are creating a unique community, one of four strategic tech-enabled hubs that will redefine opportunity for everyone who works here. 
"," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/informatica-data-engineer-at-fusion-alliance-3512350726?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=aZD3qNwxhfGez4vPXZZqdA%3D%3D&position=21&pageNum=12&trk=public_jobs_jserp-result_search-card," Ennuviz ",https://www.linkedin.com/company/ennuviz?trk=public_jobs_topcard-org-name," Minneapolis, MN "," 3 weeks ago "," Be among the first 25 applicants "," At Ennuviz, youll deliver mission-critical technology and business solutions to Fortune 500companies and some of the most recognised brands on the planet. And youll do it withcutting-edge technologies thanks to our close partnership with the worlds most prominentvendors.We are looking to hire a Data EngineerThis is a hard core development roleRequirements7+ years experience in Spark Scala Programing.Design and development of data ingestion and transformation pipelines using Spark, Python, PySparkHands on experience on Databricks and knowledge of AWS cloud Experience with AirflowExperience with SQL (Complex queries, Analytics and Data models) Good analytical skills and experience in agile team environmentPreferredKnowledge of Snowflake and Kafka will be a plusBanking domain knowledgeAbout EnnuvizEnnuviz offers Digital Transformation solutions to businesses to Streamline, Optimize Operating costs, and Delivering better Customer Experience and Employee Engagement. Ourunique combination of skills, methodologies, frameworks, tools & governance, with well-proven accelerators and assets backed by our 30 years of technology and industry experience,help organisations to accomplish transformation goals like Operational excellence, Data-Centricity & Value of Data, Digital Twin .At Ennuviz , we are proud to be redefining the future of the way we work and live. 
We are creating a unique community, one of four strategic tech-enabled hubs that will redefine opportunity for everyone who works here. "," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-eze-3470792962?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=s5C7dgAdlo9yw4sQRuS3bA%3D%3D&position=22&pageNum=12&trk=public_jobs_jserp-result_search-card," Eze ",https://www.linkedin.com/company/ezewholesale?trk=public_jobs_topcard-org-name," Boston, MA "," 1 month ago "," Be among the first 25 applicants ","SS&C is a global provider of investment and financial services and software for the financial services and healthcare industries. Named to Fortune 1000 list as top U.S. company based on revenue, SS&C is headquartered in Windsor, Connecticut and has 20,000+ employees in over 90 offices in 35 countries. Some 18,000 financial services and healthcare organizations, from the world's largest institutions to local firms, manage and account for their investments using SS&C's products and services. Job Description Data Engineer Location: Boston, MA – Hybrid; In Office and Remote Get To Know The Team We're looking for a talented Data Engineer to join our Data Platform team. In this role, you will work closely with our Product, Engineering, Cloud, and Service teams to build best in class infrastructure supporting data across the Eclipse Platform and Data Marketplace. This is a strategic, high-impact role that will also help shape the future of SS&C Eze Eclipse products and services. Why You Will Love It Here! 
Flexibility: Hybrid Work Model & a Business Casual Dress Code, including jeans Your Future: 401k Matching Program, Professional Development Reimbursement Work/Life Balance: Flexible Personal/Vacation Time Off, Sick Leave, Paid Holidays Your Wellbeing: Medical, Dental, Vision, Employee Assistance Program, Parental Leave Diversity & Inclusion: Committed to Welcoming, Celebrating and Thriving on Diversity Training: Hands-On, Team-Customized, including SS&C University Extra Perks: Discounts on fitness clubs, travel and more! What You Will Get To Do Design, build, and own the core data models and key infrastructure to manage data and usage across the Eclipse Platform and Data Marketplace Design and implement a data quality monitoring system. The system should include source-to-target data validations as well as anomaly detection Collaborate closely with Engineering, Product Management, & IT, to inform product decision making with data and to identify opportunities for system improvements. Including recommendations and innovation for Information architecture and insights that are valuable to our customers Collaborate with the Engineering, IT, and Security team(s) to meet data governance requirements Build dashboards to help Leadership monitor performance, availability, and user behavior of the platform Answer questions from the leadership team for reporting, publications, and industry reports Think creatively to find optimal solutions to our complex, often unstructured problems What You Will Bring 3+ years' experience delivering secure, publicly facing, highly available SaaS based insights & platform(s) 3+ years designing solutions on a public cloud provider IaaS/PaaS 5+ years communicating with the leadership level to drive awareness and decision making for business enabling technology decisions Thank you for your interest in SS&C! To further explore this opportunity, please apply through our careers page on the corporate website at www.ssctech.com/careers. 
Unless explicitly requested or approached by SS&C Technologies, Inc. or any of its affiliated companies, the company will not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. SS&C offers excellent benefits including health, dental, 401k plan, tuition and professional development reimbursement plan. SS&C Technologies is an Equal Employment Opportunity employer and does not discriminate against any applicant for employment or employee on the basis of race, color, religious creed, gender, age, marital status, sexual orientation, national origin, disability, veteran status or any other classification protected by applicable discrimination laws. "," Mid-Senior level "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-nutrisense-3514713578?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=X6CDdXLJ6KedNgc27woGQw%3D%3D&position=23&pageNum=12&trk=public_jobs_jserp-result_search-card," Nutrisense ",https://www.linkedin.com/company/nutrisenseio?trk=public_jobs_topcard-org-name," Chicago, IL "," 1 week ago "," 139 applicants ","$75 - 90k + Equity About Nutrisense If you are a Data Engineer with strong Relational DB and ETL experience, please read on… We are a HealthTech startup with a mission to help anyone discover and reach their health potential. The Nutrisense mobile app leverages continuous glucose monitoring (CGM) with access to virtual support from best-in-class dietitians so that everyone, not just individuals with diabetes, can learn about their unique food responses, understand their metabolism, and reach health goals that lead to an optimized life. Our customers range from the ultra-healthy trying to take their life to new levels to individuals who are relatively new to health monitoring and tracking. 
Our platform focuses on bringing individualized, real-time data about your metabolic health to the palm of your hands. If you are interested in joining a mission-driven startup that is disrupting the health industry and cares about creating a dynamic environment for its employees, apply today. In This Role You Will Be: Collaborating closely with data scientists, data analysts, and software engineers to build and maintain the infrastructure required for optimal extraction, transformation, and loading of data from diverse sources Designing, developing, and owning data pipelines that power analytics for business, research, and product teams Developing elegant data models to deliver business intelligence to stakeholders across the company under a self-service analytics model Implementing and evangelizing data engineering best practices, including testing, incremental modeling, documentation, and data snapshots Working in a cross-functional, distributed team environment where your self-starting personality will thrive To Thrive In This Role You Will Need: Bachelor's degree in computer science, engineering, or an equivalent field 2 - 5 years experience in a data-focused engineering role Strong working knowledge of relational databases and cloud-based analytical data warehouses Advanced expertise in SQL query design and optimization Experience with DBT, including testing, documentation, macros, and operations Experience developing and maintaining custom data integrations Experience with cloud services and products Proficiency with programming languages such as GO, Python, JavaScript, or C++ Obsessive attention to detail to catch bugs, resolve edge cases, and perform root cause analysis on internal and external data and processes. 
Effective communication and organizational skills Preferred Qualifications: Experience with GCP Ability to thrive in a fast-paced remote environment Genuine passion for health and wellness Total Compensation Package For This Role Includes: Competitive Compensation Options - Choice of Base Salary + Equity 100% Remote Environment - We encourage our employees to travel and experience life as they see fit Excellent Health Benefits - Medical, Dental, Vision Flexible PTO Work From Home Reimbursement Health & Fitness Reimbursement Nutrisense Discounts And Much Much More! Nutrisense has an organization-wide commitment to diversity, equity and inclusion. We strive to create a work environment where everyone has a sense of belonging. Individuals from historically underrepresented or underserved communities are strongly encouraged to apply. We benchmark total compensation to industry standards, taking into account candidate's experience, skill, and geographical location."," Associate "," Full-time "," Information Technology, Engineering, and Science "," Wellness and Fitness Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-chubb-3503823602?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=Y1eHCmrAtb8q1E%2Bky7dd7Q%3D%3D&position=24&pageNum=12&trk=public_jobs_jserp-result_search-card," Chubb ",https://ch.linkedin.com/company/chubb?trk=public_jobs_topcard-org-name," Jersey City, NJ "," 2 weeks ago "," 134 applicants "," We are looking for an experienced data engineer to support our Knowledge Graph team. The role focuses on ingesting data into the Knowledge Graph. Qualifications: Proficient with Python, SQL, and ETL. Needs to know, or be willing to learn, RDF, SPARQL, and OWL. The ideal candidate for this role can relate data engineering efforts to creating business value and solving real-world problems. Proficient with Python. Bachelor's degree in Computer Science, Data Science, Software Engineering or a related educational background. 
Excellent data analysis and advanced data manipulation techniques using SQL. Excellent oral and written communication skills. Excellent working knowledge of relational databases. Decent with Linux. Ability to adapt to rapidly and constantly changing stakeholder requirements. Quick to learn, ability to prioritize activities and responsive to the needs of the business. EEO Statement: At Chubb, we are committed to equal employment opportunity and compliance with all laws and regulations pertaining to it. Our policy is to provide employment, training, compensation, promotion, and other conditions or opportunities of employment, without regard to race, color, religious creed, sex, gender, gender identity, gender expression, sexual orientation, marital status, national origin, ancestry, mental and physical disability, medical condition, genetic information, military and veteran status, age, and pregnancy or any other characteristic protected by law. Performance and qualifications are the only basis upon which we hire, assign, promote, compensate, develop and retain employees. Chubb prohibits all unlawful discrimination, harassment and retaliation against any individual who reports discrimination or harassment. 352900 "," Entry level "," Full-time "," Information Technology "," Insurance " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-one-3499586076?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=wMYwTdu%2BwEzVQ0f%2B8ckgzw%3D%3D&position=25&pageNum=12&trk=public_jobs_jserp-result_search-card," Chubb ",https://ch.linkedin.com/company/chubb?trk=public_jobs_topcard-org-name," Jersey City, NJ "," 2 weeks ago "," 134 applicants "," We are looking for an experienced data engineer to support our Knowledge Graph team. The role focuses on ingesting data into the Knowledge Graph. Qualifications: Proficient with Python, SQL, and ETL. 
Needs to know, or be willing to learn, RDF, SPARQL, and OWL. The ideal candidate for this role can relate data engineering efforts to creating business value and solving real-world problems. Proficient with Python. Bachelor's degree in Computer Science, Data Science, Software Engineering or a related educational background. Excellent data analysis and advanced data manipulation techniques using SQL. Excellent oral and written communication skills. Excellent working knowledge of relational databases. Decent with Linux. Ability to adapt to rapidly and constantly changing stakeholder requirements. Quick to learn, ability to prioritize activities and responsive to the needs of the business. EEO Statement: At Chubb, we are committed to equal employment opportunity and compliance with all laws and regulations pertaining to it. Our policy is to provide employment, training, compensation, promotion, and other conditions or opportunities of employment, without regard to race, color, religious creed, sex, gender, gender identity, gender expression, sexual orientation, marital status, national origin, ancestry, mental and physical disability, medical condition, genetic information, military and veteran status, age, and pregnancy or any other characteristic protected by law. Performance and qualifications are the only basis upon which we hire, assign, promote, compensate, develop and retain employees. 
Chubb prohibits all unlawful discrimination, harassment and retaliation against any individual who reports discrimination or harassment. 352900 "," Entry level "," Full-time "," Information Technology "," Insurance " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-cvs-health-3522843006?refId=tJDZkJXLzLj9voE6v5gbkg%3D%3D&trackingId=I0SGqQ%2F29igY8Lv0CJpZ6g%3D%3D&position=19&pageNum=7&trk=public_jobs_jserp-result_search-card," CVS Health ",https://www.linkedin.com/company/cvs-health?trk=public_jobs_topcard-org-name," Hartford, CT "," 21 hours ago "," 33 applicants ","Job Description Assists in the development of large-scale data structures and pipelines to organize, collect and standardize data that helps generate insights and addresses reporting needs Applies understanding of key business drivers to accomplish own work Uses expertise, judgment and precedents to contribute to the resolution of moderately complex problems Leads portions of initiatives of limited scope, with guidance and direction Writes ETL (Extract / Transform / Load) processes, designs database systems and develops tools for real-time and offline analytic processing Collaborates with client team to transform data and integrate algorithms and models into automated processes Uses knowledge in Hadoop architecture, HDFS commands and experience designing & optimizing queries to build data pipelines Uses programming skills in Python, Java or any of the major languages to build robust data pipelines and dynamic systems Builds data marts and data models to support clients and other internal customers Integrates data from a variety of sources, assuring that they adhere to data quality and accessibility standards Pay Range The typical pay range for this role is: Minimum: $70,000 Maximum: $140,000 Please keep in mind that this range represents the pay range for all positions in the job grade within which this position falls. 
The actual salary offer will take into account a wide range of factors, including location. Required Qualifications 1+ years of progressively complex related experience Experience with bash shell scripts, UNIX utilities & UNIX Commands Preferred Qualifications Ability to leverage multiple tools and programming languages to analyze and manipulate data sets from disparate data sources Ability to understand complex systems and solve challenging analytical problems Strong problem-solving skills and critical thinking ability Strong collaboration and communication skills within and across teams Knowledge in Java, Python, Hive, Cassandra, Pig, MySQL or NoSQL or similar Knowledge in Hadoop architecture, HDFS commands and experience designing & optimizing queries against data in the HDFS environment Experience building data transformation and processing solutions Has strong knowledge of large-scale search applications and building high volume data pipelines Education Bachelor's degree or equivalent work experience in Computer Science, Engineering, Machine Learning, or related discipline Master’s degree or PhD preferred Business Overview Bring your heart to CVS Health Every one of us at CVS Health shares a single, clear purpose: Bringing our heart to every moment of your health. This purpose guides our commitment to deliver enhanced human-centric health care for a rapidly changing world. Anchored in our brand — with heart at its center — our purpose sends a personal message that how we deliver our services is just as important as what we deliver. Our Heart At Work Behaviors™ support this purpose. We want everyone who works at CVS Health to feel empowered by the role they play in transforming our culture and accelerating our ability to innovate and deliver solutions to make health care more personal, convenient and affordable. We strive to promote and sustain a culture of diversity, inclusion and belonging every day. 
CVS Health is an affirmative action employer, and is an equal opportunity employer, as are the physician-owned businesses for which CVS Health provides management services. We do not discriminate in recruiting, hiring, promotion, or any other personnel action based on race, ethnicity, color, national origin, sex/gender, sexual orientation, gender identity or expression, religion, age, disability, protected veteran status, or any other characteristic protected by applicable federal, state, or local law. We proudly support and encourage people with military experience (active, veterans, reservists and National Guard) as well as military spouses to apply for CVS Health job opportunities."," Entry level "," Full-time "," Information Technology "," Wellness and Fitness Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-advantis-global-3490307445?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=WapyYzffDYz6SOjX4gU1iw%3D%3D&position=3&pageNum=8&trk=public_jobs_jserp-result_search-card," Advantis Global ",https://www.linkedin.com/company/advantis-global-services?trk=public_jobs_topcard-org-name," Austin, TX "," 3 weeks ago "," Over 200 applicants ","NO C2C The Data Engineer will join a critical engineering team responsible for maintaining data infrastructure utilized by Security customers. You will have ownership of a critical data lake. You will be responsible for collaborating with customers and internal teams to build and maintain data pipelines, onboard data sets into the data lake, and respond to customer requests regarding critical data. 
Successful candidates will have: 1+ year of experience contributing to system design or architecture 4+ years of experience creating and maintaining enterprise data pipelines (terabyte, exabyte) – strong experience onboarding data sets into a data lake Experience developing data pipeline parsers (Scala preferred); experience with Apache Parquet Experience with Spark or Apache Flink Experience with CDK preferred Experience using modern open source and cloud technologies - AWS preferred"," Mid-Senior level "," Contract "," Engineering and Information Technology "," Software Development " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-zenapse-3516479400?refId=kR6fuf3ke01BNC9Akq%2F%2BsQ%3D%3D&trackingId=cf0RAvwN3yKG0op1Fcjz4g%3D%3D&position=4&pageNum=8&trk=public_jobs_jserp-result_search-card," zenapse ",https://www.linkedin.com/company/zenapse?trk=public_jobs_topcard-org-name," New York City Metropolitan Area "," 1 week ago "," 142 applicants ","Work from home! Zenapse is a fully remote team. Are you a skilled data engineer with Neo4j experience and a hands-on Cypher coder? We'd love to hear from you. Zenapse is an AI SaaS platform for dynamic marketing and CX optimization powered by emotional intelligence and human understanding. We are a Google Cloud Build Partner and our software is now being sold on the Google Cloud Enterprise Application marketplace. The successful candidate will have the following experience: Hands-on data engineer with experience in Neo4j and writing Cypher 10+ years experience developing highly available, low latency and highly scalable architectures. 10+ years experience with architecting APIs, middleware and enterprise gateways Previous Google Cloud experience. Apigee or comparable API Gateway experience is a plus. Kafka experience a plus. 
Previous web/mobile application development a plus Excels in an Agile Scrum environment, excellent communication skills, and a self-starter Ability to thrive in a fast-paced, entrepreneurial environment We are an investor-backed company and rapidly growing. Zenapse offers full benefits and a great team-oriented environment."," Mid-Senior level "," Full-time "," Information Technology and Engineering "," Software Development and Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-brooksource-3510607974?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=%2BasiNJw0KNokKTyGD%2FiTng%3D%3D&position=10&pageNum=9&trk=public_jobs_jserp-result_search-card," Brooksource ",https://www.linkedin.com/company/brooksource?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," Over 200 applicants ","Data Engineer 6-Month Contract to Hire Fully Remote – Based out of EST The Data Engineer will partner with a wide range of business teams to implement analytical and data solutions that drive business value and customer satisfaction. He or she will be responsible for collecting, storing, processing, and analyzing large sets of data and building applications and solutions using data. The primary focus will be on building, maintaining, implementing, monitoring, supporting, and integrating analytical and data solutions with the architecture used across the company. Job Experience 2+ years of experience in database development using PL/SQL and SQL Server 2012 or later and Snowflake. 2+ years of experience building applications and scripting business logic in Python. 1+ years of experience working within a cloud environment, especially AWS. 1+ years of proven experience building data pipelines and ETL flows in SQL Server or Snowflake or Informatica or other ETL tools. How You'll Shine: Maintain and monitor our analytics data warehouses and data platform. 
Build, implement, test, deploy, and maintain stable, secure, and scalable data engineering solutions and pipelines in support of data and analytics projects, including integrating new sources of data into our central data warehouse, and moving data out to applications and affiliates. Responsible for hands-on development, deployment, maintenance, and support of a variety of cloud and on-premises solutions, web service infrastructure and supporting technologies. Produce scalable, replicable code and engineering solutions that help automate repetitive data management tasks. Work closely with project managers, business analysts, data scientists and other groups in the organization to understand and translate functional requirements and processes into technical specifications. Knowledge and skills Excellent listening, interpersonal, communication (written & verbal) and problem-solving skills. Ability to collect and compile relevant data Extremely organized with great attention to detail Excellent ability to analyze information and think systematically Strong business analysis skills A strong team player with some ability to work independently Good understanding of the company’s business processes and the industry at large Technical Skills Excellent SQL knowledge and experience working with relational databases, query authoring (SQL) and working familiarity with various databases. Excellent working knowledge of scripting in Python. Experience building and optimizing data pipelines and data sets leveraging various scripting languages or ETL tools. Expertise writing complex stored procedures and functions using PL/SQL and building SSIS packages. Ability to perform root cause analysis on internal and external data processes to answer specific business questions and identify opportunities for improvement. Good analytic skills related to working with unstructured datasets. Build processes supporting data extraction, transformation, and loading of data into data structures. 
A successful history of manipulating, processing, and extracting value from large, disconnected datasets Brooksource provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, national origin, age, sex, citizenship, disability, genetic information, sexual orientation, gender identity, marital status, amnesty or status as a covered veteran in accordance with applicable federal, state, and local laws"," Associate "," Full-time "," Analyst and Business Development "," Staffing and Recruiting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-versar-inc-3531152322?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=Vn1leopihNVJtV6q5cT9zw%3D%3D&position=14&pageNum=9&trk=public_jobs_jserp-result_search-card," Versar, Inc ",https://www.linkedin.com/company/versar?trk=public_jobs_topcard-org-name," Washington, DC "," 1 day ago "," Be among the first 25 applicants ","Summary Versar is currently seeking an experienced data engineer/data scientist who will work within the organization to provide advanced data-driven solutions, business insights and decision support. As a data scientist/data engineer, you will have sound knowledge of and experience in Statistical Models, Data Mining, Machine Learning, building analytical solutions to deliver insights, insights communication and presentation, as well as demonstrating the ability to combine advanced analytics skills with exceptional business acumen. In addition, you will be key to the success of the department’s goal of creating a reliable foundation for ongoing reporting and analytics. This is a position that is located in Washington, DC, with limited remote opportunities. Duties/Responsibilities: Lead and support analytical projects by providing statistical expertise in study design, statistical modeling, data analysis, outcome interpretation and communication. 
Ability to utilize extensive experience and skills in JavaScript with Vector Graphics and/or DAX PowerBI to translate data into focused and targeted information, providing analytical solutions with actionable recommendations to drive the success of key initiatives. Ability to apply statistics knowledge, develop analytics solutions (descriptive, inferential, predictive, and prescriptive), and derive insights to support business decisions, and proactively come up with ideas to explore and uncover new findings and improve process efficiency. Provides technical development expertise for designing, coding, testing, debugging, documenting, and supporting all types of analytical products consistent with the established specifications and business requirements in order to deliver business value. Develop a deep understanding of the key initiatives of the organization, and be able to participate in the development of standards, design and implementation of proactive processes to collect and report data and statistics on assigned systems. Communicate analytical findings and recommendations in a clear and concise way to non-technical audiences, both in oral and written presentations. Work closely with Information Management teams to automate recurring tasks and improve processes to continually increase efficiency Requirements Experience 5+ years of experience leveraging advanced analytics to drive business decisions Strong background in statistical modeling and development of analytics solutions based on business needs Education/Skills Educational background in Computer Science or Electrical Engineering is required Extensive demonstrated experience with JavaScript with 1) Vector Graphics and/or 2) DAX PowerBI Proficiency with programming languages, automation tools and stats packages (e.g. 
R, Python, SAS, Hive) is required Sound knowledge and experience in Statistical Models and Data Mining is required Experience creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc. is required Must demonstrate exceptional analytical skills with business acumen Must have communication skills and the ability to develop and present solutions to all levels of management (including executive levels) Must have demonstrated the ability to solve complex problems with minimal direction Must be able to interact effectively and patiently with customers especially while under pressure Must exhibit an ability to multitask to meet objectives and deadlines Ability to establish and maintain positive working relationships with other employees Behavioral Competencies Action oriented- Taking on new opportunities and tough challenges with a sense of urgency, high energy, and enthusiasm. Customer focus- Building strong customer relationships and delivering customer-centric solutions. Communicates effectively- Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences. Decision quality- Making good and timely decisions that keep the organization moving forward. Collaborates- Building partnerships and working collaboratively with others to meet shared objectives. Nimble learning- Actively learning through experimentation when tackling new problems, using both successes and failures as learning fodder. 
Demonstrates self-awareness- Using a combination of feedback and reflection to gain productive insight into personal strengths and weaknesses."," Entry level "," Full-time "," Information Technology "," Defense and Space Manufacturing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-eliassen-group-3511763941?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=bPKAM3L3y8xUncfN4xx9Aw%3D%3D&position=18&pageNum=9&trk=public_jobs_jserp-result_search-card," Eliassen Group ",https://www.linkedin.com/company/eliassen-group?trk=public_jobs_topcard-org-name," Washington DC-Baltimore Area "," 1 week ago "," Over 200 applicants ","Data Engineer 12+ month contract This role is open to US Citizens OR Green Card holders only. W2 basis only-No C2C Position Summary The incumbent in this position will be part of the Data Science and Engineering Technologies team and will be filling in for a data engineer, and must be able to analyze, design, develop, integrate, run, and support various GCP ETL and data-related jobs, from ETL and Data Warehouse across multiple technologies and architectures, involving various technologies including application servers, databases, logs, APIs for external data sources and operating systems. This position will be required to: · Work with Business Owners, Data and Business Intelligence teams [on Looker/Tableau etc.] to identify requirements on new and existing data sources and implement ETL logic for the various types of interfaces that we extract from – APIs, Web Services, Databases, external and On-Prem databases and warehouses. · Work with business users and technical designers to assist in efficient data model designs that meet business unit requirements involving integrations of various ACS technical data from systems and platforms. · Work with management, project managers and other lead developers to design and develop pipelines and ensure data accuracy. 
· Lead and participate in troubleshooting and fixing major system problems in core data systems and supplemental data pipelines as well. · Understand the relationship between GCP products well – primarily Data Fusion, BigQuery and Looker, and demonstrate experience in Data Fusion [comparable ETL acceptable] and Google BigQuery [comparable DW acceptable]. · Provide strong leadership and mentoring for less senior personnel in the areas of design, implementation, and professional development. · Where required, effectively delegate tasks to development teams of Software Engineering, providing guidance and proper knowledge transfer to ensure that the work is completed successfully. · Be flexible to work during non-business hours. The ideal candidate will: · Have experience with Data Fusion or equivalent, BigQuery or equivalent, SQL Server, scripting in Java/Python that works well with GCP products, and their respective practices. · Python experience is a plus. · Develop data ETL pipelines that meet both the functional and non-functional requirements, including performance, scalability, availability, reliability and security. · Have experience with writing code in Java, in order to work on data extracts that require cleanup. · Have a working knowledge of XML, JSON and other forms of data streaming artifacts and related technologies in a Java/Python environment. · Have strong written and verbal communication skills · Be able to multi-task on various streams of the entire data process. Education and Experience and Technical Requirements: · Bachelor’s degree or equivalent experience. · 5+ years with proven results in system development, implementation, and operations is required. 
Strong understanding of design patterns with a focus on tiered, large-scale data systems."," Mid-Senior level "," Contract "," Engineering and Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-private-energy-partners-3501298892?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=ATpLaklJTZB2poOD4zjFVg%3D%3D&position=22&pageNum=9&trk=public_jobs_jserp-result_search-card," Private Energy Partners ",https://uk.linkedin.com/company/private-energy-partners?trk=public_jobs_topcard-org-name," Los Angeles Metropolitan Area "," 2 weeks ago "," 182 applicants ","About Private Energy Partners   Private Energy Partners (PEP) is part of Quinbrook Infrastructure Partners (Quinbrook) group, a specialist investment manager focused exclusively on lower carbon and renewable energy infrastructure investment and operational asset management in the US, UK and Australia. Quinbrook is led and managed by a senior team of power industry professionals who have collectively invested over US$ 8.2 billion in energy infrastructure assets since the early 1990’s, representing over 19.5GW of power supply capacity.  Quinbrook invests across the technology landscape encompassing distributed scale solar PV, onshore wind, battery storage, biomass, fugitive methane recovery, demand response, power-to-x, grid support and flexibility, community energy networks, EV charging and ‘Virtual Power Plants’.  PEP is a cross fund team that provides specialty project and technology advisory services including support for digital activities.    Role Description  We are looking for an experienced and dynamic Data Engineer to join our team which is based in Los Angeles, CA. In this role, you will be responsible for designing, building, and maintaining our operational technology data infrastructure. 
You will play a key role in ensuring the availability, reliability, and scalability of our data systems that support analytics, applications, and business insights across our operating portfolio.  Additionally, you will be responsible for analyzing data to support and build business intelligence dashboards and customer-facing metrics. This role requires working closely with key stakeholders at Quinbrook portfolio companies and with the investment and asset management teams at PEP and Quinbrook across the US, UK and Australia.  Core duties and skills:  Manage the onboarding and ongoing data flows from Quinbrook portfolio companies and energy assets into the cloud data repositories (e.g. AWS / Azure).   Collaborate with analysts, engineers, and other stakeholders to identify data requirements and implement data-driven solutions.  Manage the cloud-based data lake and applications to ensure uptime, security, and availability of services.    Create and manage API data connections to assets, grid data sources, and third-party data sources.  Create and deploy dashboards, calculations, algorithms, and KPIs.    Communicate with stakeholders and provide technical support to portfolio companies for PEP digital services.  Manage version-controlled code and relational and non-relational databases.  Build and implement Data and Analytics solutions using industry best-practice tools, technologies, and methods.    In this role you will have an outstanding opportunity to add value and take on additional responsibilities commensurate with personal and professional growth over time. PEP offers a long-term and rewarding career path for outstanding candidates. We’re looking for someone who is a great fit for our culture, which is hands-on, innovative, fast-paced, disciplined, and humble. This position may require occasional domestic travel as part of the job responsibilities.  Qualifications  Five+ years of experience with cloud-based data infrastructure hosting environments and applications.   
Strong cloud skills and knowledge of cloud data-related technologies (Azure / AWS). Experience with dashboarding and business intelligence applications.  Experience with data warehousing techniques, data integration and data management tools for time series data.  Strong SQL and Python knowledge, and experience designing efficient data models.  Natural intellectual curiosity and interest in exposure to broader energy transition opportunities and digitization.  Proactive, entrepreneurial individual who can function independently and as part of a highly collaborative team, and who exhibits initiative and a demonstrated desire to learn.  Bonus: Data science experience in a Python environment using time series data.  We encourage you to apply if you don’t meet all these qualifications and skills. A career at Private Energy Partners means you’ll have the opportunity to learn and practice new skills, explore interesting fields and do challenging work across our global portfolio companies. You will be able to make a meaningful impact on accelerating the green energy transition.   Candidates from the Los Angeles area preferred but open to remote candidates.  Featured Benefits:  100% employee-covered health insurance including medical insurance, vision insurance, and dental insurance.  401(k) with employer match up to 6%  Disability insurance  A reasonable estimate for the base salary range is $125,000 to $150,000 per year. "," Full-time ",,, Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-softstandard-solutions-3502753289?refId=4vekjYppPvjlYxIWABtnfg%3D%3D&trackingId=sOczkp8%2F26N7bgeoZ0u7Eg%3D%3D&position=25&pageNum=9&trk=public_jobs_jserp-result_search-card," SoftStandard Solutions ",https://www.linkedin.com/company/softstandard?trk=public_jobs_topcard-org-name," United States "," 2 weeks ago "," Over 200 applicants ","Job Brief We are looking for an experienced data engineer to join our team. 
You will use various methods to transform raw data into useful data systems. For example, you’ll create algorithms and conduct statistical analysis. Overall, you’ll strive for efficiency by aligning data systems with business goals. To succeed in this data engineering position, you should have strong analytical skills and the ability to combine data from different sources. Data engineer skills also include familiarity with several programming languages and knowledge of machine learning methods. If you are detail-oriented, with excellent organizational skills and experience in this field, we’d like to hear from you. Responsibilities: Analyze and organize raw data Build data systems and pipelines Evaluate business needs and objectives Interpret trends and patterns Conduct complex data analysis and report on results Prepare data for prescriptive and predictive modeling Build algorithms and prototypes Combine raw information from different sources Explore ways to enhance data quality and reliability Identify opportunities for data acquisition Develop analytical tools and programs Collaborate with data scientists and architects on several projects Requirements and skills: Previous experience as a data engineer or in a similar role Technical expertise with data models, data mining, and segmentation techniques Knowledge of programming languages (e.g., Java and Python) Hands-on experience with SQL database design Great numerical and analytical skills Degree in Computer Science, IT, or similar field; a Master’s is a plus Data engineering certification (e.g. 
IBM Certified Data Engineer) is a plus."," Mid-Senior level "," Contract "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-softstandard-solutions-3502753289?refId=aTk8xGxdA9BBUHvODl8GpA%3D%3D&trackingId=vnuZKgP3yw4nI%2B4ghF4Xeg%3D%3D&position=1&pageNum=10&trk=public_jobs_jserp-result_search-card," SoftStandard Solutions ",https://www.linkedin.com/company/softstandard?trk=public_jobs_topcard-org-name," United States "," 2 weeks ago "," Over 200 applicants ","Job Brief We are looking for an experienced data engineer to join our team. You will use various methods to transform raw data into useful data systems. For example, you’ll create algorithms and conduct statistical analysis. Overall, you’ll strive for efficiency by aligning data systems with business goals. To succeed in this data engineering position, you should have strong analytical skills and the ability to combine data from different sources. Data engineer skills also include familiarity with several programming languages and knowledge of machine learning methods. If you are detail-oriented, with excellent organizational skills and experience in this field, we’d like to hear from you. 
Responsibilities: Analyze and organize raw data Build data systems and pipelines Evaluate business needs and objectives Interpret trends and patterns Conduct complex data analysis and report on results Prepare data for prescriptive and predictive modeling Build algorithms and prototypes Combine raw information from different sources Explore ways to enhance data quality and reliability Identify opportunities for data acquisition Develop analytical tools and programs Collaborate with data scientists and architects on several projects Requirements and skills: Previous experience as a data engineer or in a similar role Technical expertise with data models, data mining, and segmentation techniques Knowledge of programming languages (e.g., Java and Python) Hands-on experience with SQL database design Great numerical and analytical skills Degree in Computer Science, IT, or similar field; a Master’s is a plus Data engineering certification (e.g. IBM Certified Data Engineer) is a plus."," Mid-Senior level "," Contract "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-rightclick-3467037317?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=uPTU9z3fzWSuzP2rnIXpxQ%3D%3D&position=8&pageNum=11&trk=public_jobs_jserp-result_search-card," RightClick ",https://www.linkedin.com/company/rightclick?trk=public_jobs_topcard-org-name," White Plains, NY "," 1 week ago "," Over 200 applicants ","Our client is one of the leading automotive retailers in the United States. With over 300 locations and more than 10 million vehicles sold, they aim to transform the automotive industry through their innovation and bold leadership. They are looking to take on a Data Engineer who will be responsible for analyzing, cleansing, and transforming data to support applications, reports/dashboards, and analytics. This is a hybrid position based in White Plains, NY. 
Data Engineer’s Responsibilities and Duties Design, develop and maintain our client’s enterprise data platforms Build real-time and batch data pipelines to process and unify data from multiple sources into data marts, reporting tables and application databases Leverage technologies including Python, PySpark, SQL, AWS Glue, and more Work in collaboration with other Data Engineers, Analysts, Report/Dashboard Developers and Data Scientists to maintain and grow our client’s data platforms and capabilities Data Engineer’s Qualifications and Skills Bachelor's degree in data engineering, computer engineering, or related discipline Proven experience as a data engineer, software developer, or similar Ability to build and optimize data sets, ‘big data’ data pipelines and architectures Expert proficiency in Python, C++, Java, R, and SQL Excellent analytic skills associated with working on unstructured datasets RightClick is an equal opportunity employer who agrees not to discriminate against any employee or job applicant irrespective of race, color, creed, alienage, religion, sex, national origin, age, disability, gender (including gender identity), marital status, sexual orientation, citizenship or any other characteristic protected by law."," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting and Motor Vehicle Manufacturing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-oshi-health-3523707600?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=ICZzm0itW6KF5Z1LYlJQFQ%3D%3D&position=12&pageNum=11&trk=public_jobs_jserp-result_search-card," Oshi Health ",https://www.linkedin.com/company/oshihealth?trk=public_jobs_topcard-org-name," Carrollwood Village, FL "," 1 week ago "," Be among the first 25 applicants ","Do you love to work with data, finding ways to make it reportable, and building models that will add clinical and commercial value for the future? 
Do you want to bring your skills and experience to a growth-stage engineering team, and help set us up for smart expansion? Are you excited by the prospect of having a high-visibility, high-impact role in a fast-moving startup? Are you passionate about healthcare, and looking to create a revolutionary new approach to digestive healthcare with a radically better patient experience? If so, you could be a perfect fit for our team of like-minded professionals who share a common mission and passion for helping others and a desire to build a great company. Oshi Health is revolutionizing GI care with a virtual clinic that provides easy, convenient access to a multidisciplinary care team including a GI Physician, Registered Dietician, Mental Health Professional, and Health Coach that takes a whole-person approach to diagnosing, managing and treating digestive health conditions. Our care is built on the latest evidence-based protocols and is delivered virtually through an app, secure messaging and telehealth visits with the care team. NOTE: Oshi is a fully remote company, with team members all over the US. What You’ll Do This role will be a perfect fit if you enjoy learning the entire stack and taking on interdisciplinary challenges. A primary focus will be building out a data engineering program, including ETL, data governance, sanitization, and data operations. This part of your responsibilities will be a balance of executing data operations and building automation to make those operations scalable. You will also have the opportunity to join engineers on the front end (React Native and React.js), backend (Node.js Lambdas) and Salesforce to help build the Oshi platform. Experience with any of these technologies is a plus, but more important is an enthusiasm to adapt and learn the ones that are new to you. 
What you’ll do: build the Oshi data program Implement and maintain data pipelines using Stitch, Databricks, and other scripting as needed to feed a PostgreSQL schema supporting Tableau reporting Support users of Tableau by updating data sources or modifying inbound inputs as needed to deliver critical BI reports Own the Oshi data model, ensuring that new features built and new technologies adopted serve the needs of the clinical, commercial, product, and engineering teams Manage ETL of client eligibility files and other data, to make them available for Oshi use in a secure and timely manner. Wherever possible, replace bespoke processes with automation Once you have an understanding of Oshi’s requirements, design and implement a data strategy (with your recommendation of approach and products) to meet the needs of Oshi’s analytics, commercial, and clinical business lines Your Work Will Also Include AWS maintenance and administration Writing technical documentation to outline designs for forthcoming features, outlining the implementation across all technology layers Meeting with colleagues in Strategy, Product, and Clinical to support their needs from the Engineering group. Production support responsibilities (shared with the entire engineering team) responding to alerts in Datadog, reviewing and troubleshooting issues Our Tech Stack Mobile Platforms Supported: iOS & Android Cross-Platform Mobile Language: React Native Other Languages: React.js, HTML, CSS, Java (Salesforce Apex), Node.js (Lambda) Systems: Salesforce, AWS Amplify / Cognito / Lambda Your Profile A minimum of 3 years of professional experience Bachelor's Degree or equivalent experience Good interpersonal and relationship skills that include a positive attitude Self-starter who can find a way forward even when the path is unclear. Team player AND a leader simultaneously. 
What You’ll Bring To The Team Passionate about creating value that changes people's lives Make low-level decisions quickly while being patient and methodical with high-level ones Are curious and passionate about digging into new technologies with a knack for picking them up quickly Adept at prioritizing value and shipping complex products while coordinating across multiple teams Love working with a diverse set of engineers, product managers, designers, and business partners Strive to excel, innovate and take pride in your work Work well with other leaders Are a positive culture driver Excited about working in a fast-paced, startup culture Experience in a regulated industry (healthcare, finance, etc.) a plus And Perks We’re revolutionizing GI care — and our employees are driving the change. We’re a hard-working and fun-loving team, committed to always learning and improving, and dedicated to doing the right thing for our members. To achieve our mission, we invest in our people: We Make Healthcare More Equitable And Accessible Mission-driven organization focused on innovative digestive care Thrive on diversity with monthly DEIB discussions, activities, and more Virtual-first culture: Work from home anywhere in the US Live our core values: Own the outcome, Do the right thing, Be direct and open, Learn and improve, Team, Thrive on diversity We Take Care Of Our People Competitive compensation and meaningful equity Employer-sponsored medical, dental and vision plans Access to a “Life Concierge” through Overalls, because we know life happens Tailored professional development opportunities to learn and grow We Rest, Recharge And Re-energize Unlimited paid time off — take what you need, when you need it 13 paid company holidays to power down Team events, such as virtual cooking classes, games, and more Recognition of professional and personal accomplishments Oshi Health’s Core Values Go For It Do the Right Thing Be Direct & Open Learn & Improve TEAM - Together Everyone 
Achieves More Oshi Health is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. Powered by JazzHR"," Entry level "," Full-time "," Information Technology "," Hospitals and Health Care " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-sag-aftra-health-plan-sag-producers-pension-plan-3514101180?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=TAILMJf7tYnA1I21NPCesw%3D%3D&position=15&pageNum=11&trk=public_jobs_jserp-result_search-card," SAG-AFTRA Health Plan | SAG-Producers Pension Plan ",https://www.linkedin.com/company/sag-aftra-health-plan-sag-producers-pension-plan?trk=public_jobs_topcard-org-name," Burbank, CA "," 1 week ago "," Be among the first 25 applicants ","For more than 50 years the SAG-AFTRA Health Plan and SAG-Producers Pension Plan (the Plans) have provided health and retirement benefits to thousands of entertainment professionals and their dependents across the globe. The Plans are currently undergoing a major transformation focused on evolving our operations toward a participant-inspired experience. This is an exciting opportunity for a skilled Data Engineer to support our efforts. As a Data Engineer, you will design, develop, test and maintain efficient and sustainable data models to keep data accessible and ready for analysis. Working closely with the Analytics Team, you will engage with business teams to understand requirements, design conceptual, logical and physical data models, and perform root-cause analysis and recommend solutions. 
This will be a hands-on role utilizing Extract, Transform & Load (ETL) tools to deliver source-to-target mappings, physical and logical data models and related scripts to automate the data transformation and loading processes. Essential Job Functions Work closely with data experts to build and maintain the KPI data dictionary, metadata, and data standards, and ensure adherence to the Plans' Analytics Method and data standards. Engage business teams to understand requirements, document them and deliver robust and scalable solutions in the form of data models that can be leveraged for self-service analytics. Explore ways of modeling the Plans' unstructured voice and text data into frameworks fit for analysis. Design conceptual, logical and physical data models, maintain the data dictionary and capture metadata. Perform gap analysis as needed for purposes of maintaining, continuously enhancing data models and integrating KPIs into the analytics platform as and when new KPIs and business metrics are adopted by the organization. Work with the application development team to deploy analytics data products through such ways as embedding analysis models into business applications and mobile solutions. Establish and maintain provenance, integrity and security of data used for self-service reporting, ad-hoc analysis or other levels of analysis. Utilize ETL tools and other data pipeline automation techniques to develop and maintain source-to-target mapping that includes extract requirements, derived field logic, domain values and data lineage. Ensure developed data models are easy to use and efficient for accessing data, thus enabling transparency of data lineage to business teams and all stakeholders Minimum Qualifications Bachelor’s Degree in Computer Science, Information Systems, or other related field as well as equivalent work experience. 
Minimum 5 years of data engineering, data modeling or data architecture experience with a focus on multidimensional data modeling for both structured and unstructured data. Experience designing conceptual, logical and physical data models and maintaining data dictionary and capturing metadata. Experience creating and maintaining automated data pipelines, data standards, and best practices to maintain integrity and security of the data; ensure adherence to developed standards. Experience in developing and maintaining source-to-target mapping that includes extract requirements, derived field logic, domain values and data lineage. Experience in relevant technical languages and tools such as SQL, Python, NoSQL, Airflow, Quartz, ERWIN or equivalent. Previous work experience in the Healthcare industry preferred. Due to the COVID-19 pandemic, the Plans require all employees whose jobs necessitate them to work in the office to be fully vaccinated. The Plans do not discriminate on the basis of disability or religion and will provide reasonable accommodations that do not cause an undue hardship to the Plans' operations or pose a direct threat to the safety or health of individuals in the workplace."," Entry level "," Full-time "," Information Technology "," Insurance " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-tekintegral-3528110263?refId=V6oEdxTn25u4vVk%2FAJSQkg%3D%3D&trackingId=6sepPYISlyvpx0QybxlOgg%3D%3D&position=22&pageNum=11&trk=public_jobs_jserp-result_search-card," TekIntegral ",https://www.linkedin.com/company/tekintegral?trk=public_jobs_topcard-org-name," Redmond, WA "," 3 weeks ago "," Be among the first 25 applicants ","It's an onsite role with potential to hire within 6 months Summary Of Position The Data Engineer role will work closely with an agile development team to analyze and optimize ETL processes, tables, views, materialized views, and stored procedures to create a high-performance data warehouse 
solution for our client. This role will be working with a diverse set of technology including Amazon Redshift and S3 and Snowflake. The Data Engineer will focus on optimizing existing systems as well as developing designs and strategies for implementing new systems. This role will be responsible for requirements gathering, analysis, data categorization, interpreting requirements, and developing development tasks for sprint execution. Essential Functions Design, implement, test, and deploy data processing infrastructure Contribute to architecture of highly scalable and reliable data engineering solutions for moving large data efficiently across systems Perform work in an Agile team setting Break down, estimate, and provide just-in-time design for small increments of work Work in a complex data infrastructure environment Develop in-depth data pipeline using industry-standard data integration tools Full development life cycle management, including gathering, analysis, architecture, design, implementation, testing, deployment, and technical support Write test cases and test scripts for data quality assurance Responsible for creating stored procedures and functions Develop dimensional data model with the industry-standard tool Interpret reporting requirements into actionable development tasks Analyze and optimize SQL based stored procedures and jobs Analyze table indices for performance Design and implement Materialized Views and Views Develop high-performance programs and procedures for ETL Processes Analyze, categorize and document data sources and elements Write and optimize queries and provide guidance to other developers accessing data Agile development experience required, must be comfortable working with a distributed team Competencies Ensures Accountability Tech Savvy Communicates Effectively Values Differences Customer Focus Resourcefulness Drives Results Plans and Prioritizes Decision Quality Self-Development Work Environment This job operates in a 
professional office environment. This role routinely uses standard office equipment such as computers, phones, photocopiers, filing cabinets, and fax machines. Required Education And Experience Bachelor's Degree 5+ years of Data Engineering Experience or equivalent experience Qualifications 5+ years building out highly scalable, scaled-out architectures on large-scale database platforms 5+ years of experience in a senior data engineering role Excellent SQL Skills Deep knowledge of systems such as Redshift, Snowflake, Postgres, Redis Experience with other programming languages such as Java and Ruby a plus Strong understanding of different data access standards, including REST and SOAP data sources Advanced competency in SQL with the ability to optimize and mentor others to perform query optimization in large-scale database environments Experience with any industry-standard tool for Source Control and Project Management Experience with data visualization and/or dashboard development Strong written and oral communication skills Demonstrates critical thinking, analytical and problem-solving skills, and ability to think creatively Exhibits a sense of ownership, urgency, accountability, and drive to learn new technologies Demonstrated ability to achieve stretch goals in a highly innovative and fast-paced environment Preferred Familiarity with Agile and API development Familiarity with test-driven development methodology for analytic solutions Apache NiFi Redis Please share your resume at Career@tekintegral.com or nkumar@tekintegral.com"," Entry level "," Full-time "," Other "," Staffing and Recruiting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-corecivic-3508969456?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=4bWUkr%2F8TYdkzja8BnXjhQ%3D%3D&position=1&pageNum=12&trk=public_jobs_jserp-result_search-card," CoreCivic ",https://www.linkedin.com/company/corecivic?trk=public_jobs_topcard-org-name," Nashville 
Metropolitan Area "," 2 weeks ago "," 93 applicants ","At CoreCivic, our employees are driven by a deep sense of service, high standards of professionalism and a responsibility to better the public good. CoreCivic is currently seeking a Data Engineer located at our corporate office in Brentwood, TN. Come join a team that is dedicated to making an impact for the people and communities we serve. Who We Are: CoreCivic is a diversified government solutions company with the scale and experience needed to solve tough government challenges in cost-effective ways. We provide a broad range of solutions to government partners that serve the public good through high-quality corrections and detention management, innovative and cost-saving government real estate solutions, and a growing network of residential reentry centers to help address America’s recidivism crisis. We are the nation's largest owner of partnership correctional, detention and residential reentry facilities and have been a flexible and dependable partner for government for more than 30 years. What We Have: More than just a job but the start of a successful career! Supportive environment where employee growth is promoted. Comprehensive benefits package & competitive wages. PTO & paid holidays. Paid job training & other great incentives. What You Get To Do: The Data Engineer translates data and information analytic requirements into an information systems data repository/business intelligence system that supports end user data analysis, forecasting and mining. Responsible for the analysis of the business intelligence system's requirements, design specifications, design and implementation of ETL processes, validation/accuracy, maintenance, and resource utilization. Must be knowledgeable about creating, enhancing scalability, defining conceptual, logical, and physical models, and able to participate in tool evaluations. 
Performs technical planning, system integration, verification and validation, cost, risk, and supportability analyses for total systems to solve complex, non-routine analytics problems. Leads design and build of analytic solutions using on-premises and cloud technologies supporting all project phases (concept, design, fabrication, test, installation, operation, maintenance, and data archive). Ensures the logical and systematic conversion of customer requirements into a solution roadmap that acknowledges technical, data quality, business process, schedule, and cost constraints. Performs functional analysis, timeline analysis, requirements allocation, and interface definition studies to translate customer requirements into analytic solution specifications. Responsible for metadata strategy, change data capture and extract strategy, data cleansing and security strategy, various technical standards, performance metrics, and usage capacity metrics. Works with internal team members; effectively communicates with them in sharing, completing, and reviewing work. Provides documentation, mentoring, and training to stakeholders for solutions built or technologies being deployed. Manages problem and request tickets for assigned technology. Ensures availability of supported technology. Maintains relationships with stakeholder departments and ensures all solutions and support are provided in the context of helping application users succeed in their business goals. Domestic U.S. travel may be required. Qualifications: Graduation from an accredited college or university with a Bachelor’s degree in information technology, computer science, business, or engineering is required. Seven years of full-time professional experience with progressive responsibility in modern data platform design, API integration, real-time streaming, and message brokering, in addition to data lake/warehouse solution design and development, is required. 
Experience must include: three years' experience writing SQL; extensive background building and delivering data warehouse or data lake solutions using Oracle and/or Microsoft technologies such as Oracle Autonomous Data Warehouse, Azure Data Lake, Data Factory, Databricks, Synapse Analytics, and Power BI. Strong functional knowledge of relational database concepts and data structures; demonstration of relational and dimensional data modeling. Experience with data quality initiatives, scrubbing data, and preparing data. Experience with data integration/ETL tools, specifically Talend, Informatica, AWS Glue, Microsoft Azure Data Factory and/or SSIS. Experience with a programming language such as C#, JavaScript, or PHP. Experience with Python and R is preferred. Two years' experience leading technology-based projects is required. Experience with large databases or data warehouses in an online production environment is required. Strong communication skills and ability to engage with senior business leaders. Proficiency in Microsoft Office applications is required. Unix shell scripting is preferred. A valid driver's license is required. CoreCivic is a Drug-Free Workplace and EOE – including Disability/Veteran"," Mid-Senior level "," Full-time "," Information Technology "," Public Safety " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-ubs-3474040000?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=rP1ij0SirzqB%2FJfZsR6VMw%3D%3D&position=2&pageNum=12&trk=public_jobs_jserp-result_search-card," UBS ",https://ch.linkedin.com/company/ubs?trk=public_jobs_topcard-org-name," Weehawken, NJ "," 2 weeks ago "," 111 applicants ","Job Reference # 270728BR Job Type Full Time Your role Are you a highly motivated person looking to take up a challenge? Do you like analyzing and have a knack for understanding and improving business processes? 
Do you want to design and build next-generation business applications using the latest technologies? Are you confident at iteratively refining user requirements and removing any ambiguity? Do you have a curious nature, always interested in how to innovate? We're looking for a Data Engineer to support and contribute to the design of components for the strategic Market Risk platform and deliver high-quality specifications on time take the ownership of VaR/Stress analysis for all major/minor releases work to the required standards and to face off to stakeholders in development, QA and product management as well as the business ensuring the correct solution is being delivered create/design the solution, particularly around VaR (Value at Risk), Stress and RNiV (Risk Not in VaR) framework conduct activities such as analyzing and evaluating processes for new requirements, by using a variety of internal Data analyze, negotiate, validate and communicate requirements and their respective impact on different stakeholders work closely and effectively with business, Market Risk Officers, Quants and IT teams across different geographies Your team You’ll be working in the Risk Technology team in Weehawken, NJ. The Risk Technology organization at UBS delivers quality, innovative solutions that support our Group Functions business partners in achieving their operational goals. Technology is at the very heart of UBS.  As a talented, diverse workforce, we have a critical role to play in building, delivering and maintaining the systems, services and infrastructure that power our business.  Technology is about people and every person has a crucial role to play on our Technology team. Diversity helps us grow, together. That’s why we are committed to fostering and advancing diversity, equity, and inclusion. It strengthens our business and brings value to our clients. 
Your expertise a degree-level qualification and ideally 8+ years of experience experience in the Market Risk domain, preferably in one of Stress, VaR, RNiV hands-on SQL skills first-hand experience in creating functional/business requirements specifications experience in applying business analysis techniques and the ability to interpret a set of technical requirements and develop robust solutions understand the software development lifecycle team player with excellent written and oral communication skills, ability to work with a globally distributed IT team strong analytical, problem-solving and synthesizing skills certifications like FRM / CFA will be an added advantage About Us UBS is the world’s largest and only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. With more than 70,000 employees, we have a presence in all major financial centers in more than 50 countries. Do you want to be one of us? How We Hire This role requires an assessment on application. Learn more about how we hire: www.ubs.com/global/en/careers/experienced-professionals.html Join us At UBS, we embrace flexible ways of working when the role permits. We offer different working arrangements like part-time, job-sharing and hybrid (office and home) working. Our purpose-led culture and global infrastructure help us connect, collaborate, and work together in agile ways to meet all our business needs. From gaining new experiences in different roles to acquiring fresh knowledge and skills, we know that great work is never done alone. We know that it's our people, with their unique backgrounds, skills, experience levels and interests, who drive our ongoing success. Together we’re more than ourselves. Ready to be part of #teamUBS and make an impact? 
Disclaimer / Policy Statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce. "," Not Applicable "," Full-time "," Information Technology and Engineering "," Banking, Financial Services, and Investment Banking " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-ohi-3512759684?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=zzvUvjw9njNcvThPCv1bRA%3D%3D&position=6&pageNum=12&trk=public_jobs_jserp-result_search-card," Ohi ",https://www.linkedin.com/company/o-hi-social-network?trk=public_jobs_topcard-org-name," San Francisco, CA "," 1 month ago "," Be among the first 25 applicants ","As a Data Engineer at Ohi, you will be building and maintaining the technical systems and data infrastructure that the business will need to quickly grow and attain massive scale. You will be part of a team that prioritizes testing, collaboration, personal development, and getting things done. You will be part of a company that is providing consumers with a faster, better, and more sustainable experience by enabling e-commerce companies to use our platform. 
Responsibilities Build and iterate on data systems that fit our business needs Lead data gathering, accuracy and definition throughout the company Build collaborative relationships with the Product and Business teams Bring best-in-class engineering process into a new organization to enable the engineering team to work effectively at speed with high quality Document your activities to enable the team to work quickly and independently Develop creative new ideas for how to meet our team goals Skills And Requirements 4+ years of hands-on experience building production applications in an OO programming language (Python required) You have experience building ETL processes You have deployed data-heavy infrastructure from the ground up You are able to articulate and explain technical restrictions to non-technical members of the team and explain various tradeoffs that exist You are self-directed and effective at managing your time Bonus: You have worked with Athena, Looker, Databricks, and/or Spark with large datasets"," Entry level "," Full-time "," Engineering and Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-one-3499585219?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=fGqLMsaTGtuq%2B51zBb1S%2Fw%3D%3D&position=13&pageNum=12&trk=public_jobs_jserp-result_search-card," ONE ",https://www.linkedin.com/company/oneapp?trk=public_jobs_topcard-org-name," San Jose, CA "," 2 weeks ago "," 54 applicants ","About ONE ONE is on a mission to help people live healthier financial lives. We’re doing this by creating simple solutions to help our customers save, spend, and grow their money – all in one place. The U.S. consumer today deserves better. Millions of Americans today can’t access credit, build savings or wealth, and are left to manage their financial lives through multiple disconnected apps. Almost a quarter of U.S. 
adults are unbanked or underbanked and roughly 80% of fintech users rely on multiple accounts to manage their finances. What makes us unique? We are backed by a preeminent fintech investor (Ribbit) and the world’s largest retailer (Walmart), maintain the speed and independence of a startup, and employ a strong (and growing) collection of world-class talent. There’s never been a better moment to build a business that helps people live healthier financial lives. Come build with us! The role As an Engineer, Data, your mandate is to build the data transformation pipelines & reporting infrastructure that serve our members and run our company. This role will impact ONE’s vision by writing, reviewing, and shipping code and collaborating with others across the company. You will work closely across the Engineering, Product, Design as well as Sales, Compliance and Customer Support team(s) working with stakeholders as highly technical, communicative, and emotionally intelligent partners. This role reports to Manager, Data Engineering. This role is responsible for: Building and maintaining ONE's data infrastructure. Developing and owning ONE's streaming data transformation pipeline. Managing and supporting reporting and analytics tools. Collaborating closely with our Data Analysts and Data Scientists. Tracking and defining metrics around performance. Additional duties as assigned by your manager. You bring Mid Career (5-10 Years) Experience in Apache Spark, Scala, Python, SQL. Experience designing and implementing low-latency data pipelines using Spark structured streaming. Experience optimizing SQL queries and lake/warehouse data structures. Cost-conscious creative problem solving. Proficient in AWS cloud services and technologies. Experience building, shipping — and growing — non-trivial products & services. Passion for your craft, people, and our mission. You value quality across code, communication, and culture. 
A desire to keep growing your skills, and an ability to learn quickly. We hire for slope, not just y-intercept. Pay Transparency The estimated annual base salary for this position ranges from $175,000 to $190,000. Pay is generally based upon the level, complexity, responsibility, and job duties/requirements of the specific position. We then source candidates with the requisite skills, expertise, education, training, and experience. If you are selected for an interview, please feel welcome to speak to a Talent Partner about our compensation philosophy and other available benefits. Leveling Philosophy In order to thoughtfully scale the company and avoid downstream inequities, we’ve adopted a flat titling structure at ONE. Though we may occasionally post a role externally with a prefix such as “Senior” to reflect the external level of the position, we do not use prefixes in titles like that internally unless in a position which manages a team. Internal titles typically include your specific functional responsibility, such as engineering, product management or sales, and often include additional descriptors to ensure clarity of role and placement within our organization (i.e. “Engineer, Platform”, “Sales, Business Development” or “Manager, Talent”). Employees are paid commensurate with their experience and the internal level within ONE. What it's like working @ ONE Our teams collaborate remotely and in our work spaces in New York and Sacramento. 
Competitive cash Benefits effective on day one Early access to a high potential, high growth fintech Generous stock option packages in an early-stage startup Remote friendly (anywhere in the US) and office friendly - you pick the schedule Flexible time off programs - vacation, sick, paid parental leave, and paid caregiver leave 401(k) plan with match Inclusion & Belonging To build technology and products that are used and loved by people and solve real-world problems, we need to build a team with many different perspectives and experiences. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We encourage candidates from all backgrounds to apply. Applicants in need of special assistance or accommodation during the interview process or in accessing our website may contact us. Email talent@one.app with any questions."," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-audacy-inc-3531423986?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=tuhr61MZA8P2TfltRz3QZA%3D%3D&position=14&pageNum=12&trk=public_jobs_jserp-result_search-card," Audacy, Inc. ",https://www.linkedin.com/company/audacy-inc?trk=public_jobs_topcard-org-name," Vancouver, WA "," 4 hours ago "," Be among the first 25 applicants ","Overview Known for our outstanding voices across the U.S., Audacy is the premier provider of local, spoken word, and premium audio content. For technical talent, we have created a team that leverages new toolsets, develops collaboratively with business stakeholders, and is pushing the boundaries of audio every day. 
We are looking for an innovative, passionate and results-driven Senior Software Engineer to join our rapidly growing engineering team to help us build the future of radio and audio together. You listen to Audacy content all of the time…and may not know how much more incredible audio content we offer including: podcasts, news & sports talk radio, and amazing music stations. As a leader in all-things-audio, Audacy is transforming the listening experience through innovation and interactivity–and you may be just the person to join our team to help us do this. The Audacy Data Engineering team is looking for the right person to join us. At Audacy, you’ll contribute to our growing data warehouse engineering efforts. You’ll have a chance to focus on areas where you’re already skilled and take on new challenges over time. Our distributed team has a fun and inclusive environment with a commitment to work-life balance. We are engineers and we understand what engineers need. We’ll provide you with a work environment where collaboration is encouraged but you’re not bombarded with superfluous interruptions. We have highly encouraged communication silent times on Thursdays so that everyone can focus without worrying they’re missing anything. We work in modern agile feature teams that self-organize and work together to build great software. You’ll have the opportunity to contribute to new ideas, learn new technologies, and architect new features. We have fun doing what we do, and you will too. The Technology Team at Audacy is an organization that has leaned in to hybrid work, having leaders and team members across the country. Our technical centers of excellence are in some of the best cities to live in (Denver, Philadelphia, and Chicago) and we pride ourselves in building amazing products, participating in local service with Audacy, and keeping our culture at the heart of our employee experience. 
You’ll have the opportunity to contribute to new ideas, learn new technologies and design new features. We have fun doing what we do, and you will too. If you love audio and building incredible software, then we are the place for you. Responsibilities What You'll Do: Designing and developing enterprise-level applications using Snowflake and AWS tools. Design, build and deploy streaming and batch data pipelines capable of processing and storing petabytes of data quickly and reliably Partnering with product teams, data analysts and data scientists to help design and build data-forward solutions Build and maintain dimensional data warehouses in support of business intelligence and optimization product tools Develop data catalogs and data validations to ensure clarity and correctness of key business metrics Contributing using your awesome verbal and written communication skills. Qualifications More About You: Required & Preferred You possess quality experience with Snowflake DB and data models of all kinds. You are exceptionally skilled at building ETLs and pipelines using Python and other data services like Airflow and AWS Glue. You have a solid understanding of AWS features like S3 policies, IAM, ARN, and EC2. You understand the difference between a row-based storage database like Postgres or MySQL and a columnar database like Snowflake, BigQuery, or SingleStore. You take ownership of the data under your purview, and always consider privacy first. You work well with competing tasks, and if things change, you pivot quickly and execute smoothly. Preferred If you have experience in the radio broadcast, streaming, or ad tech industry, that’s a plus. Degree in Computer Science or related field, or equivalent practical experience. 3+ years of enterprise or related experience Pay Transparency Additional Information The anticipated starting salary range for individuals expressing interest in this position is $100,000-$120,000. 
Salary to be determined by the education, experience, knowledge, skills, abilities and location of the applicant, as well as internal and external equity. This position can be located remotely or at one of Audacy’s offices in Denver, New York City, Philadelphia, or Chicago. Audacy offers full-time employees a comprehensive benefits package including: health care coordinator, medical, dental, vision, telemedicine, flexible spending accounts, health savings account, disability, life insurance, critical illness, hospital indemnity, accident insurance, paid time off (sick, vacation, personal, parental, volunteer), 401(k) retirement plan, discounted employee stock purchase, student loan payment assistance program, legal assistance, life assistance program, identity theft protection, discounted home and auto insurance, and pet insurance. About Us Audacy, Inc. (NYSE: AUD) is a leading multi-platform audio content and entertainment company with the country’s best collection of local music, news and sports brands, a premium podcast creator, major event producer, and digital innovator. Audacy engages 200 million consumers each month, bringing people together around content that matters to them. Learn more at www.audacyinc.com, Facebook (Audacy Corp) and Twitter (@AudacyCorp). EEO Audacy is an Equal Opportunity and Affirmative Action Employer. Audacy affords equal employment opportunity to qualified individuals regardless of their race, color, religion or religious creed, sex/gender (including pregnancy, childbirth, breastfeeding, or related medical conditions), sexual orientation, gender identity, gender expression, national origin, ancestry, age (over 40), physical or mental disability, medical condition, genetic information, marital status, military or veteran status, or other classification protected by applicable federal, state, or local law, and to comply with all applicable laws and regulations. 
Consistent with our commitment to equal employment opportunity, we provide reasonable accommodations to qualified individuals with disabilities who need assistance in applying electronically for a position with Audacy, unless doing so would impose an undue hardship. To request a reasonable accommodation for this purpose, please call 1-610-660-5614. Please note that this phone number is to be used solely to request an accommodation with respect to the online application process. Calls for any other reason will not be returned. Reasonable accommodation requests are considered on a case-by-case basis."," Entry level "," Full-time "," Information Technology "," Advertising Services, Online Audio and Video Media, and Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-wimmer-solutions-3468515434?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=55gpuVE89er28dwXlThZ5A%3D%3D&position=17&pageNum=12&trk=public_jobs_jserp-result_search-card," Wimmer Solutions ",https://www.linkedin.com/company/wimmer-solutions?trk=public_jobs_topcard-org-name," Dallas, TX "," 1 day ago "," Over 200 applicants ","DATA ENGINEER DALLAS, TX OR FORT LAUDERDALE/MIAMI, FL JOB ID: 21157 We have an exciting opportunity currently available: a Data Engineer position. In this role, you will implement methods to improve data reliability and quality, combining raw information from different sources into consistent, machine-readable formats. You also develop and test architectures that enable data extraction and transformation for predictive or prescriptive modeling. This position will require you to wear many hats, with a focus on building out our Python ETL processes and writing superb SQL. This work will be highly visible within our organization and will involve clearly articulating complex data trends to stakeholders at all levels. 
We’ll count on you to help us maximize our strategic use of data and contribute to developing best practices while doing so. What You Get To Do Work closely with our data science team to help build complex algorithms that provide unique insights into our data Use agile software development processes to make iterative improvements to our back-end systems Model front-end and back-end data sources to help draw a more comprehensive picture of user flows throughout the system and to enable powerful data analysis Build data pipelines that clean, transform, and aggregate data from disparate sources Develop models that can be used to make predictions and answer questions for the overall business WHAT YOU BRING Bachelor's degree (or commensurate experience) in computer science, information technology, engineering, or related discipline 5 years of development experience with Python, SQL, Snowflake and data visualization/exploration tools Experience with Azure is a must. Communication skills, especially for explaining technical concepts to nontechnical business leaders Ability to work on a dynamic, research-oriented team that has concurrent projects Experience in building or maintaining ETL processes ML experience is a plus Must be able to work for a US-based company without requiring visa sponsorship."," Mid-Senior level "," Full-time "," Information Technology and Engineering "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-propel-solutions-inc-3510658391?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=7c7rorC%2BFvZxR78Be8KISg%3D%3D&position=18&pageNum=12&trk=public_jobs_jserp-result_search-card," Propel Solutions Inc. ",https://www.linkedin.com/company/propel-solutions-inc?trk=public_jobs_topcard-org-name," New York, United States "," 1 week ago "," Over 200 applicants "," Roles & Responsibilities: Technology Stack: Python, Java, AWS services (S3, Lambda, EMR,
Glue, Redshift, DynamoDB, ECS), Docker, Kubernetes, Apache Kafka, Airflow, SQL "," Entry level "," Contract "," Information Technology, Analyst, and Engineering "," IT Services and IT Consulting, Computer Hardware Manufacturing, and Computer and Network Security " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-avmed-3475918746?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=fq9YD%2FFW6pQJojOXWLHgGA%3D%3D&position=20&pageNum=12&trk=public_jobs_jserp-result_search-card," AvMed ",https://www.linkedin.com/company/avmed-health-plans?trk=public_jobs_topcard-org-name," Doral, FL "," 1 day ago "," 187 applicants ","Embrace better opportunities. Are you passionate about health, happiness and helping others? So are we. We’re committed to helping our health plan Members live a happy and healthy lifestyle, and we believe that it starts here first, with our Associates. At AvMed, we provide the tools and opportunities to enhance, expand, and support each Associate’s personal and professional growth. From tuition reimbursement to exercise classes in the office, embrace a better career with AvMed and join our team! Position: Data Engineer Scope of position: The Data Engineer is responsible for building data and business intelligence applications and administers data governance tools. Provide technical support for data governance program projects from initial scoping through implementation. Essential Job Functions Uses data tools, techniques, and manipulation to address basic and complex business problems. Perform data integration and ETL with relational, NoSQL data sources and flat files. Collaborates with data stewards, business architects, data architects, data modelers, application architects, subject matter experts, data owners and IT partners to execute data governance initiatives. Assists with writing and review of data governance policies, procedures and standards. 
Uses data profiling and data quality tools to expose and determine causes of data quality issues and write data quality rules. Collaborates with data owners and data stewards to fix data issues and develop a corrective action plan to prevent future data issues. Participates in metadata management activities, including building out the business glossary and data dictionary. Develops and delivers presentations for department and senior leadership. Performs additional duties and responsibilities as assigned by management. We Offer Competitive Salaries Comprehensive Benefits: Medical Plans, Health Savings Account, Dental, Vision, and more… Paid Time Off, Company Paid Holidays, Paid Time Off Cash-In 401(k) plan with matching contributions, Tuition Assistance, Associate Discounts You Have Bachelor's in Science, Technology, Engineering, Mathematics, Business, Health Informatics, Healthcare Analytics or related field required Master's preferred 1-2 years of experience in data governance-related disciplines: data architecture; data modeling; data pipeline development; metadata management; data quality analysis; data quality process creation; data profiling and data lineage required SQL and SQL Server technology is required Experience with open-source big data technologies like Apache NiFi, Kafka, Spark in a healthcare environment is highly preferred You May Also Have Business Intelligence: MS Power BI, Qlik Sense, Tableau - Proficiency: Intermediate Ability to build ETL and ELT processes - Proficiency: Intermediate Microsoft Office Suite (Excel, Word, PowerPoint, Access) - Proficiency: Intermediate Analytical and Problem Solving - Proficiency: Intermediate Environment At AvMed, you will find a family. Our Associates love all the opportunities for advancement, the flexible work environment and team activities. We also encourage our Associates to embrace a life rich in what matters most — health and happiness. We call it being WELLfluent. Join AvMed. Join the WELLfluent! 
Location: Miami or Gainesville, Florida. This position is hybrid: 2 days in office and 3 days remote. AvMed is a tobacco/drug-free workplace, EOE"," Entry level "," Full-time "," Information Technology "," Insurance " Data Engineer,United States,Informatica Data Engineer,https://www.linkedin.com/jobs/view/informatica-data-engineer-at-fusion-alliance-3512350726?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=aZD3qNwxhfGez4vPXZZqdA%3D%3D&position=21&pageNum=12&trk=public_jobs_jserp-result_search-card," Fusion Alliance ",https://www.linkedin.com/company/fusion-alliance?trk=public_jobs_topcard-org-name," Cincinnati, OH "," 3 weeks ago "," Be among the first 25 applicants ","Data Engineer – Informatica Mid-level experience: 6-8 years of data integration/ETL experience using one or more platforms 3+ years of strong experience with implementing data warehousing or reporting solutions, especially in a role with data integration design & development 4+ years of dedicated development experience using Informatica PowerCenter Minimum 1 year with Azure data services; hands-on with data lake, Azure Functions, Azure Event Hub/Grid, Databricks, Azure Synapse Solid SQL skills Very beneficial - Experience developing data streaming patterns and working with messaging platforms like Kafka Nice plus - demonstrated experience working with Snowflake as a target platform Nice plus - Experience working with Informatica data quality solutions; profiling, cleansing, dashboard visualization Established in 1994 in Indianapolis, Indiana, Fusion Alliance is highly regarded as an enterprise solution provider, delivering practical insights, engaging customer experiences, and human-driven technologies that transform the way our clients do business. That’s why over 450 clients across more than 100 companies trust us. They know that the solutions we build alongside them are robust, scalable, usable, and secure – even in the most challenging, dynamic, and highly regulated environments. 
We have deep experience in delivering solutions to companies within life sciences and healthcare, banking and insurance, manufacturing, energy and utilities, and more. Copy and paste the link to learn a little more about our consultants’ experience so far! https://fusionalliance.com/careers/spotlight/"," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-one-3499586076?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=wMYwTdu%2BwEzVQ0f%2B8ckgzw%3D%3D&position=25&pageNum=12&trk=public_jobs_jserp-result_search-card," ONE ",https://www.linkedin.com/company/oneapp?trk=public_jobs_topcard-org-name," Massachusetts, United States "," 2 weeks ago "," Be among the first 25 applicants ","About ONE ONE is on a mission to help people live healthier financial lives. We’re doing this by creating simple solutions to help our customers save, spend, and grow their money – all in one place. The U.S. consumer today deserves better. Millions of Americans today can’t access credit, build savings or wealth, and are left to manage their financial lives through multiple disconnected apps. Almost a quarter of U.S. adults are unbanked or underbanked and roughly 80% of fintech users rely on multiple accounts to manage their finances. What makes us unique? We are backed by a preeminent fintech investor (Ribbit) and the world’s largest retailer (Walmart), maintain the speed and independence of a startup, and employ a strong (and growing) collection of world-class talent. There’s never been a better moment to build a business that helps people live healthier financial lives. Come build with us! The role As an Engineer, Data, your mandate is to build the data transformation pipelines & reporting infrastructure that serve our members and run our company. 
This role will impact ONE’s vision by writing, reviewing, and shipping code and collaborating with others across the company. You will work closely across the Engineering, Product, Design as well as Sales, Compliance and Customer Support team(s) working with stakeholders as highly technical, communicative, and emotionally intelligent partners. This role reports to Manager, Data Engineering. This role is responsible for: Building and maintaining ONE's data infrastructure. Developing and owning ONE's streaming data transformation pipeline. Managing and supporting reporting and analytics tools. Collaborating closely with our Data Analysts and Data Scientists. Tracking and defining metrics around performance. Additional duties as assigned by your manager. You bring Mid Career (5-10 Years) Experience in Apache Spark, Scala, Python, SQL. Experience designing and implementing low latency data pipelines using Spark structured streaming. Experience optimizing SQL queries and lake / warehouse data structures. Cost-conscience creative problem solving. Proficient in AWS cloud services and technologies. Experience building, shipping — and growing — non-trivial products & services. Passion for your craft, people, and our mission. You value quality across code, communication, and culture. A desire to keep growing your skills, and an ability to learn quickly. We hire for slope, not just y-intercept. Pay Transparency The estimated annual base salary for this position ranges from $175,000 to $190,000. Pay is generally based upon the level, complexity, responsibility, and job duties/requirements of the specific position. We then source candidates with the requisite skills, expertise, education, training, and experience. If you are selected for an interview, please feel welcome to speak to a Talent Partner about our compensation philosophy and other available benefits. 
Leveling Philosophy In order to thoughtfully scale the company and avoid downstream inequities, we’ve adopted a flat titling structure at ONE. Though we may occasionally post a role externally with a prefix such as “Senior” to reflect the external level of the position, we do not use prefixes in titles like that internally unless in a position which manages a team. Internal titles typically include your specific functional responsibility, such as engineering, product management or sales, and often include additional descriptors to ensure clarity of role and placement within our organization (i.e. “Engineer, Platform”, “Sales, Business Development” or “Manager, Talent”). Employees are paid commensurate with their experience and the internal level within ONE. What it's like working @ ONE Our teams collaborate remotely and in our work spaces in New York and Sacramento. Competitive cash Benefits effective on day one Early access to a high potential, high growth fintech Generous stock option packages in an early-stage startup Remote friendly (anywhere in the US) and office friendly - you pick the schedule Flexible time off programs - vacation, sick, paid parental leave, and paid caregiver leave 401(k) plan with match Inclusion & Belonging To build technology and products that are used and loved by people and solve real-world problems, we need to build a team with many different perspectives and experiences. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We encourage candidates from all backgrounds to apply. Applicants in need of special assistance or accommodation during the interview process or in accessing our website may contact us. 
Email talent@one.app with any questions."," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-chubb-3503823602?refId=qe2r1wgEe8UwbKeFfyahsQ%3D%3D&trackingId=94IGP2k%2FzNuvnRZTlr0azg%3D%3D&position=1&pageNum=13&trk=public_jobs_jserp-result_search-card," Chubb ",https://ch.linkedin.com/company/chubb?trk=public_jobs_topcard-org-name," Jersey City, NJ "," 2 weeks ago "," 134 applicants "," We are looking for an experienced data engineer to support our Knowledge Graph team. The role focuses on ingesting data into the Knowledge Graph. Qualifications: Proficient with Python, SQL, and ETL. Needs to know, or be willing to learn, RDF, SPARQL, and OWL. Ideal candidate for this role can relate data engineering efforts to creating business value and solving real-world problems. Bachelor’s degree in Computer Science, Data Science, Software Engineering or related educational background. Excellent data analysis and advanced data manipulation techniques using SQL. Excellent oral and written communication skills. Excellent working knowledge of relational databases. Decent with Linux. Ability to adapt to rapidly and constantly changing stakeholder requirements. Quick to learn, ability to prioritize activities and responsive to the needs of the business. EEO Statement: At Chubb, we are committed to equal employment opportunity and compliance with all laws and regulations pertaining to it. 
Our policy is to provide employment, training, compensation, promotion, and other conditions or opportunities of employment, without regard to race, color, religious creed, sex, gender, gender identity, gender expression, sexual orientation, marital status, national origin, ancestry, mental and physical disability, medical condition, genetic information, military and veteran status, age, and pregnancy or any other characteristic protected by law. Performance and qualifications are the only basis upon which we hire, assign, promote, compensate, develop and retain employees. Chubb prohibits all unlawful discrimination, harassment and retaliation against any individual who reports discrimination or harassment. 352900 "," Entry level "," Full-time "," Information Technology "," Insurance " Data Engineer,United States,Data Engineer II,https://www.linkedin.com/jobs/view/data-engineer-ii-at-ssp-innovations-llc-3497876865?refId=qe2r1wgEe8UwbKeFfyahsQ%3D%3D&trackingId=7RgQZGZp7lBidoJC8naP5w%3D%3D&position=2&pageNum=13&trk=public_jobs_jserp-result_search-card," SSP Innovations, LLC ",https://www.linkedin.com/company/ssp-innovations-llc?trk=public_jobs_topcard-org-name," Huntsville, AL "," 3 weeks ago "," Be among the first 25 applicants ","The purpose of the Data Engineer is to migrate and convert data from foreign data models into our proprietary 3-GIS data model. In addition to data conversion, the data engineer supports existing 3-GIS accounts with data changes for system corrections and upgrade requirements. Data maintenance tasks will require a good working knowledge of ArcGIS tools, SQL, ArcPy and an ability to navigate a relational data model to update data tables and associated relationships on large spatial tables to perform daily tasks. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. 
Requirements Responsible for supporting data migrations and data conversions from disparate systems into the 3-GIS Data Model. Oversee data mapping exercises and work closely with customers to understand their data models and to provide mapping between systems. Employ a variety of tools including but not limited to Python Scripting, Database Scripting languages such as SQL, advanced knowledge of ESRI applications, FME workbenches and internally developed tools, as the assignment dictates. Exhibit strong communication skills to ensure customer understanding of data migrations and conversions and must work with various team members such as Solutions Engineers, Data Technicians, Sales Executives and Project Managers to ensure project success. Must be detail oriented and maintain proper time management as projects generally have specific timelines and milestones required for project success Ability to provide references for users by writing and maintaining documentation Design, develop, and test data transformation, extraction, and migration activities. Prepare technical reports by collecting, analyzing, and summarizing information Perform tasks efficiently while validating methodology. Interact directly (face-to-face and remotely) with clients and project teams Provide best practice recommendations during data mapping and project exercises. Collaborate with leadership to improve customer experience Other duties as assigned Required Qualifications: You are someone who is motivated by helping other people solve problems. You thrive in a busy environment, and are passionate about providing an outstanding experience for our customers. 
Bachelor’s Degree in Information Technology or related field 3+ years of experience in data conversion, data mapping, and data analysis 3+ years of experience using ESRI, FME, or a combination of both Intermediate database experience including SQL relationship statements Intermediate Programming/Report Analysis experience Strong organizational skills and attention to detail Experience with issue tracking software (i.e. Jira) Communication skills, both oral and written Deadline driven, ability to work independently as well as a team player Preferred: 3+ year of experience with managed telecommunications databases Proficiency in ticketing support tools like Jira Database troubleshooting experience (including management tools such as SQL Developer, SQL Server Manager, PGAdmin) Experience with Python or similar scripting languages Experience with large databases containing millions of rows per table (Oracle experience preferred) Flexible to work across different US time zones Basic 3-GIS data model or application experience Working conditions This position can operate in a professional office environment or remotely. This role requires routine use of standard office equipment such as computers, phones, and copiers. Powered by JazzHR 429xmCySkX"," Mid-Senior level "," Full-time "," Information Technology "," Internet Publishing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-one-3499580721?refId=qe2r1wgEe8UwbKeFfyahsQ%3D%3D&trackingId=QQcmnJUMHbOvD%2FSwtkNngA%3D%3D&position=3&pageNum=13&trk=public_jobs_jserp-result_search-card," ONE ",https://www.linkedin.com/company/oneapp?trk=public_jobs_topcard-org-name," Maryland, United States "," 2 weeks ago "," Be among the first 25 applicants ","About ONE ONE is on a mission to help people live healthier financial lives. We’re doing this by creating simple solutions to help our customers save, spend, and grow their money – all in one place. The U.S. consumer today deserves better. 
Millions of Americans today can’t access credit, build savings or wealth, and are left to manage their financial lives through multiple disconnected apps. Almost a quarter of U.S. adults are unbanked or underbanked and roughly 80% of fintech users rely on multiple accounts to manage their finances. What makes us unique? We are backed by a preeminent fintech investor (Ribbit) and the world’s largest retailer (Walmart), maintain the speed and independence of a startup, and employ a strong (and growing) collection of world-class talent. There’s never been a better moment to build a business that helps people live healthier financial lives. Come build with us! The role As an Engineer, Data, your mandate is to build the data transformation pipelines & reporting infrastructure that serve our members and run our company. This role will impact ONE’s vision by writing, reviewing, and shipping code and collaborating with others across the company. You will work closely across the Engineering, Product, Design as well as Sales, Compliance and Customer Support team(s) working with stakeholders as highly technical, communicative, and emotionally intelligent partners. This role reports to Manager, Data Engineering. This role is responsible for: Building and maintaining ONE's data infrastructure. Developing and owning ONE's streaming data transformation pipeline. Managing and supporting reporting and analytics tools. Collaborating closely with our Data Analysts and Data Scientists. Tracking and defining metrics around performance. Additional duties as assigned by your manager. You bring Mid Career (5-10 Years) Experience in Apache Spark, Scala, Python, SQL. Experience designing and implementing low latency data pipelines using Spark structured streaming. Experience optimizing SQL queries and lake / warehouse data structures. Cost-conscience creative problem solving. Proficient in AWS cloud services and technologies. 
Experience building, shipping — and growing — non-trivial products & services. Passion for your craft, people, and our mission. You value quality across code, communication, and culture. A desire to keep growing your skills, and an ability to learn quickly. We hire for slope, not just y-intercept. Pay Transparency The estimated annual base salary for this position ranges from $175,000 to $190,000. Pay is generally based upon the level, complexity, responsibility, and job duties/requirements of the specific position. We then source candidates with the requisite skills, expertise, education, training, and experience. If you are selected for an interview, please feel welcome to speak to a Talent Partner about our compensation philosophy and other available benefits. Leveling Philosophy In order to thoughtfully scale the company and avoid downstream inequities, we’ve adopted a flat titling structure at ONE. Though we may occasionally post a role externally with a prefix such as “Senior” to reflect the external level of the position, we do not use prefixes in titles like that internally unless in a position which manages a team. Internal titles typically include your specific functional responsibility, such as engineering, product management or sales, and often include additional descriptors to ensure clarity of role and placement within our organization (i.e. “Engineer, Platform”, “Sales, Business Development” or “Manager, Talent”). Employees are paid commensurate with their experience and the internal level within ONE. What it's like working @ ONE Our teams collaborate remotely and in our work spaces in New York and Sacramento. 
Competitive cash Benefits effective on day one Early access to a high potential, high growth fintech Generous stock option packages in an early-stage startup Remote friendly (anywhere in the US) and office friendly - you pick the schedule Flexible time off programs - vacation, sick, paid parental leave, and paid caregiver leave 401(k) plan with match Inclusion & Belonging To build technology and products that are used and loved by people and solve real-world problems, we need to build a team with many different perspectives and experiences. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We encourage candidates from all backgrounds to apply. Applicants in need of special assistance or accommodation during the interview process or in accessing our website may contact us. Email talent@one.app with any questions."," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-neteffects-3520296443?refId=qe2r1wgEe8UwbKeFfyahsQ%3D%3D&trackingId=x7FPhQXEZBP8YbFlTxESyg%3D%3D&position=4&pageNum=13&trk=public_jobs_jserp-result_search-card," neteffects ",https://www.linkedin.com/company/neteffects?trk=public_jobs_topcard-org-name," Greater St. Louis "," 1 week ago "," 58 applicants ","Data Engineer There is no C2C or 1099 for this position Title: Data Engineer Hybrid: 2-3 days a week (St. Louis, MO) US Citizen only Must Have Skills: 3+ years of experience in designing and building data processing. Crafting code and automation. 
Strong knowledge in Python, SQL, and Cloud- AWS"," Associate "," Full-time "," Analyst, Engineering, and Research "," Retail " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-inceed-3509657372?refId=qe2r1wgEe8UwbKeFfyahsQ%3D%3D&trackingId=kqDze7ak1pSCG4LAFalOpA%3D%3D&position=5&pageNum=13&trk=public_jobs_jserp-result_search-card," Inceed ",https://www.linkedin.com/company/inceed?trk=public_jobs_topcard-org-name," Kansas City, MO "," 2 weeks ago "," Over 200 applicants ","Compensation: $100,000 - $140,000 Location: Kansas City, Missouri Title: Data Engineer Inceed has partnered with a great company to help find a skilled Data Engineer to join their team! The Data Engineer will help build and maintain the data storage platform. Responsibilities: Craft systems and withstand disruption and failures Create a reliable environment that produces high uptime for data services Help integrate modern data technologies and practices Required Qualifications & Experience: Bachelor's Degree in CS, IT, MIS, Data Science, or related field Extensive knowledge and experience of Databricks (Delta Lake) Experience implementing solutions with Python, Spark, etc. Perks & Benefits: Health, Dental, and Vision Insurance Annual Bonus Employee Ownership Program (ESOP) Hybrid work schedule If you are interested in learning more about the Data Engineer opportunity, please submit your resume for consideration. Our client is unable to provide sponsorship at this time. We are Inceed, a staffing and direct placement firm who believes in the possibility of something better. Our mission is simple: We’re here to help every person, whether client, candidate, or employee, find and secure what’s better for them. Inceed is an equal opportunity employer. 
Inceed prohibits discrimination and harassment of any type and affords equal employment opportunities to employees and applicants without regard to race, color, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, disability status, genetic information, protected veteran status, or any other characteristic protected by law. "," Entry level "," Full-time "," Information Technology "," Staffing and Recruiting " Data Engineer,United States,"Data Engineer, Jr.",https://www.linkedin.com/jobs/view/data-engineer-jr-at-altamira-technologies-corporation-3515540581?refId=qe2r1wgEe8UwbKeFfyahsQ%3D%3D&trackingId=JdwfhBPmULXOjtWwNxTcmA%3D%3D&position=6&pageNum=13&trk=public_jobs_jserp-result_search-card," Altamira Technologies Corporation ",https://www.linkedin.com/company/altamira-corporation?trk=public_jobs_topcard-org-name," Fort Bragg, NC "," 1 month ago "," Be among the first 25 applicants ","Data Engineer Altamira delivers a variety of analytic and engineering capabilities to the US National Security community, but the tech culture and the caliber of the individuals that bring these capabilities to fruition are what really set us apart. Our Dayton, OH office is highly focused on the Space-based mission set with a heavy emphasis on sensor exploitation and analysis; Tampa, FL focuses on ‘art-of-the-possible’ analytics of all kinds with an emphasis on graph technologies, NLP, and wrangling complex data sets; and all forces converge at our headquarters in the Northern Virginia/Washington DC area where we host our tech events and support engineering and analytic missions across several IC and DOD agencies. While our work occurs in different states and different mission domains, we’ve got analytics at the heart of every operation and genuine curiosity for new methods, techniques, and solutions. Our specialties are data science and analytics, data engineering, software engineering, and end-to-end analytic solutions architecture. 
We’ve also got some awesome benefits like the Altamira Healthy Living program, with ongoing competitions and a flexible spending stipend for health and wellness-related items. Location: Ft Bragg, NC The Role: The Data Engineer will support data scientists and analysts through wrangling and readying of datasets for analysis. The Data Engineer will be versed in Extract, Transforms, Load (ETL) techniques and technologies, and will also be familiar with a variety of database types, schemas, and ontologies for centralized data storage. You’ll be working on an interdisciplinary team serving as the data custodian, responsible for its provenance, accuracy, and location throughout project execution. The workflows that you design will support a variety of analytics and analytic applications. Your skills: Demonstrable expertise in software development or software engineering Working knowledge of 2 or more programming languages Experience with Agile software development practices and tools such as JIRA and Confluence Experience designing and delivering software solutions in cloud environments (AWS strongly preferred) Experience with multiple database types, and designing ontologies and schemas in support of various analytic or query-based applications (e.g., ElasticSearch, Cassandra, JanusGraph) Familiarity with producing Analysis of Alternatives (AoA) of data storage methods is strongly desired Familiarity with containerization tools and techniques, container orchestration, and workflow management, to include technologies such as Docker, Kubernetes, Jenkins, or Terraform is desired Your quals: Secret, TS, or TS/SCI clearance (TS/SCI strongly preferred) 1+ years in a software development or software engineering role supporting analytic applications Bachelor’s Degree (BS) or higher in technical field related to software development Experience delivering software or technical solutions to DOD or IC customers, USAF, JAIC, NRO, NGA, or other intelligence organizations preferred 
Altamira is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, or protected veteran status. We focus on recruiting talented, self-motivated employees that find a way to get things done. Join our team of experts as we engineer national security!"," Entry level "," Full-time "," Information Technology "," Software Development " Data Engineer,United States,"Data Engineer, Jr.",https://www.linkedin.com/jobs/view/onsite-bi-data-engineer-etl-%2B-azure-at-irvine-technology-corporation-3497559047?refId=qe2r1wgEe8UwbKeFfyahsQ%3D%3D&trackingId=TOmlY9CxQdzkepI3dcrnfg%3D%3D&position=7&pageNum=13&trk=public_jobs_jserp-result_search-card," Altamira Technologies Corporation ",https://www.linkedin.com/company/altamira-corporation?trk=public_jobs_topcard-org-name," Fort Bragg, NC "," 1 month ago "," Be among the first 25 applicants "," Data EngineerAltamira delivers a variety of analytic and engineering capabilities to the US National Security community, but the tech culture and the caliber of the individuals that bring these capabilities to fruition are what really set us apart. Our Dayton, OH office is highly focused on the Space-based mission set with a heavy emphasis on sensor exploitation and analysis; Tampa, FL focuses on ‘art-of-the-possible’ analytics of all kinds with an emphasis on graph technologies, NLP, and wrangling complex data sets; and all forces converge at our headquarters in the Northern Virginia/Washington DC area where we host our tech events and support engineering and analytic missions across several IC and DOD agencies.While our work occurs in different states and different mission domains, we’ve got analytics at the heart of every operation and genuine curiosity for new methods, techniques, and solutions. 
Our specialties are data science and analytics, data engineering, software engineering, and end-to-end analytic solutions architecture. We’ve also got some awesome benefits like the Altamira Healthy Living program, with ongoing competitions and a flexible spending stipend for health and wellness-related items. Location: Ft Bragg, NC. The Role: The Data Engineer will support data scientists and analysts through wrangling and readying of datasets for analysis. The Data Engineer will be versed in Extract, Transforms, Load (ETL) techniques and technologies, and will also be familiar with a variety of database types, schemas, and ontologies for centralized data storage. You’ll be working on an interdisciplinary team serving as the data custodian, responsible for its provenance, accuracy, and location throughout project execution. The workflows that you design will support a variety of analytics and analytic applications. Your skills: Demonstrable expertise in software development or software engineering Working knowledge of 2 or more programming languages Experience with Agile software development practices and tools such as JIRA and Confluence Experience designing and delivering software solutions in cloud environments (AWS strongly preferred) Experience with multiple database types, and designing ontologies and schemas in support of various analytic or query-based applications (e.g., ElasticSearch, Cassandra, JanusGraph) Familiarity with producing Analysis of Alternatives (AoA) of data storage methods is strongly desired Familiarity with containerization tools and techniques, container orchestration, and workflow management, to include technologies such as Docker, Kubernetes, Jenkins, or Terraform is desired Your quals: Secret, TS, or TS/SCI clearance (TS/SCI strongly preferred) 1+ years in a software development or software engineering role supporting analytic applications Bachelor’s Degree (BS) or higher in technical field related to software development Experience delivering 
software or technical solutions to DOD or IC customers, USAF, JAIC, NRO, NGA, or other intelligence organizations preferred. Altamira is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, or protected veteran status. We focus on recruiting talented, self-motivated employees that find a way to get things done. Join our team of experts as we engineer national security! "," Entry level "," Full-time "," Information Technology "," Software Development " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-compunnel-inc-3511142476?refId=qe2r1wgEe8UwbKeFfyahsQ%3D%3D&trackingId=kxpWjPJVwypNpIWl2Iedxg%3D%3D&position=8&pageNum=13&trk=public_jobs_jserp-result_search-card," Compunnel Inc. ",https://www.linkedin.com/company/compunnel-software-group?trk=public_jobs_topcard-org-name," Durham, NC "," 1 week ago "," Over 200 applicants ","Data Engineer (W2 role with Direct Client) Location: Durham, NC or Boston, MA! or Westlake TX Hybrid role. Required: Bachelor’s degree required. 6-8 years of related experience in data engineering, modeling to support BI needs. A learning and growth approach with passion for solving customer problems and delivering engaging digital solutions. Experience with Enterprise data lake strategy and use of Snowflake Your shown experiences in effectively designing data model, creating curation tables/views by writing the logic for the calculated columns and measures for reporting needs. Proven experience in understanding multi-functional enterprise data, navigating between business analytic needs and data, and being able to work hand-in-hand with other members of technical teams to execute on product roadmaps to enable new insights with our data. Experience working on an agile team and embodies an agile mindset. Financial services experience preferred. 
Requirement gathering allows you to understand the impact of design decisions on the data strategy and on data consumers. You've experienced working in technology or a related field, with experiences preferred in one or more of the following: data warehouses, data lakes. The Skills You Bring Experience in crafting re-useable data model for various Business Intelligence reporting needs. Proven experience in crafting tables/views by writing the logic for the calculated columns and measures using SQL queries on Snowflake (AWS) and python. Experience in working with AWS, MS Azure or other cloud providers. Experience in DevOps integration with Snowflake is a Plus. Data ingestion tool such as NiFi experience is a plus. Ability to develop ELT/ETL pipelines to move data to and from Snowflake data store using combination of Python and Snowflake SnowSQL. An ability to understand and communicate sophisticated concepts optimally to a variety of audiences, both technical and non-technical. You are a collaborative and constructive teammate within a newly formed local and globally distributed team. Intellectually curious, you take the initiative to learn new skills and share that knowledge in your squad and chapter contexts and seek learning. Adaptable, flexible and thrives in a changing, dynamic environment, managing multiple tasks at a given time. COVID work policy Safety is our top priority. Once we can be together in person with fewer safety measures, this role will be move to our dynamic working approach. You’ll be spending some of your time onsite depending on the nature and needs of your role. 
Special Instructions: MUST HAVE: SQL, python, AWS, snowflake, ETL (NIFI preferred, Informatica is ok)."," Mid-Senior level "," Contract "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-brooksource-3497505729?refId=qe2r1wgEe8UwbKeFfyahsQ%3D%3D&trackingId=%2Fmg2kG%2FTjEk9BkEslNyM2A%3D%3D&position=9&pageNum=13&trk=public_jobs_jserp-result_search-card," Compunnel Inc. ",https://www.linkedin.com/company/compunnel-software-group?trk=public_jobs_topcard-org-name," Durham, NC "," 1 week ago "," Over 200 applicants "," Data Engineer (W2 role with Direct Client) Location: Durham, NC or Boston, MA! or Westlake TX Hybrid role. Required: Bachelor’s degree required. 6-8 years of related experience in data engineering, modeling to support BI needs. A learning and growth approach with passion for solving customer problems and delivering engaging digital solutions. Experience with Enterprise data lake strategy and use of Snowflake Your shown experiences in effectively designing data model, creating curation tables/views by writing the logic for the calculated columns and measures for reporting needs. Proven experience in understanding multi-functional enterprise data, navigating between business analytic needs and data, and being able to work hand-in-hand with other members of technical teams to execute on product roadmaps to enable new insights with our data. Experience working on an agile team and embodies an agile mindset. Financial services experience preferred. Requirement gathering allows you to understand the impact of design decisions on the data strategy and on data consumers. You've experienced working in technology or a related field, with experiences preferred in one or more of the following: data warehouses, data lakes. The Skills You Bring: Experience in crafting re-useable data model for various Business Intelligence reporting needs. Proven experience in crafting tables/views 
by writing the logic for the calculated columns and measures using SQL queries on Snowflake (AWS) and python. Experience in working with AWS, MS Azure or other cloud providers. Experience in DevOps integration with Snowflake is a Plus. Data ingestion tool such as NiFi experience is a plus. Ability to develop ELT/ETL pipelines to move data to and from Snowflake data store using combination of Python and Snowflake SnowSQL. An ability to understand and communicate sophisticated concepts optimally to a variety of audiences, both technical and non-technical. You are a collaborative and constructive teammate within a newly formed local and globally distributed team. Intellectually curious, you take the initiative to learn new skills and share that knowledge in your squad and chapter contexts and seek learning. Adaptable, flexible and thrives in a changing, dynamic environment, managing multiple tasks at a given time. COVID work policy: Safety is our top priority. Once we can be together in person with fewer safety measures, this role will move to our dynamic working approach. You’ll be spending some of your time onsite depending on the nature and needs of your role. Special Instructions: MUST HAVE: SQL, python, AWS, snowflake, ETL (NIFI preferred, Informatica is ok). "," Mid-Senior level "," Contract "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,co-op Data Engineer,https://www.linkedin.com/jobs/view/co-op-data-engineer-at-bose-corporation-3515308369?refId=qe2r1wgEe8UwbKeFfyahsQ%3D%3D&trackingId=MuwWVWArRAHafDP4J4I5cQ%3D%3D&position=10&pageNum=13&trk=public_jobs_jserp-result_search-card," Bose Corporation ",https://www.linkedin.com/company/bose-corporation?trk=public_jobs_topcard-org-name," Framingham, MA "," 1 week ago "," Over 200 applicants ","Job Description At Bose, better sound is just the beginning. We’re passionate engineers, developers, researchers, retailers, marketers … and dreamers. 
One goal unites us — to create products and experiences our customers simply can’t get anywhere else. We are driven to help people reach their fullest human potential. Creating technology to help people to feel more, do more, and be more. We are highly motivated and curious, and we come to work every day looking to solve real problems and make the best experiences for our customers possible. The Bose Data Engineering team is responsible for the design, development, and enhancement of Bose Data Platforms (Analytics & Customer Data Platforms) in leading and supporting Advanced Analytics & AI/ML workloads. This team is highly impactful and a key enabler of Bose's Digital journey by playing a central role in the Data-driven transformation. What will you be working on? As a junior Data Engineer, you will be working closely with internal customers to build data pipelines, from data acquisition to ML model deployment and monitoring. As part of an agile delivery team, you will design, develop, deploy, and support data and AI/ML pipelines, applying best practices and implementing the necessary integrations with our Data Platform ecosystem. This role requires passion for AWS Serverless solutions, python, and SQL coding. Under the supervision of a Senior Data Engineer, part of your responsibilities will be: Design and develop data pipelines for connected devices, web applications, and mobile applications that support the customer experiences. Stay up to date on relevant technologies, plug into user groups, and understand trends and opportunities that ensure we are using the best techniques and tools. Collaborate with AWS Cloud Architects to optimize and evaluate scalable and serverless solutions by becoming the SME for AWS solutions, including Sagemaker for ML. Work in multi-functional agile teams to continuously experiment, iterate, and deliver on new data product objectives. 
Qualifications (Demonstrated Competence) Familiarity with Python and SQL Passion for AWS and developing python micro-services in the Cloud Familiarity with git and docker You appreciate agile software processes, data-driven development, reliability, and responsible experimentation. Highly Desirable But Not Required Skills Include Hands-on experience with Amazon Web Services & serverless applications Understanding of CI/CD pipelines Python and SQL proficiency Familiarity with SaaS applications Bose is an equal opportunity employer that is committed to inclusion and diversity. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, genetic information, national origin, age, disability, veteran status, or any other legally protected characteristics. For additional information, please review: (1) the EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/OFCCP_EEO_Supplement_Final_JRF_QA_508c.pdf); and (2) its Supplements (http://www.dol.gov/ofccp/regs/compliance/posters/ofccpost.htm). Please note, the company's pay transparency is available at http://www.dol.gov/ofccp/pdf/EO13665_PrescribedNondiscriminationPostingLanguage_JRFQA508c.pdf. Bose is committed to working with and providing reasonable accommodations to individuals with disabilities. 
If you need a reasonable accommodation because of a disability for any part of the application or employment process, please send an e-mail to Wellbeing@bose.com and let us know the nature of your request and your contact information."," Entry level "," Full-time "," Information Technology "," Motor Vehicle Manufacturing and Computers and Electronics Manufacturing " Data Engineer,United States,co-op Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-tier4-group-3483136732?refId=qe2r1wgEe8UwbKeFfyahsQ%3D%3D&trackingId=xyyjeAT60%2F8awiCLYRX4QA%3D%3D&position=11&pageNum=13&trk=public_jobs_jserp-result_search-card," Bose Corporation ",https://www.linkedin.com/company/bose-corporation?trk=public_jobs_topcard-org-name," Framingham, MA "," 1 week ago "," Over 200 applicants "," Job Description At Bose, better sound is just the beginning. We’re passionate engineers, developers, researchers, retailers, marketers … and dreamers. One goal unites us — to create products and experiences our customers simply can’t get anywhere else. We are driven to help people reach their fullest human potential. Creating technology to help people to feel more, do more, and be more. We are highly motivated and curious, and we come to work every day looking to solve real problems and make the best experiences for our customers possible. The Bose Data Engineering team is responsible for the design, development, and enhancement of Bose Data Platforms (Analytics & Customer Data Platforms) in leading and supporting Advanced Analytics & AI/ML workloads. This team is highly impactful and a key enabler of Bose's Digital journey by playing a central role in the Data-driven transformation. What will you be working on? As a junior Data Engineer, you will be working closely with internal customers to build data pipelines, from data acquisition to ML model deployment and monitoring. 
As part of an agile delivery team, you will design, develop, deploy, and support data and AI/ML pipelines, applying best practices and implementing the necessary integrations with our Data Platform ecosystem. This role requires passion for AWS Serverless solutions, python, and SQL coding. Under the supervision of a Senior Data Engineer, part of your responsibilities will be: Design and develop data pipelines for connected devices, web applications, and mobile applications that support the customer experiences. Stay up to date on relevant technologies, plug into user groups, and understand trends and opportunities that ensure we are using the best techniques and tools. Collaborate with AWS Cloud Architects to optimize and evaluate scalable and serverless solutions by becoming the SME for AWS solutions, including Sagemaker for ML. Work in multi-functional agile teams to continuously experiment, iterate, and deliver on new data product objectives. Qualifications (Demonstrated Competence) Familiarity with Python and SQL Passion for AWS and developing python micro-services in the Cloud Familiarity with git and docker You appreciate agile software processes, data-driven development, reliability, and responsible experimentation. Highly Desirable But Not Required Skills Include Hands-on experience with Amazon Web Services & serverless applications Understanding of CI/CD pipelines Python and SQL proficiency Familiarity with SaaS applications Bose is an equal opportunity employer that is committed to inclusion and diversity. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, genetic information, national origin, age, disability, veteran status, or any other legally protected characteristics. 
For additional information, please review: (1) the EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/OFCCP_EEO_Supplement_Final_JRF_QA_508c.pdf); and (2) its Supplements (http://www.dol.gov/ofccp/regs/compliance/posters/ofccpost.htm). Please note, the company's pay transparency is available at http://www.dol.gov/ofccp/pdf/EO13665_PrescribedNondiscriminationPostingLanguage_JRFQA508c.pdf. Bose is committed to working with and providing reasonable accommodations to individuals with disabilities. If you need a reasonable accommodation because of a disability for any part of the application or employment process, please send an e-mail to Wellbeing@bose.com and let us know the nature of your request and your contact information. "," Entry level "," Full-time "," Information Technology "," Motor Vehicle Manufacturing and Computers and Electronics Manufacturing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-infodyne-solutions-llc-3528107975?refId=qe2r1wgEe8UwbKeFfyahsQ%3D%3D&trackingId=quekCRluRbW9EBYsvPp8zw%3D%3D&position=12&pageNum=13&trk=public_jobs_jserp-result_search-card," InfoDyne Solutions, LLC ",https://www.linkedin.com/company/infodyne-solutions?trk=public_jobs_topcard-org-name," Dallas, TX "," 3 weeks ago "," Be among the first 25 applicants "," Job Description: 5-8+ years of experience in Data Engineering. SQL proficiency. Spark/Hadoop proficiency. 2+ yrs of Data Engineering expertise in Big Data pipelines. Python and PySpark proficiency is a MUST. AWS (EMR, Lambda, Kinesis, Redshift, RDS), Airflow, and Snowflake experience is a plus. "," Entry level "," Full-time "," Other "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-argano-3522220686?refId=qe2r1wgEe8UwbKeFfyahsQ%3D%3D&trackingId=zvBgCFPIkGpZc9oy9DGOBw%3D%3D&position=13&pageNum=13&trk=public_jobs_jserp-result_search-card," Argano 
",https://www.linkedin.com/company/argano?trk=public_jobs_topcard-org-name," United States "," 1 day ago "," 39 applicants "," Argano is a business modernization partner, purpose-built to give rise to the possibilities of the Digital Renaissance for companies with complex sales and operating environments. We innovate adaptive, efficient, cloud-based digital operating foundations on which the transformational businesses of the 21st century must be built. These modern, scalable, and sustainable foundations integrate operations from commerce to cash to close to consolidation and free our clients to innovate and respond in new and cost-effective ways. The Argano platform uniquely offers the advantage of integrated, world-class capability partners, working together to solve complex challenges across the full spectrum of our client's business. For more information, visit www.argano.com. Job Responsibilities: Designing, constructing, testing, automating, and maintaining architectures and processing workflows Building robust, efficient, and reliable data infrastructure to support the ecosystem Integrating existing and new datasets into current inbound and outbound data pipelines Supporting the effective development and deployment of new products such as database tables and views to support dashboards or reports Driving the collection of new data and refinement of existing data sources Managing, developing and documenting data and software quality controls for all data engineering activities, including data mapping across systems Ongoing development and documentation of data engineering methods and tools to transform data into suitable structures for analysis Managing a structured testing programme for data workflows and data engineering tools developed and deployed Qualifications And Experience Levels: Minimum 4+ years' experience in a related field Bachelor's degree in a computer science field and a professional data engineering qualification Personal Attributes: Experience with AWS architecture 
Experience with Relational Databases (Redshift, PostgreSQL) Experience with Non-Relational Data storage (Parquet, ORC) Experience with Fivetran data movement platform (ETL) Experience with SQL Excellent communication skills (Written & Verbal) – should be able to communicate with various levels in the organization Excellent people and stakeholder management skills Analytical and problem-solving skills. Analytical and Logical Thinking. Open to travel between locations. The base compensation range for this position is $100,000 - $145,000, commensurate with experience. Argano also offers a performance-based bonus and strong benefits package including Medical, Dental, Vision, 401K, Paid Parental Leave and Flexible Time Off.#MS3 "," Entry level "," Full-time "," Information Technology "," Business Consulting and Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-burpee-gardening-3496169499?refId=qe2r1wgEe8UwbKeFfyahsQ%3D%3D&trackingId=kRfckwMKEERXEZujVO7ACA%3D%3D&position=14&pageNum=13&trk=public_jobs_jserp-result_search-card," Burpee Gardening ",https://www.linkedin.com/company/w-atlee-burpee?trk=public_jobs_topcard-org-name," Warminster, PA "," 3 weeks ago "," Over 200 applicants ","This position works with cross-functional stakeholders to understand business needs and is responsible for transforming Burpee enterprise data into valuable business reports for decision-making. This position is responsible for maintaining the company’s data and reporting platform, such as the data pipelines, data flows, data warehouse, and the business intelligence reporting system. Core Essential Responsibilities Identifies source system data sets and develops Azure pipelines into the data warehouse. Maintains the company reporting platform in Azure Synapse / Power BI, including pipelines, data flows, and dashboards. Develops, maintains, and documents physical and logical data models and algorithms. 
Works closely with business stakeholders and identifies problems and opportunities, recommending solutions. Develops a deep understanding of performance and productivity metrics and develops new metrics to measure performance. Collaborates with the business to implement Power BI dashboards using the Agile methodology. Plays a key role in learning and introducing best practices in data and business intelligence solutions. Conducts root cause analysis on business problems, in partnership with leadership, to identify insights that inform and drive key business decisions. Summarizes and communicates recommendations to leadership. Exercises judgment in financial, operational, product, or customer analysis to identify and help resolve issues. Maintains comprehensive and detailed project tracking and routinely communicates project status with business stakeholders. Effectively communicates insights and plans to cross-functional team members and management. Handles all matters with the highest level of confidentiality. Completes special projects as assigned. Preferred Qualifications Minimum of 2-3 years’ experience working in high-volume manufacturing or a CPG industry Strong knowledge of data governance Knowledge of AI and ML Minimum Qualifications Bachelor’s degree in a quantitative field (e.g., engineering, sciences, math, statistics, business, public policy, or economics). Minimum of 2-3 years’ experience working with SQL, DAX, and Python languages Minimum of 2-3 years’ experience working with Azure Data Factory or Azure Synapse Pipelines Minimum of 2-3 years’ experience working with Power BI or Tableau Hands-on with a high focus on key results and customer value Highly organized, analytical, flexible with shifting priorities, and able to always exercise strong judgment. Excellent communication and interpersonal skills. Willingness to get the job done with a strong sense of urgency within an appropriate timeframe. 
Applicants must be legally authorized for employment in the United States without need for current or future employer-sponsored work authorization"," Associate "," Full-time "," Information Technology, Analyst, and Engineering "," Food and Beverage Services, Wholesale, and Manufacturing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-softstandard-solutions-3494541426?refId=qe2r1wgEe8UwbKeFfyahsQ%3D%3D&trackingId=jri30BdcANqvp8X%2BzqFbig%3D%3D&position=15&pageNum=13&trk=public_jobs_jserp-result_search-card," SoftStandard Solutions ",https://www.linkedin.com/company/softstandard?trk=public_jobs_topcard-org-name," New York, United States "," 3 weeks ago "," Over 200 applicants ","Note: NO C2C, ONLY W2 Job Description: Have exceptional analytical and problem-solving skills Are self-motivated and able to work independently with minimum supervision Have experience writing modern data pipelines deployed in the cloud Have worked on agile teams to deliver software iteratively Have a BS in an engineering field OR can make us feel intensely confident that you don’t need one Have 5+ years of experience working with data as a software developer or data practitioner Have experience building and maintaining modern data pipelines in the cloud at an Enterprise scale, ideally using Snowflake and dbt Have exceptional analytical and problem-solving skills It would be nice if you Have demonstrable experience with CI/CD principles and trunk-based development. Have experience leading scrum teams. Can show us one or more passion projects or open-source work you have contributed to in your own time. 
Have experience with AWS, Airflow, Docker, Qlik Replicate, or Fivetran."," Mid-Senior level "," Contract "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-alliant-the-audience-company-3527492029?refId=qe2r1wgEe8UwbKeFfyahsQ%3D%3D&trackingId=8VkFmgYPgYLKXbbT6nq6ug%3D%3D&position=16&pageNum=13&trk=public_jobs_jserp-result_search-card," Alliant - The Audience Company ",https://www.linkedin.com/company/alliant-the-audience-company?trk=public_jobs_topcard-org-name," Brewster, NY "," 2 days ago "," 52 applicants ","Overview Alliant is a leading provider of innovative data-driven solutions that optimize marketing profitability. Alliant is looking for a Data Engineer to support our Data Science team with responsibility for data manipulation, model scoring and report generation. Qualified candidates will have experience in large-scale databases, data processing, quality control, SAS programming, and MS Excel. The successful candidate will have well-developed and highly organized work habits. 
Principal Responsibilities: • Read in, examine, clean and transform data for model development and analysis using SAS • Score data with existing models for analysis • Independently create reports as needed using SAS • Extract relevant information from complex transactional datasets independently • Identify data issues and questions, and communicate directly with statisticians • Utilize multi-channel marketing industry expertise to evaluate and diagnose campaign data • Identify potential problems in statistical samples, and communicate with the Sales and Account Management teams • Diagnose and solve problems with data, layouts and dictionaries • Provide summaries of data assets for internal and external clients • Participate in client meetings to explain data, ask for clarification, and request additional assets • Understand Alliant's statistical models as they relate to the company's production scoring system; edit model code to enhance or modify scored output • Provide quality assurance review of production scoring of statistical models • Maintain and update model production schedule to assure adequate and precise timing of samples to statisticians • Request, coordinate and communicate with Production team to assure on-time and accurate delivery of datasets Required Qualifications & Skills: • Master’s degree in a quantitative field preferred • 5+ years’ experience with data cleaning and manipulation preferred • Advanced SAS experience, including SAS macro preferred; advanced SQL experience will also be considered • Must have 1+ year experience with large scale databases and Unix • Must have strong Microsoft Excel experience • Must be highly self-motivated with the ability to work on multiple projects independently in a fast-paced environment • Must be flexible, detailed, team-oriented and committed to having a good time while doing a great job The base salary range for this role is between $70,000-$100,000 plus bonus. 
About Alliant Alliant is a leading data company trusted by thousands of marketers. We deliver highly predictive custom and on-demand audience solutions across TV, programmatic, social, direct mail and more. The Alliant DataHub — built on billions of consumer transactions, advanced data science and high-performance technology — is the foundation for profit-driven audience solutions. For more information, visit: alliantdata.com. The position is based in Alliant’s offices in northern Westchester County, New York. For consideration, please forward your resume and salary requirements to recruiting@alliantdata.com, Subject Line: Data Engineer. Alliant is an equal opportunity employer."," Associate "," Full-time "," Information Technology, Marketing, and Science "," Advertising Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-one-3499586067?refId=qe2r1wgEe8UwbKeFfyahsQ%3D%3D&trackingId=fZG57ee1Y03%2FIgFo%2BCaVKg%3D%3D&position=17&pageNum=13&trk=public_jobs_jserp-result_search-card," ONE ",https://www.linkedin.com/company/oneapp?trk=public_jobs_topcard-org-name," Los Angeles, CA "," 2 weeks ago "," 40 applicants ","About ONE ONE is on a mission to help people live healthier financial lives. We’re doing this by creating simple solutions to help our customers save, spend, and grow their money – all in one place. The U.S. consumer today deserves better. Millions of Americans today can’t access credit, build savings or wealth, and are left to manage their financial lives through multiple disconnected apps. Almost a quarter of U.S. adults are unbanked or underbanked and roughly 80% of fintech users rely on multiple accounts to manage their finances. What makes us unique? We are backed by a preeminent fintech investor (Ribbit) and the world’s largest retailer (Walmart), maintain the speed and independence of a startup, and employ a strong (and growing) collection of world-class talent. 
There’s never been a better moment to build a business that helps people live healthier financial lives. Come build with us! The role As an Engineer, Data, your mandate is to build the data transformation pipelines & reporting infrastructure that serve our members and run our company. This role will impact ONE’s vision by writing, reviewing, and shipping code and collaborating with others across the company. You will work closely across the Engineering, Product, and Design teams as well as the Sales, Compliance, and Customer Support teams, working with stakeholders as a highly technical, communicative, and emotionally intelligent partner. This role reports to the Manager, Data Engineering. This role is responsible for: Building and maintaining ONE's data infrastructure. Developing and owning ONE's streaming data transformation pipeline. Managing and supporting reporting and analytics tools. Collaborating closely with our Data Analysts and Data Scientists. Tracking and defining metrics around performance. Additional duties as assigned by your manager. You bring Mid-career (5-10 years) experience in Apache Spark, Scala, Python, SQL. Experience designing and implementing low latency data pipelines using Spark structured streaming. Experience optimizing SQL queries and lake / warehouse data structures. Cost-conscious, creative problem solving. Proficiency in AWS cloud services and technologies. Experience building, shipping — and growing — non-trivial products & services. Passion for your craft, people, and our mission. You value quality across code, communication, and culture. A desire to keep growing your skills, and an ability to learn quickly. We hire for slope, not just y-intercept. Pay Transparency The estimated annual base salary for this position ranges from $175,000 to $190,000. Pay is generally based upon the level, complexity, responsibility, and job duties/requirements of the specific position. 
We then source candidates with the requisite skills, expertise, education, training, and experience. If you are selected for an interview, please feel welcome to speak to a Talent Partner about our compensation philosophy and other available benefits. Leveling Philosophy In order to thoughtfully scale the company and avoid downstream inequities, we’ve adopted a flat titling structure at ONE. Though we may occasionally post a role externally with a prefix such as “Senior” to reflect the external level of the position, we do not use prefixes in titles like that internally unless in a position which manages a team. Internal titles typically include your specific functional responsibility, such as engineering, product management or sales, and often include additional descriptors to ensure clarity of role and placement within our organization (i.e. “Engineer, Platform”, “Sales, Business Development” or “Manager, Talent”). Employees are paid commensurate with their experience and the internal level within ONE. What it's like working @ ONE Our teams collaborate remotely and in our work spaces in New York and Sacramento. Competitive cash Benefits effective on day one Early access to a high potential, high growth fintech Generous stock option packages in an early-stage startup Remote friendly (anywhere in the US) and office friendly - you pick the schedule Flexible time off programs - vacation, sick, paid parental leave, and paid caregiver leave 401(k) plan with match Inclusion & Belonging To build technology and products that are used and loved by people and solve real-world problems, we need to build a team with many different perspectives and experiences. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We encourage candidates from all backgrounds to apply. 
Applicants in need of special assistance or accommodation during the interview process or in accessing our website may contact us. Email talent@one.app with any questions."," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-shipwell-3483775454?refId=qe2r1wgEe8UwbKeFfyahsQ%3D%3D&trackingId=8qVWGcZYNZsUQD9r8nH1Wg%3D%3D&position=18&pageNum=13&trk=public_jobs_jserp-result_search-card," Shipwell ",https://www.linkedin.com/company/shipwell?trk=public_jobs_topcard-org-name," Austin, TX "," 4 weeks ago "," Be among the first 25 applicants ","About Shipwell In a world where shipping expectations and complexity are greater than ever, Shipwell is on a mission to empower supply chain efficiency at scale across every company size, stage, and industry. Supply chain solutions today are highly disconnected, rigid, and difficult to use, but Shipwell is disrupting the status quo. Our solution combines everything our customers need in a comprehensive platform that adapts as the market and business demands change, so they can effectively manage the entire process in one place and never have to rip and replace. Shipwell is proud to be recognized by industry experts as a leader in shipping and logistics, including Gartner Magic Quadrant for TMS and Forbes 2020 Next Billion-Dollar Startup. Join us and be part of the Shipping Evolution(R) Our Culture Shipwell is a fast-paced, high-energy start-up that strives to build the future of shipping every day. Diversity of thought and cross-department collaboration is very important to us. We deliver open, honest, careful communication and work as hard as we play. We create & deliver solutions that are revolutionizing the industry, which brings excitement and purpose to our work. 
If you are looking for a place that will help you tap into your best work-self and give you hands-on experience building something big, then we invite you to come and build the future of shipping with us! About The Role We are looking for a Data Engineer to guide the development and delivery of data solutions and capabilities. As a core member of the data team, the Data Engineer is responsible for the design and evolution of our data pipelines, storage systems, and data management tools and processes. The Data Engineer works closely with analytics, solution architecture, and other stakeholders to understand and prioritize data needs and to ensure that our data infrastructure supports key business initiatives. As a Data Engineer, you will drive work to completion with hands-on development responsibilities, and you will partner with team leadership to provide thought leadership and innovation. What you'll do when you get here: Provides technical leadership to guide delivery of end-to-end data solutions. Collaborates with analysts and other stakeholders to understand their data needs and ensure that the data infrastructure supports their work. Develops training materials and executes training in Statistical Programming activities for new and existing staff. Crafts and builds reusable components, frameworks, and libraries at scale to support data capabilities. Identifies activities for continuous improvement/automation and makes recommendations based on detailed analysis and standard processes. Drives data engineering processes and guides team members to ensure work is high quality and on time. Fixes data issues and performs root cause analyses to proactively resolve product and operational issues. Hands-on experience building data pipelines using AWS technology. Python, SQL, ETL/ELT. Develop Data Pipelines to build a data lake in AWS, leveraging technologies like EC2, S3, Lambda, Glue, Dynamo DB, Redshift etc. 
Expertise in building data ingestion tools using technologies like Python to extract data from Relational Databases/External APIs. Experience in Redshift/Snowflake or any MPP and columnar database on the Cloud. Self-starter, curious, accountable, enjoys a healthy level of autonomy, strong work ethic, able to succeed in a fast-paced, high-intensity start-up environment. Demonstrates agility and welcomes change. Contribute to process improvement initiatives. Continuously improve data infrastructure to increase efficiency, scalability, and reliability. Ensure data quality and integrity through the implementation of best practices and data governance. What you need to have: 5+ years of experience in data engineering or a related field 3+ years' experience in cloud environments like AWS Hands-on experience building data pipelines using AWS technology. Develop Data Pipelines to build a data lake in AWS, leveraging technologies like EC2, S3, Lambda, Glue, Dynamo DB, Redshift etc. Strong problem-solving and analytical skills Strong communication and collaboration skills Experience with big data technologies (e.g. 
Hadoop, Spark) is a plus Experience working on CI/CD processes and source control tools such as GitHub and related dev processes Experience with Databricks and Snowflake Bachelor's or Master's degree in Computer Science, Data Science, or a related field (or equivalent experience) Lifelong learner Why Shipwell: 401k plan (including match) Generous parental leave Competitive salary and equity opportunity Team building events and office competitions Friendly, talented, and inclusive company culture Office in Austin, TX or 100% remote Health, vision, dental, teladoc, HSA, FSA, & Life insurance Incredible growth opportunity at a fast-growing company Subsidized wifi, cell phone, and educational reimbursements Receive a technology package including a MacBook Pro The Salary Range for this role is between $170,000 - $190,000 based on Years of Experience, Skillset, and Location. Here at Shipwell, we are a Remote Forward company. You have the opportunity to work within our office located in Austin, TX or you can choose to be fully remote. Shipwell is an Equal Opportunity Employer and we will not tolerate discrimination or harassment of any sort. We do celebrate diversity and believe experience comes in different forms; many skills are transferable; and passion goes a long way. Diversity in our team makes for better problem solving, more creative thinking, and ultimately a better product and company culture. Even more important than your resume is a clear demonstration of impact, dedication, and the ability to thrive in a fast-paced and collaborative environment. Shipwell strives to have an inclusive work environment; so if you are hard-working & good at what you do then please come as you are. We want you to contribute, grow, & learn at Shipwell and we encourage you to apply if your experience is close to what we're looking for. We are looking forward to adding new perspectives to our team! 
For more information about Shipwell visit shipwell.com, or connect with us on Twitter @shipwell, LinkedIn, and Facebook.com/Shipwellinc"," Entry level "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-extend-information-systems-inc-3527799558?refId=qe2r1wgEe8UwbKeFfyahsQ%3D%3D&trackingId=%2FXYhRthadXOjhADUYN4I%2BQ%3D%3D&position=19&pageNum=13&trk=public_jobs_jserp-result_search-card," Extend Information Systems Inc. ",https://www.linkedin.com/company/extendinfosys?trk=public_jobs_topcard-org-name," Austin, TX "," 3 weeks ago "," Be among the first 25 applicants ","Data Engineer-Snowflake Location: Austin, TX (Day 1 Onsite) Duration: Long Term Required Skills: 5-10 years of experience with Teradata is required. 5-10 years of experience with Snowflake is required. 2-5 years of experience with ETL Framework is required. 5-10 years of experience with GBI ETL Framework is required. 5-10 years of experience with Teradata ETL framework is required. 5-10 years of experience with Amazon Web Services S3 (AWS S3) is required. 2-5 years of experience with Amazon Web Services (AWS) is required. 2-5 years of experience with Python is required. 2-5 years of experience with MemSQL is required. Thanks & Regards, Shankar Kr Singh Extend Information Systems Cell: (571) 421-2684 ; Ext. 
116 Email: shankar@extendinfosys.com Address: 44355 Premier Plaza UNIT 220, Ashburn, VA, USA - 20147 Web: WWW.extendinfosys.com"," Entry level "," Full-time "," Other "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-one-3507581735?refId=qe2r1wgEe8UwbKeFfyahsQ%3D%3D&trackingId=k1PB1En9o8zJ2BWHXvo6uA%3D%3D&position=20&pageNum=13&trk=public_jobs_jserp-result_search-card," ONE ",https://www.linkedin.com/company/oneapp?trk=public_jobs_topcard-org-name," Oregon, United States "," 1 week ago "," Be among the first 25 applicants ","About ONE ONE is on a mission to help people live healthier financial lives. We’re doing this by creating simple solutions to help our customers save, spend, and grow their money – all in one place. The U.S. consumer today deserves better. Millions of Americans today can’t access credit, build savings or wealth, and are left to manage their financial lives through multiple disconnected apps. Almost a quarter of U.S. adults are unbanked or underbanked and roughly 80% of fintech users rely on multiple accounts to manage their finances. What makes us unique? We are backed by a preeminent fintech investor (Ribbit) and the world’s largest retailer (Walmart), maintain the speed and independence of a startup, and employ a strong (and growing) collection of world-class talent. There’s never been a better moment to build a business that helps people live healthier financial lives. Come build with us! The role As an Engineer, Data, your mandate is to build the data transformation pipelines & reporting infrastructure that serve our members and run our company. This role will impact ONE’s vision by writing, reviewing, and shipping code and collaborating with others across the company. 
You will work closely with the Engineering, Product, and Design teams, as well as Sales, Compliance, and Customer Support, partnering with stakeholders as a highly technical, communicative, and emotionally intelligent colleague. This role reports to the Manager, Data Engineering. This role is responsible for: Building and maintaining ONE's data infrastructure. Developing and owning ONE's streaming data transformation pipeline. Managing and supporting reporting and analytics tools. Collaborating closely with our Data Analysts and Data Scientists. Tracking and defining metrics around performance. Additional duties as assigned by your manager. You bring: Mid Career (5-10 Years). Experience in Apache Spark, Scala, Python, SQL. Experience designing and implementing low-latency data pipelines using Spark structured streaming. Experience optimizing SQL queries and lake/warehouse data structures. Cost-conscious creative problem solving. Proficiency in AWS cloud services and technologies. Experience building, shipping — and growing — non-trivial products & services. Passion for your craft, people, and our mission. You value quality across code, communication, and culture. A desire to keep growing your skills, and an ability to learn quickly. We hire for slope, not just y-intercept. Pay Transparency: The estimated annual base salary for this position ranges from $175,000 to $190,000. Pay is generally based upon the level, complexity, responsibility, and job duties/requirements of the specific position. We then source candidates with the requisite skills, expertise, education, training, and experience. If you are selected for an interview, please feel welcome to speak to a Talent Partner about our compensation philosophy and other available benefits. Leveling Philosophy: In order to thoughtfully scale the company and avoid downstream inequities, we’ve adopted a flat titling structure at ONE. 
Though we may occasionally post a role externally with a prefix such as “Senior” to reflect the external level of the position, we do not use such prefixes in internal titles unless the position manages a team. Internal titles typically include your specific functional responsibility, such as engineering, product management, or sales, and often include additional descriptors to ensure clarity of role and placement within our organization (e.g., “Engineer, Platform”, “Sales, Business Development”, or “Manager, Talent”). Employees are paid commensurate with their experience and their internal level within ONE. What it's like working @ ONE: Our teams collaborate remotely and in our workspaces in New York and Sacramento. Competitive cash; benefits effective on day one; early access to a high-potential, high-growth fintech; generous stock option packages in an early-stage startup; remote friendly (anywhere in the US) and office friendly - you pick the schedule; flexible time off programs - vacation, sick, paid parental leave, and paid caregiver leave; 401(k) plan with match. Inclusion & Belonging: To build technology and products that are used and loved by people and solve real-world problems, we need to build a team with many different perspectives and experiences. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We encourage candidates from all backgrounds to apply. Applicants in need of special assistance or accommodation during the interview process or in accessing our website may contact us. 
Email talent@one.app with any questions."," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-it-avalon-3513219951?refId=qe2r1wgEe8UwbKeFfyahsQ%3D%3D&trackingId=fhL67Ebbg%2F6aSwdS%2BpUCfQ%3D%3D&position=22&pageNum=13&trk=public_jobs_jserp-result_search-card," IT Avalon ",https://www.linkedin.com/company/it-avalon?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," 193 applicants ","Data Engineer, Sr. - Remote Job Description: The hire will be responsible for expanding and optimizing our data and pipeline architecture and data flow and collection. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. In addition, the Data Engineer will collaborate and support software developers, implementation architecture, and data engineers on data initiatives and ensure optimal data delivery architecture is consistent throughout ongoing projects. In addition, they must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. Responsibilities: Because we work on a programmatic leading edge of many technologies, we need someone who is a creative problem solver, resourceful in getting things done, and can shift productively to or from working independently and collaboratively. This person would also take on the following responsibilities: Process unstructured data into a form suitable for analysis. Support the business with ad hoc data analysis and build reliable data pipelines. Implementation of best practices and IT operations in mission-critical tighter SLA data pipelines using Airflow. Query Engine Migration from Dremio to Redshift. 
We leverage: multiple AWS data & analytics services (e.g., Glue, Kinesis, S3); SQL (e.g., PostgreSQL, Redshift, Athena); NoSQL (e.g., DocumentDB, MongoDB); Kafka, Docker, Spark (AWS EMR and Databricks), Airflow, Dremio, Qubole, etc. We use AWS extensively, so experience with the AWS cloud and an AWS Data & Analytics certification will help you hit the ground running. Skills and Qualifications: 8+ years of real-world data engineering experience. Programming experience, ideally in Python and other data engineering languages like Scala. Programming knowledge to clean structured and semi-structured datasets. Experience processing large amounts of structured and unstructured data. Streaming data experience is a plus. Experience building and optimizing big data pipelines, architectures, and data sets. Background in Linux. Build the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using SQL and other cloud big data technologies like Databricks, Snowflake, Dremio, and Qubole. Build processes supporting data transformation, data structures, metadata, dependency, and workload management. A successful history of manipulating, processing, and extracting value from large, disconnected datasets. Experience creating a platform on which complex data pipelines are built using orchestration tools like Airflow and Astronomer. 
Experience with real-time sync between OLTP and OLAP using AWS technologies, such as real-time sync between AWS Aurora and AWS Redshift."," Associate "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Software Data Engineer,https://www.linkedin.com/jobs/view/software-data-engineer-at-strategic-legal-practices-apc-3510794826?refId=qe2r1wgEe8UwbKeFfyahsQ%3D%3D&trackingId=H00ElOY368fZgJVVzmgW9w%3D%3D&position=24&pageNum=13&trk=public_jobs_jserp-result_search-card," Strategic Legal Practices, APC ",https://www.linkedin.com/company/strategic-legal-practices?trk=public_jobs_topcard-org-name," Los Angeles, CA "," 1 week ago "," 51 applicants ","About Strategic Legal Practices Based in Los Angeles, Strategic Legal Practices is one of the largest litigation firms within California, representing clients in a range of consumer protection and civil litigation matters. Our Firm measures our success by how well our clients do. We are armed with a group of experienced attorneys, led by one of the most successful Lemon Law and Consumer Fraud litigators in California. The best predictor of performance is our record of achievement. We are proud to have successfully helped thousands of clients in their pursuit against car manufacturers. Our success rate is unmatched by any other Firm. Software Data Engineer Job Description Strategic Legal Practices has a great opportunity for a multi-faceted, self-starting Software Data Engineer who wants to help shape the way that an influential law firm engages with data engineering. As a Software Data Engineer with SLP, you will hit the ground running in order to build new data pipelines and analytic solutions across the firm. Our ideal candidate is passionate about groundbreaking technology, enjoys creativity, and has strong written and verbal communication skills. 
This role would suit an ambitious Software Data Engineer coming from a background of either working in a fast-growth scale-up company, or someone who has worked within a large organization and has an appetite for taking on more responsibility and autonomy while leading a team. Responsibilities Include, But Are Not Limited To, The Following: Work with business analysts and data engineers to enable self-service analysis for data analysts within business teams; Leverage your data engineering skills to impact our business by taking ownership of key projects requiring coding and data pipelines; Provide operational & production support for self-service ad-hoc analytics; Implement processes to monitor data quality, ensuring production data is always accurate and available for key partners and business processes that depend on it; Diagnose and debug issues in development, staging, and production environments; Operational support including issue investigation and remediation; Support and maintain the existing platform management code base and tooling, including Python, Java, SQL, Ansible (or similar), and CloudFormation (or similar); Qualifications: Ability to architect, set up, administer, and scale data pipelines and build integrations through external/third-party APIs; Experience ingesting data from internal and third-party sources and familiarity with data design principles; Strong communication skills and the ability to discuss the product with executives and business owners; Excellent problem-solving and troubleshooting skills with the ability to identify and call out issues as appropriate; Experience successfully building, optimising, and validating models (e.g. linear models, time-series methods, ML algorithms, classifiers, etc.); Understanding of ITIL principles and maintaining provision of products and services to consumers. Can work with ticketing systems; Preferred Education And Experience: B.S. 
degree in computer science, mathematics, statistics, or a similar quantitative field; 3+ years of related experience required; Deep knowledge of various ETL/ELT tools and concepts, data modeling, SQL, and query performance optimization; Technical expertise with data models, data mining, and segmentation techniques; Knowledge of cloud big data services and technologies; More advanced ML algorithms such as Bayesian & hierarchical modelling and Markov chain Monte Carlo (MCMC) are desirable; Python libraries such as Scikit-Learn and PyStan; Version control software such as Git; Experience with Microsoft Azure and/or AWS; Command line tools such as Bash; Benefits: Visa Sponsorship available for the right candidate; 401k with Employer Match; Employer Paid Health, Dental & Vision Insurance; STD, LTD & Life Insurance; Holidays & Paid Time Off; Paid Parking; Referral Program; Employee Assistance Program; Employee Discount Program; Employee breakfasts, lunches, and events. Position Type: Full-time, remote option available. Schedule: Monday to Friday. Location: Century City, CA 90067. Strategic Legal Practices, APC is proud to be an Equal Opportunity and Affirmative Action employer. We do not discriminate based on race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. At Strategic Legal Practices, we put people first—our lawyers, legal professionals, and clients. We empower our lawyers and legal professionals with the knowledge, mentorship, and resources they need, and we encourage everyone to pursue a path that allows them to feel fulfilled. 
If you stay at Strategic Legal Practices, your career will grow, and you will have the opportunities you desire."," Entry level "," Full-time "," Information Technology "," Law Practice " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-motor-information-systems-3475923720?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=icE1jKZL4w%2Bb%2FtRLnEl1DQ%3D%3D&position=3&pageNum=12&trk=public_jobs_jserp-result_search-card," MOTOR Information Systems ",https://www.linkedin.com/company/motor-information-systems?trk=public_jobs_topcard-org-name," Troy, MI "," 1 week ago "," Over 200 applicants "," Job Description MOTOR Information Systems, an owned subsidiary of Hearst, is actively seeking a Data Engineer who possesses a strong passion for designing, optimizing, refactoring, and upgrading complex data solutions. MOTOR's Insights team is a new team with an exponential growth opportunity, both in terms of technology and personnel, and we are looking for someone to share our mission of transforming our company's future. With this position, you will have a rare opportunity to use your talents, passions, and expertise to help drive this massive change in how we build, organize, and optimize our backend systems and processes across the entire MOTOR Insights product line. This position offers excellent career growth and promotional opportunities, stellar compensation, and an opportunity to work with the world's premier provider of aftermarket automotive data. Hearst and MOTOR Information Systems could very well be your best and last position in a rewarding and fulfilling career! Required Experience (Must Have): Minimum 5+ years' data engineering experience working with designing data models in both column store and relational databases, incorporating disparate data to solve complex business needs. Minimum Bachelor's degree or Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. 
Experience building and optimizing data pipelines, architectures, database models, and data sets that meet business requirements. Experience building the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using ‘big data’ technologies or similar. Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores. Experience building processes supporting data transformation, analytics tools, data structures, metadata, dependency, and workload management. Assist the Data Science team that utilizes the data pipeline to provide actionable insights into customer acquisition and operational efficiency, and to optimize our product into an innovative industry leader. Experience using Azure DevOps or Jira for project management/requirements management. Experience with big data tools: Hadoop, Spark, Kafka, or a related technology. Proficient in programming languages such as SQL, Python, R, and shell scripting. Experience with other databases, including Postgres and NoSQL stores such as Cassandra, is a plus (nice to have). Strong working experience (minimum 3+ years) with technologies like AWS/Databricks and with big data, including expertise designing data models in both column store and relational databases, incorporating disparate data to solve complex business needs (nice to have). AWS Lambda functions (Python preferred). AWS databases (Aurora, DynamoDB, or Redshift preferred). AWS compute and storage services (EC2, S3 preferred). Data lake experience or similar (AWS preferred). Primary Responsibilities: Developing and implementing an overall organizational data strategy that is in line with business processes; the strategy includes data model designs, database development, and implementation and management of data warehouses and data analytics systems. Identifying data sources, both internal and external, and working out a plan for data management that is aligned with the organizational data strategy. 
Coordinating and collaborating with cross-functional teams, stakeholders, and vendors for the smooth functioning of the enterprise data system. Managing end-to-end data architecture, from selecting the platform, designing the technical architecture, and developing the application to finally testing and implementing the proposed solution. Identifying, designing, and implementing internal process improvements including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes. Building analytical tools to utilize the data pipeline, providing actionable insight into key business performance metrics including operational efficiency and customer acquisition. Being MOTOR Driven is Being Diversity Driven. We are MOTOR driven. At MOTOR, we are driven by diversity and creating an inclusive and welcoming workplace that celebrates our differences. Being MOTOR driven is celebrating your uniqueness. #MOTORDriven EEO Employer "," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-merck-3531357999?refId=jLUo4q%2FMPqV4qtHi8E49HA%3D%3D&trackingId=UMU%2FD2Hb%2FBB4eKQqAmVnEQ%3D%3D&position=5&pageNum=12&trk=public_jobs_jserp-result_search-card," Merck ",https://www.linkedin.com/company/merck?trk=public_jobs_topcard-org-name," Rahway, NJ "," 3 weeks ago "," Be among the first 25 applicants ","Job Description Our IT team operates as a business partner proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver the services and solutions that help everyone to be more productive and enable innovation. Position Description The Information Technology team at our Company's R&D division is looking for a Data Engineer. 
Reporting to the Product Manager, Data Publication Service, this position will work closely with Product Managers within the Product Line. The data engineer develops and constructs data products and services and integrates them into systems and business processes. Primary activities include designing and developing high quality and secure data pipelines and processes that access, move, clean, transform, integrate, structure, store and visualize data. He or she may support analytics from a deep understanding of data architecture, data engineering, data analysis, reporting, visualization and a basic understanding of data science techniques and workflows. Key Responsibilities Designs, implements and maintains data pipelines through different data layers (ingest, data transformation, reporting) Schedules data pipeline activities Defines data storage structures and manages data storage. Defines security. Optimizes performance. Advises on the Ways of Working on Data Analytics projects. Understands the business meaning of data and reflects data concepts in data transformation. Assists with design of complex reports as data engineering SME. Gathers and processes raw data at scale. Discovers opportunities for data acquisition. Conducts the processing and cleaning up of data sets. Recommends and implements ways to improve data reliability, quality and freshness. Prior Experience Database design Understanding of virtualization tools Information management Data warehousing / data integration concepts Position Qualifications Education Minimum Requirement: B.S. degree in Computer Science, Information Systems or related field. Required Experience And Skills The candidate should have 1-2 years of Information Technology experience in the areas of Data Engineering and Information Management. A strong understanding of data management concepts, design principles, and best practices is required. Experience with methods and tools of Data Engineering is required. 
Exceptional interpersonal and communication skills are mandatory. The ideal candidate has practical experience transferring big data into and out of AWS and transforming and computing data in AWS at scale. Preferred Experience And Skills At least 1 year of prior pharmaceutical-related experience is desirable. Skills in data modeling and design are preferred. Strong ability to work independently and a demonstrated ability to succeed in a dynamic and complex team environment are preferred. Experience with SQL, Data Ingestion, Integration & Virtualization technologies and Cloud Computing is preferred. Experience with Agile data engineering concepts such as CI/CD, pipelines, and iterative development & deployments is preferred. Our Support Functions deliver services and make recommendations about ways to enhance our workplace and the culture of our organization. Our Support Functions include HR, Finance, Information Technology, Legal, Procurement, Administration, Facilities and Security. Who We Are … We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world. What We Look For … Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. 
Our team is constantly evolving, so if you are among the intellectually curious, join us—and start making your impact today. NOTICE FOR INTERNAL APPLICANTS In accordance with Managers' Policy - Job Posting and Employee Placement, all employees subject to this policy are required to have a minimum of twelve (12) months of service in current position prior to applying for open positions. If you have been offered a separation benefits package, but have not yet reached your separation date and are offered a position within the salary and geographical parameters as set forth in the Summary Plan Description (SPD) of your separation package, then you are no longer eligible for your separation benefits package. To discuss in more detail, please contact your HRBP or Talent Acquisition Advisor. Residents of Colorado Click here to request this role’s pay range. Employees working in roles that the Company determines require routine collaboration with external stakeholders, such as customer-facing commercial, or research-based roles, will be expected to comply not only with Company policy but also with policies established by such external stakeholders (for example, a requirement to be vaccinated against COVID-19 in order to access a facility or meet with stakeholders). Please understand that, as permitted by applicable law, if you have not been vaccinated against COVID-19 and an essential function of your job is to call on external stakeholders who require vaccination to enter their premises or engage in face-to-face meetings, then your employment may pose an undue burden to business operations, in which case you may not be offered employment, or your employment could be terminated. Please also note that, where permitted by applicable law, the Company reserves the right to require COVID-19 vaccinations for positions, such as in Global Employee Health, where the Company determines in its discretion that the nature of the role presents an increased risk of disease transmission. 
Current Employees apply HERE Current Contingent Workers apply HERE US And Puerto Rico Residents Only Our company is committed to inclusion, ensuring that candidates can engage in a hiring process that exhibits their true capabilities. Please click here if you need an accommodation during the application or hiring process. For more information about personal rights under Equal Employment Opportunity, visit: EEOC Know Your Rights EEOC GINA Supplement Pay Transparency Nondiscrimination We are proud to be a company that embraces the value of bringing diverse, talented, and committed people together. The fastest way to breakthrough innovation is when diverse ideas come together in an inclusive environment. We encourage our colleagues to respectfully challenge one another’s thinking and approach problems collectively. We are an equal opportunity employer, committed to fostering an inclusive and diverse workplace. Under New York City, Washington State and California State law, the Company is required to provide a reasonable estimate of the salary range for this job. Final determinations with respect to salary will take into account a number of factors, which may include, but not be limited to the primary work location and the chosen candidate’s relevant skills, experience, and education. Expected Salary Range Available benefits include bonus eligibility, health care and other insurance benefits (for employee and family), retirement benefits, paid holidays, vacation, and sick days. For Washington State Jobs, a summary of benefits is listed here. Learn more about your rights, including under California, Colorado and other US State Acts Search Firm Representatives Please Read Carefully Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. 
All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails. Employee Status Regular Relocation: No relocation VISA Sponsorship No Travel Requirements 10% Flexible Work Arrangements Hybrid Shift Valid Driving License: Yes Hazardous Material(s) Requisition ID: R225303"," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting, Telecommunications, and Biotechnology Research " Data Engineer,United States,Onsite BI Data Engineer (ETL + Azure),https://www.linkedin.com/jobs/view/onsite-bi-data-engineer-etl-%2B-azure-at-irvine-technology-corporation-3497559047?refId=qe2r1wgEe8UwbKeFfyahsQ%3D%3D&trackingId=TOmlY9CxQdzkepI3dcrnfg%3D%3D&position=7&pageNum=13&trk=public_jobs_jserp-result_search-card," Irvine Technology Corporation ",https://www.linkedin.com/company/irvine-technology-corporation?trk=public_jobs_topcard-org-name," Houston, TX "," 2 weeks ago "," 65 applicants ","Searching for a full-time employee to execute strategic projects to create competitive-edge solutions combining design, data, and software engineering while providing superb customer service. The ideal candidate will have experience cleaning data, reporting and dashboarding, ETL, excellent SQL skills, performance tuning, database architecture and design and all the other lessons learned in a long career working with data. Fantastic Azure Synapse and Azure Spark & SQL skills are preferred for this role. Responsibilities: Collaborate with data scientists, product management, and web engineers to deliver value and project outcomes. 
Convert prototype models and data pipelines built by data scientists for use in production. Work closely with the Architect and offer suggestions on Data warehouse & pipeline designs. Work with Application Team/IT to improve workflows in the source apps to improve quality of feeds into the warehouse. Able to create internal reports/dashboards in SSRS/Power BI for custom monitoring of ETL pipelines/Warehouse health. Balance long-term code health and maintainability with business needs. Profiling and performance tuning of production code. Qualifications: 7+ years with ETL (SSIS or similar) using multiple sources Experience working on projects within the cloud, ideally Azure Strong development background with experience in T-SQL 2+ Years working with Azure Synapse (formerly Azure DW) in a Production environment, using both Spark pools & SQL pools 2+ Years Building pipelines/ETL using Azure Data Factory in a Production environment Had a major role in 1+ Migrations/Conversions from one data source to another Irvine Technology Corporation (ITC) is a leading provider of technology and staffing solutions for IT, Security, Engineering, and Interactive Design disciplines servicing startups to enterprise clients, nationally. We pride ourselves on the ability to introduce you to our intimate network of business and technology leaders - bringing you opportunity coupled with personal growth, and professional development! Join us. Let us catapult your career! Irvine Technology Corporation provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability or genetics. 
In addition to federal law requirements, Irvine Technology Corporation complies with applicable state and local laws governing non-discrimination in employment in every location in which the company has facilities."," Mid-Senior level "," Full-time "," Information Technology "," Information Services " Data Engineer,United States,Data Engineer ,https://www.linkedin.com/jobs/view/data-engineer-at-brooksource-3497505729?refId=qe2r1wgEe8UwbKeFfyahsQ%3D%3D&trackingId=%2Fmg2kG%2FTjEk9BkEslNyM2A%3D%3D&position=9&pageNum=13&trk=public_jobs_jserp-result_search-card," Brooksource ",https://www.linkedin.com/company/brooksource?trk=public_jobs_topcard-org-name," Indianapolis, IN "," 2 weeks ago "," 119 applicants "," Associate Data Engineer Duration: 6-month Contract to Hire. Client Location: Indianapolis, IN (must be able to go on-site). Compensation: $35-38/hr., depending on experience. *This position is not eligible for C2C or sponsorship. The Enterprise Data Solutions team is helping to build an organization that makes decisions driven by data. As a data engineer on the Enterprise Data Solutions team, you are helping enable data-driven decision making by centralizing, processing, and preparing data from throughout the organization to be used to solve real business problems. You will work in a collaborative, agile team with a modern cloud-based data stack to deliver value frequently. This individual will effectively exhibit the company's core values in everything they do by performing the following main duties: Building integrations with data sources to ingest data in the central data lake using various technologies. 
Cleansing, joining, preparing, and transforming data from raw sources into models suited for analytics purposes Leveraging DataOps practices such as data test automation, automated quality checks, and automated deployment - to ensure high quality and improve time to delivery Collaborate with the members of the BI team and others within the organization to ensure data needs are met Supporting the end users of the data and analytics, responding to tickets and inquiries from business partners when data quality issues occur Maintaining data governance through documentation of data solutions, through ERDs, Confluence documentation, or external tools Ensuring our enterprise data is timely and accurate Requirements Bachelor’s degree (B.A.) in Information Systems or other related field from a four-year college or university, or equivalent combination of education and experience. 1-3 years of experience Strong SQL skills Experience with cloud-based data warehouses, BigQuery a plus Data modeling skills and understanding of analytical data warehousing Understanding of data exploration, visualization, and BI tools such as Looker, Tableau, and Power BI Experience with data pipeline and workflow management tools such as Azkaban, Luigi, Airflow Experience with data processing using Python Experience with DBT, a plus. Working knowledge of source control using Git "," Associate "," Full-time "," Information Technology "," IT Services and IT Consulting and Government Administration " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-tier4-group-3483136732?refId=qe2r1wgEe8UwbKeFfyahsQ%3D%3D&trackingId=xyyjeAT60%2F8awiCLYRX4QA%3D%3D&position=11&pageNum=13&trk=public_jobs_jserp-result_search-card," Tier4 Group ",https://www.linkedin.com/company/tier4group?trk=public_jobs_topcard-org-name," United States "," 4 weeks ago "," Over 200 applicants ","No C2C. Direct Hire, Fully Remote Role. Are you a Data Engineer who is highly creative, collaborative, 
and able to promote effective data management practices and encourage a better understanding of data and analytics? Serving as a Data Engineer, you will be responsible for building, managing, and optimizing data pipelines that deliver curated data for key analytics consumers and initiatives, while building solutions for a wide variety of real-time, near real-time, and batch analytical processes enabling business users to make critical decisions. You will also need a strong combination of IT, data governance, analytics, and creative skills while playing a pivotal role in extending the next generation of the data ecosystem and operationalizing the most urgent data and analytics initiatives. This is a full-time, remote permanent opportunity with competitive compensation. This opportunity also comes with health/dental/vision benefits, generous PTO, tuition reimbursement, fitness reimbursement, generous retirement benefits, and more! Requirements · Bachelor’s degree in Data Management, Computer Science, Information Systems, or a related field · Experience in data management disciplines including data integration, data modeling, and optimization · Experience with Python, PowerShell, and/or Linux · Experience building data pipelines, ETL processes, and data structures · Knowledge of relational and non-relational database theory and structured query language (SQL)"," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Senior Data Engineer,https://www.linkedin.com/jobs/view/senior-data-engineer-at-nike-3526161890?refId=qe2r1wgEe8UwbKeFfyahsQ%3D%3D&trackingId=k5%2FC1A2lFvGNiUMhxMWbHA%3D%3D&position=21&pageNum=13&trk=public_jobs_jserp-result_search-card," Nike ",https://www.linkedin.com/company/nike?trk=public_jobs_topcard-org-name," Beaverton, OR "," 3 days ago "," 45 applicants ","Become a Part of the NIKE, Inc. Team NIKE, Inc. does more than outfit the world’s best athletes. 
It is a place to explore potential, obliterate boundaries and push out the edges of what can be. The company looks for people who can grow, think, dream and create. Its culture thrives by embracing diversity and rewarding imagination. The brand seeks achievers, leaders and visionaries. At NIKE, Inc. it’s about each person bringing skills and passion to a challenging and constantly evolving game. NIKE is a technology company. From our flagship website and five-star mobile apps to developing products, managing big data and providing leading edge engineering and systems support, our teams at NIKE Global Technology exist to revolutionize the future at the confluence of tech and sport. We invest and develop advances in technology and employ the most creative people in the world, and then give them the support to constantly innovate, iterate and serve consumers more directly and personally. Our teams are innovative, diverse, multidisciplinary and collaborative, taking technology into the future and bringing the world with it. Who Are We Looking For Nike has embraced big data technologies to enable data-driven decisions. We are looking for a Data Engineer to keep pace. As a Data Engineer, you will work with a variety of dedicated Nike teammates and be a driving force for building first-class solutions for Enterprise Data and Analytics. The ideal candidate will have programming experience, exceptional data skills, be comfortable with ambiguity and will enjoy working in a fast-paced, dynamic environment. 
What Will You Work On Design and build simple (non-complex) reusable components of a larger process or framework to support analytics products with guidance from experienced peers Design and implement product features in collaboration with Business and Technology partners Anticipate, identify and solve issues concerning data management to improve data quality Clean, prepare and optimize data at scale for ingestion and consumption Support the implementation of new data management projects and restructuring of the current data architecture Implement automated workflows and routines using workflow scheduling tools Understand and use continuous integration, test-driven development and production deployment frameworks Participate in design, code, test plans and dataset implementation performed by other data engineers in support of maintaining data engineering standards Analyze and profile data for the purpose of designing scalable solutions Solve straightforward data issues and perform root cause analysis to proactively resolve product issues Who Will You Work With You will be collaborating with the Engineering manager, the Product Manager, other Engineering team members and with a variety of dedicated Nike teammates. You will join a team that will be a driving force in building Data and Analytic solutions for Nike Technology. 
What You Bring Some combination of these qualifications and technical skills will position you well for this role: 5+ years’ experience developing Data & Analytic solutions Bachelor’s Degree in computer science Experience building data lake solutions using one or more of the following: AWS, EMR, S3, Hive & Spark Experience with relational SQL Experience with scripting languages such as Shell, Python Experience with source control tools such as GitHub and related dev process Experience with workflow scheduling tools such as Airflow In-depth knowledge of scalable cloud Passion for data solutions Strong understanding of data structures and algorithms Strong understanding of solution and technical design Strong problem-solving and analytical mentality Experience working with Agile Teams Able to influence and communicate effectively, both verbally and in writing, with team members and business partners Able to quickly pick up new programming languages, technologies, and frameworks NIKE, Inc. is a growth company that looks for team members to grow with it. Nike offers a generous total rewards package, casual work environment, a diverse and inclusive culture, and an electric atmosphere for professional development. No matter the location, or the role, every Nike employee shares one galvanizing mission: To bring inspiration and innovation to every athlete* in the world. NIKE, Inc. is committed to employing a diverse workforce. Qualified applicants will receive consideration without regard to race, color, religion, sex, national origin, age, sexual orientation, gender identity, gender expression, veteran status, or disability. 
]]>"," Mid-Senior level "," Full-time "," Information Technology and Engineering "," Retail " Data Engineer,United States,REMOTE Data Engineer,https://www.linkedin.com/jobs/view/remote-data-engineer-at-state-farm-3487713211?refId=qe2r1wgEe8UwbKeFfyahsQ%3D%3D&trackingId=V7Li9ywqVJO4IniDyzlFqg%3D%3D&position=23&pageNum=13&trk=public_jobs_jserp-result_search-card," State Farm ",https://www.linkedin.com/company/state_farm?trk=public_jobs_topcard-org-name," Bloomington, IL "," 1 day ago "," 168 applicants ","Overview We are not just offering a job but a meaningful career! Come join our passionate team! As a Fortune 50 company, we hire the best employees to serve our customers, making us a leader in the insurance and financial services industry. State Farm embraces diversity and inclusion to ensure a workforce that is engaged, builds on the strengths and talents of all associates, and creates a Good Neighbor culture. We offer competitive benefits and pay with the potential for an annual financial award based on both individual and enterprise performance. Our employees have an opportunity to participate in volunteer events within the community and engage in a learning culture. We offer programs to assist with tuition reimbursement, professional designations, employee development, wellness initiatives, and more! Visit our Careers page for more information on our benefits, locations and the process of joining the State Farm team! REMOTE: Qualified candidates (outside of hub locations listed below) may be considered for 100% remote work arrangements based on where a candidate currently resides or is currently located. HYBRID: Qualified candidates (in or near hub locations listed below) should plan to spend time working from home and some time working in the office as part of our hybrid work environment. HUB LOCATIONS: Dunwoody, GA; Richardson, TX; Tempe, AZ; or Bloomington, IL Check out our Enterprise Technology department! 
Responsibilities The Data Visualization team is seeking a talented and creative Data Engineer to evaluate and enable data technologies that transform data into meaningful insights across the Enterprise. To be successful in this role, the engineer must be a strategic thinker who can bring a data-driven approach to solving complex business problems. We need an exceptional communicator, passionate about data, collaborative, analytical, and a problem-solver who has expertise related to business intelligence (BI) tooling. As a Data Engineer in this role you will get to: Position data and perform data analysis for use in visualizations that will provide insights into business opportunities. Interface with the business areas that are sourcing the data for the various analytical insights. Qualifications Highly desired skills: At least 5 years of experience in data engineering Strong proficiency in Python and SQL Experience with data warehousing and ETL tools Familiarity with cloud computing platforms, such as AWS Knowledge of data modeling and data visualization techniques Excellent problem-solving, analytical, communication, and interpersonal skills SPONSORSHIP: Applicants are required to be eligible to lawfully work in the U.S. immediately; employer will not sponsor applicants for U.S. work authorization (e.g. H-1B visa) for this opportunity For Los Angeles candidates: Pursuant to the Los Angeles Fair Chance Initiative for Hiring, we will consider for employment qualified applicants with criminal histories. For San Francisco candidates: Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records. 
For Colorado and Washington State candidates: Salary Range: $84,620.00-$169,250.00 For California, NYC, and CT candidates: Potential salary range: $84,620.00-$169,250.00 Potential yearly incentive pay: up to 15% of base salary Competitive Benefits including: 401k Plan Health Insurance Dental/Vision plans Life Insurance Paid Time Off Annual Merit Increases Tuition Reimbursement Health Initiatives For more details visit our benefits summary page SFARM "," Entry level "," Full-time "," Analyst, Information Technology, and Engineering "," Insurance " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-zortech-solutions-3528105992?refId=qe2r1wgEe8UwbKeFfyahsQ%3D%3D&trackingId=ivcci9o28vzKxDgNBAbllQ%3D%3D&position=25&pageNum=13&trk=public_jobs_jserp-result_search-card," Zortech Solutions ",https://ca.linkedin.com/company/zortech?trk=public_jobs_topcard-org-name," West New York, NJ "," 3 weeks ago "," Be among the first 25 applicants "," Role: Data Engineer. Location: Remote/US/NJ (Hybrid-Onsite). Duration: 3-6+ Months. Job Description: 5-8 years of total experience. Skills: Python/PySpark "," Entry level "," Full-time "," Other "," IT Services and IT Consulting " Data Engineer,United States,Senior Data Engineer,https://www.linkedin.com/jobs/view/senior-data-engineer-at-saransh-inc-3527090041?refId=H7H2G9jEq0FASwz4CPDGEQ%3D%3D&trackingId=bibnhcfurOUxD0TDq%2BvwBQ%3D%3D&position=1&pageNum=14&trk=public_jobs_jserp-result_search-card," Saransh Inc ",https://www.linkedin.com/company/saransh-inc-usa?trk=public_jobs_topcard-org-name," New York, NY "," 8 hours ago "," Be among the first 25 applicants ","About The Sr Data Engineer Role As a Data Engineer with an ETL/ELT background, the candidate needs to design and develop reusable data ingestion processes from a variety of sources and build data pipelines in Talend, PostgreSQL and other technologies such as the Synapse Azure cloud data warehouse platform and reporting processes. 
The ideal candidate should be proficient in writing SQL so it can be embedded into Informatica (IICS) for data transformation. The ideal candidate should be willing to learn open-source technologies on the fly as the job demands. This is a hands-on data engineering role. Responsibilities Design, develop & implement ETL processes on Talend Advanced SQL knowledge, capable of writing optimized queries for faster data workflows. Must be extremely well versed in handling large-volume data and work using different tools to derive the required solution. Work with the offshore team, business analysts and other data engineering teams to ensure alignment of requirements, methodologies and best practices Requirements Bachelor's or Master's degree in Computer Science. About 8-10 years of overall Data Engineering experience, with 5+ years of recent experience in Talend Proven work experience in Spark, Python, SQL, and any RDBMS. Experience in designing solutions for multiple large data warehouses with a good understanding of cluster and parallel architecture as well as high-scale or distributed RDBMS Strong database fundamentals including SQL, performance and schema design. Understanding of CI/CD framework is an added advantage. Ability to interpret/write custom shell scripts. Python scripting is a plus. Experience with the Azure platform and Synapse Experience with Git / Azure DevOps"," Mid-Senior level "," Contract "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-converse-3484780935?refId=H7H2G9jEq0FASwz4CPDGEQ%3D%3D&trackingId=CBttCogHnnObmRidYnsMYw%3D%3D&position=2&pageNum=14&trk=public_jobs_jserp-result_search-card," Converse ",https://www.linkedin.com/company/converse?trk=public_jobs_topcard-org-name," Boston, MA "," 3 weeks ago "," Over 200 applicants ","Become part of the Converse Team Converse is a place to explore potential, break barriers and push out the edges of what can be. 
The company looks for people who can grow, think, dream and create. Its culture thrives by embracing diversity and rewarding imagination. The brand seeks achievers, leaders and visionaries. At Converse, it’s about each person bringing skills and passion to a challenging and constantly evolving world to make things better as a team. Converse, Inc. Boston, MA. Work closely with Project Management and Business teams to fully define specifications and ensure project acceptance. Involved in preparation of functional and technical specifications with different cross-functional teams. Lead the team, defining solution options, providing estimates on effort and risk, and evaluating technical feasibility in the Agile development process, including Scrum and Kanban. Work on troubleshooting data and analytics issues and perform root cause analysis to proactively resolve issues. Develop data extracts and feeds from the full spectrum of systems in the Converse ecosystem, including transactional ERP systems, POS data, product and merchandising systems. Engineer data products for a variety of Operations analytics use cases, ranging from reporting and data visualization to advanced analytics/machine learning use cases. Support designing technical specifications and data transformation models for junior developers. Ensure development is on track and meets specifications as defined by product management and the business. Responsible for data integrity of current platform and QA of new releases. Support the development and maintenance of backlog items and solution features. Participate in sprint planning activities from a development perspective. Responsible for designing cloud-based data architecture using AWS stacks. Design and develop Python data science and data engineering libraries dealing with structured and unstructured data. Work with a variety of database types (SQL/NoSQL, columnar, object-oriented) and diverse data formats. 
Responsible for ETL with Spark and building data pipelines/orchestrations in Airflow and working on ETL tools like Matillion. Responsible for the DevOps toolchain and Continuous Development, Continuous Integration and Automated Testing using Jenkins. Apply data engineering and software development skills to support advanced analytics/data science. Experience Must Include Applicant must have a Bachelor’s degree in Computer Science, Information Systems, or Information Technology and 5 years of progressive post-baccalaureate experience in the job offered or a related occupation. Data warehousing; ETL or ELT; Amazon Web Service (AWS) Cloud Services, including AWS S3, AWS Lambda, AWS EC2, AWS EMR or AWS DynamoDB; Relational Database Management Systems (RDBMS), such as Oracle, Teradata, SQL Server or Snowflake; Database Development with writing stored procedures, functions, triggers, cursors or SQL queries; Hadoop, HDFS, Hive or Spark; Programming languages, including Java or Python; Business Intelligence Tools, such as Tableau; Unix Shell scripting; and Version control systems, such as Git, Bitbucket or GitHub. Converse is more than a company; it’s a worldwide advocate for self-expression. This belief motivates our employees, permeates our working environment and inspires our products. No two of us look or think exactly alike. We are each one-of-a-kind. Individually and as a culture, we have the freedom to create and grow professionally. Generous benefits packages only sweeten the experience. From Boston to Shanghai, from Brand Design to Finance, Converse is a brand that celebrates the unique and creative people of the world. Together, we’re different. 
"," Entry level "," Full-time "," Information Technology "," Retail Apparel and Fashion " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-wise-skulls-3528108302?refId=H7H2G9jEq0FASwz4CPDGEQ%3D%3D&trackingId=FRZ1zC6IHtiUAOkIjqFq3g%3D%3D&position=3&pageNum=14&trk=public_jobs_jserp-result_search-card," Wise Skulls ",https://www.linkedin.com/company/wearewiseskulls?trk=public_jobs_topcard-org-name," Springfield, MA "," 3 weeks ago "," Be among the first 25 applicants ","Title: Data Engineer Location: Springfield, MA (Day 1 Onsite) Duration: 6+ Months (Possibility of Extension) Implementation Partner: Infosys End Client: To be disclosed JD: Overall IT experience of 10+ years. 6+ years as an ETL developer in Data Warehouse projects, preferably in the Insurance domain. Hands-on Python experience is good to have. Advanced SQL hands-on experience is a must. Experience in an ETL tool, preferably Informatica or SSIS. Good communication"," Entry level "," Full-time "," Other "," Staffing and Recruiting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-hexaquest-global-3528109255?refId=H7H2G9jEq0FASwz4CPDGEQ%3D%3D&trackingId=OfyOxHYCTiSoTyFLTd7IXg%3D%3D&position=4&pageNum=14&trk=public_jobs_jserp-result_search-card," HexaQuEST Global ",https://www.linkedin.com/company/hexaquest-global?trk=public_jobs_topcard-org-name," New York, NY "," 3 weeks ago "," Be among the first 25 applicants "," Evaluate business needs and objectives. Develop data models and database pipeline architectures on the Azure environment. Perform data mining and segmentation, and interpret trends and patterns. Conduct complex data analysis and report on results. Prepare data for prescriptive and predictive modeling. Build algorithms and prototypes. Explore ways to enhance data quality and reliability. Identify opportunities for data acquisition and develop programs. Collaborate with data scientists and architects to enhance data quality. Work independently 
on SQL, PySpark, ADF, Synapse, Databricks. Healthcare experience is good to have. "," Entry level "," Full-time "," Other "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-themathcompany-3503063562?refId=H7H2G9jEq0FASwz4CPDGEQ%3D%3D&trackingId=krfbvgpguw2wD1rFVxq8QQ%3D%3D&position=5&pageNum=14&trk=public_jobs_jserp-result_search-card," TheMathCompany ",https://www.linkedin.com/company/themathcompany?trk=public_jobs_topcard-org-name," Chicago, IL "," 2 weeks ago "," Over 200 applicants ","COMPANY OVERVIEW At TheMathCompany, we enable viable and valuable data and analytics transformations for our clients. Our mission is to help Fortune 500 organizations build core capabilities that set them on a path to achieve analytics self-sufficiency. We are changing the way companies go about executing enterprise-wide data engineering and data science initiatives, by defining and delivering comprehensive and robust analytics engagements. With a holistic range of services across data engineering, data science and management consulting, we are in the business of disrupting the analytics services and product space. To help us achieve our objectives, we are looking for passionate and experienced practitioners to join our US organization and be part of the growth story of one of the fastest-growing AI/ML startups in the world. We, as an organization, are committed to your personal success and professional development. TheMathCompany offers and supports our employees' development of their personal brand through professional experiences, best-in-class learning opportunities, inclusion, collaboration and personal well-being. ROLE DESCRIPTION We are looking for passionate individuals to help our clients solve complex challenges enabling their sustained analytics transformation. 
As a member of our Client Services team, you will lead a team of Associates and Senior Associates who design, execute, and implement cutting edge analytical solutions. You are responsible for institutionalizing data driven insights and recommendations to bring customer strategies to life. The ideal candidate should be able to manage client engagements and relationships while leading internal teams. Your role will involve generating insights, developing recommendations, and communicating strategy and delivery plans to clients. The candidate will become familiar with the strategic direction of our customers and help them translate their goals into action on existing projects while also developing a pipeline of additional projects. As a Data Engineering Tech Lead, you will: • Conduct requirements gathering workshops and lead the solution/architecture design process jointly with customers (min 5+ years) • Contribute subject matter expertise to effort estimation and create timelines • Articulate recommendations through compelling presentations and architectural blueprint documents to a variety of audiences, including client Business management and experienced IT architects • Ability to work independently; lead teams focused on specific work streams of large projects. 
• Lead technical teams through complex, multi-phased delivery projects and provide hands-on delivery guidance (min 5+ years and a 5–8-member team) • Ability to capture data requirements, create data mapping documents • Work with customers to run projects using agile methodology and translate requirements into multiple sprints with clear identification of cross-team dependencies • Work with customers to create and influence proper standards for development, governance, and operational lifecycle • Identify ongoing risks and pain points throughout the project, and develop and implement mitigation measures • Mentor and train the team on design and architecture best practices in the data engineering space • Ability to work in an ambiguous environment • Strong problem solving and troubleshooting skills with the ability to exercise mature judgment. Required Qualifications: • 6+ years of experience with design, development and deployment of Data and Analytics solutions at scale • Experience across the entire software development lifecycle with agile methodology • 5+ years of hands-on experience in designing data lake or data warehousing environments, tuning and ETL/ELT process development • ETL/ELT experience required on various tools and cloud technologies such as AWS, Azure or GCP • Data transformation, data mapping, profiling, file conversions, analysis and other similar experience required • Advanced SQL writing skills for querying and transforming data • Hands-on experience in programming languages – Python or similar • Strong executive presentation skills, able to strategically and effectively communicate with both external and internal stakeholders, as well as collaboratively across business units, technology, and policy teams. 
Preferred Qualification: • Bachelor’s degree • Consulting background is a plus Please Note: This is a Full-time opportunity"," Mid-Senior level "," Full-time "," Consulting, Strategy/Planning, and Engineering "," IT Services and IT Consulting and Business Consulting and Services " Data Engineer,United States,Data Engineer / BI Developer,https://www.linkedin.com/jobs/view/data-engineer-bi-developer-at-robert-half-3511766734?refId=H7H2G9jEq0FASwz4CPDGEQ%3D%3D&trackingId=31K9JA8NiKwXENt39SG2%2Bg%3D%3D&position=6&pageNum=14&trk=public_jobs_jserp-result_search-card," Robert Half ",https://www.linkedin.com/company/robert-half-international?trk=public_jobs_topcard-org-name," Culver City, CA "," 1 week ago "," 80 applicants ","*Please email Valerie.nielsen@rht.com for immediate response! I don't want you to get lost in our black hole :) The Role: Data Engineer / BI Developer Location: Playa Vista, California (onsite 1x per week) Salary: 145,000 Overview: Exciting opportunity to work with our client in the Product space (rapid growth considering market conditions!) This role will work on a team of 5 BAs and engineers. Need to be strong in SQL. Must Haves: Azure PowerBI or similar SQL NoSQL *no sponsorship* *Please email Valerie.nielsen@rht.com for immediate response! I don't want you to get lost in our black hole :)"," Mid-Senior level "," Full-time "," Information Technology "," Staffing and Recruiting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-incedo-inc-3485056578?refId=H7H2G9jEq0FASwz4CPDGEQ%3D%3D&trackingId=U8HBIJey6xWBKwIS1IXAyg%3D%3D&position=7&pageNum=14&trk=public_jobs_jserp-result_search-card," Incedo Inc. 
",https://www.linkedin.com/company/incedo-inc?trk=public_jobs_topcard-org-name," Piscataway, NJ "," 4 weeks ago "," 173 applicants ","Title- Big Data Engineer Duration- Full Time Location – Piscataway, NJ (Hybrid) · Experience: 5-6 years · Role: Big data engineer · Skills: Expert in Druid, ETL pipeline knowledge, and GCP Must Have Skills: GCP, Oozie, Hadoop, ETL PREFERRED SKILLS • One or more years of programming in SQL, R and/or Python. • Experience with R and/or Python is strongly desired • Familiarity with data warehousing concepts • Expert in Microsoft Office suite, especially Word, Excel and PowerPoint Experience & Qualifications • Strong mathematical skills, analytical skills and mastery of web analytics techniques to transform raw data into insights and recommendations that will effectively support decision making processes · Bachelor’s (Master’s preferred) degree in Math, Stats, Computer Science, etc. · Strong knowledge of statistical techniques and expertise in the use of statistical packages · 5-7 years of relevant experience in data science, machine learning or applied statistics. · Working experience with relational databases (SQL) and large-scale distributed systems, expertise in Python and R · Expertise in data collection, development, and reporting data-driven insights and recommendations · Highly organized and rigorous thinking, able to solve problems diligently and creatively · Strong quantitative and qualitative skills with proven data interpretation abilities and technical skills · Knowledge of multi-site website functionality, tracking, concepts and tagging infrastructure · Experience with statistical analysis methods and tools (e.g. 
A/B testing, t-tests, z-tests) · Genuine team player, able to work well and pleasantly in a team as well as independently · Strong time management skills with a track record of consistently meeting deadlines · Ability to communicate effectively with technical and non-technical audiences · Detail-oriented, able to record, organize and track content from meetings and discussions"," Mid-Senior level "," Full-time "," Information Technology and Consulting "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-hushh-ai-3510982303?refId=H7H2G9jEq0FASwz4CPDGEQ%3D%3D&trackingId=Gv1dMQrstySjyz7GLFksFw%3D%3D&position=8&pageNum=14&trk=public_jobs_jserp-result_search-card," hushh.ai ",https://www.linkedin.com/company/hushh-ai?trk=public_jobs_topcard-org-name," Seattle, WA "," 1 week ago "," Over 200 applicants ","Title: Data Engineering Intern [Paid Internship] Location: Seattle, WA [Hybrid-first, with the first few months in the Seattle office for team building and bonding]. Office close to the University of WA in Seattle Duration: ASAP, Apr 1 - Sep 30 (6 months) Hushh is looking for a Data Engineering Intern to join our dynamic and passionate team. As a Data Engineering Intern, you will work closely with the UX Designer, Software Developer and Software Engineering Intern to help bring the hushh wallet to life. You will be responsible for ensuring that the user's data flows seamlessly through the product and creates a magical flywheel of data. 
Responsibilities: Work closely with the UX Designer, Software Developer and Software Engineering Intern to ensure that the product meets our user's needs Design and build data pipelines that integrate with various APIs and databases Develop ETL processes to transform and clean data Help with data modeling and database design Develop and maintain scalable data infrastructure Work with our Data Science team to develop data-driven insights and models Requirements: Currently pursuing a Bachelor's or Master's degree in Computer Science, Data Science, Information Systems or related field Strong programming skills in Python, Java or Scala Experience with SQL and NoSQL databases Familiarity with data modeling and database design concepts Familiarity with ETL processes and data pipelines Knowledge of data warehousing and data integration concepts Experience with cloud platforms such as AWS, GCP or Azure Strong analytical and problem-solving skills Good communication and collaboration skills Nice-to-Have: Prior experience with security and data products on iOS and Android Prior experience in helping startups go from 0 to 1 Compensation and Benefits: We offer a competitive salary, bonus, and equity package, as well as great benefits you'd expect from a seed-stage startup. We believe in investing in our team, so you'll have access to professional development opportunities, mentorship programs, and a supportive and collaborative work environment. If you are passionate about data engineering and want to work on a product that helps users take control of their data, we would love to hear from you via jobs@hush1one.com . 
Hushh is an equal opportunity employer and we value diversity at our company."," Internship ",,, Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-electrify-america-3520153927?refId=H7H2G9jEq0FASwz4CPDGEQ%3D%3D&trackingId=0Mvn7qOvKuCCZWw80sbZMg%3D%3D&position=9&pageNum=14&trk=public_jobs_jserp-result_search-card," Electrify America ",https://www.linkedin.com/company/electrifyamerica?trk=public_jobs_topcard-org-name," Reston, VA "," 1 week ago "," 193 applicants ","Role Summary Electrify America is building an exciting enterprise solution to solve complex, cutting edge business problems related to EV charging. As the IoT Data Engineer, you will work with a team to develop advanced automated ETL/integration solutions. The IoT Data Engineer will be responsible for identifying, designing, implementing, testing, and optimizing solutions. The IoT Data Engineer will work with an evolving tech stack currently leveraging AWS and Snowflake as the key tools for data integration, retention, and cleansing. The IoT Data Engineer will work closely with the data architect, other engineers, business and technical analysts to build a pool of clean, sustainable data. The IoT Data Engineer will build monitoring and quality checks to maintain the ongoing health of the data to support the quality team. Develop and maintain automated integration solutions. Engineer ETL/ELT solutions for various datasets. Support Electrify America in the mission to develop a clean, consolidated set of viable data for advanced analytics. Strong affinity for integrating quality checks and monitoring into all solutions. Develop technical documentation. Collaborate with business and technical users to align on complete technical requirements. Role Responsibilities Main responsibility – assign % of time spent (70%) Develop automated integration solutions. Develop complex ELT/ETL logic. 
Implement quality monitoring solutions. Optimize existing implementations. Develop cutting edge business KPIs using advanced logic directed by the business. Developing and maintaining a thorough knowledge of business requirements. Additional responsibilities – assign % of time (30%) Collaborate with data architects, analysts, and other developers to divide work and develop strong, business-oriented solutions. Build technical documentation of implementation solutions. Fully Remote Primary Location United States-EA Home Based Experience 5 – 8 years of relevant experience Education Undergraduate degree in computer science, data science, information science, quantitative modeling, business intelligence, or has relevant work experience in a related field General Skills Experience working with evolving data models and expanding interconnected data sources. ETL/ELT experience. Experience with SQL and relational databases. Ability to take a data platform solution from start to finish – identify the solution, build it, test it, optimize it, and maintain it. Experience developing automated data integration. Ability to apply data quality and data cleaning principles in an automated manner. Ability to collaborate with a team and define strong ETL and data quality solutions. Experience designing advanced technical solutions to business problems. Ability to develop reporting solutions with an evolving pool of data. Experience solving complex business problems. Experience developing enterprise-level KPIs. Experience developing quality data pipelines. Ability to explore and analyze data independently. Ability to keep up with ever-evolving data engineering best practices and standards. Curious and self-driven. Analytical/logical thinker. Ability to consider both the problem and the proposed solution to determine what truly meets the business need; big picture thinking. Familiarity with SCRUM ideology. 
Strong communication skills and ability to communicate with both business and technical resources. Suggest improvements and optimized solutions for the data pipeline both to and from the data warehouse. Strong focus on best practices and sustainable, automated solutions. Desire to build things right, rather than build them twice. Strong documentation skills. Flexible when faced with scope or timeline changes. Experience working in an enterprise ecosystem. Ability to collaborate with other teams and business stakeholders to set clear delivery timelines. Strong solution-oriented drive. Cross-functional coordination. Multi-stakeholder communication and collaboration. Results-oriented. Ability to solve business requests in a creative manner. Specialized Skills ETL/ELT experience. Relational database experience. Experience working in an enterprise-level ecosystem. Coding experience. Experience building automated, sustainable integration solutions. Experience with data quality and quality monitoring. Strong SQL knowledge. Experience with Agile workflow and SCRUM. Experience writing technical documentation. Familiarity with data modeling/data architecture. Experience analyzing data. Tool agnostic. Experience building solutions that incorporate change data capture. Experience with Project Management Software/Jira. (desired) Experience with change management tools. (desired) ETL experience with Matillion, AWS (S3, Lambda, SNS, SQS, Glue, etc), and Snowflake (Snowpipe, etc). (desired) Experience with Python, Java, JavaScript, or Scala. (desired) Experience parsing complex JSON. (desired) Knowledge of EVs. (desired) Familiarity with OCPP/OCPI standards. (desired) Experience with Tableau dashboard development. (desired) Work Flexibility No/Minimal Travel, Remote Work Available Volkswagen Group of America is an Equal Opportunity Employer. 
We welcome and encourage applicants from all backgrounds, and do not discriminate based on race, sex, age, disability, sexual orientation, national origin, religion, color, gender identity/expression, marital status, veteran status, or any other characteristics protected by applicable laws. Salary range is dependent on factors such as geographical differentials, industry-based experience, skills, training, credentials, and other qualifications. In the state of California, the salary range is $85,900 - $124,600. In the state of Colorado, the salary range is $78,100 - $113,300. In the state of Washington, the salary range is $85,900 - $124,600. In New York City, the salary range is $93,700 - $136,000. In Westchester County, the salary range is $93,700 - $136,000. In the state of Rhode Island, the salary range is $78,100 - $113,300."," Not Applicable "," Full-time "," Information Technology "," Motor Vehicle Manufacturing and Manufacturing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-ivy-energy-3484738543?refId=H7H2G9jEq0FASwz4CPDGEQ%3D%3D&trackingId=J5jFITBCwX8xWILt0eoVBw%3D%3D&position=10&pageNum=14&trk=public_jobs_jserp-result_search-card," Ivy Energy ",https://www.linkedin.com/company/ivyenergy?trk=public_jobs_topcard-org-name," Oakland, CA "," 3 weeks ago "," Be among the first 25 applicants ","Today's financial system is built to favor those with money. Grid's mission is to level that playing field by building financial products that help users better manage their financial future. The Grid app lets users access cash, build credit, spend money, file their taxes and lots, lots more. Grid is a fast growing team that's deeply passionate about making a difference in the lives of millions. We're solving huge problems and believe that a merit driven culture allows every team member to play a big role. Join our growing team in our Bay Area headquarters or additional offices across the US. 
Our Data Engineers are responsible for converting the data surrounding a customer's financial life into tools that make our customers smarter and better equipped financially for the future. We value making intelligent decisions backed by good data and tools. We're looking for people who share our values, particularly if you have experience analyzing, processing, and learning from large data sets. Problems We Work On We're looking for a seasoned data engineer who can help us lay the foundation of an exceptional data engineering practice. The ideal candidate will be confident with a programming language of their choice, be able to learn new technologies quickly, have strong software engineering and computer science fundamentals and have extensive experience with common big data workflow frameworks and solutions. You will be writing code, setting style guides and collaborating cross-functionally with product, engineering and leadership. Analytics: Collect all the data for a user into tools that help our customers Machine Learning: Using a variety of techniques to reach better insights Data Processing: Managing data & statistics using scalable and efficient technologies Visualization: Envision our data as beautiful graphs and tools that allow customers to explore their data & ask their own questions Risk: Analyze data for anomalous patterns and build tools that allow us to find bad actors quickly We Practice Open collaboration Code reviews Testing Agile development We Use Go Python MySQL Google Cloud Platform Kubernetes Kubeflow Docker Google Pubsub BigQuery Firebase We're looking for Engineers to: Design and implement platform services, frameworks and ecosystems Build a scalable, reliable, operable and performant big data workflow platform for data scientists/engineers, AI/ML engineers, and product/operation team members Drive efficiency and reliability improvements through design and automation: performance, scaling, observability, and monitoring Requirements Strong 
programming skills with Python. Strong programming skills with a typed programming language, such as Java, Scala, Go, etc. Disciplined approach to development, testing, and quality assurance Excellent communication skills, capable of explaining highly technical problems in English Understand data processing and ETL, with hands-on experience building pipelines and using frameworks such as Hive, HDFS, Presto, Spark, etc. Really strong candidates may have: Actively contributed to open source software Worked with a strong, lean-based development environment Previous work experience in a start-up environment Ability to recognize the right tool for the right situation/problem. Strong programming skills in Go PI204123141"," Entry level "," Full-time "," Information Technology "," IT System Data Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-genworth-3518406820?refId=H7H2G9jEq0FASwz4CPDGEQ%3D%3D&trackingId=XhpEUQparFTyxDvFfdoIMg%3D%3D&position=11&pageNum=14&trk=public_jobs_jserp-result_search-card," Genworth ",https://www.linkedin.com/company/genworth-financial_2?trk=public_jobs_topcard-org-name," Raleigh, NC "," 1 week ago "," 54 applicants ","POSITION TITLE Data Engineer LOCATION Hybrid – Raleigh, North Carolina YOUR ROLE The Data Engineer reports to the Senior Manager, Data & Analytics Engineering in the IT Organization and is responsible for contributing towards goals and objectives of the Data & Analytics Engineering team. This team is undertaking a data modernization effort to build a cloud-first data lake and warehouse. They’re also transforming the technology & processes around machine learning and data science. This role will be responsible for working on various solution components like AWS, Talend, Snowflake as part of the end-to-end data engineering solutions. 
This role will bring deep technical expertise in data solutions by building data engineering components and data pipelines. This person will be a consummate team player and participate in code reviews and design reviews, and operate with a mindset of continuous improvement. Your Responsibilities Build and maintain the necessary frameworks and technology architecture for data pipelines (ETL / ELT) and machine learning pipelines Contribute actively to processes needed to achieve operational excellence in all areas, including project management and system reliability. Design, build, and launch new data pipelines in production in partnership with the business stakeholders. Design and support new machine learning pipelines as well as dashboards and reports in production. Your Qualifications BA/BS in Computer Science, Math, Physics, or other technical fields. 3+ years of experience in Data Engineering, BI, or Data Warehousing. 3+ years of strong experience in SQL and Data Analysis 3+ years of strong experience in Big Data technologies such as Python, Hive, and Spark 3+ years in development of data pipelines/ETL for data ingestion, data preparation, data integration, data aggregation, feature engineering, etc. Strong analytical skills with high attention to detail and accuracy Preferred Qualifications Experience working in Financial Technology companies. 
Solid working knowledge of Data Warehousing co Working knowledge of AWS and Tableau Working knowledge of Talend, Snowflake WHY WORK AT ENACT We have a real impact on the lives of the people we serve We work on challenging and rewarding projects We give back to the communities where we live We offer competitive benefits including: Medical, Dental, Vision, Flexible Spending Account options beginning your first day Generous Choice Time Off policy 12 Paid Holidays 40 hours of volunteer time off 401K Account with matching contributions Tuition Reimbursement and Student Loan Repayment Paid Family Leave Child Care Subsidy Program Company Enact, operating principally through its wholly-owned subsidiary Genworth Mortgage Insurance Corp. since 1981, is a leading U.S. private mortgage insurance provider committed to helping more people achieve the dream of homeownership. Building on a deep understanding of lenders’ businesses and a legacy of financial strength, we partner with lenders to bring best-in-class service, leading underwriting expertise, and extensive risk and capital management to the mortgage process, helping to put more people in homes and keep them there. By empowering customers and their borrowers, Enact seeks to positively impact the lives of those in the communities in which it serves in a sustainable way. Enact is headquartered in Raleigh, North Carolina. Through our values of Excellence, Improvement and Connection, the Enact team delivers on our mission to help more people realize the dream of homeownership. The positive impact we can have on our world inspires us to go the extra mile. We look at the bigger picture, always considering our customers’ processes and their borrowers’ experience. We work hard to anticipate all the effects our actions might have. That can make our work challenging, and also satisfying. Are you the kind of person who’s always anticipating your customers’ needs? Always one step ahead, ready to catch that unexpected curveball? 
If so, you could thrive with us. We are proud to be an equal opportunity employer and all hiring decisions are based on merit, qualifications, and business need. We do not discriminate based upon race, religion, color, national origin, gender (including pregnancy), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics."," Entry level "," Full-time "," Information Technology "," Insurance " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-stackadapt-3531323218?refId=H7H2G9jEq0FASwz4CPDGEQ%3D%3D&trackingId=%2BktvHkSKoHhugG7taQI%2FAA%3D%3D&position=12&pageNum=14&trk=public_jobs_jserp-result_search-card," StackAdapt ",https://ca.linkedin.com/company/stackadapt?trk=public_jobs_topcard-org-name," United States "," 10 hours ago "," 167 applicants ","StackAdapt is a self-serve advertising platform that specializes in multi-channel solutions including native, display, video, connected TV, audio, in-game, and digital out-of-home ads. We empower hundreds of digitally-focused companies to deliver outcomes and exceptional campaign performance every day. StackAdapt was founded with a vision to be more than an advertising platform; it’s a hub of innovation, imagination and creativity. We're looking to add Data Engineers to our data team! This team works on solving complex problems for StackAdapt's digital advertising platform. You'll be working directly with our data scientists, data engineers, Engineering team, and CTO on building pipelines and ad optimization models. With databases that process millions of requests per second, there's no shortage of data and problems to tackle. 
Want to learn more about our Data Science Team: https://alldus.com/ie/blog/podcasts/aiinaction-ned-dimitrov-stackadapt/ Learn more about our team culture here: https://www.stackadapt.com/careers/data-science Watch our talk at Amazon Tech Talks: https://www.youtube.com/watch?v=lRqu-a4gPuU StackAdapt is a Remote First company, and we are open to candidates located anywhere in the US for this position. What you'll be doing: Design modular and scalable real-time data pipelines to handle huge datasets Understand and implement custom ML algorithms in a low-latency environment Work on microservice architectures that run training, inference, and monitoring on thousands of ML models concurrently What you'll bring to the table: Have the ability to take an ambiguously defined task and break it down into actionable steps Have a deep understanding of algorithm and software design, concurrency, and data structures Experience in implementing probabilistic or machine learning algorithms Interest in designing scalable distributed systems A high GPA from a well-respected Computer Science program Enjoy working in a friendly, collaborative environment with others StackAdapters enjoy: Competitive salary + equity 401K matching 3 weeks vacation + 3 personal care days + 1 Culture & Belief day + birthdays off Access to a comprehensive mental health care platform Full benefits from day one of employment Work from home reimbursements Optional global WeWork membership for those who want a change from their home office Robust training and onboarding program Coverage and support of personal development initiatives (conferences, courses, etc) Access to StackAdapt programmatic courses and certifications to support continuous learning Mentorship opportunities with industry leaders An awesome parental leave policy A friendly, welcoming, and supportive culture Our social and team events! 
StackAdapt is a diverse and inclusive team of collaborative, hardworking individuals trying to make a dent in the universe. No matter who you are, where you are from, who you love, follow in faith, disability (or superpower) status, ethnicity, or the gender you identify with (if you’re comfortable, let us know your pronouns), you are welcome at StackAdapt. If you have any requests or requirements to support you throughout any part of the interview process, please let our Talent team know. About StackAdapt We've been recognized for our high performing campaign conversion rates, award-winning customer service, and innovation by numerous industry publications including: 2023 Best Workplaces for Women by Great Place to Work® Top 20 on Ad Age's Best Places to Work 2023 #1 DSP on G2 and leader in the Video and Cross-Channel Advertising Categories A Top Growing Company in Canada based on the Globe and Mail's 2022 Business Report Named an Enterprise Fast 15 Winner for 2022, as part of the Technology Fast 50™ Program "," Entry level "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-concentrix-3503244262?refId=H7H2G9jEq0FASwz4CPDGEQ%3D%3D&trackingId=og9cGtW4hrfidr5qn%2FSjKA%3D%3D&position=13&pageNum=14&trk=public_jobs_jserp-result_search-card," Concentrix ",https://www.linkedin.com/company/concentrix?trk=public_jobs_topcard-org-name," United States "," 2 weeks ago "," Over 200 applicants "," Need Experience with the following:Teradata Unix Hadoop Hive PySpark Cloud "," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Junior Data Engineer,https://www.linkedin.com/jobs/view/junior-data-engineer-at-cgi-3478349289?refId=H7H2G9jEq0FASwz4CPDGEQ%3D%3D&trackingId=p6XT4pWYWa%2BfUjd%2BbhxidA%3D%3D&position=14&pageNum=14&trk=public_jobs_jserp-result_search-card," CGI 
",https://ca.linkedin.com/company/cgi?trk=public_jobs_topcard-org-name," Arlington, VA "," 1 month ago "," 161 applicants ","Position Description This is an exciting full-time opportunity to work in a fast-paced environment with a team of passionate technologists. We take an innovative approach to supporting our client, working side-by-side in an agile environment using emerging technologies. As a solution builder, you will be working to support the client’s mission and goals of building an enterprise analytics platform. Due to the nature of the government contracts, this position requires US Citizenship. Your future duties and responsibilities Demonstrate in-depth technical capabilities with the ability to support multiple work streams and drive assimilation of new techniques and solutions. Develop robust data platforms for enterprise analytics business intelligence solutions. Evaluate data quality using SQL and data analysis techniques that improve client-reporting capabilities. Follow technology trends within the Big Data road map and inform clients how this technology will benefit the future development platform. Participate in team problem solving efforts and offer ideas to solve client issues. Understand data needs and construct data pipelines for automating and accelerating data preparation. Required Qualifications To Be Successful In This Role An interim Secret clearance is required to begin working onsite with our client, and a Secret clearance must be maintained throughout the project duration. Due to the nature of the government contract requirements and/or clearance requirements, US citizenship is required. Basic Qualifications: Bachelor’s degree or Master’s degree in Computer Science, Mathematics or STEM related discipline. 1+ Years of Experience developing data solutions using relational database management systems such as Oracle, SQL Server, Redshift, SAP HANA, etc. 
1+ Years of Experience using Python, PowerShell, Perl or other scripting languages to extract and manipulate data. 1+ Years of Experience in creating complex SQL queries, stored procedures, functions, data structures and strong analytical problem-solving skills. Experience working with messy data, building data pipelines and automation activities. Experience working in an Agile based environment. Strong technical and troubleshooting techniques. “CGI is required by law in some jurisdictions to include a reasonable estimate of the compensation range for this role. The determination of this range includes various factors not limited to skill set level, experience and training, and licensure and certifications. To support the ability to reward for merit-based performance, CGI typically does not hire individuals at or near the top of the range for their role. Compensation decisions are dependent on the facts and circumstances of each case. A reasonable estimate of the current range is $65,000-95,000.” As a Federal Contractor, all members of CGI Federal, regardless of role or work location are required to be fully vaccinated, with the exception of those with approved medical or religious accommodations. #CGIFederalJob #Dice #IAF Insights you can act on While technology is at the heart of our clients’ digital transformation, we understand that people are at the heart of business success. When you join CGI, you become a trusted advisor, collaborating with colleagues and clients to bring forward actionable insights that deliver meaningful and sustainable outcomes. We call our employees “members” because they are CGI shareholders and owners who enjoy working and growing together to build a company we are proud of. This has been our Dream since 1976, and it has brought us to where we are today — one of the world’s largest independent providers of IT and business consulting services. At CGI, we recognize the richness that diversity brings. 
We strive to create a work culture where all belong and collaborate with clients in building more inclusive communities. As an equal-opportunity employer, we want to empower all our members to succeed and grow. If you require an accommodation at any point during the recruitment process, please let us know. We will be happy to assist. Ready to become part of our success story? Join CGI — where your ideas and actions make a difference. Qualified applicants will receive consideration for employment without regard to their race, ethnicity, ancestry, color, sex, religion, creed, age, national origin, citizenship status, disability, pregnancy, medical condition, military and veteran status, marital status, sexual orientation or perceived sexual orientation, gender, gender identity, and gender expression, familial status, political affiliation, genetic information, or any other legally protected status or characteristics. CGI provides reasonable accommodations to qualified individuals with disabilities. If you need an accommodation to apply for a job in the U.S., please email the CGI U.S. Employment Compliance mailbox at US_Employment_Compliance@cgi.com. You will need to reference the requisition number of the position in which you are interested. Your message will be routed to the appropriate recruiter who will assist you. Please note, this email address is only to be used for those individuals who need an accommodation to apply for a job. Emails for any other reason or those that do not include a requisition number will not be returned. We make it easy to translate military experience and skills! Click here to be directed to our site that is dedicated to veterans and transitioning service members. All CGI offers of employment in the U.S. are contingent upon the ability to successfully complete a background investigation. Background investigation components can vary dependent upon specific assignment and/or level of US government security clearance held. 
CGI will consider for employment qualified applicants with arrests and conviction records in accordance with all local regulations and ordinances. CGI will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay or the pay of another employee or applicant. However, employees who have access to the compensation information of other employees or applicants as a part of their essential job functions cannot disclose the pay of other employees or applicants to individuals who do not otherwise have access to compensation information, unless the disclosure is (a) in response to a formal complaint or charge, (b) in furtherance of an investigation, proceeding, hearing, or action, including an investigation conducted by the employer, or (c) consistent with CGI’s legal duty to furnish information. "," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-cvs-health-3522843007?refId=H7H2G9jEq0FASwz4CPDGEQ%3D%3D&trackingId=JU0DD4UOODSE4sVZLIapWA%3D%3D&position=15&pageNum=14&trk=public_jobs_jserp-result_search-card," CVS Health ",https://www.linkedin.com/company/cvs-health?trk=public_jobs_topcard-org-name," Hartford, CT "," 21 hours ago "," 157 applicants ","Job Description The Data Engineer will support the technical analysis and translation of business needs to create high-quality Business Requirements, Functional Specifications and Source to Target Mapping (STTM) for various projects and initiatives, solving complex problems, including working with other systems across various business units Daily responsibilities: Assists in the development of large-scale data structures and pipelines to organize, collect and standardize data that helps generate insights and addresses reporting needs Applies understanding of key business drivers to accomplish own 
work Uses expertise, judgment and precedents to contribute to the resolution of moderately complex problems Leads portions of initiatives of limited scope, with guidance and direction Writes ETL (Extract / Transform / Load) specifications, designs database systems and develops tools for real-time and offline analytic processing Collaborates with client team to transform data and integrate algorithms and models into automated processes Uses knowledge in Hadoop architecture, HDFS commands and experience designing & optimizing queries to build data pipelines Uses programming skills in Python, Java or any of the major languages to build robust data pipelines and dynamic systems Builds data marts and data models to support clients and other internal customers Integrates data from a variety of sources, assuring that they adhere to data quality and accessibility standards Elicits business requirements to create technical specifications and documents. Pay Range The typical pay range for this role is: Minimum: $ 70,000 Maximum: $ 140,000 Please keep in mind that this range represents the pay range for all positions in the job grade within which this position falls. The actual salary offer will take into account a wide range of factors, including location. 
Required Qualifications 2+ years of progressively complex related experience SQL expertise Ability to understand complex systems and solve challenging analytical problems Strong problem-solving skills and critical thinking ability Preferred Qualifications Ability to leverage multiple tools and programming languages to analyze and manipulate data sets from disparate data sources Strong collaboration and communication skills within and across teams Knowledge in Java, Python, Hive, Cassandra, Pig, MySQL or NoSQL or similar Knowledge in Hadoop architecture, HDFS commands and experience designing & optimizing queries against data in the HDFS environment Experience building data transformation and processing solutions Has strong knowledge of large-scale search applications and building high volume data pipelines Education Bachelor's Degree in Computer Science, Engineering, Machine Learning, or related discipline or equivalent work experience . Business Overview Bring your heart to CVS Health Every one of us at CVS Health shares a single, clear purpose: Bringing our heart to every moment of your health. This purpose guides our commitment to deliver enhanced human-centric health care for a rapidly changing world. Anchored in our brand — with heart at its center — our purpose sends a personal message that how we deliver our services is just as important as what we deliver. Our Heart At Work Behaviors™ support this purpose. We want everyone who works at CVS Health to feel empowered by the role they play in transforming our culture and accelerating our ability to innovate and deliver solutions to make health care more personal, convenient and affordable. We strive to promote and sustain a culture of diversity, inclusion and belonging every day. CVS Health is an affirmative action employer, and is an equal opportunity employer, as are the physician-owned businesses for which CVS Health provides management services. 
We do not discriminate in recruiting, hiring, promotion, or any other personnel action based on race, ethnicity, color, national origin, sex/gender, sexual orientation, gender identity or expression, religion, age, disability, protected veteran status, or any other characteristic protected by applicable federal, state, or local law. We proudly support and encourage people with military experience (active, veterans, reservists and National Guard) as well as military spouses to apply for CVS Health job opportunities."," Entry level "," Full-time "," Information Technology "," Wellness and Fitness Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-farmer-s-fridge-3495978356?refId=H7H2G9jEq0FASwz4CPDGEQ%3D%3D&trackingId=9LPPALTO5whR9PVWMIKalA%3D%3D&position=16&pageNum=14&trk=public_jobs_jserp-result_search-card," Farmer's Fridge ",https://www.linkedin.com/company/farmer%27s-fridge?trk=public_jobs_topcard-org-name," Chicago, IL "," 3 weeks ago "," 180 applicants ","Farmer’s Fridge is on a mission to make it simple for everyone to eat well. We serve healthy, handcrafted meals and snacks from our growing network of 500+ Smart Fridges (software-enabled vending machines) in 20+ Markets. We are striving to change the food system from the ground up – one Fridge at a time. We are a team that cares -– about the business, the impact our product makes, and each other. We are data-driven, innovative, and quick to move on a good idea. We are looking for people who want to collaborate in an entrepreneurial, inclusive culture and have a passion to succeed. About this Role: As a Data Engineer within Farmer’s Fridge’s data engineering team, you will get to use cutting-edge tools to build out a scalable foundational data platform that will serve the diverse reporting, analytics, and decision-making needs of the company. 
You will be working on a cloud-native, real-time, and event-driven architecture with a strong focus on decoupled storage and computing. The foundation that you build will be instrumental in solving our most pressing needs as a business. What You’ll Do: Code, architect, and develop the foundational data platform according to best practices to ensure a scalable and resilient real-time data platform that caters to multiple personas Work together with business partners to understand how they use data and what they need from data to ensure that what we are building is closely tied to tangible business objectives Work on data quality, data operations, and resolving tech debt to continuously improve the platform Keep up to date on the latest data engineering best practices for personal growth Collaborate cross-functionally to identify issues and potential improvements to operational workflows What are we looking for in a Data Engineer? 3+ years of AWS experience with services like DynamoDB, Aurora, Kinesis, Lambda, S3. With similar programming experience with Python, Scala, or Java. 2+ years of experience with SQL 1+ years of experience with Snowflake and dbt Experience with data orchestration tools like Dagster or Airflow Farmer’s Fridge Diversity Statement: Don’t meet every single requirement? Studies have shown that women and people of color are less likely to apply for jobs unless they meet every single qualification. At Farmer’s Fridge, we are dedicated to building a diverse, inclusive, and authentic workplace, so if you’re excited about this role but your past experience doesn’t align perfectly with every qualification in the job description, we encourage you to apply anyways. You may be just the right candidate for this or other roles. Benefits at Farmer's Fridge: In This Together - We stay connected, whether in person or virtually. We encourage transparency through monthly town hall meetings and weekly financial updates. 
Participation ranges from sampling and providing feedback on the new menu items we’re coming up with in our test kitchen to contributing meaningfully to our DEIB committee. We enjoy cross-functional lunch & learns, social hours, and game nights but also respect that you have a life outside of work. Happier Weekdays - Each day at work should fill you with joy. We're a fun and passionate group, and we don't take ourselves too seriously. Bring your unique self to work, dress comfortably, and always feel free to share your thoughts and opinions. We encourage curiosity; there's no hierarchy here when we're all swapping ideas. Never run on empty - Daily Farmer's Fridge meal, Thursday charcuterie, draft cold brew and beer, office snacks, and Friday happy hours are just some of the offerings to make sure you aren't distracted by a growling stomach. Innovate & Elevate - We're all teachers and learners. You'll grow, and help grow the company through cross-functional collaboration, open access to leadership, and regular business updates. You have a direct impact on the company’s bottom line. You can also impact your personal bottom line by participating in our 401(k) plan that includes a company match with immediate vesting."," Associate "," Full-time "," Information Technology and Engineering "," Food and Beverage Services " Data Engineer,United States,Data Engineer (Multiple openings at a variety of levels),https://www.linkedin.com/jobs/view/data-engineer-multiple-openings-at-a-variety-of-levels-at-red-arch-solutions-3481699425?refId=H7H2G9jEq0FASwz4CPDGEQ%3D%3D&trackingId=l6JiBAXidjYEDG1YQAmX5g%3D%3D&position=17&pageNum=14&trk=public_jobs_jserp-result_search-card," Red Arch Solutions ",https://www.linkedin.com/company/red-arch-solutions?trk=public_jobs_topcard-org-name," Chantilly, VA "," 4 weeks ago "," Be among the first 25 applicants ","***Active TS/SCI with CI Polygraph Required*** Red Arch Solutions is a leading U.S. 
small business providing its customers with state-of-the-art tactical and strategic intelligence, systems, and software engineering solutions, solving some of the most pressing and unique intelligence community challenges related to national security. Our employees are exceptionally skilled professionals. We recruit individuals who are dedicated to collecting, analyzing, and disseminating critical information to national leaders. Our engineers design, develop, and deploy mission critical solutions to support the war fighters. Program Mission… The CDF program is an evolution for the way DoD programs, services, and combat support agencies access data by providing data consumers (e.g., systems, app developers, etc.) with a “one-stop shop” for obtaining ISR data. The CDF significantly increases the DI2E’s ability to meet the ISR needs of joint and combined task force commanders by providing enterprise data at scale. The CDF serves as the scalable, modular, open architecture that enables interoperability for the collection, processing, exploitation, dissemination, and archiving of all forms and formats of intelligence data. Through the CDF, programs can easily share data and access new sources using their existing architecture. The CDF is a network and end-user agnostic capability that enables enterprise intelligence data sharing from sensor tasking to product dissemination. Responsibilities... Primary responsibility is to work with data providers within the IC and DoD Enterprise to identify and ingest data sets into the CDF data broker. In this role you will: Develop, optimize, and maintain data ingest flows using Apache Nifi and Python. Develop within the components in the cloud platform, such as Apache Kafka, NiFi, and HBase. Communicate with data owners to set up and ensure CDF streaming and batching components are working (including configuration parameters). 
Document SOP related to streaming configuration, batch configuration or API management depending on role requirement. Document details of each data ingest activity to ensure they can be understood by the rest of the team What we’d like to see… A minimum of 3 years of experience with programming and software development including analysis, design, development, implementation, testing, maintenance, quality assurance, troubleshooting and/or upgrading of software systems DoD 8570 IAT Level II Certification (e.g. Security+) Demonstrable CentOS command line knowledge Working knowledge of web services environments, languages, and formats such as RESTful APIs, SOAP, FTP/SFTP, HTML, JavaScript, XML, and JSON Understanding of foundational ETL concepts Experience implementing data integrations within the IC DoD Enterprise. Desired Skills: Experience or expertise using, managing, and/or testing API Gateway tools and Rest APIs (desired) 2+ years of experience in Python development Experience or expertise configuring an LDAP client to connect to IPA (desired) Advanced organizational skills with the ability to handle multiple assignments Strong written and oral communication skills Years of Experience: Junior Level (0-4 years), Mid Level (5-8 years), Senior Level (9+) Education: Bachelor's degree in systems engineering, computer engineering, or a related technical field (preferred) Location: Chantilly, VA Clearance: Active TS/SCI w/ ability to obtain CI Poly "," Entry level "," Full-time "," Information Technology "," Information Technology & Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-whiztek-corp-3483750379?refId=H7H2G9jEq0FASwz4CPDGEQ%3D%3D&trackingId=d0phaME83nxTAxjs6KJRFg%3D%3D&position=18&pageNum=14&trk=public_jobs_jserp-result_search-card," WHIZTEK Corp ",https://www.linkedin.com/company/whiztek?trk=public_jobs_topcard-org-name," Chicago, IL "," 4 weeks ago "," 198 applicants ","Job title : Data Engineer Location : 
Chicago IL (Onsite) Key skills : PySpark, SQL, ETL, Azure Data Factory, Azure Databricks. End to end design and solution experience. Role Description The Data Engineer will be responsible for delivering high quality modern data solutions through collaboration with our engineering, analysts, and product teams in a fast-paced, agile environment leveraging cutting-edge technology to reimagine how Healthcare is provided. They will be instrumental in designing, integrating, and implementing solutions as well as supporting migrations of existing workloads to Azure cloud. The Data Engineer is expected to have extensive knowledge of modern programming languages, designing and developing data solutions. Core Responsibilities Develop and automate solutions to consume data from multiple data sources, including external API Program and modify code in languages like SQL, JavaScript, JSON, and Python to support and implement Data Warehouse solutions Design and deploy enterprise-scale cloud infrastructure solutions Research, analyze, recommend and select technical approaches for solving difficult and meaningful development and integration problems Work closely with the Data and Engineering teams to design best in class Azure implementations Participate in efforts to develop and execute testing, training, and documentation across applications Design, develop and deliver customized ETL and Database solutions Other duties, as assigned Required skills : Relevant working experience with Azure Familiarity with provisioning, configuring, and developing solutions in Azure Data Lake, Azure Data Factory, Azure SQL Data Warehouse, Azure Synapse and Azure Cosmos DB Hands-on experience with cloud orchestration and automation tools, CI/CD pipeline creation Hands-on experience working with PaaS/ IaaS/ SaaS products and solutions Understanding of Distributed Data Processing of big data batch or streaming pipelines Experience in DevOps, Python or Java or JSON, (HL7/ FHIR is a plus) A desire to work 
within a fast-paced, collaborative, and team-based support environment Willingness to identify and implement process improvements, and best practices as well as ability to take ownership Familiarity with healthcare data and healthcare insurance feeds is a plus Excellent oral and written communication skills"," Mid-Senior level "," Contract "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,"Data Engineer (Richardson, TX)",https://www.linkedin.com/jobs/view/data-engineer-richardson-tx-at-infosys-3494513314?refId=H7H2G9jEq0FASwz4CPDGEQ%3D%3D&trackingId=ByOoVb%2BCINdUacUb3DOt%2BA%3D%3D&position=19&pageNum=14&trk=public_jobs_jserp-result_search-card," Infosys ",https://in.linkedin.com/company/infosys?trk=public_jobs_topcard-org-name," Richardson, TX "," 1 day ago "," Over 200 applicants ","Infosys is looking for Data Engineers who must be Polyglots with expertise in multiple technologies and can work as a full-stack developer in complex engineering projects. Required Qualifications: Bachelor’s degree or foreign equivalent required from an accredited institution. Minimum 5 years of IT experience US Citizens and those authorized to work in the U.S. are encouraged to apply. We are unable to sponsor at this time. Preferred Qualifications: Experience in end-to-end implementation of projects using Cloudera Hadoop, Spark, Hive, HBase, Sqoop, Kafka Strong programming knowledge in Scala or Python for Spark application development Strong knowledge and hands-on experience in SQL, Unix shell scripting Experience in data warehousing technologies, ETL/ELT implementations Sound Knowledge of Software engineering design patterns and practices Strong understanding of Functional programming. Experience with Ranger, Atlas, Tez, Hive LLAP, Neo4J, NiFi, Airflow, or any DAG based tools Good hands on in RESTful APIs Good Hands-on experience on SQL Development. 
Knowledge and experience with Cloud and containerization technologies: Azure, Kubernetes, OpenShift and Dockers Experience with data visualization tools like Tableau, Kibana, etc Experience with design and implementation of ETL/ELT framework for complex warehouses/marts Knowledge of large data sets and experience with performance tuning and troubleshooting Planning and Co-ordination skills Good Communication and Analytical skills Experience and desire to work in a Global delivery environment. Ability to work in team in diverse/ multiple stakeholder environment. Work Location: Richardson, TX About Us Infosys is a global leader in next-generation digital services and consulting. We enable clients in more than 50 countries to navigate their digital transformation. With over four decades of experience in managing the systems and workings of global enterprises, we expertly steer our clients through their digital journey. We do it by enabling the enterprise with an AI-powered core that helps prioritize the execution of change. We also empower the business with agile digital at scale to deliver unprecedented levels of performance and customer delight. Our always-on learning agenda drives their continuous improvement through building and transferring digital skills, expertise, and ideas from our innovation ecosystem. 
Infosys is an equal opportunity employer and all qualified applicants will receive consideration without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, spouse of protected veteran, or disability."," Mid-Senior level "," Full-time "," Consulting, Engineering, and Information Technology "," Information Services " Data Engineer,United States,Python Data Engineer,https://www.linkedin.com/jobs/view/python-data-engineer-at-iquest-solutions-corporation-3527792804?refId=H7H2G9jEq0FASwz4CPDGEQ%3D%3D&trackingId=1Ca9olTQ7mC0L%2F7Uqtn3UQ%3D%3D&position=20&pageNum=14&trk=public_jobs_jserp-result_search-card," IQuest Solutions Corporation ",https://www.linkedin.com/company/iquest-solutions-corporation?trk=public_jobs_topcard-org-name," Irving, TX "," 3 weeks ago "," Be among the first 25 applicants "," ResponsibilitiesDesign, develop, enhance, code, test, deliver and debug software.Develop software products for larger, more complex stories spanning multiple technology domains.Facilitate and lead story breakup and grooming.Drive feature-level architecture/design sessions.Participate in product-level architecture/design sessions.Recommend actions to improve procedures and standards.Identify and communicate technical trends and/or emerging technology. QualificationsBachelor's degree in Computer Science, Math, or Engineering preferred.5 or more years of professional software development experience is required.Experience with Python, C#, and any RDBMS development is required. Experience with Spark (PySpark, SparklyR or SparkR) on any cloud or on-prem platform is highly desired. Knowledge of Agile software development desired, including CI/CD, TDD, Pair Programming, and IaC. Experience in bash or any other scripting language will be preferred. Knowledge in areas of web servers, load balancers, caching, network virtualization, and containers will be preferred. 
"," Entry level "," Full-time "," Other "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer (Remote),https://www.linkedin.com/jobs/view/data-engineer-remote-at-tripoint-solutions-3510763669?refId=H7H2G9jEq0FASwz4CPDGEQ%3D%3D&trackingId=P5ldURrwwaxIU6ZKgCBpbA%3D%3D&position=21&pageNum=14&trk=public_jobs_jserp-result_search-card," Tripoint Solutions ",https://www.linkedin.com/company/tripoint-solutions?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," 141 applicants ","Tripoint Solutions is seeking a Data Engineer to join our team. The Data Engineer will be part of a team responsible for ensuring the success of a highly visible, results-driven federal client through the development of a cloud-based next generation system. This position requires the applicant to parse disparate data sources, including structured and unstructured elements, to find the patterns and meaning in large quantities of data. The successful candidate will leverage machine learning as well as best of breed pipeline technology to process and store a variety of data elements. Location: This position is eligible for fully remote work. Selected candidates living within a 25 miles radius of the NITAAC office in Rockville, MD will be required to come into the office once a week. The selected candidate must be currently located in, or willing to relocate to, a state supported by Tripoint Solutions corporate offices (AL, DC, FL, IL, LA, MD, MI, MN, MS, NJ, NC, PA, TN, TX, or VA). The successful candidate will be accountable to: Creating and maintaining optimal data pipeline architecture. Assembling large, complex data sets that meet functional / non-functional business requirements. Identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. 
Building the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies. Building analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics. Keeping data separated and secure across national boundaries through multiple data centers and AWS regions. Strong interest to learn and stay up to date on relevant technologies, trends, industry standards and identify new ones to implement. Experience, Education & Training: Bachelor's degree in computer science, Math, Analytics, Statistics, Informatics, Information Technology or equivalent quantitative field. 5 years of experience working in a Data Engineer or Data Scientist role. Experience with cloud data services (AWS preferred). Experience solutioning and applying Natural Language Processing (NLP) and or Machine Learning (ML) technologies Experience building and optimizing ‘big data’ data pipelines, architectures and data sets. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement. Experience with Microsoft SQL, database development and design. Experience building processes supporting data integration, transformation, data structures, metadata, dependency and workload management. Demonstrated success in manipulating, processing and extracting value from large disconnected data sets. Demonstrated accomplishments in designing, coding, testing and supporting data analytics and reporting solutions in a cloud environment. 
Experience with object-oriented/object function scripting languages: Python, Java. Concept experience: information retrieval, search engines, document data extraction. Preferred experience with AWS cloud services: Textract, Comprehend, GlueMaker, Athena, Notebook. Working knowledge of message queueing, stream processing, and highly scalable ‘big data’ data stores. Clearance Requirements: Applicants selected may be subject to a government security investigation and must meet eligibility requirements for potential access to classified information. Accordingly, US Citizenship or Green Card is required. About Tripoint Solutions We are technology innovators, partnered with state-of-the-art providers, such as AWS, ServiceNow, and UiPath, to drive digital transformation in the federal space. TPS teams are bringing automation and data science into areas of the government that are crying out for fresh tech—making positive impacts felt by tens of thousands of users, countless citizens, and all six branches of the military each day. Our Agile teams are responsible for envisioning, launching, and operating the massive data systems and analytics platforms used to manage $14.5B in government procurements and $200B in military real estate assets globally. At TPS, we apply the power of cloud technologies to help the government think smarter and function better—for everyone. TPS Company Values We value and respect each employee's dedicated work and unique contributions, as they directly impact who we are and what we do. Your talent and innovative thinking bring leading-edge solutions to our customers. Our success is driven by the dedication of our employees. Employee-generated solutions have sustained our continued success and customer satisfaction. Benefit Offerings Tripoint Solutions builds flexibility into health benefit plan choices, covers most of the monthly premiums, and helps employees build a career with impact through our generous professional development program. 
We offer all full-time employees: Medical, Dental, Vision benefits with a national provider network (company pays 100% of Vision and Dental premiums) Flexible Spending and Health Savings Accounts (FSA & HSA) Company-paid Life and Disability insurance including Short-Term, Long-Term, and Accidental Paid-time off (PTO), accruing with each year of service, up to 20 days, plus 11 paid holidays 401(k) Retirement Plan - No waiting period to contribute and company makes 3% contribution of eligible pay in addition to annual profit-sharing contribution option Eligibility to receive impact bonuses each quarter Referral Program Professional Development Reimbursement Program to pursue undergraduate, graduate, training, and certifications Monthly transportation, parking, and cell phone service reimbursement COVID-19 Related Information Tripoint Solutions does not have a vaccination mandate applicable to all employees. However, to protect the health and safety of its employees and to comply with customer requirements, Tripoint Solutions may require employees in certain positions to be fully vaccinated against COVID-19. Vaccination requirements will depend on the status of the federal contractor mandate and customer site requirements. Furthermore, remote work arrangements are subject to change based on customer site requirements. 
Tripoint Solutions is an Equal Opportunity Employer/Veterans/Disabled"," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer - CDH - Remote,https://www.linkedin.com/jobs/view/data-engineer-cdh-remote-at-mayo-clinic-3482049243?refId=H7H2G9jEq0FASwz4CPDGEQ%3D%3D&trackingId=YvkTT6jM%2F5i5BUroVRWr2Q%3D%3D&position=22&pageNum=14&trk=public_jobs_jserp-result_search-card," Mayo Clinic ",https://www.linkedin.com/company/mayo-clinic?trk=public_jobs_topcard-org-name," Rochester, MN "," 1 month ago "," 48 applicants ","Responsibilities Mayo Clinic is seeking a motivated Data Engineer to be responsible for creating and maintaining the analytical infrastructure that enables most functions in the data world. You will be responsible for the development, testing, and maintenance of architectures for large-scale medical-related databases in GCP using BigQuery and other state-of-the-art systems. You will also be responsible for creating data set processes for verification, acquisition, mining and modeling of clinical data through microservices and APIs. Create and maintain optimal data pipeline architecture using state-of-the-art systems that access data via APIs. Ability to build and optimize data sets, 'big data' data pipelines and architectures. Ability to perform root cause analysis on external and internal processes and data to identify opportunities for improvement and answer questions. Excellent analytic skills associated with working on unstructured datasets. Ability to build processes that support data transformation, workload management, data structures, dependency and metadata in GCP. Develop and test large, complex data sets that meet functional / non-functional business requirements. Implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability. 
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data source formats using on-premises and cloud technology. Integrate object storage features within cloud storage to create data-at-rest-driven compute models. Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics. Work with stakeholders including the Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs. Keep Mayo’s data separate and secure across national boundaries through multiple data centers and cloud regions. Work with data and analytics experts to strive for greater functionality in our data systems. Continues to build knowledge of the organization, processes and customers. Performs a range of mainly straightforward assignments. Uses prescribed guidelines or policies to analyze and resolve problems. Receives a moderate level of guidance and direction. This vacancy is not eligible for sponsorship; we will not sponsor or transfer visas for this position. Why Mayo Clinic? Mayo Clinic is the nation's best hospital (U.S. News & World Report, 2022-2023) and ranked #1 in more specialties than any other care provider. We have a vast array of opportunities ranging from Nursing, Clinical, to Finance, IT, Administrative, Research and Support Services to name a few. Across all locations, you’ll find career opportunities that support diversity, equity and inclusion. At Mayo Clinic, we invest in you with opportunities for growth and development and our benefits and compensation package are highly competitive. We invite you to be a part of our team where you’ll discover a culture of teamwork, professionalism, mutual respect, and most importantly, a life-changing career! Mayo Clinic offers a variety of employee benefits. 
For additional information please visit Mayo Clinic Benefits. Eligibility may vary. Qualifications Bachelor's degree in Computer Science or Engineering from an accredited University or College; OR an Associate’s degree in Computer Science or Engineering from an accredited University or College with 2 years of experience. Additional Qualifications Have working knowledge and experience in Data Engineering with a minimum of 2 years of experience in data engineering and data science or analytical modeling. Experience using scripting languages (Python, JavaScript). A minimum of 2 years of experience leveraging micro-services using a high-level language (C#, C++, Java) for data access and analytics. Strong interpersonal and time management skills and demonstrated experience working on cross-functional teams. A minimum of 1 year of SQL or NoSQL experience. Experience working in an agile development environment leveraging tools such as Jira. Experience with scrum, coding from user stories, and performing retros. Preferred qualifications for this position include: Experience using advanced data processing solutions/capabilities such as Apache Spark, Hive, Pig and Kafka. Experience using big data, statistics and knowledge of data related aspects of machine learning. Experience working with Linux or other Unix based operating systems. Knowledge of how workflow scheduling solutions such as Apache Airflow and Google Composer relate to data systems. Knowledge of using Infrastructure as code (Kubernetes, Docker) in a cloud environment. Experience in practicing CI/CD (Jenkins, GitHub Actions, ADO) Experience with cloud platforms such as GCP, Azure, AWS Exemption Status Exempt Compensation Detail $95,492.80 - $133,681.60/year Benefit Eligible Yes Schedule Full Time Hours/Pay Period 80 Schedule Details Monday - Friday 8-5 CST Weekend Schedule As business needs dictate. 
Remote Worker Yes Site Description Mayo Clinic is located in the heart of downtown Rochester, Minnesota, a vibrant, friendly city that provides a highly livable environment for more than 34,000 Mayo staff and students. The city is consistently ranked among the best places to live in the United States because of its affordable cost of living, healthy lifestyle, excellent school systems and exceptionally high quality of life. Department Center for Digital Health International Assignment No Country United States Job Posting Category Business Recruiter Laura Percival Equal Opportunity Employer As an Affirmative Action and Equal Opportunity Employer Mayo Clinic is committed to creating an inclusive environment that values the diversity of its employees and does not discriminate against any employee or candidate. Women, minorities, veterans, people from the LGBTQ communities and people with disabilities are strongly encouraged to apply to join our teams. Reasonable accommodations to access job openings or to apply for a job are available."," Entry level "," Full-time "," Information Technology "," Hospitals and Health Care " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-cbts-3488684632?refId=H7H2G9jEq0FASwz4CPDGEQ%3D%3D&trackingId=o%2Flf6u5%2FKO%2Be3t7Ub7IryQ%3D%3D&position=23&pageNum=14&trk=public_jobs_jserp-result_search-card," CBTS ",https://www.linkedin.com/company/cbts-technology-covered?trk=public_jobs_topcard-org-name," Cincinnati, OH "," 3 weeks ago "," Over 200 applicants ","CBTS is currently seeking a Data Engineer for a position located in Cincinnati, OH. This position will play a key role in executing the company’s data strategies by supporting existing ETL processes and creating new ETL processes. Ensure proper data workflows and architecture are being followed. 
The individual will work with the business to gather requirements, document processes, work on break / fix remediation and work to provide necessary data to operations for monitoring, data management and process workflow. This is a hybrid position located in Cincinnati, OH. Responsibilities: Identify and create strategies to address data quality concerns and enforce standards. Implement data management repositories based on multiple internal and external data sources Manage logical and physical data models and maintain detailed design documents Troubleshoot critical ETL workflow and data centric problems and recommend solutions Work with analytics business partners to analyze business needs, data sources and develop technical data pipeline solutions to ingest raw data into data warehouse Write new or modify existing code and conduct complete end to end unit tests on data and data pipelines Collaborate with the business analytics team to analyze, resolve, and put in place measures to maintain data accuracy and integrity in support of strategic analytics applications Obtain and ingest raw data from a variety of sources and methodologies leveraging appropriate coding languages Write transformation logic on raw data and subsequently create semantic layers to publish data in a form suitable for consumption by business users and BI visualization tools Assist in API development to enable business and/or other system’s consumption of published data Create and maintain pipeline process documentation and recovery procedures on how to resume failed pipeline processing Consult with and assist other programmers to analyze, schedule and implement new or modified workflows May mentor junior engineers in proper coding techniques and practices Experience: · Ability to write code using common scripting languages such as SQL, Python, and/or other scripting languages 2-5 years of hands-on experience implementing, maintaining, and supporting data management solutions including 
program/project delivery management Experience with ETL tools and techniques and/or pipeline building Proficient with data discovery, data analysis and data virtualization techniques Full understanding of database concepts, data typing and database cardinality principles Experience in a data analytics environment working with multiple tables and big-data concepts Expertise with SQL, Snowflake, SQL Server or MySQL including data troubleshooting, building complex queries, table joins and views Experience building applications or pipelines with serverless technologies like Azure Data Factory, Databricks or AWS serverless platform a plus Solid MS Excel skills including ability to use advanced formulas, nested if statements, pivot tables and conduct full data analysis of pipelined data Experience with BI application development a plus Proficient with process workflow mapping and business process improvement QUALIFIED CANDIDATES CAN EMAIL THEIR RESUMES TO todd.marinelli@cbts.com. PLEASE INCLUDE “DATA ENGINEER” IN THE SUBJECT OF YOUR EMAIL. 
Cincinnati Bell Technology Solutions provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, disability, genetic information, marital status, amnesty, or status as a protected veteran in accordance with applicable federal, state and local laws."," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Big Data Engineer - Remote,https://www.linkedin.com/jobs/view/big-data-engineer-remote-at-tekintegral-3528108688?refId=H7H2G9jEq0FASwz4CPDGEQ%3D%3D&trackingId=0g%2Fc53E%2BKtWJLwArdOsnnw%3D%3D&position=24&pageNum=14&trk=public_jobs_jserp-result_search-card," TekIntegral ",https://www.linkedin.com/company/tekintegral?trk=public_jobs_topcard-org-name," United States "," 3 weeks ago "," Be among the first 25 applicants ","Data Engineer Location : Remote EST Hours (candidates from EST/CST only will be considered) Duration : 6 months plus Only GC or US Citizens needed for this role Underlined skills are a must have. Should have excellent communication skills. Self-driven with a desire to find solutions and make an impact. You look to dig deeper beyond the surface level of a job profile researching market trends, and domain technology, and seek out information to have a solid understanding of the business. You ask for feedback and want to grow and develop in your career. 
Responsibilities Build data integration (ETL) pipelines using SQL, EMR, Python and Spark Technical Knowledge and leadership Collaborate with Software Solution team members and other staff to validate desired outcomes for code prior to, during, and post-development Skills Required 9+ years building data pipelines and implementing feeds for a data warehouse Strong technical understanding to be able to contribute in meetings to discuss best practices and/or technical solutions to business problems Able to understand requirements and business needs from client teams and stakeholders and translate those to technical requirements Strong background coding in Python3 Extensive experience processing massive datasets utilizing Spark and EMR Clusters Experience with Snowflake, AWS Deep understanding of database design and data structures. Code Repository (GitHub, Bitbucket) Linux/Shell scripting Experience with Snowflake Experience with Airflow gkumar@tekintegral.com"," Entry level "," Full-time "," Other "," Staffing and Recruiting " Data Engineer,United States,Cloud Data Engineer,https://www.linkedin.com/jobs/view/cloud-data-engineer-at-alludo-3485474163?refId=H7H2G9jEq0FASwz4CPDGEQ%3D%3D&trackingId=0%2FTZht36FVpX%2BTVKu0qs3w%3D%3D&position=25&pageNum=14&trk=public_jobs_jserp-result_search-card," Alludo ",https://ca.linkedin.com/company/alludo-group?trk=public_jobs_topcard-org-name," United States "," 2 weeks ago "," Over 200 applicants ","Cloud Data Engineer Push the boundaries of tech. In your sweatpants. Alludo is looking for an experienced Cloud Data Engineer to help us change the way the world works. Here, you’ll facilitate the advancement of our analytical capabilities. We are moving from “what was/is” to predictive modeling and business optimization. This role will be the key to establishing and developing the data science capabilities at the company. The top creative and technical minds could work anywhere. So why are so many of them choosing Alludo? 
Here are three reasons: 1. This is the moment. It’s an exciting time at Alludo, with new leadership, a refreshed brand (you probably know us as Corel!), and a whole new approach to changing the way the world works. We’re at the forefront of a movement, and we want you to ride this wave with us. 2. We want you to be you. Too often, companies tell you about their culture and then expect you to fit it. Our culture is built from the people who work here. We want you to feel safe to be who you are, take risks, and show us what you’ve got. 3. It’s your world. We know you have a life. We want to be part of it, but not all of it. At Alludo, we’re serious about empowering people to work when, how, and where they want. Couch? Sweatpants? Cool with us. We believe that happy employees mean happy customers. That’s why we hire amazing people and get out of their way. Sound good so far? Awesome. Let’s talk more about the role and see if we’re destined to be together. THE ROLE Design, develop and implement large scale, high-volume, high-performance data infrastructure and pipelines for Data Lake and Data Warehouse Build and implement ETL frameworks to improve code quality and reliability Build and enforce common design patterns to increase code maintainability Ensure accuracy and consistency of data processing, results, and reporting Design cloud-native data pipelines, automation routines, and database schemas that can be leveraged to do predictive and prescriptive machine learning. Communicate ideas clearly, both verbally and through concise documentation, to various business sponsors, business analysts and technical resources YOU 5+ years of professional experience 3+ years of experience working in data engineering, business intelligence, or a similar role 2+ years of experience in ETL orchestration and workflow management tools like Airflow, flink, etc. 
using AWS/GCP (i.e Airflow, Luigi, Prefect, Dagster, digdag.io, Google Cloud Composer, AWS Step Functions, Azure Data Factory, UC4, Control-M) 1+ years of experience with the Distributed data/similar ecosystem (Spark, Hive, Druid, Presto) and streaming technologies such as Kafka/Flink Expert knowledge of at least one programming language such as Python/Java, Python preferred Expert knowledge of SQL Familiarity with DevOps SnowFlake, Netezza, Teradata, AWS Redshift, Google BigQuery, Azure Data Warehouse, or similar Experience with cloud service providers: Microsoft Azure, Amazon AWS Expertise with containerization orchestration engines (Kubernetes) BS in Computer Science, Software Engineering, or relevant field desired US: · Alludo is an award-winning solution that has millions of users and decades of innovation under our belts. · We offer a fully remote workspace, and we mean it. There is no pressure to work in an office whatsoever. · Hours are flexible! You’ve worked hard to build your life, and we don’t want you to give it up for work. · Our team is growing fast, and there’s a ton of energy and a lot of really smart, motivated, fun people ready to welcome you in. What are you waiting for? Apply now! We can’t wait to meet you. (FYI, we’re lucky to have a lot of interest and we so appreciate your application, though please note that we’ll only contact you if you’ve been selected for an interview.) About Alludo: Alludo, the company behind the award-winning, globally recognizable brands including Parallels®, CorelDRAW®, MindManager®, and WinZip®, is helping people work better and live better. Our professional-caliber graphics, virtualization, and productivity solutions are finely tuned for the digital remote workforce delivering the freedom to work when, where, and how you want. With a 35+ year legacy of innovation, Alludo empowers ALL YOU DO helping more than 2.5 million paying customers to enable, ideate, create, and share on any device, anywhere. 
To learn more, visit www.alludo.com. It is our policy and practice to offer equal employment opportunities to all qualified applicants and employees without regard to race, color, age, religion, national origin, sex, political affiliation, sexual orientation, marital status, disability, veteran status, genetics, or any other protected characteristic. Alludo is committed to an inclusive, barrier-free recruitment and selection process and work environment. If you are contacted for a job opportunity, please advise us of any accommodations that are required. Appropriate accommodations will be provided upon request as required by Federal and Provincial regulations and Company Policy. Any information received relating to accommodations will be treated as confidential."," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer - Remote Opportunity!,https://www.linkedin.com/jobs/view/data-engineer-remote-opportunity%21-at-burtch-works-3487733882?refId=ZX8n%2BvB3sO3%2BFjKj8oFRvg%3D%3D&trackingId=Gyh5BIzlKP4QXgsLAVIFwA%3D%3D&position=1&pageNum=15&trk=public_jobs_jserp-result_search-card," Burtch Works ",https://www.linkedin.com/company/burtch-works?trk=public_jobs_topcard-org-name," United States "," 3 days ago "," Over 200 applicants ","Job Description: One of the top-rated wireless providers is looking for a Data Engineer to build data products and identify valuable data sources. 
Responsibilities: Analyze large amounts of information to discover trends and patterns Propose solutions and strategies to business challenges Assist with their phase of data acquisition Qualifications: 5+ years working with Azure, Spark and Python Experience in data warehouses and data modeling Being able to call APIs Azure Synapse experience a plus Keywords: Azure, Azure Synapse, Python, Spark, SQL, Data Warehouse, Data Modeling, RestAPIs, APIs, Databricks, PowerBI, Azure Data Factory"," Mid-Senior level "," Full-time "," Engineering, Information Technology, and Other "," Computer and Network Security, Technology, Information and Internet, and Telecommunications " Data Engineer,United States,Cloud Data Engineer,https://www.linkedin.com/jobs/view/cloud-data-engineer-at-alludo-3485474163?refId=ZX8n%2BvB3sO3%2BFjKj8oFRvg%3D%3D&trackingId=xQQbZN3x%2B7DMMmEbOs0ipQ%3D%3D&position=2&pageNum=15&trk=public_jobs_jserp-result_search-card," Alludo ",https://ca.linkedin.com/company/alludo-group?trk=public_jobs_topcard-org-name," United States "," 2 weeks ago "," Over 200 applicants ","Cloud Data Engineer Push the boundaries of tech. In your sweatpants. Alludo is looking for an experienced Cloud Data Engineer to help us change the way the world works. Here, you’ll facilitate the advancement of our analytical capabilities. We are moving from “what was/is” to predictive modeling and business optimization. This role will be the key to establishing and developing the data science capabilities at the company. The top creative and technical minds could work anywhere. So why are so many of them choosing Alludo? Here are three reasons: 1. This is the moment. It’s an exciting time at Alludo, with new leadership, a refreshed brand (you probably know us as Corel!), and a whole new approach to changing the way the world works. We’re at the forefront of a movement, and we want you to ride this wave with us. 2. We want you to be you. 
Too often, companies tell you about their culture and then expect you to fit it. Our culture is built from the people who work here. We want you to feel safe to be who you are, take risks, and show us what you’ve got. 3. It’s your world. We know you have a life. We want to be part of it, but not all of it. At Alludo, we’re serious about empowering people to work when, how, and where they want. Couch? Sweatpants? Cool with us. We believe that happy employees mean happy customers. That’s why we hire amazing people and get out of their way. Sound good so far? Awesome. Let’s talk more about the role and see if we’re destined to be together. THE ROLE Design, develop and implement large scale, high-volume, high-performance data infrastructure and pipelines for Data Lake and Data Warehouse Build and implement ETL frameworks to improve code quality and reliability Build and enforce common design patterns to increase code maintainability Ensure accuracy and consistency of data processing, results, and reporting Design cloud-native data pipelines, automation routines, and database schemas that can be leveraged to do predictive and prescriptive machine learning. Communicate ideas clearly, both verbally and through concise documentation, to various business sponsors, business analysts and technical resources YOU 5+ years of professional experience 3+ years of experience working in data engineering, business intelligence, or a similar role 2+ years of experience in ETL orchestration and workflow management tools like Airflow, flink, etc. 
using AWS/GCP (i.e Airflow, Luigi, Prefect, Dagster, digdag.io, Google Cloud Composer, AWS Step Functions, Azure Data Factory, UC4, Control-M) 1+ years of experience with the Distributed data/similar ecosystem (Spark, Hive, Druid, Presto) and streaming technologies such as Kafka/Flink Expert knowledge of at least one programming language such as Python/Java, Python preferred Expert knowledge of SQL Familiarity with DevOps SnowFlake, Netezza, Teradata, AWS Redshift, Google BigQuery, Azure Data Warehouse, or similar Experience with cloud service providers: Microsoft Azure, Amazon AWS Expertise with containerization orchestration engines (Kubernetes) BS in Computer Science, Software Engineering, or relevant field desired US: · Alludo is an award-winning solution that has millions of users and decades of innovation under our belts. · We offer a fully remote workspace, and we mean it. There is no pressure to work in an office whatsoever. · Hours are flexible! You’ve worked hard to build your life, and we don’t want you to give it up for work. · Our team is growing fast, and there’s a ton of energy and a lot of really smart, motivated, fun people ready to welcome you in. What are you waiting for? Apply now! We can’t wait to meet you. (FYI, we’re lucky to have a lot of interest and we so appreciate your application, though please note that we’ll only contact you if you’ve been selected for an interview.) About Alludo: Alludo, the company behind the award-winning, globally recognizable brands including Parallels®, CorelDRAW®, MindManager®, and WinZip®, is helping people work better and live better. Our professional-caliber graphics, virtualization, and productivity solutions are finely tuned for the digital remote workforce delivering the freedom to work when, where, and how you want. With a 35+ year legacy of innovation, Alludo empowers ALL YOU DO helping more than 2.5 million paying customers to enable, ideate, create, and share on any device, anywhere. 
To learn more, visit www.alludo.com. It is our policy and practice to offer equal employment opportunities to all qualified applicants and employees without regard to race, color, age, religion, national origin, sex, political affiliation, sexual orientation, marital status, disability, veteran status, genetics, or any other protected characteristic. Alludo is committed to an inclusive, barrier-free recruitment and selection process and work environment. If you are contacted for a job opportunity, please advise us of any accommodations that are required. Appropriate accommodations will be provided upon request as required by Federal and Provincial regulations and Company Policy. Any information received relating to accommodations will be treated as confidential."," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-cryptorecruit-3520754991?refId=ZX8n%2BvB3sO3%2BFjKj8oFRvg%3D%3D&trackingId=tU98JoY0JTltKpRGNlj7IA%3D%3D&position=3&pageNum=15&trk=public_jobs_jserp-result_search-card," CryptoRecruit ",https://au.linkedin.com/company/cryptorecruit?trk=public_jobs_topcard-org-name," Boston, MA "," 3 days ago "," 34 applicants ","Company The company is looking for a Senior Data Engineer to help build out our suite of analytics products. As a member of the Data Engineering Team, you’ll work closely with the world’s leading blockchain protocols, developing real-time data pipelines to ingest their data, and identify actionable insights that will help them grow. In this role, you will also play a leading part in continuing to build out our unique blockchain parsing technology, Chainwalkers. The engineering team has extensive expertise across data pipelining, distributed databases, at-scale web applications, large-scale front-end applications and data visualisations. 
They work relentlessly towards our goals, and care a great deal about building quality products with a talented, authentic team. Responsibilities Design, build and maintain real-time data pipelines that process blockchain transactions from dozens of different blockchain networks. Develop data models that translate complex, esoteric blockchain data into standardised formats that are analytics-ready. Design automated systems that evaluate and parse the results of smart contract calls. Work alongside the Data Science team to curate and prototype new data-sets to tackle emerging problems. Lead and scope large technical projects. Productionalise time-series metrics for our partners and product teams. Develop systems to monitor the integrity and uptime of data. Requirements You possess a strong technical background that includes 5+ years of experience working in a senior engineering position with data infrastructure/distributed systems. Strong familiarity with blockchain and cryptocurrencies You have a high bar for the quality of data, the quality of code and ultimately an attention to detail. You have experience writing, maintaining and debugging ETL jobs that leverage distributed data frameworks such as Spark, Kafka and Airflow. You are comfortable with the command line, and are not afraid to get your hands dirty with infrastructure and ops when required. You have extensive experience working with Spark. You have worked with languages such as Python, Scala and Go. You have extensive experience working with data warehouses/lakes such as AWS Redshift and Delta Lake. You are capable of gluing together different services and tools, even if you haven’t previously worked with them. You have worked in an agile sprint-based manner. You are relentless when tasked with solving hairy technical challenges. 
Remuneration And Benefits Better than market rate with equity plan Make sure to follow us here to get our latest live jobs https://www.linkedin.com/company/cryptorecruit Cryptorecruit is the world’s leading specialist recruiter for the blockchain/cryptocurrency industry. We recruit positions from CEO, CTO, Project Manager, Solidity developer, and front-end and back-end blockchain developers to marketing/sales and customer service roles. Please browse our website at www.cryptorecruit.com to search all our job vacancies."," Entry level "," Full-time "," Consulting "," Human Resources Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-it-minds-llc-3528108307?refId=ZX8n%2BvB3sO3%2BFjKj8oFRvg%3D%3D&trackingId=MdK9zfwBtUUHJM3F5vQfdw%3D%3D&position=4&pageNum=15&trk=public_jobs_jserp-result_search-card," IT Minds LLC ",https://www.linkedin.com/company/itminds-llc?trk=public_jobs_topcard-org-name," Bellevue, WA "," 3 weeks ago "," Be among the first 25 applicants ","Data Engineer @ Bellevue, WA Qualifications And Skills 3-5 years of experience in large-scale software development (preferably Agile) with emphasis on data modeling and database development 3-5 years of experience with data modeling tools (Erwin, ER/Studio, PowerDesigner) 3-5 years of experience with relational DBMSs and SQL coding (SQL Server, Oracle, Teradata, Snowflake) Ability to communicate effectively (both orally and in writing) with business users, project team leaders and application developers Experience participating in Agile/Scrum projects in a highly collaborative, multi-discipline team environment Proficiency with ETL tools and techniques (SSIS, Attunity, Informatica) 2+ years of experience with AWS and related services (EC2, S3, DynamoDB, ElasticSearch, SQS, SNS, Lambda, Airflow, Snowflake, etc.)
Experience with object function/object-oriented scripting (Python, Java, C++, Scala) Experience in R Programming Thanks & Regards Krishna | IT Minds LLC | Phone:(949)534-3939 Ext 406 Direct: 949-200-7533| Email: krishna@itminds.net | : 9070 Irvine Centre DR, Suite 220 | Irvine, CA 92618 | 44075 Pipeline Plaza, Suite 305 | Ashburn, VA 20147| 102, Manjeera Trinity Corporate, Kukatpally, Hyderabad 500072| www.itminds.net"," Entry level "," Full-time "," Other "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-saic-3502031328?refId=ZX8n%2BvB3sO3%2BFjKj8oFRvg%3D%3D&trackingId=vGfDYaNGciih0EpqasE8Gg%3D%3D&position=5&pageNum=15&trk=public_jobs_jserp-result_search-card," SAIC ",https://www.linkedin.com/company/saicinc?trk=public_jobs_topcard-org-name," Tampa, FL "," 2 weeks ago "," 38 applicants ","SAIC is seeking an experienced, results-oriented, mission-driven Data Engineer. We are looking for a talented, innovative, and experienced individual to support the effort to increase the speed of data delivery in support of national security objectives. The ideal candidate will have a solid understanding of executing IT systems and supporting transformations utilizing an agile framework that promotes collaboration, transparency, and continuous improvement. This is a high-visibility program, and we are seeking a results-oriented self-starter. As a Data Engineer, you will work with a fast-paced team that acquires, controls, and catalogs large-scale, complex datasets for our customer’s mission. The position requires a highly detail-oriented individual who understands the complexities of data movement and curation. You will also be responsible for analyzing the customer’s current dataset catalog to identify areas of weakness, which will then drive additional targeted dataset acquisition and cataloging technologies as appropriate.
You will be part of a team responsible for transforming the customers’ data ecosystem with strategies that keep them mission-focused and mission-informed. The role requires a driven individual who will stay current and experiment with state-of-the-art Data Science and AI/ML technologies, along with the ability to understand and support a pathfinder mission to mature a data management policy. Initial efforts will involve pursuing authoritative data sources and setting up data pipelines into tools and a data analytics environment. RESPONSIBILITIES: Collaborate with data stewards, data custodians, and data managers on data collection methods, data management processes/capabilities, and data policy/governance implementation Establish processes and methods to acquire, store, investigate, and manage data Establish processes to improve data quality and efficiency Evaluate data management systems to improve operational procedures Organize, implement, and enforce correct data collection policies and procedures Identify opportunities to automate data lifecycle processes Gather raw data and convert to standardized formats to improve data access and analysis Troubleshoot data-related problems as appropriate to the data management lifecycle Perform data inventory and cataloging to ensure insight into data holdings, identify gaps, prioritize collection/processing, and administer data policy/governance Respond to data calls, reporting tasks, and internal/external requirements tasking Collaborate with other data engineers and data scientists on projects Qualifications - External Bachelor's and nine (9) years or more experience; Master's and seven (7) years or more experience; PhD or JD and four (4) years or more experience. Active DoD Secret clearance with the ability to attain a TS/SCI. KNOWLEDGE, SKILLS, AND ABILITIES: Ability to identify opportunities for AI integration and develop/deploy AI solutions. Experience using Advana’s toolset (Databricks, Qlik, Gamechanger, etc.)
or Advana itself is a bonus Experience with Maven Smart Systems or Palantir Foundry. Experience performing full lifecycle data management in large-scale data ecosystems in on-prem, cloud, and hybrid environments Experience implementing and establishing data ownership and data stewardship Experience in data cataloging and metadata management Ability to effectively communicate (verbal and written) with technical and non-technical personnel and all levels of management and staff Experience utilizing data analytics to enable organizational learning and data-driven decisions Experience with Agile Methodologies Experience programming in Python, SQL, C++ a"," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-ccs-global-tech-3509183513?refId=ZX8n%2BvB3sO3%2BFjKj8oFRvg%3D%3D&trackingId=vP3WO3hw%2BzZZFRmPQhbYOw%3D%3D&position=6&pageNum=15&trk=public_jobs_jserp-result_search-card," CCS Global Tech ",https://www.linkedin.com/company/california-creative-solutions-ccs-?trk=public_jobs_topcard-org-name," Florida, United States "," 1 week ago "," Be among the first 25 applicants ","REQ INFO SHEET Job Title : Data Engineer Additional Details 10 -15 years overall IT experience. Project Scope The Data Management and Integrations Division is requesting to recruit for two Data Engineer contractors on behalf of Miami Dade Corrections Department (MDCR). The Contractor expense will be paid by MDCR. Recognizing the importance of data analytics for MDCR, the Mayor’s office and MDCR have teamed up in support of a newly created Data Analytics department within MDCR, while the Mayor’s office has provided Data Analysts to augment that team. To further enhance the new department, MDCR has requested ITD to recruit and hire two Data Engineers to round out the skill sets required to support this effort. 
Responsibilities Implement data pipelines to build Azure Analysis Services reporting data models. Perform complex analyses of business data and processes. Provide analytic and strategic models to address key questions across a portfolio of businesses. Collect, organize, manipulate, and analyze a wide variety of data. Track and report on the performance of the deployed models. Assist in the development of dashboards to help executives in strategic decision making. Perform and interpret data studies and product experiments pertaining to new data sources or new uses for existing data sources. Develop prototypes, proofs of concept, measures, KPIs, derived and custom fields. Skill Set Data Management, database, database structure systems, Azure Analysis Services, KPI, data mining, data models. Minimum Qualifications Bachelor’s degree. A minimum of 8 years of experience in developing and supporting a medium-to-large organization’s database systems, including database structure systems, data management resources, data mining and data models, is required. Additional related work experience and/or certifications may substitute for the required education on a year-for-year basis.
Preferred Qualifications Experience with Corrections and inmate-related data"," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer/Analyst,https://www.linkedin.com/jobs/view/data-engineer-analyst-at-gravity-it-resources-3525949439?refId=ZX8n%2BvB3sO3%2BFjKj8oFRvg%3D%3D&trackingId=5%2FCStYwPYdSSASi34SqPMw%3D%3D&position=7&pageNum=15&trk=public_jobs_jserp-result_search-card," Gravity IT Resources ",https://www.linkedin.com/company/gravity-it-resources?trk=public_jobs_topcard-org-name," Salt Lake City Metropolitan Area "," 3 days ago "," Over 200 applicants ","Job Title: Data Engineer/Analyst Location: Salt Lake City, UT Job-Type: Contract to Hire Referral Fee: $500 Employment Eligibility: Gravity cannot transfer nor sponsor a work visa for this position. Applicants must be eligible to work in the U.S. for any employer directly (we are not open to contract or “corp to corp” agreements). Position Overview: As a Data Engineer, you will work as part of a team that is responsible for building, maintaining, and administering a global data warehouse. You will build data pipelines, automate tasks, and create processes and tools that will allow the company to gather data from hundreds of data sources in an effort to organize, use, and report on data efficiently. You will be focused on API and ETL Development using Python and will be working with AWS, Snowflake, and SQL/SQL Server daily. Duties & Responsibilities: API and ETL development using Python. Configuration changes – getting access/putting EC2 and on-prem instances together and deploying changes. Some Snowflake configuration and DB administration as needed. Meet and collaborate with vendors and other data sources to address issues, get clarification, and execute on strategies.
Data validation Required Experience & Skills: Experience with API calls and ETL Development using Python Experience using Postman (or something similar) to set up API calls Snowflake and/or SQL/SQL Server experience Experience or exposure to AWS Nice to Have Experience: Tableau experience"," Mid-Senior level "," Contract "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,DATA ENGINEER,https://www.linkedin.com/jobs/view/data-engineer-at-oloop-technology-solutions-3527795325?refId=ZX8n%2BvB3sO3%2BFjKj8oFRvg%3D%3D&trackingId=EPW03%2FuNqKX0eoYNDDkROw%3D%3D&position=8&pageNum=15&trk=public_jobs_jserp-result_search-card," Oloop Technology Solutions ",https://www.linkedin.com/company/oloop?trk=public_jobs_topcard-org-name," Hartford, CT "," 3 weeks ago "," Be among the first 25 applicants ","Hartford, CT 6+ months contract Job Summary Strong experience in Unix/Shell Scripting Strong experience in the Hadoop ecosystem (Hive, HDFS, MapReduce, etc.) Strong experience in Spark; Python/Scala is an added advantage. Strong knowledge/experience in GCP (BigQuery, GCS, Pub/Sub, FHIR, Dataproc, Dataflow, Cloud Functions, Airflow/Composer, etc.)
Experience in Tidal and the ServiceNow ticketing system Strong communication, analytical and problem-solving skills, and knowledge of the SDLC process Required Skills Technical Skills: Hive, ANSI SQL, Apache Hadoop Domain Skills: Payer Nice-to-have skills Technical Skills: Core Java, Python, MapReduce, Google Cloud - Big Data Domain Skills: Technology Data Management"," Entry level "," Full-time "," Other "," IT Services and IT Consulting " Data Engineer,United States,DATA ENGINEER,https://www.linkedin.com/jobs/view/data-engineer-at-oloop-technology-solutions-3527792699?refId=ZX8n%2BvB3sO3%2BFjKj8oFRvg%3D%3D&trackingId=0bLUCRe%2Ftsy%2B2l9nyQviGA%3D%3D&position=9&pageNum=15&trk=public_jobs_jserp-result_search-card," Oloop Technology Solutions ",https://www.linkedin.com/company/oloop?trk=public_jobs_topcard-org-name," Nashville, TN "," 3 weeks ago "," Be among the first 25 applicants "," Nashville, TN 4+ months contract Job Summary Required Skills Technical Skills: MySQL, Python, PySpark, AWS Domain Skills: Payer Nice-to-have skills Technical Skills: AWS Services Domain Roles & Responsibilities Experience Technical Skills: AWS, Python, Spark, SQL Domain Skills: Healthcare (preferred) Must: Minimum 5+ years of experience in writing Python/Spark ETL code using AWS cloud storage services like S3 and RDS. Hands-on experience in any database (preferably Teradata) with strong knowledge in writing complex SQLs Nice-to-have skills: Databricks, Jenkins or any other DevOps CI/CD pipeline, Git, any scheduling tool As a Data Engineer, you will listen to questions from the business and find an answer. You will navigate enterprise-scale data systems looking for the pieces of that answer. Parts may be stored in different tables, databases, or platforms. Once found, you will assemble those pieces in a way that allows others to act on it. The data sets you work with may be on the order of petabytes, and thus your solutions must be scalable and efficient. 
You will have access to many DevOps tools to assist your work, and your solutions must fit in with those tools and pipelines. You will also be part of a development community with the ability to influence those tools and pipelines, as well as be a part of the organization's initial steps into AI and Machine Learning. Candidates should be comfortable with complex SQL queries, and have exposure to the Spark/Python programming languages to connect to data stores on the AWS cloud and analyze/transform data. Additionally, the candidate should be able to see the big picture within the data, and be able to tell a story from seemingly disjoint pieces. The ideal candidate will constantly be on the lookout for areas in the development process to improve as well as look for new and meaningful insights in the data. You will be joining an agile team made up of senior business analysts, quality engineers, and other developers. You will work with them to deliver actionable insights to the business. "," Entry level "," Full-time "," Other "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-oloop-technology-solutions-3527795326?refId=ZX8n%2BvB3sO3%2BFjKj8oFRvg%3D%3D&trackingId=mJ4oaWW9hgFq0XA7AmOJVg%3D%3D&position=10&pageNum=15&trk=public_jobs_jserp-result_search-card," Oloop Technology Solutions ",https://www.linkedin.com/company/oloop?trk=public_jobs_topcard-org-name," Hartford, CT "," 3 weeks ago "," Be among the first 25 applicants "," Hartford, CT 5+ months contract Big data development with good work experience in Hadoop, Hive, Spark, PySpark and GCP, with good knowledge of the healthcare domain. Should have strong knowledge of building Hive pipelines and prior project experience in Agile methodology. Knowledge of tools like JIRA and GitHub is desired. Prior experience working with clients in an onshore-offshore model. 
Very good communication skills and team collaboration skills Required Skills Technical Skills: Hive, Apache Hadoop, Google Cloud Platform, PySpark Domain Skills: Nice-to-have skills Technical Skills: ANSI SQL "," Entry level "," Full-time "," Other "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-charles-schwab-3501065342?refId=ZX8n%2BvB3sO3%2BFjKj8oFRvg%3D%3D&trackingId=AGbsuayCcZ4MD4uLsGfHYg%3D%3D&position=11&pageNum=15&trk=public_jobs_jserp-result_search-card," Charles Schwab ",https://www.linkedin.com/company/charles-schwab?trk=public_jobs_topcard-org-name," Lone Tree, CO "," 3 weeks ago "," Be among the first 25 applicants ","Your Opportunity At Schwab, the Data and Rep Technology (DaRT) organization governs the strategy and implementation of the enterprise data warehouse, Data Lake, and emerging data platforms. Our mission is to drive activation of data solutions, rep engagement technology (Sales, Marketing and Service) and client intelligence to achieve targeted business outcomes, address data risk and safeguard competitive edge. We help Marketing, Finance, Risk and executive leadership make fact-based decisions by integrating and analyzing data. What you are good at As part of the Business Data Delivery team, you will partner with our Business stakeholders and Data Engineering team to design and develop data solutions for data science, analytics and reporting. We are a team of passionate data engineers and SMEs who bring a lot of energy, focus and fresh ideas that support our mission to provide value by seeing the world “Through Clients' Eyes”. ETL Developers work with large teams, including onshore and offshore developers, using best-in-class technologies including Teradata, Informatica, Hadoop and BigQuery. 
You will design, develop, and implement enterprise data integration solutions with opportunities to grow in responsibility, work on exciting and challenging projects, train on new technologies and work with other Developers to set the future of the Data Warehouse. What you have Demonstrated ability to work independently as an ETL Developer with a track record of delivering code with minimal defects. 2-4 years of hands-on experience with data integration tools such as Informatica Power Center and Talend. 2-4 years in Data Warehouse platforms such as Teradata and BigData/Hadoop. Experience in data modeling (logical and/or physical). Hands-on experience working with near real-time and/or real-time data ingestion techniques. SQL experience with the ability to develop, tune and debug complex SQL applications is required. Experience with Google Cloud Platform, BigQuery and Informatica Intelligent Cloud Services (IICS) highly desirable. Experience with scheduling tools (e.g., Control M, ESP). Ability to quickly learn & become proficient with new technologies. Strong analytical, problem-solving, influencing, prioritization, decision-making and conflict resolution skills. Exceptional interpersonal skills, including teamwork and communication. In addition to the salary range, this role is also eligible for bonus or incentive opportunities. Why work for us? Own Your Tomorrow embodies everything we do! We are committed to helping our employees ignite their potential and achieve their dreams. Our employees get to play a central role in reinventing a multi-trillion-dollar industry, creating a better, more modern way to build and manage wealth. Benefits: We offer a competitive and flexible package designed to help you make the most of your life at work and at home—today and in the future. TD Ameritrade, a subsidiary of Charles Schwab, is an Equal Opportunity Employer. 
At TD Ameritrade we believe People Matter. We value diversity and believe that it goes beyond all protected classes, thoughts, ideas, and perspectives."," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-driveniq-3504963351?refId=ZX8n%2BvB3sO3%2BFjKj8oFRvg%3D%3D&trackingId=aUnzkXIyudGjlwg9gMas1g%3D%3D&position=12&pageNum=15&trk=public_jobs_jserp-result_search-card," DrivenIQ ",https://www.linkedin.com/company/driveniq?trk=public_jobs_topcard-org-name," Towson, MD "," 2 weeks ago "," Be among the first 25 applicants ","Company Description DrivenIQ is a data intelligence disruptor! Using localized data and geo-based intel, we know who is in market, ready to buy, and we can ensure that organizations and/or agency partners align their media spend and advertising platforms with true data to drive effective ROI for the client. In this hand’s-on position, the Data Engineer will work very closely with the support and development team and apply knowledge of data engineering to design and develop enterprise analytical solutions. As a Data Engineer, you will be responsible for designing, building, and maintaining the data pipelines and architectures that support our data intelligence products and services. The individual filling this role will be expected to hit the ground running on the delivery of owning and running different data requests. In this role, you will be expected to design and develop proof-of-concept solutions in support of presales activities as well as support the development of standardized analytical offerings. This position is 100% remote. 
Job Description What you’ll do: Design and implement data pipelines to ingest, transform, and store large volumes of structured and unstructured data from various sources Own, handle, and run data requests with an understanding of data matrices Collaborate with data scientists and product owners to understand their data needs and develop solutions to support their work Build and maintain data lakes and data warehouses to support data analysis and reporting Optimize data pipelines for performance and scalability. Develop and maintain documentation for data pipelines and architectures Stay up-to-date with the latest data engineering technologies and best practices Work closely with the support team to ensure understanding of all client data Define data engineering strategies to meet the demands of business requirements Define the technical requirements of the data engineering solutions Define the data requirements of the data engineering solutions Conduct sophisticated analyses and build models, as required Translate data engineering results into clear, business-focused deliverables for decision makers Lead project plans and work with the development team to deploy models into operational systems Other job duties may be assigned by a supervisor Qualifications What you’ll need: Bachelor's or Master's degree in Computer Science, Data Science, or a related field 3+ years of experience in data engineering 3+ years of experience in Python 3+ years of experience in SQL 3+ years of experience in Athena Experience with cloud technologies such as AWS Experience with Firehose/Kinesis Experience with machine learning Experience with big data technologies such as Hadoop, Spark, or Flink Experience with data storage and processing technologies such as SQL, NoSQL, and columnar databases Strong problem-solving, communication, and presentation skills Strong business focus with the ability to excel at connecting business requirements to data engineering objectives Extensive experience 
working with IT Development teams to implement analytical application/solution development Additional Information What We Offer: Competitive Pay Holidays + Unlimited PTO. It’s all about balance and we trust you will get your work done! Medical, Dental, Vision plans available Short-Term and Long-Term Disability Plans, Company Sponsored Complimentary $25k Basic Life Insurance and AD&D Learning and growth opportunities Remote work - work comfortably from your home This job advertisement in no way states or implies that these are the only duties and responsibilities to be performed by this employee. A full job description will be provided if and when a job offer is presented. The employee will be required to follow any other instructions and to perform any other duties and responsibilities upon the request of a supervisor. DrivenIQ is an Equal Opportunity Employer. Minorities, women, veterans, and individuals with disabilities are encouraged to apply."," Mid-Senior level "," Full-time "," Information Technology "," Data Infrastructure and Analytics " Data Engineer,United States,Data Engineer - 100% Remote,https://www.linkedin.com/jobs/view/data-engineer-100%25-remote-at-radian-3510638146?refId=ZX8n%2BvB3sO3%2BFjKj8oFRvg%3D%3D&trackingId=jBvkoobddspxBLTaQhfROQ%3D%3D&position=13&pageNum=15&trk=public_jobs_jserp-result_search-card," Radian ",https://www.linkedin.com/company/radian?trk=public_jobs_topcard-org-name," Greater Philadelphia "," 1 week ago "," Over 200 applicants ","See yourself at Radian? We see you here too. At Radian, we see you. For the person you are and the potential you hold. That’s why we’ve embraced a new way of working that lets our people across the country be themselves, be their best and be their boldest. Because when each of us is truly seen, each of us gives our best – and at Radian, we’ll give you our best right back. 
See Yourself as a Data Engineer The Data Engineer develops database objects and structures for data storage, retrieval, and reporting according to project specification. Participates and leads small teams on assignments to accomplish project goals. The focus of this position will be on data warehouse and ETL activities. See Your Primary Duties and Responsibilities Ensure effectiveness of the current Information Technology systems. Participate in the following: Prepares database loadable files for the data warehouse Applies pre-defined business transformation rules Designs and develops back-end databases for business intelligence applications Program, test, implement and maintain any data extraction programs necessary to extract the data Designs and develops ETL processes for delivery into the data warehouse Ensuring the correct application of the business rules through data query after the data is loaded into the Data Warehouse Monitor loads to ensure successful completion Perform Data conversion, Quality and Verification activities Perform SQL tuning for PL/SQL procedures, Views, Brio reports, etc Create and manage daily, weekly, and monthly data operations and schedule processes Identifies and Coordinates source data extraction from other operational systems Participation in design sessions chaired by data stewards and/or IT personnel where decisions are made involving the transformation from source to target Optimize ETL performance Design various data movement load processes Develop and implement the error handling strategy for ETL Provides support to database administrators and interfaces with business users to ensure the database is satisfying business requirements. 
Evaluates and modifies existing technology to take into account changes in business requirements, equipment configurations, and software compliance Ensures documentation is created and up to date on all projects and operational systems Performs bug fixing and troubleshooting, and assists in user support for existing applications See the Job Specifications Your Knowledge: 5+ years’ experience in database development (Microsoft SQL Server, Microsoft SQL Integration Services and Cloud Technologies) Thorough understanding of database theory and practice (ETL, OLTP, OLAP, Data Vault, and Star Schema design patterns) Financial/Mortgage Industry Experience Your Skills and Abilities: Proven ability to work autonomously Independent thinker with strong communication/interpersonal skills Proven ability to quickly understand client requirements and translate them into software developer requirements Strong technical estimating skills Ability to manage multiple assignments/projects simultaneously. Strong analytical ability Attention to detail Ability to work independently Takes ownership of actions and outcomes Willing and capable to remain open, flexible and adaptable to change Strong customer orientation Strong team orientation Your Prior Work Experience: Technical: 5-8 years Supervisory: None See Your Location Radian is committed to a flexible work environment for many of our roles. This is a *Work From Anywhere* role meaning you have the flexibility to work from home (or another designated workspace that fits your needs). This role provides additional flexibility should you want to work on-site at a Radian office. Explore our office locations here and let your Talent Acquisition Partner know you would be interested in working on-site. *Work From Anywhere is subject to Radian’s Alternative Work Policy and business needs. See Why You Should Work With Us Competitive Compensation: anticipated base salary from $90,000 to $125,000 based on skills and experience. 
This position is eligible to participate in an annual incentive program. Our Company Makes an Impact. We’ve been recognized by multiple organizations like Bloomberg’s Gender-Equality Index, HousingWire’s Tech 100, and The Forum of Executive Women’s Champion of Board Diversity. Radian has also pledged to PwC’s CEO Action for Diversity & Inclusion commitment. Rest and Relaxation. This role is eligible for 25 days of paid time off annually, which is prorated in the year of hire based on hire date. In addition, based on your hire date, you will be eligible for 9 paid holidays + 2 floating holiday in support of our DEI culture. Health & Welfare Benefits. Multiple medical plan choices, including HSA and FSA options, dental, vision, and basic life insurance. Prepare for your Future. 401(k) with a top of market company match (did we mention the company match is immediately vested?!) and an opportunity to participate in Radian’s Employee Stock Purchase Plan (ESPP). Paid Parental Leave. An opportunity for all new parents to embrace this exciting change in their lives. Employee Assistance and Discount Programs. From helping you navigate the healthcare system, to providing resources and assistance to parents and caregivers of children with development disabilities, to scoring discounts with thousands of retailers. Pet Insurance. To help protect our furry family members. See More About Radian Radian is a fintech servicing the mortgage and real estate services industry. As a team, we pride ourselves on seeing the potential of every person, every idea and every day. Seeing each other at Radian goes far beyond our open, flexible culture. It means seeing our people’s potential – and creating inspiring career paths that help them get there. Or seeing new pathways and innovating for the future of our industry. It means seeing each other for all that we are. 
And it means seeing our purpose as one that extends beyond the bottom line – having an impact on communities across the country to help more people achieve the American Dream of homeownership. We hope you’ll see yourself at Radian. See more about us at Radian.com. Defining Roles for Radian's Future Understanding the qualities and characteristics that define a Leader and an Employee is important to ​building our future-fit workforce. Radian's future is only as bright as its people. For that reason, our People Plan includes profiles to support the qualities and characteristics that each Leader as well as each Employee should embody upon hire or via development. EEO Statement Radian complies with all applicable federal, state, and local laws prohibiting discrimination in employment. All qualified applicants will receive consideration for employment without regard to gender, age, race, color, religious creed, marital status, gender identity, sexual orientation, national origin, ethnicity, ancestry, citizenship, genetic information, disability, protected veteran status or any other characteristic protected by applicable federal, state, or local law. Accommodation Whether you require an accommodation for the job application or interview process, Radian is dedicated to a barrier-free employment process and encourages a diverse workforce. 
If you have questions about the accommodation process, please e-mail careers@radian.com."," Mid-Senior level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-diverse-lynx-3488403440?refId=ZX8n%2BvB3sO3%2BFjKj8oFRvg%3D%3D&trackingId=4O7CKbFu7RVDfUJ3DDPTrg%3D%3D&position=14&pageNum=15&trk=public_jobs_jserp-result_search-card," Diverse Lynx ",https://www.linkedin.com/company/diverselynx?trk=public_jobs_topcard-org-name," San Diego, CA "," 3 weeks ago "," Be among the first 25 applicants ","Job Description Job Title: Data Engineer Experience: 5+ Years Location: San Diego, CA - Initially Remote Job Description Associate should have a minimum of 5-7 years of working experience in ETL frameworks and strong knowledge of AWS + Python + SQL. Good experience in ETL pipelines (Snowflake) and Airflow. Able to write SQL queries during evaluation. Diverse Lynx LLC is an Equal Employment Opportunity employer. All qualified applicants will receive due consideration for employment without any discrimination. All applicants will be evaluated solely on the basis of their ability, competence and their proven capability to perform the functions outlined in the corresponding role. We promote and support a diverse workforce across all levels in the company."," Entry level "," Contract "," Information Technology "," Software Development " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-compunnel-inc-3522118749?refId=ZX8n%2BvB3sO3%2BFjKj8oFRvg%3D%3D&trackingId=YZP2ultOM0iAnyFDtIYCQw%3D%3D&position=15&pageNum=15&trk=public_jobs_jserp-result_search-card," Compunnel Inc. 
",https://www.linkedin.com/company/compunnel-software-group?trk=public_jobs_topcard-org-name," Durham, NC "," 6 days ago "," 176 applicants ","Responsibilities for Data Engineer Create and maintain optimal data pipeline architecture, Assemble large, complex data sets that meet functional / non-functional business requirements. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies. Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics. Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs. Keep our data separated and secure across national boundaries through multiple data centers and AWS regions. Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader. Work with data and analytics experts to strive for greater functionality in our data systems. Qualifications for Data Engineer Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases. Experience building and optimizing ‘big data’ data pipelines, architectures and data sets. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement. Strong analytic skills related to working with unstructured datasets. 
Build processes supporting data transformation, data structures, metadata, dependency and workload management. A successful history of manipulating, processing and extracting value from large disconnected datasets. Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores. Strong project management and organizational skills. Experience supporting and working with cross-functional teams in a dynamic environment. We are looking for a candidate with 5+ years of experience in a Data Engineer role, who has attained a Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience using the following software/tools: Experience with big data tools: Hadoop, Spark, Kafka, etc. Experience with relational SQL and NoSQL databases, including Postgres and Cassandra. Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. Experience with AWS cloud services: EC2, EMR, RDS, Redshift Experience with stream-processing systems: Storm, Spark-Streaming, etc. Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc."," Mid-Senior level "," Contract "," Information Technology and Engineering "," Banking and Investment Banking " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-ivy-energy-3484738543?refId=ZX8n%2BvB3sO3%2BFjKj8oFRvg%3D%3D&trackingId=kXGwhQmgPryldtp1RUw3LA%3D%3D&position=16&pageNum=15&trk=public_jobs_jserp-result_search-card," Ivy Energy ",https://www.linkedin.com/company/ivyenergy?trk=public_jobs_topcard-org-name," Oakland, CA "," 3 weeks ago "," Be among the first 25 applicants ","Today's financial system is built to favor those with money. Grid's mission is to level that playing field by building financial products that help users better manage their financial future. 
The Grid app lets users access cash, build credit, spend money, file their taxes and lots, lots more. Grid is a fast growing team that's deeply passionate about making a difference in the lives of millions. We're solving huge problems and believe that a merit driven culture allows every team member to play a big role. Join our growing team in our Bay Area headquarters or additional offices across the US. Our Data Engineers are responsible for converting the data surrounding a customer's financial life into tools that make our customers smarter and better equipped financially for the future. We value making intelligent decisions backed by good data and tools. We're looking for people who share our values, particularly if you have experience analyzing, processing, and learning from large data sets. Problems We Work On We're looking for a seasoned data engineer who can help us lay the foundation of an exceptional data engineering practice. The ideal candidate will be confident with a programming language of their choice, be able to learn new technologies quickly, have strong software engineering and computer science fundamentals and have extensive experience with common big data workflow frameworks and solutions. You will be writing code, setting style guides and collaborating cross-functionally with product, engineering and leadership. 
Analytics: Collect all the data for a user into tools that help our customers Machine Learning: Using a variety of techniques to reach better insights Data Processing: Managing data & statistics using scalable and efficient technologies Visualization: Envision our data as beautiful graphs and tools that allow customers to explore their data & ask their own questions Risk: Analyze data for anomalous patterns and build tools that allow us to find bad actors quickly We Practice Open collaboration Code reviews Testing Agile development We Use Go Python MySQL Google Cloud Platform Kubernetes Kubeflow Docker Google Pubsub BigQuery Firebase We're looking for Engineers to: Design and implement platform services, frameworks and ecosystems Build a scalable, reliable, operable and performant big data workflow platform for data scientists/engineers, AI/ML engineers, and product/operation team members Drive efficiency and reliability improvements through design and automation: performance, scaling, observability, and monitoring Requirements Strong programming skills with Python Strong programming skills with a typed programming language, such as Java, Scala, Go, etc. Disciplined approach to development, testing, and quality assurance Excellent communication skills, capable of explaining highly technical problems in English Understand data processing and ETL, hands-on experience building pipelines and using frameworks such as Hive, HDFS, Presto, Spark, etc. Really Strong Candidates May Have: Actively contributed to open source software Worked with a strong, lean-based development environment Previous work experience in a start-up environment Ability to recognize the right tool for the right situation/problem. 
Will have strong programming skills in Go. PI204123141"," Entry level "," Full-time "," Information Technology "," IT System Data Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-staffchase-3527803210?refId=ZX8n%2BvB3sO3%2BFjKj8oFRvg%3D%3D&trackingId=kcXrN5XuOI9EGVtO0xIUCQ%3D%3D&position=17&pageNum=15&trk=public_jobs_jserp-result_search-card," StaffChase ",https://www.linkedin.com/company/staffchase?trk=public_jobs_topcard-org-name," United States "," 2 days ago "," Over 200 applicants ","We are currently accepting resumes for a Data Engineer position in Columbus, OH. 8-10 years' experience. $96-106k. The selected candidate will perform the following duties: As a Data Engineer you’ll be responsible for acquiring, curating, and publishing data for analytical or operational uses. Data should be in a ready-to-use form that creates a single version of the truth across all data consumers, including business users, data scientists, and Technology. Ready-to-use data can be for both real-time and batch data processes and may include unstructured data. Successful data engineers have the skills typically required for full-lifecycle software engineering development, from translating requirements into design, development, testing, deployment, and production maintenance tasks. You’ll have the opportunity to work with various technologies from big data, relational and SQL databases, unstructured data technology, and programming languages. This company has an industry-leading workforce and is passionate about creating data solutions that are secure, reliable and efficient in support of their mission to provide extraordinary care. 
They embrace an agile work environment and collaborative culture through the understanding of business processes, relationship entities and requirements using data analysis, quality, visualization, governance, engineering, robotic process automation, and machine learning to produce targeted data solutions. If you have the drive and desire to be part of a future-forward, data-enabled culture, we want to hear from you. JOB DESCRIPTION Key Responsibilities: Consults on complex data product projects by analyzing moderate to complex end-to-end data product requirements and existing business processes to lead in the design, development and implementation of data products. Responsible for producing data building blocks, data models, and data flows for varying client demands such as dimensional data, standard and ad hoc reporting, data feeds, dashboard reporting, and data science research & exploration. Translates business data stories into a technical story breakdown structure and work estimate to assess value and fit for a schedule or sprint. Creates business user access methods to structured and unstructured data by techniques such as mapping data to a common data model, NLP, transforming data as necessary to satisfy business rules, AI, statistical computations and validation of data content. Builds data cleansing, imputation, and common data meaning and standardization routines from source systems by understanding business and source system data practices and by using data profiling and source data change monitoring, extraction, ingestion and curation data flows. Facilitates medium to large-scale data using cloud technologies – Azure and AWS (i.e. Redshift, S3, EC2, Data-pipeline and other big data technologies). Collaborates with enterprise DevSecOps team and other internal organizations on CI/CD best practices, using JIRA, Jenkins, Confluence, etc. 
Implements production processes and systems to monitor data quality, ensuring production data is always accurate and available for key stakeholders and business processes that depend on it. Develops and maintains scalable data pipelines for both streaming and batch requirements and builds out new API integrations to support continuing increases in data volume and complexity. Writes and performs data unit/integration tests for data quality. With input from a business requirements/story, creates and executes testing data and scripts to validate that quality and completeness criteria are satisfied. Can create automated testing programs and data that are re-usable for future code changes. Practices code management and integration with engineering Git principle and practice repositories. Participates as an expert and learner in team tasks for data analysis, architecture, application design, coding, and testing practices. May perform other responsibilities as assigned. Typical Skills and Experiences: Education: Undergraduate studies in computer science, management information systems, business, statistics, math, a related field or comparable experience and education strongly preferred. Graduate studies in business, statistics, math, computer science or a related field are a plus. License/Certification/Designation: Certifications are not required but encouraged. Experience: Five to eight years of relevant experience with data quality rules, data management organization/standards and practices. Solid experience with software development on large and/or concurrent projects. Experience in data warehousing, statistical analysis, data models, and queries. One to three years’ experience with developing compelling stories and distinctive visualizations. Insurance/financial services industry knowledge a plus. Knowledge, Abilities and Skills: Data application and practices knowledge. 
Advanced skills with modern programming and scripting languages (e.g., SQL, R, Python, Spark, UNIX Shell scripting, Perl, or Ruby). Strong problem solving, oral and written communication skills. Ability to influence, build relationships, negotiate and present to senior leaders. Other criteria, including leadership skills, competencies and experiences may take precedence. Staffing exceptions to the above must be approved by the hiring manager’s leader and HR Business Partner. Values: Regularly and consistently demonstrates the company's values. Job Conditions: Working Conditions: Normal office environment. ADA: The above statements cover what are generally believed to be principal and essential functions of this job. Specific circumstances may allow or require some people assigned to the job to perform a somewhat different combination of duties."," Mid-Senior level "," Contract "," Information Technology, Engineering, and Analyst "," IT Services and IT Consulting, Insurance, and Financial Services " Data Engineer,United States,Data Engineer ,https://www.linkedin.com/jobs/view/data-engineer-at-new-millenium-consulting-3522205308?refId=ZX8n%2BvB3sO3%2BFjKj8oFRvg%3D%3D&trackingId=ZzPOxt3aAMN4FGmTlxRC%2Fg%3D%3D&position=18&pageNum=15&trk=public_jobs_jserp-result_search-card," New Millenium Consulting ",https://www.linkedin.com/company/new-millenium-consulting?trk=public_jobs_topcard-org-name," Philadelphia, PA "," 1 day ago "," 30 applicants ","Job Title: Data Engineer Job Type: Full-time Job Location: Philadelphia, PA (Hybrid - 2 days onsite as needed - must reside within 2 hours' driving distance of Philadelphia) One of the largest hospitals and healthcare systems in Philadelphia is looking for a full-time Data Engineer to join their security team. The Data Engineer should have experience with Python (specifically around data applications, so PySpark, Pandas, etc.); Azure Synapse experience is also highly valued. 
The Data Engineer – Cybersecurity works to turn data into actionable information and insights to understand our areas of risk and vulnerability and promote data-driven decision making. Supports cyber security initiatives through predictive and reactive analytics, articulating emerging trends to leadership and staff. Develops analytical products and processes fusing enterprise and all-source intelligence. This role will work closely with cybersecurity leadership to function as a trusted analytics delivery partner to answer pertinent business questions using all available data assets. ESSENTIAL FUNCTIONS: Identify, develop and deliver analytical solutions that provide impactful insights and critical risk-based decision support. Use knowledge of cybersecurity operations to track key performance metrics and drivers of risk. Work directly with business groups to identify, define and document requirements for reports/dashboards. Use a wide variety of analytical tools including data visualization and interactive discovery tools (e.g., Qlik, Tableau, PowerBI, etc.), MS Office applications, SQL programming, and statistical/mathematical programming tools (e.g., R, SAS, SPSS, Python, etc.). Perform tasks for the development and/or implementation, maintenance, and improvement of analytical systems, including creating specifications, project plans, test plans and documentation. Operate and maintain existing Azure Synapse pipelines and machine learning solutions. Present key findings and recommendations to improve operations. Maintain familiarity with HIPAA / PCI-DSS and other Information Security regulations. EXPERIENCE REQUIREMENTS: At least three years of experience in a data analytics role, ability to multi-task, a keen eye for detail, strong organizational skills, the ability to thrive in fast-paced, high-stress situations, ability to communicate cyber security issues to peers and management. 
EDUCATIONAL/TRAINING REQUIREMENTS: Bachelor’s degree in data analytics, information technology, computer science, mathematics or related field preferred. CERTIFICATES, LICENSES, AND REGISTRATION: At least one Microsoft certification related to data analytics/data engineering, such as DP-900, PL-300, DP-100, DP-203, DP-500 preferred."," Mid-Senior level "," Full-time "," Information Technology, Analyst, and Engineering "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-supernal-3524217924?refId=ZX8n%2BvB3sO3%2BFjKj8oFRvg%3D%3D&trackingId=vAg0N4BHN0VqL5ceJCsCKg%3D%3D&position=19&pageNum=15&trk=public_jobs_jserp-result_search-card," Supernal ",https://www.linkedin.com/company/supernalaero?trk=public_jobs_topcard-org-name," Fremont, CA "," 10 hours ago "," 37 applicants ","Supernal, Data Engineer (FTE) Fremont, CA (Onsite) Job Overview: The Sr. Data Engineer is responsible for identifying, implementing, and supporting data use cases and owners throughout the business that can use data engineering and analytics to improve outcomes. Activities include collaboration with business units to identify ways data can improve their outcomes, designing use case ontologies, building, and maintaining start to finish data pipelines across disparate sources, standing up no code dashboards, and designing / guiding standard processes in code, data, and process management. What you can contribute: A minimum of five (5) years of proven experience with big data platforms across diverse use cases, including data management (an equivalent combination of education and experience may be considered) Experience and architectural understanding working with one (1) or more cloud data warehouses—Azure Synapse, Snowflake, Google BigQuery, AWS Redshift. 
Experience working with highly regulated data in the aerospace, automotive, healthcare, and/or financial industries Professional experience with / comfortable using: Programming languages, cloud services such as Azure (preferable), databases and data lakes, Using SQL, Spark (any interface, but PySpark preferred), Git, Command line, CI/CD Solid expertise and skills in OO programming paradigm and implementation Must have good understanding of the inner workings and hands-on skills with data technologies: Kafka, Spark, etc. Knowledge of various ETL techniques and frameworks; experience integrating data from multiple data sources Solid understanding of distributed system concepts, cloud computing Bachelor’s degree in relevant field such as computer science, mathematics, physics, software engineering, data science, or information science What you can do: Transform noisy, real-world data into valuable information that enables the business to quickly and optimally make data driven decisions Partner with your team and technical stakeholders and deliver internal data and analytics products on critical use cases where data can be used to excavate hidden insights (e.g., battery testing, testing and certification, AI, demand forecasting, cyber security) Write, maintain, and improve code, pipelines, ontologies, visualizations, interactive tools, and dashboards to enable technical and non-technical users to leverage data Identify key data sets through deep engagement with use cases and workflows Design, build, and manage end to end data pipelines in a cohesive data platform Develop scalable, reliable, manageable data pipelines Develop and train CI/CD practices for data integrations Partner with technical team members and collaborators to design solutions for MLOps, data quality verification, anomaly detection, real-time streaming pipelines Design, integrate, and document technical components for seamless data discoverability and usage Triage and address support requests (e.g., 
workflow and product issues) Actively enable a data driven and innovative culture within Supernal Communicate insights in a way that resonates with internal and external parties Design and guide standard methodologies in data, code, and process management Drive data strategy across the organization including leading training sessions in our data systems Stay up to date on industry best practices and standards The job responsibilities of this position require access to certain technology and/or software source code subject to U.S. export control laws and certain software that can only be used and accessed by U.S. Persons. Accordingly, this position is limited to applicants that are US Persons, i.e., U.S. citizens, lawful permanent residents as defined by 8 U.S.C. 1101(a)(20), or protected individuals as defined by 8 U.S.C."," Mid-Senior level "," Full-time "," Engineering and Information Technology "," Technology, Information and Internet and Airlines and Aviation " Data Engineer,United States,Data Engineer / Fully Remote,https://www.linkedin.com/jobs/view/data-engineer-fully-remote-at-motion-recruitment-3515310275?refId=ZX8n%2BvB3sO3%2BFjKj8oFRvg%3D%3D&trackingId=CAqmSGdcCMcmUlSF3b%2Bvnw%3D%3D&position=20&pageNum=15&trk=public_jobs_jserp-result_search-card," Motion Recruitment ",https://www.linkedin.com/company/motion-recruitment-partners?trk=public_jobs_topcard-org-name," San Francisco, CA "," 1 week ago "," 90 applicants ","This company is looking to bring another data engineer onto their fully-remote team! They are in the e-ticketing space for sporting events, concerts, and more, and aim to put customers first in the process. They are looking for an enthusiastic, driven engineer who is ready to learn and is excited about their product! 
Required Skills 3-6 years of industry experience (non-internship) Expertise with Python, Kinesis, SQL, AWS Redshift, and Snowflake Databricks, Periscope and Sigma are nice-to-haves Experience building out and automating ETL pipelines Perks Fully-remote workplace with generous home office stipend Quarterly, week-long offsites to cities in the US $150,000 - $180,000 base (flexible depending on engineer) Stock equity package No C2C or sponsorship provided at this time Posted By: Kelsey Tsonton"," Mid-Senior level "," Full-time "," Information Technology "," Staffing and Recruiting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-anderson-cancer-ctr-3509899975?refId=ZX8n%2BvB3sO3%2BFjKj8oFRvg%3D%3D&trackingId=Kv4SsswE64O7off2%2BmqR5w%3D%3D&position=21&pageNum=15&trk=public_jobs_jserp-result_search-card," Anderson Cancer Ctr ",https://www.linkedin.com/company/anderson-cancer-ctr?trk=public_jobs_topcard-org-name," Houston, TX "," 1 week ago "," Be among the first 25 applicants ","We seek a driven and collaborative data engineer to contribute to building the data infrastructure of our flagship platform A3D3a: Adaptive, AI-augmented, Drug Discovery and Development. With expertise in data architecture, the Data Engineer will directly contribute to our mission to discover novel therapies for cancer patients. Led by Prof. Bissan Al-Lazikani, Director of Therapeutics Data Science, the intelligent and ever-learning A3D3a platform is part of the new initiative in Therapeutics Data Science and part of our ambitious Institute for Data Science in Oncology at MD Anderson. A3D3a will accelerate the discovery and impact of novel therapies for cancer by enabling novel opportunities for optimized therapies for patients with a focus on rare and hard-to-treat cancers through the development of novel machine learning and AI technologies. 
Central to this vision, the Data Engineer will build and maintain data infrastructure to enable the discovery of hidden therapeutic opportunities in integrated patient data and will work closely with data scientists, data engineers, bioinformaticians, and molecular modelers. The candidate must hold a Bachelor of Computer Science or related degree; experience in computing related to the natural sciences would be ideal. Job Responsibilities Work with the lead data engineer on establishing an architectural plan to encompass local, hybrid, and/or cloud infrastructure Utilize a variety of tools (e.g. Spark, KNIME, Airflow, SQL) to merge and extract data from multiple sources and environments Create data pipelines to validate and enrich data for use in ML models Generate and maintain metadata for all stages of the data pipeline Work with a multidisciplinary team and stakeholders to define data requirements Establish and maintain interfaces to the data (APIs) Utilize industry standards for creating, storing, and documenting code Expected Skills Programming Strong Python programming experience is a must and candidates must have demonstrated skills in that area Candidates having experience using Spark (PySpark) will be given preference Solid understanding of CI/CD practices Experience building and querying both relational and graph databases Familiarity with NoSQL is a plus Data Engineering Solid knowledge of metadata creation and management Experience with Airflow, Argo or equivalent workflow orchestration is required Must have demonstrated experience working with APIs Good understanding of container-based architectures (e.g. Docker/Kubernetes) Candidate must have demonstrated experience working on data engineering tasks using one of the major cloud vendors. 
Preference will be given to those with experience with Microsoft Azure Prefer candidates with demonstrated skills in building/deploying ML models Other Candidate must be self-motivated and able to work independently on tasks Strong written and oral communication skills Ability to work in a multidisciplinary team Education Required: Bachelor's degree in Biomedical Engineering, Electrical Engineering, Computer Engineering, Physics, Applied Mathematics, Science, Engineering, Computer Science, Statistics, Computational Biology, or related field. Experience Required: Three years' experience in scientific software development/analysis. With a Master's degree, one year's experience required. With a PhD, no experience required. Preferred: Experience includes computing related to the natural sciences. It is the policy of The University of Texas MD Anderson Cancer Center to provide equal employment opportunity without regard to race, color, religion, age, national origin, sex, gender, sexual orientation, gender identity/expression, disability, protected veteran status, genetic information, or any other basis protected by institutional policy or by federal, state or local laws unless such distinction is required by law. http://www.mdanderson.org/about-us/legal-and-policy/legal-statements/eeo-affirmative-action.html"," Entry level "," Full-time "," Information Technology "," Hospitals and Health Care " Data Engineer,United States,Data Engineer (Infrastructure),https://www.linkedin.com/jobs/view/data-engineer-infrastructure-at-aqua-3529600139?refId=ZX8n%2BvB3sO3%2BFjKj8oFRvg%3D%3D&trackingId=p%2BHtiTF7OWsJ%2FoUjdIh9cQ%3D%3D&position=22&pageNum=15&trk=public_jobs_jserp-result_search-card," AQUA ",https://www.linkedin.com/company/aquadotxyz?trk=public_jobs_topcard-org-name," New York, NY "," 1 day ago "," 125 applicants ","AQUA is looking for passionate team members to build the world’s best marketplace for web3 gamers. 
For years we’ve been captivated by the new experiences that NFTs will bring to gamers, whether they be economic, entertainment, or identity-driven. As the industry exploded last year, we realized players need a marketplace that has features designed with them in mind. From providing great content to displaying tactical information on each and every asset, we’re on a mission to empower players as they navigate this new world. Data is central at AQUA and we are currently building a modern and scalable data platform to democratize data across the organization, granting access to reliable data across internal and external parties. The Data Infrastructure Engineer will have the opportunity to participate in critical decision making as we build and scale the team. What you will be doing Develop and maintain scalable data pipelines and build out new API integrations to support continuing increases in data volume and complexity. Collaborate with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization. Implement processes and systems to monitor data quality, ensuring production data is always accurate and available for key stakeholders and business processes that depend on it. Write unit/integration tests, contribute to the engineering wiki, and document work. Perform data analysis required to troubleshoot data related issues and assist in the resolution of data issues. Work closely with a team of frontend and backend engineers, product managers, and analysts. Design data integrations and data quality framework. Design and evaluate open source and vendor tools for data lineage. Work closely with all business units and engineering teams to develop strategy for long term data platform architecture. What you should have Entrepreneurial abilities to take data-engineering projects from ideation to execution. 
Ability to work in small and fast-paced startup environments. Process oriented with great documentation skills. Excellent oral and written communication skills with a keen sense of customer service. BS or MS degree in Computer Science or a related technical field 3+ years of development experience (Python preferred) 3+ years of SQL experience (NoSQL experience is a plus) 3+ years of experience with schema design and dimensional data modeling Ability to manage and communicate data warehouse plans to internal clients. Experience designing, building, and maintaining data pipelines (e.g. ETL/ELT). Bonus Points A passion for web3 and gaming. Experience developing data-centric micro-services with FastAPI. Experience with managing Feature-stores (e.g. Feast) for our ML initiative. Experience with writing transformation queries using DBT (data-build-tool). Perks & Benefits: We are a remote-first organization with a distributed team. 401k match. 100% paid insurance for the employee and 90% for dependents. Unlimited PTO. Competitive pay & equity package."," Full-time ",,, Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-wise-skulls-3492587242?refId=ZX8n%2BvB3sO3%2BFjKj8oFRvg%3D%3D&trackingId=EeitalMWyHjZvI39565KEg%3D%3D&position=23&pageNum=15&trk=public_jobs_jserp-result_search-card," Wise Skulls ",https://www.linkedin.com/company/wearewiseskulls?trk=public_jobs_topcard-org-name," United States "," 3 weeks ago "," Over 200 applicants "," Title: Data Engineer. Location: Long Beach, CA (Remote). Duration: 12+ months. Implementation Partner: Infosys. End Client: Healthcare. JD: Minimum Years of Experience: 7+ Years. Need a strong Data Engineer with Azure Databricks and Impala development skills. 
"," Entry level "," Contract "," Information Technology "," Software Development " Data Engineer,United States,Business Intelligence Data Engineer,https://www.linkedin.com/jobs/view/business-intelligence-data-engineer-at-barnes-noble-inc-3516770290?refId=ZX8n%2BvB3sO3%2BFjKj8oFRvg%3D%3D&trackingId=QB3%2BiCl7DnBCigSfWZhnBg%3D%3D&position=24&pageNum=15&trk=public_jobs_jserp-result_search-card," Barnes & Noble, Inc. ",https://www.linkedin.com/company/barnes-&-noble?trk=public_jobs_topcard-org-name," New York, NY "," 1 week ago "," 29 applicants ","NY-New York (Union Square) Job Summary The Business Intelligence Data Engineer will be a key member of the Barnes & Noble IT organization. The Business Intelligence Data Engineer will be responsible for expanding and optimizing our data and data pipeline architecture, support cross functional teams in generating timely insights. The ideal candidate is a data specialist experienced in designing, developing, and deploying complex data pipelines in Azure Cloud platform The Business Intelligence Data Engineer will support our software developers, database architects, data analysts, dashboard developers and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. The role requires solid technical skills in designing and delivering large-scale enterprise data platforms on Azure Cloud combined with very strong communication skills. An employee in this position can expect an annual starting rate between $140,000 and $160,000, depending on experience, seniority, geographic locations, and other factors permitted by law. What You Do Deploy new solutions and configurations to meet business and compliance requirements. Participate in 24x7 on call rotations. Discover current technical standards and best practices (R&D). Deploy security patches, updates, and configuration changes. 
Manage consultants to ensure compliance with Barnes & Noble engineering and business standards. Knowledge & Experience Work with multiple business stakeholders in defining the right data requirements to fulfill growing analytics / insights needs across the enterprise Create and maintain optimal data pipeline architecture Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, optimal cost and performance. Design the right infrastructure / compute configuration for optimal extraction, transformation, and loading of data from a wide variety of data sources into ADLS, Databricks and Synapse. Develop data pipelines using PySpark, Python and DB SQL in Databricks in a Lakehouse architecture 5+ years of experience in a Data Engineering environment with hands-on experience developing ADF (Azure Data Factory) pipelines for an enterprise solution. 3+ years of experience in writing code in Databricks using Python to transform, manipulate (ETL/ELT) data, along with managing objects in Notebooks, Data Lake, ADLS, Azure Synapse. Experience with writing complex SQL Queries, User Defined Functions, Stored procedures and Materialized views. Someone who comes from a database development background and has transitioned to Azure Cloud/Data Lake/Synapse. Working experience with Azure DevOps and Source controls. Experience working in a large Retail enterprise and understanding of Retail-based data and reporting models. Experience with reporting tools like PowerBI/Tableau/MicroStrategy Strong analytic skills related to working with different types of datasets from a wide variety of data sources Strong project management and organizational skills Experience supporting and working with cross-functional teams in a dynamic environment Understanding of ELT and ETL patterns and when to use each. 
Undergraduate degree required (Graduate degree preferred) in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. Experience using the following software/tools/services: Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Synapse, SQL, PySpark Experience with relational SQL and NoSQL databases Experience with data pipeline and workflow management tools Auto req ID 65382BR Employment Type Full-Time City New York State New York EEO Statement Barnes & Noble is an equal opportunity and affirmative action employer. All qualified applicants will receive consideration for employment without regard to age, race, color, ancestry, national origin, citizenship status, military or veteran status, religion, creed, disability, sex, sexual orientation, marital status, medical condition as defined by applicable law, genetic information, gender, gender identity, gender expression, hairstyle, pregnancy, childbirth and related medical conditions, reproductive health decisions, or any other characteristic protected by applicable federal, state, or local laws and ordinances. Please tell us if you require a reasonable accommodation to apply for a job or to perform your job. Examples of reasonable accommodation include making a change to the application process or work procedures, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. Contact (800) 799-5335. 
Job Category Information Systems & Technology"," Entry level "," Full-time "," Business Development and Sales "," Retail " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-trioh-consulting-group-inc-3476758132?refId=ZX8n%2BvB3sO3%2BFjKj8oFRvg%3D%3D&trackingId=3ONvWWzwivrcY8IzqWB8tA%3D%3D&position=25&pageNum=15&trk=public_jobs_jserp-result_search-card," Trioh Consulting Group Inc ",https://www.linkedin.com/company/trioh-consulting-group-inc?trk=public_jobs_topcard-org-name," Washington, DC "," 1 month ago "," Over 200 applicants ","Come join a dynamic small business working with major prime contractors and government agencies in the DC/Metro area. We are growing and need top quality candidates that are not only dedicated to the job roles but have an interest in joining and helping guide the growth of a small business with limitless technical and business opportunities. Our data engineers support data collection, ingestion, validation, and loading of optimized data in the appropriate data stores. They work on a team made up of analyst(s), developer(s), data scientist(s), and a product lead and everyone on the team collaborates in support of a specific mission. Working directly with the analyst(s) and the product lead, the data engineer identifies and implements solutions for the data requirements, including building pipelines to collect data from disparate, external sources, implementing rules to validate that expected data is received, cleansed, transformed, massaged and in an optimized output format for the data store. The Data Engineer performs validation and analytics corresponding with client requirements and evolves solutions through automation, optimizing performance with minimal human involvement. As pipelines are executed, the data engineer monitors their status, performance, and troubleshoots issues while working on improvements to ensure the solution is the very best version to address the customer need. 
As a Mid-Level Data Engineer, this role focuses specifically on the development and maintenance of scalable data stores that supply big data in forms needed for business analysis. The best athlete candidate for this position will be able to apply advanced consulting skills, extensive technical expertise and full industry knowledge to develop innovative solutions to complex problems. This candidate is able to work without considerable direction and may mentor or supervise other team members. What we’re looking for: Someone with a solid background developing solutions for high volume, low latency applications and can operate in a fast-paced, highly collaborative environment. A candidate with an understanding of distributed computing and experience with SQL, Spark, ETL. A person who appreciates the opportunity to be independent, creative and challenged. An individual with a curious mind, passionate about solving problems quickly and bringing innovative ideas to the table. Basic Qualifications: 4+ years of experience with SQL 4+ years of experience developing data pipelines using modern Big Data ETL technologies like NiFi or StreamSets. 
4+ years of experience with a modern programming language such as Python or Java 4 years of experience working in a big data and cloud environment Secret Clearance or higher Additional Qualifications: 2 years of experience working in an agile development environment Ability to quickly learn technical concepts and communicate with multiple functional groups Ability to display a positive, can-do attitude to solve the challenges of tomorrow Possession of excellent verbal and written communication skills Preferred experience at the respective command with an understanding of analytical and data pain points and challenges across the J-Codes."," Full-time ",,, Data Engineer,United States,Business Intelligence Data Engineer,https://www.linkedin.com/jobs/view/business-intelligence-data-engineer-at-barnes-noble-inc-3516770290?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=mhyurOpy6Et321z2bBqMWA%3D%3D&position=1&pageNum=16&trk=public_jobs_jserp-result_search-card," Barnes & Noble, Inc. ",https://www.linkedin.com/company/barnes-&-noble?trk=public_jobs_topcard-org-name," New York, NY "," 1 week ago "," 29 applicants ","NY-New York (Union Square) Job Summary The Business Intelligence Data Engineer will be a key member of the Barnes & Noble IT organization. The Business Intelligence Data Engineer will be responsible for expanding and optimizing our data and data pipeline architecture and supporting cross-functional teams in generating timely insights. The ideal candidate is a data specialist experienced in designing, developing, and deploying complex data pipelines in the Azure Cloud platform. The Business Intelligence Data Engineer will support our software developers, database architects, data analysts, dashboard developers and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. 
The role requires solid technical skills in designing and delivering large-scale enterprise data platforms on Azure Cloud combined with very strong communication skills. An employee in this position can expect an annual starting rate between $140,000 and $160,000, depending on experience, seniority, geographic locations, and other factors permitted by law. What You Do Deploy new solutions and configurations to meet business and compliance requirements. Participate in 24x7 on-call rotations. Discover current technical standards and best practices (R&D). Deploy security patches, updates, and configuration changes. Manage consultants to ensure compliance with Barnes & Noble engineering and business standards. Knowledge & Experience Work with multiple business stakeholders in defining the right data requirements to fulfill growing analytics / insights needs across the enterprise Create and maintain optimal data pipeline architecture Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, optimal cost and performance. Design the right infrastructure / compute configuration for optimal extraction, transformation, and loading of data from a wide variety of data sources into ADLS, Databricks and Synapse. Develop data pipelines using PySpark, Python and DB SQL in Databricks in a Lakehouse architecture 5+ years of experience in a Data Engineering environment with hands-on experience developing ADF (Azure Data Factory) pipelines for an enterprise solution. 3+ years of experience in writing code in Databricks using Python to transform, manipulate (ETL/ELT) data, along with managing objects in Notebooks, Data Lake, ADLS, Azure Synapse. Experience with writing complex SQL Queries, User Defined Functions, Stored procedures and Materialized views. Someone who comes from a database development background and has transitioned to Azure Cloud/Data Lake/Synapse. 
Working experience with Azure DevOps and Source controls. Experience working in a large Retail enterprise and understanding of Retail-based data and reporting models. Experience with reporting tools like PowerBI/Tableau/MicroStrategy Strong analytic skills related to working with different types of datasets from a wide variety of data sources Strong project management and organizational skills Experience supporting and working with cross-functional teams in a dynamic environment Understanding of ELT and ETL patterns and when to use each. Undergraduate degree required (Graduate degree preferred) in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. Experience using the following software/tools/services: Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Synapse, SQL, PySpark Experience with relational SQL and NoSQL databases Experience with data pipeline and workflow management tools Auto req ID 65382BR Employment Type Full-Time City New York State New York EEO Statement Barnes & Noble is an equal opportunity and affirmative action employer. All qualified applicants will receive consideration for employment without regard to age, race, color, ancestry, national origin, citizenship status, military or veteran status, religion, creed, disability, sex, sexual orientation, marital status, medical condition as defined by applicable law, genetic information, gender, gender identity, gender expression, hairstyle, pregnancy, childbirth and related medical conditions, reproductive health decisions, or any other characteristic protected by applicable federal, state, or local laws and ordinances. Please tell us if you require a reasonable accommodation to apply for a job or to perform your job. Examples of reasonable accommodation include making a change to the application process or work procedures, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. 
Contact (800) 799-5335. Job Category Information Systems & Technology"," Entry level "," Full-time "," Business Development and Sales "," Retail " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-trioh-consulting-group-inc-3476758132?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=3lhSGqaffi9P59npsQyuaQ%3D%3D&position=2&pageNum=16&trk=public_jobs_jserp-result_search-card," Trioh Consulting Group Inc ",https://www.linkedin.com/company/trioh-consulting-group-inc?trk=public_jobs_topcard-org-name," Washington, DC "," 1 month ago "," Over 200 applicants ","Come join a dynamic small business working with major prime contractors and government agencies in the DC/Metro area. We are growing and need top quality candidates that are not only dedicated to the job roles but have an interest in joining and helping guide the growth of a small business with limitless technical and business opportunities. Our data engineers support data collection, ingestion, validation, and loading of optimized data in the appropriate data stores. They work on a team made up of analyst(s), developer(s), data scientist(s), and a product lead and everyone on the team collaborates in support of a specific mission. Working directly with the analyst(s) and the product lead, the data engineer identifies and implements solutions for the data requirements, including building pipelines to collect data from disparate, external sources, implementing rules to validate that expected data is received, cleansed, transformed, massaged and in an optimized output format for the data store. The Data Engineer performs validation and analytics corresponding with client requirements and evolves solutions through automation, optimizing performance with minimal human involvement. 
As pipelines are executed, the data engineer monitors their status, performance, and troubleshoots issues while working on improvements to ensure the solution is the very best version to address the customer need. As a Mid-Level Data Engineer, this role focuses specifically on the development and maintenance of scalable data stores that supply big data in forms needed for business analysis. The best athlete candidate for this position will be able to apply advanced consulting skills, extensive technical expertise and full industry knowledge to develop innovative solutions to complex problems. This candidate is able to work without considerable direction and may mentor or supervise other team members. What we’re looking for: Someone with a solid background developing solutions for high volume, low latency applications and can operate in a fast-paced, highly collaborative environment. A candidate with an understanding of distributed computing and experience with SQL, Spark, ETL. A person who appreciates the opportunity to be independent, creative and challenged. An individual with a curious mind, passionate about solving problems quickly and bringing innovative ideas to the table. Basic Qualifications: 4+ years of experience with SQL 4+ years of experience developing data pipelines using modern Big Data ETL technologies like NiFi or StreamSets. 
4+ years of experience with a modern programming language such as Python or Java 4 years of experience working in a big data and cloud environment Secret Clearance or higher Additional Qualifications: 2 years of experience working in an agile development environment Ability to quickly learn technical concepts and communicate with multiple functional groups Ability to display a positive, can-do attitude to solve the challenges of tomorrow Possession of excellent verbal and written communication skills Preferred experience at the respective command with an understanding of analytical and data pain points and challenges across the J-Codes."," Full-time ",,, Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-petadata-3500366275?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=UEkV8ACBCtKimtTXr%2Frx%2BA%3D%3D&position=3&pageNum=16&trk=public_jobs_jserp-result_search-card," PETADATA ",https://www.linkedin.com/company/petadata?trk=public_jobs_topcard-org-name," Seattle, WA "," 2 weeks ago "," Over 200 applicants ","Job Title: Data Engineer Location: Seattle, WA/ ( 100 % Remote) Experience: 10+ years PETADATA was established to provide all kinds of careers in IT, including IT support, software engineering, analytics and many other information technology areas of expertise. Our Software Development Delivery Centers are located in San Francisco (California-USA) & London (UK), Hyderabad (INDIA) and Toronto, CANADA, with its vision to empower Global clients with a complete range of quality-oriented services. Summary: The Sr. Data Engineer is an expert in multiple infrastructure and data platform technologies and is responsible for the ongoing operational excellence of the technology environment. The scope of the job includes the development, design and maintenance of inbound data processing and outbound data processing using the tools within the client’s business. 
Primary Responsibilities: Should be able to establish scalable, efficient, automated processes for large dataset analysis, model development, and validation. Ability to support, test, deploy, maintain the AWS ecosystem from an infrastructure standpoint. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to and maintain the AWS ecosystem from an infrastructure standpoint. Demonstrates excellent verbal and written communication skills as well as the ability to bridge the gap between data science, computer engineering, and management. Required Technical and Professional Expertise: Understanding of statistics, machine learning, algorithms, predictive modeling, and advanced mathematics. Should have 6+ years of experience in the design and implementation of data pipelines. Proven ability to work in Agile environments: early and continuous delivery of valuable software, embracing change, frequent delivery, autonomy and motivation, etc. Must have hands-on experience in programming languages such as Python, Scala or Java. Should have hands-on experience in batch processing (Spark, Presto, Hive) or streaming (Flink, Beam, Spark Streaming). Good to have 5+ years of experience in AWS with Kubernetes. Must be a constructive communicator capable of discussing difficult issues to explain data pipelines effectively with team members and customers. 
Note: Candidates are required to attend Phone/Video Call / In-person interviews and after selection, the candidate should go through all background checks on Education and Experience."," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting and Software Development " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-customer-success-at-square-3507264467?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=t6KnyxggOk23Cy9Vq2VWTA%3D%3D&position=4&pageNum=16&trk=public_jobs_jserp-result_search-card," PETADATA ",https://www.linkedin.com/company/petadata?trk=public_jobs_topcard-org-name," Seattle, WA "," 2 weeks ago "," Over 200 applicants "," Job Title: Data EngineerLocation: Seattle, WA/ ( 100 % Remote)Experience: 10+ yearsPETADATA was established to provide all kinds of careers in IT, including IT support, software engineering, analytics and many other information technology areas of expertise. Our Software Development Delivery Centers are located in San Francisco (California-USA) & London (UK), Hyderabad (INDIA) and Toronto, CANADA, with its vision to empower Global clients with a complete range of quality-oriented services.Summary:The Sr. Data Engineer is an expert in multiple infrastructure and data platform technologies and is responsible for the ongoing operational excellence of the technology environment. 
The scope of the job includes the development, design and maintenance of inbound data processing and outbound data processing using the tools within the client’s business.Primary Responsibilities:Should be able to establish scalable, efficient, automated processes for large dataset analysis, model development, and validation.Ability to support, test, deploy, maintain the AWS ecosystem from an infrastructure standpoint.Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to and maintain the AWS ecosystem from an infrastructure standpoint.Demonstrates excellent verbal and written communication skills as well as the ability to bridge the gap between data science, computer engineering, and management.Required Technical and Professional Expertise:Understanding of statistics, machine learning, algorithms, predictive modeling, and advanced mathematics.Should have 6+ years of experience in the design and implementation of data pipelines.Proven ability to work in Agile environments: early and continuous delivery of valuable software, embracing change, frequent delivery, autonomy and motivation, etc.Must have hands-on experience in programming languages such as Python, Scala or Java.Should have hands-on experience in batch processing (Spark, Presto, Hive) or streaming (Flink, Beam, Spark Streaming).Good to have 5+ years of experience in AWS with Kubernetes.Must be a constructive communicator capable of discussing difficult issues to explain data pipelines effectively with team members and customers.Note:Candidates are required to attend Phone/Video Call / In-person interviews and after selection, the candidate should go through all background checks on Education and Experience. 
"," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting and Software Development " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-one-3499586077?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=HeLrYKCahaepwfkOm5QtmQ%3D%3D&position=5&pageNum=16&trk=public_jobs_jserp-result_search-card," ONE ",https://www.linkedin.com/company/oneapp?trk=public_jobs_topcard-org-name," Florida, United States "," 2 weeks ago "," Be among the first 25 applicants ","About ONE ONE is on a mission to help people live healthier financial lives. We’re doing this by creating simple solutions to help our customers save, spend, and grow their money – all in one place. The U.S. consumer today deserves better. Millions of Americans today can’t access credit, build savings or wealth, and are left to manage their financial lives through multiple disconnected apps. Almost a quarter of U.S. adults are unbanked or underbanked and roughly 80% of fintech users rely on multiple accounts to manage their finances. What makes us unique? We are backed by a preeminent fintech investor (Ribbit) and the world’s largest retailer (Walmart), maintain the speed and independence of a startup, and employ a strong (and growing) collection of world-class talent. There’s never been a better moment to build a business that helps people live healthier financial lives. Come build with us! The role As an Engineer, Data, your mandate is to build the data transformation pipelines & reporting infrastructure that serve our members and run our company. This role will impact ONE’s vision by writing, reviewing, and shipping code and collaborating with others across the company. You will work closely across the Engineering, Product, Design as well as Sales, Compliance and Customer Support team(s) working with stakeholders as highly technical, communicative, and emotionally intelligent partners. This role reports to Manager, Data Engineering. 
This role is responsible for: Building and maintaining ONE's data infrastructure. Developing and owning ONE's streaming data transformation pipeline. Managing and supporting reporting and analytics tools. Collaborating closely with our Data Analysts and Data Scientists. Tracking and defining metrics around performance. Additional duties as assigned by your manager. You bring Mid Career (5-10 Years) Experience in Apache Spark, Scala, Python, SQL. Experience designing and implementing low latency data pipelines using Spark structured streaming. Experience optimizing SQL queries and lake / warehouse data structures. Cost-conscious creative problem solving. Proficient in AWS cloud services and technologies. Experience building, shipping — and growing — non-trivial products & services. Passion for your craft, people, and our mission. You value quality across code, communication, and culture. A desire to keep growing your skills, and an ability to learn quickly. We hire for slope, not just y-intercept. Pay Transparency The estimated annual base salary for this position ranges from $175,000 to $190,000. Pay is generally based upon the level, complexity, responsibility, and job duties/requirements of the specific position. We then source candidates with the requisite skills, expertise, education, training, and experience. If you are selected for an interview, please feel welcome to speak to a Talent Partner about our compensation philosophy and other available benefits. Leveling Philosophy In order to thoughtfully scale the company and avoid downstream inequities, we’ve adopted a flat titling structure at ONE. Though we may occasionally post a role externally with a prefix such as “Senior” to reflect the external level of the position, we do not use prefixes in titles like that internally unless in a position which manages a team. 
Internal titles typically include your specific functional responsibility, such as engineering, product management or sales, and often include additional descriptors to ensure clarity of role and placement within our organization (e.g. “Engineer, Platform”, “Sales, Business Development” or “Manager, Talent”). Employees are paid commensurate with their experience and the internal level within ONE. What it's like working @ ONE Our teams collaborate remotely and in our work spaces in New York and Sacramento. Competitive cash Benefits effective on day one Early access to a high potential, high growth fintech Generous stock option packages in an early-stage startup Remote friendly (anywhere in the US) and office friendly - you pick the schedule Flexible time off programs - vacation, sick, paid parental leave, and paid caregiver leave 401(k) plan with match Inclusion & Belonging To build technology and products that are used and loved by people and solve real-world problems, we need to build a team with many different perspectives and experiences. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We encourage candidates from all backgrounds to apply. Applicants in need of special assistance or accommodation during the interview process or in accessing our website may contact us. 
Email talent@one.app with any questions."," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-crystal-city-at-altamira-technologies-corporation-3518406265?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=B3IEYKSJfOxfpBHVNb49cw%3D%3D&position=6&pageNum=16&trk=public_jobs_jserp-result_search-card," ONE ",https://www.linkedin.com/company/oneapp?trk=public_jobs_topcard-org-name," Florida, United States "," 2 weeks ago "," Be among the first 25 applicants "," About ONEONE is on a mission to help people live healthier financial lives. We’re doing this by creating simple solutions to help our customers save, spend, and grow their money – all in one place.The U.S. consumer today deserves better. Millions of Americans today can’t access credit, build savings or wealth, and are left to manage their financial lives through multiple disconnected apps. Almost a quarter of U.S. adults are unbanked or underbanked and roughly 80% of fintech users rely on multiple accounts to manage their finances.What makes us unique? We are backed by a preeminent fintech investor (Ribbit) and the world’s largest retailer (Walmart), maintain the speed and independence of a startup, and employ a strong (and growing) collection of world-class talent.There’s never been a better moment to build a business that helps people live healthier financial lives. Come build with us!The roleAs an Engineer, Data, your mandate is to build the data transformation pipelines & reporting infrastructure that serve our members and run our company. This role will impact ONE’s vision by writing, reviewing, and shipping code and collaborating with others across the company. You will work closely across the Engineering, Product, Design as well as Sales, Compliance and Customer Support team(s) working with stakeholders as highly technical, communicative, and emotionally intelligent partners. 
This role reports to Manager, Data Engineering.This role is responsible for:Building and maintaining ONE's data infrastructure.Developing and owning ONE's streaming data transformation pipeline.Managing and supporting reporting and analytics tools.Collaborating closely with our Data Analysts and Data Scientists.Tracking and defining metrics around performance.Additional duties as assigned by your manager.You bring Mid Career (5-10 Years)Experience in Apache Spark, Scala, Python, SQL.Experience designing and implementing low latency data pipelines using Spark structured streaming.Experience optimizing SQL queries and lake / warehouse data structures.Cost-conscious creative problem solving.Proficient in AWS cloud services and technologies.Experience building, shipping — and growing — non-trivial products & services. Passion for your craft, people, and our mission. You value quality across code, communication, and culture.A desire to keep growing your skills, and an ability to learn quickly. We hire for slope, not just y-intercept.Pay TransparencyThe estimated annual base salary for this position ranges from $175,000 to $190,000. Pay is generally based upon the level, complexity, responsibility, and job duties/requirements of the specific position. We then source candidates with the requisite skills, expertise, education, training, and experience. If you are selected for an interview, please feel welcome to speak to a Talent Partner about our compensation philosophy and other available benefits.Leveling PhilosophyIn order to thoughtfully scale the company and avoid downstream inequities, we’ve adopted a flat titling structure at ONE. Though we may occasionally post a role externally with a prefix such as “Senior” to reflect the external level of the position, we do not use prefixes in titles like that internally unless in a position which manages a team. 
Internal titles typically include your specific functional responsibility, such as engineering, product management or sales, and often include additional descriptors to ensure clarity of role and placement within our organization (e.g. “Engineer, Platform”, “Sales, Business Development” or “Manager, Talent”). Employees are paid commensurate with their experience and the internal level within ONE.What it's like working @ ONEOur teams collaborate remotely and in our work spaces in New York and Sacramento.Competitive cash Benefits effective on day one Early access to a high potential, high growth fintechGenerous stock option packages in an early-stage startupRemote friendly (anywhere in the US) and office friendly - you pick the scheduleFlexible time off programs - vacation, sick, paid parental leave, and paid caregiver leave401(k) plan with matchInclusion & BelongingTo build technology and products that are used and loved by people and solve real-world problems, we need to build a team with many different perspectives and experiences. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We encourage candidates from all backgrounds to apply. Applicants in need of special assistance or accommodation during the interview process or in accessing our website may contact us. Email talent@one.app with any questions. 
"," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-one-3499584441?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=YIITGsjbwFm0wmOx6r4fjA%3D%3D&position=7&pageNum=16&trk=public_jobs_jserp-result_search-card," ONE ",https://www.linkedin.com/company/oneapp?trk=public_jobs_topcard-org-name," Florida, United States "," 2 weeks ago "," Be among the first 25 applicants "," About ONE: ONE is on a mission to help people live healthier financial lives. We’re doing this by creating simple solutions to help our customers save, spend, and grow their money, all in one place. The U.S. consumer today deserves better. Millions of Americans can’t access credit, build savings, or build wealth, and are left to manage their financial lives through multiple disconnected apps. Almost a quarter of U.S. adults are unbanked or underbanked, and roughly 80% of fintech users rely on multiple accounts to manage their finances. What makes us unique? We are backed by a preeminent fintech investor (Ribbit) and the world’s largest retailer (Walmart), maintain the speed and independence of a startup, and employ a strong (and growing) collection of world-class talent. There’s never been a better moment to build a business that helps people live healthier financial lives. Come build with us! The role: As an Engineer, Data, your mandate is to build the data transformation pipelines and reporting infrastructure that serve our members and run our company. This role will impact ONE’s vision by writing, reviewing, and shipping code and collaborating with others across the company. You will work closely with the Engineering, Product, and Design teams, as well as Sales, Compliance, and Customer Support, working with stakeholders as a highly technical, communicative, and emotionally intelligent partner. 
This role reports to the Manager, Data Engineering. This role is responsible for: Building and maintaining ONE's data infrastructure. Developing and owning ONE's streaming data transformation pipeline. Managing and supporting reporting and analytics tools. Collaborating closely with our Data Analysts and Data Scientists. Tracking and defining metrics around performance. Additional duties as assigned by your manager. You bring: Mid Career (5-10 Years). Experience in Apache Spark, Scala, Python, SQL. Experience designing and implementing low-latency data pipelines using Spark Structured Streaming. Experience optimizing SQL queries and lake/warehouse data structures. Cost-conscious, creative problem solving. Proficiency in AWS cloud services and technologies. Experience building, shipping, and growing non-trivial products and services. Passion for your craft, people, and our mission. You value quality across code, communication, and culture. A desire to keep growing your skills, and an ability to learn quickly. We hire for slope, not just y-intercept. Pay Transparency: The estimated annual base salary for this position ranges from $175,000 to $190,000. Pay is generally based upon the level, complexity, responsibility, and job duties/requirements of the specific position. We then source candidates with the requisite skills, expertise, education, training, and experience. If you are selected for an interview, please feel welcome to speak to a Talent Partner about our compensation philosophy and other available benefits. Leveling Philosophy: In order to thoughtfully scale the company and avoid downstream inequities, we’ve adopted a flat titling structure at ONE. Though we may occasionally post a role externally with a prefix such as “Senior” to reflect the external level of the position, we do not use such prefixes in internal titles unless the position manages a team. 
Internal titles typically include your specific functional responsibility, such as engineering, product management, or sales, and often include additional descriptors to ensure clarity of role and placement within our organization (e.g., “Engineer, Platform”, “Sales, Business Development”, or “Manager, Talent”). Employees are paid commensurate with their experience and their internal level within ONE. What it’s like working @ ONE: Our teams collaborate remotely and in our workspaces in New York and Sacramento. Competitive cash. Benefits effective on day one. Early access to a high-potential, high-growth fintech. Generous stock option packages in an early-stage startup. Remote-friendly (anywhere in the US) and office-friendly - you pick the schedule. Flexible time off programs - vacation, sick, paid parental leave, and paid caregiver leave. 401(k) plan with match. Inclusion & Belonging: To build technology and products that are used and loved by people and solve real-world problems, we need to build a team with many different perspectives and experiences. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We encourage candidates from all backgrounds to apply. Applicants in need of special assistance or accommodation during the interview process or in accessing our website may contact us. Email talent@one.app with any questions. 
"," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-amtec-inc-3521469899?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=togakP45IrviKXzt1Vo%2B8g%3D%3D&position=8&pageNum=16&trk=public_jobs_jserp-result_search-card," ONE ",https://www.linkedin.com/company/oneapp?trk=public_jobs_topcard-org-name," Florida, United States "," 2 weeks ago "," Be among the first 25 applicants "," About ONE: ONE is on a mission to help people live healthier financial lives. We’re doing this by creating simple solutions to help our customers save, spend, and grow their money, all in one place. The U.S. consumer today deserves better. Millions of Americans can’t access credit, build savings, or build wealth, and are left to manage their financial lives through multiple disconnected apps. Almost a quarter of U.S. adults are unbanked or underbanked, and roughly 80% of fintech users rely on multiple accounts to manage their finances. What makes us unique? We are backed by a preeminent fintech investor (Ribbit) and the world’s largest retailer (Walmart), maintain the speed and independence of a startup, and employ a strong (and growing) collection of world-class talent. There’s never been a better moment to build a business that helps people live healthier financial lives. Come build with us! The role: As an Engineer, Data, your mandate is to build the data transformation pipelines and reporting infrastructure that serve our members and run our company. This role will impact ONE’s vision by writing, reviewing, and shipping code and collaborating with others across the company. You will work closely with the Engineering, Product, and Design teams, as well as Sales, Compliance, and Customer Support, working with stakeholders as a highly technical, communicative, and emotionally intelligent partner. 
This role reports to the Manager, Data Engineering. This role is responsible for: Building and maintaining ONE's data infrastructure. Developing and owning ONE's streaming data transformation pipeline. Managing and supporting reporting and analytics tools. Collaborating closely with our Data Analysts and Data Scientists. Tracking and defining metrics around performance. Additional duties as assigned by your manager. You bring: Mid Career (5-10 Years). Experience in Apache Spark, Scala, Python, SQL. Experience designing and implementing low-latency data pipelines using Spark Structured Streaming. Experience optimizing SQL queries and lake/warehouse data structures. Cost-conscious, creative problem solving. Proficiency in AWS cloud services and technologies. Experience building, shipping, and growing non-trivial products and services. Passion for your craft, people, and our mission. You value quality across code, communication, and culture. A desire to keep growing your skills, and an ability to learn quickly. We hire for slope, not just y-intercept. Pay Transparency: The estimated annual base salary for this position ranges from $175,000 to $190,000. Pay is generally based upon the level, complexity, responsibility, and job duties/requirements of the specific position. We then source candidates with the requisite skills, expertise, education, training, and experience. If you are selected for an interview, please feel welcome to speak to a Talent Partner about our compensation philosophy and other available benefits. Leveling Philosophy: In order to thoughtfully scale the company and avoid downstream inequities, we’ve adopted a flat titling structure at ONE. Though we may occasionally post a role externally with a prefix such as “Senior” to reflect the external level of the position, we do not use such prefixes in internal titles unless the position manages a team. 
Internal titles typically include your specific functional responsibility, such as engineering, product management, or sales, and often include additional descriptors to ensure clarity of role and placement within our organization (e.g., “Engineer, Platform”, “Sales, Business Development”, or “Manager, Talent”). Employees are paid commensurate with their experience and their internal level within ONE. What it’s like working @ ONE: Our teams collaborate remotely and in our workspaces in New York and Sacramento. Competitive cash. Benefits effective on day one. Early access to a high-potential, high-growth fintech. Generous stock option packages in an early-stage startup. Remote-friendly (anywhere in the US) and office-friendly - you pick the schedule. Flexible time off programs - vacation, sick, paid parental leave, and paid caregiver leave. 401(k) plan with match. Inclusion & Belonging: To build technology and products that are used and loved by people and solve real-world problems, we need to build a team with many different perspectives and experiences. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We encourage candidates from all backgrounds to apply. Applicants in need of special assistance or accommodation during the interview process or in accessing our website may contact us. Email talent@one.app with any questions. "," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-compunnel-inc-3522197341?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=TnTzAN3bdAtjLzDltE3prg%3D%3D&position=9&pageNum=16&trk=public_jobs_jserp-result_search-card," Compunnel Inc. 
",https://www.linkedin.com/company/compunnel-software-group?trk=public_jobs_topcard-org-name," West Lake Hills, TX "," 1 week ago "," Be among the first 25 applicants ","Description The Expertise and Skills You Bring: Bachelor’s or primary degree in a technology-related field (e.g. Engineering, Computer Science, etc.) required. Extensive experience with relational databases like Oracle or Snowflake. Experience developing data applications in the cloud (AWS, Azure, Google Cloud). Development experience using Python. Experience in data warehousing, data modeling, and creation of data marts. Experience with ETL technologies (Informatica or similar). Experience with business analytics and dashboards is a plus. Experience with DevOps, continuous integration, and continuous delivery (Maven, Jenkins, Stash, Ansible, Docker) is a plus. Experience with Agile methodologies (Kanban and Scrum) is a plus. Experience building scalable and robust ETL data flows using a range of technologies. Strong data analysis skills. Ability to deal with ambiguity and work in a fast-paced environment. Excellent communication skills, both written and verbal. Excellent collaboration skills to work with multiple teams in the organization. Education: Bachelor’s Degree"," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-ascendion-3529410074?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=fAuGxsDsQUV1Sb5yM9trwA%3D%3D&position=10&pageNum=16&trk=public_jobs_jserp-result_search-card," Compunnel Inc. ",https://www.linkedin.com/company/compunnel-software-group?trk=public_jobs_topcard-org-name," West Lake Hills, TX "," 1 week ago "," Be among the first 25 applicants "," Description: The Expertise and Skills You Bring: Bachelor’s or primary degree in a technology-related field (e.g. Engineering, Computer Science, etc.) 
required. Extensive experience with relational databases like Oracle or Snowflake. Experience developing data applications in the cloud (AWS, Azure, Google Cloud). Development experience using Python. Experience in data warehousing, data modeling, and creation of data marts. Experience with ETL technologies (Informatica or similar). Experience with business analytics and dashboards is a plus. Experience with DevOps, continuous integration, and continuous delivery (Maven, Jenkins, Stash, Ansible, Docker) is a plus. Experience with Agile methodologies (Kanban and Scrum) is a plus. Experience building scalable and robust ETL data flows using a range of technologies. Strong data analysis skills. Ability to deal with ambiguity and work in a fast-paced environment. Excellent communication skills, both written and verbal. Excellent collaboration skills to work with multiple teams in the organization. Education: Bachelor’s Degree "," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/senior-software-data-engineer-at-datatribe-3515385115?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=XiDOGDtkru9%2F5wQIkKRaog%3D%3D&position=11&pageNum=16&trk=public_jobs_jserp-result_search-card," Compunnel Inc. ",https://www.linkedin.com/company/compunnel-software-group?trk=public_jobs_topcard-org-name," West Lake Hills, TX "," 1 week ago "," Be among the first 25 applicants "," Description: The Expertise and Skills You Bring: Bachelor’s or primary degree in a technology-related field (e.g. Engineering, Computer Science, etc.) 
required. Extensive experience with relational databases like Oracle or Snowflake. Experience developing data applications in the cloud (AWS, Azure, Google Cloud). Development experience using Python. Experience in data warehousing, data modeling, and creation of data marts. Experience with ETL technologies (Informatica or similar). Experience with business analytics and dashboards is a plus. Experience with DevOps, continuous integration, and continuous delivery (Maven, Jenkins, Stash, Ansible, Docker) is a plus. Experience with Agile methodologies (Kanban and Scrum) is a plus. Experience building scalable and robust ETL data flows using a range of technologies. Strong data analysis skills. Ability to deal with ambiguity and work in a fast-paced environment. Excellent communication skills, both written and verbal. Excellent collaboration skills to work with multiple teams in the organization. Education: Bachelor’s Degree "," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Remote Data Engineer,https://www.linkedin.com/jobs/view/remote-data-engineer-at-insight-global-3507150446?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=WZWHZhZLgY%2F6vK%2FzXMW6cQ%3D%3D&position=12&pageNum=16&trk=public_jobs_jserp-result_search-card," Insight Global ",https://www.linkedin.com/company/insight-global?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," 124 applicants ","This role is only open to US Citizens and Green Card holders. This is a Direct Placement role and it cannot be worked C2C. Overview * 3+ years building data products using Python * 3+ years of experience developing Big Data and/or machine learning solutions * 3+ years of experience defining and/or designing data architectures * Experience with the MS cloud stack (Azure) or AWS * Experience with SQL, NoSQL, Big Data, and graph technologies, along with programming languages like R and Python and technologies like Kafka and Storm. 
* Background in Agile software development and Scaled Agile Frameworks * Bachelor’s Degree or equivalent Nice to Have * Experience building Snowflake data warehouses Day-to-Day * Understand business needs and develop solutions that delight consumers and customers * Understand Agile artifacts and develop applications based upon business priority. Collaborate with project partners to ensure all requirements are met. Handle relationships with end-user communities. Interact regularly with users to gather feedback, listen to their issues and concerns, and recommend solutions. * Build scalable, fault-tolerant batch and real-time data pipelines to power internal applications, operational workflows, and business intelligence platforms * Create and maintain data-driven APIs to support a wide range of integrations with NBA partners * Demonstrate your technical abilities and contribute to our overall architecture * Help implement the Enterprise Data Architecture for the NBA in multi-functional alignment with the Data teams that exist across functions like Marketing, Finance, HR, etc. * Provide insights during application design and development for highly complex or critical machine learning projects across numerous lines of business and shared technology. 
* Ensure alignment to enterprise architecture and usage of enterprise platforms when delivering projects * Continuously improve the quality of deliverables and SDLC processes"," Mid-Senior level "," Full-time "," Engineering and Information Technology "," Staffing and Recruiting " Data Engineer,United States,Data Engineer III,https://www.linkedin.com/jobs/view/data-engineer-iii-at-ssp-innovations-llc-3497882492?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=54RtfJAKl6UwXJAS4wX7Zg%3D%3D&position=13&pageNum=16&trk=public_jobs_jserp-result_search-card," SSP Innovations, LLC ",https://www.linkedin.com/company/ssp-innovations-llc?trk=public_jobs_topcard-org-name," Huntsville, AL "," 3 weeks ago "," Be among the first 25 applicants ","The purpose of the Data Engineer is to migrate and convert data from foreign data models into our proprietary 3-GIS data model. In addition to data conversion, the Data Engineer supports existing 3-GIS accounts with data changes for system corrections and upgrade requirements. Data maintenance tasks require a good working knowledge of ArcGIS tools, SQL, and ArcPy, and an ability to navigate a relational data model to update data tables and associated relationships on large spatial tables. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. Requirements: Responsible for supporting data migrations and data conversions from disparate systems into the 3-GIS data model. Oversee data mapping exercises and work closely with customers to understand their data models and to provide mapping between systems. Employ a variety of tools, including but not limited to Python scripting, database scripting languages such as SQL, ESRI applications, FME workbenches, and internally developed tools, as the assignment dictates. 
Exhibit strong communication skills to ensure customer understanding of data migrations and conversions, and work with team members such as Solutions Engineers, Data Technicians, Sales Executives, and Project Managers to ensure project success. Must be detail-oriented and maintain proper time management, as projects generally have specific timelines and milestones required for success. Provide references for users by writing and maintaining documentation. Design, develop, and test data transformation, extraction, and migration activities. Prepare technical reports by collecting, analyzing, and summarizing information. Perform tasks efficiently while validating methodology. Interact directly (face-to-face and remotely) with clients and project teams. Provide best-practice recommendations during data mapping and project exercises. Collaborate with leadership to improve customer experience. Other duties as assigned. Required Qualifications: You are someone who is motivated by helping other people solve problems. You thrive in a busy environment and are passionate about providing an outstanding experience for our customers. Bachelor’s Degree in Information Technology or related field. 5+ years of experience in data conversion, data mapping, and data analysis. 5+ years of experience using ESRI, FME, or a combination of both. Intermediate database experience, including SQL relationship statements. Intermediate programming/report analysis experience. Strong organizational skills and attention to detail. Experience with issue tracking software (e.g. 
Jira). Communication skills, both oral and written. Deadline-driven; able to work independently as well as being a team player. Preferred: 5+ years of experience with managed telecommunications databases. Proficiency in ticketing support tools like Jira. Database troubleshooting experience (including management tools such as SQL Developer, SQL Server Manager, pgAdmin). Experience with Python or similar scripting languages. Experience with large databases containing millions of rows per table (Oracle experience preferred). Flexible to work across different US time zones. Basic 3-GIS data model or application experience. Working conditions: This position can operate in a professional office environment or remotely. This role requires routine use of standard office equipment such as computers, phones, and copiers. Powered by JazzHR T59zqGYiPq"," Mid-Senior level "," Full-time "," Information Technology "," Internet Publishing " Data Engineer,United States,Data Engineer III,https://www.linkedin.com/jobs/view/healthcare-azure-data-engineer-at-omnidata-3505622525?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=tkd9rvOnOdHItCqZMBb47Q%3D%3D&position=14&pageNum=16&trk=public_jobs_jserp-result_search-card," SSP Innovations, LLC ",https://www.linkedin.com/company/ssp-innovations-llc?trk=public_jobs_topcard-org-name," Huntsville, AL "," 3 weeks ago "," Be among the first 25 applicants "," The purpose of the Data Engineer is to migrate and convert data from foreign data models into our proprietary 3-GIS data model. In addition to data conversion, the Data Engineer supports existing 3-GIS accounts with data changes for system corrections and upgrade requirements. 
Data maintenance tasks require a good working knowledge of ArcGIS tools, SQL, and ArcPy, and an ability to navigate a relational data model to update data tables and associated relationships on large spatial tables. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. Requirements: Responsible for supporting data migrations and data conversions from disparate systems into the 3-GIS data model. Oversee data mapping exercises and work closely with customers to understand their data models and to provide mapping between systems. Employ a variety of tools, including but not limited to Python scripting, database scripting languages such as SQL, ESRI applications, FME workbenches, and internally developed tools, as the assignment dictates. Exhibit strong communication skills to ensure customer understanding of data migrations and conversions, and work with team members such as Solutions Engineers, Data Technicians, Sales Executives, and Project Managers to ensure project success. Must be detail-oriented and maintain proper time management, as projects generally have specific timelines and milestones required for success. Provide references for users by writing and maintaining documentation. Design, develop, and test data transformation, extraction, and migration activities. Prepare technical reports by collecting, analyzing, and summarizing information. Perform tasks efficiently while validating methodology. Interact directly (face-to-face and remotely) with clients and project teams. Provide best-practice recommendations during data mapping and project exercises. Collaborate with leadership to improve customer experience. Other duties as assigned. Required Qualifications: You are someone who is motivated by helping other people solve problems. 
You thrive in a busy environment and are passionate about providing an outstanding experience for our customers. Bachelor’s Degree in Information Technology or related field. 5+ years of experience in data conversion, data mapping, and data analysis. 5+ years of experience using ESRI, FME, or a combination of both. Intermediate database experience, including SQL relationship statements. Intermediate programming/report analysis experience. Strong organizational skills and attention to detail. Experience with issue tracking software (e.g. Jira). Communication skills, both oral and written. Deadline-driven; able to work independently as well as being a team player. Preferred: 5+ years of experience with managed telecommunications databases. Proficiency in ticketing support tools like Jira. Database troubleshooting experience (including management tools such as SQL Developer, SQL Server Manager, pgAdmin). Experience with Python or similar scripting languages. Experience with large databases containing millions of rows per table (Oracle experience preferred). Flexible to work across different US time zones. Basic 3-GIS data model or application experience. Working conditions: This position can operate in a professional office environment or remotely. 
This role requires routine use of standard office equipment such as computers, phones, and copiers. Powered by JazzHR T59zqGYiPq "," Mid-Senior level "," Full-time "," Information Technology "," Internet Publishing " Data Engineer,United States,Data Engineer III,https://www.linkedin.com/jobs/view/data-engineer-at-one-3499581597?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=0fcd68oCMWuhCTwjZ6mm5Q%3D%3D&position=15&pageNum=16&trk=public_jobs_jserp-result_search-card," SSP Innovations, LLC ",https://www.linkedin.com/company/ssp-innovations-llc?trk=public_jobs_topcard-org-name," Huntsville, AL "," 3 weeks ago "," Be among the first 25 applicants "," The purpose of the Data Engineer is to migrate and convert data from foreign data models into our proprietary 3-GIS data model. In addition to data conversion, the Data Engineer supports existing 3-GIS accounts with data changes for system corrections and upgrade requirements. Data maintenance tasks require a good working knowledge of ArcGIS tools, SQL, and ArcPy, and an ability to navigate a relational data model to update data tables and associated relationships on large spatial tables. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. Requirements: Responsible for supporting data migrations and data conversions from disparate systems into the 3-GIS data model. Oversee data mapping exercises and work closely with customers to understand their data models and to provide mapping between systems. 
Employ a variety of tools, including but not limited to Python scripting, database scripting languages such as SQL, ESRI applications, FME workbenches, and internally developed tools, as the assignment dictates. Exhibit strong communication skills to ensure customer understanding of data migrations and conversions, and work with team members such as Solutions Engineers, Data Technicians, Sales Executives, and Project Managers to ensure project success. Must be detail-oriented and maintain proper time management, as projects generally have specific timelines and milestones required for success. Provide references for users by writing and maintaining documentation. Design, develop, and test data transformation, extraction, and migration activities. Prepare technical reports by collecting, analyzing, and summarizing information. Perform tasks efficiently while validating methodology. Interact directly (face-to-face and remotely) with clients and project teams. Provide best-practice recommendations during data mapping and project exercises. Collaborate with leadership to improve customer experience. Other duties as assigned. Required Qualifications: You are someone who is motivated by helping other people solve problems. You thrive in a busy environment and are passionate about providing an outstanding experience for our customers. Bachelor’s Degree in Information Technology or related field. 5+ years of experience in data conversion, data mapping, and data analysis. 5+ years of experience using ESRI, FME, or a combination of both. Intermediate database experience, including SQL relationship statements. Intermediate programming/report analysis experience. Strong organizational skills and attention to detail. Experience with issue tracking software (e.g. 
Jira). Communication skills, both oral and written. Deadline-driven; able to work independently as well as being a team player. Preferred: 5+ years of experience with managed telecommunications databases. Proficiency in ticketing support tools like Jira. Database troubleshooting experience (including management tools such as SQL Developer, SQL Server Manager, pgAdmin). Experience with Python or similar scripting languages. Experience with large databases containing millions of rows per table (Oracle experience preferred). Flexible to work across different US time zones. Basic 3-GIS data model or application experience. Working conditions: This position can operate in a professional office environment or remotely. This role requires routine use of standard office equipment such as computers, phones, and copiers. Powered by JazzHR T59zqGYiPq "," Mid-Senior level "," Full-time "," Information Technology "," Internet Publishing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-infogain-3500210068?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=Xq1pJvLT3WaIxErweb0hwA%3D%3D&position=16&pageNum=16&trk=public_jobs_jserp-result_search-card," Infogain ",https://www.linkedin.com/company/infogain?trk=public_jobs_topcard-org-name," United States "," 2 weeks ago "," Over 200 applicants "," Job Description: Core Skills: data engineering, data governance, data analysis, ETL, cloud computing, data security. You will be part of a team that is mostly domain-agnostic, working hands-on with data and implementing capabilities that will support a variety of use cases for the Data Analytics organization. Common tasks you might perform when working with data: Acquire datasets that align with business needs. Develop algorithms to transform data into useful, actionable information. Build, test, and maintain database pipeline architectures. Collaborate with management to understand company objectives. Create new data validation methods and data analysis tools. 
Ensure compliance with data governance and security policies. You will need to know the fundamentals of cloud computing, coding, and database design. Proficiency in coding languages is essential to this role. Be familiar with both relational and non-relational databases and how they work. Know ETL (extract, transform, and load) systems; as you design data solutions for a company, you will want to know when to use a data lake versus a data warehouse. You should be able to write scripts to automate repetitive tasks. Know basic concepts of machine learning, and understand cloud storage and cloud computing. Know data security. There may be times when an enabling member of our team joins another domain engineering team for a limited-span engagement, acting as an internal consultant to help understand the team’s needs, establish the learning environment, and upskill the team members on the data analytics platform. "," Mid-Senior level "," Contract "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-grote-industries-3520666415?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=I1RoUnxC%2Fy2ICdEYbGbLhw%3D%3D&position=17&pageNum=16&trk=public_jobs_jserp-result_search-card," Grote Industries ",https://www.linkedin.com/company/grote-industries-llc?trk=public_jobs_topcard-org-name," Madison, IN "," 1 month ago "," Be among the first 25 applicants ","Responsibility & Customer-Focused: Design, develop, optimize, and maintain data architecture and pipelines that adhere to ETL principles and business goals. Working experience with big data technologies like Azure, AWS, etc. Advanced working SQL knowledge and experience with relational databases, plus working familiarity with a variety of databases. Advanced working Python/R knowledge. Basic understanding of machine learning techniques. 
Solve complex data problems to deliver insights that help the organization’s business achieve its goals. Create data products for the analytics team. Prepare data for prescriptive and predictive modeling. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Advise, consult, mentor, and coach other data and analytic professionals on data standards and practices. Foster a culture of sharing, re-use, and design for scale, stability, and operational efficiency of data and analytical solutions. Lead the evaluation, implementation, and deployment of emerging tools and processes for analytic data engineering to improve the organization’s productivity as a team. Experience supporting and working with cross-functional teams in a dynamic environment. Partner with business analysts and solution architects to develop technical architectures for strategic enterprise projects and initiatives. Responsible for following and carrying out corporate procedures, policies, guidelines, legal requirements, etc. Knowledge in: Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases such as DB2. Advanced programming experience with Python or R Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement. Strong analytic skills related to working with unstructured datasets. Build processes supporting data transformation, data structures, metadata, dependency, and workload management. Good project management and organizational skills. 
Experience supporting and working with cross-functional teams.Skills in: Attention to detail Complex Problem solving Work as a productive member of a team Work independently with direction Multi-tasking Organization Excellent written and oral communicationDemonstrated ability to: Manage multiple assignments simultaneously and demonstrate strong organizational skills Ability to interact with all levels of employees Work with user community and other IS personnel to gather requirements. Work at a highly motivated level with a sense of urgency and dedication to accomplishing tasks Ability to type on computer keyboard for extended periods of time Adapt to frequent changes in work environment & prioritization. Work with general or minimal supervision Leverage existing and potential trainingExperience with: Big Data Tools Relational & NoSQL database Data pipeline and workflow management tools. Azure cloud services. Object oriented languages like Python or R. Conflict resolution In-depth problem identification, analysis and resolution Prioritizing and executing multiple projectsOther: Experience working with ERP systems Experience with Visual Studio"," Entry level "," Full-time "," Information Technology "," Automotive " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-intuitive-technology-group-3497808141?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=gtU8YVjTk0W1Ju1lf9DzIw%3D%3D&position=18&pageNum=16&trk=public_jobs_jserp-result_search-card," Grote Industries ",https://www.linkedin.com/company/grote-industries-llc?trk=public_jobs_topcard-org-name," Madison, IN "," 1 month ago "," Be among the first 25 applicants "," Responsibility & Customer-Focused:Design, develop, optimize, and maintain data architecture and pipelines that adhere to ETL principles and business goals.Have working experience with Big Data technologies such as Azure, AWS, etc.Advanced working SQL knowledge and experience working with relational databases, 
working familiarity with a variety of databases.Advanced working Python / R knowledge.Basic understanding of Machine Learning techniques.Solve complex data problems to deliver insights that help the organization’s business achieve its goals.Create data products for the analytics team.Prepare data for prescriptive and predictive modeling.Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.Advise, consult, mentor, and coach other data and analytic professionals on data standards and practices.Foster a culture of sharing, re-use, and design for scale, stability, and operational efficiency of data and analytical solutions.Lead the evaluation, implementation, and deployment of emerging tools and processes for analytic data engineering to improve the organization’s productivity as a team.Experience supporting and working with cross-functional teams in a dynamic environment.Partner with business analysts and solution architects to develop technical architectures for strategic enterprise projects and initiatives.Responsible for following and carrying out corporate procedures, policies, guidelines, legal requirements, etc.Knowledge in:Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases such as DB2.Advanced programming experience with Python or R.Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets.Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.Strong analytic skills related to working with unstructured datasets.Build processes supporting data transformation, data structures, metadata, dependency, and workload management.Good project management and organizational skills.Experience supporting and working with 
cross-functional teams.Skills in:Attention to detailComplex Problem solvingWork as a productive member of a teamWork independently with directionMulti-taskingOrganizationExcellent written and oral communicationDemonstrated ability to:Manage multiple assignments simultaneously and demonstrate strong organizational skillsAbility to interact with all levels of employeesWork with user community and other IS personnel to gather requirements.Work at a highly motivated level with a sense of urgency and dedication to accomplishing tasksAbility to type on computer keyboard for extended periods of timeAdapt to frequent changes in work environment & prioritization.Work with general or minimal supervisionLeverage existing and potential trainingExperience with:Big Data ToolsRelational & NoSQL databaseData pipeline and workflow management tools.Azure cloud services.Object oriented languages like Python or R.Conflict resolutionIn-depth problem identification, analysis and resolutionPrioritizing and executing multiple projectsOther:Experience working with ERP systemsExperience with Visual Studio "," Entry level "," Full-time "," Information Technology "," Automotive " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-mdi-health-3516966792?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=SCM1BaxWhQV%2FdiLiLmm1ew%3D%3D&position=19&pageNum=16&trk=public_jobs_jserp-result_search-card," MDI Health ",https://www.linkedin.com/company/mdi-health?trk=public_jobs_topcard-org-name," Chicago, IL "," 1 week ago "," Be among the first 25 applicants ","MDI Health is on a mission to prevent medication-related problems, using complex data-driven algorithms, to create immense clinical, financial, and social impact in healthcare. We develop life-saving technology and data-driven solutions that enable healthcare organizations to improve patient health outcomes, reduce hospitalizations, lower healthcare costs, and manage medication-related risks. 
We are looking for a superstar Data Integration Engineer to join us in achieving our mission to save lives. If you care deeply about impacting the lives of others, enjoy working in a high-speed environment, and want to be part of the future of healthcare, then join MDI! What you’ll do: Communicate our data requirements to the customers Guide our customers during the data acquisition, quality assurance, and integration Implement and maintain MDI’s ETL processing service using Node.js Work closely with the team of engineers to capture and integrate data into our cloud-based data warehouse What we’re looking for: 3+ years of experience as a Data Engineer working with healthcare data Strong familiarity with the US healthcare system and pharma industries BSc. in computer science or equivalent 2+ years of experience in Node.js/Python/Java/C# Excellent SQL skills (for example stored procedures, inner and outer joins and database indexing and query optimization) Experience working with large datasets Understanding of data schemas, data marts, and/or enterprise data warehouse design 2+ years of experience managing relational databases (MSSQL, Oracle, MySQL, etc.) 
Experience in ETL development (design and implementation) Self-driven, highly motivated, and independent worker Team player with excellent communication skills Fluent in speaking English as well as reading and writing Advantages: Experience with AWS cloud services (EC2 / Fargate / S3 / RDS / Route53 / Cloudfront) Experience with Node.js Experience with business intelligence platforms such as PowerBI, Tableau "," Not Applicable "," Full-time "," Information Technology "," Health, Wellness & Fitness " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-rei-systems-3493273439?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=aEGFBSB%2BHVb5V1x1g%2FAgiw%3D%3D&position=20&pageNum=16&trk=public_jobs_jserp-result_search-card," REI Systems ",https://www.linkedin.com/company/rei-systems?trk=public_jobs_topcard-org-name," Sterling, VA "," 3 weeks ago "," Be among the first 25 applicants ","REI Systems provides reliable, effective, and innovative technology solutions that advance federal, state, local, and nonprofit missions. Our technologists and consultants are passionate about solving complex challenges that impact millions of lives. We take a Mindful Modernization approach in delivering our application modernization, grants management systems, government data analytics, and advisory services. Mindful Modernization is the REI Way of delivering mission impact by aligning our government customers’ strategic objectives to measurable outcomes through people, processes, and technology. Learn more at REIsystems.com.  Employees voted REI Systems a Washington Post Top Workplace in 2015, 2016, 2018, 2020, 2021 and 2022! As a senior data engineer, you will/may Monitor and troubleshoot operational or data issues in the data pipelines. Develop code-based automated data pipelines able to process millions of data points. Improve database and data warehouse performance by tuning inefficient queries. 
Work collaboratively with Business Analysts, Data Scientists, and other internal partners to identify opportunities/problems. Provide assistance to the team with troubleshooting, researching the root cause, and thoroughly resolving defects in the event of a problem. Required Qualifications Expertise in Python. Experience in Data Pipeline development and Data Cleansing. Can articulate the basic differences between datatypes (e.g. JSON/NoSQL, relational). Understand the basics of designing and implementing a data schema (e.g., normalization, relational model vs dimensional model). 5 yr. experience with data mining and data transformation. 5 yr. experience with database and/or data warehouse. 5 yr. experience with SQL. 5 yr. experience with Python, Spark (PySpark), Databricks, AWS, Azure Preferred Qualifications Experience building code-based data pipelines in production able to process big datasets. Knowledge of writing and optimizing SQL queries with large-scale, complex datasets. Industry certifications including Databricks and AWS Experience with Spark MLlib and applying existing machine learning algorithms against data lakehouses to drive insight and predictive capabilities Experience with data mining and data transformation. Experience with database and/or data warehouse Experience building data pipelines or automated ETL processes. Experience with Tableau Education Bachelor’s degree in computer science, data analytics, business intelligence, economics, statistics, or mathematics Clearance US Citizen able to obtain Public Trust Certification(s) AWS & Oracle certification is preferred. Location/Remote Hybrid- Sterling, VA - Washington, DC Covid Policy Disclosure Should the essential functions of this position require that the employee performing this role work on-site at REI’s Sterling location the following requirements will apply: the individual holding this position must be fully vaccinated, as defined in CDC guidance, as a condition of continued employment. 
REI will consider requests to be excused from this policy whenever necessary to comply with legal requirements and will consider any requests for reasonable accommodations due to a disability, religion, or other exemptions on an individual basis in accordance with applicable legal requirements. Employees and applicants requesting accommodations should request the accommodation in writing and should explain in detail the reasons why they are seeking an accommodation. REI will request additional information or documentation it deems necessary to inform its decision on an employee’s or applicant’s accommodation request. REI Systems is an Equal Opportunity Employer (Minority/Female/Disability/Vet)"," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-analytics-intern-at-great-lakes-cheese-3522104106?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=dTNapqQzRyIsZNPRVST4zQ%3D%3D&position=21&pageNum=16&trk=public_jobs_jserp-result_search-card," REI Systems ",https://www.linkedin.com/company/rei-systems?trk=public_jobs_topcard-org-name," Sterling, VA "," 3 weeks ago "," Be among the first 25 applicants "," REI Systems provides reliable, effective, and innovative technology solutions that advance federal, state, local, and nonprofit missions. Our technologists and consultants are passionate about solving complex challenges that impact millions of lives. We take a Mindful Modernization approach in delivering our application modernization, grants management systems, government data analytics, and advisory services. Mindful Modernization is the REI Way of delivering mission impact by aligning our government customers’ strategic objectives to measurable outcomes through people, processes, and technology.Learn more at REIsystems.com. 
Employees voted REI Systems a Washington Post Top Workplace in 2015, 2016, 2018, 2020, 2021 and 2022!As a senior data engineer, you will/mayMonitor and troubleshoot operational or data issues in the data pipelines.Develop code-based automated data pipelines able to process millions of data points.Improve database and data warehouse performance by tuning inefficient queries.Work collaboratively with Business Analysts, Data Scientists, and other internal partners to identify opportunities/problems.Provide assistance to the team with troubleshooting, researching the root cause, and thoroughly resolving defects in the event of a problem.Required QualificationsExpertise in Python.Experience in Data Pipeline development and Data Cleansing.Can articulate the basic differences between datatypes (e.g. JSON/NoSQL, relational).Understand the basics of designing and implementing a data schema (e.g., normalization, relational model vs dimensional model).5 yr. experience with data mining and data transformation.5 yr. experience with database and/or data warehouse5 yr. experience with SQL.5 yr. 
experience with Python, Spark (PySpark), Databricks, AWS, AzurePreferred QualificationsExperience building code-based data pipelines in production able to process big datasets.Knowledge of writing and optimizing SQL queries with large-scale, complex datasets.Industry certifications including Databricks and AWSExperience with Spark MLlib and applying existing machine learning algorithms against data lakehouses to drive insight and predictive capabilitiesExperience with data mining and data transformation.Experience with database and/or data warehouseExperience building data pipelines or automated ETL processes.Experience with TableauEducation Bachelor’s degree in computer science, data analytics, business intelligence, economics, statistics, or mathematicsClearance US Citizen able to obtain Public Trust Certification(s) AWS & Oracle certification is preferred.Location/Remote Hybrid- Sterling, VA - Washington, DCCovid Policy Disclosure Should the essential functions of this position require that the employee performing this role work on-site at REI’s Sterling location the following requirements will apply: the individual holding this position must be fully vaccinated, as defined in CDC guidance, as a condition of continued employment. REI will consider requests to be excused from this policy whenever necessary to comply with legal requirements and will consider any requests for reasonable accommodations due to a disability, religion, or other exemptions on an individual basis in accordance with applicable legal requirements. Employees and applicants requesting accommodations should request the accommodation in writing and should explain in detail the reasons why they are seeking an accommodation. 
REI will request additional information or documentation it deems necessary to inform its decision on an employee’s or applicant’s accommodation request.REI Systems is an Equal Opportunity Employer (Minority/Female/Disability/Vet) "," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-the-intersect-group-3500040107?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=8GYKXo8JGyGVK0tC9C3Eiw%3D%3D&position=22&pageNum=16&trk=public_jobs_jserp-result_search-card," REI Systems ",https://www.linkedin.com/company/rei-systems?trk=public_jobs_topcard-org-name," Sterling, VA "," 3 weeks ago "," Be among the first 25 applicants "," REI Systems provides reliable, effective, and innovative technology solutions that advance federal, state, local, and nonprofit missions. Our technologists and consultants are passionate about solving complex challenges that impact millions of lives. We take a Mindful Modernization approach in delivering our application modernization, grants management systems, government data analytics, and advisory services. Mindful Modernization is the REI Way of delivering mission impact by aligning our government customers’ strategic objectives to measurable outcomes through people, processes, and technology.Learn more at REIsystems.com. 
Employees voted REI Systems a Washington Post Top Workplace in 2015, 2016, 2018, 2020, 2021 and 2022!As a senior data engineer, you will/mayMonitor and troubleshoot operational or data issues in the data pipelines.Develop code-based automated data pipelines able to process millions of data points.Improve database and data warehouse performance by tuning inefficient queries.Work collaboratively with Business Analysts, Data Scientists, and other internal partners to identify opportunities/problems.Provide assistance to the team with troubleshooting, researching the root cause, and thoroughly resolving defects in the event of a problem.Required QualificationsExpertise in Python.Experience in Data Pipeline development and Data Cleansing.Can articulate the basic differences between datatypes (e.g. JSON/NoSQL, relational).Understand the basics of designing and implementing a data schema (e.g., normalization, relational model vs dimensional model).5 yr. experience with data mining and data transformation.5 yr. experience with database and/or data warehouse5 yr. experience with SQL.5 yr. 
experience with Python, Spark (PySpark), Databricks, AWS, AzurePreferred QualificationsExperience building code-based data pipelines in production able to process big datasets.Knowledge of writing and optimizing SQL queries with large-scale, complex datasets.Industry certifications including Databricks and AWSExperience with Spark MLlib and applying existing machine learning algorithms against data lakehouses to drive insight and predictive capabilitiesExperience with data mining and data transformation.Experience with database and/or data warehouseExperience building data pipelines or automated ETL processes.Experience with TableauEducation Bachelor’s degree in computer science, data analytics, business intelligence, economics, statistics, or mathematicsClearance US Citizen able to obtain Public Trust Certification(s) AWS & Oracle certification is preferred.Location/Remote Hybrid- Sterling, VA - Washington, DCCovid Policy Disclosure Should the essential functions of this position require that the employee performing this role work on-site at REI’s Sterling location the following requirements will apply: the individual holding this position must be fully vaccinated, as defined in CDC guidance, as a condition of continued employment. REI will consider requests to be excused from this policy whenever necessary to comply with legal requirements and will consider any requests for reasonable accommodations due to a disability, religion, or other exemptions on an individual basis in accordance with applicable legal requirements. Employees and applicants requesting accommodations should request the accommodation in writing and should explain in detail the reasons why they are seeking an accommodation. 
REI will request additional information or documentation it deems necessary to inform its decision on an employee’s or applicant’s accommodation request.REI Systems is an Equal Opportunity Employer (Minority/Female/Disability/Vet) "," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/sr-data-engineer-peacock-at-peacock-3529811357?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=YzCiNeMQvWe8snJ6GQ%2F5AA%3D%3D&position=23&pageNum=16&trk=public_jobs_jserp-result_search-card," REI Systems ",https://www.linkedin.com/company/rei-systems?trk=public_jobs_topcard-org-name," Sterling, VA "," 3 weeks ago "," Be among the first 25 applicants "," REI Systems provides reliable, effective, and innovative technology solutions that advance federal, state, local, and nonprofit missions. Our technologists and consultants are passionate about solving complex challenges that impact millions of lives. We take a Mindful Modernization approach in delivering our application modernization, grants management systems, government data analytics, and advisory services. Mindful Modernization is the REI Way of delivering mission impact by aligning our government customers’ strategic objectives to measurable outcomes through people, processes, and technology.Learn more at REIsystems.com. 
Employees voted REI Systems a Washington Post Top Workplace in 2015, 2016, 2018, 2020, 2021 and 2022!As a senior data engineer, you will/mayMonitor and troubleshoot operational or data issues in the data pipelines.Develop code-based automated data pipelines able to process millions of data points.Improve database and data warehouse performance by tuning inefficient queries.Work collaboratively with Business Analysts, Data Scientists, and other internal partners to identify opportunities/problems.Provide assistance to the team with troubleshooting, researching the root cause, and thoroughly resolving defects in the event of a problem.Required QualificationsExpertise in Python.Experience in Data Pipeline development and Data Cleansing.Can articulate the basic differences between datatypes (e.g. JSON/NoSQL, relational).Understand the basics of designing and implementing a data schema (e.g., normalization, relational model vs dimensional model).5 yr. experience with data mining and data transformation.5 yr. experience with database and/or data warehouse5 yr. experience with SQL.5 yr. 
experience with Python, Spark (PySpark), Databricks, AWS, AzurePreferred QualificationsExperience building code-based data pipelines in production able to process big datasets.Knowledge of writing and optimizing SQL queries with large-scale, complex datasets.Industry certifications including Databricks and AWSExperience with Spark MLlib and applying existing machine learning algorithms against data lakehouses to drive insight and predictive capabilitiesExperience with data mining and data transformation.Experience with database and/or data warehouseExperience building data pipelines or automated ETL processes.Experience with TableauEducation Bachelor’s degree in computer science, data analytics, business intelligence, economics, statistics, or mathematicsClearance US Citizen able to obtain Public Trust Certification(s) AWS & Oracle certification is preferred.Location/Remote Hybrid- Sterling, VA - Washington, DCCovid Policy Disclosure Should the essential functions of this position require that the employee performing this role work on-site at REI’s Sterling location the following requirements will apply: the individual holding this position must be fully vaccinated, as defined in CDC guidance, as a condition of continued employment. REI will consider requests to be excused from this policy whenever necessary to comply with legal requirements and will consider any requests for reasonable accommodations due to a disability, religion, or other exemptions on an individual basis in accordance with applicable legal requirements. Employees and applicants requesting accommodations should request the accommodation in writing and should explain in detail the reasons why they are seeking an accommodation. 
REI will request additional information or documentation it deems necessary to inform its decision on an employee’s or applicant’s accommodation request.REI Systems is an Equal Opportunity Employer (Minority/Female/Disability/Vet) "," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer ,https://www.linkedin.com/jobs/view/data-engineer-at-insight-global-3504231510?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=LZcRVb%2BlQoIQo8dqyjuuPw%3D%3D&position=24&pageNum=16&trk=public_jobs_jserp-result_search-card," Insight Global ",https://www.linkedin.com/company/insight-global?trk=public_jobs_topcard-org-name," United States "," 2 weeks ago "," Over 200 applicants ","Must-Haves 3+ years of data engineering or data warehouse modeling 3+ years of very strong SQL experience 3+ years of dimensional data modeling utilizing Tableau, PowerBI, or other similar tools Hands-on usage of Snowflake or Redshift Experience utilizing Python in a data environment Experience in an AWS environment Understanding of how data warehouses are built Plusses SQL Server background Any database languages Business Intelligence experience Day-to-Day Insight Global is looking for a Data Engineer to join one of our largest analytics clients in their Insurance Claims Solutions division. This role is a mid-to-senior level dimensional modeling position that will provide data pipeline expertise and business intelligence leadership to guide the development of data products and reporting solutions. The majority of this engineer’s day will be focused on SQL and reading and writing code, while the other portion will require dimensional modeling and an understanding of how data warehouses are built. The successful candidate has experience developing data pipelines using cloud services and designing data structures that support data products and reporting systems. 
In this hands-on role, he/she will also be responsible for collaborating with product owners, tackling technical problems, helping define best practices, and contributing to design/code reviews. In addition, the candidate will help create innovative solutions to satisfy the growing needs of internal and external customers."," Mid-Senior level "," Full-time "," Engineering "," Insurance " Data Engineer,United States,Data Engineer ,https://www.linkedin.com/jobs/view/data-engineer-bi-at-arvest-bank-3525978518?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=LmTnKpQHJJKruBM7jhoB%2Bw%3D%3D&position=25&pageNum=16&trk=public_jobs_jserp-result_search-card," Insight Global ",https://www.linkedin.com/company/insight-global?trk=public_jobs_topcard-org-name," United States "," 2 weeks ago "," Over 200 applicants "," Must-Haves 3+ years of data engineering or data warehouse modeling3+ years of very strong SQL experience3+ years of dimensional data modeling utilizing Tableau, PowerBI, or other similar toolsHands-on usage of Snowflake or RedshiftExperience utilizing Python in a data environmentExperience in an AWS environmentUnderstanding of how data warehouses are built PlussesSQL Server background Any database languagesBusiness Intelligence experience Day-to-DayInsight Global is looking for a Data Engineer to join one of our largest analytics clients in their Insurance Claims Solutions division. This role is a mid-to-senior level dimensional modeling position that will provide data pipeline expertise and business intelligence leadership to guide the development of data products and reporting solutions. The majority of this engineer’s day will be focused on SQL and reading and writing code, while the other portion will require dimensional modeling and an understanding of how data warehouses are built. The successful candidate has experience developing data pipelines using cloud services and designing data structures that support data products and reporting systems. 
In this hands-on role, he/she will also be responsible for collaborating with product owners, tackling technical problems, helping define best practices, and contributing to design/code reviews. In addition, the candidate will help create innovative solutions to satisfy the growing needs of internal and external customers. "," Mid-Senior level "," Full-time "," Engineering "," Insurance " Data Engineer,United States,Data Engineer - Customer Success,https://www.linkedin.com/jobs/view/data-engineer-customer-success-at-square-3507264467?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=t6KnyxggOk23Cy9Vq2VWTA%3D%3D&position=4&pageNum=16&trk=public_jobs_jserp-result_search-card," Square ",https://www.linkedin.com/company/joinsquare?trk=public_jobs_topcard-org-name," California, United States "," 2 weeks ago "," Over 200 applicants ","Company Description Since we opened our doors in 2009, the world of commerce has evolved immensely, and so has Square. After enabling anyone to take payments and never miss a sale, we saw sellers stymied by disparate, outmoded products and tools that wouldn’t work together. To solve this problem, we expanded into software and built integrated solutions to help sellers sell online, manage inventory, book appointments, engage loyal buyers, and hire and pay staff. Across it all, we’ve embedded financial services tools at the point of sale, so merchants can access a business loan and manage their cash flow in one place. Afterpay furthers our goal to provide omnichannel tools that unlock meaningful value and growth, enabling sellers to capture the next generation shopper, increase order sizes, and compete at a larger scale. Today, we are a partner to sellers of all sizes – large, enterprise-scale businesses with complex operations, sellers just starting, as well as merchants who began selling with Square and have grown larger over time. As our sellers grow, so do our solutions. There is a massive opportunity in front of us. 
We’re building a significant, meaningful, and lasting business, and we are helping sellers worldwide do the same. Job Description Square is looking for a Data Engineer to join the Seller Customer Success Operations team to help define, develop and manage curated datasets, key business metrics and reporting. You will architect, implement and manage Data Models, pipelines and ETLs that will enable various teams to access consistent metrics across the Customer Success ecosystem. You will: Partner with functional leads to understand their data and reporting requirements, and translate them into definitions and technical specifications (PRD) Be responsible for defining, developing and optimizing curated datasets and schemas with standardized metrics and definitions across the organization Be responsible for the data migration to new platforms and tools Develop, deploy, maintain, and optimize data models, pipelines, ETL jobs and visualizations Provide comprehensive day-to-day analytics support to partner teams, develop tools and resources to empower data access and self-service so your expertise can be leveraged Model data in Looker or similar visualization tools, to empower data access and self-service resources so your expertise can be leveraged where it is most impactful Work closely with technical partners in the data platform engineering team on designing and developing robust data structures and highly reliable data pipelines Troubleshoot technical issues with platforms, performance, data discrepancies, alerts, etc. Perform ad hoc analysis, insight requests, and data extractions to resolve critical business and infrastructure issues Qualifications You have: 2+ years of analytical experience in data engineering, data science, or product / BI analytics Bachelor's degree required, with major in analytical or technical field strongly preferred Strong technical intuition and ability to understand complex business systems Knowledge in data modeling concepts and 
implementation Strong technical accomplishments in SQL, ETLs and data analysis skills MySQL, Snowflake, Redshift, or similar data handling experience Hands-on experience in processing extremely large data sets Experience with Linux/OSX command line, version control software (git), and general software development Strong experience in visualization technologies including Looker, Tableau, or others Familiarity with scripting/programming for data mining and modeling is a plus Work experience with Python and Databricks is a plus Additional Information Block takes a market-based approach to pay, and pay may vary depending on your location. U.S. locations are categorized into one of four zones based on a cost of labor index for that geographic area. The successful candidate’s starting pay will be determined based on job-related skills, experience, qualifications, work location, and market conditions. These ranges may be modified in the future. Zone A: USD $125,600 - USD $153,600 Zone B: USD $119,300 - USD $145,900 Zone C: USD $113,000 - USD $138,200 Zone D: USD $106,800 - USD $130,600 To find a location’s zone designation, please refer to this resource. If a location of interest is not listed, please speak with a recruiter for additional information. Benefits include the following: Healthcare coverage Retirement Plans including company match Employee Stock Purchase Program Wellness programs, including access to mental health, 1:1 financial planners, and a monthly wellness allowance Paid parental and caregiving leave Paid time off Learning and Development resources Paid Life insurance, AD&D, and disability benefits Perks such as WFH reimbursements and free access to caregiving, legal, and discounted resources This role is also eligible to participate in Block's equity plan subject to the terms of the applicable plans and policies, and may be eligible for a sign-on bonus. 
Sales roles may be eligible to participate in a commission plan subject to the terms of the applicable plans and policies. Pay and benefits are subject to change at any time, consistent with the terms of any applicable compensation or benefit plans. We’re working to build a more inclusive economy where our customers have equal access to opportunity, and we strive to live by these same values in building our workplace. Block is a proud equal opportunity employer. We work hard to evaluate all employees and job applicants consistently, without regard to race, color, religion, gender, national origin, age, disability, veteran status, pregnancy, gender expression or identity, sexual orientation, citizenship, or any other legally protected class. We believe in being fair, and are committed to an inclusive interview experience, including providing reasonable accommodations to disabled applicants throughout the recruitment process. We encourage applicants to share any needed accommodations with their recruiter, who will treat these requests as confidentially as possible. Want to learn more about what we’re doing to build a workplace that is fair and square? Check out our I+D page. Additionally, we consider qualified applicants with criminal histories for employment on our team, assessing candidates in a manner consistent with the requirements of the San Francisco Fair Chance Ordinance. Block, Inc. (NYSE: SQ) is a global technology company with a focus on financial services. Made up of Square, Cash App, Spiral, TIDAL, and TBD, we build tools to help more people access the economy. Square helps sellers run and grow their businesses with its integrated ecosystem of commerce solutions, business software, and banking services. With Cash App, anyone can easily send, spend, or invest their money in stocks or Bitcoin. Spiral (formerly Square Crypto) builds and funds free, open-source Bitcoin projects. 
Artists use TIDAL to help them succeed as entrepreneurs and connect more deeply with fans. TBD is building an open developer platform to make it easier to access Bitcoin and other blockchain technologies without having to go through an institution."," Associate "," Full-time "," Information Technology "," Software Development " Data Engineer,United States,Data Engineer_Crystal City,https://www.linkedin.com/jobs/view/data-engineer-crystal-city-at-altamira-technologies-corporation-3518406265?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=B3IEYKSJfOxfpBHVNb49cw%3D%3D&position=6&pageNum=16&trk=public_jobs_jserp-result_search-card," Altamira Technologies Corporation ",https://www.linkedin.com/company/altamira-corporation?trk=public_jobs_topcard-org-name," Crystal City, TX "," 1 week ago "," Be among the first 25 applicants ","Team Altamira is seeking data engineers and developers who know how hardware and software components can be leveraged collaboratively and integrated with cloud-based resources to modernize DoD data architecture. We need candidates who are not afraid to take on demanding tasks and stretch their knowledge to learn new technologies and methods, and recommend innovative solutions for DoD data challenges. Location: Ft Bragg NC preferred, but Crystal City VA is possible for the right candidate Job Requirements: This is a classified work environment, a current Top Secret clearance with SCI eligibility is required, no exceptions. We need engineers and developers able to create ETL pipelines, perform ETL on data and load/manage data in both structured and unstructured states utilizing multiple types of databases, ETL tools/services, and development resources. Scripting/Coding and the ability to select the proper language to meet the requirement (i.e. know when and how to use Bash vs Ruby/Perl/Python vs Groovy or Java). Familiarity with data governance best practices as well as team management and collaboration tools. 
Proficiency in concepts and implementation of data entity relationship and data coherence. Bachelor’s degree or higher in a related field (5+ years of DoD specific experience can be substituted for a degree). An ideal candidate will have the above expertise and proficiency in some or all of the following tools as they are applied within the DoD Intelligence Community. Collaboration and Team Management Atlassian Suite (JIRA & Confluence) GitLab Data ETL & Movement Apache Services/Utilities (Especially NiFi, but also ZooKeeper, Kafka, Tika, Airflow) AWS ETL Services/Utilities (RDS, CLI, Auto Scaling, Lambda, Diode, S3, CloudWatch) Elastic Stack (Especially Kibana) Databases (Postgres, SQL Server, Hadoop (Spark, HDFS, Hive/Impala)) Object Storage/Database (Tesseract/Redis, MinIO) App Containerization (Docker, Kubernetes, Helm/YAML) Configuration Management and Orchestration (Salt Automation) Web Server/Web Apps (Nginx & Java) Multiple Engine Anti-Virus Scanners Data Governance Immuta KeyCloak for Identity and Access Management Dev tools Javascript, NPM (node package manager - for building UI), HTML, CSS, Vue for web-based UI Java 11 for services/processes/backend that use pub-sub (via AWS SQS and Redis) Python for developer-support scripts Gradle REST APIs with Swagger Entity Relationship & Data Coherence Spark/Hadoop Elastic Stack (Elasticsearch, Logstash, Kibana, Filebeats) WaremanPro AWS EFS, AWS API Apache Kafka/Nifi for Data Drops Java/Python for fragmentation and ER runs Reactjs & Nodejs for Entity Report Altamira is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, or protected veteran status. We focus on recruiting talented, self-motivated employees that find a way to get things done. Join our team of experts as we engineer national security! 
Powered by JazzHR 4yGbH7rlb9"," Mid-Senior level "," Full-time "," Information Technology "," Internet Publishing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-amtec-inc-3521469899?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=togakP45IrviKXzt1Vo%2B8g%3D%3D&position=8&pageNum=16&trk=public_jobs_jserp-result_search-card," Amtec Inc. ",https://www.linkedin.com/company/amtec-inc-?trk=public_jobs_topcard-org-name," San Francisco, CA "," 1 day ago "," 65 applicants ","Job Title: Data Engineer Location: San Francisco, CA (Hybrid) Duration: 12-month Contract (Initial) Job Description The Software Engineer will act as a major technical contributor for the Call Center modernization effort in the client's Consumer Deposits team. The call center modernization will be a large-scale project including homegrown development and integration with other tools such as Salesforce. The Software Engineer will specialize in large scale application development efforts and working with cross-functional teams to deliver critical technical initiatives at the Bank. What You'll Do As a Data Engineer Work with the Business and IT team to understand business problems, and to design, implement, and deliver an appropriate solution using Agile methodology across the larger program. Develop code and test artifacts that reuse subroutines or objects, are well structured, backed by automated tests, include sufficient comments, and are easy to maintain. Work independently to implement solutions on multiple platforms (DEV, QA, UAT, PROD). Provide technical direction, leadership, and reviews to other engineers working on the same project. Implement and debug subsystems/microservices and components. Participate in integrated test sessions of components and subsystems on test and production servers. Follow an automate-first/automate-everything philosophy. 
Determine and communicate the implications of system-level decisions on subsystems and components. Help determine how best to mitigate or take advantage of these implications. Perform tasks efficiently and work together with team to ensure project success. Support management of the team's technical infrastructure (e.g., repository, build system, testing system) under guidance from the systems engineer or another project leader. Hands on in multiple programming paradigms, not limited to Object Oriented. You Could Be a Great Fit If You Have 5+ years IT-Software/ Software products. Bachelor's in science – Computer Science or equivalent. Experience with following Data Engineering languages and technologies - AWS Glue, Python, PySpark, Spark, Git, JenkinsCI, aquasec, vericode, etc. SQL Server, ORACLE, PostgreSQL, Stored Procedure. AWS Technologies - S3, AWS Glue, RDS, lambda, cloud watch, etc. DW concepts CI/CD pipeline development experience Kafka and Spark Streaming is nice to have."," Entry level "," Contract "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Senior Software / Data Engineer,https://www.linkedin.com/jobs/view/senior-software-data-engineer-at-datatribe-3515385115?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=XiDOGDtkru9%2F5wQIkKRaog%3D%3D&position=11&pageNum=16&trk=public_jobs_jserp-result_search-card," DataTribe ",https://www.linkedin.com/company/datatribe-?trk=public_jobs_topcard-org-name," Columbia, MD "," 1 week ago "," 63 applicants ","Do you want to help build the next generation network security planning solution? Company Overview: Sixmap is working on leading edge network intrusion detection technology that enables enterprises and network operators to gain insights into their complete network attack surface and identify network vulnerabilities at unheard of speed and comprehensiveness. 
Sixmap’s platform can complete IPv4 scans with deep and configurable service interrogation that is orders of magnitude faster than anything currently available. The team is building the world’s first platform to perform comprehensive IPv6 scans, previously thought to be impossible. Position Summary: We are looking for a data-oriented senior software engineer to help build the core network mapping and interrogation engine. Candidates should have deep hands-on experience working on data pipelines, ETL, data analysis processes, and database technologies in addition to a solid understanding of TCP/IP networking. The ideal candidate should be a well-rounded developer but be particularly strong in backend business-logic-oriented software development. Come join us if you are ready to change the world of network security while having some fun along the way. Position Requirements: To be considered for this position, you must: Be a development athlete with at least 5 years’ experience and a passion for understanding users’ needs and system requirements and turning them into working software Have a BS degree or higher in computer science, electrical/computer engineering, or related technical field Be fully fluent in Python and common data analysis Python libraries, C++, SQL, Airflow or other ETL / data pipeline tools, and preferably be a polyglot comfortable in many additional programming languages. Be an expert in using relational databases and NoSQL data stores - PostgreSQL experience is a must. Be experienced with Linux environments. Have experience working on container-based cloud infrastructure frameworks such as Docker or Kubernetes within common cloud service providers such as AWS, GCP, or Azure. 
Be experienced using Agile methodologies, operating cloud dev-ops, and coordinating with product development teams Have the ability to thrive when presented a complex challenge in a fast-paced, performance-oriented culture with intelligent people Have exceptional level of integrity, raw intelligence, creativity, energy and passion Operate efficiently with individual responsibility in a highly collaborative environment Powered by JazzHR 4lnILZIcTW"," Mid-Senior level "," Full-time "," Information Technology "," Internet Publishing " Data Engineer,United States,Healthcare Azure Data Engineer,https://www.linkedin.com/jobs/view/healthcare-azure-data-engineer-at-omnidata-3505622525?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=tkd9rvOnOdHItCqZMBb47Q%3D%3D&position=14&pageNum=16&trk=public_jobs_jserp-result_search-card," OmniData ",https://www.linkedin.com/company/omnidatainsights?trk=public_jobs_topcard-org-name," United States "," 2 weeks ago "," 65 applicants ","What We Are Looking For A passionate, hungry, and motivated individual that is eager for a chance to join a young startup, experiencing rapid growth. At OmniData, we are searching for a remote Senior Azure Data Engineer that has hands on production experience in developing PySpark solutions in Synapse Analytics on data warehousing and analytics projects on the Healthcare Solutions Team. We need a team player who has an exceptional reputation for mentoring less experienced teammates. We seek someone with a strong technical aptitude with expertise in translating complex technical information that appropriately meets the needs of the client while skillfully deploying the best strategies for client analytics goals. In return, we offer deep mentorship, a great work/life balance, and the opportunity to be part of creating a consulting firm that makes a difference for our clients! What You Will Do You will work on various Big Data, Data Warehouse and Analytics projects for our world class customers. 
In addressing complex healthcare client needs, you will be integrated into appropriately sized and skilled teams. This will give you the opportunity to analyze requirements, develop data and analytical solutions, and execute as part of the project team, all while working with the latest tools, such as Azure Synapse Analytics and related Microsoft technologies. Your Duties And Responsibilities Contribute collaboratively to team meetings using your experience base to further the cause of innovating for OmniData clients. Instill confidence in the client as well as your teammates. Work independently toward client success, at the same time knowing your own limitations and when to call on others for help. Requirements What you must have to be considered 5+ years of experience in Analytics and Data Warehousing on the Microsoft platform 1-2 years advanced experience in PySpark solutions in Synapse Analytics Experience working with the Microsoft Azure stack (e.g. Synapse, Databricks, DataFactory etc.) What would be nice for you to have Healthcare related Data Engineering Experience Experience with Python Experience gathering requirements and working within various project delivery methodologies Experienced working as a customer facing consultant Exposure to DAX Strong communication skills tying together technologies and architectures to business results Some travel may be required (up to 20%) Post COVID-19 Benefits Benefits and Perks Health Coverage: 100% Employee Coverage (Up to a 1500 PPO Plan), 60% Coverage for dependents Dental/Vision Life Insurance, Aflac, Short/Long term disability, HSA etc. 9 Company holidays (Ability to use them as ‘floating’ Holidays) Paid time off (PTO) 15 days Flexible Sick Time Maternity/Paternity Leave (Birth Parent: 3 months, Non-birth parent: 1 month) 401k - up to a 5% match. Eligibility to contribute the month after start date. Vests as soon as you are eligible for contribution. 
Opportunities for career growth and advancement, as well as helping to shape a young consulting firm Ability to learn from highly skilled consultants with years of industry experience Exposure to the latest and greatest data warehousing, analytics, and cloud technologies Flexible schedules in a hybrid work environment Basic Life Insurance (50k) LT and ST Disability (paid by employee) Flexible Spending Account (FSA) and Healthcare Savings Account (HSA) Employee Assistance Program (EAP): Mental health and substance abuse conditions are serious and sometimes require 24/7 access to resources. Ethics and Compliance Tool: We provide employees a safe space to speak through AllVoices Employee Engagement & Cultural initiatives: Health & Wellness, Pulse Surveys, and Kolbe Instinctive Strengths Commuter Benefits WeWork office provision OmniData Values We build partnerships that last. We are ambitious and set aggressive goals. We embody professional humility. We are prepared. Visit www.omnidata.com/AboutUs to learn more. About OmniData OmniData is a Portland, Oregon based Data and Analytics focused consulting firm leveraging the Microsoft technology stack to help organizations build their Modern Data Estates, designed to serve their digital innovation needs for many years to come. To do this, we apply deep experience in Solution Architecture, Data, Analytics, and technology to simplify the complex. OmniData is offering you the opportunity to work with the entire lifecycle of large Data Projects, focused on next-generation data warehousing, with surface points to Analytics, Machine Learning and AI. We offer a collaborative work culture that enables you to produce client results with a safety net from your team. You will get to work closely with very experienced consultants who will be able to provide mentorship and career guidance. At the same time, you will be rewarded for learning fast and executing within our teams to provide solutions for OmniData clients. 
OmniData is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law."," Not Applicable "," Full-time "," Engineering "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-one-3499581597?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=0fcd68oCMWuhCTwjZ6mm5Q%3D%3D&position=15&pageNum=16&trk=public_jobs_jserp-result_search-card," ONE ",https://www.linkedin.com/company/oneapp?trk=public_jobs_topcard-org-name," Idaho, United States "," 2 weeks ago "," Be among the first 25 applicants ","About ONE ONE is on a mission to help people live healthier financial lives. We’re doing this by creating simple solutions to help our customers save, spend, and grow their money – all in one place. The U.S. consumer today deserves better. Millions of Americans today can’t access credit, build savings or wealth, and are left to manage their financial lives through multiple disconnected apps. Almost a quarter of U.S. adults are unbanked or underbanked and roughly 80% of fintech users rely on multiple accounts to manage their finances. What makes us unique? We are backed by a preeminent fintech investor (Ribbit) and the world’s largest retailer (Walmart), maintain the speed and independence of a startup, and employ a strong (and growing) collection of world-class talent. There’s never been a better moment to build a business that helps people live healthier financial lives. Come build with us! The role As an Engineer, Data, your mandate is to build the data transformation pipelines & reporting infrastructure that serve our members and run our company. This role will impact ONE’s vision by writing, reviewing, and shipping code and collaborating with others across the company. 
You will work closely with the Engineering, Product, and Design teams, as well as Sales, Compliance, and Customer Support, partnering with stakeholders as a highly technical, communicative, and emotionally intelligent partner. This role reports to Manager, Data Engineering. This role is responsible for: Building and maintaining ONE's data infrastructure. Developing and owning ONE's streaming data transformation pipeline. Managing and supporting reporting and analytics tools. Collaborating closely with our Data Analysts and Data Scientists. Tracking and defining metrics around performance. Additional duties as assigned by your manager. You bring Mid Career (5-10 Years) Experience in Apache Spark, Scala, Python, SQL. Experience designing and implementing low latency data pipelines using Spark structured streaming. Experience optimizing SQL queries and lake / warehouse data structures. Cost-conscious creative problem solving. Proficient in AWS cloud services and technologies. Experience building, shipping — and growing — non-trivial products & services. Passion for your craft, people, and our mission. You value quality across code, communication, and culture. A desire to keep growing your skills, and an ability to learn quickly. We hire for slope, not just y-intercept. Pay Transparency The estimated annual base salary for this position ranges from $175,000 to $190,000. Pay is generally based upon the level, complexity, responsibility, and job duties/requirements of the specific position. We then source candidates with the requisite skills, expertise, education, training, and experience. If you are selected for an interview, please feel welcome to speak to a Talent Partner about our compensation philosophy and other available benefits. Leveling Philosophy In order to thoughtfully scale the company and avoid downstream inequities, we’ve adopted a flat titling structure at ONE. 
Though we may occasionally post a role externally with a prefix such as “Senior” to reflect the external level of the position, we do not use prefixes in titles like that internally unless in a position which manages a team. Internal titles typically include your specific functional responsibility, such as engineering, product management or sales, and often include additional descriptors to ensure clarity of role and placement within our organization (i.e. “Engineer, Platform”, “Sales, Business Development” or “Manager, Talent”). Employees are paid commensurate with their experience and the internal level within ONE. What it's like working @ ONE Our teams collaborate remotely and in our work spaces in New York and Sacramento. Competitive cash Benefits effective on day one Early access to a high potential, high growth fintech Generous stock option packages in an early-stage startup Remote friendly (anywhere in the US) and office friendly - you pick the schedule Flexible time off programs - vacation, sick, paid parental leave, and paid caregiver leave 401(k) plan with match Inclusion & Belonging To build technology and products that are used and loved by people and solve real-world problems, we need to build a team with many different perspectives and experiences. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We encourage candidates from all backgrounds to apply. Applicants in need of special assistance or accommodation during the interview process or in accessing our website may contact us. 
Email talent@one.app with any questions."," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-intuitive-technology-group-3497808141?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=gtU8YVjTk0W1Ju1lf9DzIw%3D%3D&position=18&pageNum=16&trk=public_jobs_jserp-result_search-card," Intuitive Technology Group ",https://www.linkedin.com/company/intuitive-technology-group?trk=public_jobs_topcard-org-name," United States "," 3 weeks ago "," Over 200 applicants ","Location: Remote Duration: 6 months Key Responsibilities: • Design and implement scalable data processes focused on fulfilling product requirements. • Large complex components; influencing overall product architecture and patterns. • Autonomously implement component design in line with pre-defined Data and (ELT/ETL) architectural patterns. • Partner with Business, Technical and Strategic Product stakeholders to manage project commitments in an agile framework; rapidly delivering value to our customers via technology solutions. • Develop, construct, test, document and maintain data pipelines. • Identify ways to improve data reliability, efficiency, and quality. • Design and develop resilient, reliable, scalable and self-healing solutions to meet and exceed customer requirements. • Ensure that all parts of the application eco-system are thoroughly and effectively covered with telemetry. • Focus on automation, quality and streamlining new and existing data processing. • Create data monitoring capabilities for each business process and work with data consumers on updates to data processes. • Develop data pipelines using Python, Spark and/or Scala. • Automate and orchestrate data pipelines using Azure Data Factory or Delta Live Tables • Help maintain the integrity and security of the company data. 
• Communicate clearly and effectively in oral and written forms and be able to present and demonstrate work to technical and non-technical stakeholders. Required Qualifications: • Undergraduate degree or equivalent experience • Minimum 5 to 7 years of IT experience in Software Engineering. • Minimum 3 to 5 years of experience in big data processing for batch and/or streaming data; data includes file systems, data structures/databases, automation, security, messaging, movement, etc. • Minimum 3 to 5 years of experience with Python and Spark in developing data processing pipelines. • Minimum 3 to 5 years of ETL programming experience in Databricks using Scala/Java. • Minimum 3 to 5 years of supporting extensive data analysis using advanced SQL concepts and window functions • Minimum 2 to 3 years of experience working on large scale programs, with multiple concurrent projects. • Minimum 2 to 3 years of experience with Agile methodologies and Test-Driven Development. • Experience with Continuous Development/ Integration (CI/CD) /DevOps skills, (Jenkins, Git, Azure DevOps/ Git Actions). • Strong written and oral communications along with presentation and interpersonal skills • Ability to lead and delegate work across other members of the data engineering team Preferred Qualifications: • Minimum 2 to 3 years of cloud experience, preferably Azure. • Minimum 2 to 3 years’ experience with Databricks or other big data platforms. • Minimum 2 to 3 years automation/orchestration experience using Azure Data Factory • Experience and familiarity with data across Healthcare Provider domains (i.e. Patient, Provider, Encounter, Billing, Claims, Eligibility etc.) • Familiarity/Experience with Clinical data exchange formats like HL7, FHIR, CCD etc. • Familiarity/experience with FHIR resources and data model • Experience in Big data processing in healthcare domain • Cloud development and computing. • Knowledge of cutting-edge technologies (AI, ML, Blockchain, Wearables, IOT). 
• Knowledge/Experience with Microservice design, Java Spring, Kafka, RabbitMQ. • Ability/Willingness to explore/learn new technologies and techniques on the job."," Mid-Senior level "," Contract "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer/Analytics Intern,https://www.linkedin.com/jobs/view/data-engineer-analytics-intern-at-great-lakes-cheese-3522104106?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=dTNapqQzRyIsZNPRVST4zQ%3D%3D&position=21&pageNum=16&trk=public_jobs_jserp-result_search-card," Great Lakes Cheese ",https://www.linkedin.com/company/great-lakes-cheese?trk=public_jobs_topcard-org-name," Hiram, OH "," 6 days ago "," 173 applicants ","Job Overview Great Lakes Cheese is seeking an intern at our Corporate location. At GLC, you have the opportunity to grow, optimize your performance and unlock your ambitions through positions in many areas of the organization. While working independently or in a team environment, you develop leadership and critical thinking skills to help build a foundation for your career. Internships at GLC allow students to gain real job experience and receive on-the-job training that focuses on our technologies and methodologies. From the start, interns are challenged to demonstrate their strengths and apply their knowledge to help us achieve our business strategy. This position is a paid internship. Job Responsibilities Implement and deploy new data models and data processes. Perform data analysis to generate business insights. Build Reports using SAP reporting tools. Build data expertise and own data quality for allocated areas of ownership. Write code, data mapping, unit tests and integrate code with other software components. Support critical data processes running in production. Participate in a variety of personal and professional developmental workshops. Up to 20% travel required (if needed). 
All GLC interns are expected to perform any assignment or job task according to the stated safety policies and procedures. All GLC interns are expected to produce our products in a manner that exceeds the quality and value expectation of our customers and consumers by adhering to Good Manufacturing Practices, Policies and Procedures outlined in our Safe Quality Food Program. Other responsibilities as assigned by the manager. Required Education And Experience Actively enrolled in or possessing a bachelor’s degree in Computer Science, or related field. Minimum overall cumulative GPA of 3.0 or higher. Strong proficiency in Microsoft Word, Excel and PowerPoint. Must be legally authorized to work for a company in the U.S. without sponsorship. Preferred Education And Experience Programming knowledge in Python, SQL or Java. Problem solving skills. Ability to thrive in a fast-paced work environment. Strong attention to detail, data analysis, and reporting skills. Strong communication, presentation, and team skills. Strong organizational and time management skills. Self-motivated with a high level of initiative, able to work well independently. Working Conditions Work is performed in an office setting. 
EEOC & Disclaimer Great Lakes Cheese is an Equal Opportunity Affirmative Action Employer"," Internship "," Internship "," Information Technology "," Food Production " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-the-intersect-group-3500040107?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=8GYKXo8JGyGVK0tC9C3Eiw%3D%3D&position=22&pageNum=16&trk=public_jobs_jserp-result_search-card," The Intersect Group ",https://www.linkedin.com/company/the-intersect-group?trk=public_jobs_topcard-org-name," Sandy Springs, GA "," 2 weeks ago "," 150 applicants ","Our client is looking for Data Engineers to work on creating data products and as part of the Data Analytics and Platform Services team to support a variety of users including data scientists, internal big data platform team, and business partners. Our work spans across a broad set of domains including Remarketing, Aftersales, Customer Assistance Center, and Marketing. This is a Direct Hire role and is hybrid. The tasks include, but are not limited to: Build & Manage Data Pipelines: Data pipelines are a series of stages through which data flows from source through datalake, datawarehouses and datamarts. These have to be created, maintained and optimized as workloads move from development to production including data from (1st, 2nd and 3rd party data) Build and maintain the ETL /ELT pipelines to improve developer productivity, agility and code quality throughout the lifecycle of data. Develop out of the box API’s to enable seamless access of Data while adhering to governance, Quality and coding best practices of Data Engineering Education: University degree or above in Computer Engineering, Computer Science or equivalent. Knowledge, Skills & Abilities: Minimum 3+ years of hands on cloud experience – preferably Azure 2+ years of experience in DevOps with hands on skills on Azure Cloud (HDI, Blob, Synapse, SQL Warehouse, Azure Functions, Log Analytics etc.) 
Strong experience with popular database programming languages like Python, Scala, SQL, etc. Basic experience working with popular data discovery, analytics and BI software tools like [Tableau, Qlik, PowerBI and others] for semantic-layer-based data discovery. Experienced with monitoring and logging features within Microsoft Azure. Experience working with Docker and Kubernetes. Experience working with Databricks and Azure Data Factory."," Mid-Senior level "," Full-time "," Information Technology "," Software Development " Data Engineer,United States,"Sr. Data Engineer, Peacock",https://www.linkedin.com/jobs/view/sr-data-engineer-peacock-at-peacock-3529811357?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=YzCiNeMQvWe8snJ6GQ%2F5AA%3D%3D&position=23&pageNum=16&trk=public_jobs_jserp-result_search-card," Peacock ",https://www.linkedin.com/company/peacocktv?trk=public_jobs_topcard-org-name," New York, NY "," 1 day ago "," 41 applicants ","Company Description NBCUniversal owns and operates over 20 different businesses across 30 countries including a valuable portfolio of news and entertainment television networks, a premier motion picture company, significant television production operations, a leading television stations group, world-renowned theme parks and a premium ad-supported streaming service. Here you can be your authentic self. As a company uniquely positioned to educate, entertain and empower through our platforms, Comcast NBCUniversal stands for including everyone. We strive to foster a diverse and inclusive culture where our employees feel supported, embraced and heard. We believe that our workforce should represent the communities we live in, so that together, we can continue to create and deliver content that reflects the current and ever-changing face of the world. Click here to learn more about Comcast NBCUniversal’s commitment and how we are making an impact. Job Description Welcome to Peacock, the dynamic new streaming service from NBCUniversal. 
Here you’ll find more than a job. You’ll find a fast-paced, high-performance team of incredible colleagues that want to be at the epicenter of technology, sports, news, TV, movies and more. We work hard to connect people to what they love, each other and the world around them by creating shared experiences through culture-defining entertainment. As a company, we embrace the power of difference. Our team is committed to creating an organization that champions diversity and inclusivity for all by curating content and a workforce that represents the world around us. We continue to challenge ourselves and the industry by being customer-centric, data-driven creatures of innovation. At Peacock, we are determined to forge the next frontier of streaming through creativity, teamwork, and talent. As part of the Direct-to-Consumer Decision Sciences team, the Data Engineer will be responsible for creating a connected data ecosystem that unleashes the power of our streaming data. We gather data from across all customer/prospect journeys in near real-time, to allow fast feedback loops across territories; combined with our strategic data platform, this data ecosystem is at the core of being able to make intelligent customer and business decisions. In this role, the Data Engineer will share responsibilities in the development and maintenance of optimized and highly available data pipelines that facilitate deeper analysis and reporting by the business, as well as support ongoing operations related to the Direct-to-Consumer data ecosystem. Responsibilities Include, But Are Not Limited To Design, build, test, scale and maintain data pipelines from a variety of source systems and streams (Internal, third party, cloud based, etc.), according to business and technical requirements. Deliver observable, reliable and secure software, embracing “you build it you run it” mentality, and focus on automation and GitOps. 
Continually work on improving the codebase and have active participation in all aspects of the team, including agile ceremonies. Take an active role in story definition, assisting business stakeholders with acceptance criteria. Work with Principal Engineers and Architects to share and contribute to the broader technical vision. Develop and champion best practices, striving towards excellence and raising the bar within the department Develop solutions combining data blending, profiling, mining, statistical analysis, and machine learning, to better define and curate models, test hypothesis, and deliver key insights Operationalize data processing systems (dev ops) This position is eligible for company sponsored benefits, including medical, dental, and vision insurance, 401(k), paid leave, tuition reimbursement, and a variety of other discounts and perks. Learn more about the benefits offered by NBCUniversal by visiting the Benefits page of the Careers website. Salary range: $115,000- $145,000. Qualifications 3+ Years of Experience of near Real Time & Batch Data Pipeline development in a similar Big Data Engineering role. Programming skills in one or more of the following: Java, Scala, R, Python, SQL and experience in writing reusable/efficient code to automate analysis and data processes Experience in processing structured and unstructured data into a form suitable for analysis and reporting with integration with a variety of data metric providers ranging from advertising, web analytics, and consumer devices Experience implementing scalable, distributed, and highly available systems using Google Cloud Hands on programming experience of the following (or similar) technologies: Apache Beam, Scio, Apache Spark, and Snowflake. Experience in progressive data application development, working in large scale/distributed SQL, NoSQL, and/or Hadoop environment. 
Build and maintain dimensional data warehouses in support of BI tools Develop data catalogs and data cleanliness to ensure clarity and correctness of key business metrics Experience building streaming data pipelines using Kafka, Spark or Flink Data modelling experience (operationalizing data science models/products) a plus Bachelor’s degree with a specialization in Computer Science, Engineering, or other quantitative field or equivalent industry experience. Must submit an attestation disclosing your COVID-19 vaccination status and, if partially or fully vaccinated, submit your vaccination record no later than 7 days following commencement of employment. Must be fully vaccinated against COVID-19 at the commencement of employment or adhere to enhanced protocols in select work settings or where jurisdictionally mandated. Must be willing to adhere to all Company COVID-19 workplace safety policies and protocols. Desired Characteristics Hands on Experience with Orchestration tool Apache Airflow Hands on Experience with SQL (BigQuery), ETL and Analytical modeling Hands on Experience with Snowflake Technologies and integration with BigQuery Hands on Experience with BI Technologies such as Tableau, Data Studio etc. Hands on Experience building CI/CD pipeline in GCP cloud or other Hands on Experience with Kubernetes and Helm Chart Strong Test-Driven Development background, with understanding of levels of testing required to continuously deliver value to production. 
Experience with large-scale video assets Ability to work effectively across functions, disciplines, and levels Team-oriented and collaborative approach with a demonstrated aptitude, enthusiasm and willingness to learn new methods, tools, practices and skills Ability to recognize discordant views and take part in constructive dialogue to resolve them Pride and ownership in your work and confident representation of your team to other parts of NBCUniversal Additional Information NBCUniversal's policy is to provide equal employment opportunities to all applicants and employees without regard to race, color, religion, creed, gender, gender identity or expression, age, national origin or ancestry, citizenship, disability, sexual orientation, marital status, pregnancy, veteran status, membership in the uniformed services, genetic information, or any other basis protected by applicable law. NBCUniversal will consider for employment qualified applicants with criminal histories in a manner consistent with relevant legal requirements, including the City of Los Angeles Fair Chance Initiative For Hiring Ordinance, where applicable. If you are a qualified individual with a disability or a disabled veteran, you have the right to request a reasonable accommodation if you are unable or limited in your ability to use or access nbcunicareers.com as a result of your disability. 
You can request reasonable accommodations in the US by calling 1-818-777-4107 and in the UK by calling +44 2036185726."," Mid-Senior level "," Full-time "," Production "," Broadcast Media Production and Distribution, Entertainment Providers, and Media Production " Data Engineer,United States,Data Engineer - BI ,https://www.linkedin.com/jobs/view/data-engineer-bi-at-arvest-bank-3525978518?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=LmTnKpQHJJKruBM7jhoB%2Bw%3D%3D&position=25&pageNum=16&trk=public_jobs_jserp-result_search-card," Arvest Bank ",https://www.linkedin.com/company/arvest-bank?trk=public_jobs_topcard-org-name," United States "," 3 days ago "," 71 applicants ","Exempt: Yes Salary Grade: Grade 16I Position is Monday through Friday from 8 am to 5 pm with the ability to work additional hours as project needs demand. Incumbent can be located anywhere within the Arvest 4 State Footprint (AR, KS, MO, OK). Remote work options may be available outside of the 4-state footprint upon further review during the interview process. What You'll Bring: Proven expertise in building enterprise BI solutions using enterprise analytical data platforms Experience with deploying BI dashboards/Reporting solutions Familiarity with BI Tools such as Tableau, Athena, Looker, Google Analytics and PowerBI Experience working with business users in supporting reporting functionality The story of Arvest is one of commitment started by our founders in 1961, with an intense dedication to focusing on our customers. We will always be active and involved members of the communities we serve, and we will always work to put the needs of our customers first as we continue to fulfill our mission – People helping people find financial solutions for life. Job Title: Data Engineer - BI A Data Engineer at Arvest is a technical team member who will create, maintain, and evolve the strategy for data storing, transformation and distribution. 
They use common data architecture practices to translate business requirements into conceptual, logical, and physical data models that will support data analysis/visualization and decision-making across the organization. We are seeking candidates who embrace diversity, equity, and inclusion in a workplace where everyone feels valued and inspired. What You’ll Do at Arvest: (Other duties may be assigned.) • Develop resilient data pipeline solutions that are sustainable, fault-tolerant, and highly scalable using modern and new technologies of varying complexity and scope. • Troubleshoot moderately complex problems and assists with root cause analysis. Support production workloads as necessary. Participate in on-call rotation, as needed. • Utilize technical expertise to develop and execute queries to extract internal and external data from various sources that will be required for a robust and reliable data infrastructure. • Build software that performs well, is secure, and is accessible to customers. Ensure that work product delivered by the team meets standards for reusability, security, and performance and that data is available, usable, and fit-for-purpose. • Partner with Engineers, contractors, and 3rd parties to deliver solutions that are efficient, reusable, and impactful. May work with contractors and 3rd parties to accomplish goals. • Collaborate with the Product Owner and End Users to ensure that acceptance criteria are met and satisfies the business need. • Build and manage data quality and data loads using automated testing frameworks and methodologies such as Data-Driven Testing (DDT). • Mentor and guide less experienced engineers to build skills and adopt practices. • Create proofs-of-concept and proofs-of-technology to evaluate the feasibility of solutions, including recommendations based on the results. • Make sound design/coding decisions keeping customer experience in the forefront. 
• Research and recommend data for acquisition and evaluates suitability. Support the identification of anomalies and data quality issues. • Participate in cross-product Communities of Practice and/or Guilds by attending sessions, volunteering for research topics, and presenting findings to the group. Promote the re-use of data across the Company. • Perform code reviews. Test own work and reviews tests performed by more junior team members, as appropriate. • Exhibit strong problem solving and analytical skills, as well as strong communication and interpersonal skills. Contribute to healthy working relationships among teams and individuals. • Understand and comply with bank policy, laws, regulations, and the bank's BSA/AML Program, as applicable to your job duties. This includes but is not limited to; complete compliance training and adhere to internal procedures and controls; report any known violations of compliance policy, laws, or regulations and report any suspicious customer and/or account activity. Toolbox for Success: · Bachelor’s Degree in Information Systems, Computer Science, Business Intelligence, or related field, or equivalent related work or military experience, is required. · 3 years of experience in designing and developing data queries for ETL data movement, including merging large data sets for analysis, is required. · Experience with ETL data movement of structured data sources, is required. · Experience with programming languages like Python, Java, and C#, is required. 
· Experience in the following is preferred: ETL tools such as DataFlow, Dataproc, Data Fusion, or similar tools Transformation tools, such as DBT Pipeline orchestration tools, such as Apache Airflow or Cloud Composer Cloud data solutions within Google Cloud Platform, Azure, or AWS Working knowledge of standardization, security, governance, and compliance Hands-on experience with Data Visualization tools, such as Tableau, Looker, or PowerBI · Prior experience in banking or financial services is preferred. · Relevant military experience is considered for veterans and transitioning service members. Physical Demands: The associate must be able to travel occasionally by themselves within the US, possibly overnight. Reasonable accommodations may be made to enable qualified individuals with disabilities to perform the essential functions. We offer competitive compensation, benefits packages, and significant professional growth. Along with an excellent benefits package, our associates are engaged, rewarded for performance, and encouraged to grow professionally and personally. Our future is driven by our associates. If you want to be recognized for your results and empowered to reach your potential, we urge you to apply."," Mid-Senior level "," Full-time "," Engineering and Information Technology "," Banking " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-one-3499584441?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=YIITGsjbwFm0wmOx6r4fjA%3D%3D&position=7&pageNum=16&trk=public_jobs_jserp-result_search-card," ONE ",https://www.linkedin.com/company/oneapp?trk=public_jobs_topcard-org-name," Texas, United States "," 2 weeks ago "," Be among the first 25 applicants ","About ONE ONE is on a mission to help people live healthier financial lives. We’re doing this by creating simple solutions to help our customers save, spend, and grow their money – all in one place. The U.S. consumer today deserves better. 
Millions of Americans today can’t access credit, build savings or wealth, and are left to manage their financial lives through multiple disconnected apps. Almost a quarter of U.S. adults are unbanked or underbanked and roughly 80% of fintech users rely on multiple accounts to manage their finances. What makes us unique? We are backed by a preeminent fintech investor (Ribbit) and the world’s largest retailer (Walmart), maintain the speed and independence of a startup, and employ a strong (and growing) collection of world-class talent. There’s never been a better moment to build a business that helps people live healthier financial lives. Come build with us! The role As an Engineer, Data, your mandate is to build the data transformation pipelines & reporting infrastructure that serve our members and run our company. This role will impact ONE’s vision by writing, reviewing, and shipping code and collaborating with others across the company. You will work closely across the Engineering, Product, Design as well as Sales, Compliance and Customer Support team(s) working with stakeholders as highly technical, communicative, and emotionally intelligent partners. This role reports to Manager, Data Engineering. This role is responsible for: Building and maintaining ONE's data infrastructure. Developing and owning ONE's streaming data transformation pipeline. Managing and supporting reporting and analytics tools. Collaborating closely with our Data Analysts and Data Scientists. Tracking and defining metrics around performance. Additional duties as assigned by your manager. You bring Mid Career (5-10 Years) Experience in Apache Spark, Scala, Python, SQL. Experience designing and implementing low latency data pipelines using Spark structured streaming. Experience optimizing SQL queries and lake / warehouse data structures. Cost-conscious creative problem solving. Proficient in AWS cloud services and technologies. 
Experience building, shipping — and growing — non-trivial products & services. Passion for your craft, people, and our mission. You value quality across code, communication, and culture. A desire to keep growing your skills, and an ability to learn quickly. We hire for slope, not just y-intercept. Pay Transparency The estimated annual base salary for this position ranges from $175,000 to $190,000. Pay is generally based upon the level, complexity, responsibility, and job duties/requirements of the specific position. We then source candidates with the requisite skills, expertise, education, training, and experience. If you are selected for an interview, please feel welcome to speak to a Talent Partner about our compensation philosophy and other available benefits. Leveling Philosophy In order to thoughtfully scale the company and avoid downstream inequities, we’ve adopted a flat titling structure at ONE. Though we may occasionally post a role externally with a prefix such as “Senior” to reflect the external level of the position, we do not use prefixes in titles like that internally unless in a position which manages a team. Internal titles typically include your specific functional responsibility, such as engineering, product management or sales, and often include additional descriptors to ensure clarity of role and placement within our organization (i.e. “Engineer, Platform”, “Sales, Business Development” or “Manager, Talent”). Employees are paid commensurate with their experience and the internal level within ONE. What it's like working @ ONE Our teams collaborate remotely and in our work spaces in New York and Sacramento. 
Competitive cash Benefits effective on day one Early access to a high potential, high growth fintech Generous stock option packages in an early-stage startup Remote friendly (anywhere in the US) and office friendly - you pick the schedule Flexible time off programs - vacation, sick, paid parental leave, and paid caregiver leave 401(k) plan with match Inclusion & Belonging To build technology and products that are used and loved by people and solve real-world problems, we need to build a team with many different perspectives and experiences. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We encourage candidates from all backgrounds to apply. Applicants in need of special assistance or accommodation during the interview process or in accessing our website may contact us. Email talent@one.app with any questions."," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-ascendion-3529410074?refId=JaU6MAS9as0jpqdJwTsUqw%3D%3D&trackingId=fAuGxsDsQUV1Sb5yM9trwA%3D%3D&position=10&pageNum=16&trk=public_jobs_jserp-result_search-card," Ascendion ",https://www.linkedin.com/company/ascendion?trk=public_jobs_topcard-org-name," Charlotte, NC "," 3 weeks ago "," Be among the first 25 applicants ","Remote MUST HAVES 2+ years of Python development experience Strong familiarity with Azure data services: HIGHLY Preferred - Azure SQL, Synapse, Data Lake, Data Factory, Databricks, Azure function, Service Bus, etc. 
Another cloud provider (GCP/AWS) will work if they are strong / they will have to pick Azure up Solid understanding of real-time & batch data processing Strong communication Plusses Familiarity with CI/CD, building pipelines, terraform Domain knowledge Python, Azure"," Entry level "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer ,https://www.linkedin.com/jobs/view/data-engineer-at-insight-global-3504231510?refId=sxKgaaQf5SyYhA1EIysMQw%3D%3D&trackingId=Q%2FZZ%2Bm1NtfRg%2FxKkyKss7A%3D%3D&position=1&pageNum=17&trk=public_jobs_jserp-result_search-card," Insight Global ",https://www.linkedin.com/company/insight-global?trk=public_jobs_topcard-org-name," United States "," 2 weeks ago "," Over 200 applicants ","Must-Haves 3+ years of data engineering or data warehouse modeling 3+ years of very strong SQL experience 3+ years of dimensional data modeling utilizing Tableau, PowerBI, or other similar tools Hands-on usage of Snowflake or Redshift Experience utilizing Python in a data environment Experience in an AWS environment Understanding of how data warehouses are built Plusses SQL Server background Any database languages Business Intelligence experience Day-to-Day Insight Global is looking for a Data Engineer to join one of our largest analytics clients in their Insurance Claims Solutions division. This role is a mid-to-senior level dimensional modeling position that will provide data pipeline expertise and business intelligence leadership to guide the development of data products and reporting solutions. The majority of this engineer’s day will be focused within SQL and reading and writing code, while the other portion will require dimensional modeling and an understanding of how data warehouses are built. The successful candidate has experience developing data pipelines using cloud services and designing data structures that support data products and reporting systems. 
In this hands-on role, he/she will also be responsible for collaborating with product owners, tackling technical problems, helping define best practices, and contributing to design/code reviews. In addition, the candidate will help create innovative solutions to satisfy the growing needs of internal and external customers."," Mid-Senior level "," Full-time "," Engineering "," Insurance " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-plexus-resource-solutions-3492588460?refId=sxKgaaQf5SyYhA1EIysMQw%3D%3D&trackingId=Xu32AARHamixDKLxv7HChQ%3D%3D&position=2&pageNum=17&trk=public_jobs_jserp-result_search-card," Plexus Resource Solutions ",https://uk.linkedin.com/company/plexus-resource-solutions?trk=public_jobs_topcard-org-name," United States "," 3 weeks ago "," Over 200 applicants ","Plexus have partnered with an NFT infrastructure company answering one of the trickier questions within the space. A well-funded start-up with a strong core development team and backing from the best in the business, they're now on the look out for a Data Engineer to join their ever-expanding team. Requirements: 4+ years experience in data engineering, ideally in both large and small (start-up) organizations. 
Proficiency in Python/AWS/PostgreSQL Experience in database design and data modelling A strong passion for the space This role is a full-time/permanent position and is also fully remote with a salary package of up to $200K."," Mid-Senior level "," Full-time "," Information Technology and Engineering "," Software Development " Data Engineer,United States,Data Engineer - BI ,https://www.linkedin.com/jobs/view/data-engineer-bi-at-arvest-bank-3525978518?refId=sxKgaaQf5SyYhA1EIysMQw%3D%3D&trackingId=Z6L636D4oK7Bmn2aXFr7OA%3D%3D&position=3&pageNum=17&trk=public_jobs_jserp-result_search-card," Arvest Bank ",https://www.linkedin.com/company/arvest-bank?trk=public_jobs_topcard-org-name," United States "," 3 days ago "," 71 applicants ","Exempt: Yes Salary Grade: Grade 16I Position is Monday through Friday from 8 am to 5 pm with the ability to work additional hours as project needs demand. Incumbent can be located anywhere within the Arvest 4 State Footprint (AR, KS, MO, OK). Remote work options may be available outside of the 4-state footprint upon further review during the interview process. What You'll Bring: Proven expertise in building enterprise BI solutions using enterprise analytical data platforms Experience with deploying BI dashboards/Reporting solutions Familiarity with BI Tools such as Tableau, Athena, Looker, Google Analytics and PowerBI Experience working with business users in supporting reporting functionality The story of Arvest is one of commitment started by our founders in 1961, with an intense dedication to focusing on our customers. We will always be active and involved members of the communities we serve, and we will always work to put the needs of our customers first as we continue to fulfill our mission – People helping people find financial solutions for life. Job Title: Data Engineer - BI A Data Engineer at Arvest is a technical team member who will create, maintain, and evolve the strategy for data storing, transformation and distribution. 
They use common data architecture practices to translate business requirements into conceptual, logical, and physical data models that will support data analysis/visualization and decision-making across the organization. We are seeking candidates who embrace diversity, equity, and inclusion in a workplace where everyone feels valued and inspired. What You’ll Do at Arvest: (Other duties may be assigned.) • Develop resilient data pipeline solutions that are sustainable, fault-tolerant, and highly scalable using modern and new technologies of varying complexity and scope. • Troubleshoot moderately complex problems and assists with root cause analysis. Support production workloads as necessary. Participate in on-call rotation, as needed. • Utilize technical expertise to develop and execute queries to extract internal and external data from various sources that will be required for a robust and reliable data infrastructure. • Build software that performs well, is secure, and is accessible to customers. Ensure that work product delivered by the team meets standards for reusability, security, and performance and that data is available, usable, and fit-for-purpose. • Partner with Engineers, contractors, and 3rd parties to deliver solutions that are efficient, reusable, and impactful. May work with contractors and 3rd parties to accomplish goals. • Collaborate with the Product Owner and End Users to ensure that acceptance criteria are met and satisfies the business need. • Build and manage data quality and data loads using automated testing frameworks and methodologies such as Data-Driven Testing (DDT). • Mentor and guide less experienced engineers to build skills and adopt practices. • Create proofs-of-concept and proofs-of-technology to evaluate the feasibility of solutions, including recommendations based on the results. • Make sound design/coding decisions keeping customer experience in the forefront. 
• Research and recommend data for acquisition and evaluate suitability. Support the identification of anomalies and data quality issues. • Participate in cross-product Communities of Practice and/or Guilds by attending sessions, volunteering for research topics, and presenting findings to the group. Promote the re-use of data across the Company. • Perform code reviews. Test own work and review tests performed by more junior team members, as appropriate. • Exhibit strong problem solving and analytical skills, as well as strong communication and interpersonal skills. Contribute to healthy working relationships among teams and individuals. • Understand and comply with bank policy, laws, regulations, and the bank's BSA/AML Program, as applicable to your job duties. This includes but is not limited to: complete compliance training and adhere to internal procedures and controls; report any known violations of compliance policy, laws, or regulations and report any suspicious customer and/or account activity. Toolbox for Success: · Bachelor’s Degree in Information Systems, Computer Science, Business Intelligence, or related field, or equivalent related work or military experience, is required. · 3 years of experience in designing and developing data queries for ETL data movement, including merging large data sets for analysis, is required. · Experience with ETL data movement of structured data sources, is required. · Experience with programming languages like Python, Java, and C#, is required. 
· Experience in the following is preferred: ETL tools such as DataFlow, Dataproc, Data Fusion, or similar tools Transformation tools, such as DBT Pipeline orchestration tools, such as Apache Airflow or Cloud Composer Cloud data solutions within Google Cloud Platform, Azure, or AWS Working knowledge of standardization, security, governance, and compliance Hands-on experience with Data Visualization tools, such as Tableau, Looker, or PowerBI · Prior experience in banking or financial services is preferred. · Relevant military experience is considered for veterans and transitioning service members Physical Demands: The associate must be able to travel occasionally by themselves within the US, possibly overnight. Reasonable accommodations may be made to enable qualified individuals with disabilities to perform the essential functions. We offer competitive compensation, benefits packages, and significant professional growth. Along with an excellent benefits package, our associates are engaged, rewarded for performance, and encouraged to grow professionally and personally. Our future is driven by our associates. If you want to be recognized for your results and empowered to reach your potential, we urge you to apply."," Mid-Senior level "," Full-time "," Engineering and Information Technology "," Banking " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-one-3499580732?refId=sxKgaaQf5SyYhA1EIysMQw%3D%3D&trackingId=TcV73%2BapLB4MuOF6H5OPGg%3D%3D&position=4&pageNum=17&trk=public_jobs_jserp-result_search-card," ONE ",https://www.linkedin.com/company/oneapp?trk=public_jobs_topcard-org-name," Miami, FL "," 2 weeks ago "," Be among the first 25 applicants ","About ONE ONE is on a mission to help people live healthier financial lives. We’re doing this by creating simple solutions to help our customers save, spend, and grow their money – all in one place. The U.S. consumer today deserves better. 
Millions of Americans today can’t access credit, build savings or wealth, and are left to manage their financial lives through multiple disconnected apps. Almost a quarter of U.S. adults are unbanked or underbanked and roughly 80% of fintech users rely on multiple accounts to manage their finances. What makes us unique? We are backed by a preeminent fintech investor (Ribbit) and the world’s largest retailer (Walmart), maintain the speed and independence of a startup, and employ a strong (and growing) collection of world-class talent. There’s never been a better moment to build a business that helps people live healthier financial lives. Come build with us! The role As an Engineer, Data, your mandate is to build the data transformation pipelines & reporting infrastructure that serve our members and run our company. This role will impact ONE’s vision by writing, reviewing, and shipping code and collaborating with others across the company. You will work closely across the Engineering, Product, Design as well as Sales, Compliance and Customer Support team(s) working with stakeholders as highly technical, communicative, and emotionally intelligent partners. This role reports to Manager, Data Engineering. This role is responsible for: Building and maintaining ONE's data infrastructure. Developing and owning ONE's streaming data transformation pipeline. Managing and supporting reporting and analytics tools. Collaborating closely with our Data Analysts and Data Scientists. Tracking and defining metrics around performance. Additional duties as assigned by your manager. You bring Mid Career (5-10 Years) Experience in Apache Spark, Scala, Python, SQL. Experience designing and implementing low latency data pipelines using Spark structured streaming. Experience optimizing SQL queries and lake / warehouse data structures. Cost-conscious creative problem solving. Proficient in AWS cloud services and technologies. 
Experience building, shipping — and growing — non-trivial products & services. Passion for your craft, people, and our mission. You value quality across code, communication, and culture. A desire to keep growing your skills, and an ability to learn quickly. We hire for slope, not just y-intercept. Pay Transparency The estimated annual base salary for this position ranges from $175,000 to $190,000. Pay is generally based upon the level, complexity, responsibility, and job duties/requirements of the specific position. We then source candidates with the requisite skills, expertise, education, training, and experience. If you are selected for an interview, please feel welcome to speak to a Talent Partner about our compensation philosophy and other available benefits. Leveling Philosophy In order to thoughtfully scale the company and avoid downstream inequities, we’ve adopted a flat titling structure at ONE. Though we may occasionally post a role externally with a prefix such as “Senior” to reflect the external level of the position, we do not use prefixes in titles like that internally unless in a position which manages a team. Internal titles typically include your specific functional responsibility, such as engineering, product management or sales, and often include additional descriptors to ensure clarity of role and placement within our organization (i.e. “Engineer, Platform”, “Sales, Business Development” or “Manager, Talent”). Employees are paid commensurate with their experience and the internal level within ONE. What it's like working @ ONE Our teams collaborate remotely and in our work spaces in New York and Sacramento. 
Competitive cash Benefits effective on day one Early access to a high potential, high growth fintech Generous stock option packages in an early-stage startup Remote friendly (anywhere in the US) and office friendly - you pick the schedule Flexible time off programs - vacation, sick, paid parental leave, and paid caregiver leave 401(k) plan with match Inclusion & Belonging To build technology and products that are used and loved by people and solve real-world problems, we need to build a team with many different perspectives and experiences. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We encourage candidates from all backgrounds to apply. Applicants in need of special assistance or accommodation during the interview process or in accessing our website may contact us. Email talent@one.app with any questions."," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-one-3499587057?refId=sxKgaaQf5SyYhA1EIysMQw%3D%3D&trackingId=U%2FR1DaD35LrD5ylBtf5klA%3D%3D&position=5&pageNum=17&trk=public_jobs_jserp-result_search-card," ONE ",https://www.linkedin.com/company/oneapp?trk=public_jobs_topcard-org-name," Miami, FL "," 2 weeks ago "," Be among the first 25 applicants "," About ONE ONE is on a mission to help people live healthier financial lives. We’re doing this by creating simple solutions to help our customers save, spend, and grow their money – all in one place. The U.S. consumer today deserves better. Millions of Americans today can’t access credit, build savings or wealth, and are left to manage their financial lives through multiple disconnected apps. Almost a quarter of U.S. 
adults are unbanked or underbanked and roughly 80% of fintech users rely on multiple accounts to manage their finances. What makes us unique? We are backed by a preeminent fintech investor (Ribbit) and the world’s largest retailer (Walmart), maintain the speed and independence of a startup, and employ a strong (and growing) collection of world-class talent. There’s never been a better moment to build a business that helps people live healthier financial lives. Come build with us! The role As an Engineer, Data, your mandate is to build the data transformation pipelines & reporting infrastructure that serve our members and run our company. This role will impact ONE’s vision by writing, reviewing, and shipping code and collaborating with others across the company. You will work closely across the Engineering, Product, Design as well as Sales, Compliance and Customer Support team(s) working with stakeholders as highly technical, communicative, and emotionally intelligent partners. This role reports to Manager, Data Engineering. This role is responsible for: Building and maintaining ONE's data infrastructure. Developing and owning ONE's streaming data transformation pipeline. Managing and supporting reporting and analytics tools. Collaborating closely with our Data Analysts and Data Scientists. Tracking and defining metrics around performance. Additional duties as assigned by your manager. You bring Mid Career (5-10 Years) Experience in Apache Spark, Scala, Python, SQL. Experience designing and implementing low latency data pipelines using Spark structured streaming. Experience optimizing SQL queries and lake / warehouse data structures. Cost-conscious creative problem solving. Proficient in AWS cloud services and technologies. Experience building, shipping — and growing — non-trivial products & services. Passion for your craft, people, and our mission. You value quality across code, communication, and culture. A desire to keep growing your skills, and an ability to learn quickly. 
We hire for slope, not just y-intercept. Pay Transparency The estimated annual base salary for this position ranges from $175,000 to $190,000. Pay is generally based upon the level, complexity, responsibility, and job duties/requirements of the specific position. We then source candidates with the requisite skills, expertise, education, training, and experience. If you are selected for an interview, please feel welcome to speak to a Talent Partner about our compensation philosophy and other available benefits. Leveling Philosophy In order to thoughtfully scale the company and avoid downstream inequities, we’ve adopted a flat titling structure at ONE. Though we may occasionally post a role externally with a prefix such as “Senior” to reflect the external level of the position, we do not use prefixes in titles like that internally unless in a position which manages a team. Internal titles typically include your specific functional responsibility, such as engineering, product management or sales, and often include additional descriptors to ensure clarity of role and placement within our organization (i.e. “Engineer, Platform”, “Sales, Business Development” or “Manager, Talent”). Employees are paid commensurate with their experience and the internal level within ONE. What it's like working @ ONE Our teams collaborate remotely and in our work spaces in New York and Sacramento. Competitive cash Benefits effective on day one Early access to a high potential, high growth fintech Generous stock option packages in an early-stage startup Remote friendly (anywhere in the US) and office friendly - you pick the schedule Flexible time off programs - vacation, sick, paid parental leave, and paid caregiver leave 401(k) plan with match Inclusion & Belonging To build technology and products that are used and loved by people and solve real-world problems, we need to build a team with many different perspectives and experiences. 
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We encourage candidates from all backgrounds to apply. Applicants in need of special assistance or accommodation during the interview process or in accessing our website may contact us. Email talent@one.app with any questions. "," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer/Data Scientist/Machine Learning Engineer,https://www.linkedin.com/jobs/view/data-engineer-data-scientist-machine-learning-engineer-at-walletconnect-3519227952?refId=sxKgaaQf5SyYhA1EIysMQw%3D%3D&trackingId=aHjE1lu7AT%2B4XEQCrsQbqA%3D%3D&position=6&pageNum=17&trk=public_jobs_jserp-result_search-card," WalletConnect ",https://www.linkedin.com/company/walletconnect-inc?trk=public_jobs_topcard-org-name," New York, NY "," 3 days ago "," 27 applicants ","WalletConnect is the open-source web3 standard to connect blockchain wallets to dapps. Any wallet, any dapp, any chain. Starting in 2018, our mission is to make web3 accessible to everyone. Every month, millions of people use WalletConnect in thousands of integrations. We raised ~$25M from venture investors including 1kx and Coinbase as well as from customers such as Shopify and Circle. We are growing fast both in terms of features and users. To learn more about our plans to create a multi-API messaging network for web3, take a look at our presentation at EthCC. The Role We are looking for someone who can help mature our existing data applications and innovate new use cases. Our existing setup is data lake/warehouse with data science/bi applications as customers. We use DBT/Athena/Glue (Spark)/RDS/Python/Terraform. 
Data is business critical for us and this quarter we need to improve our monitoring and build new use cases over the existing plumbing. We’re looking for someone who can act as a leader in our data team and help set and fulfill the vision. Responsibilities Design and implement end-to-end data pipelines, and work closely with stakeholders to build instrumentation and define data models Take ownership of monitoring to ensure high availability of our data tooling Build actionable production-quality internal dashboards, with well defined KPI’s Work with the Cloud team to create customer-focused dashboards to help customers understand their WalletConnect usage. Partner closely with product, engineering and other business stakeholders to influence product and business decisions with data Influence leadership to drive more data-informed decisions Requirements Must have: 3+ years of experience as a Data engineer/scientist. Extensive experience with Python/Spark/databases Deep understanding of advanced SQL techniques, dimensional modeling and scaling ETL pipelines Experience with applied statistics and quantitative modeling Demonstrated knowledge in monitoring data systems Nice To Have Terraform knowledge Domain experience in crypto and SaaS would be a plus Knowledge of Rust and TypeScript (help on data ingestion pipeline) Demonstrated ability to translate analytical insights into clear recommendations and effectively communicate them to technical and non-technical stakeholders Strong background in statistical modeling such as regression, survival and cohort analysis, time series forecasting, etc. 
Experience in the AWS Glue/Athena toolsuite, Spark, Presto Benefits Fully remote position with flexible timezone (CET/EST preferred) Competitive salary Company equity Remote work allowance Token offering"," Entry level "," Full-time "," Information Technology "," Information Technology & Services " Data Engineer,United States,Data Engineer II,https://www.linkedin.com/jobs/view/data-engineer-ii-at-equiliem-3524902932?refId=sxKgaaQf5SyYhA1EIysMQw%3D%3D&trackingId=TGW9NShFZHlSCOPUzRr4qw%3D%3D&position=7&pageNum=17&trk=public_jobs_jserp-result_search-card," Equiliem ",https://www.linkedin.com/company/equiliem?trk=public_jobs_topcard-org-name," Philadelphia, PA "," 2 hours ago "," Be among the first 25 applicants ","Hours: 40 hours per week. 8 hours per day. 5 days per week. Job Description As a Data Engineer, you will be responsible for driving the analytics product life cycle, expanding our analytics products while optimizing our data architecture, and developing best practices and governance for administrative reporting, data visualization, and data flow for cross-functional teams. We are looking for a candidate who is experienced in all aspects of data from development to implementation. As the Data Engineer II, you will support various stakeholders and ensure consistent optimal product delivery throughout ongoing projects. You will also support non-technical colleagues in collecting and appropriately using administrative data. The ideal candidate must be self-directed, comfortable supporting the data needs of multiple teams, systems, and products. Job Responsibilities Conduct data modeling by evaluating structured and unstructured data and determining the most appropriate schema for new fact tables, data marts, etc. Collaborate with colleagues across the enterprise to scope requests, extract data from various data sources, validate results, create relevant data visualizations, and share with the requester in Tableau. 
Develop dashboards and automate refreshes as appropriate in Tableau Server. Adhere to and contribute to data governance standards and educate and support colleagues in best practices to ensure that data is used appropriately. Collaborate and act as the voice of the customer to offer concrete feedback and project requests as well as an advocate for analytics from within the business units themselves. Assemble large, complex data sets that meet functional/non-functional business requirements. Identify, design, and implement internal process improvements, such as automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources (including ground, hybrid cloud, and cloud) using SQL and various programming technologies. Develop analytics tools that utilize data resources to provide actionable insights, operational efficiency, and other key business performance metrics. Work with stakeholders including the Executive, Administrative, and Analyst teams to assist with data-related technical issues and support their data infrastructure needs. Develop optimized tools for analytics and data scientist team members that assist them in building and optimizing projects into innovative industry leaders. 
Education Requirement Preferred Qualifications: Tableau certification and a strong portfolio on Tableau Public Experience with other data visualization tools like Power BI, QlikView, or Domo Knowledge of ETL processes and tools such as Informatica or Talend At least 5 years of experience in Data Engineering, Business Intelligence, or Data Warehousing Consulting experience is a plus Preferred Education Master's degree in Computer Science, Data Science, Information Systems, or another quantitative field level 2a - fully remote- no covid vaccine required Knowledge, Skills, Abilities Job Responsibilities: Experience in data analysis, design, and development using Tableau Strong understanding of data modeling, data warehousing, and data integration Proficient in SQL for data retrieval, manipulation, and analysis Strong communication and collaboration skills, able to work effectively in a team environment Self-motivated and able to work independently Experience integrating predictive and prescriptive models into applications and processes Develop processes supporting data transformation, data structures, metadata, dependency, and workload management Perform root cause analysis on internal and external data and processes to identify opportunities for improvement Skills Strong analytic skills for structured and unstructured datasets Critical thinking and creative problem-solving skills, ability to communicate with stakeholders Project management and organizational skills Experience with relational SQL and NoSQL databases such as IBM PDA (Netezza), MS SQL Server, and HBase Experience with data integration tools such as Informatica, MS Integration Services, and Sqoop Experience with API consumption and building Knowledge of object-oriented programming languages such as Python, Java, C++, and Scala Familiarity with statistical data analysis tools like R, SAS, and SPSS Proficiency in visual analytics tools including QlikView, Tableau, and Power BI Familiarity with Agile 
methodology for development."," Entry level "," Contract "," Information Technology "," Staffing and Recruiting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-soni-3521486098?refId=sxKgaaQf5SyYhA1EIysMQw%3D%3D&trackingId=TYifeWcGZ%2BFVqdk2VC1FGA%3D%3D&position=8&pageNum=17&trk=public_jobs_jserp-result_search-card," Soni ",https://www.linkedin.com/company/soni-resources-group?trk=public_jobs_topcard-org-name," New York City Metropolitan Area "," 1 day ago "," Be among the first 25 applicants ","A global leader in construction is growing their data organization -- they have just started building out a data platform to help the organization access data and gather insights for business growth + optimization. The Sr Data Engineer is a critical hire for building + maintaining the data environment. This is a greenfield opportunity in a double digit billion dollar organization with a wealth of data. This is a hybrid role, requiring up to 5x/week onsite in midtown NYC office. Salary: $150-180K Responsibilities: Minimum 5 years experience as a Data Engineer (someone growing into architecture, but currently hands-on individual-contributing engineer) Expertise in SQL + Python coding Experienced with multiple data analytics platforms, can approach problems from multiple angles, has implemented/delivered on data platforms Strong understanding of data governance fundamentals Expert at building pipelines and transformations using Python, Pandas, Spark cluster, BigQuery, Oracle + SQL (MySQL/Postgres) Must understand Agile process, CI/CD, code repositories, test automation. Compensation: $150,000 - 180,000 Salary is based on a range of factors that include relevant experience, knowledge, skills, other job-related qualifications. 
*Job title may differ on our career portal"," Associate "," Full-time "," Information Technology and Engineering "," IT Services and IT Consulting and Information Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-mercer-advisors-3509121888?refId=sxKgaaQf5SyYhA1EIysMQw%3D%3D&trackingId=CM4W%2FtouJzWyQQZE94GWxw%3D%3D&position=9&pageNum=17&trk=public_jobs_jserp-result_search-card," Mercer Advisors ",https://www.linkedin.com/company/mercer-advisors?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," 142 applicants ","Mercer Advisors is a different kind of wealth management firm. We exist so that our clients don’t have to worry about money. Our firm was founded in 1985, on the belief that families at all wealth levels would benefit from a fully unified approach to managing their money – “A family office for your family.” We connect the dots of our clients’ financial lives by unifying planning, investing, taxes, estate, insurance, trust, and more. Today, we proudly serve over 25,000 families, across over 90 cities, with over $45 Billion in assets entrusted to our care. And we do this as an independent, national fiduciary – which means we are committed to always working in the clients’ best interest. When you join our team, you will find that it is different from what you typically see in our industry. Our client-facing professionals of in-house experts are 50% women, as is our overall employee base. We bring together the best talent wherever they live – with no formal headquarters, and many flexible working arrangements – so we can assemble the best team. Job Summary: We are looking for a Data Engineer to join our growing technology team. The Data Engineer will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. 
Essential Job Functions for the Data Engineer will include: Creating and maintaining optimal data pipeline architecture. Enterprise pipeline orchestration set up administration (AWS Glue, Airflow, Fivetran) Assembling large, complex data sets that meet functional & non-functional business requirements. Identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability. Building the infrastructure required for optimal extraction, transformation and loading of data from a wide variety of data sources using relevant technologies. Working with stakeholders to assist with data-related technical issues and support their data infrastructure needs. Securing the data within our environment. Working with our analytics experts to strive for greater functionality in our data systems. Building processes supporting data transformation, data structures, metadata, dependency, and workload management. Maintaining our data model. Other duties as assigned. Required Knowledge, Skills, and Abilities: Bachelor’s degree in a relevant field is required. Minimum 5 years’ experience in Data Engineering is required. Advanced working SQL knowledge and experience working with relational databases is required. Experience working with SQL Server is required. Experience working with Python for at least 3 years is required. Experience building and optimizing data pipelines, architectures, and data sets is required. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement is required. Experience supporting and working with cross-functional teams in a dynamic environment. Knowledge of Salesforce data structure is preferred. Knowledge of working in Azure is preferred. Working Conditions: Professional office environment. Working inside. Standing and sitting. 
Will be assigned to a workstation. No heavy lifting over 25 lbs. Benefits: Mercer Advisors offers a competitive and robust benefit package to our employees. Our benefit programs are focused on meeting all of our employees and their eligible dependents health and welfare needs. We offer the following: Company Paid Basic Life & AD&D Insurance Company Paid Short-Term and Long-Term Disability Insurance Supplemental Life & AD&D; Short-Term Disability; Accident; Critical Illness; and Hospital Indemnity Insurance Three medical plans offerings including two High Deductible Health Plans and a Traditional Co-Pay medical plan Health Savings Account (HSA) with company contributions on a per pay period basis if enrolled in either HDHP medical plan Two comprehensive Dental Plans Vision Insurance Plan Dependent Care Savings Account for child and dependent care 14 Company Paid Holidays with a full week off at Thanksgiving Generous paid time off program for vacation and sick days Employee Assistance Plan Family Medical Leave Paid Parental Leave (6 weeks) Maternity benefits utilizing company paid STD, any supplemental STD, plus Parental Leave (6 weeks) to provide time for recovery, baby bonding, and enjoying your family time Adoption Assistance Reimbursement Program Company Paid Concierge Services for you and your loved ones for the spectrum of caring needs for your aging parents, young children, life’s challenges and more 401(k) Retirement Plan with both Traditional and Roth plans with per pay period match Pet Insurance We are not accepting unsolicited resumes from agencies and/or search firms for this job posting. 
Mercer Advisors provides equal employment opportunity to all applicants and employees without regard to age, color, disability, gender, marital status, national origin, race, religion, sexual orientation, gender identity and expression, physical or mental disability, genetic predisposition or carrier status, or any other characteristic protected by law in accordance with all applicable federal, state, and local laws. Mercer Advisors provides equal employment opportunity in all aspects of employment and employee relations, including recruitment, hiring, training and development, promotion, transfer, demotion, termination, layoff, compensation, benefits, and all other terms, conditions, and privileges of employment in accordance with applicable federal, state, and local laws."," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-cbts-3488687401?refId=sxKgaaQf5SyYhA1EIysMQw%3D%3D&trackingId=4SJKhAkCOWc5qC%2FW9DpM0A%3D%3D&position=10&pageNum=17&trk=public_jobs_jserp-result_search-card," Mercer Advisors ",https://www.linkedin.com/company/mercer-advisors?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," 142 applicants "," Mercer Advisors is a different kind of wealth management firm. We exist so that our clients don’t have to worry about money. Our firm was founded in 1985, on the belief that families at all wealth levels would benefit from a fully unified approach to managing their money – “A family office for your family.” We connect the dots of our clients’ financial lives by unifying planning, investing, taxes, estate, insurance, trust, and more. Today, we proudly serve over 25,000 families, across over 90 cities, with over $45 Billion in assets entrusted to our care. 
And we do this as an independent, national fiduciary – which means we are committed to always working in the clients’ best interest. When you join our team, you will find that it is different from what you typically see in our industry. Our client-facing professionals of in-house experts are 50% women, as is our overall employee base. We bring together the best talent wherever they live – with no formal headquarters, and many flexible working arrangements – so we can assemble the best team. Job Summary: We are looking for a Data Engineer to join our growing technology team. The Data Engineer will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. Essential Job Functions for the Data Engineer will include: Creating and maintaining optimal data pipeline architecture. Enterprise pipeline orchestration set up administration (AWS Glue, Airflow, Fivetran) Assembling large, complex data sets that meet functional & non-functional business requirements. Identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability. Building the infrastructure required for optimal extraction, transformation and loading of data from a wide variety of data sources using relevant technologies. Working with stakeholders to assist with data-related technical issues and support their data infrastructure needs. Securing the data within our environment. Working with our analytics experts to strive for greater functionality in our data systems. Building processes supporting data transformation, data structures, metadata, dependency, and workload management. Maintaining our data model. Other duties as assigned. Required Knowledge, Skills, and Abilities: Bachelor’s degree in a relevant field is required. Minimum 5 years’ experience in Data Engineering is required. Advanced working SQL knowledge and 
experience working with relational databases is required.Experience working with SQL Server is required.Experience working with Python for at least 3 years’ is required.Experience building and optimizing data pipelines, architectures, and data sets is required.Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement is required.Experience supporting and working with cross-functional teams in a dynamic environment.Knowledge of Salesforce data structure is preferred.Knowledge of working in Azure is preferred.Working Conditions:Professional office environment. Working inside. Standing and sitting. Will be assigned to a workstation. No heavy lifting over 25 lbs.Benefits:Mercer Advisors offers a competitive and robust benefit package to our employees. Our benefit programs are focused on meeting all of our employees and their eligible dependents health and welfare needs. We offer the following:Company Paid Basic Life & AD&D Insurance Company Paid Short-Term and Long-Term Disability InsuranceSupplemental Life & AD&D; Short-Term Disability; Accident; Critical Illness; and Hospital Indemnity InsuranceThree medical plans offerings including two High Deductible Health Plans and a Traditional Co-Pay medical planHealth Savings Account (HSA) with company contributions on a per pay period basis if enrolled in either HDHP medical planTwo comprehensive Dental PlansVision Insurance PlanDependent Care Savings Account for child and dependent care14 Company Paid Holidays with a full week off at ThanksgivingGenerous paid time off program for vacation and sick daysEmployee Assistance PlanFamily Medical LeavePaid Parental Leave (6 weeks)Maternity benefits utilizing company paid STD, any supplemental STD, plus Parental Leave (6 weeks) to provide time for recovery, baby bonding, and enjoying your family timeAdoption Assistance Reimbursement ProgramCompany Paid Concierge Services for 
you and your loved ones for the spectrum of caring needs for your aging parents, young children, life’s challenges and more. 401(k) Retirement Plan with both Traditional and Roth plans with per pay period match. Pet Insurance. We are not accepting unsolicited resumes from agencies and/or search firms for this job posting. Mercer Advisors provides equal employment opportunity to all applicants and employees without regard to age, color, disability, gender, marital status, national origin, race, religion, sexual orientation, gender identity and expression, physical or mental disability, genetic predisposition or carrier status, or any other characteristic protected by law in accordance with all applicable federal, state, and local laws. Mercer Advisors provides equal employment opportunity in all aspects of employment and employee relations, including recruitment, hiring, training and development, promotion, transfer, demotion, termination, layoff, compensation, benefits, and all other terms, conditions, and privileges of employment in accordance with applicable federal, state, and local laws. "," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-rei-systems-3519961427?refId=sxKgaaQf5SyYhA1EIysMQw%3D%3D&trackingId=Ddc9MPH8JkfO8hX9XppdMw%3D%3D&position=11&pageNum=17&trk=public_jobs_jserp-result_search-card," REI Systems ",https://www.linkedin.com/company/rei-systems?trk=public_jobs_topcard-org-name," Dulles Town Center, VA "," 3 weeks ago "," Be among the first 25 applicants ","REI Systems provides reliable, effective, and innovative technology solutions that advance federal, state, local, and nonprofit missions. Our technologists and consultants are passionate about solving complex challenges that impact millions of lives. 
We take a Mindful Modernization approach in delivering our application modernization, grants management systems, government data analytics, and advisory services. Mindful Modernization is the REI Way of delivering mission impact by aligning our government customers’ strategic objectives to measurable outcomes through people, processes, and technology. Learn more at REIsystems.com. Employees voted REI Systems a Washington Post Top Workplace in 2015, 2016, 2018, 2020, 2021 and 2022! As a Senior Data Engineer, you will/may: Monitor and troubleshoot operational or data issues in the data pipelines. Develop code-based automated data pipelines able to process millions of data points. Improve database and data warehouse performance by tuning inefficient queries. Work collaboratively with Business Analysts, Data Scientists, and other internal partners to identify opportunities/problems. Provide assistance to the team with troubleshooting, researching the root cause, and thoroughly resolving defects in the event of a problem. Required Qualifications: Expertise in Python. Experience in Data Pipeline development and Data Cleansing. Can articulate the basic differences between datatypes (e.g. JSON/NoSQL, relational). Understand the basics of designing and implementing a data schema (e.g., normalization, relational model vs dimensional model). 5 yr. experience with data mining and data transformation. 5 yr. experience with database and/or data warehouse. 5 yr. experience with SQL. 5 yr. experience with Python, Spark (PySpark), Databricks, AWS, Azure. Preferred Qualifications: Experience building code-based data pipelines in production able to process big datasets. Knowledge of writing and optimizing SQL queries with large-scale, complex datasets. 
Industry certifications including Databricks and AWS. Experience with Spark MLlib and applying existing machine learning algorithms against data lakehouses to drive insight and predictive capabilities. Experience with data mining and data transformation. Experience with database and/or data warehouse. Experience building data pipelines or automated ETL processes. Experience with Tableau. Education: Bachelor’s degree in computer science, data analytics, business intelligence, economics, statistics, or mathematics. Clearance: US Citizen able to obtain Public Trust. Certification(s): AWS & Oracle certification is preferred. Location/Remote: Hybrid - Sterling, VA / Washington, DC. Covid Policy Disclosure: Should the essential functions of this position require that the employee performing this role work on-site at REI’s Sterling location the following requirements will apply: the individual holding this position must be fully vaccinated, as defined in CDC guidance, as a condition of continued employment. REI will consider requests to be excused from this policy whenever necessary to comply with legal requirements and will consider any requests for reasonable accommodations due to a disability, religion, or other exemptions on an individual basis in accordance with applicable legal requirements. Employees and applicants requesting accommodations should request the accommodation in writing and should explain in detail the reasons why they are seeking an accommodation. REI will request additional information or documentation it deems necessary to inform its decision on an employee’s or applicant’s accommodation request. 
REI Systems is an Equal Opportunity Employer (Minority/Female/Disability/Vet)"," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-mutiny-3518634878?refId=sxKgaaQf5SyYhA1EIysMQw%3D%3D&trackingId=8RIYKMFvQgpHLjDycfH63w%3D%3D&position=12&pageNum=17&trk=public_jobs_jserp-result_search-card," Mutiny ",https://www.linkedin.com/company/mutinyhq?trk=public_jobs_topcard-org-name," United States "," 1 month ago "," 32 applicants ","Growth is the top priority for every company. Yet the best tools and practices are restricted to a few elite companies who can hire hundreds of engineers and data scientists. Everybody else loses $19 out of every $20 on generic marketing. Mutiny is building a no-code platform that helps companies convert that waste into actual customers and revenue by personalizing experiences based on who is actually viewing. Our greatest differentiator comes from the machine learning models we’ve built that learn across our platform and help our customers make the content changes to convert their prospects. This problem has not been solved in a decade and we are building an iconic company that will automate growth engineering for every company. That's why Sequoia, YCombinator, and CMOs of companies such as Snowflake and Airbnb invested. Mutiny is beloved by some of the fastest growing companies including Notion, Brex, Carta, Segment, Algolia and Qualtrics. We have quadrupled revenue year-over-year with even more on the horizon! The role in a nutshell We are looking for our first data engineer to design the data architecture that will inform strategic business decisions and improve our product for years to come. You will bring together several sources of product and company data into a cohesive and comprehensive data warehouse. 
You will partner with our finance, data science, and product teams to build ETLs, a shared data vocabulary, and meaningful data abstractions. This is a unique opportunity to be the first in a new function at a scaling startup and build a foundation to scale our data organization. What you’ll do at Mutiny: Build and maintain scalable ETL pipelines to efficiently transform data from our source systems into our data warehouse/lake Design our data architecture and partner with our infrastructure team to make it a reality Define core data concepts to establish common data-vocabulary Partner with finance and business operations team to build BI layer Support product and data science team to build data infrastructure to drive in-app experiences What you’ll get out of it: You are joining a rocketship! We are backed by Sequoia Capital, Y Combinator and CMOs from some of today's fastest-growing tech companies including AngelList, Carta, Gong, Hopin, Salesforce, and Snowflake. We are growing incredibly fast and about to hit another inflection point. The potential is unreal. Join and you’ll see what we mean. You will create a name for yourself by bringing innovative data science solutions never before seen in this category. You will get exposure to real business problems every company faces (growth) that you can take with you to start your own company (or to help scale another). You will have fun, plain and simple. There is a reason our first company value is that work should feel like play. You will experience a new way of working. Our team is fully distributed across the US and EU. But we come together as a company for quarterly offsites (usually in super fun places). This combination of experience-based working is a competitive advantage we plan on leaning into. 
What we’re looking for: 5+ years’ experience building and maintaining complex and scalable ETL pipelines. Experience with cloud-based infrastructure (GCP, AWS, etc.). Experience as a senior member of a team, partnering with cross-functional teams. Experience translating business requirements into effective schemas & ETL pipelines. Demonstrated proficiency in Python & SQL. Experience working on user-facing products (not solely internal data engineering). Experience collaborating with data scientists on projects. We are fully remote and offer H-1B sponsorship. Mutiny does not accept agency-submitted candidates for this posting."," Entry level "," Full-time "," Information Technology "," Software Development " Data Engineer,United States,Data Analytics Engineer,https://www.linkedin.com/jobs/view/data-analytics-engineer-at-state-farm-3497523619?refId=sxKgaaQf5SyYhA1EIysMQw%3D%3D&trackingId=HCgg2%2BuaW71cIUCbF8aPjQ%3D%3D&position=13&pageNum=17&trk=public_jobs_jserp-result_search-card," State Farm ",https://www.linkedin.com/company/state_farm?trk=public_jobs_topcard-org-name," Greater Phoenix Area "," 2 weeks ago "," Be among the first 25 applicants "," Job Description Overview: We are not just offering a job but a meaningful career! Come join our passionate team! As a Fortune 50 company, we hire the best employees to serve our customers, making us a leader in the insurance and financial services industry. State Farm embraces diversity and inclusion to ensure a workforce that is engaged, builds on the strengths and talents of all associates, and creates a Good Neighbor culture. We offer competitive benefits and pay with the potential for an annual financial award based on both individual and enterprise performance. Our employees have an opportunity to participate in volunteer events within the community and engage in a learning culture. 
We offer programs to assist with tuition reimbursement, professional designations, employee development, wellness initiatives, and more! Visit our Careers page for more information on our benefits, locations and the process of joining the State Farm team! Responsibilities: As a Data Engineer you will: Work closely with data scientists and business experts to develop modeling solutions for actuarial and underwriting business problems. Build and maintain data pipelines for the development, implementation, execution, validation, monitoring, and improvement of data science solutions. Establish business domain knowledge for State Farm data sources. Investigate, recommend, and initiate acquisition of new data resources from internal and external data sources. Identify critical and emerging technologies, techniques, tools, data sources, and platforms in the data engineering field, including cloud-based solutions, that support and extend quantitative analytic deployment solutions. Qualifications: We are looking for candidates who have: Required Skills: Experience in programming languages such as Python, SAS, R. Experience with any open-source database such as PostgreSQL, MySQL, etc. Experience with developing solutions on AWS or other distributed compute platforms. Ability to learn and adopt new technologies and languages. Critical thinking skills to challenge current thinking and apply the right technology to solve problems. Bachelor’s Degree in Computer Science, Software Engineering, or related field. Preferred Skills: Experience with the Model Building Lifecycle. Experience with CI/CD systems, preferably with GitLab, Jenkins, or AWS CodeDeploy. Experience using deployment automation technologies, preferably Terraform. Experience with P&C Insurance Data. Applicants are required to be eligible to lawfully work in the U.S. immediately; employer will not sponsor applicants for U.S. work authorization (e.g. 
H-1B visa) for this opportunity. Office Location: Corporate office located in Bloomington, IL, OR State Farm Hubs: Richardson, TX; Dunwoody, GA; Phoenix, AZ. SFARM#JOAS "," Entry level "," Full-time "," Information Technology "," Insurance " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-brooksource-3511573227?refId=sxKgaaQf5SyYhA1EIysMQw%3D%3D&trackingId=B6oRbJP22CLDY4jWGS6Omg%3D%3D&position=14&pageNum=17&trk=public_jobs_jserp-result_search-card," Brooksource ",https://www.linkedin.com/company/brooksource?trk=public_jobs_topcard-org-name," Cincinnati Metropolitan Area "," 1 week ago "," Over 200 applicants ","Data Engineer. Long Term Contract. Hybrid in Downtown Cincy. Must Have Skills: SQL, Relational Database Management, DB2, Data management experience, Power BI. Nice to Have: Snowflake. ABOUT EIGHT ELEVEN: At Eight Eleven, our business is people. Relationships are at the center of what we do. A successful partnership is only as strong as the relationship built. We’re your trusted partner for IT hiring, recruiting and staffing needs. For over 20 years, Eight Eleven has established and maintained relationships that are designed to meet your IT staffing needs. Whether it’s contract, contract-to-hire, or permanent placement work, we customize our search based upon your company's unique initiatives, culture and technologies. With our national team of recruiters placed at 28 major hubs around the nation, Eight Eleven finds the people best suited for your business. When you work with us, we work with you. That’s the Eight Eleven promise. 
Eight Eleven Group provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, national origin, age, sex, citizenship, disability, genetic information, gender, sexual orientation, gender identity, marital status, amnesty or status as a covered veteran in accordance with applicable federal, state, and local laws."," Mid-Senior level "," Contract "," Engineering "," Banking " Data Engineer,United States,Data Engineer - Python/Pyspark,https://www.linkedin.com/jobs/view/data-engineer-python-pyspark-at-capstone-it-3514687203?refId=sxKgaaQf5SyYhA1EIysMQw%3D%3D&trackingId=%2FEY0MM24wWNOi8NriJmeaw%3D%3D&position=15&pageNum=17&trk=public_jobs_jserp-result_search-card," Capstone IT ",https://www.linkedin.com/company/capstone-it-omaha-kansas-city?trk=public_jobs_topcard-org-name," United States "," 6 days ago "," 119 applicants ","HOT NEED FOR A REMOTE DATA ENGINEER FOR A LONG TERM OPPORTUNITY REQUIRING PYTHON AND PYSPARK EXPERIENCE! We are looking for Sr. Data Engineers to design, develop, test, and maintain the scripts and jobs that are required to extract, transform, clean, and move data and metadata so that it can be loaded into a data warehouse, data mart, or operational data store. They will mentor less experienced developers and serve as project technical data leads for data projects, training, coaching, and mentoring team members. Duties and responsibilities include: DATA ANALYSIS * Maps source system data to data warehouse models or other file transformations (source to target mappings). * Works with business systems analysts and business team members to understand source systems data and data quality issues/performs data quality analysis. * Defines and captures metadata and rules associated with ETL processes. DATA DESIGN * Creates basic to larger physical designs of ETL processes. May be responsible for the higher-level logical design. 
For smaller projects, the physical design may be the only design. For larger projects, this may be based on a higher-level logical design. * Employs cost effective techniques in design of physical ETL processes to meet immediate and future business needs. * Conducts ETL process design / sizing estimates for data warehousing projects or other file transformations * Designs data solutions to satisfy business information needs and receive design acceptance. ETL DEVELOPMENT AND TESTING * Develops complex ETL processes. * Creates ETL processes that supports business functionality, performs well, and is easily supported. * Creates scripts when necessary. * Follows ETL standards, goals, and objectives. * Works with teammates to define and resolve complex data warehouse or file transformation project issues. * Adapts extraction, transformation, and load (ETL) processes to accommodate changes in source systems and new business user requirements * Completes work according to the iteration plan or release plan. * Participates in ETL Code Reviews, receives acceptance of code by peers at review. Responsible for code reviews for other developers. * Tests (unit level, integration level, system level, and user acceptance level) complex ETL processes. * Works with testing team to assist them with their testing processes. * Assists with testing upgrades; identifies errors or inconsistencies with vendors’ software applied to Company’s data environment. ETL CHANGE/CONFIGURATION MANAGEMENT AND PRODUCTION SUPPORT * Follows Change Management/Configuration Management processes for the deployment of code into test and production environments. * Leads teammates to define and resolve complex data warehouse or file transformation production issues. * Troubleshoots data problems. Identifies, coordinates and helps resolve the root cause of day to day data production issues. 
* Supports client areas in their analytical functions; corrects errors/resolves problems associated with data warehouse processes and other file transformations. * Provides rotational on-call support, responding quickly to production issues. Job Qualifications: Minimum of 6 years IT experience to include at least 5 years’ experience required in data warehousing, relational database management systems, multi-dimensional database management systems. Strong knowledge of Business Intelligence architecture and tools such as Hyperion, Business Objects, MicroStrategy, Cognos, SAS, or WebFocus is desirable. Strong knowledge of at least one ETL tool such as IBM InfoSphere DataStage (or comparable ETL tool) is required. Strong knowledge of Data Warehousing data population techniques for target structures such as Star Schemas, Snowflake Schemas, highly normalized data models, and file structures. Strong knowledge of source to target mappings. Strong relational database knowledge; possesses a broad understanding of the current and prospective data architecture, and database performance tuning. Experience using Python/Pyspark to build data pipelines required. Experience with programming languages such as UNIX scripting, Java, or COBOL is essential. Strong ETL staging environment development skills and strong SQL programming skills in a DB2 environment. Strong verbal and written communication skills. Has ability to establish and maintain effective working relationships with external and internal personnel. Must be customer focused and excel in coordination of problem resolution. Flexibility to work in a changing, fast-paced environment and work with a sense of urgency and accountability. Experience with Agile methodologies is desirable. Rotational On-Call support is expected. 
Ability to effectively provide work direction to team members and delegate tasks; ability to train, mentor and provide constructive feedback to team members to meet or exceed performance in their job responsibilities. Demonstrated project management skills are desirable. Highly desirable: Insurance Information Warehouse (IIW) and DataStage. Capstone Consulting is an EEO employer. Capstone website: http://www.capstonec.com/ Like us on Facebook: https://www.facebook.com/CapstoneITStaffingSolutions/ Follow us on Twitter: https://twitter.com/capstone__IT/ Connect with us on LinkedIn: https://linkedin.com/company/capstone-consulting/"," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-clifyx-3349164012?refId=sxKgaaQf5SyYhA1EIysMQw%3D%3D&trackingId=A1rrbnfUBBb7ICr2WKjb5g%3D%3D&position=16&pageNum=17&trk=public_jobs_jserp-result_search-card," ClifyX ",https://www.linkedin.com/company/clifyx?trk=public_jobs_topcard-org-name," Seattle, WA "," 5 months ago "," Be among the first 25 applicants "," We have a couple of new requirements which have opened up in Seattle, WA (Nordstrom). Skillset requirements: Spark, Python, PySpark, Azure Databricks (nice to have). This position requires good data engineering skills, with expertise in rewriting existing logic in PySpark and optimizing code for performance. Please share suitable candidates for these positions. Rates may be in the range of $85/hr. "," Entry level "," Contract "," Information Technology "," Staffing and Recruiting " Data Engineer,United States,Data Visualization Engineer,https://www.linkedin.com/jobs/view/data-visualization-engineer-at-blue-margin-inc-3525246093?refId=sxKgaaQf5SyYhA1EIysMQw%3D%3D&trackingId=40%2Bkw4%2FeAA4x1Vzi4d1PWg%3D%3D&position=17&pageNum=17&trk=public_jobs_jserp-result_search-card," Blue Margin, Inc. 
",https://www.linkedin.com/company/bluemargin?trk=public_jobs_topcard-org-name," Colorado, United States "," 1 day ago "," Be among the first 25 applicants ","Data Visualization Engineer I-III Blue Margin, Inc. is on a mission to build great places to work. We help companies improve company culture and consequently, performance. We believe remarkable things are possible when the whole team knows the score and is rowing in the same direction. Using Power BI as the primary catalyst, we partner with our clients to align employees, improve meetings, and increase employee satisfaction. Why are we looking?  We are expanding our Microsoft Power BI team and are looking for people who are flexible and capable of putting themselves in the client's shoes. We are looking for a clever, creative, data-savvy person to produce reports while working alongside a fantastic team of developers and consultants.  Our growth means we are looking for people who will help us, and our clients, build great places to work. It also means we provide an excellent opportunity for someone serious about learning and advancing their career.  We are seeking a candidate to work as a full-time employee in our local office in Fort Collins.  Please note that we are interested in every qualified candidate who is eligible to work in the United States. However, we are not able to sponsor visas. The Sample Report You Create Will Help Distinguish Your Skill Level We are interested in considering candidates for all three of our developer tiers. Data Visualization Engineer I Data Visualization Engineer II Data Visualization Engineer III Responsibilities:   Develop accurate reports in Power BI that are not only visually engaging, but also make customers’ data accessible and actionable.  
Regularly interact with clients for project updates and inquiries  Create, enhance, and troubleshoot data models in Power BI and Visual Studio  Author documentation of customer reporting requirements and finished reports  Craft and use T-SQL queries for data validation     Candidates MUST possess the following qualifications:  Working knowledge of Power BI Desktop and Power BI Service Ability to create DAX calculations Some familiarity with data modeling  Proficient ability to talk to executives  clearly and concisely in English Professional demeanor Desire to be a team player and do meaningful work Ideal Candidates Would Possess These Additional Qualifications 1-3 years of experience in Power BI Desktop creating tables, graphs, drill downs, drill throughs, bookmarks, and KPIs Working knowledge of Power BI Service and administration Ability to create intermediate to advanced DAX calculations using functions such as Calculate, Summarize and Filter Experience creating T-SQL queries in SSMS  Comprehensive grasp of data visualization methods  Experience using Visual Studio 2017/2019, DAX Studio, Tabular Editor, ALM Toolkit Familiarity with tabular data models Experience manipulating data in Power Query Editor Our Culture:  Company Core Values: Commit to Quality, Embrace Transparency, Choose to Be Positive, Be Efficient/Systematize, Pursue Learning, Be Generous  We believe that in-person interaction is vital to solid work relationships and our great place to work Weekly personal and professional development programs for all  Teamwork—we maintain company-wide interaction and communication  Entrepreneurism – we want everyone on our team to be eager to adapt and evolve with our advancing business. We are looking for someone who is comfortable wearing more than one hat.  Work Environment And Physical Requirements This position may require minimal physical effort including lifting materials and equipment of less than 10 pounds. 
This position requires viewing a computer screen more than 80 percent of the time. The job will take place in a normal office environment with controlled temperature and lighting conditions. This position may require some travel and occasional participation in off-site functions. Salary And Benefits The starting salary for a Level 1 position is between $65-85K and is commensurate with experience and qualifications. The starting salary for a Level II position is between $75-95K and is commensurate with experience and qualifications. The starting salary for a Level III position is between $85-105K and is commensurate with experience and qualifications. All positions come with a comprehensive benefits package consisting of medical and dental coverage, paid sick leave, vacation, disability/life insurance, and a retirement plan."," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-xorbix-technologies-inc-3521130561?refId=sxKgaaQf5SyYhA1EIysMQw%3D%3D&trackingId=cCki8f%2BHDx5GsauZPK0L5g%3D%3D&position=18&pageNum=17&trk=public_jobs_jserp-result_search-card," Xorbix Technologies, Inc. ",https://www.linkedin.com/company/xorbix-technologies?trk=public_jobs_topcard-org-name," Wisconsin, United States "," 3 days ago "," 154 applicants ","Summary: Working under minimal supervision, the Sr. Data Engineer will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross functional teams. The Data Engineer will support our software developers, data architects, data analysts on initiatives and will ensure optimal data delivery is consistent throughout ongoing projects. This position, in partnership with TPM, business partners/product owners, gathers information and analyzes needs to determine feasibility of client requests. 
This position also takes an active mentoring role and provides design scope and specifications to less experienced team members. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives. Main Job Responsibilities include: Create and maintain optimal data pipeline architecture. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Build the infrastructure required for extraction, transformation, and loading of data from a wide variety of data sources using SQL and GCP ‘big data’ technologies. Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics. Work with data and analytics experts to strive for greater functionality in our data systems. Assemble large, complex data sets that meet functional / non-functional business requirements. Explore and research new and alternate Big Data technologies and platforms. Evaluate feasibility and make recommendations, considering things such as customer requirements, time limitations, system limitations. Serve as a mentor to junior staff by conducting technical training sessions and reviewing project outputs. Build documentation repository for knowledge transfer and developing expertise in multiple areas. Provide operational support on complex/escalated issues to diagnose and resolve incidents in production data pipelines. Job Qualifications: Education, Experience, Knowledge and Skills: BS Degree or equivalent work experience in a software engineering discipline. Typically has 5+ years’ experience in an applicable software development environment. 
Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases (SQL, Postgres) Experience with diverse coding, profiling, and visualization approaches including authoring SQL queries, BigQuery, Python, Looker, Google Cloud or equivalent. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement. Strong analytic skills related to working with unstructured datasets. Hands on experience with Cloud Platforms (AWS, GCP, or Azure) Experience in designing and implementing large-scale event-driven architectures. Understanding of data warehousing and data modeling techniques Understanding of Big Data, Cloud, Machine Learning approaches and concepts (preferred) Experience working as a member of a distributed team. Ability to organize and coordinate with stakeholders across multiple functions and geographic locations. Ability to develop and write technical specifications. 
Coaching and teaching skills to mentor less experienced team members Excellent analytical and problem management skills Good interpersonal skills and a positive attitude Experience with the following tools and technologies: Elasticsearch, Kafka, Google Cloud Dataflow and Airflow (preferred) Python, Java, C++, BigQuery. Remote. This position is not eligible for sponsorship or C2C. Salary: $140,000-$150,000"," Mid-Senior level "," Full-time "," Information Technology "," Insurance and Real Estate " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-robert-half-3481686692?refId=sxKgaaQf5SyYhA1EIysMQw%3D%3D&trackingId=QwdBIzWp0%2FB50KqBjQoEyQ%3D%3D&position=19&pageNum=17&trk=public_jobs_jserp-result_search-card," Robert Half ",https://www.linkedin.com/company/robert-half-international?trk=public_jobs_topcard-org-name," Las Vegas, NV "," 3 weeks ago "," 193 applicants ","THIS IS A REMOTE ROLE FOR CANDIDATES IN NEVADA. The primary responsibility of the Senior Data Engineer is to create and support enterprise class data centric solutions, providing business value from corporate data assets. The Senior Data Engineer is expected to maintain expertise across an application technology stack (Microsoft SQL Server BI) as well as a technology domain (Data Management, BI/DW, MDM) and have the ability to embrace and leverage new technologies while working effectively with other information technology professionals and business users to ensure that the data solutions are stable, efficient and responsive to business needs. This person is expected to possess strong domain knowledge of gaming and hospitality and experience with how data technologies are best implemented to enhance these domains. The Senior Data Engineer must possess strong communication and people skills for assuming Technical Leadership roles for projects, including the ability to effectively present technical information to stakeholders and management. 
Essential Duties & Responsibilities Design, build, deliver and support Data Warehouse and ETL structures and solutions. Act as an internal consultant, providing architecture, vision, problem anticipation, and problem solving for the assigned project(s), as well as a Subject Matter Expertise for data and analytic users. Provide subject matter expertise for the data associated with gaming and hospitality systems. Provide L3 application and on-call support for data and integration solutions. Participate and at times lead project teams within an agile environment. Successfully engage in multiple initiatives concurrently, including application and on call support, minor projects, major projects, functional requirements, systems specifications and subject matter expertise. Prepare clear and concise documentation for delivered solutions and processes, integrating documentation with corporate knowledgebase. In partnership with the Product Management team, provide subject matter expertise for determining business requirements to address business opportunities or issues across business functions for data centric solutions. Maintain familiarity with technological trends and innovations in the data warehousing and data management domains. Interpret and satisfy business needs by building and enhancing systems to transform, cleanse and provision corporate data assets in a governed, secure and high performance manner. Provide subject matter expertise for data and analytics. Participate in the ongoing Data Management maturation process. Safety is an essential function of this job. Consistent and regular attendance is an essential function of this job. Minimum Qualifications 21 years of age. Proof of authorization/eligibility to work in the United States. Bachelor’s degree or equivalent in relevant discipline Must be able to obtain and maintain Nevada Gaming Control Board Registration and any other certification or license, as required by law or policy. 
8 years of ETL experience using SQL Server SSIS 2008R2-2016. 10 years of T-SQL programming. Must exhibit a high level of mastery of the Microsoft BI Stack, including: SSIS, SSRS, SSAS, SharePoint, PowerPivot, etc. Must understand Relational and Dimensional database modeling. Should be experienced in automated file handling with ETL tools via FTP and other means. Should have administrative experience with SQL Server 2008-2016. Prefer experience with Java or .NET development; C#, ASP.NET, scripting languages, and web site operations are a plus. Prefer experience with key concepts of Data Management such as Data Quality and MDM. Experience with Data Preparation for Data Science a plus. Experience with Big Data, Advanced Analytic techniques and Real Time data a plus. Must be willing and capable of adopting new technologies and paradigms such as Big Data technologies (Hadoop, Hive, etc.), event-based analytics, etc. Experience with analytic tools such as Spotfire, SAS, Cognos, Microsoft BI a plus. Prefer TFS and Agile experience (SCRUM). Prefer gaming and hospitality experience. Must be able to adhere to SOPs and methodologies, and be willing to improve them when necessary. Must have strong organizational skills, customer service focus, attention to detail, and process orientation. Good working knowledge of Windows/Enterprise Systems/Networking/Enterprise Security. Must be willing to provide 24 x 7 on-call support"," Not Applicable "," Full-time "," Information Technology "," Staffing and Recruiting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-bristol-myers-squibb-3490929319?refId=sxKgaaQf5SyYhA1EIysMQw%3D%3D&trackingId=%2FiCQntA1BZTUkv30GIuFrA%3D%3D&position=20&pageNum=17&trk=public_jobs_jserp-result_search-card," Bristol Myers Squibb ",https://www.linkedin.com/company/bristol-myers-squibb?trk=public_jobs_topcard-org-name," Seattle, WA "," 3 weeks ago "," Be among the first 25 applicants ","Working with Us Challenging. 
Meaningful. Life-changing. Those aren’t words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You’ll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams rich in diversity. Take your career farther than you thought possible. Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more careers.bms.com/working-with-us Work at the interface of pharma, genomics, and data engineering. The candidate will significantly contribute to the development of modern data services for BMS’s research scientists. Job Description Bristol Myers Squibb seeks a highly motivated Data Engineer to enable data integration efforts in support of data science and computational research efforts. The Data Engineer will be responsible for executing an ambitious digital strategy to support BMS’s predictive science capabilities in R&ED (Research & Early Development). The successful candidate will partner closely with computational researcher teams, IT leadership, and various technical functions to design and deliver data solutions that streamline access to computing & data, and help scientists derive insight and value from their research. Job Functions The role requires someone who can seamlessly mesh technical knowledge to help navigate R&D cloud, CoLo, and on-premise computing needs, including planning, infrastructure design, maintenance, and support. 
The role will lead the development of infrastructure that enables interoperability and comparability of data sets derived from different technologies and biological systems in the context of integrative data analysis. The candidate will create and maintain optimal data pipeline architectures that enable scientific workflow and collaborate with interdisciplinary teams of data curators, software engineers, data scientists and computational biologists as we test new hypotheses through the novel integration of emerging research data types. The work will combine careful resource planning and project management with hands-on data manipulation and implementation of data integration workflows. Responsibilities Include, But Are Not Limited To, The Following Designing and developing an ETL infrastructure to load research data from multiple source systems using languages and frameworks such as Python, R, Docker, Airflow, Glue, etc. Leading the design and implementation of data services solutions that may include relational, NoSQL and graph database components. Collaborating with project managers, solution architects, infrastructure teams, and external vendors as needed to support successful delivery of technical solutions. Experiences And Education Bachelor's Degree with 8+ years of academic / industry experience or master's degree with 6+ years of academic / industry experience or PhD with 3+ years of academic / industry experience in an engineering or biology field. Demonstrated high proficiency with current software engineering methodologies, such as Agile SDLC approaches, distributed source code control, project management, issue tracking, and CI/CD tools and processes. Excellent skills in an object-oriented programming language such as Python or R, and proficiency in SQL High degree of proficiency in cloud computing Solid understanding of container strategies such as Docker, Fargate, ECS and ECR. 
Excellent skills and deep knowledge of databases such as Postgres, Elasticsearch, Redshift, and Aurora, including distributed database design, SQL vs. NoSQL, and database optimizations Demonstrated high proficiency with current software engineering methodologies, such as Agile SDLC (Software Development Life Cycle) approaches, distributed source code control, project management, issue tracking, and CI/CD tools and processes. Strong technical communication skills. If you come across a role that intrigues you but doesn’t perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career. Uniquely Interesting Work, Life-changing Careers With a single vision as inspiring as “Transforming patients’ lives through science™ ”, every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in an inclusive culture, promoting diversity in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues. On-site Protocol Physical presence at the BMS worksite or physical presence in the field is a necessary job function of this role, which the Company deems critical to collaboration, innovation, productivity, employee well-being and engagement, and it enhances the Company culture. COVID-19 Information To protect the safety of our workforce, customers, patients and communities, the policy of the Company requires all employees and workers in the U.S. and Puerto Rico to be fully vaccinated against COVID-19, unless they have received an exception based on an approved request for a medical or religious reasonable accommodation. Therefore, all BMS applicants seeking a role located in the U.S. 
and Puerto Rico must confirm that they have already received or are willing to receive the full COVID-19 vaccination by their start date as a qualification of the role and condition of employment. This requirement is subject to state and local law restrictions and may not be applicable to employees working in certain jurisdictions such as Montana. This requirement is also subject to discussions with collective bargaining representatives in the U.S. BMS is dedicated to ensuring that people with disabilities can perform complex functions through a transparent recruitment process, reasonable workplace adjustments and ongoing support in their roles. Applicants can request an accommodation prior to accepting a job offer. If you require reasonable accommodation in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement. BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area. Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations. 
"," Entry level "," Full-time "," Information Technology "," Pharmaceutical Manufacturing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-costco-it-3519905708?refId=sxKgaaQf5SyYhA1EIysMQw%3D%3D&trackingId=og9GRuZ2kWzKlX4NI2bWkg%3D%3D&position=21&pageNum=17&trk=public_jobs_jserp-result_search-card," Costco IT ",https://www.linkedin.com/company/costco-weareit?trk=public_jobs_topcard-org-name," Dallas, TX "," 1 month ago "," Be among the first 25 applicants ","This is an environment unlike anything in the high-tech world and the secret of Costco’s success is its culture. The value Costco puts on its employees is well documented in articles from a variety of publishers including Bloomberg and Forbes. Our employees and our members come FIRST. Costco is well known for its generosity and community service and has won many awards for its philanthropy. The company joins with its employees to take an active role in volunteering by sponsoring many opportunities to help others. In 2021, Costco contributed over $58 million to organizations such as United Way and Children's Miracle Network Hospitals. Costco IT is responsible for the technical future of Costco Wholesale, the third largest retailer in the world with wholesale operations in fourteen countries. Despite our size and explosive international expansion, we continue to provide a family, employee centric atmosphere in which our employees thrive and succeed. As proof, Costco ranks seventh in Forbes “World’s Best Employers”. The Data Engineer is responsible for developing data pipelines and/or data integrations for Costco’s enterprise certified data sets that are used for business critical data consumption use cases (i.e. Reporting, Data Science/Machine Learning, Data APIs, etc.). At Costco, we are on a mission to significantly leverage data to provide better products and services for our members. 
This role is focused on data engineering to build and deliver automated data pipelines from a plethora of internal and external data sources. The Data Engineer will partner with product owners, data architects, and data platform teams to design, build, test, and automate data pipelines that are relied upon across the company as the single source of truth. If you want to be a part of one of the worldwide BEST companies “to work for”, simply apply and let your career be reimagined. ROLE Develops and operationalizes data pipelines to create enterprise certified data sets that are made available for consumption (BI, Advanced analytics, APIs/Services). Works in tandem with Data Architects, Data Stewards, and Data Quality Engineers to design data pipelines and recommends ongoing optimization of data storage, data ingestion, data quality and orchestration. Designs, develops, and implements ETL/ELT/CDC processes using Informatica Intelligent Cloud Services (IICS). Uses Azure services such as Azure SQL DW (Synapse), ADLS, Azure Event Hub, Cosmos, Databricks, Delta-Lake to improve and speed delivery of our data products and services. Implements big data and NoSQL solutions by developing scalable data processing platforms to drive high-value insights to the organization. Identifies, designs, and implements internal process improvements: automating manual processes, optimizing data delivery. Identifies ways to improve data reliability, efficiency, and quality of data management. Communicates technical concepts to non-technical audiences both in written and verbal form. Performs peer reviews for other data engineers’ work. Required 5+ years’ experience engineering and operationalizing data pipelines with large and complex datasets. 2+ years’ hands-on experience with Informatica IICS or other ETL tools. 3+ years’ experience working with Cloud technologies such as ADLS, Azure Databricks, Spark, Azure Synapse, Cosmos DB and other big data technologies. 
Extensive experience working with various data sources (DB2, SQL, Oracle, flat files (csv, delimited), APIs, XML, and JSON). Experience implementing data integration techniques such as event/message-based integration (Kafka, Azure Event Hub), ETL. Advanced SQL skills; solid understanding of relational databases and business data; ability to write complex SQL queries against a variety of data sources. 5+ years’ experience with Data Pipeline, ETL, and Data Warehousing. Strong understanding of database storage concepts (data lake, relational databases, NoSQL, Graph, data warehousing). Experience with Git / Azure DevOps. Able to work in a fast-paced agile development environment. Recommended Azure Certifications. BA/BS in Computer Science, Engineering, or equivalent software/services experience. Experience delivering data solutions through agile software development methodologies. Exposure to the retail industry. Excellent verbal and written communication skills. Experience working with SAP integration tools including BODS. Experience with Job scheduling and orchestration tools. Required Documents Cover Letter Resume California applicants, please click to review the Costco Applicant Privacy Notice. Pay Ranges Level 2 - $100,000 - $135,000 Level 3 - $125,000 - $165,000 We offer a comprehensive package of benefits including paid time off, health benefits - medical/dental/vision/hearing aid/pharmacy/behavioral health/employee assistance, health care reimbursement account, dependent care assistance plan, short-term disability and long-term disability insurance, AD&D insurance, life insurance, 401(k), stock purchase plan to eligible employees. Costco is committed to a diverse and inclusive workplace. Costco is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or any other legally protected status. 
If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request to IT-Recruiting@costco.com. If hired, you will be required to provide proof of authorization to work in the United States."," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-genome-medical-3500325229?refId=sxKgaaQf5SyYhA1EIysMQw%3D%3D&trackingId=SYpxvfTAmVdKHUhUGPK3mw%3D%3D&position=22&pageNum=17&trk=public_jobs_jserp-result_search-card," Genome Medical ",https://www.linkedin.com/company/genome-medical?trk=public_jobs_topcard-org-name," South San Francisco, CA "," 2 weeks ago "," 50 applicants ","At Genome Medical, we are dedicated to providing greater access to genetic care for our patients! Accelerate your career as the Data Engineer (DE) with Genome Medical. We are also committed to providing our employees the support they need. The base salary for this position is between $183k and $201k per year. Please note that the base salary range is a guideline, and individual total compensation will vary based on factors such as qualifications, skill level and competencies. We offer medical, dental and vision packages as well as many other perks to make your benefits package truly customizable to you and your family's needs. Some of our benefits for this position include: Flexible PTO Plan Paid Personal Leave Paternity & Maternity Leave 401k Match (after 90 days) Up to $500 remote office reimbursement (after 30 days) AD&D Short & Long-Term Disability HSA & FSA Medical and Dependent Care Life Insurance Pet Insurance And much MORE! This role may be right for you if: you are excited about working with the ""big picture of data"" and can be relied upon for prototyping and production. you possess a startup mindset and are flexible and adaptable to the changing needs of the team. 
you are motivated to drive efficiencies in the SDLC timeline and seek to make incremental improvements with new capabilities while balancing concurrent development in products and their overall architectural evolution. you are looking to work cross-functionally as a leader with sales, finance, product, and engineering teams to support growth throughout all stakeholder levels and partnerships. Position Summary: This position reports to the Senior Director, Software Engineering and will perform activities consistent with the position of a Data Engineer which will include but are not limited to: Designs and implements scalable robust and secure data lake and/or data warehouse solutions. Serves as the key point of contact from engineering to be able to communicate interdepartmentally and with all stakeholders the full end-end ownership of new features of our platform. Drives the development of new reporting functionalities for our genomics platform to enhance the business intelligence team. Adheres to flexible, simple, well-documented, and secure design of internal and external-facing services and is an enthusiastic supporter of it for the entire team. Enthusiastically participates in design and code reviews to further build a healthy, technical culture at Genome Medical. Dedicated to upholding company values and contributes to making Genome Medical a great place to work. Additional responsibilities as they relate to the position and needs of the department. REQUIREMENTS AND QUALIFICATIONS: Bachelor's degree (or equivalent experience) in computer science, engineering, statistics or a related field and 4-5 years of software development experience with full knowledge of the software development life cycle (SDLC) is required. Excellent communication skills (verbal, written and listening) with ability to collaboratively engage with all levels of stakeholders and teammates in a solution-oriented, remote environment. Strong analytical and organizational mentality. 
Knowledge of the Genetics/Genomics or healthcare industries is a plus. Working knowledge and experience with the business intelligence analytics tool Looker is preferred. 2+ years of experience building data warehousing technical components such as ETL, ELT, databases and reporting required. 4+ years of experience writing production backend code in Python on Linux systems required. 3+ years of integrating datasets between multiple microservices using one or more of SQL/noSQL databases required. 2+ years of experience designing/developing high performance ETL systems on top of queueing technology components like Kafka, RabbitMQ, Apache Storm is a plus Experience building backends of machine learning pipelines is a plus. Knowledge of software engineering practices including coding standards, code reviews, SCM, CI/CD in a containerized environment (e.g. Docker) About Genome Medical: Genome Medical™ is a genomics technology, services, and strategy company. As a nationwide virtual medical practice, we are dedicated to bringing genome-enabled health care to everyone through our extensive network of genetic specialists. Founded by personalized medicine pioneers Dr. Randy Scott, Dr. Robert Green, and Lisa Alderson, our goal is to bridge the growing gap between available genome technology and current medical practice. As genetic information becomes increasingly important in medicine, there are too few experts to meet the growing demand for interpretation. We are addressing this challenge by creating a scalable, efficient model for lifelong genome-centered health care. To learn more, please visit www.genomemedical.com or find us @GenomeMed on Twitter. Genome Medical is proud to be an equal opportunity employer and committed to creating a workforce that encompasses all cultures and differences. We celebrate our employees' differences, regardless of race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, or Veteran status. 
Information collected and processed as part of your Genome Medical profile, and any job applications you choose to submit is subject to Genome Medical's Candidate Privacy Policy."," Entry level "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-sayari-commercial-risk-intelligence-3478678402?refId=sxKgaaQf5SyYhA1EIysMQw%3D%3D&trackingId=5BZRuEqLuONBb31vWdBgEw%3D%3D&position=23&pageNum=17&trk=public_jobs_jserp-result_search-card," Sayari | Commercial Risk Intelligence ",https://www.linkedin.com/company/sayarilabs?trk=public_jobs_topcard-org-name," United States "," 4 weeks ago "," Over 200 applicants ","Sayari is looking for Data Engineers to join our growing team! We are hiring at all levels and encourage junior through senior level candidates to apply. As a member of Sayari's data team you will work with our Product and Software Engineering teams to collect data from around the globe, maintain existing ETL pipelines, and develop new pipelines that power Sayari Graph. ABOUT SAYARI Sayari is a venture-backed and founder-led global corporate data provider and commercial intelligence platform, serving financial institutions, legal & advisory service providers, multinationals, journalists, and governments. We are building world-class SaaS products that help our clients glean insights from vast datasets that we collect, extract, enrich, match and analyze using a highly scalable data pipeline. From financial intelligence to anti-counterfeiting, and from free trade zones to war zones, Sayari powers cross-border and cross-lingual insight into customers, counterparties, and competitors. Thousands of analysts and investigators in over 30 countries rely on our products to safely conduct cross-border trade, research front-page news stories, confidently enter new markets, and prevent financial crimes such as corruption and money laundering. 
Our company culture is defined by a dedication to our mission of using open data to prevent illicit commercial and financial activity, a passion for finding novel approaches to complex problems, and an understanding that diverse perspectives create optimal outcomes. We embrace cross-team collaboration, encourage training and learning opportunities, and reward initiative and innovation. If you enjoy working with supportive, high-performing, and curious teams, Sayari is the place for you. POSITION DESCRIPTION Sayari’s flagship product, Sayari Graph, provides instant access to structured business information from hundreds of millions of corporate, legal, and trade records. As a member of Sayari's data team you will work with our Product and Software Engineering teams to collect data from around the globe, maintain existing ETL pipelines, and develop new pipelines that power Sayari Graph. What You Will Need: Professional experience with Python and a JVM language (e.g., Scala) 2+ years of experience designing and maintaining ETL pipelines Experience using Apache Spark and Apache Airflow Experience with SQL (e.g., Postgres) and NoSQL (e.g., Cassandra, TigerGraph, etc.) 
databases Experience working on a cloud platform like GCP, AWS, or Azure Experience working collaboratively with git What We Would Like: Understanding of Docker/Kubernetes Understanding of or interest in knowledge graphs Who You Are: Experienced in supporting and working with cross-functional teams in a dynamic environment Interested in learning from and mentoring team members Passionate about open source development and innovative technology Benefits What We Offer: A collaborative and positive culture - your team will be as smart and driven as you Limitless growth and learning opportunities A strong commitment to diversity, equity, and inclusion Performance and incentive bonuses Outstanding competitive compensation and comprehensive family-friendly benefits, including full healthcare coverage plans, commuter benefits, 401K matching, generous vacation, and parental leave. Conference & Continuing Education Coverage Team building events & opportunities Sayari is an equal opportunity employer and strongly encourages diverse candidates to apply. We believe diversity and inclusion mean our team members should reflect the diversity of the United States. No employee or applicant will face discrimination or harassment based on race, color, ethnicity, religion, age, gender, gender identity or expression, sexual orientation, disability status, veteran status, genetics, or political affiliation. We strongly encourage applicants of all backgrounds to apply."," Mid-Senior level "," Full-time "," Engineering and Other "," Software Development " Data Engineer,United States,"Associate, Data Engineer",https://www.linkedin.com/jobs/view/associate-data-engineer-at-s-martinelli-company-3525634330?refId=sxKgaaQf5SyYhA1EIysMQw%3D%3D&trackingId=3qxQg1RHCIdSxeFP3us6BA%3D%3D&position=24&pageNum=17&trk=public_jobs_jserp-result_search-card," S. 
Martinelli & Company ",https://www.linkedin.com/company/s.-martinelli-&-co.?trk=public_jobs_topcard-org-name," Watsonville, CA "," 3 weeks ago "," Be among the first 25 applicants "," Job Summary: The Associate Data Engineer is a key member of the company’s analytical framework that will support our Production, Sales, Marketing, Finance, Human Resources, and Executive Team. They will be involved in various cross-functional efforts such as development of strategic reports, building and optimizing our data storage and pipelines, and analysis of available data to identify trends and improve business outcomes. Using the company’s analytical software toolkit, the Associate Data Engineer captures internal/external data in a data warehouse and applies algorithms to develop reporting, analyses, and other models. They will be responsible for establishing and maintaining flows of information to key stakeholders, as well as maintaining data governance in the data warehouse. The position will leverage the Company’s Enterprise Resource Planning (ERP), Corporate Performance Management (CPM), and Business Intelligence (BI) software to help management understand the past and plan for the future. The Associate Data Engineer collaborates with the company’s Data Engineer and Financial Analysts, while independently managing key deliverables. They maintain data integrity and efficiency of data flows within Martinelli’s BI system. Essential Job Functions Work with stakeholders throughout the organization to identify opportunities for leveraging data to drive business solutions. Develop and maintain data pipelines that feed information from internal and external sources into the company’s data warehouse. Maintain, monitor, and validate data governance in the company’s data warehouse. Uncover business insights and answer strategic questions by developing new dashboards, data flows, ad hoc analyses, and other content in the company’s BI software. 
Leverage data management languages (primarily SQL, but also R, Python, and OData) to retrieve and shape raw data into a format that supports efficient reporting. Optimize the efficiency of data flows within the company’s BI software. Provide insightful analysis to identify and communicate financial and operational opportunities. This will require understanding key business drivers, processes, trends and variances to expectations. Act as an internal ambassador for BI at Martinelli’s, becoming a subject matter expert capable of training stakeholders on the use of the company’s BI system and supporting user issues as they arise. Manage projects and initiatives as a representative of the FP&A Team. Periodically support the Company’s financial planning processes through analysis of key metrics and regression-based forecasting. Assist in special projects as requested by Management. Qualifications Ability to prepare, interpret and communicate analyses verbally and with digital tools including spreadsheets and Business Intelligence applications. Familiarity with using statistical programming languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets. Experience managing or leveraging databases. Ability to assist in maintaining regular contact and building healthy working relationships with various internal customers to ensure a high level of customer service and work product. Be detail oriented with strong precision and analytical skills. Ability to distill complex data into key drivers and learnings that can be understood by an analytically-minded audience. Demonstrate proficiency in interacting and communicating with Management. Be a team player with strong interpersonal and communication skills including strong internal drive, ethics, conscientiousness, diplomacy, flexibility, and dependability. Ability to focus on solution-driven outcomes, work independently, and display ownership and understanding of assigned projects. 
Work in a rapid-paced environment while meeting deadlines Respect confidential matters by exercising individual initiative and discretion. Experience performing complex analyses and creating Excel models. Strong Microsoft Office skills including advanced Excel (Pivot Tables, Index/Match), Word Excellent oral, written and interpersonal communication skills. Preferred Experience (not Required) Food manufacturing or agricultural goods processing industry SAP, Workday Adaptive Planning (formerly Adaptive Insights), and/or Domo ETL tools (ex. Informatica, Alteryx, Talend) Finance and/or Accounting Education. Certification Requirements Bachelor’s degree in Mathematics, Computer Science, Statistics, Economics or related field Master’s degree in fields listed above preferred but not required. 1+ years of database and analytics experience, either in an academic or professional environment Physical Demands and Working Environment The physical demands described here are representative of those that must be met by an employee to successfully perform the essential functions of the job. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. Ability to pass background check, and drug screen. Ability to stand and/or sit in front of a computer screen for extended periods of time. Work is performed in a standard office environment and warehouse food processing environment. The company provides remote working options. This position qualifies for remote working status subject to manager approval and Company policy. All Employees Are Expected To Comply with all Company safety requirements. Adhere to all Company policies and procedures. Pay range $59,000/yr. - $89,000/yr. 
"," Mid-Senior level "," Full-time "," Information Technology "," Food and Beverage Services " Data Engineer,United States,Associate Data Engineer,https://www.linkedin.com/jobs/view/associate-data-engineer-at-children-international-3486403204?refId=sxKgaaQf5SyYhA1EIysMQw%3D%3D&trackingId=Jdk4TTGwAQCXgrBQYo3gOA%3D%3D&position=25&pageNum=17&trk=public_jobs_jserp-result_search-card," Children International ",https://www.linkedin.com/company/children-international?trk=public_jobs_topcard-org-name," Kansas City Metropolitan Area "," 3 weeks ago "," 114 applicants ","Summary of Duties Under direct supervision, work with technology teams to build and implement data management solutions that solve current and future business data needs. Design and build data migration solutions (ETL/T) using t-SQL, SSIS, PowerShell, Azure Data Factory, and other tools. Monitor data systems to ensure proper availability, security, and performance. Assist in maintaining production and non-production DBMS environments. Build and maintain MSSQL database objects including Procedures, functions, and views. Deliver technical documentation including data and process flow diagrams that help facilitate decision making to complex data challenges. Develop physical data schemas based on logical models to satisfy business requirements. Support the development of solutions with a usable enterprise information architecture which may include an enterprise data model, common business vocabulary, and taxonomies. Continuously develop skills around data technologies including but not limited to MS SQL Server, Cosmo DB, Azure, data processing, and data management. Assist with issues that might prevent the organization from making maximum use of its information assets. Work with team members to ensure continuous coverage of all data systems and processes. Other duties as assigned. You should have (requirements) Two years’ experience in MS SQL Server or related bachelor’s degree. 
One year experience in building and supporting data solutions around MSSQL. Understanding of basic data management concepts (data sourcing, transformation, quality, metadata, etc.). Experience with database implementation, tuning, troubleshooting and maintenance. Basic knowledge of database security, backup/recovery, and disaster planning. Understanding of transactional and reporting database principles. Understanding of SQL and JSON scripting. Basic understanding of shell script programming. Analytical, critical thinking, decision-making, and problem-solving skills. Ability and desire to work in a collaborative environment. Demonstrated passion for learning data technologies. We Value Bachelor’s Degree in a related field Basic Azure (preferred) or AWS cloud data solutions experience Experience with ETL solutions including Microsoft SSIS Experience with logical and physical database design Ability to quickly learn new systems Good communication skills Strong attention to detail Familiar with Azure DevOps CI/CD Strictly observe confidentiality and strong ethics with respect to all beneficiary information/financial and other organizational data. Comply with and ensure adherence to the agency’s policies, safety and security protocols and child safeguarding norms and guidelines by self as well all stakeholders both internal and external. 
Promote diversity and inclusion, value other cultures, and demonstrate respect while relating with all organizational constituents irrespective of their race, color, faiths, gender, sexual orientation, age, caste, disabilities, experiences, beliefs and ethnicity."," Associate "," Full-time "," Engineering and Information Technology "," Non-profit Organizations " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-one-3499587057?refId=sxKgaaQf5SyYhA1EIysMQw%3D%3D&trackingId=U%2FR1DaD35LrD5ylBtf5klA%3D%3D&position=5&pageNum=17&trk=public_jobs_jserp-result_search-card," ONE ",https://www.linkedin.com/company/oneapp?trk=public_jobs_topcard-org-name," Portland, OR "," 2 weeks ago "," Be among the first 25 applicants ","About ONE ONE is on a mission to help people live healthier financial lives. We’re doing this by creating simple solutions to help our customers save, spend, and grow their money – all in one place. The U.S. consumer today deserves better. Millions of Americans today can’t access credit, build savings or wealth, and are left to manage their financial lives through multiple disconnected apps. Almost a quarter of U.S. adults are unbanked or underbanked and roughly 80% of fintech users rely on multiple accounts to manage their finances. What makes us unique? We are backed by a preeminent fintech investor (Ribbit) and the world’s largest retailer (Walmart), maintain the speed and independence of a startup, and employ a strong (and growing) collection of world-class talent. There’s never been a better moment to build a business that helps people live healthier financial lives. Come build with us! The role As an Engineer, Data, your mandate is to build the data transformation pipelines & reporting infrastructure that serve our members and run our company. This role will impact ONE’s vision by writing, reviewing, and shipping code and collaborating with others across the company. 
You will work closely across the Engineering, Product, Design as well as Sales, Compliance and Customer Support team(s) working with stakeholders as highly technical, communicative, and emotionally intelligent partners. This role reports to Manager, Data Engineering. This role is responsible for: Building and maintaining ONE's data infrastructure. Developing and owning ONE's streaming data transformation pipeline. Managing and supporting reporting and analytics tools. Collaborating closely with our Data Analysts and Data Scientists. Tracking and defining metrics around performance. Additional duties as assigned by your manager. You bring Mid Career (5-10 Years) Experience in Apache Spark, Scala, Python, SQL. Experience designing and implementing low latency data pipelines using Spark structured streaming. Experience optimizing SQL queries and lake / warehouse data structures. Cost-conscious creative problem solving. Proficient in AWS cloud services and technologies. Experience building, shipping — and growing — non-trivial products & services. Passion for your craft, people, and our mission. You value quality across code, communication, and culture. A desire to keep growing your skills, and an ability to learn quickly. We hire for slope, not just y-intercept. Pay Transparency The estimated annual base salary for this position ranges from $175,000 to $190,000. Pay is generally based upon the level, complexity, responsibility, and job duties/requirements of the specific position. We then source candidates with the requisite skills, expertise, education, training, and experience. If you are selected for an interview, please feel welcome to speak to a Talent Partner about our compensation philosophy and other available benefits. Leveling Philosophy In order to thoughtfully scale the company and avoid downstream inequities, we’ve adopted a flat titling structure at ONE. 
Though we may occasionally post a role externally with a prefix such as “Senior” to reflect the external level of the position, we do not use prefixes in titles like that internally unless in a position which manages a team. Internal titles typically include your specific functional responsibility, such as engineering, product management or sales, and often include additional descriptors to ensure clarity of role and placement within our organization (i.e. “Engineer, Platform”, “Sales, Business Development” or “Manager, Talent”). Employees are paid commensurate with their experience and the internal level within ONE. What it's like working @ ONE Our teams collaborate remotely and in our work spaces in New York and Sacramento. Competitive cash Benefits effective on day one Early access to a high potential, high growth fintech Generous stock option packages in an early-stage startup Remote friendly (anywhere in the US) and office friendly - you pick the schedule Flexible time off programs - vacation, sick, paid parental leave, and paid caregiver leave 401(k) plan with match Inclusion & Belonging To build technology and products that are used and loved by people and solve real-world problems, we need to build a team with many different perspectives and experiences. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We encourage candidates from all backgrounds to apply. Applicants in need of special assistance or accommodation during the interview process or in accessing our website may contact us. 
Email talent@one.app with any questions."," Entry level "," Full-time "," Information Technology "," Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-cbts-3488687401?refId=sxKgaaQf5SyYhA1EIysMQw%3D%3D&trackingId=4SJKhAkCOWc5qC%2FW9DpM0A%3D%3D&position=10&pageNum=17&trk=public_jobs_jserp-result_search-card," CBTS ",https://www.linkedin.com/company/cbts-technology-covered?trk=public_jobs_topcard-org-name," Dayton, OH "," 3 weeks ago "," 128 applicants ","CBTS is currently seeking a Data Engineer for a position located in the Dayton, OH area. This position will play a key role in executing the company’s data strategies by supporting existing ETL processes and creating of new ETL processes. Ensure proper data workflows and architecture are being followed. The individual will work with the business to gather requirements, document processes, work on break / fix remediation and work to provide necessary data to operations for monitoring, data management and process workflow. This is a hybrid position located in Dayton, OH area. Responsibilities: Identify and create strategies to address data quality concerns and enforce standards. 
Implement data management repositories based on multiple internal and external data sources Manage logical and physical data models and maintain detailed design documents Troubleshoot critical ETL workflow and data centric problems and recommend solutions Work with analytics business partners to analyze business needs, data sources and develop technical data pipeline solutions to ingest raw data into data warehouse Write new or modify existing code and conduct complete end to end unit tests on data and data pipelines Collaborate with the business analytics team to analyze, resolve, and put in place measures to maintain data accuracy and integrity in support of strategic analytics applications Obtain and ingest raw data from a variety of sources and methodologies leveraging appropriate coding languages Write transformation logic on raw data and subsequently create semantic layers to publish data in a form suitable for consumption by business users and BI visualization tools Assist in API development to enable business and/or other system’s consumption of published data Create and maintain pipeline process documentation and recovery procedures on how to resume failed pipeline processing Consult with and assist other programmers to analyze, schedule and implement new or modified workflows May mentor junior engineers in proper coding techniques and practices Experience: · Ability to write code using common scripting languages such as SQL, Python, and/or other scripting languages 2-5 years of hands-on experience implementing, maintaining, and supporting data management solutions including program/project delivery management Experience with ETL tools and techniques and/or pipeline building Proficient with data discovery, data analysis and data virtualization techniques Full understanding of data base concepts, data typing and database cardinality principles Experience in a data analytics environment working with multiple tables and big-data concepts Expertise with SQL, 
Snowflake, SQLServer or MySQL including data troubleshooting, building complex queries, table joins and views working with multiple tables and big-data concepts Experience building applications or pipelines with serverless technologies like Azure Data Factory, Databricks or AWS serverless platform a plus Solid MS Excel skills including ability to use advanced formulas, nested if statements, pivot tables and conduct full data analysis of pipelined data Experience with BI application development a plus Proficient with process workflow mapping and business process improvement QUALIFIED CANDIDATES CAN EMAIL THEIR RESUMES TO todd.marinelli@cbts.com. PLEASE INCLUDE “DATA ENGINEER” IN THE SUBJECT OF YOUR EMAIL. Cincinnati Bell Technology Solutions provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, disability, genetic information, marital status, amnesty, or status as a protected veteran in accordance with applicable federal, state and local laws."," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-xorbix-technologies-inc-3521130561?refId=b9Frz1BJaPdsIGXxP9IqhQ%3D%3D&trackingId=ff51ZTNVVctL0RSzVKVS9g%3D%3D&position=1&pageNum=18&trk=public_jobs_jserp-result_search-card," Xorbix Technologies, Inc. ",https://www.linkedin.com/company/xorbix-technologies?trk=public_jobs_topcard-org-name," Wisconsin, United States "," 3 days ago "," 154 applicants ","Summary: Working under minimal supervision, the Sr. Data Engineer will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross functional teams. 
The Data Engineer will support our software developers, data architects, data analysts on initiatives and will ensure optimal data delivery is consistent throughout ongoing projects. This position, in partnership with TPM, business partners/product owners, gathers information and analyzes needs to determine feasibility of client requests. This position also takes an active mentoring role and provides design scope and specifications to less experienced team members. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives. Main Job Responsibilities include: Create and maintain optimal data pipeline architecture. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Build the infrastructure required for extraction, transformation, and loading of data from a wide variety of data sources using SQL and GCP ‘big data’ technologies. Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics. Work with data and analytics experts to strive for greater functionality in our data systems. Assemble large, complex data sets that meet functional / non-functional business requirements. Explore and research new and alternate BigData technologies and platforms. Evaluate feasibility and make recommendations, considering things such as customer requirements, time limitations, system limitations. Serve as a mentor to junior staff by conducting technical training sessions and reviewing project outputs. Build documentation repository for knowledge transfer and developing expertise in multiple areas. Provide operational support on complex/escalated issues to diagnose and resolve incidents in production data pipelines. 
Job Qualifications: Job Qualifications: Education, Experience, Knowledge and Skills BS Degree or equivalent work experience in a software engineering discipline Typically has 5+ years’ experience in an applicable software development environment. Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases (SQL, Postgres) Experience with diverse coding, profiling, and visualization approaches including authoring SQL queries, BigQuery, Python, Looker, Google Cloud or equivalent. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement. Strong analytic skills related to working with unstructured datasets. Hands on experience with Cloud Platforms (AWS, GCP, or Azure) Experience in designing and implementing large-scale event-driven architectures. Understanding of data warehousing and data modeling techniques Understanding of Big Data, Cloud, Machine Learning approaches and concepts (preferred) Experience working as a member of a distributed team. Ability to organize and coordinate with stakeholders across multiple functions and geographic locations. Ability to develop and write technical specifications. 
Coaching and teaching skills to mentor less experienced team members Excellent analytical and problem management skills Good interpersonal skills and positive attitude Experience with the following tools and technologies: Elastic Search, Kafka Google Cloud Dataflow and Airflow (preferred) Python, Java, C++ Big Query Remote This position is not eligible for sponsorship or C2C Salary: $140,000-$150,000"," Mid-Senior level "," Full-time "," Information Technology "," Insurance and Real Estate " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-robert-half-3481686692?refId=b9Frz1BJaPdsIGXxP9IqhQ%3D%3D&trackingId=%2BnucaEKcVITNWyFR4D9WKg%3D%3D&position=2&pageNum=18&trk=public_jobs_jserp-result_search-card," Robert Half ",https://www.linkedin.com/company/robert-half-international?trk=public_jobs_topcard-org-name," Las Vegas, NV "," 3 weeks ago "," 193 applicants ","THIS IS A REMOTE ROLE FOR CANDIDATES IN NEVADA. The primary responsibility of the Senior Data Engineer is to create and support enterprise class data centric solutions, providing business value from corporate data assets. The Senior Data Engineer is expected to maintain expertise across an application technology stack (Microsoft SQL Server BI) as well as a technology domain (Data Management, BI/DW, MDM) and have the ability to embrace and leverage new technologies while working effectively with other information technology professionals and business users to ensure that the data solutions are stable, efficient and responsive to business needs. This person is expected to possess a strong domain knowledge of gaming and hospitality and experience on how data technologies are best implemented to enhance these domains. The Senior Data Engineer must possess strong communication and people skills for assuming Technical Leadership roles for projects, including the ability to effectively present technical information to stakeholders and management. 
Essential Duties & Responsibilities Design, build, deliver and support Data Warehouse and ETL structures and solutions. Act as an internal consultant, providing architecture, vision, problem anticipation, and problem solving for the assigned project(s), as well as a Subject Matter Expertise for data and analytic users. Provide subject matter expertise for the data associated with gaming and hospitality systems. Provide L3 application and on-call support for data and integration solutions. Participate and at times lead project teams within an agile environment. Successfully engage in multiple initiatives concurrently, including application and on call support, minor projects, major projects, functional requirements, systems specifications and subject matter expertise. Prepare clear and concise documentation for delivered solutions and processes, integrating documentation with corporate knowledgebase. In partnership with the Product Management team, provide subject matter expertise for determining business requirements to address business opportunities or issues across business functions for data centric solutions. Maintain familiarity with technological trends and innovations in the data warehousing and data management domains. Interpret and satisfy business needs by building and enhancing systems to transform, cleanse and provision corporate data assets in a governed, secure and high performance manner. Provide subject matter expertise for data and analytics. Participate in the ongoing Data Management maturation process. Safety is an essential function of this job. Consistent and regular attendance is an essential function of this job. Minimum Qualifications 21 years of age. Proof of authorization/eligibility to work in the United States. Bachelor’s degree or equivalent in relevant discipline Must be able to obtain and maintain Nevada Gaming Control Board Registration and any other certification or license, as required by law or policy. 
8 years ETL experience using SQL Server SSIS 2008r2-2016. 10 years with T-SQL programming. Must exhibit a high level of mastery of the Microsoft BI Stack, including: SSIS, SSRS, SSAS, SharePoint, PowerPivot, etc. Must understand Relational and Dimensional database modeling. Should be experienced in automated file handling with ETL tools via FTP and other means. Should have administrative experience with SQL Server 2008-2016. Prefer experience with Java or .NET development; C#, ASP.NET, scripting languages, and web site operations is a plus. Prefer experience with key concepts of Data Management such as Data Quality and MDM. Experience with Data Preparation for Data Science a plus. Experience with Big Data, Advanced Analytic techniques and Real Time data a plus. Must be willing and capable of adopting new technologies and paradigms such as Big Data technologies (Hadoop, hive, etc.), event based analytics, etc. Experience with analytic tools such as Spotfire, SAS, Cognos, Microsoft BI a plus. Prefer TFS and Agile experience (SCRUM). Prefer gaming and hospitality experience. Must be able to adhere to SOPs and methodologies, and be willing to improve them when necessary. Must have strong organizational skills, customer service focus, attention to detail, and process orientation. 
Good working knowledge of Windows/Enterprise Systems/Networking/Enterprise Security Must be willing to provide 24x7 on-call support"," Not Applicable "," Full-time "," Information Technology "," Staffing and Recruiting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-costco-it-3519905708?refId=b9Frz1BJaPdsIGXxP9IqhQ%3D%3D&trackingId=27Q6mb3NgZmyUJlFcXuBBA%3D%3D&position=3&pageNum=18&trk=public_jobs_jserp-result_search-card," Costco IT ",https://www.linkedin.com/company/costco-weareit?trk=public_jobs_topcard-org-name," Dallas, TX "," 1 month ago "," Be among the first 25 applicants ","This is an environment unlike anything in the high-tech world and the secret of Costco’s success is its culture. The value Costco puts on its employees is well documented in articles from a variety of publishers including Bloomberg and Forbes. Our employees and our members come FIRST. Costco is well known for its generosity and community service and has won many awards for its philanthropy. The company joins with its employees to take an active role in volunteering by sponsoring many opportunities to help others. In 2021, Costco contributed over $58 million to organizations such as United Way and Children's Miracle Network Hospitals. Costco IT is responsible for the technical future of Costco Wholesale, the third largest retailer in the world with wholesale operations in fourteen countries. Despite our size and explosive international expansion, we continue to provide a family, employee centric atmosphere in which our employees thrive and succeed. As proof, Costco ranks seventh in Forbes “World’s Best Employers”. The Data Engineer is responsible for developing data pipelines and/or data integrations for Costco’s enterprise certified data sets that are used for business critical data consumption use cases (i.e. Reporting, Data Science/Machine Learning, Data APIs, etc.). 
At Costco, we are on a mission to significantly leverage data to provide better products and services for our members. This role is focused on data engineering to build and deliver automated data pipelines from a plethora of internal and external data sources. The Data Engineer will partner with product owners, data architects, and data platform teams to design, build, test, and automate data pipelines that are relied upon across the company as the single source of truth. If you want to be a part of one of the worldwide BEST companies “to work for”, simply apply and let your career be reimagined. ROLE Develops and operationalizes data pipelines to create enterprise certified data sets that are made available for consumption (BI, Advanced analytics, APIs/Services). Works in tandem with Data Architects, Data Stewards, and Data Quality Engineers to design data pipelines and recommends ongoing optimization of data storage, data ingestion, data quality and orchestration. Designs, develops, and implements ETL/ELT/CDC processes using Informatica Intelligent Cloud Services (IICS). Uses Azure services such as Azure SQL DW (Synapse), ADLS, Azure Event Hub, Cosmos, Databricks, Delta-Lake to improve and speed delivery of our data products and services. Implements big data and NoSQL solutions by developing scalable data processing platforms to drive high-value insights to the organization. Identifies, designs, and implements internal process improvements: automating manual processes, optimizing data delivery. Identifies ways to improve data reliability, efficiency, and quality of data management. Communicates technical concepts to non-technical audiences both in written and verbal form. Performs peer reviews for other data engineers’ work. Required 5+ years’ experience engineering and operationalizing data pipelines with large and complex datasets. 2+ years’ hands-on experience with Informatica IICS or other ETL tools. 
3+ years’ experience working with Cloud technologies such as ADLS, Azure Databricks, Spark, Azure Synapse, Cosmos DB and other big data technologies. Extensive experience working with various data sources (DB2, SQL, Oracle, flat files (csv, delimited), APIs, XML, and JSON. Experience implementing data integration techniques such as event/message based integration (Kafka, Azure Event Hub), ETL. Advanced SQL skills; solid understanding of relational databases and business data; ability to write complex SQL queries against a variety of data sources. 5+ years’ experience with Data Pipeline, ETL, and Data Warehousing. Strong understanding of database storage concepts (data lake, relational databases, NoSQL, Graph, data warehousing). Experience with Git / Azure DevOps. Able to work in a fast-paced agile development environment. Recommended Azure Certifications. BA/BS in Computer Science, Engineering, or equivalent software/services experience. Experience delivering data solutions through agile software development methodologies. Exposure to the retail industry. Excellent verbal and written communication skills. Experience working with SAP integration tools including BODS. Experience with Job scheduling and orchestration tools. Required Documents Cover Letter Resume California applicants, please click to review the Costco Applicant Privacy Notice. Pay Ranges Level 2 - $100,000 - $135,000 Level 3 - $125,000 - $165,000 We offer a comprehensive package of benefits including paid time off, health benefits - medical/dental/vision/hearing aid/pharmacy/behavioral health/employee assistance, health care reimbursement account, dependent care assistance plan, short-term disability and long-term disability insurance, AD&D insurance, life insurance, 401(k), stock purchase plan to eligible employees. Costco is committed to a diverse and inclusive workplace. Costco is an equal opportunity employer. 
Qualified applicants will receive consideration for employment without regard to race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or any other legally protected status. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request to IT-Recruiting@costco.com. If hired, you will be required to provide proof of authorization to work in the United States."," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-soni-3521486098?refId=b9Frz1BJaPdsIGXxP9IqhQ%3D%3D&trackingId=d6%2F2%2F5Aux1w0COZYSjdhCg%3D%3D&position=4&pageNum=18&trk=public_jobs_jserp-result_search-card," Soni ",https://www.linkedin.com/company/soni-resources-group?trk=public_jobs_topcard-org-name," New York City Metropolitan Area "," 1 day ago "," Be among the first 25 applicants ","A global leader in construction is growing its data organization -- it has just started building out a data platform to help the organization access data and gather insights for business growth + optimization. The Sr. Data Engineer is a critical hire for building + maintaining the data environment. This is a greenfield opportunity in a double digit billion dollar organization with a wealth of data. This is a hybrid role, requiring up to 5x/week onsite in its midtown NYC office. 
Salary: $150-180K Responsibilities: Minimum 5 years’ experience as a Data Engineer (someone growing into architecture, but currently a hands-on, individual-contributor engineer) Expertise in SQL + Python coding Experienced with multiple data analytics platforms, can approach problems from multiple angles, has implemented/delivered on data platforms Strong understanding of data governance fundamentals Expert at building pipelines and transformations using Python, Pandas, Spark clusters, BigQuery, Oracle + SQL (MySQL/Postgres) Must understand Agile process, CI/CD, code repositories, test automation. Compensation: $150,000 - 180,000 Salary is based on a range of factors that include relevant experience, knowledge, skills, and other job-related qualifications. *Job title may differ on our career portal"," Associate "," Full-time "," Information Technology and Engineering "," IT Services and IT Consulting and Information Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-bristol-myers-squibb-3490929319?refId=b9Frz1BJaPdsIGXxP9IqhQ%3D%3D&trackingId=QaIjyTyc%2Fdp9eFmEuGq0YA%3D%3D&position=5&pageNum=18&trk=public_jobs_jserp-result_search-card," Bristol Myers Squibb ",https://www.linkedin.com/company/bristol-myers-squibb?trk=public_jobs_topcard-org-name," Seattle, WA "," 3 weeks ago "," Be among the first 25 applicants ","Working with Us Challenging. Meaningful. Life-changing. Those aren’t words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You’ll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams rich in diversity. Take your career farther than you thought possible. 
Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more careers.bms.com/working-with-us Work at the interface of pharma, genomics, and data engineering. The candidate will significantly contribute to the development of modern data services for BMS’s research scientists. Job Description Bristol Myers Squibb seeks a highly motivated Data Engineer to enable data integration efforts in support of data science and computational research efforts. The Data Engineer will be responsible for executing an ambitious digital strategy to support BMS’s predictive science capabilities in R&ED (Research & Early Development). The successful candidate will partner closely with computational researcher teams, IT leadership, and various technical functions to design and deliver data solutions that streamline access to computing & data, and help scientists derive insight and value from their research. Job Functions The role requires someone who can seamlessly mesh technical knowledge to help navigate R&D cloud, CoLo, and on-premise computing needs, including planning, infrastructure design, maintenance, and support. The role will lead the development of infrastructure that enables interoperability and comparability of data sets derived from different technologies and biological systems in the context of integrative data analysis. The candidate will create and maintain optimal data pipeline architectures that enable scientific workflow and collaborate with interdisciplinary teams of data curators, software engineers, data scientists and computational biologists as we test new hypotheses through the novel integration of emerging research data types. 
The work will combine careful resource planning and project management with hands-on data manipulation and implementation of data integration workflows. Responsibilities Include, But Are Not Limited To, The Following Designing and developing an ETL infrastructure to load research data from multiple source systems using languages and frameworks such as Python, R, Docker, Airflow, Glue, etc. Leading the design and implementation of data services solutions that may include relational, NoSQL and graph database components. Collaborating with project managers, solution architects, infrastructure teams, and external vendors as needed to support successful delivery of technical solutions. Experiences And Education Bachelor's Degree with 8+ years of academic / industry experience or master's degree with 6+ years of academic / industry experience or PhD with 3+ years of academic / industry experience in an engineering or biology field. Demonstrated high proficiency with current software engineering methodologies, such as Agile SDLC (Software Development Life Cycle) approaches, distributed source code control, project management, issue tracking, and CI/CD tools and processes. Excellent skills in an object-oriented programming language such as Python or R, and proficiency in SQL High degree of proficiency in cloud computing Solid understanding of container strategies such as Docker, Fargate, ECS and ECR. Excellent skills and deep knowledge of databases such as Postgres, Elasticsearch, Redshift, and Aurora, including distributed database design, SQL vs. NoSQL, and database optimizations Strong technical communication skills. If you come across a role that intrigues you but doesn’t perfectly line up with your resume, we encourage you to apply anyway. 
You could be one step away from work that will transform your life and career. Uniquely Interesting Work, Life-changing Careers With a single vision as inspiring as “Transforming patients’ lives through science™ ”, every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in an inclusive culture, promoting diversity in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues. On-site Protocol Physical presence at the BMS worksite or physical presence in the field is a necessary job function of this role, which the Company deems critical to collaboration, innovation, productivity, employee well-being and engagement, and it enhances the Company culture. COVID-19 Information To protect the safety of our workforce, customers, patients and communities, the policy of the Company requires all employees and workers in the U.S. and Puerto Rico to be fully vaccinated against COVID-19, unless they have received an exception based on an approved request for a medical or religious reasonable accommodation. Therefore, all BMS applicants seeking a role located in the U.S. and Puerto Rico must confirm that they have already received or are willing to receive the full COVID-19 vaccination by their start date as a qualification of the role and condition of employment. This requirement is subject to state and local law restrictions and may not be applicable to employees working in certain jurisdictions such as Montana. This requirement is also subject to discussions with collective bargaining representatives in the U.S. BMS is dedicated to ensuring that people with disabilities can perform complex functions through a transparent recruitment process, reasonable workplace adjustments and ongoing support in their roles. 
Applicants can request an accommodation prior to accepting a job offer. If you require reasonable accommodation in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement. BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area. Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations. "," Entry level "," Full-time "," Information Technology "," Pharmaceutical Manufacturing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-genome-medical-3500325229?refId=b9Frz1BJaPdsIGXxP9IqhQ%3D%3D&trackingId=RNaX9zpR13YP7a9F5IsPrg%3D%3D&position=6&pageNum=18&trk=public_jobs_jserp-result_search-card," Genome Medical ",https://www.linkedin.com/company/genome-medical?trk=public_jobs_topcard-org-name," South San Francisco, CA "," 2 weeks ago "," 50 applicants ","At Genome Medical, we are dedicated to providing greater access to genetic care for our patients! Accelerate your career as the Data Engineer (DE) with Genome Medical. We are also committed to providing our employees the support they need. The base salary for this position is between $183k-$201k per year. Please note that the base salary range is a guideline, and individual total compensation will vary based on factors such as qualifications, skill level and competencies. We offer medical, dental and vision packages as well as many other perks to make your benefits package truly customizable to you and your family's needs. 
Some of our benefits for this position include: Flexible PTO Plan Paid Personal Leave Paternity & Maternity Leave 401k Match (after 90 days) Up to $500 remote office reimbursement (after 30 days) AD&D Short & Long-Term Disability HSA & FSA Medical and Dependent Care Life Insurance Pet Insurance And much MORE! This role may be right for you if: you are excited about working with the ""big picture of data"" and can be relied upon for prototyping and production. you possess a startup mindset and are flexible and adaptable to the changing needs of the team. you are motivated to drive efficiencies in the SDLC timeline and seek to make incremental improvements with new capabilities while balancing concurrent development in products and their overall architectural evolution. you are looking to work cross-functionally as a leader with sales, finance, product, and engineering teams to support growth throughout all stakeholder levels and partnerships. Position Summary: This position reports to the Senior Director, Software Engineering and will perform activities consistent with the position of a Data Engineer which will include but are not limited to: Designs and implements scalable, robust, and secure data lake and/or data warehouse solutions. Serves as the key point of contact from engineering, communicating interdepartmentally and with all stakeholders the full end-to-end ownership of new features of our platform. Drives the development of new reporting functionalities for our genomics platform to enhance the business intelligence team. Adheres to flexible, simple, well-documented, and secure design of internal and external-facing services and is an enthusiastic supporter of it for the entire team. Enthusiastically participates in design and code reviews to further build a healthy, technical culture at Genome Medical. Dedicated to upholding company values and contributes to making Genome Medical a great place to work. 
Additional responsibilities as they relate to the position and needs of the department. REQUIREMENTS AND QUALIFICATIONS: Bachelor's degree (or equivalent experience) in computer science, engineering, statistics or a related field and 4-5 years of software development experience with full knowledge of the software development life cycle (SDLC) is required. Excellent communication skills (verbal, written and listening) with ability to collaboratively engage with all levels of stakeholders and teammates in a solution-oriented, remote environment. Strong analytical and organizational skills. Knowledge of the Genetics/Genomics or healthcare industries is a plus. Working knowledge and experience with the business intelligence analytics tool Looker is preferred. 2+ years of experience building data warehousing technical components such as ETL, ELT, databases and reporting required. 4+ years of experience writing production backend code in Python on Linux systems required. 3+ years of experience integrating datasets between multiple microservices using one or more SQL/NoSQL databases required. 2+ years of experience designing/developing high-performance ETL systems on top of queueing technology components like Kafka, RabbitMQ, Apache Storm is a plus. Experience building backends of machine learning pipelines is a plus. Knowledge of software engineering practices including coding standards, code reviews, SCM, CI/CD in a containerized environment (e.g. Docker) About Genome Medical: Genome Medical™ is a genomics technology, services, and strategy company. As a nationwide virtual medical practice, we are dedicated to bringing genome-enabled health care to everyone through our extensive network of genetic specialists. Founded by personalized medicine pioneers Dr. Randy Scott, Dr. Robert Green, and Lisa Alderson, our goal is to bridge the growing gap between available genome technology and current medical practice. 
As genetic information becomes increasingly important in medicine, there are too few experts to meet the growing demand for interpretation. We are addressing this challenge by creating a scalable, efficient model for lifelong genome-centered health care. To learn more, please visit www.genomemedical.com or find us @GenomeMed on Twitter. Genome Medical is proud to be an equal opportunity employer and committed to creating a workforce that encompasses all cultures and differences. We celebrate our employees' differences, regardless of race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, or Veteran status. Information collected and processed as part of your Genome Medical profile, and any job applications you choose to submit is subject to Genome Medical's Candidate Privacy Policy."," Entry level "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-sayari-commercial-risk-intelligence-3478678402?refId=b9Frz1BJaPdsIGXxP9IqhQ%3D%3D&trackingId=3b2krh%2BzNzVnSDjpbcAIIQ%3D%3D&position=7&pageNum=18&trk=public_jobs_jserp-result_search-card," Sayari | Commercial Risk Intelligence ",https://www.linkedin.com/company/sayarilabs?trk=public_jobs_topcard-org-name," United States "," 4 weeks ago "," Over 200 applicants ","Sayari is looking for Data Engineers to join our growing team! We are hiring at all levels and encourage junior through senior level candidates to apply. As a member of Sayari's data team you will work with our Product and Software Engineering teams to collect data from around the globe, maintain existing ETL pipelines, and develop new pipelines that power Sayari Graph. 
ABOUT SAYARI Sayari is a venture-backed and founder-led global corporate data provider and commercial intelligence platform, serving financial institutions, legal & advisory service providers, multinationals, journalists, and governments. We are building world-class SaaS products that help our clients glean insights from vast datasets that we collect, extract, enrich, match and analyze using a highly scalable data pipeline. From financial intelligence to anti-counterfeiting, and from free trade zones to war zones, Sayari powers cross-border and cross-lingual insight into customers, counterparties, and competitors. Thousands of analysts and investigators in over 30 countries rely on our products to safely conduct cross-border trade, research front-page news stories, confidently enter new markets, and prevent financial crimes such as corruption and money laundering. Our company culture is defined by a dedication to our mission of using open data to prevent illicit commercial and financial activity, a passion for finding novel approaches to complex problems, and an understanding that diverse perspectives create optimal outcomes. We embrace cross-team collaboration, encourage training and learning opportunities, and reward initiative and innovation. If you enjoy working with supportive, high-performing, and curious teams, Sayari is the place for you. POSITION DESCRIPTION Sayari’s flagship product, Sayari Graph, provides instant access to structured business information from hundreds of millions of corporate, legal, and trade records. As a member of Sayari's data team you will work with our Product and Software Engineering teams to collect data from around the globe, maintain existing ETL pipelines, and develop new pipelines that power Sayari Graph. 
What You Will Need: Professional experience with Python and a JVM language (e.g., Scala) 2+ years of experience designing and maintaining ETL pipelines Experience using Apache Spark and Apache Airflow Experience with SQL (e.g., Postgres) and NoSQL (e.g., Cassandra, TigerGraph, etc.) databases Experience working on a cloud platform like GCP, AWS, or Azure Experience working collaboratively with git What We Would Like: Understanding of Docker/Kubernetes Understanding of or interest in knowledge graphs Who You Are: Experienced in supporting and working with cross-functional teams in a dynamic environment Interested in learning from and mentoring team members Passionate about open source development and innovative technology Benefits What We Offer: A collaborative and positive culture - your team will be as smart and driven as you Limitless growth and learning opportunities A strong commitment to diversity, equity, and inclusion Performance and incentive bonuses Outstanding competitive compensation and comprehensive family-friendly benefits, including full healthcare coverage plans, commuter benefits, 401K matching, generous vacation, and parental leave. Conference & Continuing Education Coverage Team building events & opportunities Sayari is an equal opportunity employer and strongly encourages diverse candidates to apply. We believe diversity and inclusion mean our team members should reflect the diversity of the United States. No employee or applicant will face discrimination or harassment based on race, color, ethnicity, religion, age, gender, gender identity or expression, sexual orientation, disability status, veteran status, genetics, or political affiliation. 
We strongly encourage applicants of all backgrounds to apply."," Mid-Senior level "," Full-time "," Engineering and Other "," Software Development " Data Engineer,United States,"Associate, Data Engineer",https://www.linkedin.com/jobs/view/associate-data-engineer-at-s-martinelli-company-3525634330?refId=b9Frz1BJaPdsIGXxP9IqhQ%3D%3D&trackingId=%2Bc%2BohTAw6rZEnp5IeXDRLg%3D%3D&position=8&pageNum=18&trk=public_jobs_jserp-result_search-card," S. Martinelli & Company ",https://www.linkedin.com/company/s.-martinelli-&-co.?trk=public_jobs_topcard-org-name," Watsonville, CA "," 3 weeks ago "," Be among the first 25 applicants "," Job Summary: The Associate Data Engineer is a key member of the company’s analytical framework that will support our Production, Sales, Marketing, Finance, Human Resources, and Executive Team. They will be involved in various cross-functional efforts such as development of strategic reports, building and optimizing our data storage and pipelines, and analysis of available data to identify trends and improve business outcomes. Using the company’s analytical software toolkit, the Associate Data Engineer captures internal/external data in a data warehouse and applies algorithms to develop reporting, analyses, and other models. They will be responsible for establishing and maintaining flows of information to key stakeholders, as well as maintaining data governance in the data warehouse. The position will leverage the Company’s Enterprise Resource Planning (ERP), Corporate Performance Management (CPM), and Business Intelligence (BI) software to help management understand the past and plan for the future. The Associate Data Engineer collaborates with the company’s Data Engineer and Financial Analysts, while independently managing key deliverables. 
They maintain data integrity and efficiency of data flows within Martinelli’s BI system. Essential Job Functions: Work with stakeholders throughout the organization to identify opportunities for leveraging data to drive business solutions. Develop and maintain data pipelines that feed information from internal and external sources into the company’s data warehouse. Maintain, monitor, and validate data governance in the company’s data warehouse. Uncover business insights and answer strategic questions by developing new dashboards, data flows, ad hoc analyses, and other content in the company’s BI software. Leverage data management languages (primarily SQL, but also R, Python, and OData) to retrieve and shape raw data into a format that supports efficient reporting. Optimize the efficiency of data flows within the company’s BI software. Provide insightful analysis to identify and communicate financial and operational opportunities. This will require understanding key business drivers, processes, trends and variances to expectations. Act as an internal ambassador for BI at Martinelli’s, becoming a subject matter expert capable of training stakeholders on the use of the company’s BI system and supporting user issues as they arise. Manage projects and initiatives as a representative of the FP&A Team. Periodically support the Company’s financial planning processes through analysis of key metrics and regression-based forecasting. Assist in special projects as requested by Management. Qualifications Ability to prepare, interpret and communicate analyses verbally and with digital tools including spreadsheets and Business Intelligence applications. Familiarity with using statistical programming languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets. Experience managing or leveraging databases. 
Ability to assist in maintaining regular contact and building healthy working relationships with various internal customers to ensure a high level of customer service and work product. Be detail oriented with strong precision and analytical skills. Ability to distill complex data into key drivers and learnings that can be understood by an analytically-minded audience. Demonstrate proficiency in interacting with and communicating to Management Be a team player with strong interpersonal and communication skills including strong internal drive, ethics, conscientiousness, diplomacy, flexibility, and dependability. Ability to focus on solution driven outcomes, work independently, and display ownership and understanding of assigned projects. Work in a rapid-paced environment while meeting deadlines Respect confidential matters by exercising individual initiative and discretion. Experience performing complex analyses and creating Excel models. Strong Microsoft Office skills including advanced Excel (Pivot Tables, Index/Match), Word Excellent oral, written and interpersonal communication skills. Preferred Experience (not Required) Food manufacturing or agricultural goods processing industry SAP, Workday Adaptive Planning (formerly Adaptive Insights), and/or Domo ETL tools (ex. Informatica, Alteryx, Talend) Finance and/or Accounting Education. Certification Requirements Bachelor’s degree in Mathematics, Computer Science, Statistics, Economics or related field Master’s degree in fields listed above preferred but not required. 1+ years of database and analytics experience, either in an academic or professional environment Physical Demands and Working Environment The physical demands described here are representative of those that must be met by an employee to successfully perform the essential functions of the job. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. 
Ability to pass a background check and drug screen. Ability to stand and/or sit in front of a computer screen for extended periods of time. Work is performed in a standard office environment and warehouse food processing environment. The company provides remote working options. This position qualifies for remote working status subject to manager approval and Company policy. All Employees Are Expected To: Comply with all Company safety requirements. Adhere to all Company policies and procedures. Pay range: $59,000/yr. - $89,000/yr. "," Mid-Senior level "," Full-time "," Information Technology "," Food and Beverage Services " Data Engineer,United States,Associate Data Engineer,https://www.linkedin.com/jobs/view/associate-data-engineer-at-children-international-3486403204?refId=b9Frz1BJaPdsIGXxP9IqhQ%3D%3D&trackingId=%2FYfaiHg%2BuTK6L3rF8UQTMg%3D%3D&position=9&pageNum=18&trk=public_jobs_jserp-result_search-card," Children International ",https://www.linkedin.com/company/children-international?trk=public_jobs_topcard-org-name," Kansas City Metropolitan Area "," 3 weeks ago "," 114 applicants ","Summary of Duties Under direct supervision, work with technology teams to build and implement data management solutions that solve current and future business data needs. Design and build data migration solutions (ETL/T) using T-SQL, SSIS, PowerShell, Azure Data Factory, and other tools. Monitor data systems to ensure proper availability, security, and performance. Assist in maintaining production and non-production DBMS environments. Build and maintain MSSQL database objects including procedures, functions, and views. Deliver technical documentation including data and process flow diagrams that help facilitate decision making on complex data challenges. Develop physical data schemas based on logical models to satisfy business requirements. 
Support the development of solutions with a usable enterprise information architecture which may include an enterprise data model, common business vocabulary, and taxonomies. Continuously develop skills around data technologies including but not limited to MS SQL Server, Cosmos DB, Azure, data processing, and data management. Assist with issues that might prevent the organization from making maximum use of its information assets. Work with team members to ensure continuous coverage of all data systems and processes. Other duties as assigned. You should have (requirements) Two years’ experience with MS SQL Server or a related bachelor’s degree. One year’s experience in building and supporting data solutions around MSSQL. Understanding of basic data management concepts (data sourcing, transformation, quality, metadata, etc.). Experience with database implementation, tuning, troubleshooting and maintenance. Basic knowledge of database security, backup/recovery, and disaster planning. Understanding of transactional and reporting database principles. Understanding of SQL and JSON scripting. Basic understanding of shell script programming. Analytical, critical thinking, decision-making, and problem-solving skills. Ability and desire to work in a collaborative environment. Demonstrated passion for learning data technologies. We Value Bachelor’s Degree in a related field Basic Azure (preferred) or AWS cloud data solutions experience Experience with ETL solutions including Microsoft SSIS Experience with logical and physical database design Ability to quickly learn new systems Good communication skills Strong attention to detail Familiar with Azure DevOps CI/CD Strictly observe confidentiality and strong ethics with respect to all beneficiary information/financial and other organizational data. 
Comply with and ensure adherence to the agency’s policies, safety and security protocols and child safeguarding norms and guidelines by self as well all stakeholders both internal and external. Promote diversity and inclusion, value other cultures, and demonstrate respect while relating with all organizational constituents irrespective of their race, color, faiths, gender, sexual orientation, age, caste, disabilities, experiences, beliefs and ethnicity."," Associate "," Full-time "," Engineering and Information Technology "," Non-profit Organizations " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-april-3496026464?refId=b9Frz1BJaPdsIGXxP9IqhQ%3D%3D&trackingId=9NZa4o7wj4DKZ4XF222wMw%3D%3D&position=10&pageNum=18&trk=public_jobs_jserp-result_search-card," APRIL ",https://sg.linkedin.com/company/april?trk=public_jobs_topcard-org-name," Beaverton, OR "," 3 weeks ago "," Over 200 applicants ","This is a remote position. Summary Establishes database management systems, standards, guidelines and quality assurance for database deliverables, such as conceptual design, logical database, capacity planning, external data interface specification, data loading plan, data maintenance plan and security policy. Documents and communicates database design. Evaluates and installs database management systems. Codes complex programs and derives logical processes on technical platforms. Builds windows, screens, and reports. Assists in the design of user interface and business application prototypes. Participates in quality assurance and develops test application code in client server environment. Provides expertise in devising, negotiating, and defending the tables and fields provided in the database. Adapts business requirements, developed by modeling/development staff and systems engineers, and develops the data, database specifications, and table and element attributes for an application. 
At more experienced levels, the role develops an understanding of the client's original data and storage mechanisms. Determines appropriateness of data for storage and optimum storage organization. Determines how tables relate to each other and how fields interact within the tables for a relational model. What You Will Do Design and build reusable components, frameworks, and libraries at scale to support analytics products. Design and implement product features in collaboration with business and technology stakeholders. Anticipate, identify, and solve issues concerning data management to improve data quality. Clean, prepare and optimize data at scale for ingestion and consumption. Drive the implementation of new data management projects and restructuring of the current data architecture. Implement complex automated workflows and routines using workflow scheduling tools. Build continuous integration, test-driven development, and production deployment frameworks. Drive collaborative reviews of design, code, test plans and dataset implementation performed by other data engineers in support of maintaining data engineering standards. Analyze and profile data for designing scalable solutions. Troubleshoot complex data issues and perform root cause analysis to proactively resolve product and operational issues. Mentor and develop other data engineers in adopting best practices. 
Top 3 Skills: PySpark, AWS, Big Data Work Schedule "," Mid-Senior level "," Contract "," Information Technology "," Nanotechnology Research " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-zortech-solutions-3528105992?refId=b9Frz1BJaPdsIGXxP9IqhQ%3D%3D&trackingId=9zA63Z60egFZidIpL4tSTg%3D%3D&position=11&pageNum=18&trk=public_jobs_jserp-result_search-card," Zortech Solutions ",https://ca.linkedin.com/company/zortech?trk=public_jobs_topcard-org-name," West New York, NJ "," 3 weeks ago "," Be among the first 25 applicants "," Role: Data Engineer Location: Remote/US/NJ (Hybrid-Onsite) Duration: 3-6+ Months Job Description: 5-8 years of total experience Skills: Python/PySpark "," Entry level "," Full-time "," Other "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-bristol-myers-squibb-3490929337?refId=b9Frz1BJaPdsIGXxP9IqhQ%3D%3D&trackingId=3JJQpFQyz6DqeL5yhuPV2w%3D%3D&position=12&pageNum=18&trk=public_jobs_jserp-result_search-card," Bristol Myers Squibb ",https://www.linkedin.com/company/bristol-myers-squibb?trk=public_jobs_topcard-org-name," Brisbane, CA "," 3 weeks ago "," Be among the first 25 applicants ","Working with Us Challenging. Meaningful. Life-changing. Those aren’t words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You’ll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams rich in diversity. Take your career farther than you thought possible. Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. 
We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more careers.bms.com/working-with-us Work at the interface of pharma, genomics, and data engineering. The candidate will significantly contribute to the development of modern data services for BMS’s research scientists. Job Description Bristol Myers Squibb seeks a highly motivated Data Engineer to enable data integration efforts in support of data science and computational research efforts. The Data Engineer will be responsible for executing an ambitious digital strategy to support BMS’s predictive science capabilities in R&ED (Research & Early Development). The successful candidate will partner closely with computational researcher teams, IT leadership, and various technical functions to design and deliver data solutions that streamline access to computing & data, and help scientists derive insight and value from their research. Job Functions The role requires someone who can seamlessly mesh technical knowledge to help navigate R&D cloud, CoLo, and on-premise computing needs, including planning, infrastructure design, maintenance, and support. The role will lead the development of infrastructure that enables interoperability and comparability of data sets derived from different technologies and biological systems in the context of integrative data analysis. The candidate will create and maintain optimal data pipeline architectures that enable scientific workflow and collaborate with interdisciplinary teams of data curators, software engineers, data scientists and computational biologists as we test new hypotheses through the novel integration of emerging research data types. The work will combine careful resource planning and project management with hands-on data manipulation and implementation of data integration workflows. 
Responsibilities Include, But Are Not Limited To, The Following Designing and developing an ETL infrastructure to load research data from multiple source systems using languages and frameworks such as Python, R, Docker, Airflow, Glue, etc. Leading the design and implementation of data services solutions that may include relational, NoSQL and graph database components. Collaborating with project managers, solution architects, infrastructure teams, and external vendors as needed to support successful delivery of technical solutions. Experiences And Education Bachelor's Degree with 8+ years of academic / industry experience or master's degree with 6+ years of academic / industry experience or PhD with 3+ years of academic / industry experience in an engineering or biology field. Demonstrated high proficiency with current software engineering methodologies, such as Agile SDLC (Software Development Life Cycle) approaches, distributed source code control, project management, issue tracking, and CI/CD tools and processes. Excellent skills in an object-oriented programming language such as Python or R, and proficiency in SQL. High degree of proficiency in cloud computing. Solid understanding of container strategies such as Docker, Fargate, ECS and ECR. Excellent skills and deep knowledge of databases such as Postgres, Elasticsearch, Redshift, and Aurora, including distributed database design, SQL vs. NoSQL, and database optimizations. Strong technical communication skills. If you come across a role that intrigues you but doesn’t perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career. 
Uniquely Interesting Work, Life-changing Careers With a single vision as inspiring as “Transforming patients’ lives through science™ ”, every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in an inclusive culture, promoting diversity in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues. On-site Protocol Physical presence at the BMS worksite or physical presence in the field is a necessary job function of this role, which the Company deems critical to collaboration, innovation, productivity, employee well-being and engagement, and it enhances the Company culture. COVID-19 Information To protect the safety of our workforce, customers, patients and communities, the policy of the Company requires all employees and workers in the U.S. and Puerto Rico to be fully vaccinated against COVID-19, unless they have received an exception based on an approved request for a medical or religious reasonable accommodation. Therefore, all BMS applicants seeking a role located in the U.S. and Puerto Rico must confirm that they have already received or are willing to receive the full COVID-19 vaccination by their start date as a qualification of the role and condition of employment. This requirement is subject to state and local law restrictions and may not be applicable to employees working in certain jurisdictions such as Montana. This requirement is also subject to discussions with collective bargaining representatives in the U.S. BMS is dedicated to ensuring that people with disabilities can perform complex functions through a transparent recruitment process, reasonable workplace adjustments and ongoing support in their roles. Applicants can request an accommodation prior to accepting a job offer. 
If you require reasonable accommodation in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement. BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area. Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations. "," Entry level "," Full-time "," Information Technology "," Pharmaceutical Manufacturing " Data Engineer,United States,Data Engineer ,https://www.linkedin.com/jobs/view/data-engineer-at-hays-3507001787?refId=b9Frz1BJaPdsIGXxP9IqhQ%3D%3D&trackingId=HvpZuxEP4nKpkbhEvPSeNg%3D%3D&position=13&pageNum=18&trk=public_jobs_jserp-result_search-card," Hays ",https://uk.linkedin.com/company/hays?trk=public_jobs_topcard-org-name," Atlanta, GA "," 2 weeks ago "," 115 applicants ","Data Engineer – Perm – Hybrid / Atlanta, GA. - $120,000 - $135,000 The end client is unable to sponsor or transfer visas for this position; all parties authorized to work in the US without sponsorship are encouraged to apply. An American Company is seeking a Data Engineer in Hybrid / Atlanta, GA. Roles & Responsibilities • Contribute to a team of data engineers through design, demand delivery, code reviews, release management, implementation, presentations, and meetings. 
• Mentor fellow data engineers and contribute to ongoing process improvements for the team • Evaluate business needs and objectives and align architecture/designs with business requirements • Build the data pipelines required for the optimal extraction, transformation, integration, and loading of raw data from a wide variety of data sources • Assemble large, complex data sets and model our data in a way that meets functional / non-functional business requirements • Create data tools for analytics team members that assist them in generating innovative industry insights that provide our business a competitive advantage • Implement data tagging mechanisms and metadata management so data is accurately classified and visible to the organization • Build processes to help identify and improve data quality, consistency, and effectiveness • Ensure our data is managed in a way that conforms to all information privacy and protection policies • Use agile software development processes to iteratively make improvements to our data management systems • Identify opportunities for automation • Be an advocate for best practices and continued learning Skills & Requirements • 3+ years’ experience in data engineering/data management • Cloud experience with AWS or GCP or Azure (in that order) • Experience with Modern data platforms, Snowflake, dbt, Fivetran and Airflow or Informatica, Kafka, CDC, SQL, Irwin, Python, AWS (S3, Athena, Glue, Kinesis, Redshift), Spark, Scala, AI/ML, • Be able to speak to Data and Cloud technologies on Resume • Data Warehouse technical architectures, infrastructure components, ETL / ELT Top Plus Skills • Data engineering certification is a plus. • Bachelor's/Tech School degree in Computer Science, Information Systems, Engineering or equivalent and/or commensurate years of real-world experience in software engineering. 
• Beverage industry or Retail industry: These companies in Atlanta are good Retail companies: Macy's or CRH or OldCastle or Coke or Home Depot or Kimberly Clark, Georgia Pacific, AGCO, HD Supply, JMHuber, Novelis, Newell/Rubbermaid, Carter's, Spanx or CPG Why Hays? You will be working with a professional recruiter who has intimate knowledge of the industry and market trends. Your Hays recruiter will lead you through a thorough screening process in order to understand your skills, experience, needs, and drivers. You will also get support on resume writing, interview tips, and career planning, so when there’s a position you really want, you’re fully prepared to get it. Nervous about an upcoming interview? Unsure how to write a new resume? Visit the Hays Career Advice section to learn top tips to help you stand out from the crowd when job hunting. Hays is an Equal Opportunity Employer including disability/veteran. In accordance with applicable federal and state law protecting qualified individuals with known disabilities, Hays U.S. Corporation will attempt to reasonably accommodate those individuals unless doing so would create an undue hardship on the company. 
Any qualified applicant or consultant with a disability who requires an accommodation in order to perform the essential functions of the job should call or text 813.336.5570. Drug testing may be required; please contact a recruiter for more information."," Mid-Senior level "," Full-time "," Information Technology "," Banking and Financial Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-just-slide-media-3514663590?refId=b9Frz1BJaPdsIGXxP9IqhQ%3D%3D&trackingId=4eq1TFYCEYXhEwqxfpN4xQ%3D%3D&position=14&pageNum=18&trk=public_jobs_jserp-result_search-card," Just Slide Media ",https://www.linkedin.com/company/just-slide-media?trk=public_jobs_topcard-org-name," Los Angeles Metropolitan Area "," 6 days ago "," Over 200 applicants ","Just Slide Media is building the world's leading growth tech stack and growth team, supporting category-leading startups and incumbent large-scale brands undertaking digital transformation, across fintech, insurance, telco, ecommerce and entertainment. We are proven entrepreneurs and technology operators combining the speed of a startup, the expertise of a digital agency, the strategic thinking of a consultancy, and the analytics of technology leaders to digitally transform products, connect consumers with better experiences, and unlock exponential value for brands. At Just Slide Media, we do everything in our power to help our clients ""Grow baby, grow!"" As a Senior Data Engineer you will get to play a key role in the delivery of powerful data-driven products that enable innovative direct-to-consumer models across all vertical categories including telco, fintech, ecommerce, insurance and others. Program in SQL, Python, Linux bash, and cloud services APIs to automate the processing of transactional and marketing data from different networks. Serve as the data wrangler and ETL expert for the company. Ingest, transform, cleanse and augment internal and external data assets. 
Leverage a range of cloud services, ETL frameworks and libraries, such as AWS  (EC2, RDS, S3, Lambda, Redshift, Athena, Glue), Postgres, Python, Pandas, Apache Airflow, and DST. REQUIREMENTS Computer Science degree from a competitive university program. (Alternatively, 5+ years relevant experience with history of skills progression and demonstrated accomplishments.) Minimum 3+ years in experience specifically in data engineer or ETL developer roles 5+ years in SQL: Programming complex queries, dynamic SQL, stored procedures, user defined functions, performance tuning 3+ years in ETL: Data pipelines, ETL concepts and frameworks (DST), data-oriented cloud architecture, data warehousing, scalable technologies (such as column Redshift and BQ) 2+ years in data oriented cloud services: Preferably AWS (S3, RDS, Redshift, Athena, Glue) 1+ years with Python 1+ years with Linux command line, shell scripting and Git / GitHub."," Full-time ",,, Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-meharry-medical-college-3523061217?refId=b9Frz1BJaPdsIGXxP9IqhQ%3D%3D&trackingId=E6cXEFK6Hl2DQnzSbedBpQ%3D%3D&position=16&pageNum=18&trk=public_jobs_jserp-result_search-card," Meharry Medical College ",https://www.linkedin.com/company/meharry-medical-college?trk=public_jobs_topcard-org-name," Nashville, TN "," 1 month ago "," Be among the first 25 applicants "," This position is based in the Data Infrastructure department of the Enterprise Data and Analytics (EDA) division at Meharry Medical College (MMC). The EDA’s Data Infrastructure department supports the organization’s data strategy by providing an on-Prem and on-Cloud Data Management Platform which is managed and maintained by a closely-knit team of data engineers, warehouse specialists, and security professionals. The Data Infrastructure department oversees the implementation and management of technology solutions to effectively create, collect, store, protect, and share data across the MMC enterprise and with external entities. The MMC enterprise comprises the Academic enterprise, the Business enterprise, the Clinical enterprise, and the Research enterprise. The department is responsible for ensuring enterprise-wide data is securely managed, accessible, and organized in a manner that supports the data and analytics teams’ initiatives which include reporting, analytics, and data science. The Data Engineer will be responsible for the design and development of data workflows, ETL-like processes, queries, data modeling, and visualization of various clinical and non-clinical databases in the Meharry Medical College Data Ecosystem. Candidates must have knowledge of data architecture, modern cloud technologies, data modeling, and data pipeline. 
The Data Engineer must also demonstrate advanced analytical skills, and technical and business knowledge and have a strong understanding of how to leverage industry-standard tools and methods to solve problems. The Data Engineer will work closely with other team members by providing data mapping and wrangling expertise. The candidate may often wrestle with problems associated with database integration, messy data, and unstructured data sets. He/She will ultimately deliver organized data in a manner that supports all aspects of data science, e.g., data reporting, analytics, and visualization, and clean data for researchers. Daily Operations Design, build, test, and maintain scalable data pipeline architectures and tools. Build orchestration pipelines and automate data workflow. Maintain automation process, orchestrator, or job scheduler, execute workflows, and coordinate dependencies among tasks. Develop the current software development CI/CD pipeline. Build ETL (Extract, Transform, Load) pipelines and manage ETL functions. Perform complex queries using SQL and/or other tools (e.g., R, Python Pandas, PySpark). Assess and optimize cloud environment specifications and requirements for big data and analytics. Work with data integration and management tools such as databases (cloud and on-prem; relational and NoSQL), Databricks or similar, and a variety of tools from major cloud providers. Contribute to the standards and tooling of the data platform to improve productivity and data quality. Understand the needs of the project and translate to technical requirements. Partner with data scientists/analysts and cross-functional teams to discover, collect, cleanse, and refine the data needed for analysis and modeling, and support of day-to-day technology operations. Support data science and machine learning engineers in automating model development and deployment. Design and maintain current data models that are optimized for storage and read query patterns. 
Design and develop best practices for documenting data flows and schemas throughout the enterprise. Maintain architecture diagrams and documentation and identify and eliminate redundancy. Ensure industry best practices around data pipelines, metadata management, data quality, data governance and data privacy. Support implementation of data governance policies and standards for the entire MMC enterprise. Support cloud framework for Governance and Compliance, Hybrid Connectivity, and Identity and security framework. Be a thought leader and partner in the development and execution of the Enterprise Data Strategy. Support initiatives to improve data reliability, efficiency, and quality. Support data access and querying/visualization needs. Write complex SQL queries across multiple data sources. Provide support in data analysis, reports, dashboards, and tools for business users. Empower business leaders across product, clinical, and operations teams to adopt more innovative approaches to data collection, management and utilization. Performs other related duties as assigned. Required Skills Knowledge of data structures, data management tools, and related software. Knowledge of building tools for the electronic medical record, and other clinical systems. Understanding of general medical terminology and healthcare clinical code. Knowledge of healthcare operations, healthcare data exchange, and interoperability. Self-starter, self-motivated, high level of initiative within a fast-paced, constantly evolving data management environment. Possess an ability to thrive and succeed in a high-performing team environment. Result-focused, able to solve complex problems and resolve conflicts in a timely manner. Demonstrated ability to collaborate with multiple stakeholders to define and deliver complex technical solutions. Ability to establish credibility and work with key internal and external stakeholders to get things done. 
Ability to think clearly, analyze quantitatively, problem-solve, scope technical requirements, and prioritize tasks. Motivation to learn, lead, and contribute as a team player on a variety of projects. Ability to communicate ideas and execution plans clearly to both technical and non-technical teams. Analytical and problem-solving skills. Excellent attention to detail.Required Experience Bachelor’s or master’s degree in Data Science, Data Engineering, Informatics, Computer Science/engineering, or a related field OR professional development/bootcamp on data engineering from an accredited university. 3+ years of relevant industry experience including developing data workflow, implementing tools, developing ETL, integration, and/or technical development, SQL Server, scripting languages, and relational databases. Experience working with at least one cloud platform and/or on-prem data platforms and its data offerings (AWS, GCP, Azure experience huge plus) for data warehousing, analytics, and machine learning architecture. Experience with complex data modeling and designing ETL preferably in healthcare space (OMOP or similar). Experience working with orchestration systems. Experience working with Data Governance tools and processes. Experience with languages like Python, Ruby, Java, JavaScript or similar language Experience with one of the Big Data technologies. Demonstrated ability to analyze large data sets to identify gaps and inconsistencies, provide data insights, and advance effective product solutions. 
"," Entry level "," Full-time "," Information Technology "," Higher Education " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-cvs-health-3517687503?refId=b9Frz1BJaPdsIGXxP9IqhQ%3D%3D&trackingId=SyBwkqdzJLqnf2PF%2F9I6IQ%3D%3D&position=17&pageNum=18&trk=public_jobs_jserp-result_search-card," CVS Health ",https://www.linkedin.com/company/cvs-health?trk=public_jobs_topcard-org-name," Connecticut, United States "," 4 weeks ago "," 61 applicants ","Job Description Join the Medicare Stars Data Intelligence Operational Excellence Team and open yourself to a career opportunity with rapid growth potential. Provide operational and production support for web applications, databases, reports, dashboards and data processes. Use your innovation skills to automate manual processes and eliminate manual jobs. Implement Robotic Process Automation (RPA). Perform Quality Assurance (QA) to ensure the best outcomes for our business partners and end users. Multiple positions with varying skill sets are available. Performs operational and production support on varied healthcare applications, databases, processes, reports and dashboards. Performs analyses of varied healthcare data to evaluate programs and product solutions using healthcare medical, pharmacy, lab, survey and utilization data. Designs and codes efficiently using SAS/SQL or C#.NET and other programs. Extracts data from multiple sources and creates integrated analytic datasets and creates reports and studies to address business and research questions. Develops, validates and executes algorithms and predictive models to solve business problems. Executes univariate and multivariate statistical analyses and interprets results in the context of the business problem or question. Conducts periodic updates of analytical routines and department production systems. Investigates and evaluates alternative solutions to meeting business needs. 
Participates in presentations and consultations to constituents on information services, capabilities and performance results. Documents methods, specifications and findings clearly; contributes to writing and presentation of results, findings and conclusions. Creates and evaluates the data needs of assigned projects and assures the integrity of the data. Manages personal resources efficiently to complete assigned projects accurately and on time. Participates in the development of project plans and timelines. Responsible for commitments to quality and on-time deliverables. Motivated and is willing to understand and probe into technical details and data irregularities. Contributes to a motivated work environment by working effectively to achieve common goals. Pay Range The typical pay range for this role is: Minimum: $43,700 Maximum: $100,000 Please keep in mind that this range represents the pay range for all positions in the job grade within which this position falls. The actual salary offer will take into account a wide range of factors, including location. Required Qualifications Demonstrated written and verbal communication skills. Able to present information to various audiences. 3+ years of relevant programming or analytic experience. Knowledge of other reporting platforms such as: web development technology (C#.NET), big data analytics, data mining, Visual Basic. Understanding of relational databases, data systems and data warehouses. Experience in statistical analysis and database management. Knowledge of analytic programming tools and methods (SQL, OLAP, C#.NET). Solid understanding of complex databases. Demonstrated analytical and problem-solving skills. Preferred Qualifications Healthcare or pharmaceutical industry experience is preferred. Strong knowledge of health care claims, products, and systems. Education Bachelor's degree in Mathematics, Economics, Statistics, Computer Science, Actuarial Science, Social Science, Healthcare or related discipline. 
or equivalent experience. Business Overview Bring your heart to CVS Health Every one of us at CVS Health shares a single, clear purpose: Bringing our heart to every moment of your health. This purpose guides our commitment to deliver enhanced human-centric health care for a rapidly changing world. Anchored in our brand — with heart at its center — our purpose sends a personal message that how we deliver our services is just as important as what we deliver. Our Heart At Work Behaviors™ support this purpose. We want everyone who works at CVS Health to feel empowered by the role they play in transforming our culture and accelerating our ability to innovate and deliver solutions to make health care more personal, convenient and affordable. We strive to promote and sustain a culture of diversity, inclusion and belonging every day. CVS Health is an affirmative action employer, and is an equal opportunity employer, as are the physician-owned businesses for which CVS Health provides management services. We do not discriminate in recruiting, hiring, promotion, or any other personnel action based on race, ethnicity, color, national origin, sex/gender, sexual orientation, gender identity or expression, religion, age, disability, protected veteran status, or any other characteristic protected by applicable federal, state, or local law. 
We proudly support and encourage people with military experience (active, veterans, reservists and National Guard) as well as military spouses to apply for CVS Health job opportunities."," Entry level "," Full-time "," Health Care Provider "," Wellness and Fitness Services " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-new-century-health-3522443467?refId=b9Frz1BJaPdsIGXxP9IqhQ%3D%3D&trackingId=Qo1bUtr0%2FTja9nFOjLwiwA%3D%3D&position=18&pageNum=18&trk=public_jobs_jserp-result_search-card," New Century Health ",https://www.linkedin.com/company/new-century-health?trk=public_jobs_topcard-org-name," United States "," 2 days ago "," 147 applicants ","Your Future Evolves Here New Century Health (NCH) has been transforming the delivery of specialty care and driving radical cost and quality improvement across the member journey for patients with cancer and cardiovascular disease. As part of Evolent Health, we are on a bold mission to change the health of the nation by changing the way health care is delivered. Evolenteers make a difference wherever they are, whether it is at a medical center, in the office, or while working from home across 48 states. We empower you to work from where you work best, which makes juggling careers, families, and social lives so much easier. Through our recognition programs, we also highlight employees who live our values, give back to our communities each year, and are champions for bringing their whole selves to work each day. If you’re looking for a place where your work can be personally and professionally rewarding, don’t just join a company with a mission. Join a mission with a company behind it. Why We’re Worth the Application: We continue to grow year over year. Recognized as a leader in driving important diversity, equity, and inclusion (DE&I) efforts. 
Achieved a 100% score two years in a row on the Human Rights Campaign's Corporate Equality Index recognizing us as a best place to work for LGBTQ+ equality. Named to Parity.org’s list of the best companies for women to advance for 3 years in a row (2020, 2021 and 2022). Continue to prioritize the employee experience and achieved a 90% overall engagement score on our employee survey in May 2022. Publish an annual DE&I report to share our progress on how we’re building an equitable workplace. What You’ll Be Doing: The Role in Brief and Who You’ll Be Working With: You will be responsible for getting requirements from the business, understanding how those requirements relate to data, loading/importing relevant data to meet those requirements, exporting data for user usage, and generating reports from said data. You will also interact with technical and non-technical peers both in the US and India. The Experience You’ll Need (Required): Experience with SQL, SSIS, SSRS (min 3 years) Experience loading, importing, and exporting data into and from databases (min 3 years) Experience generating PDFs using RDL (min 1 year) Experience designing databases (min 1 year) Good written and verbal communication Bachelor’s degree (or equivalent experience) Finishing Touches (Preferred): Experience in healthcare Experience with Google Analytics Experience generating reports from databases Technical Requirements: Currently, Evolent employees work remotely temporarily due to COVID-19. As such, we require that all employees have the following technical capability at their home: High speed internet over 10 Mbps, the ability to plug in directly to the home internet router. These at-home technical requirements are subject to change with any scheduled re-opening of our office locations. 
Evolent Health is committed to the safety and wellbeing of all its employees, partners and patients and complies with all applicable local, state, and national law regarding COVID health and vaccination requirements. Evolent expects all employees to also comply. We currently require all employees who may voluntarily return to our Evolent offices to be vaccinated and invite all employees regardless of vaccination status to remain working from home. Evolent Health is an equal opportunity employer and considers all qualified applicants equally without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, or disability status. Compensation Range: The minimum salary for this position is $105,000, plus benefits. Salaries are determined by the skill set required for the position and commensurate with experience and may vary above and below the stated amounts."," Entry level "," Full-time "," Information Technology "," Hospitals and Health Care " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-advanced-knowledge-tech-llc-3528107908?refId=b9Frz1BJaPdsIGXxP9IqhQ%3D%3D&trackingId=WiNwp0GRjabeYlO32N6gIQ%3D%3D&position=19&pageNum=18&trk=public_jobs_jserp-result_search-card," New Century Health ",https://www.linkedin.com/company/new-century-health?trk=public_jobs_topcard-org-name," United States "," 2 days ago "," 147 applicants "," Your Future Evolves HereNew Century Health (NCH) has been transforming the delivery of specialty care and driving radical cost and quality improvement across the member journey for patients with cancer and cardiovascular disease. As part of Evolent Health, we are on a bold mission to change the health of the nation by changing the way health care is delivered. Evolenteers make a difference wherever they are, whether it is at a medical center, in the office, or while working from home across 48 states. 
We empower you to work from where you work best, which makes juggling careers, families, and social lives so much easier. Through our recognition programs, we also highlight employees who live our values, give back to our communities each year, and are champions for bringing their whole selves to work each day. If you’re looking for a place where your work can be personally and professionally rewarding, don’t just join a company with a mission. Join a mission with a company behind it. Why We’re Worth the Application: We continue to grow year over year. Recognized as a leader in driving important diversity, equity, and inclusion (DE&I) efforts. Achieved a 100% score two years in a row on the Human Rights Campaign's Corporate Equality Index recognizing us as a best place to work for LGBTQ+ equality. Named to Parity.org’s list of the best companies for women to advance for 3 years in a row (2020, 2021 and 2022). Continue to prioritize the employee experience and achieved a 90% overall engagement score on our employee survey in May 2022. Publish an annual DE&I report to share our progress on how we’re building an equitable workplace. What You’ll Be Doing: The Role in Brief and Who You’ll Be Working With: You will be responsible for getting requirements from business, understanding how those requirements relate to data, loading/importing relevant data to meet those requirements, exporting data for user usage, and generating reports from said data. 
You will also interact with technical and non-technical peers both in the US and India. The Experience You’ll Need (Required): Experience with SQL, SSIS, SSRS (min 3 years) Experience loading, importing, and exporting data into and from databases (min 3 years) Experience generating PDFs using RDL (min 1 year) Experience designing databases (min 1 year) Good written and verbal communication Bachelor’s degree (or equivalent experience) Finishing Touches (Preferred): Experience in healthcare Experience with Google Analytics Experience generating reports from databases Technical Requirements: Currently, Evolent employees work remotely temporarily due to COVID-19. As such, we require that all employees have the following technical capability at their home: High speed internet over 10 Mbps, the ability to plug in directly to the home internet router. These at-home technical requirements are subject to change with any scheduled re-opening of our office locations. Evolent Health is committed to the safety and wellbeing of all its employees, partners and patients and complies with all applicable local, state, and national laws regarding COVID health and vaccination requirements. Evolent expects all employees to also comply. We currently require all employees who may voluntarily return to our Evolent offices to be vaccinated and invite all employees regardless of vaccination status to remain working from home. Evolent Health is an equal opportunity employer and considers all qualified applicants equally without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, or disability status. Compensation Range: The minimum salary for this position is $105,000, plus benefits. Salaries are determined by the skill set required for the position and commensurate with experience and may vary above and below the stated amounts. 
"," Entry level "," Full-time "," Information Technology "," Hospitals and Health Care " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-cars-com-3516641350?refId=b9Frz1BJaPdsIGXxP9IqhQ%3D%3D&trackingId=Hmzy4AgYtMHWRAjZ7apmVQ%3D%3D&position=20&pageNum=18&trk=public_jobs_jserp-result_search-card," Cars.com ",https://www.linkedin.com/company/cars-com?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," 152 applicants "," About Us At Cars.com, we help shoppers meet their perfect car match, and people find their perfect career match. As one of the top places to work in Chicago, according to The Chicago Tribune, Built-In Chicago and others, we pride ourselves on a culture of growth and innovation. Cars.com has revolutionized the automotive industry through technology and solutions for buyers and sellers alike. We never shy away from a challenge, move fast, and collaborate across functions to approach problems from every angle. We’ve built a culture that’s second-to-none and share core values that keep everyone working full-speed at the same goals with the same open, outcome-driven and bold attitudes. Cars.com is a CARS brand. CARS includes the following brands: Cars.com, Dealer Inspire, DealerRater, FUEL, CreditIQ & Accu-Trade. Learn more here! Data is the driver for our future at Cars. We’re searching for a collaborative, analytical, and innovative engineer to build scalable and highly performant platforms, systems and tools to enable innovations with data. 
If you are passionate about building large scale systems and data driven products, we want to hear from you. Responsibilities Include Build data pipelines and derive insights out of the data using advanced analytic techniques, streaming and machine learning at scale Work within a dynamic, forward thinking team environment where you will design, develop, and maintain mission-critical, highly visible Big Data and Machine Learning applications Build, deploy and support data pipelines and ML models into production. Work in close partnership with other Engineering teams, including Data Science, & cross-functional teams, such as Product Management & Product Design Opportunity to mentor others on the team and share your knowledge across the Cars.com organization Required Skills Ability to develop Spark jobs to cleanse/enrich/process large amounts of data. Experience with tuning Spark jobs for efficient performance including execution time of a job, execution memory, etc. Experience with dimensional data modeling concepts. Sound understanding of various file formats and compression techniques. Experience with source code management systems such as Github and developing CI/CD pipelines with tools such as Jenkins for data. Ability to understand deeply the entire architecture for a major part of the business and be able to articulate the scaling and reliability limits of that area; design, develop and debug at an enterprise level and design and estimate at a cross-project level. Ability to mentor developers and lead projects of medium to high complexity. Excellent communication and collaboration skills. Required Experience Software Engineering: 3 - 5 years of designing & developing complex, batch processes at enterprise scale; specifically utilizing Python and/or Scala. Big Data Ecosystem: 2+ years of hands-on, professional experience with tools and platforms like PySpark, Airflow, and Redshift. 
AWS Cloud: 2+ years of professional experience in developing Big Data applications in the cloud, specifically AWS. Preferred Experience working with Clickstream Data Experience working with digital marketing data Experience with developing REST APIs. Experience in deploying ML models into production and integrating them into production applications for use. Experience with machine learning / deep learning using R, Python, Jupyter, Zeppelin, TensorFlow, etc. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. "," Entry level "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-fortress-information-security-3494484680?refId=b9Frz1BJaPdsIGXxP9IqhQ%3D%3D&trackingId=KTEQUDSiR4BrOpsHxvEbyw%3D%3D&position=21&pageNum=18&trk=public_jobs_jserp-result_search-card," Fortress Information Security ",https://www.linkedin.com/company/fortress-information-security?trk=public_jobs_topcard-org-name," Florida, United States "," 3 weeks ago "," 37 applicants ","What you can expect as a Data Engineer at Fortress: The Data Engineer will be responsible for defining data input rules and processes to ensure data quality and integrity. The data input comes from many stakeholders and the candidate will need to identify anomalies and develop programmatic solutions to solve problems. A successful candidate will be well versed in the implementation of data management strategies and data technologies. Day to day activities include data cleanup, documentation and definition of solution architectures, and hands on development activities. 
Responsibilities Include Develop an in-depth knowledge of Fortress’s data, data flows, and processes Create and document data management standards, policies, and best practices Drive alignment on enterprise data models and definitions Provide technical guidance Develop with a focus on automation and consistency Can work independently or as part of a team Anticipate, recognize, report, and help resolve issues Ability to understand how technical requirements impact current processes Ability to understand and facilitate coordination of data requirements across teams Minimum Qualifications 2+ years of experience in a data engineer or equivalent role Expertise in structured and unstructured databases such as PostgreSQL, MongoDB, and ElasticSearch Expertise in building data models and complex SQL queries Expertise in data quality validation with the ability to conduct data analysis, investigation, and document resolution Experience in data process design, implementation, and improvement Ability to lead your own projects and operate with a high degree of autonomy in a remote working environment Must have the ability to explain technical concepts and make decisions with non-technical team members Develops clean and intuitive code Excellent written and verbal communication skills Preferred Experience Experience with Jira Experience with Python and data analysis libraries such as Pandas Use of agile and DevOps practices for project and software management including continuous integration and continuous delivery Excellent time management skills and proven ability to multi-task competing priorities Education Bachelor's Degree in Information Technology, Computer Science, Data Engineering, Data Analytics or equivalent degree from an accredited University Employee Benefits Remote and Hybrid working environment Competitive pay structure Medical, dental, vision plans with employees covered up to 90% with highly progressive options for dependents and families Company 
paid life, short- and long-term disability insurance Employee Assistance Program 401(k) match Paid time off and holiday pay Access to thousands of Learning & Development courses that range from mental health and wellbeing, stress, and time management to an array of technical and business-related courses Employment Perks We provide each employee with professional growth opportunities through succession planning, up-skilling, and certifications Tuition and certification reimbursement Employee Referral Programs Company Sponsored Events Fortress is proud to be an Equal Opportunity Employer. All employees and applicants will receive consideration for employment without regard to age, color, disability, gender, national origin, race, religion, sexual orientation, gender identity, protected veteran status, or any other classification protected by federal, state, or local law. Fortress Information Security takes part in the E-Verify process for all new hires. For positions located in the US, the following conditions apply. If you are made a conditional offer of employment, you will have to undergo a drug test. ADA Disclaimer: In developing this job description care was taken to include all competencies needed to successfully perform in this position. However, for Americans with Disabilities Act (ADA) purposes, the essential functions of the job may or may not have been described for purposes of ADA reasonable accommodation. 
All reasonable accommodation requests will be reviewed and evaluated on a case-by-case basis."," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-angle-health-3479600858?refId=b9Frz1BJaPdsIGXxP9IqhQ%3D%3D&trackingId=HS15j4w09v%2BhIu4HaPgOCA%3D%3D&position=22&pageNum=18&trk=public_jobs_jserp-result_search-card," Angle Health ",https://www.linkedin.com/company/anglehealth?trk=public_jobs_topcard-org-name," United States "," 4 weeks ago "," 67 applicants ","Changing Healthcare For Good At Angle Health, we believe the healthcare system should be accessible, transparent, and easy to navigate. As a digital-first, data-driven health plan, we are replacing legacy systems with modern infrastructure to deliver our members the care they need when they need it. If you want to build the future of healthcare, we'd love for you to join us. The Role As a Data Engineer on the Strategy team at Angle Health, you will be part of an elite team of problem solvers tackling some of the hardest operational challenges across the business. You will be working closely with Technical Product Managers and other engineers in small teams embedded within functions across the company—from sales to operations to finance and more. In partnership with operational stakeholders, your job will be to quickly understand business workflows, and design and implement technical, data-driven solutions to achieve the intended business outcomes. That may include architecting backend micro-services, building and maintaining robust data pipelines, standing up health metrics and automated alerts, and developing scalable technical solutions to support Angle Health's day-to-day operations. 
Every day will be a new learning experience in this role and may span discussing architecture with fellow engineers, wrangling large-scale data, coding a custom web app or micro-service, or speaking with customers and executives. This role is ideal for entrepreneurial engineers, applied data scientists, and creative, intellectually curious thinkers who want to solve high-value problems in healthcare. This position may be based in San Francisco, New York City, Salt Lake City, or Remote. We are currently trialing various titles for the same role. Please consider the following posted roles as the same position: Deployed Engineer, Software Engineer, Business Operations, and Data Engineer. Please note these are the same, so if you have already applied for one position, there is no need to reapply for the others. What We Value A strong engineering background in computer science, software engineering, data science, mathematics, or similar technical field is required for this role Proficiency in programming languages (e.g. Python, SQL, Java, TypeScript/JavaScript, or similar) and data engineering frameworks A highly analytical mindset and an eagerness to build technical solutions to complex business problems High attention to detail and intellectual curiosity—you're not satisfied with surface-level answers. 
You want to dive into the data, the ""how,"" and the ""why"" because ""the way it's always been done"" is not always the way it should be done Low ego—the outcome matters more than who gets the credit Demonstrated ability to collaborate effectively in teams of technical and non-technical individuals Highly organized with an ability to multitask, problem solve, and balance competing priorities in a rapidly changing environment Because We Value You: Competitive compensation and stock options 100% company paid comprehensive health, vision & dental insurance for you and your dependents Supplemental Life, AD&D and Short Term Disability coverage options Discretionary time off Opportunity for rapid career progression Relocation assistance (if relocation is required) 3 months of paid parental leave and flexible return to work policy (after 10 months of employment) Work-from-home stipend for remote employees Company provided lunch for in-office employees 401(k) account Other benefits coming soon! Backed by a team of world class investors, we are a healthcare startup on a mission to make our health system more effective, accessible, and affordable to everyone. From running large hospitals and health plans to serving on federal healthcare advisory boards to solving the world's hardest problems at Palantir, our team has done it all. As part of this core group at Angle Health, you will have the right balance of support and autonomy to grow both personally and professionally and the opportunity to own large parts of the business and scale with the company. Angle Health is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. 
Angle Health is committed to working with and providing reasonable accommodations to applicants with physical and mental disabilities."," Not Applicable "," Full-time "," Information Technology "," Insurance " Data Engineer,United States,Data Engineer - Data Analytics,https://www.linkedin.com/jobs/view/data-engineer-data-analytics-at-costco-it-3518671901?refId=b9Frz1BJaPdsIGXxP9IqhQ%3D%3D&trackingId=1RnonBRi8WQp%2F9GZ19yWeA%3D%3D&position=23&pageNum=18&trk=public_jobs_jserp-result_search-card," Costco IT ",https://www.linkedin.com/company/costco-weareit?trk=public_jobs_topcard-org-name," Dallas, TX "," 1 month ago "," Be among the first 25 applicants ","This is an environment unlike anything in the high-tech world and the secret of Costco’s success is its culture. The value Costco puts on its employees is well documented in articles from a variety of publishers including Bloomberg and Forbes. Our employees and our members come FIRST. Costco is well known for its generosity and community service and has won many awards for its philanthropy. The company joins with its employees to take an active role in volunteering by sponsoring many opportunities to help others. In 2021, Costco contributed over $58 million to organizations such as United Way and Children's Miracle Network Hospitals. Costco IT is responsible for the technical future of Costco Wholesale, the third largest retailer in the world with wholesale operations in fourteen countries. Despite our size and explosive international expansion, we continue to provide a family, employee centric atmosphere in which our employees thrive and succeed. As proof, Costco ranks seventh in Forbes “World’s Best Employers”. The Data Engineer - Data Analytics is responsible for the end to end data pipelines to power analytics and data services. This role is focused on data engineering to build and deliver automated data pipelines from a plethora of internal and external data sources. 
The Data Engineer will partner with product owners, engineering and data platform teams to design, build, test and automate data pipelines that are relied upon across the company as the single source of truth. If you want to be a part of one of the worldwide BEST companies “to work for”, simply apply and let your career be reimagined. ROLE Develops and operationalizes data pipelines to make data available for consumption (BI, Advanced analytics, Services). Works in tandem with data architects and data/BI engineers to design data pipelines and recommends ongoing optimization of data storage, data ingestion, data quality and orchestration. Designs, develops and implements ETL/ELT processes using IICS (Informatica cloud). Uses Azure services such as Azure SQL DW (Synapse), ADLS, Azure Event Hub, Azure Data Factory to improve and speed up delivery of our data products and services. Implements big data and NoSQL solutions by developing scalable data processing platforms to drive high-value insights to the organization. Identifies, designs, and implements internal process improvements: automating manual processes, optimizing data delivery. Identifies ways to improve data reliability, efficiency and quality of data management. Communicates technical concepts to non-technical audiences both in written and verbal form. Performs peer reviews for other data engineers’ work. Required 5+ years’ experience engineering and operationalizing data pipelines with large and complex datasets. 5+ years of hands-on experience with Informatica PowerCenter 2+ years of hands-on experience with Informatica IICS 3+ years’ experience working with Cloud technologies such as ADLS, Azure Databricks, Spark, Azure Synapse, Cosmos DB and other big data technologies. Extensive experience working with various data sources (SQL, Oracle database, flat files (csv, delimited), Web API, XML). Advanced SQL skills required. 
Solid understanding of relational databases and business data; ability to write complex SQL queries against a variety of data sources. 5+ years’ experience with Data Modeling, ETL, and Data Warehousing. Strong understanding of database storage concepts (data lake, relational databases, NoSQL, Graph, data warehousing). Scheduling flexibility to meet the needs of the business including weekends, holidays, and 24/7 on call responsibilities on a rotational basis. Able to work in a fast-paced agile development environment. Recommended BA/BS in Computer Science, Engineering, or equivalent software/services experience. Azure Certifications Experience implementing data integration techniques such as event / message based integration (Kafka, Azure Event Hub), ETL. Experience with Git / Azure DevOps Experience delivering data solutions through agile software development methodologies. Exposure to the retail industry. Excellent verbal and written communication skills. Experience working with SAP integration tools including BODS. Experience with UC4 Job Scheduler Required Documents Cover Letter Resume California applicants, please click to review the Costco Applicant Privacy Notice. Pay Ranges Level 2 - $100,000 - $135,000 Level 3 - $125,000 - $165,000 Level 4 - $155,000 - $195,000 - Potential Bonus and Restricted Stock Unit (RSU) eligible level We offer a comprehensive package of benefits including paid time off, health benefits — medical/dental/vision/hearing aid/pharmacy/behavioral health/employee assistance, health care reimbursement account, dependent care assistance plan, commuter benefits, short-term disability and long-term disability insurance, AD&D insurance, life insurance, 401(k), stock purchase plan, SmartDollar financial wellness program to eligible employees. Costco is committed to a diverse and inclusive workplace. Costco is an equal opportunity employer. 
Qualified applicants will receive consideration for employment without regard to race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or any other legally protected status. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request to IT-Recruiting@costco.com. If hired, you will be required to provide proof of authorization to work in the United States. In some cases, applicants and employees for selected positions will not be sponsored for work authorization, including, but not limited to, H-1B visas."," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer Intern,https://www.linkedin.com/jobs/view/data-engineer-intern-at-houlihan-lokey-3531339549?refId=b9Frz1BJaPdsIGXxP9IqhQ%3D%3D&trackingId=lj8WfGyE5z5np2LaSF4irA%3D%3D&position=24&pageNum=18&trk=public_jobs_jserp-result_search-card," Houlihan Lokey ",https://www.linkedin.com/company/houlihan-lokey?trk=public_jobs_topcard-org-name," Los Angeles, CA "," 11 hours ago "," 116 applicants ","Business Unit Information Technology Industry No Industry Houlihan Lokey Data Engineer Intern Houlihan Lokey (NYSE:HLI) is a global investment bank with expertise in mergers and acquisitions, capital markets, financial restructuring, and valuation. The firm serves corporations, institutions, and governments worldwide with offices in the United States, Europe, the Middle East, and the Asia-Pacific region. Independent advice and intellectual rigor are hallmarks of the firm’s commitment to client success across its advisory services. Houlihan Lokey is the No. 1 investment bank for all global mergers and acquisitions (M&A) transactions, the No. 1 M&A advisor for the past seven consecutive years in the U.S., the No. 1 global restructuring advisor for the past eight consecutive years, and the No. 
1 global M&A fairness opinion advisor for the past 20 years, all based on number of transactions and according to data provided by Refinitiv. For more information, please visit www.hl.com. Scope The ideal candidate is someone who has a passion for all things data and automation including data quality, API integrations, and machine learning. As a Data Engineer Intern in Information Technology, you will get hands-on experience working with enterprise-level data at a No. 1 ranked global company utilizing technologies such as Azure, Snowflake, Python, and SQL. If you’re excited by the prospect of optimizing or even re-designing a company’s data architecture to support the next generation of products and data initiatives, then this might be the right opportunity for you. Responsibilities Create and maintain optimal data pipeline architecture. Assemble data sets (ranging from small to large and simple to complex) that meet both functional and non-functional business requirements. Identify, design, and implement internal process improvements (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.) Build the infrastructure required for optimal extraction, transformation, and loading (ETL) of data from a wide variety of data sources using Python, SQL Server, and Snowflake. Work with teams to assist with data-related technical issues and support their data infrastructure needs. Create data tools for analytics and data scientist team members that assist them in building and optimizing our organization into an innovative industry leader. Prototype machine learning models to provide useful insights for existing data. Basic Qualifications Basic working SQL skills and experience working with relational databases . Experience in building and optimizing data pipelines, architectures, and data sets. 
Experience in performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement. Motivated, detail-oriented, and passionate about data. Strong analytical skills with ability to analyze problems and devise viable solutions. Ability to collaborate with colleagues and be a team player. Demonstrate initiative and ownership of tasks and projects. Ability to work effectively both independently and in team environments. Strong interpersonal and communication skills with a proven ability to communicate effectively and confidently at all levels. BA/BS Computer Science / Business / MIS (or equivalent work experience to substitute for education) Preferred Qualifications Experience with complex data structures and Relational Databases desired. Experience in Python (Anaconda or Python). Salary Range The firm’s good faith and reasonable estimate of the possible salary range for this role at the time of posting is $20.00-$30.00. Houlihan Lokey is committed to providing its employees with an exciting career opportunity and competitive total compensation package. Actual salary at the time of hire may vary and may be above or below the range based on various factors, including, but not limited to, the candidate’s relevant qualifications, skills, and experience and the location where this position may be filled. 
We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, gender identity, sexual orientation, protected veteran status, or any other characteristic protected by law."," Internship "," Full-time "," Information Technology "," Investment Banking " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-wurl-3507137094?refId=b9Frz1BJaPdsIGXxP9IqhQ%3D%3D&trackingId=ihOhSV46uTwtXP%2Ba9rKtkw%3D%3D&position=25&pageNum=18&trk=public_jobs_jserp-result_search-card," Wurl ",https://www.linkedin.com/company/wurl-ctv?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," 141 applicants ","About Wurl, LLC. Wurl is a global streaming network. Our B2B services provide streamers, content companies, and advertisers with a powerful, integrated network to distribute and monetize streaming television reaching hundreds of millions of connected televisions in over 50 countries. This year Wurl was acquired by AppLovin (Nasdaq: APP), an industry-leading mobile marketing ad tech company, bringing together the technology and innovation of the mobile and television industries. With the merger, Wurl employees enjoy the best of both worlds: the dynamic environment of a 160+ person start-up and the stability of a high-growth public tech company. Wurl is a fully-remote company that has been recognized for a second year in a row as a Great Place to Work. Wurl invests in providing a culture that fosters passion, drives excellence, and encourages collaboration to drive innovation. We hire the world’s best from streaming, advertising, and software technology to build a bunch of cool stuff and continue to help us disrupt the way the world watches television. 
Data Engineer (Remote): Wurl is seeking a data engineer who will contribute to the addition of new features to our data platform. Wurl collects data from a range of OTT video streaming ecosystem components. We digest it, analyze it, and present it. The data engineer reports to the director of data engineering and interacts with Product, Solution Architects and various other functions across the organization. Responsibilities: Own and drive highly impactful data services and product initiatives which leverage the rich data that Wurl collects and processes. Design, implement and deliver innovative features that scale with Wurl’s media network and beyond. Build highly secure, scalable, and reliable cloud-native ETLs that run 24x7 Collaborate with project managers, product managers, solutions architects, other engineering teams and support for productization of new software services and features Contribute to enforcing SOC2 compliance across the data platform Requirements: Bachelor’s or Master’s degree in Computer Science or Data Science. 3+ years of complete software lifecycle experience. Demonstrable fluency with SQL & Python. Experience designing and securing all components of the ETL. Deep knowledge of AWS cloud services. Experience with SOC2 implementation on a large scale product Understanding of the infrastructure of CDNs Advertising Infrastructure You need to be: Comfortable working in agile environments. Able to adapt quickly to frequent changes. Evangelizing good software development practices and leading from the front. A master of database design principles. A strong communicator and a leader. Team player, product owner and committed to the results. Location: US Remote A Plus if you have: Streaming video delivery experience. Advertising infrastructure experience. AWS Cloud experience. Knowledge of streaming video formats. Understanding of ETL processes. BI toolset experience highly desired. 
We recognize that not all applicants will meet 100% of the qualifications. Wurl is fiercely passionate and contagiously enthusiastic about what we are building and seeking individuals who are equally passion-driven with diverse backgrounds and educations. While we are seeking those who know our industry, there is no perfect candidate and we want to encourage you to apply even if you do not meet all requirements. What We Offer Competitive Salary Strong Medical, Dental and Vision Benefits, 90% paid by Wurl Remote First Policy Discretionary Time Off, with a minimum of 4 weeks of time off 12 US Holidays 401(k) Matching Pre-Tax Savings Plans, HSA & FSA Carrot and Headspace Subscriptions for Family Planning & Mental Wellness OneMedical Subscription for 24/7 Convenient Medical Care Paid Maternity and Parental Leave for All Family Additions Discounted PetPlan Easy at Home Access to Covid Testing with EmpowerDX $1,000 Work From Home Stipend to Set Up Your Home Office Few companies allow you to thrive like you will at Wurl. You will have the opportunity to collaborate with the industry’s brightest minds and most innovative thinkers. You will enjoy ongoing mentorship, team collaboration and you will have a place to grow your career. You will be proud to say you're a part of the company revolutionizing TV. Wurl provides a competitive total compensation package with a pay-for-performance rewards approach. Total compensation at Wurl is based on a number of factors, including market location and may vary depending on job-related knowledge, skills, and experience. 
Depending on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical and other benefits."," Mid-Senior level "," Full-time "," Information Technology, Engineering, and Other "," Online Audio and Video Media, Broadcast Media Production and Distribution, and Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-anteriad-3499208985?refId=b9Frz1BJaPdsIGXxP9IqhQ%3D%3D&trackingId=OmHxY555TFJODZVmj4ORwQ%3D%3D&position=15&pageNum=18&trk=public_jobs_jserp-result_search-card," Anteriad ",https://www.linkedin.com/company/anteriad?trk=public_jobs_topcard-org-name," Denver, CO "," 2 weeks ago "," 95 applicants ","Come Join Our Team At Anteriad and innovate the way B2B marketers make data-driven business decisions. The Opportunity: Anteriad is seeking a Data Engineer to join our Database Solutions Division in Littleton, Colorado. This is an excellent opportunity for an intelligent, energetic, and self-motivated individual to play a vital role within a growing part of Anteriad. The Data Engineer will have an important role in receiving, organizing, and loading data into Anteriad’s Data Warehouse. This data will then be used by our clients and throughout the Anteriad organization. This is a hybrid position (two days/week remote, three days/week on-site at our Littleton, Colorado office - MWF). Only local candidates will be considered. This position is not eligible for employer-visa sponsorship. 
How You Will Help Us Do That: Primary responsibility is to receive, organize, coordinate and load data using Anteriad’s proprietary Multi-file Automated Processing System (MAPS) Work with others to manage the flow of over 1,000 weekly files destined for the Anteriad Data Warehouse Develop strong relationships with key personnel in other Anteriad departments, communicating as needed to facilitate quality, receipt, and timely processing of data Understand and apply special client-specific business rules Troubleshoot data issues with data providers and other applicable parties to successfully remedy problems As Our Next Data Engineer, You Will Bring: 4-year college degree 2+ years of professional experience Microsoft Office proficiency, with emphasis on Excel and Outlook Highly organized Detail oriented Quick learner Good at multitasking Strong written and verbal communication skills Passion for problem-solving Data experience is preferred What We Bring to You: A choice of 3 Healthcare Plans, including Dental, Vision, Short-Term Disability, and Life Insurance Unlimited PTO and Holidays 401K with company matching Fully paid Primary Caregiver Leave for up to 12 weeks & Parental Bonding Leave for up to 2 weeks Optional Supplemental Life, Accident and Critical Illness Insurance Plans Free Apple Fitness Free Peloton Fitness Our Values: Lead & Learn We lead with unrivaled vision, innovation and execution, always learning and embracing new ways of doing things to stay out in front Collaborate & Celebrate We build great things when we work together as one Anteriad team, celebrating our achievements – both great and small – along the way Innovate & Inspire We are always looking for bold new ways to exceed the expectations of our customers and to inspire each other to even greater success Do More & Do Good We go above and beyond in the service of our clients and colleagues, and the communities where we"," Entry level "," Full-time "," Information Technology "," Advertising 
Services " Data Engineer,United States,DATA ENGINEER,https://www.linkedin.com/jobs/view/data-engineer-at-advanced-knowledge-tech-llc-3528107908?refId=b9Frz1BJaPdsIGXxP9IqhQ%3D%3D&trackingId=WiNwp0GRjabeYlO32N6gIQ%3D%3D&position=19&pageNum=18&trk=public_jobs_jserp-result_search-card," Advanced Knowledge Tech LLC ",https://www.linkedin.com/company/advanced-knowledge-tech-llc?trk=public_jobs_topcard-org-name," Seattle, WA "," 3 weeks ago "," Be among the first 25 applicants ","Position: Data Engineer Location: Seattle, WA (Initial remote) Required Skills AWS data pipeline, Glue, S3, Redshift skills to aggregate data from various AWS S3 and other sources, cleanse, format it and organize in S3/Redshift data model Thanks & Regards, Akash Kumar Rathore Advanced Knowledge Tech, LLC Hebron Office Plaza, 751 Hebron Pkwy, Suite# 325, Lewisville, Texas 75057 :- US: +1 732 992 6563 Fax No.: +1 214-307-6572 Email: akash@akt-corp.com Website : www.akt-corp.com LinkedIn Link: (1) Akash Rathore | LinkedIn"," Entry level "," Full-time "," Other "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-angle-health-3479600858?refId=qBNRN7AmIYeeJjjUhFWVqw%3D%3D&trackingId=SIlAUzAlWc3rdFVdLgTXZQ%3D%3D&position=1&pageNum=19&trk=public_jobs_jserp-result_search-card," Angle Health ",https://www.linkedin.com/company/anglehealth?trk=public_jobs_topcard-org-name," United States "," 4 weeks ago "," 67 applicants ","Changing Healthcare For Good At Angle Health, we believe the healthcare system should be accessible, transparent, and easy to navigate. As a digital-first, data-driven health plan, we are replacing legacy systems with modern infrastructure to deliver our members the care they need when they need it. If you want to build the future of healthcare, we'd love for you to join us. 
The Role As a Data Engineer on the Strategy team at Angle Health, you will be part of an elite team of problem solvers tackling some of the hardest operational challenges across the business. You will be working closely with Technical Product Managers and other engineers in small teams embedded within functions across the company—from sales to operations to finance and more. In partnership with operational stakeholders, your job will be to quickly understand business workflows, and design and implement technical, data-driven solutions to achieve the intended business outcomes. That may include architecting backend micro-services, building and maintaining robust data pipelines, standing up health metrics and automated alerts, and developing scalable technical solutions to support Angle Health's day-to-day operations. Every day will be a new learning experience in this role and may span discussing architecture with fellow engineers, wrangling large-scale data, coding a custom web app or micro-service, or speaking with customers and executives. This role is ideal for entrepreneurial engineers, applied data scientists, and creative, intellectually curious thinkers that want to solve high-value problems in healthcare. This position may be based in San Francisco, New York City, Salt Lake City, or Remote. We are currently trialing various titles for the same role. Please consider the following posted roles as the same position: Deployed Engineer, Software Engineer, Business Operations, and Data Engineer. Please note these are the same, so if you have already applied for one position, there is no need to reapply for the others What We Value A strong engineering background in computer science, software engineering, data science, mathematics, or similar technical field is required for this role Proficiency in programming languages (e.g. 
Python, SQL, Java, TypeScript/JavaScript, or similar) and data engineering frameworks A highly analytical mindset and an eagerness to build technical solutions to complex business problems High attention to detail and intellectual curiosity—you're not satisfied with surface-level answers. You want to dive into the data, the ""how,"" and the ""why"" because ""the way it's always been done"" is not always the way it should be done Low ego—the outcome matters more than who gets the credit Demonstrated ability to collaborate effectively in teams of technical and non-technical individuals Highly organized with an ability to multitask, problem solve, and balance competing priorities in a rapidly changing environment Because We Value You: Competitive compensation and stock options 100% company paid comprehensive health, vision & dental insurance for you and your dependents Supplemental Life, AD&D and Short Term Disability coverage options Discretionary time off Opportunity for rapid career progression Relocation assistance (if relocation is required) 3 months of paid parental leave and flexible return to work policy (after 10 months of employment) Work-from-home stipend for remote employees Company provided lunch for in-office employees 401(k) account Other benefits coming soon! Backed by a team of world class investors, we are a healthcare startup on a mission to make our health system more effective, accessible, and affordable to everyone. From running large hospitals and health plans to serving on federal healthcare advisory boards to solving the world's hardest problems at Palantir, our team has done it all. As part of this core group at Angle Health, you will have the right balance of support and autonomy to grow both personally and professionally and the opportunity to own large parts of the business and scale with the company. Angle Health is proud to be an Equal Employment Opportunity and Affirmative Action employer. 
We do not discriminate based upon race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. Angle Health is committed to working with and providing reasonable accommodations to applicants with physical and mental disabilities."," Not Applicable "," Full-time "," Information Technology "," Insurance " Data Engineer,United States,Data Engineer - Data Analytics,https://www.linkedin.com/jobs/view/data-engineer-data-analytics-at-costco-it-3518671901?refId=qBNRN7AmIYeeJjjUhFWVqw%3D%3D&trackingId=CBrxDphkveFoHpY8Yk79Gg%3D%3D&position=2&pageNum=19&trk=public_jobs_jserp-result_search-card," Costco IT ",https://www.linkedin.com/company/costco-weareit?trk=public_jobs_topcard-org-name," Dallas, TX "," 1 month ago "," Be among the first 25 applicants ","This is an environment unlike anything in the high-tech world and the secret of Costco’s success is its culture. The value Costco puts on its employees is well documented in articles from a variety of publishers including Bloomberg and Forbes. Our employees and our members come FIRST. Costco is well known for its generosity and community service and has won many awards for its philanthropy. The company joins with its employees to take an active role in volunteering by sponsoring many opportunities to help others. In 2021, Costco contributed over $58 million to organizations such as United Way and Children's Miracle Network Hospitals. Costco IT is responsible for the technical future of Costco Wholesale, the third largest retailer in the world with wholesale operations in fourteen countries. Despite our size and explosive international expansion, we continue to provide a family, employee centric atmosphere in which our employees thrive and succeed. 
As proof, Costco ranks seventh in Forbes “World’s Best Employers”. The Data Engineer - Data Analytics is responsible for the end to end data pipelines to power analytics and data services. This role is focused on data engineering to build and deliver automated data pipelines from a plethora of internal and external data sources. The Data Engineer will partner with product owners, engineering and data platform teams to design, build, test and automate data pipelines that are relied upon across the company as the single source of truth. If you want to be a part of one of the worldwide BEST companies “to work for”, simply apply and let your career be reimagined. ROLE Develops and operationalizes data pipelines to make data available for consumption (BI, Advanced analytics, Services). Works in tandem with data architects and data/BI engineers to design data pipelines and recommends ongoing optimization of data storage, data ingestion, data quality and orchestration. Designs, develops and implements ETL/ELT processes using IICS (Informatica cloud). Uses Azure services such as Azure SQL DW (Synapse), ADLS, Azure Event Hub, Azure Data Factory to improve and speed up delivery of our data products and services. Implements big data and NoSQL solutions by developing scalable data processing platforms to drive high-value insights to the organization. Identifies, designs, and implements internal process improvements: automating manual processes, optimizing data delivery. Identifies ways to improve data reliability, efficiency and quality of data management. Communicates technical concepts to non-technical audiences both in written and verbal form. Performs peer reviews for other data engineer’s work. Required 5+ years’ experience engineering and operationalizing data pipelines with large and complex datasets. 
5+ years of hands-on experience with Informatica PowerCenter 2+ years of hands-on experience with Informatica IICS 3+ years’ experience working with Cloud technologies such as ADLS, Azure Databricks, Spark, Azure Synapse, Cosmos DB and other big data technologies. Extensive experience working with various data sources (SQL, Oracle databases, flat files (csv, delimited), Web API, XML). Advanced SQL skills required. Solid understanding of relational databases and business data; ability to write complex SQL queries against a variety of data sources. 5+ years’ experience with Data Modeling, ETL, and Data Warehousing. Strong understanding of database storage concepts (data lake, relational databases, NoSQL, Graph, data warehousing). Scheduling flexibility to meet the needs of the business including weekends, holidays, and 24/7 on-call responsibilities on a rotational basis. Able to work in a fast-paced agile development environment. Recommended BA/BS in Computer Science, Engineering, or equivalent software/services experience. Azure Certifications Experience implementing data integration techniques such as event / message based integration (Kafka, Azure Event Hub), ETL. Experience with Git / Azure DevOps Experience delivering data solutions through agile software development methodologies. Exposure to the retail industry. Excellent verbal and written communication skills. Experience working with SAP integration tools including BODS. Experience with UC4 Job Scheduler Required Documents Cover Letter Resume California applicants, please click to review the Costco Applicant Privacy Notice. 
Pay Ranges Level 2 - $100,000 - $135,000 Level 3 - $125,000 - $165,000 Level 4 - $155,000 - $195,000 - Potential Bonus and Restricted Stock Unit (RSU) eligible level We offer a comprehensive package of benefits to eligible employees, including paid time off, health benefits — medical/dental/vision/hearing aid/pharmacy/behavioral health/employee assistance, health care reimbursement account, dependent care assistance plan, commuter benefits, short-term disability and long-term disability insurance, AD&D insurance, life insurance, 401(k), stock purchase plan, and the SmartDollar financial wellness program. Costco is committed to a diverse and inclusive workplace. Costco is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or any other legally protected status. If you need assistance and/or a reasonable accommodation due to a disability during the application or the recruiting process, please send a request to IT-Recruiting@costco.com. If hired, you will be required to provide proof of authorization to work in the United States. In some cases, applicants and employees for selected positions will not be sponsored for work authorization, including, but not limited to, H-1B visas."," Mid-Senior level "," Full-time "," Information Technology "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-wurl-3507137094?refId=qBNRN7AmIYeeJjjUhFWVqw%3D%3D&trackingId=nQtfA3MYpNkI05UuHGccXA%3D%3D&position=3&pageNum=19&trk=public_jobs_jserp-result_search-card," Wurl ",https://www.linkedin.com/company/wurl-ctv?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," 141 applicants ","About Wurl, LLC. Wurl is a global streaming network. 
Our B2B services provide streamers, content companies, and advertisers with a powerful, integrated network to distribute and monetize streaming television, reaching hundreds of millions of connected televisions in over 50 countries. This year Wurl was acquired by AppLovin (Nasdaq: APP), an industry-leading mobile marketing ad tech company, bringing together the technology and innovation of the mobile and television industries. With the merger, Wurl employees enjoy the best of both worlds: the dynamic environment of a 160+ person start-up and the stability of a high-growth public tech company. Wurl is a fully-remote company that has been recognized for a second year in a row as a Great Place to Work. Wurl invests in providing a culture that fosters passion, drives excellence, and encourages collaboration to drive innovation. We hire the world’s best from streaming, advertising, and software technology to build a bunch of cool stuff on a fully integrated interface and to continue to help us disrupt the way the world watches television. Data Engineer (Remote): Wurl is seeking a data engineer who will contribute to the addition of new features to our data platform. Wurl collects data from a range of OTT video streaming ecosystem components. We digest it, analyze it, and present it. The data engineer reports to the director of data engineering and interacts with Product, Solution Architects, and various other functions across the organization. Responsibilities: Own and drive highly impactful data services and product initiatives which leverage the rich data that Wurl collects and processes. Design, implement and deliver innovative features that scale with Wurl’s media network and beyond. 
Build highly secure, scalable, and reliable cloud-native ETLs that run 24x7. Collaborate with project managers, product managers, solutions architects, other engineering teams and support for productization of new software services and features. Contribute to enforcing SOC2 compliance across the data platform. Requirements: Bachelor's or Master's degree in Computer Science or Data Science. 3+ years of complete software lifecycle experience. Demonstrable fluency with SQL & Python. Experience designing and securing all components of the ETL. Deep knowledge of AWS cloud services. Experience with SOC2 implementation on a large-scale product. Understanding of CDN and advertising infrastructure. You need to be: Comfortable working in agile environments. Able to adapt quickly to frequent changes. Evangelizing good software development practices and leading from the front. A master of database design principles. A strong communicator and a leader. Team player, product owner and committed to the results. Location: US Remote A Plus if you have: Streaming video delivery experience. Advertising infrastructure experience. AWS Cloud experience. Knowledge of streaming video formats. Understanding of ETL processes. BI toolset experience highly desired. We recognize that not all applicants will meet 100% of the qualifications. Wurl is fiercely passionate and contagiously enthusiastic about what we are building and seeking individuals who are equally passion-driven with diverse backgrounds and educations. While we are seeking those who know our industry, there is no perfect candidate and we want to encourage you to apply even if you do not meet all requirements. 
What We Offer Competitive Salary Strong Medical, Dental and Vision Benefits, 90% paid by Wurl Remote First Policy Discretionary Time Off, with a minimum of 4 weeks of time off 12 US Holidays 401(k) Matching Pre-Tax Savings Plans, HSA & FSA Carrot and Headspace Subscriptions for Family Planning & Mental Wellness OneMedical Subscription for 24/7 Convenient Medical Care Paid Maternity and Parental Leave for All Family Additions Discounted PetPlan Easy at Home Access to Covid Testing with EmpowerDX $1,000 Work From Home Stipend to Set Up Your Home Office Few companies allow you to thrive like you will at Wurl. You will have the opportunity to collaborate with the industry’s brightest minds and most innovative thinkers. You will enjoy ongoing mentorship, team collaboration and you will have a place to grow your career. You will be proud to say you're a part of the company revolutionizing TV. Wurl provides a competitive total compensation package with a pay-for-performance rewards approach. Total compensation at Wurl is based on a number of factors, including market location and may vary depending on job-related knowledge, skills, and experience. 
Depending on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical and other benefits."," Mid-Senior level "," Full-time "," Information Technology, Engineering, and Other "," Online Audio and Video Media, Broadcast Media Production and Distribution, and Technology, Information and Internet " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-lithia-driveway-3529657057?refId=qBNRN7AmIYeeJjjUhFWVqw%3D%3D&trackingId=W0HFuEFDM565QSlRsbX38w%3D%3D&position=4&pageNum=19&trk=public_jobs_jserp-result_search-card," Lithia & Driveway ",https://www.linkedin.com/company/lithia-motors?trk=public_jobs_topcard-org-name," United States "," 1 day ago "," 74 applicants ","Dealership L0105 Lithia Home Office Data Engineer The Data Engineer is responsible for developing and supporting cutting-edge data solutions using the Azure stack (Data Lake, Data Warehouse, Data Factory, Functions), SQL script design/development, and stored procedures. The Data Engineer reports to a lead Data Engineer. 
Responsibilities Participate in the design and implementation of data load processes from disparate data sources into Azure Data Lake and subsequent Azure SQL & SQL Data Warehouse Migrate existing processes and data from our On Premises SQL Server and other environments to Azure Data Lake Explore and learn the latest Azure technologies to provide new capabilities and increase efficiency Ensure all existing data is created in the right way, and that new data is created according to appropriate standards and with proper documentation Read, write, and configure code for end-to-end service telemetry, alerting and self-healing capabilities Strive for continuous improvement of code quality and development practices Work closely with the Lead Data Engineer and other Data Engineers to develop and document solutions for providing data to the enterprise Mentor and teach more junior developers Skills And Qualifications 1+ years of experience as an analytics or data engineering team member working with cross-functional teams 2+ years of SQL Server development or equivalent Azure SQL DB, SQL Data Warehouse, Azure Data Factory a plus Version control using Git or TFS Bachelor’s Degree in Computer Science, Analytics, Systems Eng., Statistics or related field Strong attention to detail and sense of urgency Competencies Does the right thing, takes action and adapts to change Self-motivates, believes in accountability, focuses on results, makes plans and follows through Believes in humility, shares best practices, desires to keep learning, measures performance and adapts to improve results Thrives on a team, stays positive, lives our values Physical Demands The physical demands described here are representative of those that must be met by an employee to successfully perform the essential functions of the job.* Up to 1/3 of time: standing, walking, lifting up to 25 pounds Up to 2/3 of time: sitting, kneeling, reaching, talking, hearing Reasonable accommodations may be made to 
enable individuals to perform the essential functions. NOTE: This is not necessarily an exhaustive list of responsibilities, skills, or working conditions associated with the job. While this list is intended to be an accurate reflection of the current job, the company reserves the right to revise the functions and duties of the job or to require that additional or different tasks be performed. We Offer Best In Class Industry Benefits Competitive pay Medical, Dental and Vision Plans Paid Holidays & PTO Short and Long-Term Disability Paid Life Insurance 401(k) Retirement Plan Employee Stock Purchase Plan Lithia Learning Center Vehicle Purchase Discounts Wellness Programs High School graduate or equivalent, 18 years or older required. Acceptable driving record and a valid driver's license in your state of residence necessary for select roles. We are a drug free workplace. We are committed to equal employment opportunity (regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status). 
We also consider qualified applicants regardless of criminal histories, consistent with legal requirements."," Entry level "," Full-time "," Information Technology "," Retail Motor Vehicles " Data Engineer,United States,Data Engineer/Data Analyst,https://www.linkedin.com/jobs/view/data-engineer-data-analyst-at-diverse-lynx-3531526200?refId=qBNRN7AmIYeeJjjUhFWVqw%3D%3D&trackingId=OsPoB3QMEY7bo0ybkjvkIg%3D%3D&position=5&pageNum=19&trk=public_jobs_jserp-result_search-card," Diverse Lynx ",https://www.linkedin.com/company/diverselynx?trk=public_jobs_topcard-org-name," Seattle, WA "," 3 weeks ago "," Be among the first 25 applicants ","Job Description Role Data Engineer/ Data Analyst Location Seattle Remote until end of COVID Job Description CRE Business Data Architecture team is responsible for driving critical cross-functional projects in support of designing, building, and implementing customer-centric data assets for use by the CRE stakeholders across the firm. The candidate must exhibit a thorough understanding of data structures, data manipulation, metadata, data security, and data quality management. In addition, the candidate should have a good understanding of CB businesses, functions, systems, data environments, and processes that are necessary for the production and utilization of modeling data. We are searching for a Data engineer who is very enthusiastic about data and building high-quality data solutions as part of our team committed to continuous learning and building a diverse & inclusive environment. The candidate must have the ability to collaborate with stakeholders and partners inside and outside of the department to understand their needs and design and develop robust data solutions that meet or exceed customer expectations. The ideal candidate will possess JPM institutional data and system knowledge, technical skills, an understanding of data science, and a commitment to producing high quality results. 
Key Qualities A team player and adaptable to a changing environment Strong communication and presentation skills, leadership skills, ability to drive stakeholders to determine the requirements. Excellent written and oral communication skills with the ability to present information in differing degrees of detail and form depending on the audience Excellent organizational skills, time management skills, sharp analytical and problem-solving skills Ability to multi-task and perform in a fast-paced, demanding environment with tight deadlines Ability to liaise between lines of business and the development team, while working with a wide range of stakeholders and collaborating with your team. Ability to work collaboratively in teams and develop meaningful relationships to achieve common goals Experience working both independently and in a virtual, collaborative environment is essential. Comfortable working in a matrix management model with stakeholders and virtual teams across global locations Ability to navigate and understand technical documentation and/or system documentation Strong knowledge of back-end and front-end application components Required Skills And Qualifications 3 to 5 years of experience within a financial services organization, exposure to any Regulatory or compliance product is a Plus. 3 to 5 years of experience in gathering business requirements, understanding and articulating them in forums where needed, interaction with Stakeholders and Business owners is a must have 5+ years experience working in AWS 10+ years of industry experience, including deep technical experience with big data platforms and data management organizations doing hands-on Data Engineering or data-related Software Engineering. Strong design, coding, debugging and analytical skills, especially across the big data ecosystem (e.g. 
Cloudera Hadoop) Excellent command of the SQL language Strong knowledge of data structures, algorithms and big data tools (Spark, Hive, HDFS, etc.). Experience in architecture, data management, application ownership, AWS, Qlik Sense and other IT role experience a plus Plug-and-Play Area Product Owner who can create JIRAs, groom them with tech partners and supervise testing. Prior project management and test management experience would be helpful. Must have ability to deliver high-quality results under tight deadlines and be comfortable manipulating and summarizing large quantities of data Diverse Lynx LLC is an Equal Employment Opportunity employer. All qualified applicants will receive due consideration for employment without any discrimination. All applicants will be evaluated solely on the basis of their ability, competence and their proven capability to perform the functions outlined in the corresponding role. We promote and support a diverse workforce across all levels in the company."," Entry level "," Contract "," Information Technology "," Software Development " Data Engineer,United States,Data Visualization Engineer,https://www.linkedin.com/jobs/view/data-visualization-engineer-at-humana-3519388737?refId=qBNRN7AmIYeeJjjUhFWVqw%3D%3D&trackingId=fY5xxumBg6snGsqx4wp8PQ%3D%3D&position=6&pageNum=19&trk=public_jobs_jserp-result_search-card," Humana ",https://www.linkedin.com/company/humana?trk=public_jobs_topcard-org-name," Louisville, KY "," 5 days ago "," 164 applicants ","R-301601 Description The Data Visualization Engineer coordinates with key analytics and business partners to create BI Tools that are leveraged for clinical population exploration and resource allocation in the Patient Safety space. Responsibilities The Data Visualization Engineer leverages advanced knowledge of data, clinical measures, and BI tools to create easily interpreted data driven assets to drive performance tracking and evaluation. 
Understands and analyzes complex data, articulates to various units within the company at the appropriate level, impacts the organization through user experience driven tool sets which have a potentially sizeable dollar impact on the business. Understands department, segment, and organizational strategy and operating objectives. Makes decisions regarding own work methods, occasionally in ambiguous situations, and requires minimal direction requesting guidance where needed. Follows established guidelines/procedures. Required Qualifications Bachelor's degree 2-5 years of BI Tool development experience Ability to use appropriate problem solving, research and analysis tools Experience with BI tools like Power BI and Tableau Strong attention to detail Comprehensive knowledge of all Microsoft Office applications Excellent written and verbal communication Preferred Qualifications Advanced Degree Six Sigma certification Preferred technologies such as SAS, SQL, R, Python, QlikView, Tableau, or similar tools Scheduled Weekly Hours 40 "," Not Applicable "," Full-time "," Information Technology "," Insurance, Wellness and Fitness Services, and Hospitals and Health Care "
Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-syndicatebleu-3499027319?refId=qBNRN7AmIYeeJjjUhFWVqw%3D%3D&trackingId=Z8DS36b%2FImcPcxDo70JarA%3D%3D&position=8&pageNum=19&trk=public_jobs_jserp-result_search-card," Syndicatebleu ",https://www.linkedin.com/company/syndicatebleu?trk=public_jobs_topcard-org-name," West Palm Beach, FL "," 2 weeks ago "," 61 applicants ","Data Engineer - West Palm Beach, FL This is a hybrid role; requiring a mix of remote and onsite work in the West Palm Beach area. We are looking to hire a Data Engineer to join the team of a growing manufacturing and distribution company. This opportunity offers excellent benefits, immense growth, and a thriving culture.
Responsibilities and Duties: You are comfortable interpreting data and statistics in pursuit of constant improvements. You have prior experience successfully managing a small team of remote data engineers and providing technical guidance and oversight. This role will report directly to Executive Management on a consistent basis and will troubleshoot and report network performance issues as needed. You will provide ongoing support, monitoring, and maintenance of deployed products. Provide bug scrubs and code recommendations in pursuit of constant improvements and efficiencies. Actively participates in the engineering community, staying up to date on new data technologies and best practices and shares insights with others in the organization. Requirements: Python experience is required. Experience with Django is a plus. Minimum 5-10 years’ experience in software development and data engineering You’ve worked with data stores and/or data warehouses, such as AWS Redshift Experience customizing analytical and visualization tools for end-user consumption and display Experience with ecommerce is preferred. Benefits & Perks: We offer a full range of benefits including 401k, health, and life insurance, disability coverage, vacation, unlimited PTO, paid holidays, and more. Family Culture: We are a family owned company and have been for three generations. We treat our employees like family too! Diversity: You’ll find an environment packed with different cultures, personalities, and backgrounds because we know it takes many kinds of people to make us successful. Work-Life Balance: We believe promoting a healthy work/life balance is one of the keys to success at home and work. 
About Us: We are a fast-growing company with a great family-oriented culture."," Mid-Senior level "," Full-time "," Engineering "," Manufacturing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-eliassen-group-3478616149?refId=qBNRN7AmIYeeJjjUhFWVqw%3D%3D&trackingId=RcxTHffsOeCPgJKQVEXg3A%3D%3D&position=9&pageNum=19&trk=public_jobs_jserp-result_search-card," Eliassen Group ",https://www.linkedin.com/company/eliassen-group?trk=public_jobs_topcard-org-name," Durham, NC "," 4 weeks ago "," Over 200 applicants ","Hybrid | Durham, NC | Westlake, TX | Merrimack, NH | Smithfield, RI - 1 week per month onsite required We have a wonderful opportunity for a talented Data Engineer with our client, a leader in the financial services industry. In this role, you will set the technical direction for the team and work closely with the Data Architect to create secure, scalable, and resilient cloud-based services. We can facilitate w2 and corp-to-corp consultants. For our w2 consultants, we offer a great benefits package that includes Medical, Dental, and Vision benefits, 401k with company matching, and life insurance. 
Requirements of the Data Engineer: Strong experience with relational database technologies (Oracle SQL & PL/SQL or similar RDBMS), preferably Snowflake or other cloud data platforms Proficiency in any programming language such as Python or Java Expertise in all aspects of data movement technologies (ETL/ELT) and experience with schedulers Practical experience delivering and supporting cloud strategies including migrating legacy products and implementing SaaS integrations Proven experience understanding multi-functional enterprise data, navigating between business analytic needs and data Able to work hand-in-hand with other members of technical teams to execute on product roadmaps to enable new insights with our data Experience crafting and implementing operational data stores, as well as data lakes in production environments Experience with DevOps, CI/CD, and deploying pipelines Able to work with geographically distributed teams Please be advised- If anyone reaches out to you about an open position connected with Eliassen Group, please confirm that they have an Eliassen.com email address and never provide personal or financial information to anyone who is not clearly associated with Eliassen Group. If you have any indication of fraudulent activity, please contact InfoSec@eliassen.com. 
Job ID: 375374"," Mid-Senior level "," Contract "," Information Technology and Engineering "," IT Services and IT Consulting, Staffing and Recruiting, and Financial Services " Data Engineer,United States,Data Engineer TS/SCI,https://www.linkedin.com/jobs/view/data-engineer-ts-sci-at-cyberjin-3509624996?refId=qBNRN7AmIYeeJjjUhFWVqw%3D%3D&trackingId=ieY2Qz0AEOUl6EL2iw%2BBgA%3D%3D&position=10&pageNum=19&trk=public_jobs_jserp-result_search-card," Cyberjin ",https://www.linkedin.com/company/cyberjin?trk=public_jobs_topcard-org-name," Honolulu, HI "," 2 weeks ago "," Be among the first 25 applicants ","looking for a talented Data Engineer to support the acquisition of mission critical and mission support data sets. The preferred candidate will have a background in supporting cyber and/or network related missions within the military spaces, as either a developer, analyst or engineer. Work is mostly on customer site in San Antonio, TX with some hybrid support. Essential Job Responsibilities The ideal candidate will have worked with big data systems, complex structured and unstructured data sets, and have supported government data acquisition, analysis, and/or sharing efforts in the past. To excel in the position, the candidate shall have a strong attention to detail, be able to understand technical complexities, and have the willingness to learn and adapt to the situation. The candidate will work both independently and as part of a large team to accomplish client objectives. Minimum Qualifications Security Clearance - Must have a current TS/SCI level security clearance and therefore all candidates must be a U.S. Citizen. 5 years experience as a developer, analyst, or engineer with a Bachelors in related field; OR 3 years relevant experience with Masters in related field; OR High School Diploma or equivalent and 9 years relevant experience. Experience with programming languages such as Python and Java. 
Proficiency with acquisition and understanding of network data and the associated metadata. Fluency with data extraction, translation, and loading including data prep and labeling to enable data analytics. Experience with Kibana and Elasticsearch. Familiarity with various log formats such as JSON, XML, and others. Experience with data flow, management, and storage solutions (i.e. Kafka, NiFi, and AWS S3 and SQS solutions). Ability to decompose technical problems and troubleshoot system and dataflow issues. Must be able to work on customer site most of the time. Preferred Requirements Experience with NOSQL databases such as Accumulo desired Prior Experience supporting cyber and/or network security operations within a large enterprise, as either an analyst, engineer, architect, or developer. Powered by JazzHR HVUIlxEl6Z"," Mid-Senior level "," Full-time "," Information Technology "," Internet Publishing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-bristol-myers-squibb-3490931296?refId=qBNRN7AmIYeeJjjUhFWVqw%3D%3D&trackingId=snnwc%2FD66EibaQFkuJ8Ckw%3D%3D&position=11&pageNum=19&trk=public_jobs_jserp-result_search-card," Bristol Myers Squibb ",https://www.linkedin.com/company/bristol-myers-squibb?trk=public_jobs_topcard-org-name," San Diego, CA "," 3 weeks ago "," Be among the first 25 applicants ","Working with Us Challenging. Meaningful. Life-changing. Those aren’t words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You’ll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams rich in diversity. Take your career farther than you thought possible. 
Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more careers.bms.com/working-with-us Work at the interface of pharma, genomics, and data engineering. The candidate will significantly contribute to the development of modern data services for BMS’s research scientists. Job Description Bristol Myers Squibb seeks a highly motivated Data Engineer to enable data integration efforts in support of data science and computational research efforts. The Data Engineer will be responsible for executing an ambitious digital strategy to support BMS’s predictive science capabilities in R&ED (Research & Early Development). The successful candidate will partner closely with computational researcher teams, IT leadership, and various technical functions to design and deliver data solutions that streamline access to computing & data, and help scientists derive insight and value from their research. Job Functions The role requires someone who can seamlessly mesh technical knowledge to help navigate R&D cloud, CoLo, and on-premise computing needs, including planning, infrastructure design, maintenance, and support. The role will lead the development of infrastructure that enables interoperability and comparability of data sets derived from different technologies and biological systems in the context of integrative data analysis. The candidate will create and maintain optimal data pipeline architectures that enable scientific workflow and collaborate with interdisciplinary teams of data curators, software engineers, data scientists and computational biologists as we test new hypotheses through the novel integration of emerging research data types. 
The work will combine careful resource planning and project management with hands-on data manipulation and implementation of data integration workflows. Responsibilities Include, But Are Not Limited To, The Following Designing and developing an ETL infrastructure to load research data from multiple source systems using languages and frameworks such as Python, R, Docker, Airflow, Glue, etc. Leading the design and implementation of data services solutions that may include relational, NoSQL and graph database components. Collaborating with project managers, solution architects, infrastructure teams, and external vendors as needed to support successful delivery of technical solutions. Experiences And Education Bachelor's Degree with 8+ years of academic / industry experience or master's degree with 6+ years of academic / industry experience or PhD with 3+ years of academic / industry experience in an engineering or biology field. Demonstrated high proficiency with current software engineering methodologies, such as Agile SDLC approaches, distributed source code control, project management, issue tracking, and CI/CD tools and processes. Excellent skills in an object-oriented programming language such as Python or R, and proficiency in SQL High degree of proficiency in cloud computing Solid understanding of container strategies such as Docker, Fargate, ECS and ECR. Excellent skills and deep knowledge of databases such as Postgres, Elasticsearch, Redshift, and Aurora, including distributed database design, SQL vs. NoSQL, and database optimizations Demonstrated high proficiency with current software engineering methodologies, such as Agile SDLC (Software Development Life Cycle) approaches, distributed source code control, project management, issue tracking, and CI/CD tools and processes. Strong technical communication skills. If you come across a role that intrigues you but doesn’t perfectly line up with your resume, we encourage you to apply anyway. 
You could be one step away from work that will transform your life and career. Uniquely Interesting Work, Life-changing Careers With a single vision as inspiring as “Transforming patients’ lives through science™ ”, every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in an inclusive culture, promoting diversity in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues. On-site Protocol Physical presence at the BMS worksite or physical presence in the field is a necessary job function of this role, which the Company deems critical to collaboration, innovation, productivity, employee well-being and engagement, and it enhances the Company culture. COVID-19 Information To protect the safety of our workforce, customers, patients and communities, the policy of the Company requires all employees and workers in the U.S. and Puerto Rico to be fully vaccinated against COVID-19, unless they have received an exception based on an approved request for a medical or religious reasonable accommodation. Therefore, all BMS applicants seeking a role located in the U.S. and Puerto Rico must confirm that they have already received or are willing to receive the full COVID-19 vaccination by their start date as a qualification of the role and condition of employment. This requirement is subject to state and local law restrictions and may not be applicable to employees working in certain jurisdictions such as Montana. This requirement is also subject to discussions with collective bargaining representatives in the U.S. BMS is dedicated to ensuring that people with disabilities can perform complex functions through a transparent recruitment process, reasonable workplace adjustments and ongoing support in their roles. 
Applicants can request an accommodation prior to accepting a job offer. If you require reasonable accommodation in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement. BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area. Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations. "," Entry level "," Full-time "," Information Technology "," Pharmaceutical Manufacturing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-healthedge-3525655926?refId=qBNRN7AmIYeeJjjUhFWVqw%3D%3D&trackingId=xU0R6uKDot7mB6EJIC0P%2BA%3D%3D&position=12&pageNum=19&trk=public_jobs_jserp-result_search-card," HealthEdge ",https://www.linkedin.com/company/healthedge?trk=public_jobs_topcard-org-name," Boston, MA "," 1 day ago "," 74 applicants ","Overview Wellframe helps healthcare organizations support every aspect of health beyond the four walls of care delivery. We provide care transformation services, a patented engagement platform, clinical programs that support the clinical and social determinants of health, and rigorous measurement. Wellframe translates evidence-based, peer-reviewed guidelines and literature into an interactive daily checklist delivered to patients through the Wellframe mobile app. As patients engage with the Wellframe app, their data is shared in real time with their care team through the care team dashboard, which utilizes advanced algorithms to generate early intervention alerts. With secure two-way messaging, Wellframe facilitates long-term, trusted relationships between patients and care teams. 
Role Overview In this role, you will be working in the Data Engineering team that builds ETL pipelines for retrieving and integrating customer data with our internal systems, moving data between our internal systems, and delivering data from our applications to our customers. You will work closely with our Customer Success Organization to get deliverables implemented on time that match customer specifications, as well as work with other teams in Data & Analytics, Product and Engineering to design and build their data pipelines. Areas of Responsibility: Implement ETL pipelines for processing customer and member data Work in a cross functional environment collaboratively with customer success organization to translate requirements into technical design Help monitor and maintain operational functionality of data pipelines and other jobs that fall under the ETL domain Debug and fix production issues Education, Experience, & Skills Required: Bachelor’s degree in Computer Science, Computer Engineering, or a closely related field of study; Master's degree preferred. 2+ years development experience building ETL pipelines Proficient in Python and SQL; additional scripting languages like java a plus Experienced with cloud technologies such as GCP Excellent communicator, comfortable explaining technical problems and plans in person and in writing Passionate about leveraging their technical skills to help improve patient care Works effectively in fast-paced, agile startup environment, and finds fulfillment delivering innovative solutions Bonus Skills: Experience with Apache Airflow Familiarity with other ETL technologies or tools Familiarity with Docker / Kubernetes Experience in a startup environment Experience in healthcare tech industry Experience in a remote or a hybrid environment This posting is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee and any percentages listed are approximate. 
Duties, responsibilities and activities may change or new ones may be assigned at any time with or without notice. Wellframe, Inc. is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status or any other characteristic protected by local, state, or federal laws, rules, or regulations."," Entry level "," Full-time "," Information Technology "," IT Services and IT Consulting, Software Development, and Hospitals and Health Care " Data Engineer,United States,Junior Data Engineer,https://www.linkedin.com/jobs/view/junior-data-engineer-at-novetta-3509836365?refId=qBNRN7AmIYeeJjjUhFWVqw%3D%3D&trackingId=3UqdulBBhV6Ikwi1ENN7Yw%3D%3D&position=13&pageNum=19&trk=public_jobs_jserp-result_search-card," Novetta ",https://www.linkedin.com/company/novetta?trk=public_jobs_topcard-org-name," Fort Belvoir, VA "," 1 week ago "," Be among the first 25 applicants ","Accenture Federal Services delivers a range of innovative, tech-enabled services for the U.S. Federal Government to address the complex, sensitive challenges of national security and intelligence missions. Refer a qualified candidate and earn up to $20K. Learn more here > Job Description: Accenture Federal Services is seeking a Junior Data Engineer to assist with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models. 
Responsibilities include: Develop stages of a distributed parallel data processing pipelines, which includes but is not limited to processes such as configuring data connections, data parsing, data normalization, data mapping and modeling, data enrichment, and integration with data analytics Operate and maintain the data processing pipelines in accordance with the availability requirements of the platform Follow agile methodologies when applied to the data engineering part of the system development life cycle (SDLC) Update technical documentation such as system design documentation (SDD); standard operating procedures (SOPs); and tactics, techniques, and procedures (TTPs); and training material Here's what you need: Bachelor's degree in a discipline covered under Science, Technology, Engineering or Mathematics (STEM) with 2 years' experience as a software engineer or a data engineer 2 years' experience with Python, Java, or other programming languages 2 years' experience applying agile methodologies to the software development life cycle (SDLC) 2 years' experience with Git repositories and CI/CD pipelines. 
Examples include but are not limited to GitHub and GitLab 2 years' experience with distributed parallel streaming and batch data processing pipelines 2 years' experience integrating with data SDKs / APIs and data analytics SDKs / APIs Bonus points if you have: 2 years' experience as a data engineer 1 year of experience developing, operating, and maintaining data processing pipelines in a classified environment 1 year of experience data mapping, modeling, enriching, and correlating classified data 1 year of experience with Python / PySpark 1 year of experience with Java / Java interface to Spark 1 year of experience with Palantir Foundry Security Clearance: An active TS/SCI clearance is required to start Compensation for roles at Accenture Federal Services varies depending on a wide array of factors including but not limited to the specific office location, role, skill set and level of experience. As required by local law, Accenture Federal Services provides a reasonable range of compensation for roles that may be hired in California, Colorado, New York City or Washington as set forth below and information on benefits offered is here.    Role Location: Range of Starting Pay for role  California: $73,900 - $115,100 Colorado: $73,900 - $99,400 New York City: $85,500 - $115,100 Washington: $78,600 - $105,900 Eligibility Requirements US Citizenship required. Important information Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture Federal Services Accenture Federal Services is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities. An active security clearance or the ability to obtain one may be required for this role. Accenture Federal Services is committed to providing veteran employment opportunities to our service men and women. 
Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. What We Believe We have an unwavering commitment to diversity with the aim that every one of our people has a full sense of belonging within our organization. As a business imperative, every person at Accenture has the responsibility to create and sustain an inclusive environment. Inclusion and diversity are fundamental to our culture and core values. Our rich diversity makes us more innovative and more creative, which helps us better serve our clients and our communities. Read more here Equal Employment Opportunity Statement Accenture is an Equal Opportunity Employer. We believe that no one should be discriminated against because of their differences, such as age, disability, ethnicity, gender, gender identity and expression, religion or sexual orientation. All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Accenture is committed to providing veteran employment opportunities to our service men and women. For details, view a copy of the Accenture Equal Opportunity and Affirmative Action Policy Statement. Requesting An Accommodation Accenture is committed to providing equal employment opportunities for persons with disabilities or religious observances, including reasonable accommodation when needed. If you are hired by Accenture and require accommodation to perform the essential functions of your role, you will be asked to participate in our reasonable accommodation process. Accommodations made to facilitate the recruiting process are not a guarantee of future or continued accommodations once hired. 
If you would like to be considered for employment opportunities with Accenture and have accommodation needs for a disability or religious observance, please call us toll free at 1 (877) 889-9009, send us an email or speak with your recruiter. Other Employment Statements The Company will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay or the pay of another employee or applicant. Additionally, employees who have access to the compensation information of other employees or applicants as a part of their essential job functions cannot disclose the pay of other employees or applicants to individuals who do not otherwise have access to compensation information, unless the disclosure is (a) in response to a formal complaint or charge, (b) in furtherance of an investigation, proceeding, hearing, or action, including an investigation conducted by the employer, or (c) consistent with the Company's legal duty to furnish information."," Mid-Senior level "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,Junior Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-remote-at-firstenergy-3520281596?refId=qBNRN7AmIYeeJjjUhFWVqw%3D%3D&trackingId=VotjyEUey25ycjS4pt8qjQ%3D%3D&position=14&pageNum=19&trk=public_jobs_jserp-result_search-card," Novetta ",https://www.linkedin.com/company/novetta?trk=public_jobs_topcard-org-name," Fort Belvoir, VA "," 1 week ago "," Be among the first 25 applicants "," Accenture Federal Services delivers a range of innovative, tech-enabled services for the U.S. Federal Government to address the complex, sensitive challenges of national security and intelligence missions.Refer a qualified candidate and earn up to $20K. 
Learn more here >Job Description: Accenture Federal Services is seeking a Junior Data Engineer to assist with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.Responsibilities include:Develop stages of a distributed parallel data processing pipelines, which includes but is not limited to processes such as configuring data connections, data parsing, data normalization, data mapping and modeling, data enrichment, and integration with data analyticsOperate and maintain the data processing pipelines in accordance with the availability requirements of the platformFollow agile methodologies when applied to the data engineering part of the system development life cycle (SDLC)Update technical documentation such as system design documentation (SDD); standard operating procedures (SOPs); and tactics, techniques, and procedures (TTPs); and training materialHere's what you need:Bachelor's degree in a discipline covered under Science, Technology, Engineering or Mathematics (STEM) with 2 years' experience as a software engineer or a data engineer2 years' experience with Python, Java, or other programming languages2 years' experience applying agile methodologies to the software development life cycle (SDLC)2 years' experience with Git repositories and CI/CD pipelines. 
Examples include but are not limited to GitHub and GitLab2 years' experience with distributed parallel streaming and batch data processing pipelines2 years' experience integrating with data SDKs / APIs and data analytics SDKs / APIsBonus points if you have:2 years' experience as a data engineer1 year of experience developing, operating, and maintaining data processing pipelines in a classified environment1 year of experience data mapping, modeling, enriching, and correlating classified data1 year of experience with Python / PySpark1 year of experience with Java / Java interface to Spark1 year of experience with Palantir FoundrySecurity Clearance:An active TS/SCI clearance is required to startCompensation for roles at Accenture Federal Services varies depending on a wide array of factors including but not limited to the specific office location, role, skill set and level of experience. As required by local law, Accenture Federal Services provides a reasonable range of compensation for roles that may be hired in California, Colorado, New York City or Washington as set forth below and information on benefits offered is here.   Role Location: Range of Starting Pay for role California: $73,900 - $115,100Colorado: $73,900 - $99,400New York City: $85,500 - $115,100Washington: $78,600 - $105,900Eligibility RequirementsUS Citizenship required.Important informationApplicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture Federal ServicesAccenture Federal Services is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities. An active security clearance or the ability to obtain one may be required for this role. Accenture Federal Services is committed to providing veteran employment opportunities to our service men and women. 
Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. What We Believe: We have an unwavering commitment to diversity with the aim that every one of our people has a full sense of belonging within our organization. As a business imperative, every person at Accenture has the responsibility to create and sustain an inclusive environment. Inclusion and diversity are fundamental to our culture and core values. Our rich diversity makes us more innovative and more creative, which helps us better serve our clients and our communities. Read more here. Equal Employment Opportunity Statement: Accenture is an Equal Opportunity Employer. We believe that no one should be discriminated against because of their differences, such as age, disability, ethnicity, gender, gender identity and expression, religion or sexual orientation. All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Accenture is committed to providing veteran employment opportunities to our service men and women. For details, view a copy of the Accenture Equal Opportunity and Affirmative Action Policy Statement. Requesting An Accommodation: Accenture is committed to providing equal employment opportunities for persons with disabilities or religious observances, including reasonable accommodation when needed. If you are hired by Accenture and require accommodation to perform the essential functions of your role, you will be asked to participate in our reasonable accommodation process. 
Accommodations made to facilitate the recruiting process are not a guarantee of future or continued accommodations once hired. If you would like to be considered for employment opportunities with Accenture and have accommodation needs for a disability or religious observance, please call us toll free at 1 (877) 889-9009, send us an email or speak with your recruiter. Other Employment Statements: The Company will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay or the pay of another employee or applicant. Additionally, employees who have access to the compensation information of other employees or applicants as a part of their essential job functions cannot disclose the pay of other employees or applicants to individuals who do not otherwise have access to compensation information, unless the disclosure is (a) in response to a formal complaint or charge, (b) in furtherance of an investigation, proceeding, hearing, or action, including an investigation conducted by the employer, or (c) consistent with the Company's legal duty to furnish information. "," Mid-Senior level "," Full-time "," Information Technology "," Technology, Information and Internet " Data Engineer,United States,DATA ENGINEER (i360),https://www.linkedin.com/jobs/view/data-engineer-i360-at-i360-3528035027?refId=qBNRN7AmIYeeJjjUhFWVqw%3D%3D&trackingId=wEHT9IL0Z0cLvlOXMAKSMQ%3D%3D&position=15&pageNum=19&trk=public_jobs_jserp-result_search-card," i360 ",https://www.linkedin.com/company/i360-llc?trk=public_jobs_topcard-org-name," Washington, DC "," 2 days ago "," Be among the first 25 applicants ","Description i360, where The Data is the Difference, is the leading data and technology provider for those advancing a free and prosperous society through the campaign, nonprofit, and advocacy communities. 
i360 is a dynamic workplace sitting at the intersection of public policy, technology, and business, and is seeking team members who are excited about building the next generation of political technology. i360 is seeking a mid-level Database Engineer to join its data engineering team. With your technical expertise, you will design, implement, and improve processes, procedures, and automation for all database-centric areas. You will tune our relational database systems and NoSQL systems for performance and reliability. You are responsible for building tools and scripts to monitor, troubleshoot and automate our systems. You propose test plans and interface with other teams, developers, and application owners to arrive at optimal solutions. Successful candidates will solve problems unique in scale and concept in the pursuit of new and original features. So, bring your ingenious mind, great team spirit and excellent communication skills to this great opportunity at i360. What You Will Do In Your Role: Work with senior engineers and architects on the team to build and test SQL and NoSQL database solutions. Proficient with programming languages such as Python, Java, Spark, or Scala. Proficiency with relational database concepts & query optimization. Experience working with cloud platforms such as AWS or Azure. Experience with building data pipelines and implementing ETL processes. Perform code reviews and QA data imported by various processes. Investigate, analyze, correct and document reported data defects. Create and maintain technical specification documentation. Communicate effectively with stakeholders and collaborate with cross-functional teams. What You Will Need: BA/BS degree in Computer Science, Computer Engineering, Statistics, or other Engineering disciplines. 4+ years of experience with relational database(s). 4+ years of coding experience in Python and/or Java. What Will Put You Ahead? 
Knowledgeable on NoSQL, Columnar and/or Graph databases such as Elasticsearch, neo4j, Redshift, & Snowflake. Knowledgeable on architecting, developing, and maintaining solutions in AWS. Working knowledge of UNIX/Linux platforms. Familiarity with streaming services & event buses. Our goal is for each employee, and their families, to live fulfilling and healthy lives. We provide essential resources and support to build and maintain physical, financial, and emotional strength - focusing on overall wellbeing so you can focus on what matters most. Our benefits plan includes - medical, dental, vision, flexible spending and health savings accounts, life insurance, AD&D, disability, retirement, paid vacation/time off, educational assistance, and may also include infertility assistance, paid parental leave and adoption assistance. Specific eligibility criteria are set by the applicable Summary Plan Description, policy or guideline and benefits may vary by geographic region. If you have questions on what benefits apply to you, please speak to your recruiter. At Koch companies, we are entrepreneurs. This means we openly challenge the status quo, find new ways to create value and get rewarded for our individual contributions. Any compensation range provided for a role is an estimate determined by available market data. The actual amount may be higher or lower than the range provided considering each candidate’s knowledge, skills, abilities, and geographic location. If you have questions, please speak to your recruiter about the flexibility and detail of our compensation philosophy. Equal Opportunity Employer, including disability and protected veteran status. Except where prohibited by state law, all offers of employment are conditioned upon successfully passing a drug test. This employer uses E-Verify. 
Please visit the following website for additional information: www.kochcareers.com/doc/Everify.pdf "," Entry level "," Full-time "," Information Technology and Engineering "," IT Services and IT Consulting " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-bibleproject-3485129980?refId=qBNRN7AmIYeeJjjUhFWVqw%3D%3D&trackingId=dE5LL9tm2PVytXL237ecDg%3D%3D&position=16&pageNum=19&trk=public_jobs_jserp-result_search-card," BibleProject ",https://www.linkedin.com/company/the-bible-project?trk=public_jobs_topcard-org-name," United States "," 1 week ago "," Over 200 applicants ","Data Engineer About this Role As our Data Engineer, you will be responsible for expanding and optimizing our data and data pipeline architecture. You will be our in-house data pipeline builder and data wrangler who enjoys standardizing data systems and building them from the ground up. You will also support our software developers, operations team, and IT, on data initiatives and will ensure optimal data delivery architecture is consistent across ongoing projects. To be successful, you must be self-directed, excited by improving and evolving data architecture, and comfortable supporting the data needs of multiple teams, systems and products. In all your work, you will bring your skills and experience to help further our mission of helping people experience the Bible as a unified story that leads to Jesus. What You’ll Be Doing Create and maintain optimal, automated data pipeline architecture that meets business requirements. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, including using SQL, Big Query, and other data technologies. Lead the creation of a standardized process for instrumenting components and collecting data across multiple products in our ecosystem. 
Collaborate regularly with stakeholders, primarily in our Insights and Operations teams to assist with data-related technical issues, ad-hoc requests, and ongoing support of their data infrastructure needs. Create data tools and analytics reports and assist team members in building and optimizing our products, while maintaining industry standard privacy practices. What We Are Looking For: 3+ years of experience in a data and analytics role with either a Computer Science, Statistics, Informatics, or Information System background. Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases including Postgres, Cassandra, and NoSQL. Experience building and optimizing ‘big data’ data pipelines, architectures and data sets. Build processes supporting data transformation, data structures, metadata, dependency and workload management. Experience with data tools like Looker, Tableau, Hadoop, Spark, or Kafka, as well as Google cloud and marketing services like BigQuery, Cloud Functions, Google Tag Manager, and Google Analytics. Experience with data pipeline and workflow management tools: Fivetran, Airbyte, etc. Experience with object-oriented/object function scripting languages like Python. About BibleProject: Portland, Oregon; founded in 2014. BibleProject is a nonprofit, crowdfunded organization that produces 100% free Bible videos, podcasts, blogs, classes, and educational Bible resources. Our mission is to help people experience the Bible as a unified story that leads to Jesus. We are rapidly growing in the area of multimedia technology. What began with two animated videos now encompasses multiple platforms and products—including over 160 videos. Our website and app serve as connection hubs to our ever-growing library of resources. Classroom, our online learning platform, offers accessible, graduate-level Bible classes. 
Learn.Bible helps leaders build Bible curriculum for their specific ministries. These, and all of our supporting products, continue to be completely free to audiences around the world, thanks to the ongoing generosity of our patrons. Location: This role must be performed within the United States. Occasional travel to Portland, Oregon. Beginning: March 2023 Reporting to: Jon Horton, Platform Engineering Manager Compensation & Benefits: Minimum annual salary for this role is $85,000. Competitive salary that scales with experience directly related to this role. Top tier in non-profit market but will not match top technology companies Medical, dental, vision, life, short and long term disability insurance for employee and family with premiums covered 100% by BibleProject 401(k) with 4% employer match 160 hours of PTO annually* 80 hours of vacation time available on your first day. Paid personal time that accrues weekly (80 hours accrued annually) Paid sabbatical after five years of employment Paid parental leave Paid learning stipend A culture focused on belonging and thriving BibleProject is an equal opportunity employer"," Associate "," Full-time "," Information Technology "," Software Development and Non-profit Organizations " Data Engineer,United States,Data Engineer III,https://www.linkedin.com/jobs/view/data-engineer-iii-at-mindsource-3504800593?refId=qBNRN7AmIYeeJjjUhFWVqw%3D%3D&trackingId=oZNuMYA9KBSmDQ%2Bkr7fNdw%3D%3D&position=17&pageNum=19&trk=public_jobs_jserp-result_search-card," MindSource ",https://www.linkedin.com/company/mindsource?trk=public_jobs_topcard-org-name," Seattle, WA "," 2 weeks ago "," 62 applicants ","Duration: 3 months; will likely extend Location: REMOTE Team is in charge of improving services for internal customers. Maintenance for data pipelines, adding tests. Qualified TAWs will help with upgrades and implementing new strategies. 
Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, Information Systems or a relevant degree is required. 5+ years of professional work experience designing and implementing data pipelines in on-prem and cloud environments (S3, EKS). Experience with SQL/relational databases. Experience with manipulating structured and unstructured data. Experience with distributed data systems such as Hadoop and related technologies (Spark, Trino, etc.). Background in both programming languages (Python & Scala). Experience working with databases that power APIs for front-end applications. Experience with modern schedulers (Airflow). Responsibilities: Support, design, develop, test, deploy, maintain and improve data pipelines. Design and develop data processing techniques: automating manual processes, data delivery, data validation, data quality and integrity. Communicate effectively with customers/team members & help with site-up challenges. Must-have skills: Python, SQL, Spark, Scala, AWS ecosystem (S3, EKS), Airflow, Kafka, catalog store (Hive or similar). Nice-to-have skills: Iceberg"," Mid-Senior level "," Contract "," Information Technology "," Information Technology & Services " Data Engineer,United States,Data Engineer III,https://www.linkedin.com/jobs/view/data-engineer-at-msd-3531396338?refId=qBNRN7AmIYeeJjjUhFWVqw%3D%3D&trackingId=mGKwhCIiCWzUZeFXIiSlFA%3D%3D&position=18&pageNum=19&trk=public_jobs_jserp-result_search-card," MindSource ",https://www.linkedin.com/company/mindsource?trk=public_jobs_topcard-org-name," Seattle, WA "," 2 weeks ago "," 62 applicants "," Duration: 3 months; will likely extend. Location: REMOTE. Team is in charge of improving services for internal customers. Maintenance for data pipelines, adding tests. 
Qualified TAWs will help with upgrades and implementing new strategies. Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, Information Systems or a relevant degree is required. 5+ years of professional work experience designing and implementing data pipelines in on-prem and cloud environments (S3, EKS). Experience with SQL/relational databases. Experience with manipulating structured and unstructured data. Experience with distributed data systems such as Hadoop and related technologies (Spark, Trino, etc.). Background in both programming languages (Python & Scala). Experience working with databases that power APIs for front-end applications. Experience with modern schedulers (Airflow). Responsibilities: Support, design, develop, test, deploy, maintain and improve data pipelines. Design and develop data processing techniques: automating manual processes, data delivery, data validation, data quality and integrity. Communicate effectively with customers/team members & help with site-up challenges. Must-have skills: Python, SQL, Spark, Scala, AWS ecosystem (S3, EKS), Airflow, Kafka, catalog store (Hive or similar). Nice-to-have skills: Iceberg "," Mid-Senior level "," Contract "," Information Technology "," Information Technology & Services " Data Engineer,United States,Data Engineer - BI,https://www.linkedin.com/jobs/view/data-engineer-bi-at-arkansas-workforce-centers-3524818081?refId=qBNRN7AmIYeeJjjUhFWVqw%3D%3D&trackingId=24blk%2B73yPhf3RytpTejAA%3D%3D&position=19&pageNum=19&trk=public_jobs_jserp-result_search-card," Arkansas Workforce Centers ",https://www.linkedin.com/company/arkansas-workforce-centers?trk=public_jobs_topcard-org-name," Lowell, AR "," 4 days ago "," Be among the first 25 applicants ","This job was posted by https://www.arjoblink.arkansas.gov : For more information, please see: https://www.arjoblink.arkansas.gov/jobs/3743730 Job Title: Data Engineer - BI A Data Engineer at Arvest is a technical team member who will create, maintain, and evolve
the strategy for data storing, transformation and distribution. They use common data architecture practices to translate business requirements into conceptual, logical, and physical data models that will support data analysis/visualization and decision-making across the organization. We are seeking candidates who embrace diversity, equity, and inclusion in a workplace where everyone feels valued and inspired. What You'll Do at Arvest: (Other duties may be assigned.) Develop resilient data pipeline solutions that are sustainable, fault-tolerant, and highly scalable using modern and new technologies of varying complexity and scope. Troubleshoot moderately complex problems and assist with root cause analysis. Support production workloads as necessary. Participate in on-call rotation, as needed. Utilize technical expertise to develop and execute queries to extract internal and external data from various sources that will be required for a robust and reliable data infrastructure. Build software that performs well, is secure, and is accessible to customers. Ensure that work product delivered by the team meets standards for reusability, security, and performance and that data is available, usable, and fit-for-purpose. Partner with Engineers, contractors, and 3rd parties to deliver solutions that are efficient, reusable, and impactful. May work with contractors and 3rd parties to accomplish goals. Collaborate with the Product Owner and End Users to ensure that acceptance criteria are met and the business need is satisfied. Build and manage data quality and data loads using automated testing frameworks and methodologies such as Data-Driven Testing (DDT). Mentor and guide less experienced engineers to build skills and adopt practices. Create proofs-of-concept and proofs-of-technology to evaluate the feasibility of solutions, including recommendations based on the results. Make sound design/coding decisions keeping customer experience in the forefront. 
Research and recommend data for acquisition and evaluate suitability. Support the identification of anomalies and data quality issues. Participate in cross-product Communities of Practice and/or Guilds by attending sessions, volunteering for research topics, and presenting findings to the group. Promote the re-use of data across the Company. Perform code reviews. Test own work and review tests performed by more junior team members, as appropriate. Exhibit strong problem solving and analytical skills, as well as strong communication and interpersonal skills. Contribute to healthy working relationships among teams and individuals. Understand and comply with bank policy, laws, regulations, and the bank's BSA/AML Program, as applicable to your job duties. This includes, but is not limited to: completing compliance training and adhering to internal procedures and controls; reporting any known violations of compliance policy, laws, or regulations; and reporting any suspicious customer and/or account activity."," Entry level "," Full-time "," Information Technology "," Retail Apparel and Fashion " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-mcubed-staffing-3511754137?refId=qBNRN7AmIYeeJjjUhFWVqw%3D%3D&trackingId=ai%2FtzO%2F5UVo3TE1dC%2B6btA%3D%3D&position=20&pageNum=19&trk=public_jobs_jserp-result_search-card," mCubed Staffing ",https://www.linkedin.com/company/mcubed-staffing?trk=public_jobs_topcard-org-name," Detroit Metropolitan Area "," 2 days ago "," 85 applicants ","**NO C2C OR SPONSORSHIP AVAILABLE! We are seeking a Senior Data Engineer/Developer who possesses a strong passion for designing, optimizing, refactoring, and upgrading complex data solutions. 
Responsibilities: • Research, architect, and develop innovative multi-tiered solutions using modern tools and methodologies with a goal for technical excellence • Work closely with other developers, architects, and stakeholders to provide estimates based on customer experience, features, and envisioned solutions • Ensure architecture and design of the solution is in alignment with overall enterprise architecture • Solve problems and proactively look for ways to improve our products and platform Requirements: • Expert SQL Server Development experience • Expert backend designer, capable of managing high-level architecture while adding or optimizing components • Adept at optimizing and refactoring and upgrading legacy systems • Excellent performance tuning skills regarding data models and SQL code • Expert data modeler with strong data analytic skills • Strong ETL experience, with emphasis on complex transformations and large data sets using SSIS or Talend • Relentless troubleshooter and investigator, proficient with SQL Profiler, Extended Events, Query Store etc. • Strong automated testing experience • Strong AWS Experience"," Mid-Senior level "," Full-time "," Information Technology, General Business, and Production "," IT Services and IT Consulting, Software Development, and Motor Vehicle Manufacturing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-ivy-tech-solutions-inc-3493990706?refId=qBNRN7AmIYeeJjjUhFWVqw%3D%3D&trackingId=zwZxvH0TEFB9eO%2BpPtUh1g%3D%3D&position=21&pageNum=19&trk=public_jobs_jserp-result_search-card," mCubed Staffing ",https://www.linkedin.com/company/mcubed-staffing?trk=public_jobs_topcard-org-name," Detroit Metropolitan Area "," 2 days ago "," 85 applicants "," **NO C2C OR SPONSORSHIP AVAILABLE! 
We are seeking a Senior Data Engineer/Developer who possesses a strong passion for designing, optimizing, refactoring, and upgrading complex data solutions. Responsibilities: • Research, architect, and develop innovative multi-tiered solutions using modern tools and methodologies with a goal for technical excellence • Work closely with other developers, architects, and stakeholders to provide estimates based on customer experience, features, and envisioned solutions • Ensure architecture and design of the solution is in alignment with overall enterprise architecture • Solve problems and proactively look for ways to improve our products and platform. Requirements: • Expert SQL Server Development experience • Expert backend designer, capable of managing high-level architecture while adding or optimizing components • Adept at optimizing, refactoring and upgrading legacy systems • Excellent performance tuning skills regarding data models and SQL code • Expert data modeler with strong data analytic skills • Strong ETL experience, with emphasis on complex transformations and large data sets using SSIS or Talend • Relentless troubleshooter and investigator, proficient with SQL Profiler, Extended Events, Query Store etc. • Strong automated testing experience • Strong AWS Experience "," Mid-Senior level "," Full-time "," Information Technology, General Business, and Production "," IT Services and IT Consulting, Software Development, and Motor Vehicle Manufacturing " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-ivy-tech-solutions-inc-3493988959?refId=qBNRN7AmIYeeJjjUhFWVqw%3D%3D&trackingId=U4YnuX4K1dSc6NGxQblmaA%3D%3D&position=22&pageNum=19&trk=public_jobs_jserp-result_search-card," mCubed Staffing ",https://www.linkedin.com/company/mcubed-staffing?trk=public_jobs_topcard-org-name," Detroit Metropolitan Area "," 2 days ago "," 85 applicants "," **NO C2C OR SPONSORSHIP AVAILABLE! 
We are seeking a Senior Data Engineer/Developer who possesses a strong passion for designing, optimizing, refactoring, and upgrading complex data solutions. Responsibilities: • Research, architect, and develop innovative multi-tiered solutions using modern tools and methodologies with a goal for technical excellence • Work closely with other developers, architects, and stakeholders to provide estimates based on customer experience, features, and envisioned solutions • Ensure architecture and design of the solution is in alignment with overall enterprise architecture • Solve problems and proactively look for ways to improve our products and platform. Requirements: • Expert SQL Server Development experience • Expert backend designer, capable of managing high-level architecture while adding or optimizing components • Adept at optimizing, refactoring and upgrading legacy systems • Excellent performance tuning skills regarding data models and SQL code • Expert data modeler with strong data analytic skills • Strong ETL experience, with emphasis on complex transformations and large data sets using SSIS or Talend • Relentless troubleshooter and investigator, proficient with SQL Profiler, Extended Events, Query Store etc. • Strong automated testing experience • Strong AWS Experience "," Mid-Senior level "," Full-time "," Information Technology, General Business, and Production "," IT Services and IT Consulting, Software Development, and Motor Vehicle Manufacturing " Data Engineer,United States,Data Engineer II,https://www.linkedin.com/jobs/view/data-engineer-ii-at-vuori-3500231549?refId=qBNRN7AmIYeeJjjUhFWVqw%3D%3D&trackingId=F8YM1Icmr%2BuN25qMMXg3vg%3D%3D&position=23&pageNum=19&trk=public_jobs_jserp-result_search-card," Vuori ",https://www.linkedin.com/company/vuori-inc-?trk=public_jobs_topcard-org-name," Carlsbad, CA "," 2 weeks ago "," Over 200 applicants ","Company Description Vuori is re-defining what athletic apparel looks like: built to move and sweat in but designed with a 
casual aesthetic to transition into everyday life. We draw inspiration from an active coastal California lifestyle; an integration of fitness, creative expression and life. Our high energy fast paced office environment is reflected in the clothes we make. We aim to inspire others to take on all aspects of their lives with clarity, enthusiasm and purpose…while having a lot of fun along the way. We are proud to be an outlet for opportunity and for personal growth and success. Job Description The Data Engineer II will construct and maintain pipelines and data structures within the enterprise data platform. Comprised of the data lake and data warehouse, the enterprise data platform supports the data and analytics needs across Vuori. In partnership with the Product and Data teams, the Data Engineer II will gain a thorough knowledge of business requirements and work alongside the Data Architect to build data structures that best deliver conformed, consistent, and accessible information. Additionally, the Data Engineer II will create optimized extraction, transform, and load processes that deliver data to users in a timely and accurate manner. Responsibilities include, but are not limited to: Strategize and continue to build out a flexible data lake/data warehouse model with the ability to evolve based on new sources, removal of legacy sources, and to support enhanced BI/analytic capabilities. Create reliable, clear, and sustainable data pipelines using best practices to deliver trusted data in an “on-time” demand environment. Source data via multiple strategies including but not limited to APIs, flat files, webhooks and SFTP/email connections. Collaborate in the definition of logical data models, and construct the physical database structure. Participate in enhancing current state SDLC; maintain Development and QA environments. 
Create robust data auditing and reconciliation processes to ensure high data quality within the enterprise data platform; proactively work with source data providers to identify and resolve data inconsistencies. In cooperation with the Data Architect, participate in creating architecture artifacts that capture the complete data lake/data warehouse landscape. Contribute to Technical Knowledge Base by creating and enhancing technical and operational documentation to provide visibility and continuity of processes. Qualifications B.S., ideally in Computer Science, or equivalent work experience is preferred. 5+ years data engineering experience with multiple data lake and data warehouse builds completed. Proficient in multiple lake/warehousing architecture approaches and experience designing logical data models. Experience creating solutions that include data governance concepts related to data quality, privacy, security, retention, etc. Experience working with modern data pipeline orchestration tools to create complex ETL pipelines. Azure Data Factory experience preferred; Fivetran experience a plus. Snowflake and Azure Synapse data platforms. SQL, mySQL, R, Python languages. Able to simplify complex data concepts in order to clearly communicate overarching data structures and strategy. Comfortable working in a matrixed organization involving cross-functional projects; strong communication and collaboration skills. Love working with data and tackling complex data problems. Excellent organizational skills to manage priorities. Well organized, adaptable and a clear thinker. 
Additional Information: Pay Range: From $120,000 - $140,000 annually. Benefits: Health Insurance, Paid Time Off, Employee Discount, 401(k). All your information will be kept confidential according to EEO guidelines."," Entry level "," Full-time "," Information Technology "," Apparel & Fashion " Data Engineer,United States,Data Engineer,https://www.linkedin.com/jobs/view/data-engineer-at-zifo-3518625691?refId=qBNRN7AmIYeeJjjUhFWVqw%3D%3D&trackingId=8GANgrgBv3yux3SqnwspPA%3D%3D&position=24&pageNum=19&trk=public_jobs_jserp-result_search-card," Zifo ",https://www.linkedin.com/company/zifo-technologies?trk=public_jobs_topcard-org-name," Raleigh, NC "," 1 month ago "," Be among the first 25 applicants ","Curious about this position? Does the prospect of combining Data management, cloud, and technology Transformation to help our BioPharma customers become data centric, AI/ML driven enterprises excite you? We are looking for young and experienced Data engineers to join our expanding team. Our customers are transforming their Scientific Informatics landscape to accelerate the discovery and manufacture of new Therapeutics, Drugs and Vaccines to save lives faster! Consequently, we are rapidly growing and looking to expand our technical team in North America. This is an exciting opportunity to join our Data Team and help leading Pharmaceutical and Biotech companies deploy and support innovative scientific R&D technology globally. Working to enable R&D Digital Transformation, you will design and engineer Data pipelines using the latest in Big Data, Cloud Infrastructure, Analytics and AI/ML. You will be part of a team of talented Zifo Data Scientists in helping our customers build their future data landscape. Curious about your responsibilities? 
Evaluate business needs and objectives. Analyze and organize raw data. Build/assess FAIR strategy/maturity model, data models, ETL pipelines. Perform data analysis, detect patterns, and report findings. Build algorithms, models, and visualization dashboards. Provide consulting inputs to develop guidelines & provide design ideas. Research, assess & provide recommendations on technology platforms & standards and best practices. Help customers estimate, plan, and prioritize data engineering efforts. Deliver guidance and oversight for project teams. Build relationships with customer, internal and vendor leadership teams. Help grow and develop the Zifo US Data Science business function. We are curious about YOU! We get excited about Data Engineers who have: Experience with Scientific Informatics solutions. Background in Chemistry or Biology Science. Experience in the Life Sciences Industry. Understanding of Machine Learning concepts. Understanding of Cloud Architecture & ML Ops. Experience in any Data focused solutions such as Snowflake, Databricks, PySpark, Cloudera, Knime, Airflow. Data Engineer certification from a cloud platform like Azure/AWS. A successful Zifo-ite is: Independent, Self-Motivated & Results driven. Willing & able to quickly acquire new Technical Skills & Business Principles. A critical thinker who possesses logical reasoning. Curious and always looking for creative solutions to complex problems. What you bring to the table: Strong programming skills. Strong database knowledge and query language skills (SQL, PL/SQL, NOSQL). Proven experience in at least one of the following languages: Python, R, Julia, Scala. Technical expertise with ETL/ELT, Data modeling, Data wrangling, Data Integrations, Visualization and Data Governance. Experience with building and administering data pipeline platforms, big data solutions, Relational and Graph databases, Data Lake and Cloud services. Experience with legacy data warehouse modernization and data migration. Experience with CI/CD DevOps processes. Understanding 
of designing enterprise data science infrastructure and deploying ML models Understanding of Data Cataloging, Data Storage, Data Engineering, Knowledge Graphs, Distributed Computing Plugged in to emerging commercial and open-source technologies & solutions Good understanding of mathematical concepts and statistical techniques At ease with problem solving, analytical thinking, and decision making in real time Understanding of SDLC processes, MLOps, and associated tools Curiosity to learn and passion to join the vanguard driving evolution in Scientific Informatics A passion for ensuring customers (new and existing) have an amazing experience when they interact with Zifo. Able to relay complex information in simple terms to internal, partner, and customer stakeholders with diverse levels of technical knowledge. Confident and effective in communicating across teams, management tiers, and with customers, both in writing and verbally. What we bring to the table: CURIOSITY DRIVEN, SCIENCE FOCUSED, EMPLOYEE BUILT. Our culture is unlike any other, one where we debate, challenge ourselves, and interact with all alike. We are a curious bunch, characterized by our passion to learn and spirit of teamwork. Zifo is a global R&D solutions provider focused on the industries of Pharma, Biotech, Manufacturing QC, Medical Devices, specialty chemicals, and other research-based organizations. Our team's knowledge of science and expertise in technology help Zifo better serve our customers around the globe, including 7 of the Top 10 Biopharma companies. We look for Science – Biotechnology, Pharmaceutical Technology, Biomedical Engineering, Microbiology, etc. We possess scientific and technical knowledge and hold both professional and personal goals. While we have a ""no doors"" policy to promote free access within, we do have a tough door to walk in. We search with a two-point agenda – technical competency and cultural adaptability. 
We offer a competitive compensation package including accrued vacation, medical, dental, vision, 401k with company matching, life insurance, and flexible spending accounts. If you share these sentiments and are prepared for the atypical, then Zifo is your calling! Zifo is an equal opportunity employer, and we value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status."," Entry level "," Full-time "," Information Technology "," Technology, Information and Internet "
Data Engineer,United States,Data Engineer/Analyst,https://www.linkedin.com/jobs/view/data-engineer-analyst-at-veryfi-3522664313?refId=qBNRN7AmIYeeJjjUhFWVqw%3D%3D&trackingId=fPTk%2FcRPd9IvTUe6FJkGhQ%3D%3D&position=25&pageNum=19&trk=public_jobs_jserp-result_search-card," Veryfi ",https://www.linkedin.com/company/veryfi-inc?trk=public_jobs_topcard-org-name," San Mateo, CA "," 3 weeks ago "," Be among the first 25 applicants ","Veryfi is looking for our next great data engineer who will build out and scale our analytics platform and corresponding data pipelines. You will be responsible for building and scaling a robust platform that delivers our ML/AI-driven insights, coordinating with the data visualization team to create engaging and insightful content Responsibilities Craft data engineering components, applications, and entities to empower self-service of our big data Develop and implement technical ETL best practices for data movement, data quality, and data cleansing Optimize and tune ETL processes, utilizing reusability, parameterization, workflow design, caching, parallel processing, and other performance tuning techniques. 
Qualifications Knowledgeable about data engineering best practices, comfortable in a fast-paced startup Experience with data warehousing, streaming data, and supporting architectures: pub/sub, stream processor/data aggregator, real-time analytics, data lake, cluster computing frameworks Command of the components necessary to architect solutions for complex data platforms and large-scale CI/CD data pipelines using a variety of technologies (REST APIs, advanced SQL, Amazon S3, Apache Kafka, data lakes, etc.), from relational SQL DBs (e.g. MySQL, Postgres) and newer NoSQL stores (e.g. Mongo, Neo4j) to in-memory caches (e.g. Redis, Memcached) Working knowledge of distributed computing and data modeling principles. Experience with object-oriented design, coding, and testing patterns, including experience with engineering software platforms and data infrastructures. Experience in Big Data, PySpark, Streaming Data. Knowledge of data management standards, data governance practices, and data quality dimensions. Experience in UNIX systems, writing shell scripts, and programming in Python Hands-on experience in Python using libraries like NumPy, Pandas, PySpark."," Entry level "," Full-time "," Information Technology "," Software Development "