Hadoop MapReduce jobs

    4,460 Hadoop MapReduce jobs found, pricing in NZD

    As a beginner, I am seeking a knowledgeable developer who can guide me on effectively using Google Cloud for Hadoop, Spark, Hive, Pig, and MR. The main goal is data processing and analysis. Key Knowledge Areas Needed: - Google Cloud usage for big data management - Relevant functionalities of Hadoop, Spark, Hive, Pig, and MR - Best practices for data storage, retrieval, and workflow streamlining Ideal Skills: - Extensive Google Cloud experience - Proficiency in Hadoop, Spark, Hive, Pig, and MR for data processing - Strong teaching abilities for beginners - Demonstrated experience in data processing and analysis.

    $284 (Avg Bid)
    14 bids

    As a beginner, I am seeking a knowledgeable developer who can guide me on effectively using Google Cloud for Hadoop, Spark, Hive, Pig, and MR. The main goal is data processing and analysis. Key Knowledge Areas Needed: - Google Cloud usage for big data management - Relevant functionalities of Hadoop, Spark, Hive, Pig, and MR - Best practices for data storage, retrieval, and workflow streamlining Ideal Skills: - Extensive Google Cloud experience - Proficiency in Hadoop, Spark, Hive, Pig, and MR for data processing - Strong teaching abilities for beginners - Demonstrated experience in data processing and analysis.

    $304 (Avg Bid)
    26 bids

    As a beginner, I am seeking a knowledgeable developer who can guide me on effectively using Google Cloud for Hadoop, Spark, Hive, Pig, and MR. The main goal is data processing and analysis. Key Knowledge Areas Needed: - Google Cloud usage for big data management - Relevant functionalities of Hadoop, Spark, Hive, Pig, and MR - Best practices for data storage, retrieval, and workflow streamlining Ideal Skills: - Extensive Google Cloud experience - Proficiency in Hadoop, Spark, Hive, Pig, and MR for data processing - Strong teaching abilities for beginners - Demonstrated experience in data processing and analysis.

    $35 (Avg Bid)
    11 bids

    ...commonly used packages, especially with GCP. Hands-on experience with data migration and data processing on the Google Cloud stack, specifically: BigQuery, Cloud Dataflow, Cloud DataProc, Cloud Storage, Cloud DataPrep, Cloud PubSub, Cloud Composer & Airflow. Experience designing and deploying large-scale distributed data processing systems with technologies such as PostgreSQL or equivalent databases, SQL, Hadoop, Spark, Tableau. Hands-on experience with Python/JSON nested data operations. Exposure to or knowledge of API design and REST, including versioning, isolation, and micro-services. Proven ability to define and build architecturally sound solution designs. Demonstrated ability to rapidly build relationships with key stakeholders. Experience of automated unit testing, automated integra...

    $22 / hr (Avg Bid)
    11 bids

    I am looking for a skilled professional who can efficiently set up a big data cluster. REQUIREMENTS: • Proficiency in Elasticsearch, Hadoop, Spark, Cassandra • Experience working with large-scale data storage (10+ terabytes). • Able to structure data effectively. SPECIFIC TASKS INCLUDE: - Setting up the Elasticsearch, Hadoop, Spark, Cassandra big data cluster. - Ensuring the data to be stored is structured. - Preparing the cluster to handle more than 10 terabytes of data. The ideal candidate will have substantial experience with large data structures and a deep understanding of big data database technologies. I encourage experts in big data management who are well versed in big data best practices to bid for this project.

    $50 / hr (Avg Bid)
    3 bids

    We are looking for an Informatica BDM developer with 7+ years of experience who can support us for 8 hours a day, Monday to Friday. Title: Informatica BDM Developer. Experience: 5+ years. 100% Remote. Contract: Long term. Timings: 10:30 am - 07:30 pm IST. Required Skills: Informatica Data Engineering, DIS and MAS • Databricks, Hadoop • Relational SQL and NoSQL databases, including some of the following: Azure Synapse/SQL DW and SQL Database, SQL Server and Oracle • Core cloud services from at least one of the major providers in the market (Azure, AWS, Google) • Agile methodologies, such as SCRUM • Task tracking tools, such as TFS and JIRA

    $2030 (Avg Bid)
    3 bids

    I am seeking a skilled professional proficient in managing big data tasks with Hadoop, Hive, and PySpark. The primary aim of this project is processing and analyzing structured data. Key Tasks: - Implementing Hadoop, Hive, and PySpark for my project to analyze large volumes of structured data. - Using Hive and PySpark for sophisticated data analysis and processing techniques. Ideal Skills: - Proficiency in the Hadoop ecosystem - Experience with Hive and PySpark - Strong background in working with structured data - Expertise in big data processing and data analysis - Excellent problem-solving and communication skills Deliverables: - Converting raw data into useful information using Hive and visualizing the query results as graphical representations. - C...

    $28 / hr (Avg Bid)
    15 bids

    ...currently seeking a Hadoop professional with strong expertise in PySpark for a multi-faceted project. Your responsibilities will include, but not be limited to: - Data analysis: You'll be working with diverse datasets including customer data, sales data and sensor data. Your role will involve deciphering this data, identifying key patterns and drawing out impactful insights. - Data processing: A major part of this role will be processing the mentioned datasets and preparing them effectively for analysis. - Performance optimization: The ultimate aim is to enhance our customer targeting, boost sales revenue and identify patterns in sensor data. Utilizing your skills to optimize performance in these areas will be highly appreciated. The ideal candidate will be skilled in ...

    $766 (Avg Bid)
    25 bids

    ...R), and other BI essentials, join us for global projects. What We're Looking For: Business Intelligence Experts with Training Skills: Data analysis, visualization, and SQL Programming (Python, R) Business acumen and problem-solving Effective communication and domain expertise Data warehousing and modeling ETL processes and OLAP Statistical analysis and machine learning Big data technologies (Hadoop, Spark) Agile methodologies and data-driven decision-making Cloud technologies (AWS, Azure) and data security NoSQL databases and web scraping Natural Language Processing (NLP) and sentiment analysis API integration and data architecture Why Work With Us: Global Opportunities: Collaborate worldwide across diverse industries. Impactful Work: Empower businesses through data-drive...

    $35 / hr (Avg Bid)
    24 bids

    I'm launching an extensive project that needs a proficient expert in Google Cloud Platform (including BigQuery, GCS, Airflow/Composer), Hadoop, Java, Python, and Splunk. The selected candidate should display exemplary skills in these tools, and offer long-term support. Key Responsibilities: - Data analysis and reporting - Application development - Log monitoring and analysis Skills Requirements: - Google Cloud Platform (BigQuery, GCS, Airflow/Composer) - Hadoop - Java - Python - Splunk The data size is unknown at the moment, but proficiency in managing large datasets will be advantageous. Please place your bid taking into account all these factors. Your prior experience handling similar projects will be a plus. I look forward to working with a dedicated and know...

    $807 (Avg Bid)
    54 bids

    ...commonly used packages, especially with GCP. Hands-on experience with data migration and data processing on the Google Cloud stack, specifically: BigQuery, Cloud Dataflow, Cloud DataProc, Cloud Storage, Cloud DataPrep, Cloud PubSub, Cloud Composer & Airflow. Experience designing and deploying large-scale distributed data processing systems with technologies such as PostgreSQL or equivalent databases, SQL, Hadoop, Spark, Tableau. Hands-on experience with Python/JSON nested data operations. Exposure to or knowledge of API design and REST, including versioning, isolation, and micro-services. Proven ability to define and build architecturally sound solution designs. Demonstrated ability to rapidly build relationships with key stakeholders. Experience of automated unit testing, automated integra...

    $23 / hr (Avg Bid)
    6 bids

    As an ecommerce platform looking to optimize our data management, I require assistance with several key aspects of my AWS big data project, including: - Data lake setup and configuration - Development of AWS Glue jobs - Deployment of Hadoop and Spark clusters - Kafka data streaming The freelancer hired for this project must possess expertise in AWS, Kafka, and Hadoop. Strong experience with AWS Glue is essential given the heavy utilization planned for the tool throughout the project. Your suggestions and recommendations regarding these tools and technologies will be heartily welcomed, but keep in mind specific tools are needed to successfully complete this project.

    $1396 (Avg Bid)
    20 bids

    I'm in search of a professional proficient in AWS and MapReduce. My project involves: - Creation and execution of MapReduce jobs within the AWS infrastructure. - Specifically, these tasks will focus on processing a sizeable amount of text data. - The goal of this data processing is to perform an in-depth word frequency analysis and extract meaningful insights from the data. The ideal freelancer for this job will have substantial experience handling data within these systems. Expertise in optimizing the performance of MapReduce jobs is also greatly desirable. For anyone dabbling in AWS, MapReduce and data analytics, this project can provide a challenging and rewarding experience.
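
    For illustration only (not text from the posting): word-frequency jobs like this are often run on AWS EMR with Hadoop Streaming, where the mapper and reducer are plain scripts that read stdin and write tab-separated key/value pairs. A minimal Python sketch of that pair follows; the file names are assumptions.

    # wc_mapper.py (hypothetical name): emit "<word><TAB>1" for every token read from stdin.
    import re
    import sys

    for line in sys.stdin:
        for word in re.findall(r"[a-z']+", line.lower()):
            print(word + "\t1")

    # wc_reducer.py: sum the counts per word; Hadoop Streaming delivers mapper output sorted by key.
    import sys

    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t", 1)
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                print(current_word + "\t" + str(current_count))
            current_word, current_count = word, int(count)
    if current_word is not None:
        print(current_word + "\t" + str(current_count))

    The pair can be smoke-tested locally with "cat input.txt | python3 wc_mapper.py | sort | python3 wc_reducer.py" before running it as a streaming step on the cluster.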

    $58 (Avg Bid)
    7 bids

    ...Queries: Write a SQL query to find the second highest salary. Design a database schema for a given problem statement. Optimize a given SQL query. Solution Design: Design a parking lot system using object-oriented principles. Propose a data model for an e-commerce platform. Outline an approach to scale a given algorithm for large datasets. Big Data Technologies (if applicable): Basic questions on Hadoop, Spark, or other big data tools. How to handle large datasets efficiently. Writing map-reduce jobs (if relevant to the role). Statistical Analysis and Data Processing: Write a program to calculate statistical measures like mean, median, mode. Implement data normalization or standardization techniques. Process and analyze large datasets using Python libraries like Pandas. Rememb...
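
    As a side note on the statistics item in the list above (not part of the listing), Python's standard library already covers the basic measures; a tiny sketch with made-up numbers:

    # Mean, median and mode with the stdlib; the salary figures are invented sample data.
    from statistics import mean, median, mode

    salaries = [4200, 3100, 5800, 3100, 4700]
    print(mean(salaries), median(salaries), mode(salaries))  # 4180 4200 3100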

    $13 / hr (Avg Bid)
    36 bids

    ...customer-centric software products · Analyze existing software implementations to identify areas of improvement and provide deadline estimates for implementing new features · Develop software applications using technologies that include, but are not limited to, core Java (11+), Kafka or another messaging system, web frameworks like Struts / Spring, relational (Oracle) and non-relational databases (SQL, MongoDB, Hadoop, etc.), with a RESTful microservice architecture · Implement security and data protection features · Update and maintain documentation for team processes, best practices, and software runbooks · Collaborating with git in a multi-developer team · Appreciation for clean and well documented code · Contribution to database design ...

    $2326 (Avg Bid)
    51 bids

    A project involving data analysis/data engineering on big data needs to be done. The candidate must have a good command of big data solutions such as Hadoop.

    $18 / hr (Avg Bid)
    8 bids

    I'm in search of an intermediate-level Java programmer well-versed in MapReduce. Your responsibility will be to implement the conceptual methods outlined in a given academic paper. What sets this task apart is that you're encouraged to positively augment the methodologies used: • Efficiency: Be creative with the paper's strategies and look for room for improvement in the program's efficiency. This could include enhancements to the program's capacity to process data, or to its speed. Ideal candidate should be seasoned in Java Programming, specifically MapReduce operations. Moreover, the ability to critically analyze and improve upon existing concepts will ensure success in this task. Don't hesitate to innovate, as long as you maintain the ...

    $230 (Avg Bid)
    33 bids

    Project Title: Advanced Hadoop Administrator Description: - We are seeking an advanced Hadoop administrator for an inhouse Hadoop setup project. - The ideal candidate should have extensive experience and expertise in Hadoop administration. - The main tasks of the Hadoop administrator will include data processing, data storage, and data analysis. - The project is expected to be completed in less than a month. - The Hadoop administrator will be responsible for ensuring the smooth functioning of the Hadoop system and optimizing its performance. - The candidate should have a deep understanding of Hadoop architecture, configuration, and troubleshooting. - Experience in managing large-scale data processing and storage environments is requi...

    $513 (Avg Bid)
    3 bids

    I am looking for a freelancer to help me with a Proof of Concept (POC) project focusing on Hadoop. Requirement: We drop a file in HDFS, which is then pushed to Spark or Kafka, and the final output/results are pushed into a database. The objective is to show that we can handle millions of records as input and land them in the destination. The POC should be completed within 3-4 days and should have a simple level of complexity. Skills and experience required: - Strong knowledge and experience with Hadoop - Familiarity with HDFS and Kafka/Spark - Ability to quickly understand and implement a simple POC project - Good problem-solving skills and attention to detail
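
    A rough PySpark sketch of the pipeline shape described above (not the poster's code): read a file that landed in HDFS, run a placeholder aggregation, and push the result to a relational database over JDBC. Paths, column names and the PostgreSQL target are assumptions, and a Kafka-based variant would use Spark Structured Streaming instead.

    # Hypothetical POC skeleton: HDFS file in, aggregated results out to a database.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("hdfs-to-db-poc").getOrCreate()

    # Assumed landing path and CSV layout with a header row.
    records = spark.read.csv("hdfs:///landing/input.csv", header=True, inferSchema=True)

    # Placeholder aggregation to show millions of rows made it through.
    summary = records.groupBy("category").count()

    # Assumed PostgreSQL target; the JDBC driver jar must be on the Spark classpath.
    (summary.write.format("jdbc")
        .option("url", "jdbc:postgresql://db-host:5432/poc")
        .option("dbtable", "poc_results")
        .option("user", "poc_user")
        .option("password", "change_me")
        .mode("overwrite")
        .save())

    spark.stop()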

    $280 (Avg Bid)
    9 bids

    ...upload the Hadoop package to HDFS. Use commands to show the IP addresses of all DataNodes. Provide detailed information (ls -l) of the blocks on each DataNode. Provide detailed information (ls -l) of the fsimage file and edit log file. Include screenshots of the Overview module, Startup Process module, DataNodes module, and Browse Directory module on the Web UI of HDFS. MapReduce Temperature Analysis You are given a collection of text documents containing temperature data. Your task is to implement a MapReduce program to find the maximum and minimum temperatures for each year. Data Format: Year: Second item in each line Minimum temperature: Fourth item in each line Maximum temperature: Fifth item in each line Submission Requirements: Submit the source code...
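
    For orientation only (the assignment may require Java): the temperature part maps cleanly onto a Hadoop Streaming mapper/reducer pair. The Python sketch below assumes whitespace-separated fields laid out as stated above (year = 2nd item, minimum temperature = 4th, maximum temperature = 5th); file names are placeholders.

    # temp_mapper.py: emit "year<TAB>min<TAB>max" for each well-formed input line.
    import sys

    for line in sys.stdin:
        parts = line.split()
        if len(parts) < 5:
            continue  # skip malformed lines
        year, tmin, tmax = parts[1], parts[3], parts[4]
        print(year + "\t" + tmin + "\t" + tmax)

    # temp_reducer.py: keep the overall minimum and maximum per year (input arrives sorted by year).
    import sys

    current_year, lo, hi = None, None, None
    for line in sys.stdin:
        year, tmin, tmax = line.rstrip("\n").split("\t")
        tmin, tmax = float(tmin), float(tmax)
        if year != current_year:
            if current_year is not None:
                print(current_year + "\t" + str(lo) + "\t" + str(hi))
            current_year, lo, hi = year, tmin, tmax
        else:
            lo, hi = min(lo, tmin), max(hi, tmax)
    if current_year is not None:
        print(current_year + "\t" + str(lo) + "\t" + str(hi))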

    $25 (Avg Bid)
    2 bids

    Big data project in Java that needs to be completed within 24 hours. The person needs to be experienced in Spark and Hadoop.

    $218 (Avg Bid)
    10 bids

    Looking for a Hadoop specialist to design a query optimisation solution. Currently the search freezes whenever a user tries to run more than one search at a time, and we need to implement a fix. This is a remote project. Share your idea first if you have done any such work. The UI is in React and the backend is in Node.js.

    $26 / hr (Avg Bid)
    38 bids

    #Your code goes here
    # HBase shell (JRuby) helper; targets the classic HTable/Put client API of older HBase releases.
    import 'org.apache.hadoop.hbase.client.HTable'
    import 'org.apache.hadoop.hbase.client.Put'

    # Convert every argument to a Java byte array.
    def jbytes(*args)
      args.map { |arg| arg.to_s.to_java_bytes }
    end

    # Write several column values into one row with a single Put.
    def put_many(table_name, row, column_values)
      table = HTable.new(@hbase.configuration, table_name)
      p = Put.new(*jbytes(row))
      column_values.each do |column, value|
        family, qualifier = column.split(':')
        p.add(*jbytes(family, qualifier, value))
      end
      table.put(p)
    end

    # Call put_many function with sample data
    put_many 'wiki', 'DevOps', {
      "text:" => "What DevOps IaC do you use?",
      "revision:author" => "Frayad Gebrehana",
      "revision:comment" => "Terraform"
    }

    # Get data from the 'wiki' table
    get 'wiki', 'DevOps'

    #Do not remove the exit call below
    exit

    $99 (Avg Bid)
    7 bids

    I am in need of assistance with Hadoop for the installation and setup of the platform. Skills and experience required: - Proficiency in Hadoop installation and setup - Knowledge of different versions of Hadoop (Hadoop 1.x and Hadoop 2.x) - Ability to work within a tight timeline (project needs to be completed within 7 hours) Please note that there is no specific preference for the version of Hadoop to be used.

    $21 (Avg Bid)
    2 bids

    WordPress black theme. Design as in the photo; images can be taken from Udemy. Content: Coupon Code: 90OFFOCT23 (subscribe by 7 Oct’23 or till stock lasts) Data Engineering Career Path: Big Data Hadoop and Spark with Scala: Scala Programming In-Depth: Apache Spark In-Depth (Spark with Scala): DP-900: Microsoft Azure Data Fundamentals: Data Science Career Path: Data Analysis In-Depth (With Python): https://www

    $12 (Avg Bid)
    Guaranteed
    4 entries

    Seeking an expert in both Hadoop and Spark to assist with various big data projects. The ideal candidate should have intermediate level expertise in both Hadoop and Spark. Skills and experience needed for the job: - Proficiency in Hadoop and Spark - Intermediate level expertise in Hadoop and Spark - Strong understanding of big data concepts and tools - Experience working on big data projects - Familiarity with data processing and analysis using Hadoop and Spark - Ability to troubleshoot and optimize big data tools - Strong problem-solving skills and attention to detail

    $36 / hr (Avg Bid)
    12 bids

    I am looking for a freelancer to compare the performance metrics of Hadoop, Spark, and Kafka using the data that I will provide. Skills and experience required: - Strong knowledge of big data processing architectures, specifically Hadoop, Spark, and Kafka - Proficiency in analyzing and comparing performance metrics - Ability to present findings through written analysis, graphs and charts, and tables and figures The comparison should focus on key performance metrics such as processing speed, scalability, fault tolerance, throughput, and latency. The freelancer should be able to provide a comprehensive analysis of these metrics and present them in a clear and visually appealing manner. I will explain more about the data

    $255 (Avg Bid)
    24 bids

    Looking for Hadoop Hive Experts I am seeking experienced Hadoop Hive experts for a personal project. Requirements: - Advanced level of expertise in Hadoop Hive - Strong understanding of big data processing and analysis - Proficient in Hive query language (HQL) - Experience with data warehousing and ETL processes - Familiarity with Apache Hadoop ecosystem tools (e.g., HDFS, MapReduce) - Ability to optimize and tune Hadoop Hive queries for performance If you have a deep understanding of Hadoop Hive and can effectively analyze and process big data, then this project is for you. Please provide examples of your previous work in Hadoop Hive and any relevant certifications or qualifications. I am flexible with the timeframe for complet...

    $33 (Avg Bid)
    2 bids

    I am looking for a Kafka admin who can assist me with the following tasks: - Onboarding the Kafka cluster - Managing Kafka topics and partitions - The cluster is already available in the company and we need to onboard it for our project - Should be able to size and scope the setup - We will start with small data ingestion from the Hadoop data lake - Should be willing to work on a remote machine The ideal candidate should have experience in: - Setting up and configuring Kafka clusters - Managing Kafka topics and partitions - Troubleshooting Kafka performance issues The client already has all the necessary hardware and software for the Kafka cluster setup.

    $30 / hr (Avg Bid)
    11 bids

    Over the past years, I have devoted myself to a project involving Algorithmic Trading. My system leverages only pricing and volume data at market closing. It studies technical indicators for every stock in the S&P 500 from its IPO date, testing all possible indicator 'settings', as I prefer to call them. This process uncovers microscopic signals that suggest beneficial buying at market close and selling at the next day's close. Any signal with a p-value below 0.01 is added to my portfolio. Following this, the system removes correlated signals to prevent duplication. A Bayesian ranking of signals is calculated, and correlated signals with a lower rank are eliminated. The result is a daily optimized portfolio of buy/sell signals. This system, primarily built with numpy...

    $63 / hr (Avg Bid)
    NDA
    13 bids

    I am looking for a Hadoop developer with a strong background in data analysis. The scope of the project involves analyzing and interpreting data using Hadoop. The ideal candidate should have experience in Hadoop data analysis and be able to work on the project within a timeline of less than 1 month.

    $409 (Avg Bid)
    4 bids

    I am looking for a Hadoop developer with a strong background in data analysis. The scope of the project involves analyzing and interpreting data using Hadoop. The ideal candidate should have experience in Hadoop data analysis and be able to work on the project within a timeline of less than 1 month.

    $20 (Avg Bid)
    3 bids

    1: model and implement efficient big data solutions for various application areas using appropriately selected algorithms and data structures. 2: analyse methods and algorithms, to compare and evaluate them with respect to time and space requirements, and make appropriate design choices when solving real-world problems. 3: motivate and explain trade-offs in big data processing technique design and analysis in written and oral form. 4: explain the Big Data fundamentals, including the evolution of Big Data, the characteristics of Big Data and the challenges introduced. 6: apply the novel architectures and platforms introduced for Big Data, i.e., Hadoop, MapReduce and Spark, to complex problems on Hadoop execu...

    $213 (Avg Bid)
    9 bids

    I am looking for a freelancer who can help me with an issue I am facing with launching Apache Gobblin in YARN. Here are the details of the project: Error Message: NoClassDefFoundError (Please note that this question was skipped, so the error message may not be accurate) Apache Gobblin Version: 2.0.0 YARN Configuration: Not sure Skills and Experience: - Strong knowledge and experience with Apache Gobblin - Expertise in Hadoop/YARN configuration and troubleshooting - Familiarity with interrupt exceptions and related issues - Ability to diagnose and resolve issues in a timely manner - Excellent communication skills to effectively collaborate with me and understand the problem If you have the required skills and experience, please bid on thi...

    $41 / hr (Avg Bid)
    10 bids

    Write MapReduce programs that give you a chance to develop an understanding of principles when solving complex problems on the Hadoop execution platform.

    $41 (Avg Bid)
    9 bids

    I am looking for a Python expert who can help me with a specific task of implementing a MapReducer. The ideal candidate should have the following skills and experience: - Proficient in Python programming language - Strong knowledge and experience in MapReduce framework - Familiarity with web scraping, data analysis, and machine learning would be a plus The specific library or framework that I have in mind for this project is [insert library/framework name]. I have a tight deadline for this task, and I prefer it to be completed in less than a week.

    $18 / hr (Avg Bid)
    52 bids

    I am looking for a freelancer to develop a Mapreduce program in Python for data processing. The ideal candidate should have experience in Python programming and a strong understanding of Mapreduce concepts. Requirements: - Proficiency in Python programming language - Knowledge of Mapreduce concepts and algorithms - Ability to handle large data sets efficiently - Experience with data processing and manipulation - Familiarity with data analysis and mining techniques The program should be flexible enough to handle any data set, but the client will provide specific data sets for the freelancer to work with. The freelancer should be able to process and analyze the provided data sets efficiently using the Mapreduce program.

    $179 (Avg Bid)
    27 bids

    It's a Java Hadoop MapReduce task. The program should run on Windows. An algorithm must be devised and implemented that can recognize the language of a given text. Thank you.
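
    Not part of the posting (which asks for Java on Windows): one common approach is stopword scoring, where each line votes for the language whose stopword list it matches best and a word-count-style reducer sums the votes. A Python mapper sketch of that idea, with deliberately tiny placeholder stopword lists:

    # Hypothetical mapper: emit "language<TAB>1" per input line; pair it with a summing reducer.
    import sys

    STOPWORDS = {
        "english": {"the", "and", "is", "of", "to"},
        "german": {"der", "die", "und", "ist", "nicht"},
        "french": {"le", "la", "et", "est", "les"},
    }

    for line in sys.stdin:
        words = line.lower().split()
        scores = {lang: sum(w in sw for w in words) for lang, sw in STOPWORDS.items()}
        best = max(scores, key=scores.get)
        if scores[best] > 0:
            print(best + "\t1")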

    $55 (Avg Bid)
    8 bids

    Looking for a freelancer to help with a simple Hadoop SPARK task focusing on data visualization. The ideal candidate should have experience in: - Hadoop and SPARK - Data visualization tools and techniques - Ability to work quickly and deliver results as soon as possible. The task is: Use the following link to get the Dataset: Write a report that contains the following steps: 1. Write steps of Spark & Hadoop setup with some screenshots. 2. Import Libraries and Set Work Background (Steps +screen shots) 3. Load and Discover Data (Steps +screen shots + Codes) 4. Data Cleaning and Preprocessing (Steps +screen shots + Codes) 5. Data Analysis - Simple Analysis (explanation, print screen codes) - Moderate Analysis (explanation

    $55 (Avg Bid)
    8 bids

    Looking for a freelancer to help with a simple Hadoop SPARK task focusing on data visualization. The ideal candidate should have experience in: - Hadoop and SPARK - Data visualization tools and techniques - Ability to work quickly and deliver results as soon as possible. The task is: Use the following link to get the Dataset: Write a report that contains the following steps: 1. Write steps of Spark & Hadoop setup with some screenshots. 2. Import Libraries and Set Work Background (Steps +screen shots) 3. Load and Discover Data (Steps +screen shots + Codes) 4. Data Cleaning and Preprocessing (Steps +screen shots + Codes) 5. Data Analysis - Simple Analysis (explanation, print screen codes) - Moderate Analysis (explanation

    $50 (Avg Bid)
    6 bids

    I am looking for an advanced Hadoop trainer for an online training program. I have some specific topics to be covered as part of the program, and it is essential that the trainer can provide in-depth knowledge and expertise in Hadoop. The topics to be discussed include Big Data technologies, Hadoop administration, Data warehousing, MapReduce, HDFS Architecture, Cluster Management, Real Time Processing, HBase, Apache Sqoop, and Flume. Of course, the trainer should also have good working knowledge about other Big Data topics and techniques. In addition to the topics mentioned, the successful candidate must also demonstrate the ability to tailor the course to meet the learner’s individual needs, making sure that the classes are engaging and fun. The traine...

    $23 / hr (Avg Bid)
    1 bid

    I am looking for a freelancer with some experience in working with Hadoop and Spark, specifically in setting up a logging platform. I need full assistance in setting up the platform and answering analytical questions using log files within Hadoop. Ideal skills and experience for this project include: - Experience working with Hadoop and Spark - Knowledge of setting up logging platforms - Analytical skills to answer questions using log files

    $68 (Avg Bid)
    4 bids

    Looking for a freelancer to help with a simple Hadoop SPARK task focusing on data visualization. The ideal candidate should have experience in: - Hadoop and SPARK - Data visualization tools and techniques - Ability to work quickly and deliver results as soon as possible. The task is: Use the following link to get the Dataset: 1- Using Hadoop SPARK software execute three examples: simple, moderate, and advanced over the chosen DS. 2- For each case write the code and screenshot for the output. 3- Visualize the results of each example with appropriate method.

    $40 (Avg Bid)
    5 bids

    ...procorpsystem <<mail here>> Reply to: <<mail here>> Role: Hadoop Developer / Admin with Production Support Location: Austin, TX Duration: 12 Months Job Description: We are looking for someone with strong production support, administration and development experience with Hadoop technologies. • Minimum experience 8 years • Must have hands-on experience managing multiple Hortonworks clusters. Troubleshooting, maintaining and monitoring is the key responsibility here. • M...

    $466 (Avg Bid)
    17 bids

    ...System Development Engineer to join our team for a remote project in Korea. The ideal candidate should have expertise in Hadoop, HDFS. Responsibilities: - Coding and implementing HDFS system development tasks - Collaborating with the team to design and develop efficient solutions - Conducting thorough testing and debugging to ensure system reliability [Project Overview] We are looking for skilled professionals capable of enhancing and developing the HDFS technology used in the NMS (Network Management System) related systems of one of the three major Korean telecommunication companies. [Detailed Job Description] 1. Establish HDFS cluster using the vanilla version of Hadoop - Expected integration with 16 servers 2. Build a Data Warehouse based on HDFS - Construct a sys...

    $7608 (Avg Bid)
    2 bids

    I am seeking an experienced data engineer to perform training for my company. We are looking for somebody with advanced knowledge in Python, as well as Big Data technologies such as Hadoop and Spark. The training should last 1-2 months, teaching our team the fundamentals as well as more advanced applications of these skills in data engineering. If you think you have the experience and qualifications to provide this training, I encourage you to submit your proposal.

    $1478 (Avg Bid)
    9 bids

    I am looking for a talented data scientist to help with a project that requires data analysis, machine learning and data visualization. I have medium-sized data sets ready to go, at between 1,000 and 10,000 rows. The data sets are...I'm seeking someone who can make sense of the data and use it to create data visualizations. This person should possess a strong understanding of machine learning and data analysis principles. The successful applicant will be expected to translate data into a visual form that will be easy to understand and communicate to others. Any experience with software such as Python, R, SPSS, Apache, and Hadoop will be greatly beneficial. If you think you have the skills to produce great results and get the job done, then please get in touch and let me know ho...

    $21 / hr (Avg Bid)
    26 bids

    Quantori is a new company with a long history. We have over twenty years' experience in developing software for the pharmaceutical industry and driving advanced strategies in the world of Big Data revolution. ...- Good written and spoken English skills (upper-intermediate or higher) Nice to have: - Knowledge of web-based frameworks (Flask, Django, FastAPI) - Knowledge of and experience in working with Kubernetes - Experience in working with cloud automation and IaC provisioning tools (Terraform, CloudFormation, etc.) - Experience with Data Engineering / ETL Pipelines (Apache Airflow, Pandas, PySpark, Hadoop, etc.) - Good understanding of application architecture principles We offer: - Competitive compensation - Remote work - Flexible working hours - A team with an excellent...

    $58 / hr (Avg Bid)
    81 bids

    I am looking for an experienced Hadoop engineer to assist with troubleshooting and optimization of our existing Hadoop cluster. Ideally, the successful candidate will demonstrate a high level of proficiency in the relevant Hadoop technologies, as well as experience in troubleshooting and optimization. This engineer will be responsible for monitoring the performance of our Hadoop cluster, making adjustments and ensuring the environment is running efficiently. This individual must be able to identify possible inefficiencies and areas for improvement, providing solutions and suggestions for achieving better performance and scalability. This engineer should also be knowledgeable about data processing and analysis, as well as provide the necessary tech support...

    $43 / hr (Avg Bid)
    14 bids

    Need to fix a missing files and blocks issue on an AWS EMR cluster that has corrupt files/jars/blocks.

    $26 / hr (Avg Bid)
    11 bids