Hortonworks Jobs

7 jobs were found based on your criteria.

Fixed-Price - Entry Level ($) - Est. Budget: $50 - Posted
Hello, I have to do some tasks on big data with the Hortonworks Hadoop distribution. The first task will be to download data in JSON format and insert it into Hortonworks Hadoop MongoDB or HBase databases. My idea is to use Apache NiFi.
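The posting suggests Apache NiFi for this pipeline; as a rough code-level sketch of the same idea (fetch JSON over HTTP, write rows into HBase), here is a minimal Python alternative. The source URL, table name, column family, and the happybase/Thrift setup are all assumptions for illustration, not details from the posting:

    import json
    import requests   # HTTP client for downloading the JSON feed
    import happybase  # HBase client; requires a running HBase Thrift server

    # Hypothetical source URL and table name -- placeholders, not from the posting.
    SOURCE_URL = "https://example.com/data.json"
    connection = happybase.Connection("localhost")  # assumes Thrift on the default port
    table = connection.table("ingested_data")       # table with column family 'cf' must exist

    # Assumes the feed returns a JSON array of flat objects.
    records = requests.get(SOURCE_URL, timeout=30).json()

    # Write each record as one HBase row; each JSON field becomes a cell in 'cf'.
    for i, record in enumerate(records):
        table.put(str(i).encode(), {
            f"cf:{key}".encode(): str(value).encode()
            for key, value in record.items()
        })

    connection.close()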
Fixed-Price - Intermediate ($$) - Est. Budget: $40 - Posted
References (to supplement your own research):
  • https://www.infoq.com/articles/apache-spark-introduction
  • http://hortonworks.com/apache/spark/
  • https://databricks.com/spark/about
  • https://www.tutorialspoint.com/apache_spark/apache_spark_introduction.htm
Guidelines: 1.
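The references above are all introductory Apache Spark material. For context, a minimal PySpark sketch of the kind of program they introduce (assuming a local Spark installation; the input file name is a placeholder):

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import explode, split

    # Start a local Spark session -- the app name is arbitrary.
    spark = SparkSession.builder.appName("spark-intro-sketch").getOrCreate()

    # Hypothetical input file; replace with real data.
    lines = spark.read.text("sample.txt")

    # Classic introductory example: count word occurrences across all lines.
    words = lines.select(explode(split(lines.value, r"\s+")).alias("word"))
    words.groupBy("word").count().show()

    spark.stop()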
Hourly - Entry Level ($) - Est. Time: More than 6 months, Less than 10 hrs/week - Posted
Looking for a Hadoop Hortonworks administrator for training, support, and enhancements.
Skills: Hadoop
Hourly - Expert ($$$) - Est. Time: 1 to 3 months, 10-30 hrs/week - Posted
Looking for a Hortonworks expert architect. My goal is to design a new data science platform. The purpose of this platform is 1) for a team of data scientists to train predictive models and 2) for a tech operations team to productionize and run these predictive models in a recurring manner. ... The team has strong expertise and best practices with data science, data architecture, data governance, and data management. Our need is for a Hortonworks expert. This Hortonworks expert would be an advisor to the existing team. ... This expert is needed to provide a “generic” Hortonworks architecture and then work with the team to adapt this architecture to our specific needs.
Skills: Apache Hive Atlas Bash shell scripting Hadoop HBase Python
Fixed-Price - Intermediate ($$) - Est. Budget: $20 - Posted
I am setting up Hortonworks for the first time and everything is working fine, but there seems to be a problem with the user rights: I cannot get a Pig job to run from the Ambari view, while from the CLI everything works fine.

    File does not exist: /user/kwhitson/pig/jobs/g_10-09-2016-19-34-48/stdout
        at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:71)
        at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1828)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1799)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1712)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:672)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2206)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2202)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2200)
Skills: Data Science
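The error in this last posting is consistent with a common Ambari-view setup issue: the view submits jobs as the logged-in user, who needs a home directory on HDFS with the right ownership. As a hedged sketch (a likely cause, not a confirmed diagnosis for this posting), the usual fix is to create that directory as the HDFS superuser; here it is wrapped in Python, with the username taken from the path in the stack trace:

    import subprocess

    # The username comes from the path in the stack trace; the fix itself is a
    # guess at the usual cause (a missing or wrongly owned HDFS home directory).
    user = "kwhitson"
    home = f"/user/{user}"

    # Run as a user with HDFS superuser rights (e.g. 'hdfs' on HDP sandboxes).
    for cmd in (
        ["hdfs", "dfs", "-mkdir", "-p", home],                   # create the home directory
        ["hdfs", "dfs", "-chown", "-R", f"{user}:hdfs", home],   # hand ownership to the user
    ):
        subprocess.run(cmd, check=True)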