How can I solve this problem when submitting a mapreduce job from the Oozie Editor in Hue?
JA017: Could not lookup launched hadoop Job ID [job_local152843681_0009] which was associated with action [0000009-150711083342968-oozie-root-W@mapreduce-f660]. Failing this action!
Update:
Here is the log file:
2015-07-15 04:54:40,304 INFO ActionStartXCommand:520 - SERVER[myserver] USER[root] GROUP[-] TOKEN[] APP[My_Workflow] JOB[0000010-150711083342968-oozie-root-W] ACTION[0000010-150711083342968-oozie-root-W@:start:] Start action [0000010-150711083342968-oozie-root-W@:start:] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10]
2015-07-15 04:54:40,321 INFO ActionStartXCommand:520 - SERVER[myserver] USER[root] GROUP[-] TOKEN[] APP[My_Workflow] JOB[0000010-150711083342968-oozie-root-W] ACTION[0000010-150711083342968-oozie-root-W@:start:] [***0000010-150711083342968-oozie-root-W@:start:***]Action status=DONE
2015-07-15 04:54:40,325 INFO ActionStartXCommand:520 - SERVER[myserver] USER[root] GROUP[-] TOKEN[] APP[My_Workflow] JOB[0000010-150711083342968-oozie-root-W] ACTION[0000010-150711083342968-oozie-root-W@:start:] [***0000010-150711083342968-oozie-root-W@:start:***]Action updated in DB!
2015-07-15 04:54:40,501 INFO WorkflowNotificationXCommand:520 - SERVER[myserver] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000010-150711083342968-oozie-root-W] ACTION[0000010-150711083342968-oozie-root-W@:start:] No Notification URL is defined. Therefore nothing to notify for job 0000010-150711083342968-oozie-root-W@:start:
2015-07-15 04:54:40,502 INFO WorkflowNotificationXCommand:520 - SERVER[myserver] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000010-150711083342968-oozie-root-W] ACTION[] No Notification URL is defined. Therefore nothing to notify for job 0000010-150711083342968-oozie-root-W
2015-07-15 04:54:40,713 INFO ActionStartXCommand:520 - SERVER[myserver] USER[root] GROUP[-] TOKEN[] APP[My_Workflow] JOB[0000010-150711083342968-oozie-root-W] ACTION[0000010-150711083342968-oozie-root-W@mapreduce-52d9] Start action [0000010-150711083342968-oozie-root-W@mapreduce-52d9] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10]
2015-07-15 04:54:43,216 WARN MapReduceActionExecutor:523 - SERVER[myserver] USER[root] GROUP[-] TOKEN[] APP[My_Workflow] JOB[0000010-150711083342968-oozie-root-W] ACTION[0000010-150711083342968-oozie-root-W@mapreduce-52d9] Exception in check(). Message[JA017: Could not lookup launched hadoop Job ID [job_local1099179300_0010] which was associated with action [0000010-150711083342968-oozie-root-W@mapreduce-52d9]. Failing this action!]
org.apache.oozie.action.ActionExecutorException: JA017: Could not lookup launched hadoop Job ID [job_local1099179300_0010] which was associated with action [0000010-150711083342968-oozie-root-W@mapreduce-52d9]. Failing this action!
at org.apache.oozie.action.hadoop.JavaActionExecutor.check(JavaActionExecutor.java:1359)
at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1288)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:250)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:64)
at org.apache.oozie.command.XCommand.call(XCommand.java:286)
at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:321)
at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:250)
at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:175)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2015-07-15 04:54:43,230 WARN ActionStartXCommand:523 - SERVER[myserver] USER[root] GROUP[-] TOKEN[] APP[My_Workflow] JOB[0000010-150711083342968-oozie-root-W] ACTION[0000010-150711083342968-oozie-root-W@mapreduce-52d9] Error starting action [mapreduce-52d9]. ErrorType [FAILED], ErrorCode [JA017], Message [JA017: Could not lookup launched hadoop Job ID [job_local1099179300_0010] which was associated with action [0000010-150711083342968-oozie-root-W@mapreduce-52d9]. Failing this action!]
org.apache.oozie.action.ActionExecutorException: JA017: Could not lookup launched hadoop Job ID [job_local1099179300_0010] which was associated with action [0000010-150711083342968-oozie-root-W@mapreduce-52d9]. Failing this action!
at org.apache.oozie.action.hadoop.JavaActionExecutor.check(JavaActionExecutor.java:1359)
at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1288)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:250)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:64)
at org.apache.oozie.command.XCommand.call(XCommand.java:286)
at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:321)
at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:250)
at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:175)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2015-07-15 04:54:43,234 WARN ActionStartXCommand:523 - SERVER[myserver] USER[root] GROUP[-] TOKEN[] APP[My_Workflow] JOB[0000010-150711083342968-oozie-root-W] ACTION[0000010-150711083342968-oozie-root-W@mapreduce-52d9] Failing Job due to failed action [mapreduce-52d9]
2015-07-15 04:54:43,247 WARN LiteWorkflowInstance:523 - SERVER[myserver] USER[root] GROUP[-] TOKEN[] APP[My_Workflow] JOB[0000010-150711083342968-oozie-root-W] ACTION[0000010-150711083342968-oozie-root-W@mapreduce-52d9] Workflow Failed. Failing node [mapreduce-52d9]
2015-07-15 04:54:43,548 INFO WorkflowNotificationXCommand:520 - SERVER[myserver] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000010-150711083342968-oozie-root-W] ACTION[0000010-150711083342968-oozie-root-W@mapreduce-52d9] No Notification URL is defined. Therefore nothing to notify for job 0000010-150711083342968-oozie-root-W@mapreduce-52d9
2015-07-15 04:54:43,615 INFO KillXCommand:520 - SERVER[myserver] USER[root] GROUP[-] TOKEN[] APP[My_Workflow] JOB[0000010-150711083342968-oozie-root-W] ACTION[] STARTED WorkflowKillXCommand for jobId=0000010-150711083342968-oozie-root-W
2015-07-15 04:54:43,758 INFO CallbackServlet:520 - SERVER[myserver] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000010-150711083342968-oozie-root-W] ACTION[0000010-150711083342968-oozie-root-W@mapreduce-52d9] callback for action [0000010-150711083342968-oozie-root-W@mapreduce-52d9]
2015-07-15 04:54:43,782 INFO KillXCommand:520 - SERVER[myserver] USER[root] GROUP[-] TOKEN[] APP[My_Workflow] JOB[0000010-150711083342968-oozie-root-W] ACTION[] ENDED WorkflowKillXCommand for jobId=0000010-150711083342968-oozie-root-W
2015-07-15 04:54:43,791 INFO WorkflowNotificationXCommand:520 - SERVER[myserver] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000010-150711083342968-oozie-root-W] ACTION[] No Notification URL is defined. Therefore nothing to notify for job 0000010-150711083342968-oozie-root-W
2015-07-15 04:54:43,789 ERROR CompletedActionXCommand:517 - SERVER[myserver] USER[-] GROUP[-] TOKEN[] APP[-] JOB[0000010-150711083342968-oozie-root-W] ACTION[0000010-150711083342968-oozie-root-W@mapreduce-52d9] XException,
org.apache.oozie.command.CommandException: E0800: Action it is not running its in [FAILED] state, action [0000010-150711083342968-oozie-root-W@mapreduce-52d9]
at org.apache.oozie.command.wf.CompletedActionXCommand.eagerVerifyPrecondition(CompletedActionXCommand.java:92)
at org.apache.oozie.command.XCommand.call(XCommand.java:257)
at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:175)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2015-07-15 04:54:43,803 WARN CallableQueueService$CallableWrapper:523 - SERVER[myserver] USER[-] GROUP[-] TOKEN[] APP[-] JOB[0000010-150711083342968-oozie-root-W] ACTION[0000010-150711083342968-oozie-root-W@mapreduce-52d9] exception callable [callback], E0800: Action it is not running its in [FAILED] state, action [0000010-150711083342968-oozie-root-W@mapreduce-52d9]
org.apache.oozie.command.CommandException: E0800: Action it is not running its in [FAILED] state, action [0000010-150711083342968-oozie-root-W@mapreduce-52d9]
at org.apache.oozie.command.wf.CompletedActionXCommand.eagerVerifyPrecondition(CompletedActionXCommand.java:92)
at org.apache.oozie.command.XCommand.call(XCommand.java:257)
at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:175)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Best answer:
I just solved exactly the same problem. By default, Oozie runs jobs as the "mapreduce" user. In my case, Oozie could not reach the HDFS directory /user/history/done_intermediate/hdfs and failed.
So I would bet you are logged in under a different account. Switch to the mapred login, or, if you want to invoke Oozie from the command line, add the line
user.name=mapred
to job.properties.
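As a minimal sketch, a job.properties for such a command-line submission could look like the following. The host names, ports, and application path are placeholders, not values taken from the question, and whether Oozie honors user.name from the properties file depends on your Oozie version and security settings:

# Sketch only -- host names, ports, and paths are placeholders for your cluster.
nameNode=hdfs://myserver:8020
jobTracker=myserver:8032
queueName=default
oozie.wf.application.path=${nameNode}/user/mapred/my_workflow
# Run the workflow as the mapred user, as suggested above.
user.name=mapred

You would then submit the workflow with something like:

oozie job -oozie http://myserver:11000/oozie -config job.properties -run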
Regarding hadoop - JA017: Could not lookup launched hadoop Job ID, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/31379961/