Sqoop2 job configuration:
Job with id 1 and name Sqoopy (Enabled: true, Created by root at 5/7/15 5:26 PM, Updated by root at 5/7/15 5:26 PM)
Using link id 2 and Connector id 1
From database configuration
Schema name: rural_biz
Table name: RefundOrder
Table SQL statement:
Table column names:
Partition column name:
Null value allowed for the partition column:
Boundary query:
Throttling resources
Extractors: 2
Loaders: 2
ToJob configuration
Override null value:
Null value:
Output format: TEXT_FILE
Compression format: NONE
Custom compression format:
Output directory: /
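For reference, a job like the one listed above can be inspected and re-run from the Sqoop shell. This is a sketch assuming a Sqoop 1.99.x client; the job id 1 matches the listing above:

```
show job --jid 1      # print the configuration shown above and verify it
update job --jid 1    # interactively change settings, e.g. lower Extractors/Loaders
start job --jid 1     # submit the job again
status job --jid 1    # poll the running job's progress
```

Lowering Extractors/Loaders reduces the number of concurrent map/reduce tasks, which helps when the cluster has little memory available to YARN.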
The job log on the worker node shows the failure:
attempt_1430983821763_0004_r_000000_0 TaskAttempt Transitioned from RUNNING to FAIL_CONTAINER_CLEANUP
Diagnostics report from attempt_1430983821763_0004_r_000001_0: AttemptID:attempt_1430983821763_0004_r_000001_0 Timed out after 600 secs
[AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1430983821763_0004_r_000001_0 TaskAttempt Transitioned from RUNNING to FAIL_CONTAINER_CLEANUP
The likely cause is insufficient compute resources: the physical machines have plenty, but the share Hadoop is allowed to use is limited. While the job is running, check the resource usage on the monitoring web page.
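The "Timed out after 600 secs" in the log corresponds to the default `mapreduce.task.timeout` of 600000 ms. As a hedged sketch (these are standard Hadoop 2.x property names, but the values are illustrative, not tuned for this cluster), the timeout and the memory YARN may hand out per node can be raised in `mapred-site.xml` and `yarn-site.xml`:

```xml
<!-- mapred-site.xml -->
<property>
  <name>mapreduce.task.timeout</name>
  <!-- 30 min; the default 600000 ms is the "600 secs" seen in the log -->
  <value>1800000</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <!-- illustrative; size to what each node can actually spare -->
  <value>2048</value>
</property>

<!-- yarn-site.xml -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <!-- total memory YARN may allocate on each NodeManager -->
  <value>8192</value>
</property>
```

After changing these, restart the NodeManagers, re-run the job, and compare the container memory usage on the ResourceManager web UI against these limits.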