I'm learning Hadoop, and I'm trying to set up the single-node test described at http://hadoop.apache.org/common/docs/current/single_node_setup.html
I have configured ssh (I can log in without a password).
My server is on our intranet, behind a proxy.
When I try to run
bin/hadoop namenode -format
I get the following java.net.UnknownHostException:
$ bin/hadoop namenode -format
11/06/10 15:36:47 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = java.net.UnknownHostException: srv-clc-04.univ-nantes.prive3: srv-clc-04.univ-nantes.prive3
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.20.203.0
STARTUP_MSG: build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May 4 07:57:50 PDT 2011
************************************************************/
Re-format filesystem in /home/lindenb/tmp/HADOOP/dfs/name ? (Y or N) Y
11/06/10 15:36:50 INFO util.GSet: VM type = 64-bit
11/06/10 15:36:50 INFO util.GSet: 2% max memory = 19.1675 MB
11/06/10 15:36:50 INFO util.GSet: capacity = 2^21 = 2097152 entries
11/06/10 15:36:50 INFO util.GSet: recommended=2097152, actual=2097152
11/06/10 15:36:50 INFO namenode.FSNamesystem: fsOwner=lindenb
11/06/10 15:36:50 INFO namenode.FSNamesystem: supergroup=supergroup
11/06/10 15:36:50 INFO namenode.FSNamesystem: isPermissionEnabled=true
11/06/10 15:36:50 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
11/06/10 15:36:50 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
11/06/10 15:36:50 INFO namenode.NameNode: Caching file names occuring more than 10 times
11/06/10 15:36:50 INFO common.Storage: Image file of size 113 saved in 0 seconds.
11/06/10 15:36:50 INFO common.Storage: Storage directory /home/lindenb/tmp/HADOOP/dfs/name has been successfully formatted.
11/06/10 15:36:50 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at java.net.UnknownHostException: srv-clc-04.univ-nantes.prive3: srv-clc-04.univ-nantes.prive3
************************************************************/
After that, I started Hadoop with
./bin/start-all.sh
but when I tried to copy a local file, I got another exception:
bin/hadoop fs -copyFromLocal ~/file.txt file.txt
DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/lindenb/file.txt could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1417)
How can I fix this problem?
Thanks
The UnknownHostException is thrown when Hadoop tries to resolve the DNS name (srv-clc-04.univ-nantes.prive3) to an IP address and the lookup fails.
Look for that hostname in the configuration files and replace it with "localhost" (or update your DNS so that the name resolves to an IP address).
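For example (a sketch assuming the 0.20 single-node layout from the tutorial; the port follows its pseudo-distributed example, so adjust to your setup), you could locate the name and point fs.default.name at localhost in conf/core-site.xml:
$ grep -r "srv-clc-04.univ-nantes.prive3" conf/
# then, in conf/core-site.xml, use localhost in the filesystem URI, e.g.:
#   <property>
#     <name>fs.default.name</name>
#     <value>hdfs://localhost:9000</value>
#   </property>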
First get the machine's hostname; it can be obtained by running the
$ hostname
command. Then add the line
127.0.0.1 localhost hostname
(substituting the actual hostname) to the /etc/hosts file. That should fix the problem.
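For instance, with the hostname from the log above (a sketch; writing to /etc/hosts requires root):
$ hostname
srv-clc-04.univ-nantes.prive3
# append the mapping; sudo tee -a writes to the file as root
$ echo "127.0.0.1   localhost   srv-clc-04.univ-nantes.prive3" | sudo tee -a /etc/hosts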
The tmp directory you created probably has an ownership problem; that is why Hadoop cannot write to the tmp directory. To fix it, run the following command:
sudo chown hduser:hadoop /app/<your hadoop tmp dir>
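Afterwards you can verify the change (hduser and hadoop are just the user and group from the example command above; substitute your own):
$ ls -ld /app/<your hadoop tmp dir>
# the owner and group columns should now read hduser hadoop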
Appending the following to /etc/hosts may help:
127.0.0.1 localhost yourhostname
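As a quick sanity check (yourhostname is a placeholder for the real name), confirm that the entry resolves:
$ getent hosts yourhostname
# should print 127.0.0.1 followed by the name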