
ApsaraDB for HBase: Access ApsaraDB for HBase HDFS

Last Updated: Oct 30, 2023

In some scenarios, such as using bulk loads to import data into ApsaraDB for HBase, you must enable the HDFS ports of your ApsaraDB for HBase cluster.

  • Note: If you enable the HDFS ports, Alibaba Cloud is not responsible for any data loss in HDFS caused by user mistakes. Make sure that you are familiar with HDFS operations before you proceed.

  • Contact the ApsaraDB for HBase Q&A DingTalk group to activate HDFS. Caution: After HDFS is activated, your cluster is exposed to malicious attacks, which may cause performance instability or even data loss. To ensure data security, you cannot activate HDFS on your own. You must contact the ApsaraDB for HBase Q&A DingTalk group to activate HDFS, and we will disable it after you complete your tasks.

  • Test the HDFS ports by using an HDFS client to connect to the HDFS cluster in ApsaraDB for HBase.

  • Create a Hadoop client configuration folder named conf, as shown in the sketch below. If the folder already exists, you do not need to create a new one.
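
A minimal sketch on a Linux client, assuming the folder is created in your working directory:

# Create the client configuration folder if it does not already exist.
mkdir -p conf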

  • Add the following HDFS configuration files to the folder. For more information about how to set the hosts {hbase-header-1-host} and {hbase-header-2-host}, consult the ApsaraDB for HBase Q&A DingTalk group.

    • core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hbase-cluster</value>
  </property>
</configuration>

    • hdfs-site.xml

<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>hbase-cluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.hbase-cluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled.hbase-cluster</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.hbase-cluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.hbase-cluster.nn1</name>
    <value>{hbase-header-1-host}:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.hbase-cluster.nn2</name>
    <value>{hbase-header-2-host}:8020</value>
  </property>
</configuration>

  • Add the conf folder to the classpath of the Hadoop client, as shown in the sketch below.
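
One common way to do this is to point HADOOP_CONF_DIR at the folder; a minimal sketch, assuming conf is in the current directory (hdfs getconf then confirms that the client picks up the HA settings):

export HADOOP_CONF_DIR="$(pwd)/conf"
# Should print the two NameNode hosts configured in hdfs-site.xml.
hdfs getconf -namenodes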

  • Test the HDFS ports by writing data to HDFS and reading it back.

echo "hdfs port test"  >/tmp/test
hadoop dfs -put /tmp/test  /
hadoop dfs -cat /test
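
After the ports pass the test, you can run the bulk load that motivated enabling them. A hypothetical sketch, assuming HFiles have already been generated under /bulkload/hfiles for a table named mytable (both names are placeholders; in HBase 2.x the class lives in org.apache.hadoop.hbase.tool instead of org.apache.hadoop.hbase.mapreduce):

# Hand the prepared HFiles over to HBase (complete the bulk load).
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /bulkload/hfiles mytable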