Run Apache Hive Metastore inside docker container
- Quick-start for Hive Metastore
- Pull the image from DockerHub: https://hub.docker.com/r/apache/hive/tags.
  Here are the latest images:
  - standalone-metastore-4.0.0

```shell
docker pull apache/hive:standalone-metastore-4.0.0
export HIVE_VERSION=4.0.0
docker run -d -p 9083:9083 --name metastore-standalone apache/hive:standalone-metastore-${HIVE_VERSION}
```

Apache Hive Metastore relies on Hadoop and some other components to facilitate managing the metadata of large datasets.
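The container can take a few seconds before the Thrift endpoint on port 9083 starts accepting connections. A small poll loop can confirm it is up; this is a sketch using bash's /dev/tcp feature, and the helper name `wait_for_port` is our own:

```shell
# Poll until a TCP port accepts connections, or give up after N tries.
# Helper name and default timeout are our own choices, not part of the image.
wait_for_port() {
  local host=$1 port=$2 tries=${3:-30} i
  for i in $(seq 1 "$tries"); do
    (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null && return 0
    sleep 1
  done
  return 1
}

# After starting the container:
# wait_for_port localhost 9083 60 && echo "Metastore is listening on 9083"
```

This only checks TCP reachability, not Metastore health; for deeper checks, inspect `docker logs metastore-standalone`.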
The build.sh script provides ways to build the image against specified versions of its dependencies, as well as to build from source:

```shell
mvn clean package -DskipTests -Pdocker
```

There are some arguments to specify the component versions:
- -hadoop <hadoop version>
- -hive <hive version>

If a version is not provided, the script reads it from the current pom.xml: project.version and hadoop.version for Hive and Hadoop respectively.
For example, the following command uses Hive 4.0.0 and the Hadoop version given by hadoop.version to build the image:

```shell
./build.sh -hive 4.0.0
```

If the command does not specify the Hive version, it uses the local hive-standalone-metastore-${project.version}-bin.tar.gz (triggering a build if it doesn't exist), together with Hadoop 3.1.0, to build the image:

```shell
./build.sh -hadoop 3.1.0
```

After building successfully, we get a Docker image named apache/hive, tagged with the standalone-metastore prefix and the provided Hive version.
Before going further, we should define the environment variable HIVE_VERSION first.
For example, if -hive 4.0.0 was specified to build the image:

```shell
export HIVE_VERSION=4.0.0
```

Or, assuming that you're relying on the current project.version from pom.xml:

```shell
export HIVE_VERSION=$(mvn -f pom.xml -q help:evaluate -Dexpression=project.version -DforceStdout)
```

For a quick start, launch the Metastore with Derby:
```shell
docker run -d -p 9083:9083 --name metastore-standalone apache/hive:standalone-metastore-${HIVE_VERSION}
```

Everything will be lost when the service goes down. In order to save the Hive table's schema and data, start the container with an external Postgres database and a volume to keep them:
```shell
docker run -d -p 9083:9083 --env DB_DRIVER=postgres \
  --env SERVICE_OPTS="-Djavax.jdo.option.ConnectionDriverName=org.postgresql.Driver -Djavax.jdo.option.ConnectionURL=jdbc:postgresql://postgres:5432/metastore_db -Djavax.jdo.option.ConnectionUserName=hive -Djavax.jdo.option.ConnectionPassword=password" \
  --mount source=warehouse,target=/opt/hive/data/warehouse \
  --mount type=bind,source=`mvn help:evaluate -Dexpression=settings.localRepository -q -DforceStdout`/org/postgresql/postgresql/42.7.3/postgresql-42.7.3.jar,target=/opt/hive/lib/postgres.jar \
  --name metastore-standalone apache/hive:standalone-metastore-${HIVE_VERSION}
```

If you want to use your own hdfs-site.xml for the service, you can provide the environment variable HIVE_CUSTOM_CONF_DIR to the command. For instance, put the custom configuration file under the directory /opt/hive/conf, then run:
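Note that the JDBC URL in SERVICE_OPTS points at a host named postgres, so a Postgres instance must be reachable under that name from the Metastore container. As one possible setup (a sketch; the service layout is our own, though the database name, user, and password match the SERVICE_OPTS values above), a companion Postgres could be declared like this:

```yaml
# Sketch of a companion Postgres service matching the JDBC settings above.
# The Metastore container must share a network with it so the hostname
# "postgres" resolves (e.g. run both on one user-defined Docker network).
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: metastore_db
      POSTGRES_USER: hive
      POSTGRES_PASSWORD: password
    ports:
      - "5432:5432"
```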
```shell
docker run -d -p 9083:9083 --env DB_DRIVER=postgres \
  -v /opt/hive/conf:/hive_custom_conf --env HIVE_CUSTOM_CONF_DIR=/hive_custom_conf \
  --mount type=bind,source=`mvn help:evaluate -Dexpression=settings.localRepository -q -DforceStdout`/org/postgresql/postgresql/42.7.3/postgresql-42.7.3.jar,target=/opt/hive/lib/postgres.jar \
  --name metastore apache/hive:standalone-metastore-${HIVE_VERSION}
```

NOTE:
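For illustration, a minimal custom file placed under /opt/hive/conf might look like the following (an hdfs-site.xml; the property and value are only an example, not something the image requires):

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Example only: lower the replication factor for a small test cluster -->
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```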
- For Hive releases before 4.0, if you want to upgrade the existing external Metastore schema to the target version, add "--env SCHEMA_COMMAND=upgradeSchema" to the command.
- If full ACID support (compaction) is needed, use the Hive docker image to bring up the container.
To spin up the Metastore with a remote DB, there is a docker-compose.yml placed under packaging/src/docker for this purpose.
Specify the POSTGRES_LOCAL_PATH first:

```shell
export POSTGRES_LOCAL_PATH=your_local_path_to_postgres_driver
```

Example:

```shell
mvn dependency:copy -Dartifact="org.postgresql:postgresql:42.7.3" && \
export POSTGRES_LOCAL_PATH=`mvn help:evaluate -Dexpression=settings.localRepository -q -DforceStdout`/org/postgresql/postgresql/42.7.3/postgresql-42.7.3.jar
```

If you don't have Maven installed or have trouble resolving the Postgres driver, you can always download this jar yourself and change POSTGRES_LOCAL_PATH to the path of the downloaded jar.
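Before running docker-compose, it can help to confirm the variable actually points at a jar. This is a small sketch; the helper name is our own and nothing here is Hive-specific:

```shell
# Fail fast if POSTGRES_LOCAL_PATH does not point at an existing file.
# The helper name check_driver is our own choice.
check_driver() {
  if [ -f "${POSTGRES_LOCAL_PATH:-}" ]; then
    echo "Found Postgres driver at ${POSTGRES_LOCAL_PATH}"
  else
    echo "Postgres driver not found; set POSTGRES_LOCAL_PATH first" >&2
    return 1
  fi
}
```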
- Download the AWS SDK bundle and place it under the jars/ directory.
  Disclaimer: Hadoop 3.4.1 requires AWS SDK v2.

```shell
wget https://repo1.maven.org/maven2/software/amazon/awssdk/bundle/2.42.25/bundle-2.42.25.jar -P jars/
```

- Set the following environment variables:
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- DEFAULT_FS
- HIVE_WAREHOUSE_PATH
- S3_ENDPOINT_URL
Then:

```shell
DEFAULT_FS="s3a://dw-team-bucket" \
HIVE_WAREHOUSE_PATH="/data/warehouse/tablespace/managed/hive" \
S3_ENDPOINT_URL="s3.us-west-2.amazonaws.com" \
docker-compose up
```

The Metastore and Postgres services will be started as a consequence.
Volumes:
- hive_db: persists the metadata of Hive tables inside the Postgres container.
- warehouse: stores the tables' files inside the HiveServer2 container.

To stop/remove them all:

```shell
docker compose down
```