Merged
2 changes: 1 addition & 1 deletion docs/src/main/sphinx/connector/redshift.md
@@ -260,5 +260,5 @@ deactivate the parallel read during a client session for a specific query, and
 potentially re-activate it again afterward.
 
 Additionally, define further required [S3 configuration such as IAM key, role,
-or region](/object-storage/file-system-s3), except `fs.native-s3.enabled`,
+or region](/object-storage/file-system-s3), except `fs.s3.enabled`,
6 changes: 3 additions & 3 deletions docs/src/main/sphinx/object-storage.md
@@ -35,13 +35,13 @@ system support.
 
 * - Property
   - Description
-* - `fs.native-azure.enabled`
+* - `fs.azure.enabled`
  - Activate the [native implementation for Azure Storage
    support](/object-storage/file-system-azure). Defaults to `false`.
-* - `fs.native-gcs.enabled`
+* - `fs.gcs.enabled`
  - Activate the [native implementation for Google Cloud Storage
    support](/object-storage/file-system-gcs). Defaults to `false`.
-* - `fs.native-s3.enabled`
+* - `fs.s3.enabled`
  - Activate the [native implementation for S3 storage
    support](/object-storage/file-system-s3). Defaults to `false`.
 * - `fs.hadoop.enabled`
6 changes: 3 additions & 3 deletions docs/src/main/sphinx/object-storage/file-system-azure.md
@@ -4,7 +4,7 @@ Trino includes a native implementation to access [Azure Data Lake Storage
 Gen2](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-overview#about-azure-data-lake-storage-gen2)
 with a catalog using the Delta Lake, Hive, Hudi, or Iceberg connectors.
 
-Enable the native implementation with `fs.native-azure.enabled=true` in your
+Enable the native implementation with `fs.azure.enabled=true` in your
 catalog properties file. Additionally, the Azure storage account must have
 hierarchical namespace enabled.
 
@@ -19,7 +19,7 @@ system support:
 
 * - Property
   - Description
-* - `fs.native-azure.enabled`
+* - `fs.azure.enabled`
  - Activate the native implementation for Azure Storage support. Defaults to
    `false`. Set to `true` to use Azure Storage and enable all other properties.
 * - `azure.auth-type`
@@ -156,7 +156,7 @@ native Azure file system implementation.
 To migrate a catalog to use the native file system implementation for Azure,
 make the following edits to your catalog configuration:
 
-1. Add the `fs.native-azure.enabled=true` catalog configuration property.
+1. Add the `fs.azure.enabled=true` catalog configuration property.
 2. If your catalog enabled `fs.hadoop.enabled` only for legacy Azure Storage
    access, remove that property.
 3. Configure the `azure.auth-type` catalog configuration property.
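Applied together, the Azure migration steps produce a catalog file along these lines. This is a hedged sketch: the connector choice, metastore URI, and key value are placeholders, and only the `fs.azure.enabled` and `azure.*` property names come from this change.

```properties
connector.name=delta_lake
hive.metastore.uri=thrift://example-metastore:9083
# Renamed from fs.native-azure.enabled; the old name keeps working as a legacy alias
fs.azure.enabled=true
azure.auth-type=ACCESS_KEY
azure.access-key=examplekey
```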
6 changes: 3 additions & 3 deletions docs/src/main/sphinx/object-storage/file-system-gcs.md
@@ -4,7 +4,7 @@ Trino includes a native implementation to access [Google Cloud Storage
 (GCS)](https://cloud.google.com/storage/) with a catalog using the Delta Lake,
 Hive, Hudi, or Iceberg connectors.
 
-Enable the native implementation with `fs.native-gcs.enabled=true` in your
+Enable the native implementation with `fs.gcs.enabled=true` in your
 catalog properties file.
 
 ## General configuration
@@ -18,7 +18,7 @@ Storage file system support:
 
 * - Property
   - Description
-* - `fs.native-gcs.enabled`
+* - `fs.gcs.enabled`
  - Activate the native implementation for Google Cloud Storage support.
    Defaults to `false`. Set to `true` to use Google Cloud Storage and enable
    all other properties.
@@ -99,7 +99,7 @@ Google Cloud Storage file system implementation.
 To migrate a catalog to use the native file system implementation for Google
 Cloud Storage, make the following edits to your catalog configuration:
 
-1. Add the `fs.native-gcs.enabled=true` catalog configuration property.
+1. Add the `fs.gcs.enabled=true` catalog configuration property.
 2. If your catalog enabled `fs.hadoop.enabled` only for legacy Google Cloud
    Storage access, remove that property.
 3. Refer to the following table to rename your existing legacy catalog
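For GCS, the migrated catalog looks similar; this sketch uses a placeholder connector and key file path, with the `gcs.json-key-file-path` property taken from the metastores example touched elsewhere in this change.

```properties
connector.name=iceberg
# Renamed from fs.native-gcs.enabled; the old name keeps working as a legacy alias
fs.gcs.enabled=true
gcs.json-key-file-path=/path/to/gcs_keyfile.json
```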
4 changes: 2 additions & 2 deletions docs/src/main/sphinx/object-storage/file-system-local.md
@@ -19,7 +19,7 @@ support:
 
 * - Property
   - Description
-* - `fs.native-local.enabled`
+* - `fs.local.enabled`
  - Activate the support for local file system access. Defaults to `false`. Set
    to `true` to use local file system and enable all other properties.
 * - `local.location`
@@ -36,7 +36,7 @@ The coordinator and all workers nodes have an external storage mounted at
 ```properties
 connector.name=hive
 ...
-fs.native-local.enabled=true
+fs.local.enabled=true
 local.location=local:///storage/datalake
 ```
6 changes: 3 additions & 3 deletions docs/src/main/sphinx/object-storage/file-system-s3.md
@@ -7,7 +7,7 @@ to support S3-compatible storage systems, only AWS S3 and MinIO are tested for
 compatibility. For other storage systems, perform your own testing and consult
 your vendor for more information.
 
-Enable the native implementation with `fs.native-s3.enabled=true` in your
+Enable the native implementation with `fs.s3.enabled=true` in your
 catalog properties file.
 
 ## General configuration
@@ -21,7 +21,7 @@ support:
 
 * - Property
   - Description
-* - `fs.native-s3.enabled`
+* - `fs.s3.enabled`
  - Activate the native implementation for S3 storage support. Defaults to
    `false`. Set to `true` to use S3 and enable all other properties.
 * - `s3.endpoint`
@@ -298,7 +298,7 @@ system implementation.
 To migrate a catalog to use the native file system implementation for S3, make
 the following edits to your catalog configuration:
 
-1. Add the `fs.native-s3.enabled=true` catalog configuration property.
+1. Add the `fs.s3.enabled=true` catalog configuration property.
 2. If your catalog enabled `fs.hadoop.enabled` only for legacy S3 access,
    remove that property.
 3. Refer to the following table to rename your existing legacy catalog
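An S3 catalog after migration resembles the following sketch. The endpoint, region, and credential values are placeholders; the `s3.*` property names match those used in the test changes later in this PR.

```properties
connector.name=hive
hive.metastore.uri=thrift://example-metastore:9083
# Renamed from fs.native-s3.enabled; the old name keeps working as a legacy alias
fs.s3.enabled=true
s3.endpoint=https://s3.us-east-1.amazonaws.com
s3.region=us-east-1
s3.aws-access-key=EXAMPLEKEY
s3.aws-secret-key=examplesecret
```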
2 changes: 1 addition & 1 deletion docs/src/main/sphinx/object-storage/metastores.md
@@ -571,7 +571,7 @@ iceberg.rest-catalog.uri=https://biglake.googleapis.com/iceberg/v1beta/restcatal
 iceberg.rest-catalog.security=GOOGLE
 iceberg.rest-catalog.google-project-id=example-project-id
 iceberg.rest-catalog.view-endpoints-enabled=false
-fs.native-gcs.enable=true
+fs.gcs.enabled=true
 gcs.json-key-file-path=/path/to/gcs_keyfile.json
 ```
@@ -15,6 +15,7 @@
 
 import io.airlift.configuration.Config;
 import io.airlift.configuration.ConfigDescription;
+import io.airlift.configuration.LegacyConfig;
 
 import static java.lang.System.getenv;
 
@@ -60,7 +61,8 @@ public boolean isNativeAzureEnabled()
         return nativeAzureEnabled;
     }
 
-    @Config("fs.native-azure.enabled")
+    @LegacyConfig("fs.native-azure.enabled")
+    @Config("fs.azure.enabled")
     public FileSystemConfig setNativeAzureEnabled(boolean nativeAzureEnabled)
     {
         this.nativeAzureEnabled = nativeAzureEnabled;
@@ -72,7 +74,8 @@ public boolean isNativeS3Enabled()
         return nativeS3Enabled;
     }
 
-    @Config("fs.native-s3.enabled")
+    @LegacyConfig("fs.native-s3.enabled")
+    @Config("fs.s3.enabled")
     public FileSystemConfig setNativeS3Enabled(boolean nativeS3Enabled)
     {
         this.nativeS3Enabled = nativeS3Enabled;
@@ -84,7 +87,8 @@ public boolean isNativeGcsEnabled()
         return nativeGcsEnabled;
     }
 
-    @Config("fs.native-gcs.enabled")
+    @LegacyConfig("fs.native-gcs.enabled")
+    @Config("fs.gcs.enabled")
     public FileSystemConfig setNativeGcsEnabled(boolean nativeGcsEnabled)
     {
         this.nativeGcsEnabled = nativeGcsEnabled;
@@ -96,7 +100,8 @@ public boolean isNativeLocalEnabled()
         return nativeLocalEnabled;
     }
 
-    @Config("fs.native-local.enabled")
+    @LegacyConfig("fs.native-local.enabled")
+    @Config("fs.local.enabled")
     public FileSystemConfig setNativeLocalEnabled(boolean nativeLocalEnabled)
     {
         this.nativeLocalEnabled = nativeLocalEnabled;
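The `@LegacyConfig` annotations added above keep the old `fs.native-*` names working while the new names become canonical. As a rough illustration only, the fallback behaves like the sketch below; the class and method here are made up, the real resolution lives in airlift's configuration binder, and the both-names-set conflict handling is an assumption rather than a verified detail.

```java
import java.util.Map;

public class LegacyAlias
{
    // Illustrative only: look up a property by its new name, falling back
    // to the legacy name. Rejecting the case where both names are set is an
    // assumption about airlift's behavior, not a verified detail.
    public static String resolve(Map<String, String> props, String name, String legacyName)
    {
        if (props.containsKey(name) && props.containsKey(legacyName)) {
            throw new IllegalArgumentException("Both " + name + " and legacy " + legacyName + " are set");
        }
        if (props.containsKey(name)) {
            return props.get(name);
        }
        return props.get(legacyName);
    }
}
```

Either spelling in a catalog file thus reaches the same setter, which is what the `assertDeprecatedEquivalence` test additions below verify against the real binder.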
@@ -18,6 +18,7 @@
 
 import java.util.Map;
 
+import static io.airlift.configuration.testing.ConfigAssertions.assertDeprecatedEquivalence;
 import static io.airlift.configuration.testing.ConfigAssertions.assertFullMapping;
 import static io.airlift.configuration.testing.ConfigAssertions.assertRecordedDefaults;
 import static io.airlift.configuration.testing.ConfigAssertions.recordDefaults;
@@ -47,10 +48,10 @@ public void testExplicitPropertyMappings()
         Map<String, String> properties = ImmutableMap.<String, String>builder()
                 .put("fs.hadoop.enabled", "true")
                 .put("fs.alluxio.enabled", "true")
-                .put("fs.native-azure.enabled", "true")
-                .put("fs.native-s3.enabled", "true")
-                .put("fs.native-gcs.enabled", "true")
-                .put("fs.native-local.enabled", "true")
+                .put("fs.azure.enabled", "true")
+                .put("fs.s3.enabled", "true")
+                .put("fs.gcs.enabled", "true")
+                .put("fs.local.enabled", "true")
                 .put("fs.cache.enabled", "true")
                 .put("fs.tracking.enabled", Boolean.toString(!RUNNING_IN_CI))
                 .buildOrThrow();
@@ -67,4 +68,60 @@ public void testExplicitPropertyMappings()
 
         assertFullMapping(properties, expected);
     }

+    @Test
+    public void testLegacyPropertyMappings()
+    {
+        assertDeprecatedEquivalence(
+                FileSystemConfig.class,
+                Map.of(
+                        "fs.azure.enabled", "true",
+                        "fs.s3.enabled", "false",
+                        "fs.gcs.enabled", "false",
+                        "fs.local.enabled", "false"),
+                Map.of(
+                        "fs.native-azure.enabled", "true",
+                        "fs.native-s3.enabled", "false",
+                        "fs.native-gcs.enabled", "false",
+                        "fs.native-local.enabled", "false"));
+
+        assertDeprecatedEquivalence(
+                FileSystemConfig.class,
+                Map.of(
+                        "fs.azure.enabled", "false",
+                        "fs.s3.enabled", "true",
+                        "fs.gcs.enabled", "false",
+                        "fs.local.enabled", "false"),
+                Map.of(
+                        "fs.native-azure.enabled", "false",
+                        "fs.native-s3.enabled", "true",
+                        "fs.native-gcs.enabled", "false",
+                        "fs.native-local.enabled", "false"));
+
+        assertDeprecatedEquivalence(
+                FileSystemConfig.class,
+                Map.of(
+                        "fs.azure.enabled", "false",
+                        "fs.s3.enabled", "false",
+                        "fs.gcs.enabled", "true",
+                        "fs.local.enabled", "false"),
+                Map.of(
+                        "fs.native-azure.enabled", "false",
+                        "fs.native-s3.enabled", "false",
+                        "fs.native-gcs.enabled", "true",
+                        "fs.native-local.enabled", "false"));
+
+        assertDeprecatedEquivalence(
+                FileSystemConfig.class,
+                Map.of(
+                        "fs.azure.enabled", "false",
+                        "fs.s3.enabled", "false",
+                        "fs.gcs.enabled", "false",
+                        "fs.local.enabled", "true"),
+                Map.of(
+                        "fs.native-azure.enabled", "false",
+                        "fs.native-s3.enabled", "false",
+                        "fs.native-gcs.enabled", "false",
+                        "fs.native-local.enabled", "true"));
+    }
 }
@@ -119,7 +119,7 @@ public Builder addMetastoreProperties(HiveHadoop hiveHadoop)
     public Builder addS3Properties(Minio minio, String bucketName)
     {
         addDeltaProperties(ImmutableMap.<String, String>builder()
-                .put("fs.native-s3.enabled", "true")
+                .put("fs.s3.enabled", "true")
                 .put("s3.aws-access-key", MINIO_ROOT_USER)
                 .put("s3.aws-secret-key", MINIO_ROOT_PASSWORD)
                 .put("s3.region", MINIO_REGION)
@@ -162,7 +162,7 @@ public DistributedQueryRunner build()
         }
 
         if (deltaProperties.keySet().stream().noneMatch(key ->
-                key.equals("fs.hadoop.enabled") || key.startsWith("fs.native-"))) {
+                key.matches("fs\\.(azure|gcs|s3|local|hadoop)\\.enabled"))) {
             deltaProperties.put("fs.hadoop.enabled", "true");
         }
         queryRunner.createCatalog(DELTA_CATALOG, CONNECTOR_NAME, deltaProperties);
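The default-file-system check changes because `key.startsWith("fs.native-")` no longer matches the renamed properties. A small self-contained sketch of the new predicate follows; the class and method names are made up for illustration, but the regex is the one from the diff.

```java
import java.util.Set;

public class FsPropertyCheck
{
    // Mirrors the updated check in the query-runner builders: a default
    // fs.hadoop.enabled is only added when no file system is explicitly enabled.
    public static boolean hasExplicitFileSystem(Set<String> keys)
    {
        return keys.stream().anyMatch(key ->
                key.matches("fs\\.(azure|gcs|s3|local|hadoop)\\.enabled"));
    }
}
```

Note that the legacy `fs.native-s3.enabled` spelling does not match this regex, which is consistent with the rest of the PR updating every test call site to the new names.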
@@ -112,7 +112,7 @@ protected HiveHadoop createHiveHadoop()
     protected Map<String, String> hiveStorageConfiguration()
     {
         return ImmutableMap.<String, String>builder()
-                .put("fs.native-azure.enabled", "true")
+                .put("fs.azure.enabled", "true")
                 .put("azure.auth-type", "ACCESS_KEY")
                 .put("azure.access-key", accessKey)
                 .buildOrThrow();
@@ -84,7 +84,7 @@ protected QueryRunner createQueryRunner()
         return DeltaLakeQueryRunner.builder()
                 .setDeltaProperties(ImmutableMap.<String, String>builder()
                         .put("hive.metastore.uri", hiveHadoop.getHiveMetastoreEndpoint().toString())
-                        .put("fs.native-azure.enabled", "true")
+                        .put("fs.azure.enabled", "true")
                         .put("azure.auth-type", "ACCESS_KEY")
                         .put("azure.access-key", accessKey)
                         .put("delta.register-table-procedure.enabled", "true")
@@ -141,7 +141,7 @@ protected QueryRunner createQueryRunner()
                 .put("hive.metastore.disable-location-checks", "true")
                 // required by the file metastore
                 .put("fs.hadoop.enabled", "true")
-                .put("fs.native-s3.enabled", "true")
+                .put("fs.s3.enabled", "true")
                 .put("s3.aws-access-key", MINIO_ROOT_USER)
                 .put("s3.aws-secret-key", MINIO_ROOT_PASSWORD)
                 .put("s3.region", MINIO_REGION)
@@ -136,7 +136,7 @@ protected HiveHadoop createHiveHadoop()
     protected Map<String, String> hiveStorageConfiguration()
     {
         return ImmutableMap.<String, String>builder()
-                .put("fs.native-gcs.enabled", "true")
+                .put("fs.gcs.enabled", "true")
                 .put("gcs.json-key", gcpCredentials)
                 .buildOrThrow();
     }
@@ -35,7 +35,7 @@ public class TestDeltaLakeMinioAndHmsConnectorSmokeTest
     protected Map<String, String> hiveStorageConfiguration()
     {
         return ImmutableMap.<String, String>builder()
-                .put("fs.native-s3.enabled", "true")
+                .put("fs.s3.enabled", "true")
                 .put("s3.aws-access-key", MINIO_ROOT_USER)
                 .put("s3.aws-secret-key", MINIO_ROOT_PASSWORD)
                 .put("s3.region", MINIO_REGION)
@@ -49,7 +49,7 @@ protected Map<String, String> hiveStorageConfiguration()
     protected Map<String, String> deltaStorageConfiguration()
     {
         return ImmutableMap.<String, String>builder()
-                .put("fs.native-s3.enabled", "true")
+                .put("fs.s3.enabled", "true")
                 .put("s3.aws-access-key", MINIO_ROOT_USER)
                 .put("s3.aws-secret-key", MINIO_ROOT_PASSWORD)
                 .put("s3.region", MINIO_REGION)
@@ -89,7 +89,7 @@ protected QueryRunner createQueryRunner()
                 .put("hive.metastore.disable-location-checks", "true")
                 // required by the file metastore
                 .put("fs.hadoop.enabled", "true")
-                .put("fs.native-s3.enabled", "true")
+                .put("fs.s3.enabled", "true")
                 .put("s3.aws-access-key", MINIO_ROOT_USER)
                 .put("s3.aws-secret-key", MINIO_ROOT_PASSWORD)
                 .put("s3.region", MINIO_REGION)
@@ -66,7 +66,7 @@ protected QueryRunner createQueryRunner()
                 .put("hive.metastore.catalog.dir", queryRunner.getCoordinator().getBaseDataDir().resolve("file-metastore").toString())
                 // required by the file metastore
                 .put("fs.hadoop.enabled", "true")
-                .put("fs.native-s3.enabled", "true")
+                .put("fs.s3.enabled", "true")
                 .put("s3.aws-access-key", MINIO_ROOT_USER)
                 .put("s3.aws-secret-key", MINIO_ROOT_PASSWORD)
                 .put("s3.region", MINIO_REGION)
@@ -55,7 +55,7 @@ protected QueryRunner createQueryRunner()
         queryRunner.createCatalog("hive", "hive", ImmutableMap.<String, String>builder()
                 .put("hive.metastore", "thrift")
                 .put("hive.metastore.uri", hiveMinioDataLake.getHiveMetastoreEndpoint().toString())
-                .put("fs.native-s3.enabled", "true")
+                .put("fs.s3.enabled", "true")
                 .put("s3.aws-access-key", MINIO_ROOT_USER)
                 .put("s3.aws-secret-key", MINIO_ROOT_PASSWORD)
                 .put("s3.region", MINIO_REGION)
@@ -45,7 +45,7 @@ protected QueryRunner createQueryRunner()
                 .setDeltaProperties(ImmutableMap.<String, String>builder()
                         .put("hive.metastore", "glue")
                         .put("hive.metastore.glue.default-warehouse-dir", schemaPath())
-                        .put("fs.native-s3.enabled", "true")
+                        .put("fs.s3.enabled", "true")
                         .put("delta.enable-non-concurrent-writes", "true")
                         .buildOrThrow())
                 .setSchemaLocation(schemaPath())
@@ -236,7 +236,7 @@ public DistributedQueryRunner build()
             Path dataDir = queryRunner.getCoordinator().getBaseDataDir().resolve("hive_data");
 
             if (hiveProperties.buildOrThrow().keySet().stream().noneMatch(key ->
-                    key.equals("fs.hadoop.enabled") || key.startsWith("fs.native-"))) {
+                    key.matches("fs\\.(azure|gcs|s3|local|hadoop)\\.enabled"))) {
                 hiveProperties.put("fs.hadoop.enabled", "true");
             }
 
@@ -59,7 +59,7 @@ protected QueryRunner createQueryRunner()
                 .addHiveProperty("hive.metastore", "thrift")
                 .addHiveProperty("hive.metastore.uri", hiveMinioDataLake.getHiveMetastoreEndpoint().toString())
                 .addHiveProperty("hive.metastore.thrift.catalog-name", HIVE_CUSTOM_CATALOG)
-                .addHiveProperty("fs.native-s3.enabled", "true")
+                .addHiveProperty("fs.s3.enabled", "true")
                 .addHiveProperty("s3.path-style-access", "true")
                 .addHiveProperty("s3.region", MINIO_REGION)
                 .addHiveProperty("s3.endpoint", hiveMinioDataLake.getMinio().getMinioAddress())
@@ -62,7 +62,7 @@ protected QueryRunner createQueryRunner()
                 .addHiveProperty("hive.metastore.glue.default-warehouse-dir", schemaPath())
                 .addHiveProperty("hive.security", "allow-all")
                 .addHiveProperty("hive.non-managed-table-writes-enabled", "true")
-                .addHiveProperty("fs.native-s3.enabled", "true")
+                .addHiveProperty("fs.s3.enabled", "true")
                 .build();
         queryRunner.execute("CREATE SCHEMA " + schemaName + " WITH (location = '" + schemaPath() + "')");
         queryRunner.execute("CREATE SCHEMA IF NOT EXISTS functions");