diff --git a/docs/src/main/sphinx/connector/redshift.md b/docs/src/main/sphinx/connector/redshift.md
index 255934a2a4a0..d8f4aba3fafe 100644
--- a/docs/src/main/sphinx/connector/redshift.md
+++ b/docs/src/main/sphinx/connector/redshift.md
@@ -260,5 +260,5 @@
 deactivate the parallel read during a client session for a specific query, and
 potentially re-activate it again afterward.
 
 Additionally, define further required [S3 configuration such as IAM key, role,
-or region](/object-storage/file-system-s3), except `fs.native-s3.enabled`,
+or region](/object-storage/file-system-s3), except `fs.s3.enabled`,
diff --git a/docs/src/main/sphinx/object-storage.md b/docs/src/main/sphinx/object-storage.md
index 0b37cbcadfd3..13367e88dfb7 100644
--- a/docs/src/main/sphinx/object-storage.md
+++ b/docs/src/main/sphinx/object-storage.md
@@ -35,13 +35,13 @@ system support.
 * - Property
   - Description
-* - `fs.native-azure.enabled`
+* - `fs.azure.enabled`
   - Activate the [native implementation for Azure Storage
     support](/object-storage/file-system-azure). Defaults to `false`.
-* - `fs.native-gcs.enabled`
+* - `fs.gcs.enabled`
   - Activate the [native implementation for Google Cloud Storage
     support](/object-storage/file-system-gcs). Defaults to `false`.
-* - `fs.native-s3.enabled`
+* - `fs.s3.enabled`
   - Activate the [native implementation for S3 storage
     support](/object-storage/file-system-s3). Defaults to `false`.
 * - `fs.hadoop.enabled`
diff --git a/docs/src/main/sphinx/object-storage/file-system-azure.md b/docs/src/main/sphinx/object-storage/file-system-azure.md
index 72627fb4da86..d37cbed68eed 100644
--- a/docs/src/main/sphinx/object-storage/file-system-azure.md
+++ b/docs/src/main/sphinx/object-storage/file-system-azure.md
@@ -4,7 +4,7 @@
 Trino includes a native implementation to access [Azure Data Lake Storage
 Gen2](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-overview#about-azure-data-lake-storage-gen2)
 with a catalog using the Delta Lake, Hive, Hudi, or Iceberg connectors.
 
-Enable the native implementation with `fs.native-azure.enabled=true` in your
+Enable the native implementation with `fs.azure.enabled=true` in your
 catalog properties file. Additionally, the Azure storage account must have
 hierarchical namespace enabled.
@@ -19,7 +19,7 @@ system support:
 * - Property
   - Description
-* - `fs.native-azure.enabled`
+* - `fs.azure.enabled`
   - Activate the native implementation for Azure Storage support. Defaults to
     `false`. Set to `true` to use Azure Storage and enable all other properties.
 * - `azure.auth-type`
@@ -156,7 +156,7 @@ native Azure file system implementation.
 To migrate a catalog to use the native file system implementation for Azure,
 make the following edits to your catalog configuration:
 
-1. Add the `fs.native-azure.enabled=true` catalog configuration property.
+1. Add the `fs.azure.enabled=true` catalog configuration property.
 2. If your catalog enabled `fs.hadoop.enabled` only for legacy Azure Storage
    access, remove that property.
 3. Configure the `azure.auth-type` catalog configuration property.
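The Azure migration steps above can be sketched as a minimal catalog properties fragment. This is not part of the patch; the metastore URI, account key, and file name are placeholders, and the exact set of `azure.*` properties depends on the chosen `azure.auth-type`:

```properties
# Hypothetical catalog file, e.g. etc/catalog/example.properties
connector.name=hive
hive.metastore.uri=thrift://example.net:9083

# New property name introduced by this change; the legacy
# fs.native-azure.enabled spelling maps to the same setting.
fs.azure.enabled=true
azure.auth-type=ACCESS_KEY
azure.access-key=<redacted>
```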
diff --git a/docs/src/main/sphinx/object-storage/file-system-gcs.md b/docs/src/main/sphinx/object-storage/file-system-gcs.md
index 25ecda690d61..b34a7e38a619 100644
--- a/docs/src/main/sphinx/object-storage/file-system-gcs.md
+++ b/docs/src/main/sphinx/object-storage/file-system-gcs.md
@@ -4,7 +4,7 @@
 Trino includes a native implementation to access [Google Cloud Storage
 (GCS)](https://cloud.google.com/storage/) with a catalog using the Delta Lake,
 Hive, Hudi, or Iceberg connectors.
 
-Enable the native implementation with `fs.native-gcs.enabled=true` in your
+Enable the native implementation with `fs.gcs.enabled=true` in your
 catalog properties file.
 
 ## General configuration
@@ -18,7 +18,7 @@ Storage file system support:
 * - Property
   - Description
-* - `fs.native-gcs.enabled`
+* - `fs.gcs.enabled`
   - Activate the native implementation for Google Cloud Storage support.
     Defaults to `false`. Set to `true` to use Google Cloud Storage and enable
     all other properties.
@@ -99,7 +99,7 @@ Google Cloud Storage file system implementation.
 To migrate a catalog to use the native file system implementation for Google
 Cloud Storage, make the following edits to your catalog configuration:
 
-1. Add the `fs.native-gcs.enabled=true` catalog configuration property.
+1. Add the `fs.gcs.enabled=true` catalog configuration property.
 2. If your catalog enabled `fs.hadoop.enabled` only for legacy Google Cloud
    Storage access, remove that property.
 3. Refer to the following table to rename your existing legacy catalog
diff --git a/docs/src/main/sphinx/object-storage/file-system-local.md b/docs/src/main/sphinx/object-storage/file-system-local.md
index 3ea2492925f7..0fb2059da245 100644
--- a/docs/src/main/sphinx/object-storage/file-system-local.md
+++ b/docs/src/main/sphinx/object-storage/file-system-local.md
@@ -19,7 +19,7 @@ support:
 * - Property
   - Description
-* - `fs.native-local.enabled`
+* - `fs.local.enabled`
   - Activate the support for local file system access. Defaults to `false`.
     Set to `true` to use local file system and enable all other properties.
 * - `local.location`
@@ -36,7 +36,7 @@ The coordinator and all workers nodes have an external storage mounted at
 ```properties
 connector.name=hive
 ...
-fs.native-local.enabled=true
+fs.local.enabled=true
 local.location=local:///storage/datalake
 ```
diff --git a/docs/src/main/sphinx/object-storage/file-system-s3.md b/docs/src/main/sphinx/object-storage/file-system-s3.md
index d53824de822f..39c4d38b4f85 100644
--- a/docs/src/main/sphinx/object-storage/file-system-s3.md
+++ b/docs/src/main/sphinx/object-storage/file-system-s3.md
@@ -7,7 +7,7 @@
 to support S3-compatible storage systems, only AWS S3 and MinIO are tested for
 compatibility. For other storage systems, perform your own testing and consult
 your vendor for more information.
 
-Enable the native implementation with `fs.native-s3.enabled=true` in your
+Enable the native implementation with `fs.s3.enabled=true` in your
 catalog properties file.
 
 ## General configuration
@@ -21,7 +21,7 @@ support:
 * - Property
   - Description
-* - `fs.native-s3.enabled`
+* - `fs.s3.enabled`
   - Activate the native implementation for S3 storage support. Defaults to
     `false`. Set to `true` to use S3 and enable all other properties.
 * - `s3.endpoint`
@@ -298,7 +298,7 @@ system implementation.
 To migrate a catalog to use the native file system implementation for S3, make
 the following edits to your catalog configuration:
 
-1. Add the `fs.native-s3.enabled=true` catalog configuration property.
+1. Add the `fs.s3.enabled=true` catalog configuration property.
 2. If your catalog enabled `fs.hadoop.enabled` only for legacy S3 access,
    remove that property.
 3. Refer to the following table to rename your existing legacy catalog
diff --git a/docs/src/main/sphinx/object-storage/metastores.md b/docs/src/main/sphinx/object-storage/metastores.md
index 3d5e98e3a1ba..fb66bd7c2b48 100644
--- a/docs/src/main/sphinx/object-storage/metastores.md
+++ b/docs/src/main/sphinx/object-storage/metastores.md
@@ -571,7 +571,7 @@
 iceberg.rest-catalog.uri=https://biglake.googleapis.com/iceberg/v1beta/restcatalog
 iceberg.rest-catalog.security=GOOGLE
 iceberg.rest-catalog.google-project-id=example-project-id
 iceberg.rest-catalog.view-endpoints-enabled=false
-fs.native-gcs.enable=true
+fs.gcs.enabled=true
 gcs.json-key-file-path=/path/to/gcs_keyfile.json
 ```
diff --git a/lib/trino-filesystem-manager/src/main/java/io/trino/filesystem/manager/FileSystemConfig.java b/lib/trino-filesystem-manager/src/main/java/io/trino/filesystem/manager/FileSystemConfig.java
index 12eabc49b8bf..0d5c8addff82 100644
--- a/lib/trino-filesystem-manager/src/main/java/io/trino/filesystem/manager/FileSystemConfig.java
+++ b/lib/trino-filesystem-manager/src/main/java/io/trino/filesystem/manager/FileSystemConfig.java
@@ -15,6 +15,7 @@
 import io.airlift.configuration.Config;
 import io.airlift.configuration.ConfigDescription;
+import io.airlift.configuration.LegacyConfig;
 
 import static java.lang.System.getenv;
@@ -60,7 +61,8 @@ public boolean isNativeAzureEnabled()
         return nativeAzureEnabled;
     }
 
-    @Config("fs.native-azure.enabled")
+    @LegacyConfig("fs.native-azure.enabled")
+    @Config("fs.azure.enabled")
     public FileSystemConfig setNativeAzureEnabled(boolean nativeAzureEnabled)
     {
         this.nativeAzureEnabled = nativeAzureEnabled;
@@ -72,7 +74,8 @@ public boolean isNativeS3Enabled()
         return nativeS3Enabled;
     }
 
-    @Config("fs.native-s3.enabled")
+    @LegacyConfig("fs.native-s3.enabled")
+    @Config("fs.s3.enabled")
     public FileSystemConfig setNativeS3Enabled(boolean nativeS3Enabled)
     {
         this.nativeS3Enabled = nativeS3Enabled;
@@ -84,7 +87,8 @@ public boolean isNativeGcsEnabled()
         return nativeGcsEnabled;
     }
 
-    @Config("fs.native-gcs.enabled")
+    @LegacyConfig("fs.native-gcs.enabled")
+    @Config("fs.gcs.enabled")
     public FileSystemConfig setNativeGcsEnabled(boolean nativeGcsEnabled)
     {
         this.nativeGcsEnabled = nativeGcsEnabled;
@@ -96,7 +100,8 @@ public boolean isNativeLocalEnabled()
         return nativeLocalEnabled;
     }
 
-    @Config("fs.native-local.enabled")
+    @LegacyConfig("fs.native-local.enabled")
+    @Config("fs.local.enabled")
     public FileSystemConfig setNativeLocalEnabled(boolean nativeLocalEnabled)
     {
         this.nativeLocalEnabled = nativeLocalEnabled;
diff --git a/lib/trino-filesystem-manager/src/test/java/io/trino/filesystem/manager/TestFileSystemConfig.java b/lib/trino-filesystem-manager/src/test/java/io/trino/filesystem/manager/TestFileSystemConfig.java
index 76d0f8e5eb4f..2f5ffa24b166 100644
--- a/lib/trino-filesystem-manager/src/test/java/io/trino/filesystem/manager/TestFileSystemConfig.java
+++ b/lib/trino-filesystem-manager/src/test/java/io/trino/filesystem/manager/TestFileSystemConfig.java
@@ -18,6 +18,7 @@
 import java.util.Map;
 
+import static io.airlift.configuration.testing.ConfigAssertions.assertDeprecatedEquivalence;
 import static io.airlift.configuration.testing.ConfigAssertions.assertFullMapping;
 import static io.airlift.configuration.testing.ConfigAssertions.assertRecordedDefaults;
 import static io.airlift.configuration.testing.ConfigAssertions.recordDefaults;
@@ -47,10 +48,10 @@ public void testExplicitPropertyMappings()
         Map<String, String> properties = ImmutableMap.<String, String>builder()
                 .put("fs.hadoop.enabled", "true")
                 .put("fs.alluxio.enabled", "true")
-                .put("fs.native-azure.enabled", "true")
-                .put("fs.native-s3.enabled", "true")
-                .put("fs.native-gcs.enabled", "true")
-                .put("fs.native-local.enabled", "true")
+                .put("fs.azure.enabled", "true")
+                .put("fs.s3.enabled", "true")
+                .put("fs.gcs.enabled", "true")
+                .put("fs.local.enabled", "true")
                 .put("fs.cache.enabled", "true")
                 .put("fs.tracking.enabled", Boolean.toString(!RUNNING_IN_CI))
                 .buildOrThrow();
@@ -67,4 +68,60 @@ public void testExplicitPropertyMappings()
 
         assertFullMapping(properties, expected);
     }
+
+    @Test
+    public void testLegacyPropertyMappings()
+    {
+        assertDeprecatedEquivalence(
+                FileSystemConfig.class,
+                Map.of(
+                        "fs.azure.enabled", "true",
+                        "fs.s3.enabled", "false",
+                        "fs.gcs.enabled", "false",
+                        "fs.local.enabled", "false"),
+                Map.of(
+                        "fs.native-azure.enabled", "true",
+                        "fs.native-s3.enabled", "false",
+                        "fs.native-gcs.enabled", "false",
+                        "fs.native-local.enabled", "false"));
+
+        assertDeprecatedEquivalence(
+                FileSystemConfig.class,
+                Map.of(
+                        "fs.azure.enabled", "false",
+                        "fs.s3.enabled", "true",
+                        "fs.gcs.enabled", "false",
+                        "fs.local.enabled", "false"),
+                Map.of(
+                        "fs.native-azure.enabled", "false",
+                        "fs.native-s3.enabled", "true",
+                        "fs.native-gcs.enabled", "false",
+                        "fs.native-local.enabled", "false"));
+
+        assertDeprecatedEquivalence(
+                FileSystemConfig.class,
+                Map.of(
+                        "fs.azure.enabled", "false",
+                        "fs.s3.enabled", "false",
+                        "fs.gcs.enabled", "true",
+                        "fs.local.enabled", "false"),
+                Map.of(
+                        "fs.native-azure.enabled", "false",
+                        "fs.native-s3.enabled", "false",
+                        "fs.native-gcs.enabled", "true",
+                        "fs.native-local.enabled", "false"));
+
+        assertDeprecatedEquivalence(
+                FileSystemConfig.class,
+                Map.of(
+                        "fs.azure.enabled", "false",
+                        "fs.s3.enabled", "false",
+                        "fs.gcs.enabled", "false",
+                        "fs.local.enabled", "true"),
+                Map.of(
+                        "fs.native-azure.enabled", "false",
+                        "fs.native-s3.enabled", "false",
+                        "fs.native-gcs.enabled", "false",
+                        "fs.native-local.enabled", "true"));
+    }
 }
diff --git a/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/DeltaLakeQueryRunner.java b/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/DeltaLakeQueryRunner.java
index 2dd59a9db1c4..c500ca4169c8 100644
--- a/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/DeltaLakeQueryRunner.java
+++ b/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/DeltaLakeQueryRunner.java
@@ -119,7 +119,7 @@ public Builder addMetastoreProperties(HiveHadoop hiveHadoop)
         public Builder addS3Properties(Minio minio, String bucketName)
         {
             addDeltaProperties(ImmutableMap.<String, String>builder()
-                    .put("fs.native-s3.enabled", "true")
+                    .put("fs.s3.enabled", "true")
                     .put("s3.aws-access-key", MINIO_ROOT_USER)
                     .put("s3.aws-secret-key", MINIO_ROOT_PASSWORD)
                     .put("s3.region", MINIO_REGION)
@@ -162,7 +162,7 @@ public DistributedQueryRunner build()
             }
 
             if (deltaProperties.keySet().stream().noneMatch(key ->
-                    key.equals("fs.hadoop.enabled") || key.startsWith("fs.native-"))) {
+                    key.matches("fs\\.(azure|gcs|s3|local|hadoop)\\.enabled"))) {
                 deltaProperties.put("fs.hadoop.enabled", "true");
             }
             queryRunner.createCatalog(DELTA_CATALOG, CONNECTOR_NAME, deltaProperties);
diff --git a/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakeAdlsConnectorSmokeTest.java b/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakeAdlsConnectorSmokeTest.java
index 8def6aa8c60c..87dcb0933fb2 100644
--- a/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakeAdlsConnectorSmokeTest.java
+++ b/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakeAdlsConnectorSmokeTest.java
@@ -112,7 +112,7 @@ protected HiveHadoop createHiveHadoop()
     protected Map<String, String> hiveStorageConfiguration()
     {
         return ImmutableMap.<String, String>builder()
-                .put("fs.native-azure.enabled", "true")
+                .put("fs.azure.enabled", "true")
                 .put("azure.auth-type", "ACCESS_KEY")
                 .put("azure.access-key", accessKey)
                 .buildOrThrow();
diff --git a/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakeAdlsStorage.java b/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakeAdlsStorage.java
index b15f350e0dcd..4e08e1a21215 100644
--- a/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakeAdlsStorage.java
+++ b/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakeAdlsStorage.java
@@ -84,7 +84,7 @@ protected QueryRunner createQueryRunner()
         return DeltaLakeQueryRunner.builder()
                 .setDeltaProperties(ImmutableMap.<String, String>builder()
                         .put("hive.metastore.uri", hiveHadoop.getHiveMetastoreEndpoint().toString())
-                        .put("fs.native-azure.enabled", "true")
+                        .put("fs.azure.enabled", "true")
                         .put("azure.auth-type", "ACCESS_KEY")
                         .put("azure.access-key", accessKey)
                         .put("delta.register-table-procedure.enabled", "true")
diff --git a/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakeConnectorTest.java b/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakeConnectorTest.java
index 8fd3f3610c05..fafb506c8262 100644
--- a/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakeConnectorTest.java
+++ b/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakeConnectorTest.java
@@ -141,7 +141,7 @@ protected QueryRunner createQueryRunner()
                 .put("hive.metastore.disable-location-checks", "true")
                 // required by the file metastore
                 .put("fs.hadoop.enabled", "true")
-                .put("fs.native-s3.enabled", "true")
+                .put("fs.s3.enabled", "true")
                 .put("s3.aws-access-key", MINIO_ROOT_USER)
                 .put("s3.aws-secret-key", MINIO_ROOT_PASSWORD)
                 .put("s3.region", MINIO_REGION)
diff --git a/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakeGcsConnectorSmokeTest.java b/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakeGcsConnectorSmokeTest.java
index dc87e9107196..15d607f365be 100644
--- a/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakeGcsConnectorSmokeTest.java
+++ b/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakeGcsConnectorSmokeTest.java
@@ -136,7 +136,7 @@ protected HiveHadoop createHiveHadoop()
     protected Map<String, String> hiveStorageConfiguration()
    {
         return ImmutableMap.<String, String>builder()
-                .put("fs.native-gcs.enabled", "true")
+                .put("fs.gcs.enabled", "true")
                 .put("gcs.json-key", gcpCredentials)
                 .buildOrThrow();
     }
diff --git a/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakeMinioAndHmsConnectorSmokeTest.java b/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakeMinioAndHmsConnectorSmokeTest.java
index 9b836421cb9c..233a3d5a819b 100644
--- a/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakeMinioAndHmsConnectorSmokeTest.java
+++ b/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakeMinioAndHmsConnectorSmokeTest.java
@@ -35,7 +35,7 @@ public class TestDeltaLakeMinioAndHmsConnectorSmokeTest
     protected Map<String, String> hiveStorageConfiguration()
     {
         return ImmutableMap.<String, String>builder()
-                .put("fs.native-s3.enabled", "true")
+                .put("fs.s3.enabled", "true")
                 .put("s3.aws-access-key", MINIO_ROOT_USER)
                 .put("s3.aws-secret-key", MINIO_ROOT_PASSWORD)
                 .put("s3.region", MINIO_REGION)
@@ -49,7 +49,7 @@ protected Map<String, String> hiveStorageConfiguration()
     protected Map<String, String> deltaStorageConfiguration()
     {
         return ImmutableMap.<String, String>builder()
-                .put("fs.native-s3.enabled", "true")
+                .put("fs.s3.enabled", "true")
                 .put("s3.aws-access-key", MINIO_ROOT_USER)
                 .put("s3.aws-secret-key", MINIO_ROOT_PASSWORD)
                 .put("s3.region", MINIO_REGION)
diff --git a/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakeMinioAndLockBasedSynchronizerSmokeTest.java b/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakeMinioAndLockBasedSynchronizerSmokeTest.java
index 8209cd2b8719..8a93d317cba0 100644
--- a/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakeMinioAndLockBasedSynchronizerSmokeTest.java
+++ b/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakeMinioAndLockBasedSynchronizerSmokeTest.java
@@ -89,7 +89,7 @@ protected QueryRunner createQueryRunner()
                 .put("hive.metastore.disable-location-checks", "true")
                 // required by the file metastore
                 .put("fs.hadoop.enabled", "true")
-                .put("fs.native-s3.enabled", "true")
+                .put("fs.s3.enabled", "true")
                 .put("s3.aws-access-key", MINIO_ROOT_USER)
                 .put("s3.aws-secret-key", MINIO_ROOT_PASSWORD)
                 .put("s3.region", MINIO_REGION)
diff --git a/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakePreferredPartitioning.java b/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakePreferredPartitioning.java
index cc507e742850..6b874a5334e8 100644
--- a/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakePreferredPartitioning.java
+++ b/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakePreferredPartitioning.java
@@ -66,7 +66,7 @@ protected QueryRunner createQueryRunner()
                 .put("hive.metastore.catalog.dir", queryRunner.getCoordinator().getBaseDataDir().resolve("file-metastore").toString())
                 // required by the file metastore
                 .put("fs.hadoop.enabled", "true")
-                .put("fs.native-s3.enabled", "true")
+                .put("fs.s3.enabled", "true")
                 .put("s3.aws-access-key", MINIO_ROOT_USER)
                 .put("s3.aws-secret-key", MINIO_ROOT_PASSWORD)
                 .put("s3.region", MINIO_REGION)
diff --git a/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakeSharedHiveMetastoreWithViews.java b/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakeSharedHiveMetastoreWithViews.java
index bf490daae3ad..3fd4fa2dbc84 100644
--- a/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakeSharedHiveMetastoreWithViews.java
+++ b/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakeSharedHiveMetastoreWithViews.java
@@ -55,7 +55,7 @@ protected QueryRunner createQueryRunner()
         queryRunner.createCatalog("hive", "hive", ImmutableMap.<String, String>builder()
                 .put("hive.metastore", "thrift")
                 .put("hive.metastore.uri", hiveMinioDataLake.getHiveMetastoreEndpoint().toString())
-                .put("fs.native-s3.enabled", "true")
+                .put("fs.s3.enabled", "true")
                 .put("s3.aws-access-key", MINIO_ROOT_USER)
                 .put("s3.aws-secret-key", MINIO_ROOT_PASSWORD)
                 .put("s3.region", MINIO_REGION)
diff --git a/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/metastore/glue/TestDeltaS3AndGlueMetastoreTest.java b/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/metastore/glue/TestDeltaS3AndGlueMetastoreTest.java
index 7df2239ba704..7c8a89a856a3 100644
--- a/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/metastore/glue/TestDeltaS3AndGlueMetastoreTest.java
+++ b/plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/metastore/glue/TestDeltaS3AndGlueMetastoreTest.java
@@ -45,7 +45,7 @@ protected QueryRunner createQueryRunner()
                 .setDeltaProperties(ImmutableMap.<String, String>builder()
                         .put("hive.metastore", "glue")
                         .put("hive.metastore.glue.default-warehouse-dir", schemaPath())
-                        .put("fs.native-s3.enabled", "true")
+                        .put("fs.s3.enabled", "true")
                         .put("delta.enable-non-concurrent-writes", "true")
                         .buildOrThrow())
                 .setSchemaLocation(schemaPath())
diff --git a/plugin/trino-hive/src/test/java/io/trino/plugin/hive/HiveQueryRunner.java b/plugin/trino-hive/src/test/java/io/trino/plugin/hive/HiveQueryRunner.java
index d8a37349b419..086c7dabb4f0 100644
--- a/plugin/trino-hive/src/test/java/io/trino/plugin/hive/HiveQueryRunner.java
+++ b/plugin/trino-hive/src/test/java/io/trino/plugin/hive/HiveQueryRunner.java
@@ -236,7 +236,7 @@ public DistributedQueryRunner build()
             Path dataDir = queryRunner.getCoordinator().getBaseDataDir().resolve("hive_data");
 
             if (hiveProperties.buildOrThrow().keySet().stream().noneMatch(key ->
-                    key.equals("fs.hadoop.enabled") || key.startsWith("fs.native-"))) {
+                    key.matches("fs\\.(azure|gcs|s3|local|hadoop)\\.enabled"))) {
                 hiveProperties.put("fs.hadoop.enabled", "true");
             }
diff --git a/plugin/trino-hive/src/test/java/io/trino/plugin/hive/TestHiveCustomCatalogConnectorSmokeTest.java b/plugin/trino-hive/src/test/java/io/trino/plugin/hive/TestHiveCustomCatalogConnectorSmokeTest.java
index 3569a514e8d5..35805fef5165 100644
--- a/plugin/trino-hive/src/test/java/io/trino/plugin/hive/TestHiveCustomCatalogConnectorSmokeTest.java
+++ b/plugin/trino-hive/src/test/java/io/trino/plugin/hive/TestHiveCustomCatalogConnectorSmokeTest.java
@@ -59,7 +59,7 @@ protected QueryRunner createQueryRunner()
                 .addHiveProperty("hive.metastore", "thrift")
                 .addHiveProperty("hive.metastore.uri", hiveMinioDataLake.getHiveMetastoreEndpoint().toString())
                 .addHiveProperty("hive.metastore.thrift.catalog-name", HIVE_CUSTOM_CATALOG)
-                .addHiveProperty("fs.native-s3.enabled", "true")
+                .addHiveProperty("fs.s3.enabled", "true")
                 .addHiveProperty("s3.path-style-access", "true")
                 .addHiveProperty("s3.region", MINIO_REGION)
                 .addHiveProperty("s3.endpoint", hiveMinioDataLake.getMinio().getMinioAddress())
diff --git a/plugin/trino-hive/src/test/java/io/trino/plugin/hive/TestHiveS3AndGlueMetastoreTest.java b/plugin/trino-hive/src/test/java/io/trino/plugin/hive/TestHiveS3AndGlueMetastoreTest.java
index fe290c46a56d..3c906bccaaf2 100644
--- a/plugin/trino-hive/src/test/java/io/trino/plugin/hive/TestHiveS3AndGlueMetastoreTest.java
+++ b/plugin/trino-hive/src/test/java/io/trino/plugin/hive/TestHiveS3AndGlueMetastoreTest.java
@@ -62,7 +62,7 @@ protected QueryRunner createQueryRunner()
                 .addHiveProperty("hive.metastore.glue.default-warehouse-dir", schemaPath())
                 .addHiveProperty("hive.security", "allow-all")
                 .addHiveProperty("hive.non-managed-table-writes-enabled", "true")
-                .addHiveProperty("fs.native-s3.enabled", "true")
+                .addHiveProperty("fs.s3.enabled", "true")
                 .build();
         queryRunner.execute("CREATE SCHEMA " + schemaName + " WITH (location = '" + schemaPath() + "')");
         queryRunner.execute("CREATE SCHEMA IF NOT EXISTS functions");
diff --git a/plugin/trino-hive/src/test/java/io/trino/plugin/hive/gcs/GcsHiveQueryRunner.java b/plugin/trino-hive/src/test/java/io/trino/plugin/hive/gcs/GcsHiveQueryRunner.java
index ec31976c084c..d862389ec7bd 100644
--- a/plugin/trino-hive/src/test/java/io/trino/plugin/hive/gcs/GcsHiveQueryRunner.java
+++ b/plugin/trino-hive/src/test/java/io/trino/plugin/hive/gcs/GcsHiveQueryRunner.java
@@ -110,7 +110,7 @@ public DistributedQueryRunner build()
             byte[] jsonKeyBytes = Base64.getDecoder().decode(gcpCredentialKey);
             String gcpCredentials = new String(jsonKeyBytes, UTF_8);
 
-            addHiveProperty("fs.native-gcs.enabled", "true");
+            addHiveProperty("fs.gcs.enabled", "true");
             addHiveProperty("gcs.json-key", gcpCredentials);
             addHiveProperty("hive.non-managed-table-writes-enabled", "true");
             addHiveProperty("hive.non-managed-table-creates-enabled", "true");
diff --git a/plugin/trino-hive/src/test/java/io/trino/plugin/hive/metastore/thrift/TestHiveMetastoreCatalogs.java b/plugin/trino-hive/src/test/java/io/trino/plugin/hive/metastore/thrift/TestHiveMetastoreCatalogs.java
index ccca864872be..e12aa4038c4b 100644
--- a/plugin/trino-hive/src/test/java/io/trino/plugin/hive/metastore/thrift/TestHiveMetastoreCatalogs.java
+++ b/plugin/trino-hive/src/test/java/io/trino/plugin/hive/metastore/thrift/TestHiveMetastoreCatalogs.java
@@ -80,7 +80,7 @@ private static Map<String, String> buildHiveProperties(Hive3MinioDataLake hiveMinioDataLake)
         return ImmutableMap.<String, String>builder()
                 .put("hive.metastore", "thrift")
                 .put("hive.metastore.uri", hiveMinioDataLake.getHiveMetastoreEndpoint().toString())
-                .put("fs.native-s3.enabled", "true")
+                .put("fs.s3.enabled", "true")
                 .put("s3.path-style-access", "true")
                 .put("s3.region", MINIO_REGION)
                 .put("s3.endpoint", hiveMinioDataLake.getMinio().getMinioAddress())
diff --git a/plugin/trino-hive/src/test/java/io/trino/plugin/hive/s3/S3HiveQueryRunner.java b/plugin/trino-hive/src/test/java/io/trino/plugin/hive/s3/S3HiveQueryRunner.java
index caf6c2b927b0..7a7f551ce663 100644
--- a/plugin/trino-hive/src/test/java/io/trino/plugin/hive/s3/S3HiveQueryRunner.java
+++ b/plugin/trino-hive/src/test/java/io/trino/plugin/hive/s3/S3HiveQueryRunner.java
@@ -155,7 +155,7 @@ public DistributedQueryRunner build()
             String lowerCaseS3Endpoint = s3Endpoint.toLowerCase(Locale.ENGLISH);
             checkArgument(lowerCaseS3Endpoint.startsWith("http://") || lowerCaseS3Endpoint.startsWith("https://"),
                     "Expected http URI for S3 endpoint; got %s", s3Endpoint);
 
-            addHiveProperty("fs.native-s3.enabled", "true");
+            addHiveProperty("fs.s3.enabled", "true");
             addHiveProperty("s3.region", s3Region);
             addHiveProperty("s3.endpoint", s3Endpoint);
             addHiveProperty("s3.aws-access-key", s3AccessKey);
diff --git a/plugin/trino-hive/src/test/java/io/trino/plugin/hive/s3/TestS3FileSystemAccessOperations.java b/plugin/trino-hive/src/test/java/io/trino/plugin/hive/s3/TestS3FileSystemAccessOperations.java
index 0bd721bac530..7a64bf3f5e7a 100644
--- a/plugin/trino-hive/src/test/java/io/trino/plugin/hive/s3/TestS3FileSystemAccessOperations.java
+++ b/plugin/trino-hive/src/test/java/io/trino/plugin/hive/s3/TestS3FileSystemAccessOperations.java
@@ -66,7 +66,7 @@ protected QueryRunner createQueryRunner()
         return HiveQueryRunner.builder()
                 .setHiveProperties(ImmutableMap.<String, String>builder()
                         .put("hive.metastore.disable-location-checks", "true")
-                        .put("fs.native-s3.enabled", "true")
+                        .put("fs.s3.enabled", "true")
                         .put("s3.aws-access-key", MINIO_ROOT_USER)
                         .put("s3.aws-secret-key", MINIO_ROOT_PASSWORD)
                         .put("s3.region", MINIO_REGION)
diff --git a/plugin/trino-hudi/src/test/java/io/trino/plugin/hudi/HudiQueryRunner.java b/plugin/trino-hudi/src/test/java/io/trino/plugin/hudi/HudiQueryRunner.java
index b2f89a196579..63635aeee2cb 100644
--- a/plugin/trino-hudi/src/test/java/io/trino/plugin/hudi/HudiQueryRunner.java
+++ b/plugin/trino-hudi/src/test/java/io/trino/plugin/hudi/HudiQueryRunner.java
@@ -59,7 +59,7 @@ public static Builder builder()
     public static Builder builder(Hive3MinioDataLake hiveMinioDataLake)
     {
         return new Builder("s3://" + hiveMinioDataLake.getBucketName() + "/")
-                .addConnectorProperty("fs.native-s3.enabled", "true")
+                .addConnectorProperty("fs.s3.enabled", "true")
                 .addConnectorProperty("s3.aws-access-key", MINIO_ROOT_USER)
                 .addConnectorProperty("s3.aws-secret-key", MINIO_ROOT_PASSWORD)
                 .addConnectorProperty("s3.region", MINIO_REGION)
diff --git a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/BaseIcebergMinioConnectorSmokeTest.java b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/BaseIcebergMinioConnectorSmokeTest.java
index 3c1dd0bb756e..173680efeb01 100644
--- a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/BaseIcebergMinioConnectorSmokeTest.java
+++ b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/BaseIcebergMinioConnectorSmokeTest.java
@@ -85,7 +85,7 @@ protected QueryRunner createQueryRunner()
                 .put("iceberg.catalog.type", "HIVE_METASTORE")
                 .put("hive.metastore.uri", hiveMinioDataLake.getHiveMetastoreEndpoint().toString())
                 .put("hive.metastore.thrift.client.read-timeout", "1m") // read timed out sometimes happens with the default timeout
-                .put("fs.native-s3.enabled", "true")
+                .put("fs.s3.enabled", "true")
                 .put("s3.aws-access-key", MINIO_ROOT_USER)
                 .put("s3.aws-secret-key", MINIO_ROOT_PASSWORD)
                 .put("s3.region", MINIO_REGION)
diff --git a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/BaseIcebergMinioOrcConnectorTest.java b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/BaseIcebergMinioOrcConnectorTest.java
index fe2b4f298d23..abe9bb582b66 100644
--- a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/BaseIcebergMinioOrcConnectorTest.java
+++ b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/BaseIcebergMinioOrcConnectorTest.java
@@ -67,7 +67,7 @@ protected QueryRunner createQueryRunner()
                 .put("iceberg.file-format", format.name())
                 .put("iceberg.format-version", String.valueOf(formatVersion))
                 .put("fs.hadoop.enabled", "true")
-                .put("fs.native-s3.enabled", "true")
+                .put("fs.s3.enabled", "true")
                 .put("s3.aws-access-key", MINIO_ROOT_USER)
                 .put("s3.aws-secret-key", MINIO_ROOT_PASSWORD)
                 .put("s3.region", MINIO_REGION)
diff --git a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/IcebergQueryRunner.java b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/IcebergQueryRunner.java
index 94223be64593..9a30adbafde1 100644
--- a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/IcebergQueryRunner.java
+++ b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/IcebergQueryRunner.java
@@ -175,7 +175,7 @@ public DistributedQueryRunner build()
             }
 
             if (icebergProperties.buildOrThrow().keySet().stream().noneMatch(key ->
-                    key.equals("fs.hadoop.enabled") || key.startsWith("fs.native-"))) {
+                    key.matches("fs\\.(azure|gcs|s3|local|hadoop)\\.enabled"))) {
                 icebergProperties.put("fs.hadoop.enabled", "true");
             }
@@ -278,7 +278,7 @@ static void main()
                     .put("iceberg.rest-catalog.uri", "http://" + restCatalogBackendContainer.getRestCatalogEndpoint())
                     .put("iceberg.rest-catalog.vended-credentials-enabled", "true")
                     .put("iceberg.writer-sort-buffer-size", "1MB")
-                    .put("fs.native-s3.enabled", "true")
+                    .put("fs.s3.enabled", "true")
                     .put("s3.region", MINIO_REGION)
                    .put("s3.endpoint", minio.getMinioAddress())
                     .put("s3.path-style-access", "true")
@@ -316,7 +316,7 @@ static void main()
                     .addIcebergProperty("iceberg.rest-catalog.security", "GOOGLE")
                     .addIcebergProperty("iceberg.rest-catalog.google-project-id", projectId)
                     .addIcebergProperty("iceberg.rest-catalog.view-endpoints-enabled", "false")
-                    .addIcebergProperty("fs.native-gcs.enabled", "true")
+                    .addIcebergProperty("fs.gcs.enabled", "true")
                     .addIcebergProperty("gcs.json-key-file-path", gcpCredentialsFile.toString())
                     .disableSchemaInitializer()
                     .build();
@@ -378,7 +378,7 @@ static void main()
                     .put("iceberg.rest-catalog.signing-name", "s3tables")
                     .put("iceberg.rest-catalog.view-endpoints-enabled", "false")
                     .put("fs.hadoop.enabled", "false")
-                    .put("fs.native-s3.enabled", "true")
+                    .put("fs.s3.enabled", "true")
                     .put("s3.aws-access-key", requireEnv("S3_TABLES_ACCESS_KEY"))
                     .put("s3.aws-secret-key", requireEnv("S3_TABLES_SECRET_KEY"))
                     .put("s3.region", requireEnv("AWS_REGION"))
@@ -407,7 +407,7 @@ static void main()
                     .addIcebergProperty("iceberg.rest-catalog.security", "OAUTH2")
                     .addIcebergProperty("iceberg.rest-catalog.oauth2.token", requireEnv("DATABRICKS_TOKEN"))
                     .addIcebergProperty("iceberg.rest-catalog.vended-credentials-enabled", "true")
-                    .addIcebergProperty("fs.native-s3.enabled", "true")
+                    .addIcebergProperty("fs.s3.enabled", "true")
                     .addIcebergProperty("s3.region", requireEnv("AWS_REGION"))
                     .disableSchemaInitializer()
                     .build();
@@ -488,7 +488,7 @@ static void main()
                     .setIcebergProperties(Map.of(
                             "iceberg.catalog.type", "HIVE_METASTORE",
                             "hive.metastore.uri", hiveMinioDataLake.getHiveHadoop().getHiveMetastoreEndpoint().toString(),
-                            "fs.native-s3.enabled", "true",
+                            "fs.s3.enabled", "true",
                             "s3.aws-access-key", MINIO_ROOT_USER,
                             "s3.aws-secret-key", MINIO_ROOT_PASSWORD,
                             "s3.region", MINIO_REGION,
@@ -529,7 +529,7 @@ static void main()
                     .setIcebergProperties(Map.of(
                             "iceberg.catalog.type", "TESTING_FILE_METASTORE",
                             "hive.metastore.catalog.dir", "s3://%s/".formatted(bucketName),
-                            "fs.native-s3.enabled", "true",
+                            "fs.s3.enabled", "true",
                             "s3.aws-access-key", MINIO_ROOT_USER,
                             "s3.aws-secret-key", MINIO_ROOT_PASSWORD,
                             "s3.region", MINIO_REGION,
@@ -566,7 +566,7 @@ static void main()
                     .setIcebergProperties(Map.of(
                             "iceberg.catalog.type", "HIVE_METASTORE",
                             "hive.metastore.uri", sparkIcebergHive3MinioDataLake.hiveHadoop().getHiveMetastoreEndpoint().toString(),
-                            "fs.native-s3.enabled", "true",
+                            "fs.s3.enabled", "true",
                             "s3.aws-access-key", MINIO_ROOT_USER,
                             "s3.aws-secret-key", MINIO_ROOT_PASSWORD,
                             "s3.region", MINIO_REGION,
@@ -619,7 +619,7 @@ static void main()
                     .setIcebergProperties(Map.of(
                             "iceberg.catalog.type", "HIVE_METASTORE",
                             "hive.metastore.uri", hiveHadoop.getHiveMetastoreEndpoint().toString(),
-                            "fs.native-azure.enabled", "true",
+                            "fs.azure.enabled", "true",
                             "azure.auth-type", "ACCESS_KEY",
                             "azure.access-key", azureAccessKey))
                     .setSchemaInitializer(
@@ -680,7 +680,7 @@ static void main()
             QueryRunner queryRunner = icebergQueryRunnerMainBuilder()
                     .setIcebergProperties(ImmutableMap.<String, String>builder()
                             .put("iceberg.catalog.type", "snowflake")
-                            .put("fs.native-s3.enabled", "true")
+                            .put("fs.s3.enabled", "true")
                             .put("s3.aws-access-key", requiredNonEmptySystemProperty("testing.snowflake.catalog.s3.access-key"))
                             .put("s3.aws-secret-key", requiredNonEmptySystemProperty("testing.snowflake.catalog.s3.secret-key"))
                             .put("s3.region", requiredNonEmptySystemProperty("testing.snowflake.catalog.s3.region"))
diff --git a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergAbfsConnectorSmokeTest.java b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergAbfsConnectorSmokeTest.java
index ffce8904a0f7..ff7199ec67ca 100644
--- a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergAbfsConnectorSmokeTest.java
+++ b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergAbfsConnectorSmokeTest.java
@@ -87,7 +87,7 @@ protected QueryRunner createQueryRunner()
                 .put("iceberg.catalog.type", "HIVE_METASTORE")
                 .put("hive.metastore.uri", hiveHadoop.getHiveMetastoreEndpoint().toString())
                 .put("hive.metastore.thrift.client.read-timeout", "1m") // read timed out sometimes happens with the default timeout
-                .put("fs.native-azure.enabled", "true")
+                .put("fs.azure.enabled", "true")
                 .put("azure.auth-type", "ACCESS_KEY")
                 .put("azure.access-key", accessKey)
                 .put("iceberg.register-table-procedure.enabled", "true")
diff --git a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergGcsConnectorSmokeTest.java b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergGcsConnectorSmokeTest.java
index 20682fce5083..2edafb994e65 100644
--- a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergGcsConnectorSmokeTest.java
+++ b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergGcsConnectorSmokeTest.java
@@ -94,7 +94,7 @@ protected QueryRunner createQueryRunner()
         return IcebergQueryRunner.builder()
.setIcebergProperties(ImmutableMap.builder() .put("iceberg.catalog.type", "hive_metastore") - .put("fs.native-gcs.enabled", "true") + .put("fs.gcs.enabled", "true") .put("gcs.json-key", gcpCredentials) .put("hive.metastore.uri", hiveHadoop.getHiveMetastoreEndpoint().toString()) .put("iceberg.file-format", format.name()) diff --git a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergInvalidCompressionCodecs.java b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergInvalidCompressionCodecs.java index 1c58cbefea73..bfdd2e263234 100644 --- a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergInvalidCompressionCodecs.java +++ b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergInvalidCompressionCodecs.java @@ -54,7 +54,7 @@ protected QueryRunner createQueryRunner() .build()) .setIcebergProperties( ImmutableMap.builder() - .put("fs.native-s3.enabled", "true") + .put("fs.s3.enabled", "true") .put("s3.aws-access-key", MINIO_ROOT_USER) .put("s3.aws-secret-key", MINIO_ROOT_PASSWORD) .put("s3.region", MINIO_REGION) diff --git a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergMaterializedViewExpiredSnapshotCleanup.java b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergMaterializedViewExpiredSnapshotCleanup.java index 11ec26030438..ba32296cb1a1 100644 --- a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergMaterializedViewExpiredSnapshotCleanup.java +++ b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergMaterializedViewExpiredSnapshotCleanup.java @@ -87,7 +87,7 @@ protected QueryRunner createQueryRunner() return IcebergQueryRunner.builder() .setIcebergProperties(ImmutableMap.of( "iceberg.materialized-views.refresh-max-snapshots-to-expire", "5", - "fs.native-s3.enabled", "true", + "fs.s3.enabled", "true", "s3.aws-access-key", MINIO_ROOT_USER, "s3.aws-secret-key", MINIO_ROOT_PASSWORD, "s3.region", MINIO_REGION, diff 
--git a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergMotoConnectorSmokeTest.java b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergMotoConnectorSmokeTest.java index 67ea2b51fe5c..ddb0bb8c12aa 100644 --- a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergMotoConnectorSmokeTest.java +++ b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergMotoConnectorSmokeTest.java @@ -67,7 +67,7 @@ protected QueryRunner createQueryRunner() .put("hive.metastore.glue.aws-access-key", MOTO_ACCESS_KEY) .put("hive.metastore.glue.aws-secret-key", MOTO_SECRET_KEY) .put("hive.metastore.glue.default-warehouse-dir", "s3://%s/".formatted(bucketName)) - .put("fs.native-s3.enabled", "true") + .put("fs.s3.enabled", "true") .put("s3.region", MOTO_REGION) .put("s3.endpoint", moto.getEndpoint().toString()) .put("s3.aws-access-key", MOTO_ACCESS_KEY) diff --git a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergParquetWithBloomFiltersMixedCase.java b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergParquetWithBloomFiltersMixedCase.java index 62e69e3c79f1..3672a06aa5f3 100644 --- a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergParquetWithBloomFiltersMixedCase.java +++ b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergParquetWithBloomFiltersMixedCase.java @@ -56,7 +56,7 @@ protected QueryRunner createQueryRunner() QueryRunner queryRunner = IcebergQueryRunner.builder() .setIcebergProperties( ImmutableMap.builder() - .put("fs.native-s3.enabled", "true") + .put("fs.s3.enabled", "true") .put("s3.aws-access-key", MINIO_ROOT_USER) .put("s3.aws-secret-key", MINIO_ROOT_PASSWORD) .put("s3.region", MINIO_REGION) diff --git a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergPartitionEvolutionOnSameColumn.java b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergPartitionEvolutionOnSameColumn.java index 
93eaaedcb018..77dddb1adc8f 100644 --- a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergPartitionEvolutionOnSameColumn.java +++ b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergPartitionEvolutionOnSameColumn.java @@ -48,7 +48,7 @@ protected QueryRunner createQueryRunner() QueryRunner queryRunner = IcebergQueryRunner.builder() .setIcebergProperties( ImmutableMap.builder() - .put("fs.native-s3.enabled", "true") + .put("fs.s3.enabled", "true") .put("s3.aws-access-key", MINIO_ROOT_USER) .put("s3.aws-secret-key", MINIO_ROOT_PASSWORD) .put("s3.region", MINIO_REGION) diff --git a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergReadVersionedTableByTemporal.java b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergReadVersionedTableByTemporal.java index d2556b3ee485..27686289a037 100644 --- a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergReadVersionedTableByTemporal.java +++ b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestIcebergReadVersionedTableByTemporal.java @@ -51,7 +51,7 @@ protected QueryRunner createQueryRunner() QueryRunner queryRunner = IcebergQueryRunner.builder() .setIcebergProperties( ImmutableMap.builder() - .put("fs.native-s3.enabled", "true") + .put("fs.s3.enabled", "true") .put("s3.aws-access-key", MINIO_ROOT_USER) .put("s3.aws-secret-key", MINIO_ROOT_PASSWORD) .put("s3.region", MINIO_REGION) diff --git a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestSharedHiveThriftMetastore.java b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestSharedHiveThriftMetastore.java index 7faa88e0f115..b334d567085f 100644 --- a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestSharedHiveThriftMetastore.java +++ b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/TestSharedHiveThriftMetastore.java @@ -83,7 +83,7 @@ protected QueryRunner createQueryRunner() .put("iceberg.catalog.type", "HIVE_METASTORE") 
.put("hive.metastore.uri", hiveMinioDataLake.getHiveHadoop().getHiveMetastoreEndpoint().toString()) .put("hive.metastore.thrift.client.read-timeout", "1m") // read timed out sometimes happens with the default timeout - .put("fs.native-s3.enabled", "true") + .put("fs.s3.enabled", "true") .put("s3.aws-access-key", MINIO_ROOT_USER) .put("s3.aws-secret-key", MINIO_ROOT_PASSWORD) .put("s3.region", MINIO_REGION) @@ -101,7 +101,7 @@ protected QueryRunner createQueryRunner() .put("iceberg.catalog.type", "HIVE_METASTORE") .put("hive.metastore.uri", hiveMinioDataLake.getHiveHadoop().getHiveMetastoreEndpoint().toString()) .put("hive.metastore.thrift.client.read-timeout", "1m") // read timed out sometimes happens with the default timeout - .put("fs.native-s3.enabled", "true") + .put("fs.s3.enabled", "true") .put("s3.aws-access-key", MINIO_ROOT_USER) .put("s3.aws-secret-key", MINIO_ROOT_PASSWORD) .put("s3.region", MINIO_REGION) @@ -118,7 +118,7 @@ protected QueryRunner createQueryRunner() Map hiveProperties = ImmutableMap.builder() .put("hive.metastore", "thrift") .put("hive.metastore.uri", hiveMinioDataLake.getHiveHadoop().getHiveMetastoreEndpoint().toString()) - .put("fs.native-s3.enabled", "true") + .put("fs.s3.enabled", "true") .put("s3.aws-access-key", MINIO_ROOT_USER) .put("s3.aws-secret-key", MINIO_ROOT_PASSWORD) .put("s3.region", MINIO_REGION) diff --git a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/glue/TestIcebergGlueCatalogConnectorSmokeTest.java b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/glue/TestIcebergGlueCatalogConnectorSmokeTest.java index 5b59fa3544c2..b095d337cbce 100644 --- a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/glue/TestIcebergGlueCatalogConnectorSmokeTest.java +++ b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/glue/TestIcebergGlueCatalogConnectorSmokeTest.java @@ -79,7 +79,7 @@ protected QueryRunner createQueryRunner() "iceberg.file-format", format.name(), 
"iceberg.catalog.type", "glue", "hive.metastore.glue.default-warehouse-dir", schemaPath(), - "fs.native-s3.enabled", "true", + "fs.s3.enabled", "true", "iceberg.register-table-procedure.enabled", "true", "iceberg.writer-sort-buffer-size", "1MB")) .setSchemaInitializer( diff --git a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/glue/TestIcebergS3AndGlueMetastoreTest.java b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/glue/TestIcebergS3AndGlueMetastoreTest.java index a9d05c8a9f5f..465482de11de 100644 --- a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/glue/TestIcebergS3AndGlueMetastoreTest.java +++ b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/glue/TestIcebergS3AndGlueMetastoreTest.java @@ -49,7 +49,7 @@ protected QueryRunner createQueryRunner() .setIcebergProperties(ImmutableMap.builder() .put("iceberg.catalog.type", "glue") .put("hive.metastore.glue.default-warehouse-dir", schemaPath()) - .put("fs.native-s3.enabled", "true") + .put("fs.s3.enabled", "true") .buildOrThrow()) .setSchemaInitializer(SchemaInitializer.builder() .withSchemaName(schemaName) diff --git a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/hms/TestIcebergHiveCatalogWithoutLock.java b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/hms/TestIcebergHiveCatalogWithoutLock.java index f2d03c4f3d02..48043fd3b8f9 100644 --- a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/hms/TestIcebergHiveCatalogWithoutLock.java +++ b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/hms/TestIcebergHiveCatalogWithoutLock.java @@ -51,7 +51,7 @@ protected QueryRunner createQueryRunner() .put("iceberg.catalog.type", "HIVE_METASTORE") .put("hive.metastore.uri", hiveMinioDataLake.getHiveMetastoreEndpoint().toString()) .put("iceberg.hive-catalog.locking-enabled", "false") - .put("fs.native-s3.enabled", "true") + .put("fs.s3.enabled", "true") 
.put("s3.aws-access-key", MINIO_ROOT_USER) .put("s3.aws-secret-key", MINIO_ROOT_PASSWORD) .put("s3.region", MINIO_REGION) diff --git a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/rest/TestIcebergAbfsVendingRestCatalogConnectorSmokeTest.java b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/rest/TestIcebergAbfsVendingRestCatalogConnectorSmokeTest.java index a0219689db36..001f358e0901 100644 --- a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/rest/TestIcebergAbfsVendingRestCatalogConnectorSmokeTest.java +++ b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/rest/TestIcebergAbfsVendingRestCatalogConnectorSmokeTest.java @@ -113,7 +113,7 @@ protected QueryRunner createQueryRunner() .put("iceberg.rest-catalog.vended-credentials-enabled", "true") .put("iceberg.rest-catalog.socket-timeout", "30s") .put("iceberg.writer-sort-buffer-size", "1MB") - .put("fs.native-azure.enabled", "true") + .put("fs.azure.enabled", "true") .put("azure.auth-type", "DEFAULT") .buildOrThrow()) .setInitialTables(REQUIRED_TPCH_TABLES) diff --git a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/rest/TestIcebergBigLakeMetastoreConnectorSmokeTest.java b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/rest/TestIcebergBigLakeMetastoreConnectorSmokeTest.java index 8b0571263b58..2618d3cfc221 100644 --- a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/rest/TestIcebergBigLakeMetastoreConnectorSmokeTest.java +++ b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/rest/TestIcebergBigLakeMetastoreConnectorSmokeTest.java @@ -91,7 +91,7 @@ protected QueryRunner createQueryRunner() .addIcebergProperty("iceberg.rest-catalog.view-endpoints-enabled", "false") .addIcebergProperty("iceberg.writer-sort-buffer-size", "1MB") .addIcebergProperty("iceberg.allowed-extra-properties", "write.metadata.delete-after-commit.enabled,write.metadata.previous-versions-max") - 
.addIcebergProperty("fs.native-gcs.enabled", "true") + .addIcebergProperty("fs.gcs.enabled", "true") .addIcebergProperty("gcs.json-key-file-path", gcpCredentialsFile.toString()) .setSchemaInitializer(SchemaInitializer.builder() .withSchemaName(SCHEMA) diff --git a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/rest/TestIcebergGcsVendingRestCatalogConnectorSmokeTest.java b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/rest/TestIcebergGcsVendingRestCatalogConnectorSmokeTest.java index 718fdb865950..aece49d9fbb7 100644 --- a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/rest/TestIcebergGcsVendingRestCatalogConnectorSmokeTest.java +++ b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/rest/TestIcebergGcsVendingRestCatalogConnectorSmokeTest.java @@ -116,7 +116,7 @@ protected QueryRunner createQueryRunner() .put("iceberg.rest-catalog.uri", restCatalog.catalogUri()) .put("iceberg.rest-catalog.vended-credentials-enabled", "true") .put("iceberg.writer-sort-buffer-size", "1MB") - .put("fs.native-gcs.enabled", "true") + .put("fs.gcs.enabled", "true") .put("gcs.auth-type", "APPLICATION_DEFAULT") .buildOrThrow()) .setInitialTables(REQUIRED_TPCH_TABLES) diff --git a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/rest/TestIcebergS3TablesConnectorSmokeTest.java b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/rest/TestIcebergS3TablesConnectorSmokeTest.java index 469ccbf13d8c..7138f0b08ffe 100644 --- a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/rest/TestIcebergS3TablesConnectorSmokeTest.java +++ b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/rest/TestIcebergS3TablesConnectorSmokeTest.java @@ -75,7 +75,7 @@ protected QueryRunner createQueryRunner() .addIcebergProperty("iceberg.rest-catalog.security", "sigv4") .addIcebergProperty("iceberg.rest-catalog.signing-name", "glue") 
.addIcebergProperty("iceberg.writer-sort-buffer-size", "1MB") - .addIcebergProperty("fs.native-s3.enabled", "true") + .addIcebergProperty("fs.s3.enabled", "true") .addIcebergProperty("s3.region", AWS_REGION) .addIcebergProperty("s3.aws-access-key", AWS_ACCESS_KEY_ID) .addIcebergProperty("s3.aws-secret-key", AWS_SECRET_ACCESS_KEY) diff --git a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/rest/TestIcebergS3VendingRestCatalogConnectorSmokeTest.java b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/rest/TestIcebergS3VendingRestCatalogConnectorSmokeTest.java index bb8b94e45e47..e506dd022d75 100644 --- a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/rest/TestIcebergS3VendingRestCatalogConnectorSmokeTest.java +++ b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/rest/TestIcebergS3VendingRestCatalogConnectorSmokeTest.java @@ -122,7 +122,7 @@ protected QueryRunner createQueryRunner() .put("iceberg.rest-catalog.uri", "http://" + restCatalogBackendContainer.getRestCatalogEndpoint()) .put("iceberg.rest-catalog.vended-credentials-enabled", "true") .put("iceberg.writer-sort-buffer-size", "1MB") - .put("fs.native-s3.enabled", "true") + .put("fs.s3.enabled", "true") .put("s3.region", MINIO_REGION) .put("s3.endpoint", minio.getMinioAddress()) .put("s3.path-style-access", "true") diff --git a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/snowflake/TestIcebergSnowflakeCatalogConnectorSmokeTest.java b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/snowflake/TestIcebergSnowflakeCatalogConnectorSmokeTest.java index f22d6042c018..bd98945268aa 100644 --- a/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/snowflake/TestIcebergSnowflakeCatalogConnectorSmokeTest.java +++ b/plugin/trino-iceberg/src/test/java/io/trino/plugin/iceberg/catalog/snowflake/TestIcebergSnowflakeCatalogConnectorSmokeTest.java @@ -100,7 +100,7 @@ REGIONKEY NUMBER(38,0), } Map 
properties = ImmutableMap.builder() - .put("fs.native-s3.enabled", "true") + .put("fs.s3.enabled", "true") .put("s3.aws-access-key", S3_ACCESS_KEY) .put("s3.aws-secret-key", S3_SECRET_KEY) .put("s3.region", S3_REGION) diff --git a/plugin/trino-lakehouse/src/test/java/io/trino/plugin/lakehouse/BaseLakehouseConnectorSmokeTest.java b/plugin/trino-lakehouse/src/test/java/io/trino/plugin/lakehouse/BaseLakehouseConnectorSmokeTest.java index b5f2ae98474c..633ae6dcb568 100644 --- a/plugin/trino-lakehouse/src/test/java/io/trino/plugin/lakehouse/BaseLakehouseConnectorSmokeTest.java +++ b/plugin/trino-lakehouse/src/test/java/io/trino/plugin/lakehouse/BaseLakehouseConnectorSmokeTest.java @@ -55,7 +55,7 @@ protected QueryRunner createQueryRunner() .addLakehouseProperty("hive.metastore", "thrift") .addLakehouseProperty("hive.metastore.uri", hiveMinio.getHiveMetastoreEndpoint().toString()) .addLakehouseProperty("fs.hadoop.enabled", "true") - .addLakehouseProperty("fs.native-s3.enabled", "true") + .addLakehouseProperty("fs.s3.enabled", "true") .addLakehouseProperty("s3.aws-access-key", MINIO_ROOT_USER) .addLakehouseProperty("s3.aws-secret-key", MINIO_ROOT_PASSWORD) .addLakehouseProperty("s3.region", MINIO_REGION) diff --git a/plugin/trino-lakehouse/src/test/java/io/trino/plugin/lakehouse/TestLakehouseConnectorTest.java b/plugin/trino-lakehouse/src/test/java/io/trino/plugin/lakehouse/TestLakehouseConnectorTest.java index 9f83570222ce..71b33dd7c8db 100644 --- a/plugin/trino-lakehouse/src/test/java/io/trino/plugin/lakehouse/TestLakehouseConnectorTest.java +++ b/plugin/trino-lakehouse/src/test/java/io/trino/plugin/lakehouse/TestLakehouseConnectorTest.java @@ -71,7 +71,7 @@ protected QueryRunner createQueryRunner() .addExtraProperty("sql.default-function-schema", "functions") .addLakehouseProperty("hive.metastore.uri", hiveMinio.getHiveMetastoreEndpoint().toString()) .addLakehouseProperty("fs.hadoop.enabled", "true") - .addLakehouseProperty("fs.native-s3.enabled", "true") + 
.addLakehouseProperty("fs.s3.enabled", "true") .addLakehouseProperty("s3.aws-access-key", MINIO_ROOT_USER) .addLakehouseProperty("s3.aws-secret-key", MINIO_ROOT_PASSWORD) .addLakehouseProperty("s3.region", MINIO_REGION) diff --git a/plugin/trino-lakehouse/src/test/java/io/trino/plugin/lakehouse/TestLakehouseFileConnectorSmokeTest.java b/plugin/trino-lakehouse/src/test/java/io/trino/plugin/lakehouse/TestLakehouseFileConnectorSmokeTest.java index 45de4bf98792..39745b384d72 100644 --- a/plugin/trino-lakehouse/src/test/java/io/trino/plugin/lakehouse/TestLakehouseFileConnectorSmokeTest.java +++ b/plugin/trino-lakehouse/src/test/java/io/trino/plugin/lakehouse/TestLakehouseFileConnectorSmokeTest.java @@ -44,7 +44,7 @@ protected QueryRunner createQueryRunner() return LakehouseQueryRunner.builder() .addLakehouseProperty("hive.metastore", "file") .addLakehouseProperty("hive.metastore.catalog.dir", "s3://test-bucket/") - .addLakehouseProperty("fs.native-s3.enabled", "true") + .addLakehouseProperty("fs.s3.enabled", "true") .addLakehouseProperty("s3.region", MOTO_REGION) .addLakehouseProperty("s3.endpoint", moto.getEndpoint().toString()) .addLakehouseProperty("s3.aws-access-key", MOTO_ACCESS_KEY) diff --git a/plugin/trino-lakehouse/src/test/java/io/trino/plugin/lakehouse/TestLakehouseMotoConnectorSmokeTest.java b/plugin/trino-lakehouse/src/test/java/io/trino/plugin/lakehouse/TestLakehouseMotoConnectorSmokeTest.java index 02754305adeb..03eec7d564db 100644 --- a/plugin/trino-lakehouse/src/test/java/io/trino/plugin/lakehouse/TestLakehouseMotoConnectorSmokeTest.java +++ b/plugin/trino-lakehouse/src/test/java/io/trino/plugin/lakehouse/TestLakehouseMotoConnectorSmokeTest.java @@ -47,7 +47,7 @@ protected QueryRunner createQueryRunner() .addLakehouseProperty("hive.metastore.glue.aws-access-key", MOTO_ACCESS_KEY) .addLakehouseProperty("hive.metastore.glue.aws-secret-key", MOTO_SECRET_KEY) .addLakehouseProperty("hive.metastore.glue.default-warehouse-dir", "s3://test-bucket/") - 
.addLakehouseProperty("fs.native-s3.enabled", "true") + .addLakehouseProperty("fs.s3.enabled", "true") .addLakehouseProperty("s3.region", MOTO_REGION) .addLakehouseProperty("s3.endpoint", moto.getEndpoint().toString()) .addLakehouseProperty("s3.aws-access-key", MOTO_ACCESS_KEY) diff --git a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-azure/delta.properties b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-azure/delta.properties index 008ffc7dcbce..f09c462bbf9d 100644 --- a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-azure/delta.properties +++ b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-azure/delta.properties @@ -1,6 +1,6 @@ connector.name=delta_lake hive.metastore.uri=thrift://hadoop-master:9083 fs.hadoop.enabled=false -fs.native-azure.enabled=true +fs.azure.enabled=true azure.auth-type=ACCESS_KEY azure.access-key=${ENV:ABFS_ACCESS_KEY} diff --git a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-azure/hive.properties b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-azure/hive.properties index afb5386e6850..003d9f0396ef 100644 --- a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-azure/hive.properties +++ b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-azure/hive.properties @@ -1,7 +1,7 @@ connector.name=hive hive.metastore.uri=thrift://hadoop-master:9083 fs.hadoop.enabled=false -fs.native-azure.enabled=true +fs.azure.enabled=true azure.auth-type=ACCESS_KEY azure.access-key=${ENV:ABFS_ACCESS_KEY} hive.non-managed-table-writes-enabled=true diff --git 
a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-azure/iceberg.properties b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-azure/iceberg.properties index 7b608dd8658e..cfea3234b933 100644 --- a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-azure/iceberg.properties +++ b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-azure/iceberg.properties @@ -2,6 +2,6 @@ connector.name=iceberg hive.metastore.uri=thrift://hadoop-master:9083 iceberg.file-format=PARQUET fs.hadoop.enabled=false -fs.native-azure.enabled=true +fs.azure.enabled=true azure.auth-type=ACCESS_KEY azure.access-key=${ENV:ABFS_ACCESS_KEY} diff --git a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-gcs/delta.properties b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-gcs/delta.properties index 94f99cc7410b..5dc36bd5cde5 100644 --- a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-gcs/delta.properties +++ b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-gcs/delta.properties @@ -1,5 +1,5 @@ connector.name=delta_lake hive.metastore.uri=thrift://hadoop-master:9083 fs.hadoop.enabled=false -fs.native-gcs.enabled=true +fs.gcs.enabled=true gcs.json-key=${ENV:GCP_CREDENTIALS} diff --git a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-gcs/hive.properties b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-gcs/hive.properties index 5a1d1b9d7f96..b1c384c493ab 100644 --- 
a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-gcs/hive.properties +++ b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-gcs/hive.properties @@ -1,7 +1,7 @@ connector.name=hive hive.metastore.uri=thrift://hadoop-master:9083 fs.hadoop.enabled=false -fs.native-gcs.enabled=true +fs.gcs.enabled=true gcs.json-key=${ENV:GCP_CREDENTIALS} hive.non-managed-table-writes-enabled=true hive.parquet.time-zone=UTC diff --git a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-gcs/iceberg.properties b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-gcs/iceberg.properties index 19e6855670fa..e71457707b2a 100644 --- a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-gcs/iceberg.properties +++ b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-gcs/iceberg.properties @@ -2,5 +2,5 @@ connector.name=iceberg hive.metastore.uri=thrift://hadoop-master:9083 iceberg.file-format=PARQUET fs.hadoop.enabled=false -fs.native-gcs.enabled=true +fs.gcs.enabled=true gcs.json-key=${ENV:GCP_CREDENTIALS} diff --git a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-hive4/trino/catalog/hive.properties b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-hive4/trino/catalog/hive.properties index 9f1d8a437c84..6775b462a670 100644 --- a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-hive4/trino/catalog/hive.properties +++ 
b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-hive4/trino/catalog/hive.properties
@@ -1,7 +1,7 @@
 connector.name=hive
 hive.metastore.uri=thrift://metastore:9083
 hive.non-managed-table-writes-enabled=true
-fs.native-s3.enabled=true
+fs.s3.enabled=true
 fs.hadoop.enabled=false
 s3.region=us-east-1
 s3.aws-access-key=minio-access-key
diff --git a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-iceberg-minio-cached/iceberg.properties b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-iceberg-minio-cached/iceberg.properties
index 1db25e62383c..0f0cc6d10ab0 100644
--- a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-iceberg-minio-cached/iceberg.properties
+++ b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-iceberg-minio-cached/iceberg.properties
@@ -3,7 +3,7 @@ hive.metastore.uri=thrift://hadoop-master:9083
 fs.cache.enabled=true
 fs.cache.directories=/tmp/cache/iceberg
 fs.cache.max-disk-usage-percentages=90
-fs.native-s3.enabled=true
+fs.s3.enabled=true
 fs.hadoop.enabled=false
 s3.region=us-east-1
 s3.aws-access-key=minio-access-key
diff --git a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-minio-data-lake-cached/delta.properties b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-minio-data-lake-cached/delta.properties
index 42ad6d316e8a..ec5e25d22ae1 100644
--- a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-minio-data-lake-cached/delta.properties
+++ b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-minio-data-lake-cached/delta.properties
@@ -1,6 +1,6 @@
 connector.name=delta_lake
 hive.metastore.uri=thrift://hadoop-master:9083
-fs.native-s3.enabled=true
+fs.s3.enabled=true
 fs.hadoop.enabled=false
 s3.region=us-east-1
 s3.aws-access-key=minio-access-key
diff --git a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-minio-data-lake-task-retries-filesystem/iceberg.properties b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-minio-data-lake-task-retries-filesystem/iceberg.properties
index 61aeb5b65cb9..3070dc1615da 100644
--- a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-minio-data-lake-task-retries-filesystem/iceberg.properties
+++ b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-minio-data-lake-task-retries-filesystem/iceberg.properties
@@ -1,6 +1,6 @@
 connector.name=iceberg
 hive.metastore.uri=thrift://hadoop-master:9083
-fs.native-s3.enabled=true
+fs.s3.enabled=true
 fs.hadoop.enabled=false
 s3.region=us-east-1
 s3.aws-access-key=minio-access-key
diff --git a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-minio-data-lake/delta.properties b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-minio-data-lake/delta.properties
index 84370b5f41bc..517f5d197edf 100644
--- a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-minio-data-lake/delta.properties
+++ b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-minio-data-lake/delta.properties
@@ -1,6 +1,6 @@
 connector.name=delta_lake
 hive.metastore.uri=thrift://hadoop-master:9083
-fs.native-s3.enabled=true
+fs.s3.enabled=true
 fs.hadoop.enabled=false
 s3.region=us-east-1
 s3.aws-access-key=minio-access-key
diff --git a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-minio-data-lake/hive.properties b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-minio-data-lake/hive.properties
index 5d208d1e8652..65a80bc1928d 100644
--- a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-minio-data-lake/hive.properties
+++ b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-minio-data-lake/hive.properties
@@ -1,6 +1,6 @@
 connector.name=hive
 hive.metastore.uri=thrift://hadoop-master:9083
-fs.native-s3.enabled=true
+fs.s3.enabled=true
 fs.hadoop.enabled=false
 s3.region=us-east-1
 s3.aws-access-key=minio-access-key
diff --git a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-minio-data-lake/iceberg.properties b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-minio-data-lake/iceberg.properties
index 61aeb5b65cb9..3070dc1615da 100644
--- a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-minio-data-lake/iceberg.properties
+++ b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/multinode-minio-data-lake/iceberg.properties
@@ -1,6 +1,6 @@
 connector.name=iceberg
 hive.metastore.uri=thrift://hadoop-master:9083
-fs.native-s3.enabled=true
+fs.s3.enabled=true
 fs.hadoop.enabled=false
 s3.region=us-east-1
 s3.aws-access-key=minio-access-key
diff --git a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/singlenode-delta-lake-databricks/delta.properties b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/singlenode-delta-lake-databricks/delta.properties
index fafeb86a3268..b93db361d076 100644
--- a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/singlenode-delta-lake-databricks/delta.properties
+++ b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/singlenode-delta-lake-databricks/delta.properties
@@ -2,7 +2,7 @@ connector.name=delta_lake
 hive.metastore=glue
 hive.metastore.glue.region=${ENV:AWS_REGION}
 fs.hadoop.enabled=false
-fs.native-s3.enabled=true
+fs.s3.enabled=true
 # We need to give access to bucket owner (the AWS account integrated with Databricks), otherwise files won't be readable from Databricks
 s3.canned-acl=BUCKET_OWNER_FULL_CONTROL
 delta.enable-non-concurrent-writes=true
diff --git a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/singlenode-delta-lake-databricks/hive.properties b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/singlenode-delta-lake-databricks/hive.properties
index 2d7e969cb2db..27d5268e71b6 100644
--- a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/singlenode-delta-lake-databricks/hive.properties
+++ b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/singlenode-delta-lake-databricks/hive.properties
@@ -2,7 +2,7 @@ connector.name=hive
 hive.metastore=glue
 hive.metastore.glue.region=${ENV:AWS_REGION}
 fs.hadoop.enabled=false
-fs.native-s3.enabled=true
+fs.s3.enabled=true
 # We need to give access to bucket owner (the AWS account integrated with Databricks), otherwise files won't be readable from Databricks
 s3.canned-acl=BUCKET_OWNER_FULL_CONTROL
 hive.non-managed-table-writes-enabled=true
diff --git a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/singlenode-delta-lake-oss/delta.properties b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/singlenode-delta-lake-oss/delta.properties
index e0f7d714ff41..2caec5c7bbff 100644
--- a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/singlenode-delta-lake-oss/delta.properties
+++ b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/singlenode-delta-lake-oss/delta.properties
@@ -1,6 +1,6 @@
 connector.name=delta_lake
 hive.metastore.uri=thrift://hadoop-master:9083
-fs.native-s3.enabled=true
+fs.s3.enabled=true
 fs.hadoop.enabled=false
 s3.region=us-east-1
 s3.aws-access-key=minio-access-key
diff --git a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/singlenode-delta-lake-oss/hive.properties b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/singlenode-delta-lake-oss/hive.properties
index e8d8d57782e3..0112dcc3f49e 100644
--- a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/singlenode-delta-lake-oss/hive.properties
+++ b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/singlenode-delta-lake-oss/hive.properties
@@ -1,7 +1,7 @@
 connector.name=hive
 hive.metastore.uri=thrift://hadoop-master:9083
 hive.non-managed-table-writes-enabled=true
-fs.native-s3.enabled=true
+fs.s3.enabled=true
 fs.hadoop.enabled=false
 s3.region=us-east-1
 s3.aws-access-key=minio-access-key
diff --git a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/singlenode-hive-hudi-redirections/hudi.properties b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/singlenode-hive-hudi-redirections/hudi.properties
index 0b79740d103e..26fc885972fc 100644
--- a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/singlenode-hive-hudi-redirections/hudi.properties
+++ b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/singlenode-hive-hudi-redirections/hudi.properties
@@ -1,6 +1,6 @@
 connector.name=hudi
 hive.metastore.uri=thrift://hadoop-master:9083
-fs.native-s3.enabled=true
+fs.s3.enabled=true
 fs.hadoop.enabled=false
 s3.region=us-east-1
 s3.aws-access-key=minio-access-key
diff --git a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/singlenode-hudi/hudi.properties b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/singlenode-hudi/hudi.properties
index 0b79740d103e..26fc885972fc 100644
--- a/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/singlenode-hudi/hudi.properties
+++ b/testing/trino-product-tests-launcher/src/main/resources/docker/trino-product-tests/conf/environment/singlenode-hudi/hudi.properties
@@ -1,6 +1,6 @@
 connector.name=hudi
 hive.metastore.uri=thrift://hadoop-master:9083
-fs.native-s3.enabled=true
+fs.s3.enabled=true
 fs.hadoop.enabled=false
 s3.region=us-east-1
 s3.aws-access-key=minio-access-key
diff --git a/testing/trino-server-dev/etc/catalog/delta.properties b/testing/trino-server-dev/etc/catalog/delta.properties
index 47d196248ae9..7fddb455579b 100644
--- a/testing/trino-server-dev/etc/catalog/delta.properties
+++ b/testing/trino-server-dev/etc/catalog/delta.properties
@@ -10,7 +10,7 @@
 hive.metastore.uri=thrift://localhost:9083
 hive.hdfs.socks-proxy=localhost:1180
 # MinIO uses 9000 by default, but this change conflicts with Hadoop
-fs.native-s3.enabled=true
+fs.s3.enabled=true
 fs.hadoop.enabled=true
 s3.region=us-east-1
 s3.endpoint=http://localhost:9080
diff --git a/testing/trino-server-dev/etc/catalog/hudi.properties b/testing/trino-server-dev/etc/catalog/hudi.properties
index 27fe5aa1f518..d4009ddce6e1 100644
--- a/testing/trino-server-dev/etc/catalog/hudi.properties
+++ b/testing/trino-server-dev/etc/catalog/hudi.properties
@@ -16,7 +16,7 @@
 connector.name=hudi
 hive.metastore.uri=thrift://localhost:9083
-fs.native-s3.enabled=true
+fs.s3.enabled=true
 s3.region=us-east-1
 s3.endpoint=http://localhost:9080
 s3.path-style-access=true
diff --git a/testing/trino-tests/src/test/java/io/trino/sql/planner/IcebergCostBasedPlanTestSetup.java b/testing/trino-tests/src/test/java/io/trino/sql/planner/IcebergCostBasedPlanTestSetup.java
index e758d1a68432..2d85a545fa73 100644
--- a/testing/trino-tests/src/test/java/io/trino/sql/planner/IcebergCostBasedPlanTestSetup.java
+++ b/testing/trino-tests/src/test/java/io/trino/sql/planner/IcebergCostBasedPlanTestSetup.java
@@ -104,7 +104,7 @@ public ConnectorFactory createConnectorFactory()
         connectorConfiguration = ImmutableMap.builder()
                 .put("iceberg.catalog.type", TESTING_FILE_METASTORE.name())
                 .put("hive.metastore.catalog.dir", temporaryMetastoreDirectory.toString())
-                .put("fs.native-s3.enabled", "true")
+                .put("fs.s3.enabled", "true")
                 .put("fs.hadoop.enabled", "true")
                 .put("s3.aws-access-key", MINIO_ROOT_USER)
                 .put("s3.aws-secret-key", MINIO_ROOT_PASSWORD)