Fix incorrect partitionValues_parsed with id & name column mapping in Delta Lake #24129

Draft · wants to merge 1 commit into base: master
io.trino.parquet.ParquetTestUtils
@@ -53,6 +53,7 @@
import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.base.Throwables.throwIfUnchecked;
import static com.google.common.collect.ImmutableList.toImmutableList;
import static io.trino.memory.context.AggregatedMemoryContext.newSimpleAggregatedMemoryContext;
import static io.trino.parquet.ParquetTypeUtils.constructField;
import static io.trino.parquet.ParquetTypeUtils.getColumnIO;
import static io.trino.parquet.ParquetTypeUtils.getDescriptors;
@@ -103,6 +104,16 @@ public static ParquetWriter createParquetWriter(OutputStream outputStream, Parqu
Optional.empty());
}

public static ParquetReader createParquetReader(
ParquetDataSource input,
ParquetMetadata parquetMetadata,
List<Type> types,
List<String> columnNames)
throws IOException
{
return createParquetReader(input, parquetMetadata, new ParquetReaderOptions(), newSimpleAggregatedMemoryContext(), types, columnNames, TupleDomain.all());
}

public static ParquetReader createParquetReader(
ParquetDataSource input,
ParquetMetadata parquetMetadata,
7 changes: 7 additions & 0 deletions plugin/trino-delta-lake/pom.xml
@@ -396,6 +396,13 @@
<scope>test</scope>
</dependency>

<dependency>
<groupId>io.trino</groupId>
<artifactId>trino-parquet</artifactId>
<type>test-jar</type>
<scope>test</scope>
</dependency>

<dependency>
<groupId>io.trino</groupId>
<artifactId>trino-parser</artifactId>
io.trino.plugin.deltalake.transactionlog.checkpoint.CheckpointSchemaManager
@@ -183,7 +183,7 @@ public RowType getAddEntryType(
List<DeltaLakeColumnHandle> partitionColumns = extractPartitionColumns(metadataEntry, protocolEntry, typeManager);
if (!partitionColumns.isEmpty()) {
List<RowType.Field> partitionValuesParsed = partitionColumns.stream()
-                .map(column -> RowType.field(column.columnName(), typeManager.getType(getTypeSignature(DeltaHiveTypeTranslator.toHiveType(column.type())))))
+                .map(column -> RowType.field(column.basePhysicalColumnName(), typeManager.getType(getTypeSignature(DeltaHiveTypeTranslator.toHiveType(column.type())))))

Contributor: Add a test in io.trino.plugin.deltalake.transactionlog.checkpoint.TestCheckpointEntryIterator.
.collect(toImmutableList());
addFields.add(RowType.field("partitionValues_parsed", RowType.from(partitionValuesParsed)));
}
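
For context, a minimal sketch (not part of the PR; the UUID below is illustrative) of why the fix swaps columnName() for basePhysicalColumnName(): with id or name column mapping, the data files and checkpoint readers know partition columns by their generated physical names, so the partitionValues_parsed row fields must be declared with those names rather than the logical ones:

import io.trino.spi.type.RowType;

import static io.trino.spi.type.IntegerType.INTEGER;

class PartitionValuesParsedNaming
{
    // Before the fix: the logical name the user sees
    static final RowType.Field LOGICAL = RowType.field("part", INTEGER);

    // After the fix: the physical name returned by basePhysicalColumnName(),
    // matching what is actually stored in the table's data files
    static final RowType.Field PHYSICAL = RowType.field("col-6d32b73c-d46b-47f3-aeee-b4ce2231c81f", INTEGER);
}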
io.trino.plugin.deltalake.transactionlog.checkpoint.CheckpointWriter
@@ -161,7 +161,7 @@ public void write(CheckpointEntries entries, TrinoOutputFile outputFile)
}
List<DeltaLakeColumnHandle> partitionColumns = extractPartitionColumns(entries.metadataEntry(), entries.protocolEntry(), typeManager);
List<RowType.Field> partitionValuesParsedFieldTypes = partitionColumns.stream()
-                .map(column -> RowType.field(column.columnName(), column.type()))
+                .map(column -> RowType.field(column.basePhysicalColumnName(), column.type()))

Contributor: Let's add a corresponding test in io.trino.plugin.deltalake.transactionlog.checkpoint.TestCheckpointWriter.

@ebyhr (Member, Author) Nov 14, 2024: Can you share the scenarios you want to cover in that class? I intentionally avoided it. Neither TestCheckpointWriter nor TestCheckpointEntryIterator is suitable for verifying the partitionValues_parsed field, because AddFileEntry doesn't hold the value.

Contributor: I was thinking about a test similar to io.trino.plugin.deltalake.transactionlog.checkpoint.TestCheckpointEntryIterator#testReadAddEntriesPartitionPruning, with corresponding resource files.
@findinpath (Contributor) Nov 14, 2024: Looks good now.

    "PartitionValues": {
      "col-6d32b73c-d46b-47f3-aeee-b4ce2231c81f": "30"
    },
    "PartitionValues_parsed": {
      "Col456d32b73c45d46b4547f345aeee45b4ce2231c81f": 30
    }

Contributor: Note the missing dashes.
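
A hedged reading of those missing dashes (an observation, not something stated in the PR): the runs of "45" in the parsed field name sit exactly where the dashes were, and '-' has character code 45, so the writer appears to sanitize the physical name by replacing each dash with its decimal code. A sketch that reproduces the observed name, capitalization aside:

class DashEncoding
{
    public static void main(String[] args)
    {
        String physical = "col-6d32b73c-d46b-47f3-aeee-b4ce2231c81f";
        // '-' widens to the int 45, so each dash becomes the literal digits "45"
        System.out.println(physical.replace("-", Integer.toString('-')));
        // prints: col456d32b73c45d46b4547f345aeee45b4ce2231c81f
    }
}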

.collect(toImmutableList());
for (AddFileEntry addFileEntry : entries.addFileEntries()) {
writeAddFileEntry(pageBuilder, addEntryType, addFileEntry, entries.metadataEntry(), entries.protocolEntry(), partitionColumns, partitionValuesParsedFieldTypes, writeStatsAsJson, writeStatsAsStruct);
@@ -30,14 +30,20 @@
import io.trino.parquet.metadata.FileMetadata;
import io.trino.parquet.metadata.ParquetMetadata;
import io.trino.parquet.reader.MetadataReader;
import io.trino.parquet.reader.ParquetReader;
import io.trino.plugin.deltalake.transactionlog.AddFileEntry;
import io.trino.plugin.deltalake.transactionlog.DeletionVectorEntry;
import io.trino.plugin.deltalake.transactionlog.DeltaLakeSchemaSupport.ColumnMappingMode;
import io.trino.plugin.deltalake.transactionlog.DeltaLakeTransactionLogEntry;
import io.trino.plugin.deltalake.transactionlog.MetadataEntry;
import io.trino.plugin.deltalake.transactionlog.ProtocolEntry;
import io.trino.plugin.deltalake.transactionlog.checkpoint.CheckpointSchemaManager;
import io.trino.plugin.deltalake.transactionlog.statistics.DeltaLakeFileStatistics;
import io.trino.plugin.hive.FileFormatDataSourceStats;
import io.trino.plugin.hive.parquet.TrinoParquetDataSource;
import io.trino.spi.Page;
import io.trino.spi.block.Block;
import io.trino.spi.type.RowType;
import io.trino.spi.type.TimeZoneKey;
import io.trino.testing.AbstractTestQueryFramework;
import io.trino.testing.MaterializedRow;
@@ -58,6 +64,7 @@
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.ZoneId;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Optional;
@@ -71,13 +78,16 @@
import static com.google.common.collect.MoreCollectors.onlyElement;
import static com.google.common.io.MoreFiles.deleteRecursively;
import static com.google.common.io.RecursiveDeleteOption.ALLOW_INSECURE;
import static io.trino.parquet.ParquetTestUtils.createParquetReader;
import static io.trino.plugin.deltalake.DeltaTestingConnectorSession.SESSION;
import static io.trino.plugin.deltalake.TestingDeltaLakeUtils.copyDirectoryContents;
import static io.trino.plugin.deltalake.transactionlog.DeltaLakeSchemaSupport.extractPartitionColumns;
import static io.trino.plugin.deltalake.transactionlog.DeltaLakeSchemaSupport.getColumnsMetadata;
import static io.trino.plugin.deltalake.transactionlog.checkpoint.TransactionLogTail.getEntriesFromJson;
import static io.trino.plugin.hive.HiveTestUtils.HDFS_ENVIRONMENT;
import static io.trino.plugin.hive.HiveTestUtils.HDFS_FILE_SYSTEM_STATS;
import static io.trino.testing.TestingNames.randomNameSuffix;
import static io.trino.type.InternalTypeManager.TESTING_TYPE_MANAGER;
import static java.lang.String.format;
import static java.time.ZoneOffset.UTC;
import static org.assertj.core.api.Assertions.assertThat;
@@ -270,6 +280,70 @@ private void testAddNestedColumnWithColumnMappingMode(String columnMappingMode)
.containsPattern("(delta\\.columnMapping\\.physicalName.*?){11}");
}

@Test // regression test for https://github.com/trinodb/trino/issues/24121
void testPartitionValuesParsedCheckpoint()
throws Exception
{
testPartitionValuesParsedCheckpoint(ColumnMappingMode.ID);
testPartitionValuesParsedCheckpoint(ColumnMappingMode.NAME);
testPartitionValuesParsedCheckpoint(ColumnMappingMode.NONE);
}

Member: Should we also have a product test in TestDeltaLakeColumnMappingMode to check reading/writing checkpoints by Trino/Delta?

private void testPartitionValuesParsedCheckpoint(ColumnMappingMode columnMappingMode)
        throws Exception
{
try (TestTable table = new TestTable(
getQueryRunner()::execute,
"test_checkpoint",
"(x int, part int) WITH (checkpoint_interval = 3, column_mapping_mode = '" + columnMappingMode + "', partitioned_by = ARRAY['part'])")) {
Member: Can you also add a test for other types (like DATE) that have different representations in partitionValues and partitionValues_parsed?
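A sketch of the difference the reviewer is pointing at (illustrative, not part of the PR): partitionValues always stores strings, while partitionValues_parsed stores typed values, and in Parquet a DATE is an INT32 counting days since the epoch:

import java.time.LocalDate;

class DatePartitionForms
{
    // partitionValues:        "part" -> "2024-11-14" (always a string)
    static final String STRING_FORM = "2024-11-14";

    // partitionValues_parsed: "part" -> 20041 (typed DATE: days since 1970-01-01)
    static final long PARSED_FORM = LocalDate.parse(STRING_FORM).toEpochDay();
}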

assertUpdate("INSERT INTO " + table.getName() + " VALUES (1, 10)", 1);
assertUpdate("INSERT INTO " + table.getName() + " VALUES (2, 20)", 1);
assertUpdate("INSERT INTO " + table.getName() + " VALUES (3, 30)", 1);

Path tableLocation = Path.of(getTableLocation(table.getName()).replace("file://", ""));
Path checkpoint = tableLocation.resolve("_delta_log/00000000000000000003.checkpoint.parquet");

MetadataEntry metadataEntry = loadMetadataEntry(0, tableLocation);
ProtocolEntry protocolEntry = loadProtocolEntry(0, tableLocation);

DeltaLakeColumnHandle partitionColumn = extractPartitionColumns(metadataEntry, protocolEntry, TESTING_TYPE_MANAGER).stream().collect(onlyElement());
String physicalColumnName = partitionColumn.basePhysicalColumnName();
if (columnMappingMode == ColumnMappingMode.ID || columnMappingMode == ColumnMappingMode.NAME) {
assertThat(physicalColumnName).matches(PHYSICAL_COLUMN_NAME_PATTERN);
}
else {
assertThat(physicalColumnName).isEqualTo("part");
}

int partitionValuesParsedFieldPosition = 6;
RowType addEntryType = new CheckpointSchemaManager(TESTING_TYPE_MANAGER).getAddEntryType(metadataEntry, protocolEntry, _ -> true, true, true, true);

RowType.Field partitionValuesParsedField = addEntryType.getFields().get(partitionValuesParsedFieldPosition);
assertThat(partitionValuesParsedField.getName().orElseThrow()).matches("partitionValues_parsed");
RowType partitionValuesParsedType = (RowType) partitionValuesParsedField.getType();
assertThat(partitionValuesParsedType.getFields().stream().collect(onlyElement()).getName().orElseThrow()).isEqualTo(physicalColumnName);

TrinoParquetDataSource dataSource = new TrinoParquetDataSource(new LocalInputFile(checkpoint.toFile()), new ParquetReaderOptions(), new FileFormatDataSourceStats());
ParquetMetadata parquetMetadata = MetadataReader.readFooter(dataSource, Optional.empty());
try (ParquetReader reader = createParquetReader(dataSource, parquetMetadata, ImmutableList.of(addEntryType), List.of("add"))) {
List<Integer> actual = new ArrayList<>();
Page page = reader.nextPage();
while (page != null) {
Block block = page.getBlock(0);
for (int i = 0; i < block.getPositionCount(); i++) {
List<?> add = (List<?>) addEntryType.getObjectValue(SESSION, block, i);
if (add == null) {
continue;
}
actual.add((Integer) ((List<?>) add.get(partitionValuesParsedFieldPosition)).stream().collect(onlyElement()));
}
page = reader.nextPage();
}
assertThat(actual).containsExactlyInAnyOrder(10, 20, 30);
}
}
}

/**
* @see deltalake.column_mapping_mode_id
* @see deltalake.column_mapping_mode_name
@@ -2136,6 +2210,16 @@ private static MetadataEntry loadMetadataEntry(long entryNumber, Path tableLocat
return transactionLog.getMetaData();
}

private static ProtocolEntry loadProtocolEntry(long entryNumber, Path tableLocation)
throws IOException
{
TrinoFileSystem fileSystem = new HdfsFileSystemFactory(HDFS_ENVIRONMENT, HDFS_FILE_SYSTEM_STATS).create(SESSION);
DeltaLakeTransactionLogEntry transactionLog = getEntriesFromJson(entryNumber, tableLocation.resolve("_delta_log").toString(), fileSystem).orElseThrow().stream()
.filter(log -> log.getProtocol() != null)
.collect(onlyElement());
return transactionLog.getProtocol();
}

private String getTableLocation(String tableName)
{
Pattern locationPattern = Pattern.compile(".*location = '(.*?)'.*", Pattern.DOTALL);
7 changes: 7 additions & 0 deletions pom.xml
@@ -1307,6 +1307,13 @@
<version>${project.version}</version>
</dependency>

<dependency>
<groupId>io.trino</groupId>
<artifactId>trino-parquet</artifactId>
<version>${project.version}</version>
<type>test-jar</type>
</dependency>

<dependency>
<groupId>io.trino</groupId>
<artifactId>trino-parquet</artifactId>