Support CTAS into Iceberg with timestamp(3)
See original GitHub issue.
Trino's date and time functions such as now() return 3 digits of subsecond precision (milliseconds), but Iceberg expects timestamps with 6 digits of subsecond precision (microseconds).
As a result, we can't create an Iceberg table from Trino:
create table iceberg.examples.test
as select now() as n;
Exception:
Timestamp precision (3) not supported for Iceberg. Use "timestamp(6) with time zone" instead.
Is it possible to make the default precision configurable, or to allow timestamps with precision (3) for Iceberg tables?
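One workaround suggested by the error message itself (a sketch, not an official recommendation; the table name is taken from the example above) is to cast the millisecond value to the precision Iceberg expects:

```sql
-- Hypothetical workaround: widen now(), which is timestamp(3) with time zone,
-- to the 6-digit precision Iceberg requires before running the CTAS.
create table iceberg.examples.test as
select cast(now() as timestamp(6) with time zone) as n;
```

This avoids the precision error, but as the discussion below notes, rewriting every script this way requires analyzing each one.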
Issue Analytics
- State:
- Created: 3 years ago
- Reactions: 2
- Comments: 9 (6 by maintainers)
Top Results From Across the Web
- Querying Iceberg table data and performing time travel: To run a time travel query, use FOR TIMESTAMP AS OF timestamp after the table name in the SELECT statement, as in the...
- Spark DDL - Apache Iceberg: Iceberg supports CTAS as an atomic operation when using a SparkCatalog. CTAS is supported, but is not atomic, when using SparkSessionCatalog...
- Row level transactions on S3 Data Lake - Dev Genius: CTAS can be used to create an Iceberg table from a normal Athena table ... NOT_SUPPORTED: Timestamp precision (3) not supported for Iceberg...
- A Hands-On Look at the Structure of an Apache Iceberg Table: The version-hint.text file contains the reference to the latest metadata file. The v1.metadata.json is the metadata file.
- Apache Iceberg features in CDW - Cloudera Documentation: From Hive, you can roll back the table data to the state at an older table snapshot, or to a timestamp. From Impala,...
@findepi we have thousands of scripts (as do many other companies, I believe) that we are migrating to Iceberg automatically. We can't just mechanically replace all occurrences of now() or current_timestamp() with current_timestamp(6), as that may break other places unrelated to Iceberg and requires semantic analysis of each script. It would take the data developers hundreds of hours to change the scripts and validate them. We are hacking the Iceberg connector to add the conversion there, but we'd like to avoid doing that as much as possible.
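The conversion such a connector hack would perform is a lossless widening from millisecond to microsecond precision, since every timestamp(3) value maps exactly onto a timestamp(6) value. A minimal sketch in Python (hypothetical helper name, not the connector's actual code):

```python
def widen_millis_to_micros(epoch_millis: int) -> int:
    # timestamp(3) -> timestamp(6) is a lossless widening:
    # each millisecond is exactly 1000 microseconds, so the extra
    # three digits of precision are simply trailing zeros.
    return epoch_millis * 1_000


# 2021-01-01 00:00:00.123 UTC as epoch milliseconds
millis = 1_609_459_200_123
micros = widen_millis_to_micros(millis)  # 1_609_459_200_123_000
```

Because the widening is exact, performing it implicitly in the connector would not change query results; the debate is only about where that coercion should live.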
I have renamed this issue and started a discussion in Slack.