Chord Data Source Ingestion Guidelines
Your Data Warehouse → Chord

Getting started with Chord is designed to be frictionless. We meet your data where it is, allowing you to unlock AI-driven insights, leverage the Chord Copilot, and activate your data downstream, all while leveraging your existing infrastructure investments. All we need is for your data to land in cloud storage as files; we take it from there.

For Snowflake users, please refer to docid\ o lfn3hs3wslrifj ykki.

If you are not using Snowflake, we can still ingest data from your warehouse (Athena, BigQuery, Redshift, Postgres, Databricks, and others) as Parquet files in cloud storage (S3, etc.); a minimal export sketch in Python appears at the end of this guide. For step-by-step instructions for integrating external cloud storage services with Chord's data warehouse (Snowflake), see docid\ bl5mzd2xgyparsf 6 yng. Note that Chord's schema expectations and IAM configurations can vary by cloud provider.

The Approach

Chord provides two options for sharing data with us.

Option 1: Bring Your Own Models

Send us your data as is; Chord will handle the rest. Export your current tables (orders, customers, products, etc.) exactly as they exist in your warehouse today. You do not need to rename columns, change data types, or map relationships.

Your effort: Low. A simple export.

How it works: Chord ingests your raw data and adapts to your schema to generate insights. Chord AI and Copilot understand your schema by reading table and column names, and you can query, analyze, segment, and activate immediately.

Best for: Teams that want to test the platform quickly without engineering overhead and already have clean, modelled data in a data warehouse.

Note: Some of Chord's data models will not be available if your brand chooses this option.

Option 2: Bring Your Data and Leverage Chord's Models

Map your data to Chord's OMS schema (https://docs.chord.co/oms-schema) and unlock access to Chord's full analytics suite. This option is for teams that want standardized reporting, attribution, and activation across tools.

Your effort: Moderate. Requires writing SQL transformations (see the transformation sketch at the end of this guide).

How it works: You ensure your data matches our specifications before it leaves your environment, and Chord builds the unified tables. You still own the raw data; Chord owns the modelling layer.

Best for: Teams that want standardized metrics across teams, plug-and-play attribution and lifecycle reporting, and activation-ready datasets.

Required Entities

At a minimum, Chord recommends providing:

- Orders
- Customers
- Order line items
- Products
- Variants
- Payments
- Returns
- Refunds
- Shipments
- Subscriptions

Setup and Automation

When processing a high volume of files from object storage, it is helpful to name files according to a hierarchy that can be easily queried. The following key format is advised:

    <source>/<collection>/<year>/<month>/<day>/<hh:mm:ss>/<partition>-<id>.parquet

Here is a specific example of what this might look like:

    oms/orders/2025/08/11/20:29:25/2025-08-10-14:00-e791c45e.parquet

This means the file belongs to the orders collection from your custom OMS platform, it was uploaded on 2025-08-11 at 20:29:25 UTC, and the data covers the hour of 2025-08-10 14:00. It is also helpful to include a unique identifier for the file (in this case, the shortened GUID e791c45e) corresponding to the compute process that produced the file.
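To make the convention concrete, here is a minimal Python sketch of a helper that builds a key in this format. The function name and the use of a shortened UUID4 as the file identifier are illustrative assumptions, not part of Chord's specification.

```python
from datetime import datetime, timezone
from uuid import uuid4


def build_object_key(source: str, collection: str, partition: str) -> str:
    """Build a key of the form
    <source>/<collection>/<year>/<month>/<day>/<hh:mm:ss>/<partition>-<id>.parquet
    """
    now = datetime.now(timezone.utc)  # upload time, in UTC
    file_id = uuid4().hex[:8]         # shortened GUID, e.g. "e791c45e"
    return (
        f"{source}/{collection}/"
        f"{now:%Y/%m/%d/%H:%M:%S}/"
        f"{partition}-{file_id}.parquet"
    )


# e.g. "oms/orders/2025/08/11/20:29:25/2025-08-10-14:00-e791c45e.parquet"
print(build_object_key("oms", "orders", partition="2025-08-10-14:00"))
```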
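Here is the export path described above for non-Snowflake warehouses, as a minimal sketch: query a table, write the result to a Parquet file, and upload it to S3 under a key built with the helper above. The connection string, bucket name, and the choice of pandas, pyarrow, and boto3 are assumptions for illustration; any client that can land Parquet files in your cloud storage works.

```python
import boto3
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical names: replace with your warehouse DSN and the bucket shared with Chord.
ENGINE = create_engine("postgresql://user:pass@warehouse.example.com/analytics")
BUCKET = "my-chord-ingest-bucket"


def export_collection(table: str, partition: str) -> None:
    """Export a table as-is (Option 1: no renaming or remapping needed)."""
    # In practice, add a WHERE clause to export only the partition's hour of data.
    df = pd.read_sql(f"SELECT * FROM {table}", ENGINE)
    key = build_object_key("oms", table, partition)   # helper from the previous sketch
    local_path = f"/tmp/{key.replace('/', '_')}"
    df.to_parquet(local_path, index=False)            # requires pyarrow or fastparquet
    boto3.client("s3").upload_file(local_path, BUCKET, key)


export_collection("orders", partition="2025-08-10-14:00")
```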
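Finally, a sketch of the kind of SQL transformation Option 2 involves, run against an in-memory SQLite database so the example is self-contained. The target column names (order_id, customer_id, total_amount, placed_at) are hypothetical stand-ins; the authoritative names and types live in the OMS schema documentation linked above.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript(
    """
    -- A source table exactly as it might exist in your warehouse today.
    CREATE TABLE raw_orders (id TEXT, buyer TEXT, grand_total_cents INTEGER, created TEXT);
    INSERT INTO raw_orders VALUES ('o-1', 'c-42', 12999, '2025-08-10 14:03:00');

    -- Option 2: rename, retype, and reshape to match the target spec
    -- (hypothetical target columns; consult the OMS schema for the real ones).
    CREATE TABLE orders AS
    SELECT
        id                        AS order_id,
        buyer                     AS customer_id,
        grand_total_cents / 100.0 AS total_amount,  -- cents -> currency units
        created                   AS placed_at
    FROM raw_orders;
    """
)
print(con.execute("SELECT * FROM orders").fetchall())
```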