loadData
Loads data from a CSV file into an existing table when you add it to your changelog.
Uses
A value of NULL in a cell is converted to a database NULL rather than the string 'NULL'. Lines starting with a number sign (#) are treated as comments. You can change the comment pattern by specifying the commentLineStartsWith attribute. To disable comments, set commentLineStartsWith to an empty value.
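As a sketch, a loadData change that treats lines beginning with // as comments might look like the following (the changeset id, file path, and table name are hypothetical):

```xml
<!-- Hypothetical example: lines in users.csv starting with "//" are ignored. -->
<changeSet id="load-users-1" author="example">
    <loadData file="data/users.csv"
              tableName="users"
              commentLineStartsWith="//"/>
</changeSet>
```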
If the data type for a load column is set to NUMERIC, numbers are parsed in the US locale (for example: 123.45). Date/time values included in the CSV file should be in ISO format to be parsed correctly by Liquibase. Liquibase initially sets the date format to yyyy-MM-dd'T'HH:mm:ss and then checks for two special cases that override the date format string:
- If the string representing the date/time includes a period (.), the date format is changed to yyyy-MM-dd'T'HH:mm:ss.SSS.
- If the string representing the date/time includes a space, the date format is changed to yyyy-MM-dd HH:mm:ss.
Once the date format string is set, Liquibase calls the SimpleDateFormat.parse() method to parse the input string and return a date/time value. If parsing fails, a ParseException is thrown and the input string is treated as a String in the generated INSERT command.
If the UUID type is used, UUID values are stored as strings, and NULL cell values are supported.
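For illustration, a CSV file combining these rules might look like the following hypothetical example (the table and column names are assumptions; the first row's date uses the default format, the second includes fractional seconds, and the third uses a space separator):

```csv
# lines starting with "#" are treated as comments by default
id,name,created_at,external_id
1,alpha,2021-03-01T10:15:30,NULL
2,beta,2021-03-01T10:15:30.250,6f1c3a2e-0000-0000-0000-000000000001
3,gamma,2021-03-01 10:15:30,NULL
```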
Loading data with the loadData tag
By default, all CSV columns are used when generating SQL, even if they are not described in the columns property. To skip specific headers in the CSV file, set the type property to skip.
Imagine that you have a table where a, b, and c are column names, and 1, 2, and 3 are values. To load only the a and b columns, add a column configuration for c and set its type to "skip":
<column name="c" header="c" type="skip" />
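Put together, a sketch of a loadData change that loads only a and b from such a three-column CSV could look like this (the changeset id, file path, and table name are assumptions):

```xml
<changeSet id="load-ab-only" author="example">
    <loadData file="data/abc.csv" tableName="my_table">
        <!-- columns a and b are loaded with their CSV values; c is skipped -->
        <column name="c" header="c" type="skip"/>
    </loadData>
</changeSet>
```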
If you use the generateChangeLog command to re-create the current state of the database and want the data output as CSV files in a specific directory, add the dataOutputDirectory attribute to specify where the CSV files with the data will be kept:
liquibase --diffTypes=tables,functions,views,columns,indexes,foreignkeys,primarykeys,uniqueconstraints,data,storedprocedure,triggers,sequences --dataOutputDirectory=data generateChangeLog
If you don't use the --dataOutputDirectory flag while running the command, or the loadData tag is not added to the changelog, the insert statements appear directly in your changelog:
liquibase --diffTypes=tables,functions,views,columns,indexes,foreignkeys,primarykeys,uniqueconstraints,data,storedprocedure,triggers,sequences --changeLogFile=myChangelog.xml generateChangeLog

<changeSet author="support.liquibase.net (generated)" id="1595949192529-5">
    <insert tableName="COUNTRIES">
        <column name="COUNTRY_ID" value="1 "/>
        <column name="COUNTRY_NAME" value="IS"/>
        <column name="REGION_ID" valueNumeric="2"/>
    </insert>
    <insert tableName="COUNTRIES">
        <column name="COUNTRY_ID" value="2 "/>
        <column name="COUNTRY_NAME" value="US"/>
        <column name="REGION_ID" valueNumeric="5"/>
    </insert>
    <insert tableName="COUNTRIES">
        <column name="COUNTRY_ID" value="3 "/>
        <column name="COUNTRY_NAME" value="HN"/>
        <column name="REGION_ID" valueNumeric="6"/>
    </insert>
    <insert tableName="COUNTRIES">
        <column name="COUNTRY_ID" value="4 "/>
        <column name="COUNTRY_NAME" value="IS"/>
        <column name="REGION_ID" valueNumeric="2"/>
    </insert>
    <insert tableName="COUNTRIES">
        <column name="COUNTRY_ID" value="5 "/>
        <column name="COUNTRY_NAME" value="US"/>
        <column name="REGION_ID" valueNumeric="5"/>
    </insert>
    <insert tableName="COUNTRIES">
        <column name="COUNTRY_ID" value="6 "/>
        <column name="COUNTRY_NAME" value="HN"/>
        <column name="REGION_ID" valueNumeric="6"/>
    </insert>
    <insert tableName="COUNTRIES">
        <column name="COUNTRY_ID" value="7 "/>
        <column name="COUNTRY_NAME" value="IS"/>
        <column name="REGION_ID" valueNumeric="2"/>
    </insert>
    <insert tableName="COUNTRIES">
        <column name="COUNTRY_ID" value="8 "/>
        <column name="COUNTRY_NAME" value="US"/>
        <column name="REGION_ID" valueNumeric="5"/>
    </insert>
    <insert tableName="COUNTRIES">
        <column name="COUNTRY_ID" value="9 "/>
        <column name="COUNTRY_NAME" value="HN"/>
        <column name="REGION_ID" valueNumeric="6"/>
    </insert>
    <insert tableName="COUNTRIES">
        <column name="COUNTRY_ID" value="10"/>
        <column name="COUNTRY_NAME" value="IS"/>
        <column name="REGION_ID" valueNumeric="2"/>
    </insert>
    <insert tableName="COUNTRIES">
        <column name="COUNTRY_ID" value="11"/>
        <column name="COUNTRY_NAME" value="US"/>
        <column name="REGION_ID" valueNumeric="5"/>
    </insert>
    <insert tableName="COUNTRIES">
        <column name="COUNTRY_ID" value="12"/>
        <column name="COUNTRY_NAME" value="HN"/>
        <column name="REGION_ID" valueNumeric="6"/>
    </insert>
</changeSet>
Available attributes
Name | Description | Required for | Supports | Since |
---|---|---|---|---|
catalogName | Name of the catalog. | | all | 3.0 |
commentLineStartsWith | Lines starting with this value are treated as comments and ignored. | | all | |
encoding | Encoding of the CSV file (defaults to UTF-8). | | all | |
file | CSV file to load. | all | all | |
quotchar | The quote character for string fields containing the separator character. | | all | |
relativeToChangelogFile | Whether the file path is relative to the root changelog file rather than to the classpath. | | all | |
schemaName | Name of the schema. | | all | |
separator | Character that separates the fields. | | all | |
tableName | Name of the table to insert data into. | all | all | |
usePreparedStatements | Use prepared statements instead of INSERT statement strings if the database supports it. | | all | |
Nested properties
Name | Description | Required for | Supports | Multiple allowed |
---|---|---|---|---|
columns / column | Defines column mapping and defaults. Either the header or the index attribute must be defined. If the header name in the CSV differs from the column name, the name attribute must be set. If no column is defined at all, the type is taken from the database; otherwise, a type definition may be required for non-string columns. | | all | yes |
Nested property attributes
Name | Description |
---|---|
name | Name of the column. (Required) |
type | Data type of the column. The value must be one of the LOAD_DATA_TYPE values. To skip loading a specific column, use the skip data type mentioned earlier; otherwise, all columns in the CSV file are used. |
header | Name of the column in the CSV file from which the value is taken, if it differs from the column name. Ignored if index is also defined. |
index | Index of the column in the CSV file from which the value is taken. |

<changeSet author="liquibase-docs" id="loadData-example">
    <loadData catalogName="cat"
              commentLineStartsWith="//"
              encoding="UTF-8"
              file="example/users.csv"
              quotchar="'"
              relativeToChangelogFile="true"
              schemaName="public"
              separator=";"
              tableName="person"
              usePreparedStatements="true">
        <column header="header1"
                name="id"
                type="NUMERIC"/>
        <column index="3"
                name="name"
                type="BOOLEAN"/>
    </loadData>
</changeSet>

changeSet:
  id: loadData-example
  author: liquibase-docs
  changes:
    - loadData:
        catalogName: cat
        columns:
          - column:
              header: header1
              name: id
              type: NUMERIC
          - column:
              index: 3
              name: name
              type: BOOLEAN
        commentLineStartsWith: //
        encoding: UTF-8
        file: example/users.csv
        quotchar: ''''
        relativeToChangelogFile: true
        schemaName: public
        separator: ;
        tableName: person
        usePreparedStatements: true

{
  "changeSet": {
    "id": "loadData-example",
    "author": "liquibase-docs",
    "changes": [
      {
        "loadData": {
          "catalogName": "cat",
          "columns": [
            {
              "column": {
                "header": "header1",
                "name": "id",
                "type": "NUMERIC"
              }
            },
            {
              "column": {
                "index": 3,
                "name": "name",
                "type": "BOOLEAN"
              }
            }
          ],
          "commentLineStartsWith": "//",
          "encoding": "UTF-8",
          "file": "example/users.csv",
          "quotchar": "'",
          "relativeToChangelogFile": true,
          "schemaName": "public",
          "separator": ";",
          "tableName": "person",
          "usePreparedStatements": true
        }
      }
    ]
  }
}
Database support
Database | Notes | Auto rollback |
---|---|---|
DB2/LUW | Supported | No |
DB2/z | Supported | No |
Derby | Supported | No |
Firebird | Supported | No |
H2 | Supported | No |
HyperSQL | Supported | No |
INGRES | Supported | No |
Informix | Supported | No |
MariaDB | Supported | No |
MySQL | Supported | No |
Oracle | Supported | No |
PostgreSQL | Supported | No |
SQL Server | Supported | No |
SQLite | Supported | No |
Sybase | Supported | No |
Sybase Anywhere | Supported | No |