Migrate from MySQL (MariaDB) to PostgreSQL - how?
Moderator: crythias
-
- Znuny newbie
- Posts: 8
- Joined: 09 Aug 2024, 12:17
- Znuny Version: 6.5
- Real Name: Gregor Warsow
Migrate from MySQL (MariaDB) to PostgreSQL - how?
Hi all,
I intend to migrate from OTRS 6.0.18 to Znuny 6.5 and, in the process, also switch from MariaDB to PostgreSQL. I already tried the DB migration with pgloader and noticed that some type casts have to be set in the pgloader config file (e.g. a MariaDB datetime is migrated to 'timestamp with time zone' in PostgreSQL by default, which causes issues in Znuny). As I do not know whether this casting problem is the only one, I think it might be better to use `schema.postgresql.sql`, transfer all data as CSV (how can this be done without exporting every table to a file individually?), and then run `schema-post.postgresql.sql`. Doing it this way should guarantee exactly the expected types in all tables.
How would you suggest doing the intended DB migration, especially the CSV transfer?
Perhaps there is even a pgloader config file template?
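To illustrate what I have in mind: a pgloader load file with explicit cast rules might look roughly like this (the connection strings and the cast list are placeholders, not a tested configuration):

```
LOAD DATABASE
     FROM mysql://otrs:secret@localhost/otrs
     INTO postgresql://znuny:secret@localhost/znuny

 CAST type datetime to timestamp,
      type int to integer drop typemod,
      type longtext to text

 ALTER SCHEMA 'otrs' RENAME TO 'public';
```

The CAST rules would override pgloader's defaults for exactly the types that cause trouble in Znuny.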
So far, my plan is:
MariaDB (OTRS 6.0) -> PostgreSQL (Znuny 6.0) -> migration steps 6.1/6.2/6.3/6.4 -> PostgreSQL (Znuny 6.5)
Would you suggest doing it in a different manner?
Context: after switching from MariaDB 5 to MariaDB 10, I noticed a massive decrease in full-text search performance, from approx. 30 seconds (MariaDB 5.5.68) to more than 10 minutes (MariaDB 10.11.6); PostgreSQL returns results within 5 seconds.
Thanks
Gregor
-
- Znuny newbie
- Posts: 8
- Joined: 09 Aug 2024, 12:17
- Znuny Version: 6.5
- Real Name: Gregor Warsow
Re: Migrate from MySQL (MariaDB) to PostgreSQL - how?
When we have to deal with binary data, like a JPG in standard_attachment.content, the problem is how to get this binary data into the TEXT field in PostgreSQL (the PostgreSQL schema SQL defines this field as type TEXT).
pgloader transfers LONGBLOB fields (which the MySQL schema uses) to BYTEA fields instead of TEXT.
All fields that are LONGBLOB in the MySQL schema are of type TEXT in the PostgreSQL schema.
So it seemed reasonable to us to convert LONGBLOB to LONGTEXT before running pgloader, in order to benefit from the LONGTEXT->TEXT conversion instead of LONGBLOB->BYTEA, since the PostgreSQL schema expects TEXT, not BYTEA.
Converting LONGBLOB to LONGTEXT is fine for most fields, as they indeed contain only text. But, as mentioned, some fields may contain binary data such as PNG files: e.g. standard_attachment.content, mail_queue.raw_message, web_upload_cache.content, article_data_mime_attachment.content, etc.
The open question is how to transfer binary content from a LONGBLOB in MariaDB to the TEXT field in PostgreSQL.
-
- Administrator
- Posts: 4249
- Joined: 18 Dec 2007, 12:23
- Znuny Version: Znuny and Znuny LTS
- Real Name: Roy Kaldung
- Company: Znuny
- Contact:
Re: Migrate from MySQL (MariaDB) to PostgreSQL - how?
Hi,
Have you tried adjusting the casting for these tables? Don't use the default configuration; see the casting rules at https://pgloader.readthedocs.io/en/late ... ting-rules
- Roy
Znuny and Znuny LTS running on CentOS / RHEL / Debian / SLES / MySQL / PostgreSQL / Oracle / OpenLDAP / Active Directory / SSO
Use a test system - always.
Do you need professional services? Check out https://www.znuny.com/
Do you want to contribute or want to know where it goes?
-
- Znuny newbie
- Posts: 8
- Joined: 09 Aug 2024, 12:17
- Znuny Version: 6.5
- Real Name: Gregor Warsow
Re: Migrate from MySQL (MariaDB) to PostgreSQL - how?
Hi Roy,
Yes, I created a pgloader config file with
Code: Select all
CAST type int to integer,
type datetime to timestamp,
longtext to text
As you know, the PostgreSQL schema does not use BYTEA; all fields that are binary (LONGBLOB) in MySQL/MariaDB are TEXT in the PostgreSQL schema. However, pgloader can only transfer LONGBLOB to BYTEA (type longblob to bytea using byte-vector-to-bytea). So I somehow have to get the content from LONGBLOB (MySQL) into TEXT (PostgreSQL), right?
For 'longtext to text' to work, I ran
Code: Select all
SELECT CONCAT('ALTER TABLE `', table_name, '` MODIFY `', column_name, '` LONGTEXT;')
FROM information_schema.columns
WHERE table_schema = @dbname
  AND data_type = 'longblob';
in order to fetch all LONGBLOB columns from the MariaDB instance and convert them to LONGTEXT (pgloader does not allow 'longblob to text', only 'longtext to text').
When I executed the generated ALTER TABLE statements, some of them failed, like this one:
Code: Select all
MariaDB [otrs]> ALTER TABLE `standard_attachment` MODIFY `content` LONGTEXT;
ERROR 1366 (22007): Incorrect string value: '\xFF\xD8\xFF\xE0\x00\x10...' for column `otrs`.`standard_attachment`.`content` at row 1
`standard_attachment`.`content` was untouched and is still of type LONGBLOB due to this error.
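The bytes in the error message are the start of a JPEG/JFIF file header, which is not valid UTF-8, so MariaDB refuses to reinterpret the column content as text. A quick illustrative check (Python, standing in for what the server does internally):

```python
# First bytes of a JPEG file, as shown in the MariaDB error message
jpeg_header = b"\xFF\xD8\xFF\xE0\x00\x10"

try:
    jpeg_header.decode("utf-8")
    print("valid UTF-8")
except UnicodeDecodeError as err:
    # 0xFF can never start a UTF-8 sequence, hence ERROR 1366
    print("not valid UTF-8:", err.reason)
```

Any row containing such binary content will block the LONGBLOB-to-LONGTEXT conversion for the whole table.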
-
- Znuny newbie
- Posts: 8
- Joined: 09 Aug 2024, 12:17
- Znuny Version: 6.5
- Real Name: Gregor Warsow
Re: Migrate from MySQL (MariaDB) to PostgreSQL - how?
I've decided to switch from using pgloader to CloneDB for my database migration. I completed the migration steps from OTRS 6.0 to Znuny versions 6.1, 6.2, 6.3, 6.4, and finally 6.5.11, and then used CloneDB to handle the MySQL to PostgreSQL migration (https://github.com/znuny/OTRSCloneDB/tree/rel-6.5.2).
CloneDB claims to manage BLOB conversions and includes a predefined list of BLOB columns, such as article_data_mime_attachment.content and standard_attachment.content (Blob columns list in CloneDB).
However, I'm observing issues with binary data conversion; specifically, standard attachments and mail attachments become unreadable post-migration.
Has anyone successfully used CloneDB for these particular columns? Any advice on managing binary data in these fields would be appreciated.
-
- Znuny wizard
- Posts: 365
- Joined: 16 Apr 2016, 08:55
- Znuny Version: see in post
- Real Name: Hans
- Contact:
Re: Migrate from MySQL (MariaDB) to PostgreSQL - how?
Just out of curiosity, did you manage the switch to PostgreSQL?
I tried such a migration many years back and finally got stuck, like you. Since then I usually no longer recommend trying to switch the database.
But if this were reliably possible, it would open some new doors...
So I would be interested in your progress, and maybe I can try to help if something comes up and you post the logs and errors here.
Elected 2022-06 as an IT Governance Portal Expert. The portal for Znuny, OTRS and OTOBO power users and admins
Specialized for AI-based Solutions with Znuny
IT Governance Portal
-
- Znuny newbie
- Posts: 8
- Joined: 09 Aug 2024, 12:17
- Znuny Version: 6.5
- Real Name: Gregor Warsow
Re: Migrate from MySQL (MariaDB) to PostgreSQL - how?
Yes, I successfully completed the migration from MySQL to PostgreSQL using CloneDB 6.5.2. However, during the process, I encountered a bug in the CloneDB package that required an adjustment in the Base.pm file.
The issue was in line 290 of the Base.pm file:
https://github.com/znuny/OTRSCloneDB/bl ... se.pm#L290
Here, a UTF-8 check is applied, but it isn't sufficient to handle binary data properly. To resolve this, I added a condition to skip the encoding check for columns containing binary data (BlobColumns), as defined by CloneDB itself:
Code: Select all
# skip encoding check if column has binary data (blob)
next COLUMNVALUES if ( $Self->{BlobColumns}->{ lc "$Table.$Column" } );
This modification allowed the migration to handle binary data, such as attachments, correctly.
I completed the migration two weeks ago, and everything has been running smoothly since. The performance improvement, especially in full-text search, has been remarkable: queries that previously took several minutes now execute within seconds, demonstrating the efficiency gains achieved with PostgreSQL.
-
- Znuny wizard
- Posts: 365
- Joined: 16 Apr 2016, 08:55
- Znuny Version: see in post
- Real Name: Hans
- Contact:
Re: Migrate from MySQL (MariaDB) to PostgreSQL - how?
Big thanks for your update and the fix.
Did you post a bug report with your fix to the project?
Yeah, the performance considerations were also my driver for the switch. In the meantime we have optimized the MariaDB config as much as possible to make it fast, but I would still like to see whether we could push performance even further by switching to my preferred DB, PostgreSQL.
One thing we regularly faced in other MySQL->PostgreSQL migrations was encoding issues.
Did you verify that special characters, e.g. diacritic symbols, were migrated correctly too? E.g. umlauts, Cyrillic, Latin diacritics and so on.
Do your full-text searches still do a good job with your special characters?
Do the attachments still work like a charm?
Elected 2022-06 as an IT Governance Portal Expert. The portal for Znuny, OTRS and OTOBO power users and admins
Specialized for AI-based Solutions with Znuny
IT Governance Portal
-
- Znuny Employee
- Posts: 14
- Joined: 15 Apr 2022, 20:34
- Znuny Version: Znuny and Znuny LTS
- Real Name: Kai Herlemann
- Company: Znuny
Re: Migrate from MySQL (MariaDB) to PostgreSQL - how?
Hi Gregor,
I tested your bugfix and it works for me, so we created a new version of the CloneDB package with your fix. Thank you!
BR,
Kai
-
- Znuny newbie
- Posts: 8
- Joined: 09 Aug 2024, 12:17
- Znuny Version: 6.5
- Real Name: Gregor Warsow
Re: Migrate from MySQL (MariaDB) to PostgreSQL - how?
Hey Kai,
this is great to hear! Thanks for the feedback!
Best
Gregor
-
- Znuny newbie
- Posts: 8
- Joined: 09 Aug 2024, 12:17
- Znuny Version: 6.5
- Real Name: Gregor Warsow
Re: Migrate from MySQL (MariaDB) to PostgreSQL - how?
Hans wrote:
Did you verify that special characters e.g. diacritic symbols are well migrated too? E.g. Umlauts, cyrillic, latin diacritic symbols and so on
do your fulltext searches still do a good job with your special chars?
did the attachments work like a charm still?
As you mentioned full-text search: it does a great job, but the migration from MySQL to PostgreSQL had one drawback due to case sensitivity.
The issue is that PostgreSQL behaves differently from MySQL with regard to case sensitivity. Consider a case example where we want to search for a ticket title containing the string INCOMPLETE. By default, LIKE in PostgreSQL is case-sensitive, so LIKE '%incomplete%' will only match exactly incomplete, not INCOMPLETE or Incomplete.
The described scenario works well in MySQL but not in PostgreSQL.
I already created an issue for this along with a proposed fix, but it has not yet been assigned, confirmed, or commented on.
However, I applied this fix to our local instance and it works well; now I get all the results I would expect to find. So I suggest that everyone whose Znuny instance relies on PostgreSQL apply this fix (CaseSensitive => 0 in line 320) as well.
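The matching difference boils down to case folding before comparison. A minimal sketch (Python standing in for the SQL LIKE matching; the ticket title is made up):

```python
# Hypothetical ticket title used only for illustration
title = "Ticket state INCOMPLETE since yesterday"

# PostgreSQL: LIKE '%incomplete%' is case-sensitive, so the search misses this title.
print("incomplete" in title)          # False

# Lowercasing both sides first (comparable to ILIKE, and to what a
# case-insensitive search setting effectively restores) matches again.
print("incomplete" in title.lower())  # True
```

MySQL's default collations are case-insensitive, which is why the same search worked there without any adjustment.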
Best
Gregor