Ok, first of all: you need to get the attachments out of the database.
MySQL and most other databases are bad at handling BLOB storage. Moving attachments out is one of the first things to do on a production instance.
BLOBs can't be indexed or searched very well. And, depending on your MySQL config, you may also have a large ibdata file?
That would explain your performance issue and would point to a wrong setting or a historical change of innodb_file_per_table. In earlier versions all data was stored in that one shared file, so even after you export all article information (plain text + attachments), the file would stay the same size. If innodb_file_per_table is set correctly (or the DB was imported into a fresh instance), the size should decrease roughly by the amount of article storage.
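If you want to check this up front, a quick look is enough (the path assumes a default Linux MySQL installation; adjust the datadir to your setup):

```shell
# Is InnoDB keeping each table in its own .ibd file?
mysql -e "SHOW VARIABLES LIKE 'innodb_file_per_table';"

# How big is the shared tablespace? (default datadir assumed)
ls -lh /var/lib/mysql/ibdata1
```

If innodb_file_per_table is OFF and ibdata1 is huge, the space freed by the article export will not be returned to the OS without a dump/reimport.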
To export the attachments you can use the console command Admin::Article::StorageSwitch. See the documentation here:
https://doc.znuny.org/doc/manual/admin/ ... rs-storage
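If I remember the options correctly, the switch itself looks roughly like this (run as the Znuny/OTRS user from the installation directory; the target value assumes you want filesystem storage):

```shell
# Migrate existing articles from the database to the filesystem
bin/otrs.Console.pl Admin::Article::StorageSwitch --target ArticleStorageFS
```

Don't forget to also point the Ticket::Article::Backend::MimeBase::ArticleStorage setting in the SysConfig at the FS backend, so new articles are written to disk as well.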
Note: you may temporarily need double the article storage size on your MySQL data partition. MySQL alters tables by copying: change, then write back.
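To see how much space the article tables actually use before you start (the database name "otrs" is an assumption, adjust to yours):

```shell
# Top 5 tables by size in the current schema, in GB
mysql otrs -e "SELECT table_name,
  ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
  FROM information_schema.tables
  WHERE table_schema = DATABASE()
  ORDER BY size_gb DESC LIMIT 5;"
```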
I would also suggest using a separate share/mount for it, or an LVM partition, to make it easier to extend later if needed.
I would recommend XFS for the ArticleStorage mount. With ext4 you may run into problems with the available inodes.
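A minimal LVM + XFS sketch, assuming a volume group named vg0 and the default article path (names and sizes are placeholders, adjust to your setup):

```shell
# Create and mount a dedicated XFS volume for article storage
lvcreate -L 200G -n article vg0
mkfs.xfs /dev/vg0/article
echo '/dev/vg0/article /opt/otrs/var/article xfs defaults 0 2' >> /etc/fstab
mount /opt/otrs/var/article

# Extend later without downtime (XFS grows online)
lvextend -L +50G /dev/vg0/article
xfs_growfs /opt/otrs/var/article
```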
After the migration the database should be way smaller.
- Is it a cluster?
- Is replication active?
I was referring to the DB, not the VM.
It's configured with lots of DB caches for a fast UI, so reads are preferred. Which parameters are interesting to you?
Ok. I can't help with MySQL performance tuning. We usually hire someone if it's something special, or (as a first try) create a fresh config to rule out possible mistakes from the past.
- Modified filters in the SysConfig?
Puhh, yes, but years ago. I need to ask whether it was documented at the time. If you tell me what you need, I can recheck these settings.
I'm talking about the SearchIndex filters. There are only two:
- Ticket::SearchIndex::Attribute
- Ticket::SearchIndex::Filters
This may not even be relevant if you use unfiltered storage, which improves indexing performance but stores a lot of unnecessary data.
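If I recall the setting name correctly, unfiltered storage is controlled by Ticket::SearchIndex::ForceUnfilteredStorage. Whenever you change any of these settings, the search index has to be rebuilt for them to take effect, roughly like this:

```shell
# Rebuild the ticket fulltext index after changing SearchIndex settings
bin/otrs.Console.pl Maint::Ticket::FulltextIndex --rebuild
```

With millions of articles this runs a long time, so plan it for off-hours.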
IOPS and IO wait are monitored and are nowhere near the capacity the storage array is able to deliver.
If you say so.
Anyway, any idea how to quickly verify whether the indexes on the DB are okay?
The Support Assessment would tell you.
The first check is: bin/otrs.Console.pl Maint::Database::Check
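Beyond that, you can take a quick look at the indexes of the big article tables directly (the database name "otrs" is an assumption, adjust to yours):

```shell
# List the indexes MySQL actually has on the large article tables
mysql otrs -e "SHOW INDEX FROM article_data_mime;"
mysql otrs -e "SHOW INDEX FROM article_data_mime_attachment;"
```

Compare the output against the schema of a fresh installation of the same version; missing indexes usually come from interrupted upgrades.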
Greetings
Edit: missed this
In which file is this?
And is it possible to trace what the process is doing?
Afaik, if the database + disk were too slow, Perl should not be at 100% CPU usage, since it would spend more time waiting than processing in the CPU.
Kernel/System/Ticket/Article/Backend/MimeBase.pm
Yes, you can use fun tools like strace or NYTProf. But as far as I can tell, the Perl code is not the problem here. I already tested two days ago: 950k articles took about ~55 minutes. Times 10 for your article count = 550 minutes, which is "normal" with the current implementation and usually done overnight. You can increase the number of workers, but I doubt the result would change much.
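For the record, a rough sketch of how I would look at it (the PID is a placeholder for the running worker process):

```shell
# Attach to a running worker and see whether it mostly waits on I/O;
# -T shows the time spent in each syscall
strace -f -T -p <PID> -e trace=read,write,fsync

# Or profile a console command with Devel::NYTProf and render the report
perl -d:NYTProf bin/otrs.Console.pl Admin::Article::StorageSwitch --target ArticleStorageFS
nytprofhtml   # writes an HTML report into ./nytprof/
```

If strace shows long read/write calls, it's the storage; if the process burns CPU between short syscalls, it's the code.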