How to Improve Load Times for Large Datasets?

  • #1

    I have a base entity with over 350,000 records, and the number is growing daily. Recently I noticed that it takes around 15-16 seconds to load, whereas earlier, with around 340,000 records, it loaded within seconds. What could be causing this slowdown, and how can I reduce the load time, considering the record count may reach 1,000,000 in the near future?

  • #2
    How many rows have you configured for the list view when opening the entity?



    • #3
      8 rows in the list view.



      • #4
        First - set up a slow query log and analyze where the delays occur (https://mariadb.com/kb/en/slow-query-log-overview/ for MariaDB).
        Second - add the necessary indexes in accordance with those queries.
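
        A minimal sketch of that workflow for MariaDB/MySQL (the threshold, table, and column names below are illustrative assumptions, not taken from this installation):

        -- Enable the slow query log at runtime:
        SET GLOBAL slow_query_log = 1;
        SET GLOBAL long_query_time = 1;  -- log statements slower than 1 second

        -- When a query appears in the log, check its execution plan:
        EXPLAIN SELECT * FROM my_entity WHERE account_id = 'some-id' ORDER BY created_at DESC LIMIT 8;

        -- If the plan shows a full table scan, add an index matching the filter and sort:
        CREATE INDEX idx_my_entity_account_created ON my_entity (account_id, created_at);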



        • #5
          The most expensive requests are those fetching the Stream. Recommendation - increase the data refresh interval in widgets, or turn off auto-refresh entirely if possible.
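
          To confirm which requests are the expensive ones while the dashboard refreshes, plain MariaDB/MySQL tooling is enough (nothing EspoCRM-specific is assumed here):

          -- Everything currently executing, with full statement text:
          SHOW FULL PROCESSLIST;

          -- Or only statements that have been running longer than 5 seconds:
          SELECT id, user, time, state, info
          FROM information_schema.processlist
          WHERE command <> 'Sleep' AND time > 5
          ORDER BY time DESC;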



          • #6
            It would help to provide more details about what exactly loads slowly for you.
            • Only the list view of a particular entity type?
            • Does it load slowly for both admin and non-admin users?
            • Does it load faster if you sort by some other field?
            • Does it load slowly if you disable Total Count on the list view?
            • MariaDB or MySQL?
            This article covers most of the issues with performance: https://docs.espocrm.com/administrat...ance-tweaking/

            1,000,000 is not a big number for a well-configured system.
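
            For context on the Total Count point: with it enabled, each list view load also runs a COUNT over the whole filtered set, and that part slows down as the table grows. A hypothetical illustration of the two kinds of query involved (table and column names are assumptions, not EspoCRM internals):

            -- Fetching one page stays cheap if an index covers the filter and sort:
            SELECT * FROM my_entity WHERE deleted = 0 ORDER BY created_at DESC LIMIT 8 OFFSET 0;

            -- The total count has to visit every matching row, so it grows with the table:
            SELECT COUNT(*) FROM my_entity WHERE deleted = 0;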



            • #7
              dimyy Some useful links:



              Just looked at my hosting provider and noticed that I have a database file-size limit... Looking at my database, these two tables are huge... not sure if it's safe to delete them or how I can trim the size down a little. Email is a must to keep, but the other two I think we can safely delete? Can anyone advise? I'll probably do a backup first.

              Don't know if cleanup works better now than when I wrote that post.

              I delete rows older than 30 days from the biggest tables like this:

              delete from bpmn_flow_node where created_at < now() - interval 30 DAY;
              delete from bpmn_process where created_at < now() - interval 30 DAY;
              delete from scheduled_job_log_record where created_at < now() - interval 30 DAY;
              delete from job where created_at < now() - interval 30 DAY;

              Followed by this, over SSH:
              mysqlcheck -u root -p --optimize <dbname>
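
              If you'd rather stay inside the SQL client, OPTIMIZE TABLE does the same reclaim/defragment work (mysqlcheck --optimize just issues it for you); same table names as in the deletes above:

              OPTIMIZE TABLE bpmn_flow_node, bpmn_process, scheduled_job_log_record, job;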

              Follow the link from yuri. A 1M-record dataset is not much data for an SQL table. The biggest database file from a customer I've ever seen was around 1.5 TB (MS SQL). 300M records in one Oracle DB table is possible without performance loss.

              The most common reasons databases get slower are fragmented or missing indexes, assuming there is enough RAM and fast storage.
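
              Both are quick to check from the SQL client; a sketch using the standard information_schema (the table name in the second statement is a placeholder):

              -- Tables with the most reclaimable (fragmented) space in the current schema:
              SELECT table_name, table_rows, data_free
              FROM information_schema.tables
              WHERE table_schema = DATABASE()
              ORDER BY data_free DESC
              LIMIT 10;

              -- List the indexes on a suspect table to spot missing ones:
              SHOW INDEX FROM my_entity;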

