Advanced Pack - New Mailchimp API

  • lucakuehne
    Senior Member
    • Feb 2016
    • 195

    #16
    And there are many more of these "MailChimp: Error after requesting GET" error logs until 14:10.


    • tanya
      Senior Member
      • Jun 2014
      • 4308

      #17
      The code I sent you before avoids re-checking these batches.


      • lucakuehne
        Senior Member
        • Feb 2016
        • 195

        #18
        As I said before: if I insert this code, no BatchJobs are generated at all... And the existing ones run forever and never finish.


        • tanya
          Senior Member
          • Jun 2014
          • 4308

          #19
          This integration cannot work without BatchJobs.


          • lucakuehne
            Senior Member
            • Feb 2016
            • 195

            #20
            Originally posted by tanya
            this integration can not be without BatchJobs
            I don't think you understand.
            If I insert the code you gave me, no "MailchimpBatch" jobs appear at all under "<URL>/#Admin/jobs".
            And without your code, about two "MailchimpBatch" jobs are created per second! Is it really necessary to create so many jobs for the sync between Mailchimp and Espo?


            • alasdaircr
              Active Community Member
              • Aug 2014
              • 525

              #21
              tanya, how are you getting on with the fixes?

              I've also noticed the issue with hundreds of batch jobs being created, and MailChimp then erroring that more than 500 were waiting. This is for only 10 campaigns being synced, with fewer than 1,200 recipients each.



              • tanya
                tanya commented
                We are working on this now. Close to release.
            • alasdaircr
              Active Community Member
              • Aug 2014
              • 525

              #22
              Looking into the code: you're fetching the sent-to report in a strange way. You grab all recipients, then for each one request the sent status in batches.

              Instead you could add one batch request:

              { "operations": [ { "method": "GET", "path": "/reports/<campaignId>/sent-to" } ] }

              Then poll that one batch (it automatically paginates for you) for the tgz of the sent-to report.

              It took just 20 seconds for my batch to run, returning 1800 sent-to reports.
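The single-operation batch request described above could be assembled like this. A minimal sketch, assuming Mailchimp API v3; `build_sent_to_batch` and the campaign ID are placeholders of my own, not EspoCRM code. The payload would be POSTed to `https://<dc>.api.mailchimp.com/3.0/batches`, where `<dc>` is the data-center suffix of your API key.

```python
import json

def build_sent_to_batch(campaign_id):
    """Build the batch payload that fetches the whole sent-to report
    in a single operation, instead of paging 100 records at a time."""
    return {
        "operations": [
            {"method": "GET", "path": "/reports/%s/sent-to" % campaign_id}
        ]
    }

# campaign_id is a made-up example value
payload = build_sent_to_batch("abc123def4")
print(json.dumps(payload))
```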


              • tanya
                Senior Member
                • Jun 2014
                • 4308

                #23
                This MailChimp API has an error with sent-to reports. We spoke with MailChimp Support, and they recommended using the batch way of synchronization until they fix this bug.


                • alasdaircr
                  Active Community Member
                  • Aug 2014
                  • 525

                  #24
                  What's the error? This returned the report fine.


                  • tanya
                    Senior Member
                    • Jun 2014
                    • 4308

                    #25
                    Email addresses can be duplicated across the different pages. And after going through all pages, not all emails were found. Sometimes it even returns an internal error.


                    • alasdaircr
                      Active Community Member
                      • Aug 2014
                      • 525

                      #26
                      In a batch request, if you don't specify count, it retrieves all of the results, so you don't have to page at all.

                      I.e. just this:

                      POST to /batches

                      { "operations": [
                        { "method": "GET", "path": "/reports/<MC_CAMPAIGN_ID>/sent-to" }
                      ] }

                      Then monitor the batch operation, just one per campaign. Once it's finished, download the result. The TGZ contains ALL the sent-to records for that campaign.

                      This is not how you're doing it: you're requesting batches of 100 records at a time, which makes it much, much slower.
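The monitor-then-download flow above could be sketched as follows. This is an illustration, not EspoCRM's implementation: in Mailchimp API v3, `GET /3.0/batches/<batch_id>` returns a status document whose `status` field ends at `"finished"`, at which point `response_body_url` points at the `.tar.gz` of results. The helper names and the example status document are mine.

```python
# Mailchimp batch status progresses through:
# "pending" -> "preprocessing" -> "started" -> "finalizing" -> "finished"
def is_batch_finished(batch_status):
    """True once GET /3.0/batches/<batch_id> reports completion."""
    return batch_status.get("status") == "finished"

def result_archive_url(batch_status):
    """A finished batch exposes its .tar.gz of results via response_body_url."""
    if not is_batch_finished(batch_status):
        return None
    return batch_status.get("response_body_url")

# Abridged, made-up example of a status document from the batches endpoint:
status = {
    "id": "8b2428d747",
    "status": "finished",
    "response_body_url": "https://example.com/8b2428d747-response.tar.gz",
}
print(result_archive_url(status))
```

In a real sync, the caller would poll the status endpoint on a timer (one poll loop per campaign, as the post suggests) rather than creating a new job per page of results.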


                      • alasdaircr
                        Active Community Member
                        • Aug 2014
                        • 525

                        #27
                        I just checked the result of a reports/<campaign-id>/email-activity request: there IS one duplicate coming through. This seems to be an off-by-one fault, as it is record number 923 from the second file of 1923 total results. The duplicate is at position 0 AND 1923. I've let the API support team know about this. Should be easy to fix.

                        However, despite that, it would be relatively easy to deal with this duplicate. Fetching the results this way is much, much easier and faster than the way you are doing it.
                        Last edited by alasdaircr; 01-23-2017, 10:01 AM. Reason: typo


                        • alasdaircr
                          Active Community Member
                          • Aug 2014
                          • 525

                          #28
                          They got back to me and said they've confirmed it and will fix it. They suggest the workaround is to request the report with a count parameter equal to the number of 'sent' records from the campaign detail information.

                          So, e.g. GET request this in a batch request: reports/<CAMPAIGNID>/email-activity?fields=emails.email_address,emails.activity&count=<NUMBEROFSENDS>

                          This way you get the whole response in one TGZ file from one batch request, with no duplicates.

                          Please look into implementing this improvement. I don't know where you got the idea to split into batches of 100; that is not a limit these days. And using batch requests means there are no issues with timeouts.
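The suggested workaround could be built as a batch operation like this. A sketch only: the path and `fields` follow the post above; `email_activity_operation` is a hypothetical helper, and `sent_count` would come from the campaign's report summary rather than the hard-coded example here. Mailchimp batch operations carry GET query-string arguments in a `params` object.

```python
def email_activity_operation(campaign_id, sent_count):
    """Build one batch operation that fetches the whole email-activity
    report, with count pinned to the campaign's number of sends to
    avoid the duplicate-record fault."""
    return {
        "method": "GET",
        "path": "/reports/%s/email-activity" % campaign_id,
        "params": {
            "fields": "emails.email_address,emails.activity",
            "count": str(sent_count),
        },
    }

# campaign_id and sent_count are made-up example values
op = email_activity_operation("abc123def4", 1923)
print(op["path"], op["params"]["count"])
```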

