Help Required To Overcome Apex CPU Timeout Error

I am working on a module that dedupes Contact records on insert/update, but I keep hitting the Apex CPU time limit. I understand that to overcome it we need to optimize the code, but there seems to be very little room for optimization in the block below. Any help would be appreciated.

On the Account we have a multi-select picklist from which we can choose the fields that determine the uniqueness of contacts within that account; the selection can differ from account to account. The code below is part of a trigger handler class. We have a map of the existing contacts keyed by account Id (mapOfAccountIdWithItsContact) and a map of the incoming contacts keyed by account Id (newContactWithAccountMap), and we iterate over both using the set of account Ids whose contacts arrived in the trigger. We also have a map that holds, per account, the list of field names used to establish contact uniqueness (mapOfAccountWithFilters).
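For reference, here is roughly how I assume those collections are shaped (the types are my guess from the snippet; the question does not show the declarations):

```apex
// Assumed shapes of the collections used in the snippet below.
Set<String> accountIdSet = new Set<String>();
Map<String, List<Contact>> mapOfAccountIdWithItsContact = new Map<String, List<Contact>>(); // existing contacts per account
Map<String, List<Contact>> newContactWithAccountMap = new Map<String, List<Contact>>();     // Trigger.new contacts per account
Map<String, List<String>> mapOfAccountWithFilters = new Map<String, List<String>>();        // uniqueness field API names per account
Set<Contact> oldContactsToUpdateSet = new Set<Contact>();
Set<Id> duplicateContactSet = new Set<Id>();
Boolean matchingContactFound = false;
```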

Here's a snippet of code:

for(String accountId : accountIdSet){
    if(newContactWithAccountMap.get(accountId) != null){
        for(Contact newContact : newContactWithAccountMap.get(accountId)){
            for(Contact oldContact : mapOfAccountIdWithItsContact.get(accountId)){
                // Check for duplicates only within the same account; this should not apply on insertion of an Office contact.
                matchingContactFound = false;
                if(oldContact.Id != newContact.Id){ // on insert newContact.Id is null; on update this stops a record matching itself
                    for(String filterFieldName : mapOfAccountWithFilters.get(accountId)){
                        if(oldContact.get(filterFieldName) == newContact.get(filterFieldName)){
                            matchingContactFound = true;
                            // If a match is found, update the last de-duplication date on the old contact
                            oldContact.Last_De_Duplication_Date__c = System.today();
                            oldContactsToUpdateSet.add(oldContact);
                        }else{
                            matchingContactFound = false;
                            break; // get another "old contact"
                        }
                    }
                }
                if(matchingContactFound){
                    // stop it from being inserted
                    duplicateContactSet.add(newContact.Id);
                    //newContact.addError('Contact cannot be inserted because a contact is already present based on the Master Target Identifier at client level.');
                    break; // get another "new contact"
                }
            }
        }
    }
}

      

Any help avoiding the four nested loops, or an alternative approach, would be greatly appreciated. Thanks in advance.



2 answers


Great question!

It's hard to say much without seeing more of your code, though...

  • Since you narrowed it down to this particular snippet, have you done any timing tests (debug logs? checkpoints in the Developer Console)?
  • How do you populate your variables, and what are their types? (I'd rather see Map<Id, List<Contact>> etc. than try to figure it out from the description.)
  • Perhaps you are querying too much data, and more efficient filtering could significantly cut the execution time. For example, you have a comment saying this should not apply when inserting an Office contact — are you already filtering those out before you even enter this method?

Maybe a bit of pre-processing will help?

Another thing to consider: how many fields can end up in the multi-select picklist? For example, on every insert and every update, store the values of the "important fields" in a helper Text(255) field on the Contact — call it a "duplicate category / segment / tag" or something like that. Mark this field as an external ID (it doesn't have to be unique; being indexed is what matters).

You should then be able to select quite quickly the contacts that have a matching account Id and an identical "category". If a field value is identical — or its length is 255, which means it was truncated — you fall back to an exact field-by-field comparison. But if there are no contacts with an identical "category", you can safely say there are no duplicates.

Such a thing needs careful design (a trigger on Contact, but also one on Account to recalculate the field every time the field selection changes... and if the selected fields can reference other records, for example a lookup to User, you also need to protect yourself against changes on those records). Still, I would say it's definitely worth a try. You don't even need to include all of the fields — say only first and last name, email address, phone number, one of the addresses. That should already be a huge help in "bucketing", and then you only compare within the right buckets.
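A minimal sketch of that idea, assuming a hypothetical Duplicate_Key__c Text(255) external-ID field (the name is made up) and the same mapOfAccountWithFilters map as in the question:

```apex
// Hypothetical: maintain the key in a before insert/update trigger,
// then pre-filter duplicates with one indexed query.
Set<String> newKeys = new Set<String>();
for(Contact c : Trigger.new){
    List<String> parts = new List<String>();
    for(String f : mapOfAccountWithFilters.get(c.AccountId)){
        parts.add(String.valueOf(c.get(f)));
    }
    c.Duplicate_Key__c = String.join(parts, '\n').left(255);
    newKeys.add(c.Duplicate_Key__c);
}
// One selective query returns only candidate duplicates; if it comes back
// empty, no exact field-by-field comparison is needed at all.
List<Contact> candidates = [SELECT Id, AccountId, Duplicate_Key__c
                            FROM Contact
                            WHERE AccountId IN :accountIdSet
                              AND Duplicate_Key__c IN :newKeys];
```

Only when candidates is non-empty (or a key was truncated at 255 characters) do you drop back to the slow comparison, and then only for the affected buckets.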


Code review ;)

Cache your calls

You have a lot of map .get() calls, and you call System.today() in a loop (I would understand System.now()... but today?). Do what you can to use more local variables and fewer script statements. The list of fields doesn't change for a given account — so why painfully extract it from the map on every iteration?

Check the logic

if(newContactWithAccountMap.get(accountId) != null) — this should never be null. If there are no inserted contacts for that account, what are you doing in this method at all? Either you're being over-protective here, or the check really is necessary (which would hint at bigger problems). Even for private contacts (with AccountId = null), you shouldn't have queried contacts and built map entries for accounts that aren't affected.

So, in the first iteration, I would do something like this (it's a slight improvement, but who knows, it might help):

for(String accountId : accountIdSet){
    List<String> fields = mapOfAccountWithFilters.get(accountId); // it's always the same for that account, isn't it?
    for(Contact newContact : newContactWithAccountMap.get(accountId)){
        for(Contact oldContact : mapOfAccountIdWithItsContact.get(accountId)){
            if(oldContact.Id != newContact.Id){
                Boolean allFieldsMatch = true;
                for(String fieldName : fields){
                    if(oldContact.get(fieldName) != newContact.get(fieldName)){
                        allFieldsMatch = false;
                        break;
                    }
                }
                if(allFieldsMatch){
                    // only a full match counts as a duplicate (your version also
                    // stamped contacts that matched on just the first few fields)
                    oldContactsToUpdateSet.add(oldContact);
                    duplicateContactSet.add(newContact.Id);
                    break; // get another "new contact"
                }
            }
        }
    }
}
// stamp the date once, outside the loops
Date today = System.today();
for(Contact c : oldContactsToUpdateSet){
    c.Last_De_Duplication_Date__c = today;
}

      

2nd pass

If that doesn't help, you can try to fold in some of my idea here. Let's say you insert 3 contacts into an account that already has 10. That's 3 * 10 = 30 comparisons in the inner loop (they are all brand new, so the trick of comparing the old and new contact Ids doesn't help).

But if you prepare some kind of composite key — the "bucket" I mentioned above — you can really flatten it.

for(String accountId : accountIdSet){
    List<String> fields =  mapOfAccountWithFilters.get(accountId);
    Set<String> oldContactBuckets = new Set<String>();
    /* I'm cheating here a bit. 
        All I want to know is whether there was a match. I don't care with which Contact.
        If you care - you'd have to convert this Set<String> to Map<String, Set<Id>> for example.
        Looks like you do care because you're setting this Last_De_Duplication_Date__c
        but I'll leave it as exercise for the reader :P
    */
    for(Contact oldContact : mapOfAccountIdWithItsContact.get(accountId)){
        oldContactBuckets.add(buildKey(oldContact, fields));
    }
    for(Contact newContact : newContactWithAccountMap.get(accountId)){
        String key = buildKey(newContact, fields);
        if(oldContactBuckets.contains(key)){
            duplicateContactSet.add(newContact.Id);
        }
    }
}

private String buildKey(Contact c, List<String> fields){
    List<String> temp = new List<String>();
    for(String fieldName : fields){
        temp.add(String.valueOf(c.get(fieldName)));
    }
    return String.join(temp, '\n'); // pick a field separator that is unlikely to appear in your real data. Tab, maybe?
}

      

First the 10 keys are built, then we compare against the 3 keys for the incoming data. That's only 13 runs of the equivalent of your innermost loop.

If that doesn't help in your situation — you can always rewrite it as Batch Apex, I guess...



Part of the answer might be to make this a batch process. When batch code runs, each individual batch of processed records gets its own set of governor limits. The Batch Apex documentation covers this in more detail.
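A skeletal Batch Apex class for this might look like the sketch below. It assumes the dedupe logic has been factored into a shared helper; ContactDedupeService and Needs_Dedupe__c are illustrative names, not anything from your org:

```apex
// Hypothetical batch job: each execute() call gets a fresh set of governor limits.
global class ContactDedupeBatch implements Database.Batchable<SObject> {
    global Database.QueryLocator start(Database.BatchableContext bc){
        // Assumed flag marking accounts whose contacts need re-checking.
        return Database.getQueryLocator(
            'SELECT Id, AccountId FROM Contact WHERE Account.Needs_Dedupe__c = true');
    }
    global void execute(Database.BatchableContext bc, List<Contact> scope){
        // Run the (assumed) shared dedupe logic on this small chunk only.
        ContactDedupeService.findDuplicates(scope);
    }
    global void finish(Database.BatchableContext bc){ }
}
```

You would kick it off with something like Database.executeBatch(new ContactDedupeBatch(), 200), tuning the scope size down if a chunk still runs hot.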

You don't show your SOQL queries, and we have no idea how many contacts an account might have; the total numbers of accounts and contacts would also help. Without them it's difficult to pinpoint where the problem is. You should also take a look at SOQL optimization: for example, operators such as "not equal" (!=) are inefficient because they prevent index use. A lot has been written on that topic.



Another possibility is replacing multiple queries with a single query. In a similar situation I had code that issued one query per record of a custom object, which would have run into the 100-query limit. Instead, I ran a single query for all the records (we have far fewer than the 50,000-row limit) and built a map of the results keyed on a unique field.
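Sketched in Apex — Site__c and Unique_Key__c below are stand-ins for whatever custom object and key field you actually use:

```apex
// One query for everything, then O(1) map lookups instead of a query per record.
Map<String, Site__c> siteByKey = new Map<String, Site__c>();
for(Site__c s : [SELECT Id, Unique_Key__c FROM Site__c]){ // well under the 50,000-row limit
    siteByKey.put(s.Unique_Key__c, s);
}
// Later, anywhere in the transaction:
// Site__c match = siteByKey.get(someKey); — no extra SOQL consumed.
```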







