What Spring transaction isolation level should I use to maintain a sold-items counter for a product?

I have an e-commerce site built with Spring Boot and Angular. I need to maintain a counter in the product table to track how many units have been sold. But the counter sometimes becomes inaccurate when many users order the same item at the same time.

In my service code, I have the following transactional declaration:

@Transactional(propagation = Propagation.REQUIRES_NEW, isolation = Isolation.READ_COMMITTED)

After saving the order (via CrudRepository.save()), I run a select query that sums the quantities ordered so far, expecting the select to see all committed orders. But that does not seem to be the case: at times the counter is less than the actual number.

The same problem occurs in my other use case: quantity-limited products. I use the same transaction isolation setup there. The code runs a select query to see how many units have been sold, and rejects the order if it cannot be fulfilled. But for hot items we still oversell, because each thread does not see the orders just committed by other threads.

So, is READ_COMMITTED the correct isolation level for my use case? Or should I use a pessimistic lock instead?

UPDATE 05/13/17

I took Ruben's approach since I know more about Java than databases, so it was the easier path for me. Here's what I did.

@Transactional(propagation = Propagation.REQUIRES_NEW, isolation = Isolation.SERIALIZABLE)
public void updateOrderCounters(Purchase purchase, ACTION action)

I am using JpaRepository, so I do not work with the entityManager directly. Instead, I moved the code that updates the counters into a separate method and annotated it as above. It seems to work well: I have seen more than 60 concurrent connections placing orders without overselling, and response times look okay as well.

+3




2 answers


Depending on how you get the total number of items sold, the options available may vary:

1. If you count the number of items sold dynamically, using a SUM query over the orders

I presume that in this case you would need the SERIALIZABLE isolation setting for the transaction, as it is the only level that supports range locks and prevents phantom reads. However, I would not recommend this isolation level, as it has a significant performance impact on your system (it should be used very carefully and only in well-chosen places).

Links: https://dev.mysql.com/doc/refman/5.7/en/innodb-transaction-isolation-levels.html#isolevel_serializable
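As an illustration only (not code from the question), option 1 could look roughly like this with Spring Data JPA; the Order entity, its quantity field, and the repository/service names are assumed:

```java
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

interface OrderRepository extends JpaRepository<Order, Long> {

    // Dynamically sums the quantities of all orders for one product.
    @Query("select coalesce(sum(o.quantity), 0) from Order o where o.product.id = :productId")
    long sumSoldQuantity(@Param("productId") Long productId);
}

@Service
class SoldCounterService {

    private final OrderRepository orderRepository;

    SoldCounterService(OrderRepository orderRepository) {
        this.orderRepository = orderRepository;
    }

    // SERIALIZABLE is the only standard level whose range locks stop
    // concurrent transactions from inserting orders that the SUM would miss.
    @Transactional(isolation = Isolation.SERIALIZABLE)
    public long countSold(Long productId) {
        return orderRepository.sumSoldQuantity(productId);
    }
}
```

Note that on MySQL/InnoDB the range (gap) locks taken under SERIALIZABLE can deadlock under load, so such a method usually needs retry logic around it.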

2. If you maintain a counter on the product row or some other row related to the product

In this case I would probably recommend row-level locking, e.g. SELECT ... FOR UPDATE, in the service method that checks product availability and increments the sold count. The high-level order placement algorithm can follow these steps:

  • Fetch the row that stores the number of items remaining/sold using a SELECT ... FOR UPDATE query ( @Lock(LockModeType.PESSIMISTIC_WRITE) on a repository method).
  • Make sure the fetched entity carries up-to-date field values, since it may come from the Hibernate session-level cache (Hibernate may execute the SELECT ... FOR UPDATE by id just to acquire the lock). You can achieve this by calling entityManager.refresh(entity).
  • Check the count field of the row, and if its value satisfies your business rules, increment/decrement it.
  • Save the entity, flush the Hibernate session, and commit the transaction (explicitly or implicitly).

Pseudo-code below:





    @Transactional
    public Product performPlacement(@Nonnull final Long id) {
        Assert.notNull(id, "Product id should not be null");
        entityManager.flush();
        final Product product = entityManager.find(Product.class, id, LockModeType.PESSIMISTIC_WRITE);
        // Make sure to get the latest version from the database after acquiring the lock,
        // since if the entity was already loaded in this Hibernate session, Hibernate
        // will only acquire the lock but keep serving field values from the session cache
        entityManager.refresh(product);
        // Execute check and booking operations
        // This call could simply check that availableCount > 0
        if (product.isAvailableForPurchase()) {
            // This call could simply decrement the available count, e.g. --availableCount
            product.registerPurchase();
        }
        // Persist the updated product
        entityManager.persist(product);
        entityManager.flush();
        return product;
    }


      

This approach ensures that no two threads/transactions can check and update the row storing the item counter at the same time.

However, it will also degrade your system's performance to some degree, so make sure the atomic increment/decrement happens as late in the shopping flow as possible and as infrequently as possible (for example, right in the checkout routine when the customer reaches payment). Another useful trick to minimize lock contention is to keep the "count" column not on the product row itself, but in a separate table associated with the product. This avoids locking the product row, since locks are acquired on a different row/table that is used exclusively during the availability-check phase.

Links: https://dev.mysql.com/doc/refman/5.7/en/innodb-locking-reads.html
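Since the question uses JpaRepository, the same SELECT ... FOR UPDATE can also be expressed without touching the EntityManager directly. A sketch with illustrative names (Product, isAvailableForPurchase, registerPurchase are assumed, as in the pseudo-code above):

```java
import java.util.Optional;
import javax.persistence.LockModeType;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Lock;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

interface ProductRepository extends JpaRepository<Product, Long> {

    // Issues SELECT ... FOR UPDATE, blocking other transactions that
    // request the same row until this transaction commits or rolls back.
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    @Query("select p from Product p where p.id = :id")
    Optional<Product> findByIdForUpdate(@Param("id") Long id);
}

@Service
class PlacementService {

    private final ProductRepository productRepository;

    PlacementService(ProductRepository productRepository) {
        this.productRepository = productRepository;
    }

    @Transactional
    public boolean placeOrder(Long productId) {
        Product product = productRepository.findByIdForUpdate(productId)
                .orElseThrow(() -> new IllegalArgumentException("Unknown product " + productId));
        // If this persistence context may already hold the entity, refresh it
        // after locking (see the session-cache caveat described above).
        if (!product.isAvailableForPurchase()) {
            return false; // sold out; the row lock is released at transaction end
        }
        product.registerPurchase(); // e.g. --availableCount
        return true;                // dirty checking flushes the update on commit
    }
}
```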

Summary

Please note that both of these techniques introduce additional synchronization points into your system, which reduces throughput. So take a close look at the impact they have, using a performance test or whatever technique your project uses to measure throughput.

More often than not, online retailers prefer to occasionally oversell certain items rather than hurt performance.

Hope it helps.

+3




With these transaction settings, you should see everything that was committed. But your transaction handling is still not watertight. The following can happen:

  • Let's say you have one item left in stock.
  • Now two transactions start, each ordering one item.
  • Both check the inventory and see: "enough stock for me."
  • Both commit.
  • You have now oversold.
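The interleaving above can be simulated deterministically in plain Java (names are illustrative); the point is that both availability checks read the same pre-commit snapshot:

```java
// Deterministic simulation of the oversell race: both "transactions"
// read the stock before either one commits its decrement.
public class OversellDemo {

    static int stock = 1;        // one item left in the warehouse
    static int ordersPlaced = 0; // how many orders were accepted

    // Commit an order based on a previously read (possibly stale) stock value.
    static void commitOrder(int stockSeenAtCheck) {
        if (stockSeenAtCheck > 0) { // "good, enough stock for me"
            stock = stockSeenAtCheck - 1;
            ordersPlaced++;
        }
    }

    public static void main(String[] args) {
        int seenByTx1 = stock; // both transactions check the inventory...
        int seenByTx2 = stock; // ...before either one has committed
        commitOrder(seenByTx1);
        commitOrder(seenByTx2);
        // Two orders were accepted although only one item existed.
        System.out.println("stock = " + stock + ", orders placed = " + ordersPlaced);
        // prints: stock = 0, orders placed = 2
    }
}
```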

A serializable isolation level should fix this. BUT

  • the isolation guarantees actually provided vary greatly across databases, so it is not certain that every database will give you the isolation behavior you require.

  • such locking severely limits scalability. Transactions doing this should be as short and as infrequent as possible.



Depending on the database you are using, it might be better to implement this with a database constraint. For example, in Oracle you can create a materialized view that computes the remaining stock and constrain the result to be non-negative.

Update

For a materialized approach, you do the following.

  • create a materialized view that calculates the value you want to constrain, e.g. the total amount ordered. Make sure the materialized view is refreshed in the same transaction that changes the contents of the underlying tables.

    For Oracle, this is accomplished with the ON COMMIT clause:

    ON COMMIT Clause

    Specify ON COMMIT to indicate that a fast refresh is to occur whenever the database commits a transaction that operates on a master table of the materialized view. This clause may increase the time taken to complete the commit, because the database performs the refresh operation as part of the commit process.

    See https://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_6002.htm for details.

  • Place a check constraint on the materialized view that encodes the rule you want, e.g. that the value is never negative. Note that a materialized view is just another table, so you can create constraints the same way as usual.

    See https://www.techonthenet.com/oracle/check.php for an example.

+2








