Concurrent execution in SQL Server
Table schemas (SQL Server 2012):
CREATE TABLE interestbuffer
(
    accountno CHAR(17) PRIMARY KEY,
    calculatedinterest MONEY,
    provisionedinterest MONEY,
    accomodatedinterest MONEY
)

CREATE TABLE #tempinterestcalc
(
    accountno CHAR(17) PRIMARY KEY,
    calculatedinterest MONEY
)
I am doing an upsert: update the rows that already exist and insert the others.
UPDATE a
SET a.calculatedinterest = a.calculatedinterest + b.calculatedinterest
FROM interestbuffer a
INNER JOIN #tempinterestcalc b ON a.accountno = b.accountno

INSERT INTO interestbuffer
SELECT a.accountno, a.calculatedinterest, 0, 0
FROM #tempinterestcalc a
LEFT JOIN interestbuffer b ON a.accountno = b.accountno
WHERE b.accountno IS NULL
Everything works fine. The problem occurs during concurrent executions. I insert data into #tempinterestcalc by joining various other tables, including a LEFT JOIN against the interestbuffer table, and a different set of data is inserted into #tempinterestcalc by each concurrent execution.
My problem is that the executions block one another until I commit them, so they effectively run serially.

My question: since each execution works on a different set of data, the row locks taken by one should have no impact on the other concurrent operations. Any suggestion is appreciated.
Update 1: I have used sp_lock to inspect the interestbuffer table. It shows indid = 1, Type = KEY, Mode = X, Status = GRANT.
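For reference, sp_lock can be pointed at a single session like this (the session id is illustrative):

EXEC sp_lock @spid1 = 53;  -- 53 is an illustrative session id
-- indid = 1 with Type = KEY means key locks on the clustered index.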
I think the UPDATE and INSERT are blocking the other transactions to prevent phantom reads.
Update 2: Sorry! I said earlier that the UPDATE was fine. I now realize that the first transaction's write is blocking the second transaction's write. In the first transaction I run the UPDATE and INSERT. In the second transaction, after inserting data into the #tempinterestcalc table, I ran the following and it worked fine:
--Insert data into #tempinterestcalc
SELECT * FROM #tempinterestcalc
RETURN
--UPDATE interestbuffer
--INSERT INTO interestbuffer
Update 3: I think the problem is that interestbuffer is read from while it is being written to by the UPDATE and INSERT.
Update 4: The answer below works if I rebuild the index on branchcode in the interestbuffer table. Is there any reason why a batch INSERT/UPDATE would cause a problem with the index?
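For reference, the rebuild itself is a one-liner (the index name here is illustrative):

ALTER INDEX ix_interestbuffer_branchcode ON dbo.interestbuffer REBUILD;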
Update 5: I have read that if most of the rows on a page need to be locked, a batch update may cause SQL Server to lock the whole page instead. Is there a way to see which page contains a given row, or which pages are locked and released during execution?
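A sketch of two ways to inspect this; note that %%physloc%% and sys.fn_PhysLocFormatter are undocumented, while sys.dm_tran_locks is a documented DMV:

-- Which page does each row live on? The output format is (file:page:slot).
SELECT accountno,
       sys.fn_PhysLocFormatter(%%physloc%%) AS file_page_slot
FROM dbo.interestbuffer;

-- Which KEY/PAGE locks does a running batch hold on the table?
-- Run this from another session while the batch is executing.
SELECT l.request_session_id, l.resource_type,
       l.resource_description, l.request_mode, l.request_status
FROM sys.dm_tran_locks AS l
JOIN sys.partitions AS p
    ON p.hobt_id = l.resource_associated_entity_id
WHERE p.object_id = OBJECT_ID('dbo.interestbuffer');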
Update 6: Providing a scenario.
CREATE TABLE [dbo].[account]
(
    [accountno] [char](17) NOT NULL,
    [branchcode] [char](4) NOT NULL,
    CONSTRAINT [pk_account] PRIMARY KEY CLUSTERED ([accountno] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

CREATE TABLE [dbo].[interestbuffer]
(
    [accountno] [char](17) NOT NULL,
    [branchcode] [char](4) NOT NULL,
    [calculatedinterest] [money] NOT NULL,
    CONSTRAINT [pk_buffer] PRIMARY KEY CLUSTERED ([accountno] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
Query for branch 0001:
BEGIN TRAN

DECLARE @branchcode CHAR(4) = '0001'
DECLARE @calculatedinterestnew MONEY = 10

CREATE TABLE #tempinterestcalc
(
    accountno CHAR(17),
    branchcode CHAR(4),
    calculatedinterestnew MONEY,
    calculatedinterestold MONEY
)

INSERT INTO #tempinterestcalc
SELECT a.accountno, a.branchcode, ISNULL(b.calculatedinterest, 0), b.calculatedinterest
FROM account a
LEFT JOIN interestbuffer b
    ON a.accountno = b.accountno AND a.branchcode = b.branchcode
WHERE a.branchcode = @branchcode

UPDATE a
SET a.calculatedinterest = b.calculatedinterestnew + @calculatedinterestnew
FROM interestbuffer a
INNER JOIN #tempinterestcalc b
    ON a.accountno = b.accountno AND a.branchcode = b.branchcode
WHERE a.branchcode = @branchcode

INSERT INTO interestbuffer
SELECT a.accountno, a.branchcode, a.calculatedinterestnew + @calculatedinterestnew
FROM #tempinterestcalc a
WHERE a.calculatedinterestold IS NULL

DROP TABLE #tempinterestcalc

--ROLLBACK
--COMMIT TRAN
For branches 0002 and 0003, change the @branchcode variable value to '0002' and '0003' and run them simultaneously.
You have a potential deadlock problem because you are doing a read against the interestbuffer table after writing to it. Transactions will deadlock if one has locked part of the interestbuffer table with its UPDATE while another transaction is trying to read that part again for the SELECT needed by its INSERT.
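If you want to confirm that, recent deadlock graphs can be pulled from the built-in system_health Extended Events session; a sketch (the ring buffer only keeps recent events):

SELECT xed.event_node.query('(data/value/deadlock)[1]') AS deadlock_graph
FROM (
    SELECT CAST(st.target_data AS XML) AS target_data
    FROM sys.dm_xe_session_targets AS st
    JOIN sys.dm_xe_sessions AS s
        ON s.address = st.event_session_address
    WHERE s.name = 'system_health'
      AND st.target_name = 'ring_buffer'
) AS src
CROSS APPLY src.target_data.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS xed(event_node);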
You said you are LEFT JOINing against interestbuffer while calculating the #tempinterestcalc table... so why not use that as a cache of the data you need from interestbuffer so you don't have to read it again?

Change your temp table to:
CREATE TABLE #tempinterestcalc
(
    accountno CHAR(17) PRIMARY KEY,
    calculatedinterestnew MONEY,
    calculatedinterestold MONEY
)
You might also possibly want to set the REPEATABLE READ isolation level before beginning your transaction with:
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
It's more restrictive locking, but it will prevent other transactions from trying to process the same records at the same time, which you need because you are combining old and new values. Consider this scenario:
- Transaction 1 reads the data and wants to add 0.03 to the existing calculatedinterest of 5.00.
- Transaction 2 reads the data and wants to add 0.02 to the same 5.00.
- Transaction 1 updates calculatedinterest to 5.03.
- Transaction 2's UPDATE overwrites transaction 1's value of 5.03 with 5.02 (instead of adding to it and coming up with 5.05).
Maybe you don't need this if you are sure the transactions will never be touching the same records, since READ COMMITTED won't let transaction 2 read the values until transaction 1 is finished with them.
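As a minimal sketch of the difference (the account number and amounts are illustrative), run this in two sessions at the same time:

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRAN;

DECLARE @old MONEY;

-- Both sessions can read 5.00 here; under REPEATABLE READ the shared
-- lock taken by this read is held until COMMIT.
SELECT @old = calculatedinterest
FROM interestbuffer
WHERE accountno = 'A0000000000000001';

-- Because the new value is computed from the earlier read, READ
-- COMMITTED would let the second session overwrite the first (5.02
-- instead of 5.05). Under REPEATABLE READ the second UPDATE blocks on
-- the first session's shared lock, and one session is chosen as a
-- deadlock victim and rolled back instead of silently losing an update.
UPDATE interestbuffer
SET calculatedinterest = @old + 0.02
WHERE accountno = 'A0000000000000001';

COMMIT TRAN;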
Then separate your transaction into a distinct read phase first and a write phase:
--Insert data into #tempinterestcalc and include the previous interest value
INSERT INTO #tempinterestcalc
SELECT query.accountno,
       query.calculatedinterest AS calculatedinterestnew,
       interestbuffer.calculatedinterest AS calculatedinterestold
FROM ( ... ) query
LEFT JOIN interestbuffer ON query.accountno = interestbuffer.accountno

UPDATE a
SET a.calculatedinterest = b.calculatedinterestnew + b.calculatedinterestold
FROM interestbuffer a
INNER JOIN #tempinterestcalc b ON a.accountno = b.accountno

INSERT INTO interestbuffer
SELECT a.accountno, a.calculatedinterestnew, 0, 0
FROM #tempinterestcalc a --no join needed here to read interestbuffer
WHERE a.calculatedinterestold IS NULL
This shouldn't deadlock... but you may still see "unnecessary" blocking due to lock escalation, particularly if you are updating a large number of rows. Once there are more than about 5,000 locks on a table, they escalate to a table lock, and no other transactions will be able to continue until your transaction completes. This isn't necessarily a bad thing... you just want to make sure your transactions are as short as possible so they don't lock other transactions out for long. If lock escalation is causing you problems, there are some things you can do to mitigate it, such as:

- breaking the transaction into smaller chunks of work so it creates fewer locks (see the sketch after this list);
- ensuring you have an efficient query plan;
- making judicious use of lock hints.
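For the first point, here is a minimal sketch of chunking against the temp table above; the 4,000-row batch size is an assumption, chosen to stay under the roughly 5,000-lock escalation threshold:

-- Consume #tempinterestcalc one batch at a time so no single
-- transaction accumulates enough locks to escalate.
DECLARE @chunk TABLE
(
    accountno CHAR(17) PRIMARY KEY,
    calculatedinterestnew MONEY,
    calculatedinterestold MONEY
);

WHILE 1 = 1
BEGIN
    DELETE FROM @chunk;

    -- Carve one batch of work off the temp table.
    DELETE TOP (4000) FROM #tempinterestcalc
    OUTPUT deleted.accountno, deleted.calculatedinterestnew,
           deleted.calculatedinterestold
    INTO @chunk (accountno, calculatedinterestnew, calculatedinterestold);

    IF @@ROWCOUNT = 0 BREAK;

    BEGIN TRAN;

    UPDATE a
    SET a.calculatedinterest = b.calculatedinterestnew + b.calculatedinterestold
    FROM interestbuffer a
    INNER JOIN @chunk b ON a.accountno = b.accountno;

    INSERT INTO interestbuffer
    SELECT accountno, calculatedinterestnew, 0, 0
    FROM @chunk
    WHERE calculatedinterestold IS NULL;

    COMMIT TRAN;
END

Note that splitting the work across transactions means the run as a whole is no longer atomic, so an interrupted run needs to be resumable or safely re-runnable.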
Check your query plan and see if there are any table scans of interestbuffer in any of your statements... particularly the initial population of #tempinterestcalc, since you didn't show how you are building that.
If you are absolutely sure you will never be updating accounts in more than one branch at the same time, you might consider keeping the primary key the same but changing the clustered index to (branchcode, accountno), where the order is significant. That will keep records of the same branch physically next to each other and reduce the chance that the plan will scan the table or lock pages that other transactions might need; a sketch of that change follows below.
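A sketch of that change (the clustered index name is illustrative):

ALTER TABLE dbo.interestbuffer DROP CONSTRAINT pk_buffer;

ALTER TABLE dbo.interestbuffer
    ADD CONSTRAINT pk_buffer PRIMARY KEY NONCLUSTERED (accountno);

-- Cluster on (branchcode, accountno) so each branch's rows are contiguous.
CREATE CLUSTERED INDEX cix_buffer_branch_account
    ON dbo.interestbuffer (branchcode, accountno);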
You can also use PAGLOCK hints to encourage SQL Server to lock at the page level instead of the row level and prevent it from reaching the threshold that triggers lock escalation. To do this, modify the code from update 6 in the question like this:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

BEGIN TRAN

DECLARE @branchcode CHAR(4) = '0001'
DECLARE @calculatedinterestnew MONEY = 10

CREATE TABLE #tempinterestcalc
(
    accountno CHAR(17),
    branchcode CHAR(4),
    calculatedinterestnew MONEY,
    calculatedinterestold MONEY
)

INSERT INTO #tempinterestcalc
SELECT a.accountno, a.branchcode, ISNULL(b.calculatedinterest, 0), b.calculatedinterest
FROM account a
LEFT JOIN interestbuffer b
    ON a.accountno = b.accountno AND a.branchcode = b.branchcode
WHERE a.branchcode = @branchcode

UPDATE a WITH (PAGLOCK)
SET a.calculatedinterest = b.calculatedinterestnew + @calculatedinterestnew
FROM interestbuffer a
INNER JOIN #tempinterestcalc b
    ON a.accountno = b.accountno AND a.branchcode = b.branchcode
WHERE a.branchcode = @branchcode

INSERT INTO interestbuffer WITH (PAGLOCK)
SELECT a.accountno, a.branchcode, a.calculatedinterestnew + @calculatedinterestnew
FROM #tempinterestcalc a
WHERE a.calculatedinterestold IS NULL

DROP TABLE #tempinterestcalc

--ROLLBACK
--COMMIT TRAN
Because the records are physically sorted together, this should lock only a few pages... even when updating thousands of records. You can then run the transaction for branch 0003 at the same time as 0001 without blocking issues. However, you will still have a blocking problem if you try an adjacent branch such as 0002 at the same time, because records from branches 0001 and 0002 share the same page.
If you need to separate your branches further, you could look into using a partitioned table or index. I don't know much about them, but it sounds like it might be useful for what you are trying to do, though it comes with its own set of complications.
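For completeness, a rough, untested sketch of the idea (boundary values and names are illustrative, and it assumes the clustered index from the earlier sketch):

-- One partition per range of branch codes; locks can then escalate to
-- a partition instead of the whole table.
CREATE PARTITION FUNCTION pf_branch (CHAR(4))
    AS RANGE LEFT FOR VALUES ('0001', '0002', '0003');

CREATE PARTITION SCHEME ps_branch
    AS PARTITION pf_branch ALL TO ([PRIMARY]);

-- Rebuild the clustered index on the partition scheme.
CREATE CLUSTERED INDEX cix_buffer_branch_account
    ON dbo.interestbuffer (branchcode, accountno)
    WITH (DROP_EXISTING = ON)
    ON ps_branch (branchcode);

-- Allow escalation to the partition level rather than the table.
ALTER TABLE dbo.interestbuffer SET (LOCK_ESCALATION = AUTO);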