Duplicate handling in SQLAlchemy
I am using SQLAlchemy to write data from Python to a SQL database. The code below writes the contents of the first table, Table1, and then gives me the primary keys for Table1, which I use when writing the data for the second table, Table2, since Table2 has foreign keys mapped to Table1. However, I am now writing a function to prevent duplicate data in these tables if I have to run the script again.
Base.metadata.create_all(engine, checkfirst=True)
Session = sessionmaker(bind=engine)
session = Session(expire_on_commit=False)

# Writing data into Table1
session.add_all([
    Table1(name='Euro'),
    Table1(name='Fed'),
    Table1(name='Aus'),
    Table1(name='BOE'),
    Table1(name='Canada')])
session.flush()
session.commit()

session = Session(expire_on_commit=False)

# Obtaining the primary keys for Table1
listofindices = []
for row in session.query(Table1):
    listofindices.append(row.id)
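As a side note (not part of the original question): after session.flush(), SQLAlchemy populates the generated primary keys on the instances themselves, so the second session and query are not needed to collect the ids. A minimal sketch, assuming a hypothetical Table1 model with an autoincrement id and an in-memory SQLite engine:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Table1(Base):
    __tablename__ = 'table1'  # assumed table name
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine('sqlite://')  # in-memory database for illustration
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

rows = [Table1(name=n) for n in ['Euro', 'Fed', 'Aus', 'BOE', 'Canada']]
session.add_all(rows)
session.flush()  # the database assigns the primary keys here, before commit
listofindices = [row.id for row in rows]
session.commit()
```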
So far the code for handling duplicates looks like this. However, I'm not sure how to get the primary keys from Table1 and at the same time prevent duplicate data in it:
from sqlalchemy import text  # text() is required for raw SQL on SQLAlchemy 1.4+

Session = scoped_session(sessionmaker(bind=engine))
s = Session()
select_statement = s.execute(text('select count(*) from table1'))
result = select_statement.fetchone()
row_count = result[0]  # number of rows already in Table1
if row_count != 0:
    # DON'T KNOW WHAT TO WRITE HERE TO PREVENT DUPLICATES AS NOW THE TABLE EXISTS
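One common way to fill in that gap (a sketch, not from the original post; it assumes the name column should be unique and that the model and engine look roughly like the hypothetical ones below) is a get-or-create helper: look the row up first, insert only if it is missing, and return the instance either way so its primary key is available in both cases. Running the script twice then leaves the table unchanged:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Table1(Base):
    __tablename__ = 'table1'            # assumed table name
    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True)  # the constraint also guards against races

def get_or_create(session, model, **kwargs):
    """Return the existing row matching kwargs, or insert and return a new one."""
    instance = session.query(model).filter_by(**kwargs).one_or_none()
    if instance is None:
        instance = model(**kwargs)
        session.add(instance)
        session.flush()  # assigns the primary key without committing
    return instance

engine = create_engine('sqlite://')  # in-memory database for illustration
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# Running the same inserts twice creates no duplicates, and
# get_or_create hands back the primary keys either way.
for _ in range(2):
    listofindices = [get_or_create(session, Table1, name=n).id
                     for n in ['Euro', 'Fed', 'Aus', 'BOE', 'Canada']]
    session.commit()
```

The same ids collected here can then be used as the foreign keys for Table2. For large batches, a database-side upsert (e.g. SQLite/PostgreSQL `ON CONFLICT`) is more efficient, but the query-then-insert pattern above is the simplest portable approach.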