How can I configure a Snowpipe to grab the same filename from an S3 bucket when the file is refreshed and re-uploaded?
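(A hedged note on this: Snowpipe keeps per-pipe load history and skips files it has already loaded, so re-uploading a file under the same name normally does not trigger a reload; the usual recommendation is to upload refreshed files under a new name, e.g. with a timestamp suffix. A minimal pipe sketch, where STUDENT_PIPE, MY_S3_STAGE, and STUDENT are hypothetical names and the connection details are placeholders:

import snowflake.connector

conn = snowflake.connector.connect(
    user='TESTUSER', password='TestPassword', account='abc-99999999',
    warehouse='COMPUTE_WH', database='PRACTICEDB', schema='DDACHEMA'
)
cur = conn.cursor()
try:
    # AUTO_INGEST = TRUE lets S3 event notifications trigger the pipe.
    # Snowpipe's load history skips files it has already loaded, so a
    # refreshed file re-uploaded under the SAME name is normally ignored;
    # uploading under a new name (e.g. a timestamp suffix) avoids this.
    cur.execute("""
        CREATE OR REPLACE PIPE STUDENT_PIPE AUTO_INGEST = TRUE AS
        COPY INTO STUDENT
        FROM @MY_S3_STAGE
        FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
    """)
finally:
    cur.close()
    conn.close()
)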
Hi… I created an S3 bucket on AWS and uploaded a JSON file, which I loaded into Snowflake using the external stage and pipe.
But now I have uploaded a CSV file into the same S3 bucket.
How do I get the CSV into Snowflake?
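(A hedged sketch of an answer: since the CSV sits in the same bucket behind the existing external stage, a COPY INTO with a CSV file format is usually all that is needed. STUDENT_CSV, MY_S3_STAGE, and students.csv are hypothetical names, and the target table must already exist with matching columns:

import snowflake.connector

conn = snowflake.connector.connect(
    user='TESTUSER', password='TestPassword', account='abc-99999999',
    warehouse='COMPUTE_WH', database='PRACTICEDB', schema='DDACHEMA'
)
cur = conn.cursor()
try:
    # Point COPY at the CSV in the existing external stage and override
    # the file format inline so JSON settings on the stage don't apply
    cur.execute("""
        COPY INTO STUDENT_CSV
        FROM @MY_S3_STAGE/students.csv
        FILE_FORMAT = (TYPE = 'CSV' FIELD_DELIMITER = ',' SKIP_HEADER = 1)
    """)
finally:
    cur.close()
    conn.close()
)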
Hello
Getting a traceback error in statement_1. The error: "TypeError: can only concatenate str (not "tuple") to str".
Could you please help?
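(That TypeError usually means a str is being concatenated with the tuple that fetchone() returns. The contents of statement_1 aren't shown, so this is an assumption, but a minimal reproduction and fix would look like:

row = (1, 'Alice')           # fetchone() returns a tuple, not a str
# print("row: " + row)       # TypeError: can only concatenate str (not "tuple") to str
print("row: " + str(row))    # convert explicitly...
print(f"row: {row}")         # ...or use an f-string
)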
Dear Sir, I developed a program to first move an S3 file to a stage (Parquet) in Snowflake. From there I used INFER_SCHEMA to create the table in the same program, and then I used COPY INTO to move the data from the stage (Parquet) into that Snowflake table. I now need to know the number of rows the Snowflake table has. I tried every way I could, but using the Snowflake connector it's not printing anything. I used the code below:

import snowflake.connector

conn = snowflake.connector.connect(
    user='TESTUSER',
    password='TestPassword',
    account='abc-99999999',
    warehouse='COMPUTE_WH',
    database='PRACTICEDB',
    schema='DDACHEMA'
)
cur = conn.cursor()
try:
    cur.execute("USE ROLE ACCOUNTADMIN")
    cur.execute("SELECT * FROM STUDENT")
    one_row = cur.fetchone()
    print(one_row)
finally:
    cur.close()
    conn.close()

It's not printing the value of one_row, and the program completes. When I run the same query in Snowflake, it shows 5 records. Am I doing something wrong or missing anything? Is there an issue with the connector? The Snowflake connector is 2.4.1 and Python is 3.9.12.
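(For what it's worth: fetchone() returns None when the result set is empty, so the likely cause is that the session at connect time resolves STUDENT differently than the worksheet does — a role, database, or schema mismatch. A sketch that fully qualifies the table and asks for the row count directly, reusing the names from the question:

import snowflake.connector

conn = snowflake.connector.connect(
    user='TESTUSER', password='TestPassword', account='abc-99999999',
    warehouse='COMPUTE_WH', database='PRACTICEDB', schema='DDACHEMA'
)
cur = conn.cursor()
try:
    cur.execute("USE ROLE ACCOUNTADMIN")
    # Fully qualify the table so the result does not depend on the
    # session's current database/schema
    cur.execute("SELECT COUNT(*) FROM PRACTICEDB.DDACHEMA.STUDENT")
    row_count = cur.fetchone()[0]   # COUNT(*) always returns exactly one row
    print(f"rows in STUDENT: {row_count}")
finally:
    cur.close()
    conn.close()
)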
You don't actually need to do all of this; just use the Snowflake connector's write_pandas function, which does the same thing for you.
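(A minimal write_pandas sketch, assuming a hypothetical local students.csv, an existing STUDENT table, and the connection parameters from the question above:

import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

conn = snowflake.connector.connect(
    user='TESTUSER', password='TestPassword', account='abc-99999999',
    warehouse='COMPUTE_WH', database='PRACTICEDB', schema='DDACHEMA'
)
try:
    df = pd.read_csv('students.csv')   # hypothetical local file
    # write_pandas stages the DataFrame and runs COPY INTO behind the
    # scenes; the target table STUDENT must already exist
    success, num_chunks, num_rows, _ = write_pandas(conn, df, 'STUDENT')
    print(f"loaded {num_rows} rows in {num_chunks} chunk(s): {success}")
finally:
    conn.close()
)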
Yes SF001, you can check this video, where I explain the write_pandas technique --
th-cam.com/video/mZtJqTVPv6E/w-d-xo.html
But there is a reason to explain this external stage approach too -- it can make the data loading ultra-fast with big data as well, where pandas would be blown away in seconds by the sheer volume.
You can refer to this link to learn about parallelism in the COPY command --
interworks.com/blog/2020/03/04/zero-to-snowflake-multi-threaded-bulk-loading-with-python/
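(A minimal sketch of the idea, assuming a large CSV pre-split into chunk files under /tmp/chunks and an internal stage CSV_STAGE — both hypothetical. PUT uploads the chunks with parallel threads, and a single COPY then loads the staged files, which Snowflake parallelizes across files:

import snowflake.connector

conn = snowflake.connector.connect(
    user='TESTUSER', password='TestPassword', account='abc-99999999',
    warehouse='COMPUTE_WH', database='PRACTICEDB', schema='DDACHEMA'
)
cur = conn.cursor()
try:
    # PARALLEL sets how many threads upload the chunk files concurrently
    cur.execute("PUT file:///tmp/chunks/student_*.csv @CSV_STAGE PARALLEL = 8")
    # One COPY loads all staged files; many smaller files load faster
    # than one huge file because Snowflake loads files in parallel
    cur.execute("""
        COPY INTO STUDENT
        FROM @CSV_STAGE
        FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
    """)
finally:
    cur.close()
    conn.close()
)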
Happy Learning !
Thanks 😊 🙏
Most welcome Dipraj 😊 Happy Learning!