We need a 60-hour course for data engineering.
U make one
Check out the data engineering zoomcamp.
😂@@dijik123
@@StarLord-571
@@StarLord-571 it's not about the quantity but the quality
We need a full data engineering course: Python + SQL + big data (Hadoop) + Apache Spark + Apache Airflow + Apache Kafka + AWS + a project.
If you're running into an error with "exit code 1" @1:28:55, you need to update the setup from @1:08:08: go to the docker-compose.yaml and set "image: postgres:9.2" for both "source_postgres" and "destination_postgres".
I think this was a good idea, but many details were off. I wish he'd be more specific about the versions of the software he's using next time.
Thanks, I wasted many hours on that error. I would appreciate it if you could expand on the reasons why this error happens. Again, thanks for your contribution.
God bless you, mate!
Thanks a lot for the point. I spent a bit of time trying to solve the issue.
Bro, my code is still running (after I edited the part you mentioned), but I believe in you! Thank you so much, I swear you are a savior! 🔥🔥
thanks mate
Please bring a bigger course which covers all aspects from the basics, from scratch: MySQL, Python/Java/Scala, Hadoop, Spark, PySpark... something that covers data engineering with one cloud.
yes!!
Yes, please. This would be so helpful but thanks for this resource, a great start.
YES Please
Yes
Yes please
The best DE course I've ever seen. Most courses only stick to the theory and never show the practical part.
Hey, currently there is no set path to becoming a data engineer. So please create a proper certification with a clear roadmap of the foundations and the most-used cloud tech in data engineering, so that those interested in this career can get some structure going.
At 1:53:56 you may face an error due to the fact that the dbt service is launched before the elt_script service has completed.
To solve the issue, add condition: service_completed_successfully under the depends_on clause of the dbt service, to make sure it is always launched only after the elt_script service completes.
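For reference, a minimal sketch of that change in docker-compose.yaml, assuming the service names used in the video (dbt, elt_script):
dbt:
  ...
  depends_on:
    elt_script:
      condition: service_completed_successfully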
Thank you so much.
This did not work for me, and I can't progress any further. Frustrating.
Thanks man, I was struggling with this for a while
depends_on:
  elt_script:
    condition: service_completed_successfully
@@wusswuzz5818 did you end up figuring out the errors? dbt creates the views for the films, film_actors, and actors tables for me, but not for the film_ratings table, because the relation "public.films" does not exist
in the docker-compose.yaml file, for anyone wondering (like me 1 min ago)
Please make this a series!
(Edit)
Suggestion 1: include an overview before each section covering 1) the overarching pipeline and 2) where in the pipeline we are for the given section. In other words, having a map or diagram of what is happening would help with conceptual understanding.
Suggestion 2: explain the code line-by-line conceptually. The time spent writing the code on screen could be cut and replaced with explanation. This would save time.
Thanks for the course. To all those using VMs: make sure your files are located on the VM. I tried running the Docker exercise with Docker on the VM and the files on the host (Windows) and wasted a lot of time resolving errors; I finally moved all the files to the Ubuntu VM, where things ran smoothly.
@2:56:09 You can avoid relaunching the Airflow containers on every change by mapping your DAG dir to Airflow's DAG dir,
i.e. by adding a volumes mapping in the Airflow compose file under x-airflow-common:
x-airflow-common:
  ...
  volumes:
    - /path/to/project/dags:/opt/airflow/dags
@2:58:52 Normally you can force rebuilding the Docker images using the "--build" flag:
docker compose -f docker-compose.yml up --build -d
1:59:58
The reason he encountered the error is that he wrote {% generate_ratings() %}.
To avoid the error, it should be {% macro generate_ratings() %}
A piece of advice: it doesn't make sense to read and retype the code from your top monitor. Save your time and the watchers' time and just copy-paste what's there, then explain it line by line.
Well you should read the description
Amazing!
Looking forward to it.
Also, it would be perfect to have a more comprehensive version of this course as well. Covering all batch and stream processing tools and methods.
Since when is Justin a data engineer? Well, I guess the constant learning is real.
I've been following him since I started coding (almost 4 years now); he's always learning and growing.
@@briabytes which channel ?
@@briabytes Please share his YT channel.
@@JREQuickPods he codes and games on his Twitch channel, and he has a YouTube channel (you can search his name) where he shares his experiences: twitch.tv/justinbchau. I watched him make some of this course a few months back on Twitch.
his twitch is in the other comment
This course is not for beginners: he moves forward like hell
without explaining how to install anything. He moves fast, yet the video title says it's for beginners.
I just started and love the style. You teach fluently and focus on the important takeaways. I've seen so much bullshit that I fully expected to watch you install Docker for 10 minutes and had already started skipping, but you didn't show that part, which is nice. Makes perfect sense. Someone who can't RTFM and install Docker on their own shouldn't be focusing on DE at this point anyway, imho.
Came across another issue while running the code for the data pipeline: it gave me a version error between pg_dump and PostgreSQL. I changed the version from latest to 15.5 to match the dump version... hope this helps in case anyone faces issues in this module.
Hero
If you get the "pg_dump: error: aborting because of server version mismatch ".
I found that running "apt-get update && apt-get install -y postgresql-client" is actually installing version 15, while "postgres:latest: pulls version 16.
To fix this this, specify your image to be "postgres:15" for both the source_postgres and destination_postgres in the docker-compose.yaml.
I also changed my RUN in command in Dockerfile to be "apt-get update && apt-get install -y postgresql-client-15" to be explicit.
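A sketch of the relevant docker-compose.yaml lines after that change (everything else as in the video):
services:
  source_postgres:
    image: postgres:15
    ...
  destination_postgres:
    image: postgres:15
    ...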
Thank you. I spent 30 minutes on this error.
love you mate.
Why use the subprocess module in the ELT script rather than libraries made specifically for connecting to databases in Python such as psycopg2 and SQLalchemy?
I'm no expert but I asked ChatGPT and it said using subprocess in such a script adds unnecessary complexity and introduces safety concerns (i.e., passing passwords directly through command strings).
(Edit): Thank you for putting this out for free, but I will put it here for others to see: Spark & Kafka were not mentioned or covered in this course as the intro suggests. And, certain parts of the course are a bit disorganized / glossed over, but this is an opportunity to get better with the technologies by debugging on your own as well, so there's that!
The Airbyte section is complicated and badly explained.
It looks to me like some part is missing from the recording, e.g. how was Airbyte started?
After this video I don't see a significant advantage of Airbyte over the elt_script from the video example.
In the "Building a Data pipeline from Scratch" section, when I am running the containers by docker compose up, I get the following error:
elt-elt_script-1 | pg_dump: error: aborting because of server version mismatch
elt-elt_script-1 | pg_dump: detail: server version: 16.1 (Debian 16.1-1.pgdg120+1); pg_dump version: 15.5 (Debian 15.5-0+deb12u1)
I have installed the latest version of postgres on my machine, i.e. 16.1. I have also removed the images and volumes and rerun the docker compose up command, but I still get the above error. Can someone please help? Thanks!
I ran into this same problem. I fixed it by changing the docker-compose.yaml: instead of image: postgres:latest under source_postgres and destination_postgres, I wrote image: postgres:15 in both places. This way, when the code runs, it installs the same version of PostgreSQL in both the source and the destination.
@@oddlang687 it works, thank you very much; it took me two days to find the cause of that problem.
Thanks man!
@@oddlang687 Appreciate this comment, fixed for me. Thank you!
We would like to see more content for Data Engineering, possibly a full course.
More BI and DE courses like this please
Good job, you are creating visibility for Airbyte in a great way by providing an evolutionary view of the stack that eventually gets one to need it. Hope they continue to support you making content using this approach.
Here I am thinking about data engineering, then boom! YouTube shows me a data engineering course 😅
You probably googled
Finally a data engineering course!
I am stuck installing Docker; it can't run on my PC (Windows 10).
Errors:
Database Error in model actors (models/example/actors.sql)
  relation "public.actors" does not exist
  relation "public.actors" does not exist
===> Solution:
In docker-compose.yaml, add the following under the dbt service:
depends_on:
  elt_script:
    condition: service_completed_successfully
A hero, but why does it work for most people out of the box?
I did this and dbt runs for the films, film_actors, and actors tables, but when trying to create the view for the film_ratings table it still says that relation "public.films" does not exist. I have been frustratingly stuck on this for days; I even tried rebuilding the project. Not sure what else to do. Any thoughts on what I can do next?
@@jomeltapawan9316 same bro =(
@@jomeltapawan9316 I had this same issue. I had to recreate my project due to another setup issue, and as a result I forgot to edit the dbt_project.yml file to set '+materialized: table' rather than 'view'. Once I fixed this, the issue was resolved. At the beginning of the dbt section, he mentions that the view setting is not useful for this model application. Hope this helps.
Right off the bat, I would just like to make the comment: please nix the background music, or at least make it even quieter... Distracting.
Another one: I had issues with host.docker.internal, hence I added the following to the dbt service in docker compose:
extra_hosts:
  - "host.docker.internal:host-gateway"
Thank you! I ran into all the missing configuration/typos Justin had, but for this one his GitHub YAML file didn't have the line, so it was difficult to find the root cause. Thank you for sharing.
@@paolaprieto8111 hey, could you share the exact code you wrote? I can't seem to get it working
Please make a detailed tutorial on data engineering, a full end-to-end course of about 20-30 hours. Please, it's a request 🙌
I have the same folder structure, but when I execute the docker compose command I receive an error: fatal: Invalid --project-dir flag. Not a dbt project. Missing dbt_project.yml file.
Should I first aim to become a data scientist and then move to data engineering? Just wondering whether that's the better route if my end goal is to be a data engineer. Thoughts?
Please add more data engineering courses like this. I really love it.
Been waiting a long time for this course
Had an issue with the postgres version mismatch. I changed the version of the postgres server to match it:
services:
  source_postgres:
    image: postgres:15.5
Same for destination_postgres.
Hope this helps
thank you man! Great help
The setup for these services seems like A LOT of work, and because of that things can go wrong in production, not to mention the handover for something like this... What alternatives are there for this type of ELT pipeline?
Do you need to know maths for data engineering? If so, what types?
Thank you for this! Learned a lot today. 👍🏾
I ran into the below error trying to run my container; can you please advise me on how to resolve it? All my files are in one directory.
2024-05-03 15:34:42 Node.js v18.20.2
2024-05-03 15:34:45 node:internal/modules/cjs/loader:1143
2024-05-03 15:34:45 throw err;
2024-05-03 15:34:45 ^
2024-05-03 15:34:45
2024-05-03 15:34:45 Error: Cannot find module '/app/src/index.js'
2024-05-03 15:34:45 at Module._resolveFilename (node:internal/modules/cjs/loader:1140:15)
2024-05-03 15:34:45 at Module._load (node:internal/modules/cjs/loader:981:27)
2024-05-03 15:34:45 at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:128:12)
2024-05-03 15:34:45 at node:internal/main/run_main_module:28:49 {
2024-05-03 15:34:45 code: 'MODULE_NOT_FOUND',
2024-05-03 15:34:45 requireStack: []
2024-05-03 15:34:45 }
2024-05-03 15:34:45
2024-05-03 15:34:45 Node.js v18.20.2
YES THANK YOU SO MUCH keep expanding this!!!!
At 2:52:34, how was Airbyte started??
try looking into course resources > airbyte
My man...Justin!
You're a top crossfitter and I miss working out under your guidance
The intro and description don't match the video - there is no Spark or Kafka anywhere in the video.
Please make more videos about data engineering
Can you please do a complete end-to-end video on AWS or GCP data engineering: data ingestion, ETL, and analytics using PySpark?
Why is dbt using the port 5434 to communicate with the destination database? Both containers are running in the same docker network so why do we need to use the exposed port number?
It does not need that port. That port is exposed so that you, the user, can take a peek at the database by running a DB manager against it; otherwise, to peek at the database you would have to exec -it into the DB container to look at the tables.
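To make that concrete, a sketch of the mapping (assuming the 5434:5432 mapping implied by the video):
destination_postgres:
  ports:
    - "5434:5432"   # host port : container port
Inside the compose network, other containers reach it at destination_postgres:5432; the 5434 side only matters for tools running on your host machine.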
Please come up with a big data course
upon doing "docker compose up" (this is right towards the end of the "building a data pipeline from scratch") I get the error :
"source_postgres-1 | 2024-01-20 23:30:21.504 UTC [247] FATAL: password authentication failed for user "root"
source_postgres-1 | 2024-01-20 23:30:21.504 UTC [247] DETAIL: Role "root" does not exist.
source_postgres-1 | Connection matched file "/var/lib/postgresql/data/pg_hba.conf" line 128: "host all all all scram-sha-256""
Anyone else get the same issue? I'm not a huge expert on Docker, so I can only guess I have to do something with the postgres user "root", even though Justin didn't make a root user, only "postgres"... but then how do I do this in Docker as well?
Instead of postgres:latest, use image: postgres:12, and add this line: POSTGRES_INITDB_ARGS: --auth-host=scram-sha-256
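A sketch of where those lines go, assuming the environment block from the video (the existing POSTGRES_USER/POSTGRES_PASSWORD entries stay as they are):
source_postgres:
  image: postgres:12
  environment:
    ...
    POSTGRES_INITDB_ARGS: --auth-host=scram-sha-256
Same for destination_postgres.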
1:40:58 Can't find the schema.yml file in the course GitHub. Had to copy everything from the video :/
Ran into the same issue; it's in a different branch
Okay, major question. As you are adding new technologies like Airflow to the Docker setup, where is your console log or terminal to let you know if there are any syntax errors, etc.? For example, with React, Django, or Flutter, I always have the app running on localhost and ALWAYS have that window open to see if there are any errors in the error log as I am updating files.
How do you do that with this workflow, to make sure you're not making mistakes while you're writing code?
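One common approach, for what it's worth (service name below is an assumption): keep a second terminal tailing the container logs while you edit, e.g.
docker compose logs -f
or docker compose logs -f airflow-scheduler for a single service. Airflow's scheduler logs will surface DAG import errors (including syntax errors) as the files change.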
Yes! I’ve been waiting for this.
I really need a longer video.
Finalllyyyy a data project
Please, we need courses about big data, Hadoop, and Spark, with practical projects
Thank you for this. I like the way you teach it step-by-step and I always got lost with JOINS. Haha anyways thanks for this! Keep it up
it was alright, I think adding a bit more discipline into the course would be an amazing upgrade
I am getting an issue with docker compose up:
'dbt-1 | relation "public.actors" does not exist'
It seems like dbt runs first and then the elt_script, even though I have the same docker compose file as the video shows.
Fixed it by adding this to my dbt service:
depends_on:
  elt_script:
    condition: service_completed_successfully
@@josesalazar2384 thank you for sharing this! I ran into the same problem and was looking forever for a solution. This worked great for me
thanks a lot
@@josesalazar2384 I was having the same issue but with public.films does not exist. I went into the compose file and added the condition, but that did not solve the issue. I looked around, and it turns out that in sources.yml I had the table source for films set to "films.sql".
We don't need to add the .sql extension.
Leaving this here in case someone has the same issue as me; see the sketch below.
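A sketch of the relevant sources.yml entry (the source name here is a placeholder; use the one from the video):
version: 2
sources:
  - name: destination_db   # placeholder source name
    tables:
      - name: films        # table name only, no .sql
      - name: actors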
I'd like to see more of your content; good explanation skills
Thank you for the amazing video. Could you please make a second video including data engineering projects?
Might be great, but the source code is not working at all (starting with the main branch, and it keeps going).
I love it. But let's say I need to make an email classification system that distributes emails after classification and sends each one to the server of the department it was classified into.
Would you use a task queue like Celery, or a broker like Kafka or RabbitMQ?
Thanks!
Am I stupid, or was the first part about Docker a bit incomprehensible?
Docker tends to feel like that...
Hey, does anybody know where I can find the GitHub repo for the dbt project? I'm searching for the schema and can't find it
same
@@joshwigginton9881 you switch to the dbt branch of the repo
best video on the internet
Is spark and kafka actually covered here as mentioned in the description?
nope
Does the Alpine container run the same as the Ubuntu container? I presume it does, but I want to know if it changes the scope of the project
Do a full MySQL course
Hi! Thanks for the tutorial. Where can I find those .yml files? They're not in the GitHub repo.
I mean the dbt part
Sir, on the first run of docker compose up, here is the error I have been facing: elt_script-1 exited with code 1. Someone please help
did you solve this?
@@muh.zakyfirdaus6100 not yet solved, please help
@freeCodeCamp please put more content on data engineering ;)
I need a comprehensive video. Please !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!. I'm looking for that.
Can anyone suggest some YouTube channels which cover all the topics, like SQL, Python, Spark, PySpark, Azure?
Finalllyyyy a data project. What are the learning prerequisites for this course?
Great course!
How do I learn all of these? What course should I take? Please help
Can you make a data engineering boot camp?
Please make video on performance testing using jmeter
Can someone help me? When running the docker run command, I get an Exited(1) error in Docker, saying it cannot find a module:
Error: Cannot find module '/app/src/index.js'
I did everything per the instructions. Should I move the index.js file somewhere else? It is in the app directory
Fixed it by running a bunch of install commands:
npm install uuid
npm install express
npm install sqlite3
But now I'm getting "Cannot GET /"
How does the "data_dump.sql" file transfer from source container to the destination container?
Hi Justin, I have a problem loading data into Postgres from the SQL scripts. The error I encounter is "unf_inv_name expected afer this token". May I get some advice, please?
| 05:53:48 Encountered an error:
dbt-1 | Runtime Error
dbt-1 | fatal: Invalid --project-dir flag. Not a dbt project. Missing dbt_project.yml file
dbt-1 exited with code 2. I'm getting this issue; how do I solve this error?
I had the same problem, I solved it like this:
command: ["run"]
networks:
  - elt_network
volumes:
  - ./custom_postgres:/usr/app
  - ~/.dbt/:/root/.dbt/
You can change "run" to "debug" to see which file it does not recognize
Using an absolute path instead of ~ helped me
ok, this is super helpful.
What are the learning pre-requisites for this course?
brain
This guy could barely code in JavaScript like a year ago, so not too much I guess.
@@AndrewHuange thanx for answering my query👍
I'm curious what tools besides Airbyte can be used? (Airbyte is the presenter's employer; FYI, this is a marketing channel for Airbyte.)
Not even on the Gartner Quadrant. And Airbyte is freemium, with only 14 days of free use. And as you might have guessed, it's greed-priced, errr, usage-priced: the lowest tier (10 GB and only 4 rows to replicate per month) is $160.00. That is hugely expensive. I think I'm going to skip this advertisement.
@@markk364 you can run airbyte for free using the OSS version.
Yes! Thanks!
You didn't solve the column-level macro bug @ 2:00:03. That's not the right attitude, mate.
It was actually that he forgot the macro keyword when he was defining the macro in the ratings_macro.sql file:
{% macro generate_ratings() %}
CASE
    WHEN user_rating >= 4.5 THEN 'Excellent'
    WHEN user_rating >= 4.0 THEN 'Good'
    WHEN user_rating >= 3.0 THEN 'Average'
    ELSE 'Poor'
END as rating_category
{% endmacro %}
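And a sketch of how the macro is then called from a model, assuming a films model like the one in the video (column names other than user_rating are guesses):
SELECT
    film_id,
    user_rating,
    {{ generate_ratings() }}
FROM {{ ref('films') }}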
This channel is gold 🪙❤
Definitely not for beginners, but good.
The semi-transparent terminal is a very bad idea for a video. It unnecessarily makes it harder to see the text.
Any prerequisites required?
Here's your data model. People/Places/Pennies/Product involved in a Process.
Thanks for the video
Please cover Turbo. One of the most interesting meme coins.
This is excellent content
Do you guys have an idea what VS Code theme he is using?
Quality content as always
Something is wrong with the Airbyte section.
1. First of all, when setting up the source/destination: host.docker.internal doesn't work; localhost does.
2. I didn't manage to connect to Airbyte using Airflow either:
- raise AirflowException(e) airflow.exceptions.AirflowException: Unexpected status code 401 from token endpoint
- OR raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8001): Max retries exceeded with url: /v1/applications/token (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
During handling of the above exception, another exception occurred:
I think the reason is the ":latest" versions. I even tried cloning the example repo and got the same issues
Looks like you had a bad time with Airbyte 😅 Give some air to the Airbyte section