Mage
August 2, 2024 Release v0.9.73
🚨 Oh we're just getting started, bub! Deadpool & Wolverine release v0.9.73 🥷🐺
🧠 Memory management upgrade
🔄 Dynamic blocks 2.0
🔐 Azure DB connection via Key Vault
👁️ Workspace monitoring
🐛 Bug fixes & more!
Details: github.com/mage-ai/mage-ai/releases/tag/0.9.73
#dynamicblocks #azuredb #keyvault #changelog #dataengineering #mage #mageai
Views: 67

Videos

LLM Zoomcamp - LLM Orchestration 5.6 Retrieval
609 views · 1 month ago
After exporting the chunks and embeddings, we can test the search query to retrieve relevant documents on sample queries. Github: github.com/DataTalksClub/llm-zoomcamp/tree/main/05-orchestration LLM Zoomcamp: github.com/DataTalksClub/llm-zoomcamp DataTalks.Club's Slack: datatalks.club/slack.html Mage Slack: www.mage.ai/chat #llm #llmorchestration #rag #datatalksclub #llmzoomcamp #mageai
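The retrieval step described here can be sketched as a cosine-similarity search over the exported embeddings. A minimal illustration, not the course's actual code (the data shapes and function name are made up for the sketch):

```python
import numpy as np

def retrieve(query_vec, doc_vecs, texts, top_k=3):
    # Normalize both sides so the dot product equals cosine similarity.
    q = np.asarray(query_vec, dtype=float)
    q = q / np.linalg.norm(q)
    d = np.asarray(doc_vecs, dtype=float)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    scores = d @ q
    # Highest-scoring chunks first.
    order = np.argsort(scores)[::-1][:top_k]
    return [(texts[i], float(scores[i])) for i in order]
```

The same idea scales up by swapping the brute-force dot product for a vector index.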
LLM Zoomcamp - LLM Orchestration 5.7 Trigger
458 views · 1 month ago
Automation is key to maintaining and updating your system. This section demonstrates how to schedule and trigger daily runs for your data pipelines, ensuring up-to-date and consistent data processing. Github: github.com/DataTalksClub/llm-zoomcamp/tree/main/05-orchestration LLM Zoomcamp: github.com/DataTalksClub/llm-zoomcamp DataTalks.Club's Slack: datatalks.club/slack.html Mage Slack: www.mage....
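Besides the UI flow shown in the video, Mage triggers can also be declared in a pipeline-folder `triggers.yaml`. A hedged sketch with a hypothetical trigger name; verify the exact field names against the Mage docs for your version:

```yaml
triggers:
- name: daily_refresh          # hypothetical trigger name
  schedule_type: time
  schedule_interval: "@daily"
  status: active
```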
LLM Zoomcamp - LLM Orchestration 5.5 Export
506 views · 1 month ago
After processing, data needs to be exported for storage so that it can be retrieved for better contextualization of user queries. Github: github.com/DataTalksClub/llm-zoomcamp/tree/main/05-orchestration LLM Zoomcamp: github.com/DataTalksClub/llm-zoomcamp DataTalks.Club's Slack: datatalks.club/slack.html Mage Slack: www.mage.ai/chat #llm #llmorchestration #rag #datatalksclub #llmzoomcamp #mageai
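As an illustration of the export step, here is a minimal sketch that persists chunks alongside their embeddings to a JSON file. The course may export to a different store, so treat the path and record schema as assumptions:

```python
import json

def export_chunks(chunks, embeddings, path):
    # Persist each chunk next to its embedding so the retrieval step
    # can load both without recomputing anything.
    records = [
        {"chunk": chunk, "embedding": [float(x) for x in emb]}
        for chunk, emb in zip(chunks, embeddings)
    ]
    with open(path, "w") as f:
        json.dump(records, f)
    return len(records)
```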
LLM Zoomcamp - LLM Orchestration 5.4 Embed
433 views · 1 month ago
Embedding data translates text into numerical vectors that can be processed by models. Github: github.com/DataTalksClub/llm-zoomcamp/tree/main/05-orchestration LLM Zoomcamp: github.com/DataTalksClub/llm-zoomcamp DataTalks.Club's Slack: datatalks.club/slack.html Mage Slack: www.mage.ai/chat #llm #llmorchestration #rag #datatalksclub #llmzoomcamp #mageai
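The point here, text becomes numerical vectors, can be illustrated with a deterministic toy embedding. This hashing-based stand-in is not a real model (the course would use an actual embedding model); it only shows the text-to-vector shape of the operation:

```python
import hashlib
import math

def toy_embed(text, dim=8):
    # Hash each token into a fixed-size vector, then L2-normalize.
    # Purely illustrative: real embeddings capture meaning, this does not.
    vec = [0.0] * dim
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]
```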
LLM Zoomcamp - LLM Orchestration 5.3 Tokenization
596 views · 1 month ago
Tokenization is a crucial step in text processing and preparing the data for effective retrieval. Github: github.com/DataTalksClub/llm-zoomcamp/tree/main/05-orchestration LLM Zoomcamp: github.com/DataTalksClub/llm-zoomcamp DataTalks.Club's Slack: datatalks.club/slack.html Mage Slack: www.mage.ai/chat #llm #llmorchestration #rag #datatalksclub #llmzoomcamp #mageai
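As a toy illustration of tokenization (real pipelines typically use a model-specific tokenizer bundled with the embedding model, not a regex), a simple word splitter shows the basic idea of turning text into units for processing:

```python
import re

def tokenize(text):
    # Lowercase and keep alphanumeric runs; illustrative only,
    # not the tokenizer the course uses.
    return re.findall(r"[a-z0-9]+", text.lower())
```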
LLM Zoomcamp - LLM Orchestration 5.2 Chunk
589 views · 1 month ago
Once data is ingested, we break it into manageable chunks. This section explains the importance of chunking data and various techniques. Github: github.com/DataTalksClub/llm-zoomcamp/tree/main/05-orchestration LLM Zoomcamp: github.com/DataTalksClub/llm-zoomcamp DataTalks.Club's Slack: datatalks.club/slack.html Mage Slack: www.mage.ai/chat #llm #llmorchestration #rag #datatalksclub #llmzoomcamp ...
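One common chunking technique the description alludes to is a sliding window with overlap, so sentences spanning a boundary still appear intact in at least one chunk. A minimal sketch (the window sizes are illustrative, not the course's values):

```python
def chunk_text(text, size=200, overlap=50):
    # Slide a fixed-size window forward by (size - overlap) characters.
    if overlap >= size:
        raise ValueError("overlap must be smaller than size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks
```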
LLM Zoomcamp - LLM Orchestration 5.1 Ingest
906 views · 1 month ago
In this section, we cover the ingestion of documents from a single data source. Github: github.com/DataTalksClub/llm-zoomcamp/tree/main/05-orchestration LLM Zoomcamp: github.com/DataTalksClub/llm-zoomcamp DataTalks.Club's Slack: datatalks.club/slack.html Mage Slack: www.mage.ai/chat #llm #llmorchestration #rag #datatalksclub #llmzoomcamp #mageai
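The ingestion step can be sketched as loading raw documents from a single source. This stand-in reads a local JSON file and assumes each record carries a "text" field; the course pulls from its own source, so treat the path and schema as placeholders:

```python
import json

def ingest(path):
    # Load raw documents and keep only records with non-empty text,
    # so downstream chunking never sees empty inputs.
    with open(path) as f:
        docs = json.load(f)
    return [d for d in docs if d.get("text")]
```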
LLM Zoomcamp - LLM Orchestration 5.0 Module overview
1K views · 1 month ago
Join Mage CEO and Co-founder, Tommy Dang, as he goes over LLM Orchestration using Mage. Getting started... 1. Clone repository: github.com/mage-ai/rag-project 2. cd into the repository 3. ./scripts/start.sh For more setup information, refer to these instructions: docs.mage.ai/getting-started/setup#docker-compose-template Github: github.com/DataTalksClub/llm-zoomcamp/tree/main/05-orchestration L...
June 21, 2024 Release v0.9.72
259 views · 2 months ago
🚨 Raise your banners! House Targaryen release v0.9.72 🐉
🎨 Canvas v2
🐍 Support custom python sink in Streaming pipeline
🔐 Allow configuring custom owner credentials
🐛 Bug fixes & more!
Details: github.com/mage-ai/mage-ai/releases/tag/0.9.72
#python #streaming #changelog #dataengineering #mage #mageai
June 3, 2024 Release v0.9.71
190 views · 3 months ago
Manifest your superhuman abilities with the X-Men release v0.9.71! ❌🧬
📡 Create streaming Oracle destination
👤 Support active project per user in multi-project platform
⚙️ Support job parameters in k8s executor
🔑 Support OIDC_ROLES_MAPPING and user_roles
🐛 Bug fixes & more!
Details: github.com/mage-ai/mage-ai/releases/tag/0.9.71
#oracle #streaming #multiproject #k8s #oidc
MLOps Zoomcamp - 3.1.2 Data preparation: Ingest data
2.6K views · 3 months ago
Join Mage CEO and Co-founder, Tommy Dang, as he goes over data preparation for ETL and feature engineering. He'll cover: -Ingest raw data -Prepare data for training -Build training sets -Data validations using built-in testing framework and much more! Github: github.com/DataTalksClub/mlops-zoomcamp/tree/main/03-orchestration/3.1 MLOps Zoomcamp: github.com/DataTalksClub/mlops-zoomcamp Sign up fo...
MLOps Zoomcamp - 3.2.7 Training: Hyperparameter XGBoost
909 views · 3 months ago
Join Mage CEO and Co-founder, Tommy Dang, as he goes over training sklearn models and XGBoost. He'll cover: -Reusable training set data product -Training pipeline for sklearn models -Training pipeline for XGBoost -Tracking training metrics with experiments and much more! Github: github.com/DataTalksClub/mlops-zoomcamp/tree/main/03-orchestration/3.2 MLOps Zoomcamp: github.com/DataTalksClub/mlops...
MLOps Zoomcamp - 3.1.5 Data preparation: Prepare chart
1.4K views · 3 months ago
Join Mage CEO and Co-founder, Tommy Dang, as he goes over data preparation for ETL and feature engineering. He'll cover: -Ingest raw data -Prepare data for training -Build training sets -Data validations using built-in testing framework and much more! Github: github.com/DataTalksClub/mlops-zoomcamp/tree/main/03-orchestration/3.1 MLOps Zoomcamp: github.com/DataTalksClub/mlops-zoomcamp Sign up fo...
MLOps Zoomcamp - 3.5.4 Deploying: Deployed
455 views · 3 months ago
Join Mage CEO and Co-founder, Tommy Dang, as he goes over deploying and running operations in production. He'll cover: -Permissions -Deploy -Continuous deployment and integration and much more! Github: github.com/DataTalksClub/mlops-zoomcamp/tree/main/03-orchestration/3.5 MLOps Zoomcamp: github.com/DataTalksClub/mlops-zoomcamp Sign up for the FREE course: airtable.com/appYdhA23GVZd1iN2/shrCb8y6...
MLOps Zoomcamp - 3.5.3 Deploying: CI/CD
638 views · 3 months ago
MLOps Zoomcamp - 3.5.2 Deploying: Deploy
618 views · 3 months ago
MLOps Zoomcamp - 3.5.1 Deploying: Setup email
536 views · 3 months ago
MLOps Zoomcamp - 3.4.8 Triggering: API
461 views · 3 months ago
MLOps Zoomcamp - 3.3.2 Observability: XGBoost part 1
676 views · 3 months ago
MLOps Zoomcamp - 3.3.4 Observability: Overview
553 views · 3 months ago
MLOps Zoomcamp - 3.4.7 Triggering: Interactions run
359 views · 3 months ago
MLOps Zoomcamp - 3.4.6 Triggering: Interactions
407 views · 3 months ago
MLOps Zoomcamp - 3.4.5 Triggering: Inference notebook
449 views · 3 months ago
MLOps Zoomcamp - 3.4.4 Triggering: GDP
202 views · 3 months ago
MLOps Zoomcamp - 3.4.3 Triggering: Predict
451 views · 3 months ago
MLOps Zoomcamp - 3.4.2 Triggering: Trigger
447 views · 3 months ago
MLOps Zoomcamp - 3.4.1 Triggering: Retraining pipeline
529 views · 3 months ago
MLOps Zoomcamp - 3.3.10 Observability: Email
423 views · 3 months ago
MLOps Zoomcamp - 3.3.9 Observability: Setup alerts
463 views · 3 months ago

Comments

  • @stefanelbl4365 · 15 days ago

    Did I miss something? The triggers came out of nowhere

  • @mahmoodhonarvar3520 · 15 days ago

    I have the Issue with the connection string!

  • @kakekjayus4634 · 20 days ago

    your head covers the dependency tree :(

  • @ZD341 · 1 month ago

    Where did you copy your connection string from in the export phase at 0:57 in the video?

  • @itsantbrown · 1 month ago

    Is this an unreleased version? I don’t have AI pipeline capabilities in my Mage deployment

    • @mage_ai · 1 month ago

      Hello! This video is specific to the LLM Zoomcamp course. We preview the RAG pipelines that will be available on Mage Pro, the managed version of Mage. We are currently in private beta. We'll have more information soon on how to get signed-up and gain early access!

    • @mage_ai · 1 month ago

      You can sign-up for the private beta here: airtable.com/appm350RTn8FfirIX/pagGDSVTti4gzVvNg/form

  • @effimangouajeanmermoz6973 · 1 month ago

    Thank you for sharing!

  • @WhenslyNADY · 1 month ago

    Hi! Can somebody help me solve this error: "magic-1 exited with code 132", please?

  • @anibara · 2 months ago

    Great product. Will definitely check it out and see if this could be integrated in my current work.

  • @GiovanneAfonso · 2 months ago

    is it possible (and a good practice) to create like 400 blocks?

  • @renanpereira5487 · 2 months ago

    Matt, would you say that this would be the most efficient way to implement dependency between two mage data pipelines? I mean, I need to schedule a pipeline in mage to run after another mage data pipeline. What I think is that, in my first pipeline, in the last block, I could run an API call to trigger the second pipeline. This seems like a super handy tool in mage for that purpose.
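The pattern described in this comment, having the last block of one pipeline call the API trigger of another, might look like the sketch below. The URL shape matches Mage's API-trigger endpoint, but the host, schedule ID, and run token are placeholders you would copy from your own trigger's detail page rather than constructing by hand:

```python
import json
from urllib import request

def build_trigger_url(host, schedule_id, token):
    # Shape of the endpoint shown on a Mage API trigger's page;
    # in practice, copy the exact URL the UI generates for you.
    return f"https://{host}/api/pipeline_schedules/{schedule_id}/pipeline_runs/{token}"

def trigger_pipeline(url, variables=None):
    # POST to the trigger endpoint, optionally passing runtime
    # variables to the downstream pipeline run.
    body = json.dumps({"pipeline_run": {"variables": variables or {}}}).encode()
    req = request.Request(url, data=body, headers={"Content-Type": "application/json"})
    return request.urlopen(req)

if __name__ == "__main__":
    # All three arguments are placeholders.
    url = build_trigger_url("your-mage-host", 42, "your-run-token")
    # trigger_pipeline(url, {"run_date": "2024-08-02"})
```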

  • @renanpereira5487 · 2 months ago

    OMG, that was exactly what I needed. Thanks, Matt.

  • @cken27 · 2 months ago

    For people who experience the error "NotFound: 404 Not found: Dataset {project_id}:{schema_name} was not found in location US Location: US": my conclusion is that I don't think Mage by itself has ever had the ability to create the cloud resource on the fly. This video just skipped the creation part and misled us, at 4:15, into thinking that Mage would create the BigQuery dataset for us, but there was no before-and-after comparison to verify this claim. It is likely that Matt just created this dataset manually beforehand. I spent an hour checking the videos, Slack messages and FAQs for this error. Notice that, even for GCS, we have to manually create the bucket in the early section of the "2.2.4 Configuring GCP" video before we upload anything to it. Similarly, we should create the BigQuery dataset manually.

    • @cken27 · 2 months ago

      Note that if you re-used the same GCP project from week 1 here, you might have a dataset created in terraform video 1.3.1 to 1.3.3, in which you would only need to rename the dataset in BigQuery or in the export block, instead of creating one manually.

    • @GeraDeluxer · 1 month ago

      In summary, just create the ny_taxi dataset by right-clicking on your BigQuery resource and selecting 'create dataset', then try again.

    • @trandy6673 · 1 month ago

      Thank you man. You saved me an hour here

  • @jonathanduran2921 · 2 months ago

    Ben's face while Tommy was tweaking during the first 30 seconds is priceless.

  • @ibrahim555 · 2 months ago

    custom sink in Streaming pipeline 💪💪

    • @mage_ai · 2 months ago

      🙌💪

  • @mloeper · 2 months ago

    Thanks for this very instructive series Matt! Very enjoyable! Hope you keep on doing such a great job at Replit. All the best!

  • @mloeper · 2 months ago

    Great stuff! Just a quick note: I think the UI changed a bit since the video was created. In my case, I found the button to enable split view under the "View" menu in the top left menu.

  • @mloeper · 2 months ago

    Is this still the default versioning mode or has it been replaced by the git versioning app? I cannot find the view that is shown in this video anywhere.

  • @renanpereira5487 · 3 months ago

    Hi Matt, can you drop the link on how to whitelist the desired IPs?

  • @tommydang7682 · 3 months ago

    There are a lot of features in this release! A lot of hidden ones too 😉

  • @Pavel0793 · 3 months ago

    Thanks for the lectures! 0:08 First, why do we initialize the GDP block based on the data_preparation pipeline while we are building the new sklearn_training pipeline? Secondly, launching the GDP block returns an error, AttributeError: 'NoneType' object has no attribute 'to_dict', in the trigger_and_check_status function:

        @safe_db_query
        135 def trigger_and_check_status(
        136     global_data_product: GlobalDataProduct,
        (...)
        149     remote_blocks: List[Union[Dict, RemoteBlock]] = None,
        150 ):
        151     tags = merge_dict(logging_tags, dict(
        152         block_uuid=block.uuid if block else None,
        --> 153     global_data_product=global_data_product.to_dict(),
        154         round_number=round_number,
        155     ))

    • @isabellevea8237 · 3 months ago

      I am getting the same error when loading the training set from Global Data Product

    • @basilbatov · 3 months ago

      @@isabellevea8237 Because the 3.2.1 video is missing from this playlist...

  • @vladimirzhelanov853 · 3 months ago

    I'm not trying to be mean and I appreciate your effort, but I want to leave an honest review. In this video I've seen a musical intro and outro (about 10 secs, which I personally don't understand) and an explanation of how to create files in the Mage text editor. It's a little bit uninformative, considering that the title mentions "data preparation". Again, I mean no disrespect; I'm just drawing your attention to what seems obvious and strange to me.

    • @mage_ai · 3 months ago

      Thank you so much for your feedback! The intro is simply to give context for the video and lesson, while the outro was to place links to the previous or next videos in the sequence. We apologize, as the end-credit links did not appear until after your comment. In regards to the video content itself, we apologize if it did not meet your expectations. We tried to create bite-size pieces, but with so much to explore and present, we may not have covered everything intended. We will learn from this and keep your input in mind for future lessons. Thanks again for your honest review.

    • @andrew_ng · 3 months ago

      @@mage_ai Actually, I prefer bite-sized videos so I don't need to search across a 90-minute video. Appreciate the efforts. Peace

  • @vladimirzhelanov853 · 3 months ago

    Hey there, What I noticed is that when some exception is thrown (let's say, `TypeError: Cannot cast DatetimeArray to dtype float64`) on my attempt to build a time series chart, the UI doesn't show anything. So I as a user must go to the logs and figure out what went wrong

    • @tommydang7682 · 3 months ago

      Hello, thank you for reporting. Will fix.

  • @mahmoodhonarvar3520 · 3 months ago

    Hey, thank you. I have a question about the line X_val.shape[0] == 51492 within the @test. The number 51492 could vary from one dataset to another, so in that situation wouldn't the test fail?

    • @Doshybot · 3 months ago

      In this scenario, they are using a particular past month's data from a particular dataset, so it would not change. Thus if the number's wrong, something's up. But yeah, in a real world scenario there'd probably be ranges of acceptable values instead of a hardcoded one.
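Following up on this thread: one way to make such a check robust to data drift is to assert a range instead of an exact count, as the reply suggests. A minimal sketch of a range-based test (the bounds here are made up, not from the course):

```python
def check_row_count(n_rows, lo=40_000, hi=70_000):
    # Accept any plausible row count rather than pinning one
    # month's exact number; bounds are illustrative.
    assert lo <= n_rows <= hi, f"row count {n_rows} outside [{lo}, {hi}]"
    return True
```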

  • @Santiego · 3 months ago

    wtf xD

  • @FunTechies2023 · 3 months ago

    How are backfills different from the scheduler in a Mage data pipeline? Please answer me.

  • @brae1088 · 3 months ago

    What am I doing wrong? I downloaded the JSON file and moved it to my magezoomcamp folder, but within Mage and the test pipeline it says file not found... I literally copied its path, how is that possible?

  • @Santiego · 3 months ago

    ConnectionError: ('Connection aborted.', TimeoutError('The write operation timed out')) :/

  • @JAVIERHERNANDEZ-wp6qj · 3 months ago

    Excellent section!!! Thank you Matt and DTC

  • @FunTechies2023 · 4 months ago

    Hello team, as I am unable to prepare the branching mentioned in the video, can you please update this video according to version v0.9.70?

  • @tundeademusire1896 · 4 months ago

    When I tried to export the data into BigQuery using either SQL or Python data exporter, I got the error, "NotFound: 404 Not found: Dataset<project_id>:<schema_name> was not found in location US. Location: US Job ID: <job_id>" I'll appreciate any help on how to resolve this. PS: I selected US multi-region when I created my bucket. My GOOGLE_LOCATION in my io_config.yaml file is US. My default code region in BigQuery is set to us-central1 (Iowa) (there's no option to select only US)

    • @ProfFloke · 4 months ago

      I had the same issue. I tried manually creating ny_taxi dataset from <project_id> in bigquery and set region to 'US' and it worked fine for me. hope this helps.

    • @tundeademusire1896 · 4 months ago

      @@ProfFloke Thank you. This worked.

    • @anthonynjuguna5269 · 4 months ago

      @@ProfFloke This was helpful. Thanks

    • @chen23saki · 3 months ago

      @@ProfFloke thank you so much for the input

    • @UmakanthPasunuru · 2 months ago

      @@ProfFloke Thank you, this worked.

  • @mitchedmunds · 4 months ago

    The 30MB file size for the NY taxi parquet file seems really low for 1 million+ rows - is there an issue here? It also says that the file type is "text/plain". When I try to load it in the next video it is saying there is no data. Can you confirm?

    • @maruthikonjeti4572 · 21 days ago

      That's because of the format. If it were CSV it would have been much larger; since it was saved as .parquet, it's much smaller.

  • @tnbao91 · 4 months ago

    can you demo a streaming pipeline with clickhouse sink?

  • @tommydang7682 · 4 months ago

    This will be used in fallout shelters... to return magic into the world!

  • @TrezeguetDrax · 4 months ago

    Why do they use "{{ }}" for the Postgres configuration? I am a beginner, so I don't know exactly what happens there.

    • @farididahlan · 4 months ago

      I think it is for reading the .env that we have

    • @farididahlan · 4 months ago

      That's from the Docker environment.

    • @Datasciencewithsheddy · 4 months ago

      It's Jinja template syntax.
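The replies have it right: Mage renders io_config.yaml through Jinja, and `{{ env_var('...') }}` pulls values from the container's environment variables, which docker compose can populate from a .env file. A typical fragment; treat the exact variable names as assumptions about the course setup:

```yaml
dev:
  POSTGRES_DBNAME: "{{ env_var('POSTGRES_DBNAME') }}"
  POSTGRES_SCHEMA: "{{ env_var('POSTGRES_SCHEMA') }}"
  POSTGRES_USER: "{{ env_var('POSTGRES_USER') }}"
  POSTGRES_PASSWORD: "{{ env_var('POSTGRES_PASSWORD') }}"
  POSTGRES_HOST: "{{ env_var('POSTGRES_HOST') }}"
  POSTGRES_PORT: "{{ env_var('POSTGRES_PORT') }}"
```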

  • @muhammadzakifuadi7869 · 4 months ago

    Excuse me, I have an error, "Error loading project configuration". How do I fix this? Thanks

  • @raulcatacoragrundy5334 · 4 months ago

    Can we all have an example with pyarrow? This would be great for projects in the DTC course, and personally it would really help as I cannot make it work... plz plz

  • @KrichkornOparad · 5 months ago

    This feature saves me when I need to securely transfer multiple media files from one location to another (SCP), especially when the files are spread across different directories and across the network. Typically, one would have to loop through each file, but secure copy (SCP) operations can fail for various reasons, and dealing with exceptions can be error-prone. An alternative approach would be to use an API trigger (like the one provided by Mage AI) to receive POST requests for each file that needs to be securely copied. However, this method can be highly resource-intensive, particularly when dealing with thousands of pipeline runs. Dynamic block, which allows me to manage source and destination specifications in a DataFrame and then dynamically execute SCP operations without having to handle exceptions manually (thanks to the "retry incomplete block" feature), is a game-changer. Not only does it save me from the hassle of exception handling, but it also keeps computational resources in check by avoiding unnecessary pipeline triggers. Thank you for your world-class project. Keep up the excellent work.

    • @tommydang7682 · 5 months ago

      Thank you so much!!! Your support means everything to us!

  • @artetl · 5 months ago

    Looks amazing! Jupyter notebooks + airflow on steroids))

  • @Machiavelli698 · 5 months ago

    Thanks Matt! What a great introduction to Mage AI!

  • @sayeekris9225 · 5 months ago

    Hi, I tried to pull the NY taxi data from the website and it all appears to be in Parquet. When I try using read_parquet the kernel crashes, any suggestions? Also, how do you parse dates when reading Parquet?

  • @johnsmith-np5vy · 5 months ago

    Hello, I modified the file io_config.yaml as instructed. Then I tried to create an SQL data loader and got the error message below:

        while parsing a block mapping
          in "<unicode string>", line 137, column 3:
            POSTGRES_CONNECT_TIMEOUT: 10
            ^
        expected <block end>, but found '<scalar>'
          in "<unicode string>", line 138, column 22:
            POSTGRES_DBNAME: ""postgres""
                             ^

    The full traceback shows yaml.safe_load raising yaml.parser.ParserError from mage_ai/api/resources/DataProviderResource.py while parsing the rendered io_config.yaml. I've also submitted the error on the "bugs-trouble-shoot-questions" Slack channel. Can you please help me?
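The parse error in this comment comes from the doubled quotes: YAML reads `""postgres""` as the empty string `""` followed by a stray scalar, which is exactly the "expected <block end>, but found '<scalar>'" message. A sketch of the fix in io_config.yaml:

```yaml
# Broken: YAML sees an empty string "" followed by a stray token
# POSTGRES_DBNAME: ""postgres""

# Fixed: a single pair of quotes (or no quotes at all)
POSTGRES_DBNAME: "postgres"
```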

  • @johnsmith-np5vy · 5 months ago

    The instructor's video hides the bottom-left corner of the slides 😞

  • @Sanaki131 · 5 months ago

    Wanted to get familiar with Mage some time ago, after using airflow for a while. Great Intro! Will the code be available for all of the videos in the DTC course?

  • @bobrob6456 · 5 months ago

    Thanks for the demo, is there an API to check the status of an API triggered pipeline other than the one that returns the logs?

  • @matei_woold_wewu · 6 months ago

    0:11 I deleted Discord XD XD XD

  • @user-es2re7jo1z · 6 months ago

    Can we run dbt with mage locally?

    • @mage_ai · 6 months ago

      Yes, you can run dbt with Mage locally by using Docker. Here is a guide to get you started: docs.mage.ai/getting-started/setup

    • @mage_ai · 6 months ago

      If you have any other questions, you can join our community slack for live support: www.mage.ai/chat

  • @foreverbored2501 · 6 months ago

    Question: You're using docker to run an image of Mage. Within Mage you're setting up this ETL pipeline to get things uploaded to GCP. If you kill the Docker container where you're running Mage, won't your Mage files (i.e., your ETL pipeline) be lost? So you'd have to rewrite them again if you want to do the exact same thing right? How do you circumvent this?

    • @mitchedmunds · 4 months ago

      You get around this by mounting a volume - this is done in the docker.compose.yml file where in the "magic" service we map the current directory to /home/src/ (`.:/home/src/`). You'll notice that when you change files via the Mage GUI, the changes are actually reflected in your local file system too, since when you mount a volume it keeps things in sync. The next time you do `docker compose up`, it will remount the same local directory (as long as you run from that directory, since "." is relative) and all the files you had before will be there.
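The volume mapping this reply describes looks roughly like the fragment below in docker-compose.yml (service name and paths as stated in the reply; adjust to your own setup):

```yaml
services:
  magic:
    # ...
    volumes:
      # Keep pipeline files on the host so they survive container restarts
      - .:/home/src/
```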

  • @ErickCalderin · 6 months ago

    I refactored the transform section this way:

        cond = data['passenger_count'] > 0
        print(f'Rides with zero passengers: {len(data[~cond])}')
        return data[cond]

  • @pygeekrfoo820 · 6 months ago

    There's a lot to be desired in this last section on deployment.