How would you do this without using Docker?
The most useful video, with a lot of interesting information.
Alex, at min 25 you go fetch the MinIO IP address. You can actually replace that with the docker-compose hostname for your MinIO service; Docker will figure out what the container's IP address is all by itself.
That's what I usually do, but for some reason in Spark, in this type of setup, it never resolves right, even though the other containers do. I always end up getting URL-not-resolved errors unless I pass the IP address for MinIO, and only in PySpark. With Dremio and other tools the service name resolves fine, at least on my PC.
Oh wait, this is the Flink video. I had the same issue here: using the service name from the docker-compose file wasn't resolving right in Flink, which is why I had to manually get the IP.
@Dremio Looking at your Docker container, I see a service name and a container name; I'm specifically referring to the hostname variable. But then I've not tried this with PySpark...
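For anyone debugging this, a quick way to check whether the compose service name actually resolves from inside the Flink (or PySpark) container is a few lines of Python. This is just a sketch; "minio" and "nessie" are assumed docker-compose service names, so swap in whatever your compose file uses:

```python
import socket

# Run this inside the container, e.g. via `docker exec -it <container> python3`.
# "minio" and "nessie" are assumed service names from the compose file.
for name in ("minio", "nessie"):
    try:
        print(name, "->", socket.gethostbyname(name))
    except socket.gaierror as err:
        print(name, "does not resolve:", err)
```

If the name resolves here but the job still fails, the problem is likely in the framework's S3 client config rather than Docker's DNS.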
Thank you for your effort! I have faced a problem: the JDK version in the image is 11, but the jar file was compiled with JDK 18 as you instructed, so now I get a Java runtime error when trying to submit the job to Flink. The error says something like: class file version 62.0, but this Java runtime (meaning the Flink job manager's) only recognizes class file versions up to 55.0. How can I change the JDK version used by the image?
In the blog for this exercise I should have a note on how to address this; it requires a small change in the app configs -> www.dremio.com/blog/using-flink-with-apache-iceberg-and-nessie/
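For context on those numbers: a class file's major version is the Java version plus 44, so 62 is Java 18 and 55 is Java 11, which is exactly the mismatch described above. If you want to verify which Java version a jar was actually compiled for, here is a small sketch; the file path is hypothetical (extract any .class file from your job jar first):

```python
import struct

def class_file_version(path):
    """Read a class file's version; major 55 = Java 11, 62 = Java 18."""
    with open(path, "rb") as f:
        magic, minor, major = struct.unpack(">IHH", f.read(8))
    if magic != 0xCAFEBABE:
        raise ValueError(f"{path} is not a Java class file")
    return major, minor

# Hypothetical file: pull any .class out of the jar first,
# e.g. `unzip your-job.jar SomeClass.class`.
print(class_file_version("SomeClass.class"))
```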
Hi there, thanks for the awesome video! Any reason why s3.endpoint was set to an IP address rather than a hostname when creating the catalog? I found the hostname style also works in the demo with s3.path-style-access=true.
I think it was just my particular environment at the time; I kept running into an issue with the hostname in my Docker environment, so I just used the IP to be safe.
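For anyone trying the hostname style, here is a minimal PyFlink sketch of what the catalog creation might look like. The service names ("minio", "nessie"), the "warehouse" bucket, and the ports are assumptions based on a typical compose setup for this stack, and credentials are assumed to come from the usual AWS environment variables:

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# "minio" and "nessie" are assumed docker-compose service names and
# "warehouse" an assumed bucket name; adjust to your own setup.
# s3.path-style-access=true lets S3FileIO use http://minio:9000/bucket/key
# instead of virtual-hosted-style URLs, which MinIO needs.
env.execute_sql("""
    CREATE CATALOG iceberg WITH (
        'type' = 'iceberg',
        'catalog-impl' = 'org.apache.iceberg.nessie.NessieCatalog',
        'uri' = 'http://nessie:19120/api/v1',
        'warehouse' = 's3://warehouse',
        'io-impl' = 'org.apache.iceberg.aws.s3.S3FileIO',
        's3.endpoint' = 'http://minio:9000',
        's3.path-style-access' = 'true'
    )
""")
```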