The best content about gradle for a long time
thanks for the concise and clear explanation! appreciate the content :)
Hello Jendrik. Thank you for such great content about Gradle!
I am wondering - is it possible to resolve all the dependencies from all "configurations" with one single command? In other words: is there an alternative to `npm install`/`bundle install`/`go mod download` (which exist in the NodeJS/Ruby/Go build tools respectively)? I miss that command when I experiment with "dockerized" builds (I am aware of buildpacks and jib, but sometimes I need a pure Docker/docker-compose setup for troubleshooting). Unfortunately, it seems like `gradlew :dependencies` downloads only the pom files (which is enough to build the graph). I am looking for a command that would download everything at once, which would allow further dockerized steps to make no remote calls and work much faster in "offline mode".
Hi Kirill. Thank you. That's a good question.
You are right that ":dependencies" only looks at the metadata. It's a common point of confusion that the dependency resolution is done in two phases: (1) looking at metadata and resolving conflicts (2) downloading the actual artifacts (if needed). I plan to do another video on that topic at some point.
To your question: There is no built-in task in Gradle that "resolves all configurations". I think this is one of the topics where no consensus has been reached in the community yet on what the best solution is. You probably also do not need to resolve "all" configurations but only the ones that you actually use (which would be "*RuntimeClasspath" and "*CompileClasspath" in a standard Java project). Two solutions I would probably use:
1. Run a build that does everything the real build does in Dry Run mode: *gradle build --dry-run* As this computes the inputs of all tasks, it also resolves the dependencies. Unfortunately, --dry-run is broken for composite builds (github.com/gradle/gradle/issues/2517).
2. Write your own task that does the resolving. It should be enough to have an "empty" task that has the configurations it needs as input. E.g.:
tasks.register("resolveAll") {
    configurations.forEach {
        if (it.isCanBeResolved) { // or just selected configurations
            inputs.files(it)
        }
    }
    doLast { } // Have at least one action to trigger the download
}
great playlist. best all-in-one gradle crash course.
one thing: at 2:55 you can see a code snippet with the "dependencyResolutionManagement" block and an "includeBuild(String)" call within it. But "includeBuild" is not available on a "DependencyResolutionManagement" instance; it is available on a "Settings" instance. Therefore: is it necessary to call "includeBuild(String)" INSIDE the "dependencyResolutionManagement" block?
Thanks!
You are correct. You do *not* need to put it inside 'dependencyResolutionManagement {}'. The reason I did it is that I find it clearer to distinguish between 'repositories & included builds' for your main build and 'repositories & included builds' for plugins.
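As a sketch, that distinction could look like this in a settings.gradle.kts (the build names here are made up):

```kotlin
// settings.gradle.kts (sketch; build names are made up)
pluginManagement {
    repositories { gradlePluginPortal() } // repositories for plugins
    includeBuild("build-logic")           // included build that provides plugins
}
dependencyResolutionManagement {
    repositories { mavenCentral() }       // repositories for the main build
}
includeBuild("my-library")                // included build for the main build (top level)
```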
This is nice, and I guess it beats Gradle's doc page "Declaring Dependencies" by far. Short videos are hard; however, I wonder if you have tried a top-down approach. For instance: "in Java, you need a runtime classpath, so why doesn't Gradle allow configuring runtimeClasspath directly? If that were possible, then we would need to duplicate a slightly different declaration for the compilation classpath, so Gradle factors the common parts out into a set of disjoint buckets and then builds the runtime and compile classpaths from the relevant bits".
Sounds like something that could work well. I probably did not think about that, because I wanted to start with what you write in your build. That is, starting with the example in the IDE and then explaining what's happening behind the scenes next. But yes - that's more bottom up.
I guess I am also hesitant to start with too strong a Java use case and talk too much about Java as motivation in the beginning. The struggle is that Java is "only" an example, as I also want to convey that the general concept works independently of notions like "runtimeClasspath" or "implementation". You can use Gradle for something completely different if you do not apply the Java Library plugin but instead define your own 'Configurations'.
Not sure if this is clear from the video in the end though. :)
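To illustrate that last point, here is a sketch of a build that applies no Java plugin at all and still declares and resolves its own Configuration (the configuration name and coordinates are made up):

```kotlin
// build.gradle.kts - sketch; no Java plugin applied, names and coordinates are made up
val asciidoc: Configuration by configurations.creating {
    isCanBeConsumed = false
    isCanBeResolved = true // this configuration is only used to resolve dependencies
}

repositories { mavenCentral() }

dependencies {
    asciidoc("org.asciidoctor:asciidoctorj:2.5.11")
}

tasks.register("printAsciidocClasspath") {
    inputs.files(asciidoc) // resolving the configuration downloads the artifacts
    doLast { asciidoc.forEach { println(it.name) } }
}
```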
@@jjohannes Well, you can scale it: "here's how you declare dependencies in Java", "here's how you declare dependencies for asciidoc presentations", etc. I do not mean generic descriptions are always worse. What I mean is that specific examples from well-known ecosystems do help people understand how the feature works and "why it is so complex".
I am not a hardcore Java or Kotlin developer. But why did you package all your `.class` files into a `.zip`? Why not package all the `.jar` files together with the consumer code into a `.jar`? Can it still be run if it is inside a `.zip` archive file?
That's a large question. :) I haven't really done anything on this topic in my videos. But it is an interesting topic, which is now on my list for future episodes!
I think there was always the idea that a *JAR* is a ready-to-use software component that you can plug into your software without modifying it. It contains, for example, all meta information (like license information) and may be signed. These are all interesting aspects if you use 3rd party components (like open source components from Maven Central).
With the Java Module System (see th-cam.com/video/X9u1taDwLSA/w-d-xo.html) this idea became even stronger. Now each Jar has meta-information (module-info.class) that is actively used by the Java runtime. With this in mind, it seems natural that all Jars that make up an application are delivered as they are.
Without the Java Module System, however, it is also a common approach to re-package everything into one *JAR*. In Gradle, the Shadow Plugin (imperceptiblethoughts.com/shadow) is quite popular; it offers some convenient functionality to make sure no important metadata is lost when combining Jars. And many framework-specific plugins - like the ones for Spring Boot or Micronaut - also come with their own packaging support. But in the end, there are many options to do this, and my impression is that there is no "one right way" to package a Java-based application at the moment.
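A minimal Shadow setup could look like this (the plugin version is an assumption - check the plugin portal for the current one):

```kotlin
// build.gradle.kts - sketch; plugin version is an assumption
plugins {
    java
    id("com.github.johnrengelman.shadow") version "8.1.1"
}

tasks.shadowJar {
    mergeServiceFiles() // merge META-INF/services descriptors instead of overwriting them
}
```

Running './gradlew shadowJar' then produces one combined Jar in build/libs.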
With the Java Module System, the Java tools themselves contain the *jlink* and *jpackage* (docs.oracle.com/en/java/javase/21/jpackage/packaging-overview.html) commands. These create a package (basically again a ZIP, or an installer) which contains a complete application for a certain operating system - including the Java runtime. I think this is where things are heading in the long run.
@@jjohannes so basically, if my code requires 2 or more dependencies that I use in the source code, they come in the form of 2 separate .jar files.
If it needs to be packaged for release distribution to the user, it needs to be repackaged into a single .jar file - a "fat jar", if I am not mistaken. But the manifest file is important too; it cannot be lost, because it specifies the package name of that dependency. If the two jars need to be ripped apart and merged together again, the package name will change.
But for now I think I want to learn how to write the `run` task for debugging. Otherwise I have to specify the `classpath` for the JVM manually. Only recently did I learn that the consumer code, i.e. the .class files that contain the main class, can be linked directly by using the class path parameter - meaning it does not have to be in a .jar file. And only recently did I learn that the `application` Gradle plugin is useful for setting the `Main-Class` attribute of the .jar file.
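For reference, the 'application' plugin setup for getting a `run` task is small (the main class name here is made up):

```kotlin
// build.gradle.kts - sketch; the main class name is made up
plugins {
    application
}

application {
    mainClass.set("com.example.Main")
}

// './gradlew run' assembles the runtime classpath and starts the JVM for you;
// './gradlew installDist' creates start scripts that set the classpath
```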
Is it possible to have custom dependency resolution in Gradle?
For example, I can specify a file name and Gradle will download it from some external source like gdrive
It is indirectly possible. You can use an 'ivy' repository. It allows you to freely define how the coordinates you use in a dependency declaration map to the location of a file in the 'repository'. With this, you can make an arbitrary URL look like a repository to Gradle.
Then the download will go through Gradle's dependency management and the file will be cached. Since usually the file is not versioned, you can control how long to cache the file using 'isChanging = true' dependencies and 'resolutionStrategy.cacheChangingModulesFor(...)'.
Here is a full example: github.com/jjohannes/gradle-demos/blob/main/custom-repository/build.gradle.kts
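The core of that example is an Ivy repository with a custom pattern layout. As a sketch (the URL and coordinates here are made up), it could look like this:

```kotlin
// build.gradle.kts - sketch; URL and coordinates are made up
plugins { java }

repositories {
    ivy {
        url = uri("https://example.com/files") // made-up base URL
        patternLayout {
            artifact("[module]-[revision].[ext]") // how coordinates map to the file path
        }
        metadataSources { artifact() } // no metadata files, just the artifact itself
    }
}

dependencies {
    implementation("downloads:some-file:1.0@zip") { // resolves to .../some-file-1.0.zip
        isChanging = true // re-check the remote file from time to time
    }
}

configurations.all {
    resolutionStrategy.cacheChangingModulesFor(24, "hours")
}
```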
@@jjohannes cool! And thanks for creating the demo! I will try it over the weekend
Already tried the demo and it worked like a charm! In your demo though, there's a custom configuration - could you please make a video about that? Thank you so much
QQ: can we listen for when Gradle will download the file?
@@EsaFirman Creating a custom Configuration is covered in the video '#13 - Aggregating Custom Artifacts'. At 2m43s I start talking about creating a configuration that is used for resolving, which is what is done in the custom repo example as well - just that it is much simpler, because we only download one artifact/file and do not care about multiple variants. th-cam.com/video/2gPJD0mAres/w-d-xo.htmlm43s