Polylith - the last architecture you will ever need by Joakim Tengstrand and Furkan Bayraktar

  • Published on Nov 11, 2024

Comments • 12

  • @saurabhmehta7681 • 7 months ago

    Great project!

  • @awerlogus • 3 years ago

    Looks good, but I still have some questions
    1) You said that this architecture is language agnostic. Why does it then assume that the program will be built before running? That's not obligatory on some platforms, Node.js for example.
    2) How do you manage external dependencies for multiple projects on such non-buildable platforms? For example, I have project1 and project2. Project1 depends on a very heavy module written in C++. When I want to deploy project2, I do a git clone on my prod machine, run npm install, and this heavy module gets installed automatically only to sit unused at runtime. To solve it I could keep two kinds of dependency lists: dependencies for the whole workspace and dependencies per project to install on prod, but those would have to be kept synchronized manually. Sure, I could start using a JS bundler like webpack, but that makes the file paths and position info in error call stacks inconsistent and also slows down local manual testing and deployment. Do you know any clean way to solve this needless-dependency-install problem?
    3) I also use an event-driven programming approach, and each project has its own file with a list of event subscriptions. What is that in terms of the architecture? What about config files? Type declarations?
    4) Is it really appropriate to share components containing project-specific domain logic that cannot be reused? If I have two projects, frontend and backend, should I make the repositories that store some server state in memory shared? I think it would just make code navigation harder, because folders with different projects' domain logic get mixed together. I'd organize the workspace like this: create 'Components', 'Library', etc. folders for each project, start writing code inside a project, and then, if I see that some logic could be reused in another project, refactor that code and move it to the top level. In the short term this may help us refrain from overengineering, and the long-term goal is to keep each project's code as small as possible by reusing as much logic as possible. What do you think?
    I will be grateful for the answers.

    • @tengstrand • 3 years ago

      Hi and thanks for your interest in Polylith. I will try to answer your questions.
      1) A project groups the code into one "codebase", but you are not forced to build an artifact out of it. You may just want to copy it somewhere or use it directly.
      2) If I understand you right, you get cross-contamination with dependencies across projects, and to remove that contamination you are forced to do manual dependency synchronization. In Polylith, you can share components across projects, but each project can also have its own dependencies that are totally isolated from other projects.
      3) If you have code, configuration, or libraries that are only used in one of the projects, you can just include that in that specific project, and it will not "contaminate" the other projects.
      4) When you start a new Polylith workspace, you only have the development project and no components or bases. Then you add your first component, and it will only be used from the development project. After a while you will have a handful of components, and you may decide to create a project and use them from there. At this point the components are only used by one "real" project, if we don't count development. And yes, it's perfectly fine to divide your code into components even though they are only used by one project, because components are not only a good way of sharing code (they are actually already shared between two projects, development and the other project), they also make the code easier to reason about and test. At this point you don't know whether any of these components will be used in other projects in the future. Remember that you can also create production-like projects, e.g. for testing, that include special components with "fake" implementations of some interfaces, which means that you may still want to reuse components you thought only made sense in that first project (such a project will look almost exactly like the other project, except for one or a few components). Polylith is very flexible, and when you start to use it you will realize that you can easily compose new projects by combining existing bricks (components/bases). In our experience it's not a problem that a component is only used in one or a few projects. It's like having a bucket of Lego bricks where you keep all your libraries and bricks, ready to be used by any project in your workspace.
      We have kept frontend code and backend code separated so far, even in separate repositories, but it should be fine to keep them in the same monorepo too if you think that is better, as long as you keep them in separate directories (at least that is what I would suggest). Our reasoning so far has been that it didn't make sense to mix frontend and backend, because most code can't be shared between them. If you have code that can be shared, that should be possible to solve as well.
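
To make point 2 of the reply above concrete: in the Clojure implementation of Polylith, each project has its own deps.edn that lists only the bricks and libraries that project needs, so a heavy dependency declared by one project never ends up in another project's build or deployment. The following is only a sketch; the component names and library coordinates are invented:

```clojure
;; projects/project1/deps.edn
{:deps {poly/user       {:local/root "../../components/user"}
        poly/heavy-calc {:local/root "../../components/heavy-calc"}
        ;; the heavy dependency is declared only in this project
        org.example/heavy-native-lib {:mvn/version "1.2.3"}}}

;; projects/project2/deps.edn
{:deps {poly/user {:local/root "../../components/user"}}}
;; project2 never references heavy-calc or the heavy library,
;; so deploying it installs neither.
```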

  • @tagama • 2 years ago

    What if you need to change an interface and other teams are not prepared to update? Is it any different from other program-to-interfaces best practices? If yes, how so? Thanks!

    • @CalvaTV • 1 year ago

      Not an expert on Polylith, but I'd say there is no difference there. In the Clojure world (you may or may not be a Clojurian) the general approach is to never, ever break your callers. So you keep old interfaces, including their implementations, and add new interfaces, either in a separate namespace or just as `same-namespace.Foo2()`. Polylith makes this easier than it would be if the interfaces weren't as clearly cut out. When the old interface has no users any longer, you can delete it and rename the new one if that makes sense.
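
In the Clojure flavour of Polylith, a component exposes its functions through an interface namespace that delegates to an implementation namespace, which makes the "keep the old, add the new" approach above quite mechanical. A hypothetical sketch (component, namespaces, and function names are all invented):

```clojure
;; components/invoicing/src/com/example/invoicing/core.clj
(ns com.example.invoicing.core)

(defn create-invoice
  "Original implementation, left untouched."
  [customer-id line-items]
  {:customer-id customer-id :line-items line-items :currency "EUR"})

(defn create-invoice-2
  "New implementation with a different calling convention."
  [{:keys [customer-id line-items currency] :or {currency "EUR"}}]
  {:customer-id customer-id :line-items line-items :currency currency})

;; components/invoicing/src/com/example/invoicing/interface.clj
(ns com.example.invoicing.interface
  (:require [com.example.invoicing.core :as core]))

;; The old entry point stays exactly as it was, so existing callers never break.
(defn create-invoice [customer-id line-items]
  (core/create-invoice customer-id line-items))

;; The new entry point lives alongside it. Once no project calls
;; create-invoice any more, it can be deleted and create-invoice-2 renamed.
(defn create-invoice-2 [opts]
  (core/create-invoice-2 opts))
```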

  • @Oliver0909 • 3 years ago

    Awesome!

  • @blacknick3931 • 1 year ago

    So Polylith is just an architecture methodology that works bottom-up? Instead of setting up a whole bunch of projects at the start, something like a three-layer architecture, we write the components and bases first and then compose them into a project when needed, plus configuration for deployment.

  •  3 years ago

    I can't yet picture how to make a single change in a single component and how that change is
    - propagated to the actual running projects
    - rolled back, if that component doesn't have a version number, just a name.
    The thing is, as I understand it a project refers to a component by name, so there is no way for a project to say it wants a specific older version of a component. If you change the component because work in another project prompted it, then this project and all the other projects get the change. Am I right?
    A good example of this is your "optimizing" example, where you extract the file-transfer logic from the gcp-storage component into its own component (gcp-transfer). But at that moment your "backend" project suddenly forgot how to transfer a file. That is what I'm understanding, which obviously IS a misunderstanding. So please help me: what is happening there?

    •  3 years ago

      @Joakim Tengstrand Thank you very much! I'm getting there. So... one artifact inside the projects directory refers somehow to a component, let's call it A. Then I change component A a bit. Does that artifact get the change?
      I think that is the behaviour, because if you check 16:20 you can see the deps.edn refers to the components by name/src. There is no version. That is the only thing that concerns me. That's why I'm wondering how you do a "rollback" or point to an older version of a component.
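
As far as I can tell from the Clojure tooling, the answer to the version question is that a project references a brick's source (by path), not a versioned artifact, so every project that includes the brick picks up the current source the next time it is built; a "rollback" is therefore a git operation (reverting the commit that changed the component), not a version pin. And when gcp-transfer is extracted out of gcp-storage, the backend project's deps.edn also gains a reference to the new brick, which is why the project does not "forget" how to transfer files. A hypothetical sketch, with invented paths:

```clojure
;; projects/backend/deps.edn (sketch, not the file shown in the video)
{:deps {poly/gcp-storage  {:local/root "../../components/gcp-storage"}
        ;; added when the transfer logic was extracted into its own brick
        poly/gcp-transfer {:local/root "../../components/gcp-transfer"}}}
;; No version coordinates: the project always builds against the current
;; source of these components, and rolling back means reverting in git.
```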

  • @MaximBazhenov • 3 years ago

    So it's a monorepo with "components" plus interfaces designating their borders, "bases" exposing system APIs, a single development-time deps.edn file, and several *.edn files, one per deployable artifact. OK. The last architecture I will ever need? I doubt it. For a small team it's perfect. For an enterprise with dozens or hundreds of ~10-person teams, I don't think so. The base idea is a monorepo plus several deps files, and the monorepo is its primary bottleneck.

    • @tengstrand • 3 years ago

      The monorepo is one important piece, but all pieces are needed to give you all the benefits, e.g. the separation between development and production. If you have hundreds of developers, there is nothing stopping you from having many monorepos. In the FAQ (polylith.gitbook.io/polylith/conclusion/faq) I have summarised it like this:
      interface: Enables functionality to be replaced in projects/artifacts.
      component: The way we package reusable functionality.
      base: Enables a public API to be replaced in projects/artifacts.
      library: Enables global reuse of functionality.
      project: Enables us to pick and choose what functionality to include in the final artifact.
      development: Enables us to work with all our code from one place.
      workspace: Keeps the whole codebase in sync. The standardized naming and directory structure is an example of convention over configuration, which enables incremental testing/builds and tooling to be built around Polylith.
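
To connect these pieces: a Clojure Polylith workspace typically follows a standard layout (components/, bases/, projects/, development/, and a workspace.edn at the root), and a base is the brick that owns a public API while reaching components only through their interface namespaces. A minimal, invented sketch of such a base, reusing the hypothetical invoicing component from the earlier sketch:

```clojure
;; bases/cli/src/com/example/cli/core.clj
(ns com.example.cli.core
  (:require [com.example.invoicing.interface :as invoicing])
  (:gen-class))

(defn -main
  "Public entry point exposed by this base; each project's deps.edn decides
  whether this base, and which components, end up in the final artifact."
  [& [customer-id & line-items]]
  (println (invoicing/create-invoice customer-id (vec line-items))))
```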

    • @it_is_ni • 3 years ago

      Aren’t Google and Facebook running giant monorepos? (Not saying they’re good examples, just that it doesn’t _need_ to be a barrier.)