How To Run DALL-E Mini/Mega On Your Own PC (Windows)

  • Published on Jul 27, 2024
  • DALL-E Mini has become very popular online. However, that popularity means the public websites that host it are often too overloaded to use.
    This video goes over how to run the model on your own PC; all you need is WSL 2 and Docker. I use Windows 11 and two RTX 3090s for faster generation. (A rough command sketch follows the description.)
    Github: github.com/borisdayma/dalle-mini
    HuggingFace: huggingface.co/spaces/dalle-m...
    Maintainer Twitter: / borisdayma
    Discord: / discord
    Donations(if you want):
    Ethereum:
    4CE913643909Fa3168297cC2857C0aDdAB389Ad8
    Monero: 45mqN96o5JZZhwVTZHHiQJeyhp3WndiPC44hdrAmWeDGeCmaC1c45gTGh5eDUtEhx3JDGbsAnsD3VBXKdiorUhusUydLG22
    Timestamps
    00:00 - Intro
    00:17 - Requirements
    00:55 - WSL Installation
    02:45 - Installing Docker Desktop
    03:48 - Cloning The Repo
    04:55 - Building Docker Image
    05:20 - Running Docker Image
    06:03 - Selecting GPUs To Use
    07:20 - Running The Notebook
    10:31 - Outro
  • Science & Technology
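
    A rough sketch of the commands behind those timestamps, for reference while watching. The exact build command and working directories are assumptions pieced together from the repo layout and the comments below; the video itself is the authoritative walkthrough.

        # In an administrator PowerShell: install WSL 2 with the default Ubuntu distro, then reboot
        wsl --install

        # Install Docker Desktop for Windows and enable its WSL 2 backend in Settings.
        # A recent NVIDIA driver on the Windows side is enough; no CUDA install inside WSL is needed.

        # Inside the WSL Ubuntu shell: clone the repo and build the image
        # (the dalle-mini:latest tag is what run_docker_image.sh expects, per the comments below)
        git clone https://github.com/borisdayma/dalle-mini.git
        cd dalle-mini/Docker
        docker build -t dalle-mini:latest .

        # Back in the repo root, start the container; GPU selection (06:03) happens around this step
        cd ..
        ./run_docker_image.sh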

Comments • 96

  • @Brillibits
    @Brillibits  2 years ago +15

    Some corrections:
    1. First I said you need Windows 11 for using GPUs. I have been told that is wrong and that Windows 10 works as well. Make sure you are still on the latest version of either, though (a quick check is sketched below this comment).
    2. I misspoke and said 32MB when I meant 32GB. This isn't the 80s lol.
    For a video that goes over more details on the model see here: th-cam.com/video/eWpzLIa6v9E/w-d-xo.html&ab_channel=Blake
    Discord: discord.gg/F7pjXfVJwZ
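
    For anyone unsure whether their Windows 10/11 install is recent enough, the stock WSL commands below (available on current builds, not something specific to this video) report and update the WSL side:

        wsl --status    # shows the default WSL version (should be 2) and the kernel version
        wsl --update    # pulls the latest WSL kernel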

  • @mickymacanori1768
    @mickymacanori1768 2 years ago +7

    😎 tfw I configured this for my Windows before this Windows video dropped.
    Once again, thank you so much for your commitment and effort, Blake. You have opened my eyes to AI; a few days ago I was just generating silly little images on the Hugging Face website. Thank you so much.

    • @Brillibits
      @Brillibits  2 years ago +1

      Congrats on that! There are a few tricky differences for sure.
      Thanks for watching!

  • @eternalreturnal
    @eternalreturnal 2 years ago

    I want to thank you for explaining how to do this. It also taught me the proper way of using Docker in WSL, which just opened up the possibility of installing other AI models on Windows! I used to try to install NVIDIA drivers and Docker inside WSL itself, to no avail, which ended with me simply installing Ubuntu on a separate drive... Thanks a lot!

  • @PunmasterSTP
    @PunmasterSTP 2 years ago

    This was an excellent tutorial. Thank you so much for making it!

  • @barrettvelker198
    @barrettvelker198 2 years ago

    Super helpful. I'd never heard of wsl before.

    • @Brillibits
      @Brillibits  2 years ago

      It has some issues, so I still use Ubuntu over it, but it's useful and getting better every day.

  • @user-ue6iv2rd1n
    @user-ue6iv2rd1n 2 years ago +5

    I'm not dedicated enough to set this up for memes...

    • @Brillibits
      @Brillibits  2 years ago +1

      lol thats fair

  • @ergoleski
    @ergoleski 2 years ago +4

    It keeps erroring out on the build step, saying it can't find a JAX CUDA distribution.

  • @RediceRyan
    @RediceRyan 2 years ago +3

    If it takes 20 seconds on two 3090s to generate an image, then for DALL-E 2, which is 16 times the resolution and more complex, do you think it would be possible in 16 to 128 times as long? Or is it probably impossible due to RAM limitations?

  • @KirBirger
    @KirBirger 2 years ago +7

    I can't seem to build the docker image. I get the following error:
    No matching distribution found for jaxlib==0.3.10+cuda11.cudnn82 (from jax[cuda])
    Google seems to suggest that Windows isn't supported here. I'm running this from a WSL 2 instance of Ubuntu 20.04. My Docker is set to use WSL as the backend

    • @FuchsDanin
      @FuchsDanin 2 years ago

      github.com/borisdayma/dalle-mini/issues/260#issuecomment-1158185934

    • @Nerfyy
      @Nerfyy 2 years ago

      did you ever find a fix? this is driving me nuts here

    • @its.juhnny
      @its.juhnny 2 years ago

      I am getting this error as well - probably a change in dependency somewhere along the way

  • @alexanderzaman7801
    @alexanderzaman7801 2 years ago

    Hi Blake, thanks for posting this. I got all the way to setting up the Jupyter notebook and got the Eiffel tower and mountain to load, but for some reason, when I try to change the prompt's value to something new, it just keeps rendering the same thing.
    Any idea what I might be doing wrong?
    I'm using a GTX 1070 (8GB) and the mini model

  • @spirocorbett3839
    @spirocorbett3839 1 year ago +3

    Some of us are having a hard time following these instructions. If it's possible to have step-by-step written instructions on how to get this fine program running, I would be very grateful. I don't have the apps or any of the software you've mentioned (nor do I know what they do or how to acquire them), but I have all the hardware requirements. Sorry if this sounds stupid :(

  • @austindimalanta324
    @austindimalanta324 1 year ago +1

    After I built and ran the docker image, my workspace is empty, so I can't access /tools/inference. ls doesn't show anything in the workspace directory. Any ideas why?

  • @Crocodile909
    @Crocodile909 2 years ago

    For some reason, when I get to the point where I'm actually opening the webpage, it takes me to a login screen instead of the first screen you see. It says the token should work to log in, but I just keep getting an error saying invalid credentials. Also, the first link it provides just doesn't work at all.

  • @mauricenr2969
    @mauricenr2969 2 years ago

    @Blake I wanted to try to run this on a Windows 11 laptop with 4 GB of VRAM. When you said you got it working on a CPU, what does that mean? I really would want this to run even if it took a long time!

  • @jekksit
    @jekksit 1 year ago

    Confirmed working on a single 3080 Ti with 16GB VRAM and 32GB RAM. Thanks! But is there any way to change the aspect ratio of generated images?

  • @niharikkatyagi4089
    @niharikkatyagi4089 2 years ago

    After I successfully installed WSL, it is not showing me the installed version.
    It shows the message "Install Windows Subsystem for Linux features. If no options are specified, the recommended features will be installed along with the default distribution.
    To view the default distribution as well as a list of other valid distributions, use 'wsl --list --online'..."
    What do I do?
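
    That output appears to be just the help text for wsl --install, which older WSL builds print when the features are already enabled rather than installing anything. To see what is actually installed and which version each distro runs under, the standard commands are:

        wsl --list --verbose          # lists installed distros and whether each runs under WSL 1 or 2
        wsl --set-default-version 2   # make newly installed distros default to WSL 2
        wsl --install -d Ubuntu       # install Ubuntu if the list above comes back empty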

  • @bsouroartes
    @bsouroartes 2 years ago

    I just can't find the field to put the WANDB API key in. Can you please comment about that? Thanks a lot

  • @phenixis
    @phenixis 5 months ago

    Hi, this video was made years ago so I wasn't hoping for an answer... but I get this error at the 5:58 step:
    exec /opt/nvidia/nvidia_entrypoint.sh: exec format error
    Can you help me? ^^

  • @goblinphreak2132
    @goblinphreak2132 2 years ago +2

    the input device is not a TTY. If you are using mintty, try prefixing the command with 'winpty'
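
    For anyone hitting the same message: it usually means docker run -it was launched from Git Bash / mintty, which doesn't provide a real TTY. Two common workarounds (general Docker-on-Windows advice, not from the video):

        # 1) run the script from PowerShell or the WSL Ubuntu shell instead of Git Bash, or
        # 2) when stuck with mintty, prefix the docker command inside run_docker_image.sh with winpty:
        winpty docker run -it ...    # '...' stands for whatever arguments the script already passes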

  • @sadrootbeer
    @sadrootbeer 2 years ago +2

    How do you re-run the program at a later time without having to re-download all those files on the Jupyter page? I tried not to run them, but it kept telling me the variables weren't defined and forced me to re-download like 9 GB of files again

    • @Brillibits
      @Brillibits  2 years ago

      The video I mentioned I'd make will go over that and more.
      Docker commit is the answer.
      After running everything, do "docker ps" and note the ID. Then "docker commit id dalle-mini:latest" (sketched after this thread).
      I tried to have each video cover one topic and not be much longer than 10 minutes if possible.

    • @buntwogarde6583
      @buntwogarde6583 2 years ago

      thanks for asking this, was about to have to ask myself
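
    A minimal sketch of the docker commit flow from the reply above, assuming the container is still running and the weights were downloaded inside it:

        docker ps                                        # note the CONTAINER ID of the running dalle-mini container
        docker commit <container_id> dalle-mini:latest   # snapshot it, downloaded models included
        ./run_docker_image.sh                            # later runs start from the committed image, no re-download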

  • @zachroush3816
    @zachroush3816 2 years ago

    Can I do this without GPUs? I just have a laptop with a CPU

  • @Kimlovecrona
    @Kimlovecrona 2 years ago +2

    been waiting

    • @Brillibits
      @Brillibits  2 years ago

      hope it was worth the wait :)

  • @MyTomServo
    @MyTomServo 2 years ago

    Thanks for the help, I got the thing to work. I was having trouble when trying to use other models from the dalle-mini project, though.
    What changes would I have to make to run the mega_full model?
    I think you change mega-1-fp16:latest to mega-1:latest, and I had to run it as administrator, so I got past that part, but at the end
    it gives me this error:
    ValueError: pmap got inconsistent sizes for array axes to be mapped:
    the tree of axis sizes is:
    ({'attention_mask': 1, 'attention_mask_uncond': 1, 'input_ids': 1, 'input_ids_uncond': 1}, 1, {'lm_head': {'kernel': 2048}, 'model': {'decoder': {'embed_positions': {'embedding': 256}, 'embed_tokens': {'embedding': 16416}, 'final_ln': {'bias': 2048}, 'layernorm_embedding': {'bias': 2048, 'scale': 2048}, 'layers': {'FlaxBartDecoderLayers': {'FlaxBartAttention_0': {'k_proj': {'kernel': 24}, 'out_proj': {'kernel': 24}, 'q_proj': {'kernel': 24}, 'v_proj': {'kernel': 24}}, 'FlaxBartAttention_1': {'k_proj': {'kernel': 24}, 'out_proj': {'kernel': 24}, 'q_proj': {'kernel': 24}, 'v_proj': {'kernel': 24}}, 'GLU_0': {'Dense_0': {'kernel': 24}, 'Dense_1': {'kernel': 24}, 'Dense_2': {'kernel': 24}, 'LayerNorm_0': {'bias': 24}, 'LayerNorm_1': {'bias': 24}}, 'LayerNorm_0': {'bias': 24}, 'LayerNorm_1': {'bias': 24, 'scale': 24}, 'LayerNorm_2': {'bias': 24}, 'LayerNorm_3': {'bias': 24, 'scale': 24}}}}, 'encoder': {'embed_positions': {'embedding': 64}, 'embed_tokens': {'embedding': 50272}, 'final_ln': {'bias': 2048}, 'layernorm_embedding': {'bias': 2048, 'scale': 2048}, 'layers': {'FlaxBartEncoderLayers': {'FlaxBartAttention_0': {'k_proj': {'kernel': 24}, 'out_proj': {'kernel': 24}, 'q_proj': {'kernel': 24}, 'v_proj': {'kernel': 24}}, 'GLU_0': {'Dense_0': {'kernel': 24}, 'Dense_1': {'kernel': 24}, 'Dense_2': {'kernel': 24}, 'LayerNorm_0': {'bias': 24}, 'LayerNorm_1': {'bias': 24}}, 'LayerNorm_0': {'bias': 24}, 'LayerNorm_1': {'bias': 24, 'scale': 24}}}}}})
    This error comes from the line:
    encoded_images = p_generate(
    tokenized_prompt,
    shard_prng_key(subkey),
    params,
    gen_top_k,
    gen_top_p,
    temperature,
    cond_scale,
    )

    • @MyTomServo
      @MyTomServo 2 years ago

      Also this was printed out
      File /usr/local/lib/python3.8/dist-packages/jax/_src/api.py:1547, in _mapped_axis_size(tree, vals, dims, name, kws)
      1545 sizes = [x.shape[d] if d is not None else None for x, d in zip(vals, dims)]
      1546 sizes = tree_unflatten(tree, sizes)
      -> 1547 raise ValueError(msg.format(f"the tree of axis sizes is:
      {sizes}")) from None

  • @Kyledude97
    @Kyledude97 2 years ago +1

    So I was able to get to the notebook with the last video, however I got stuck on the first step as it wouldn’t install due to “versioneer not having a version”.

    • @Brillibits
      @Brillibits  2 years ago +1

      Make sure you have the latest version of Windows. Feel free to join the discord for more help.

  • @eternalreturnal
    @eternalreturnal 2 years ago +2

    jax is not detecting my GPU apparently. inside the notebook I get the error:
    WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
    Inside the container I can run nvidia-smi and nvcc --version and they definitely detect my GPU and CUDA though... Any ideas? Thanks

    • @Brillibits
      @Brillibits  2 years ago +1

      Sounds like CUDA support was not installed. Maybe the container image changed on the repo... (a quick check is sketched after this thread)

    • @FartButtPizza1234567
      @FartButtPizza1234567 2 years ago +1

      I'm running into the same exact issue.

    • @LuckyFlowery
      @LuckyFlowery 2 years ago

      Me too. It happened on Windows 10 with an RTX 3060
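
    A quick way to tell whether the problem is the container's CUDA setup or JAX itself (a diagnostic sketch, not from the video), run inside the container:

        nvidia-smi                                       # is the GPU visible to the container at all?
        python3 -c "import jax; print(jax.devices())"    # should list GPU devices, not just a CPU device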

  • @xDevoneyx
    @xDevoneyx 2 years ago

    Thanks for this walkthrough. It seems all is running fine up till the Jupyter notebook step "# Load models & tokenizer". It asked for the token, which I pasted in, and then nothing. No output, no feedback, also not in the PowerShell window in which the inference_pipe.ipynb was started. The "In [*]:" shows. In the video it shows at least that it is going to download some huge files. Am I missing some setting, such that I do not get to see the output? Chrome or Edge makes no difference.
    I installed bmon. (sudo apt update | sudo apt install bmon | bmon) and saw no significant incoming or outgoing traffic (

    • @thiagovieirasoares
      @thiagovieirasoares 1 year ago

      Hi friend, please, if it's not a nuisance could you tell me step by step how you fixed it, the lines and everything? I would be very grateful. Apparently my problem is the same as yours;

  • @hunt4642
    @hunt4642 2 years ago +9

    Whenever I try to build it I get an error saying "Could not find a version that satisfies the requirement jaxlib==0.3.10+cuda11.cudnn82". The only way I was able to build it was by removing "cuda" from jax[cuda] in the Dockerfile. I am on Windows 10

    • @derkdottv
      @derkdottv 2 years ago +2

      Same here - The devs said on GitHub that "They do not plan on supporting Windows Releases" and left it at that. I tried a ton of work-arounds to no avail.

    • @reddaugherty
      @reddaugherty 2 years ago +1

      running into the same issue here- I wonder if there is any workaround.

    • @FuchsDanin
      @FuchsDanin 2 years ago

      github.com/borisdayma/dalle-mini/issues/260#issuecomment-1158185934

    • @FuchsDanin
      @FuchsDanin 2 years ago +2

      Known issue; the workaround is to hardcode versions into the Dockerfile before building. That's issue #260 on the GitHub repo, linked above in case the link doesn't show/work.
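
    For reference, the fix described in that issue (and spelled out further down in this comment section) is pointing the Dockerfile's jax[cuda] install at the CUDA wheel index. A sketch, assuming the Dockerfile still contains the old jax_releases.html URL:

        cd ~/dalle-mini
        sed -i 's|jax_releases.html|jax_cuda_releases.html|' Docker/Dockerfile
        docker build -t dalle-mini:latest Docker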

  • @Andee...
    @Andee... 2 years ago +1

    Unfortunately the GitHub version of DALL-E Mini/Mega isn't the same as the one on Hugging Face.
    So you can't do, for example, "xQc" or other internet culture stuff, sadly.
    I spent like 4 days trying to make it work properly, just to figure that out after managing to do so. :/

    • @Brillibits
      @Brillibits  2 years ago

      I thought DALL-E Mega was the one on HuggingFace? Could you show me where to read otherwise?

    • @Andee...
      @Andee... 2 years ago +1

      @@Brillibits It is dall-e mega, but it's not been trained on the same imageset from what I've gathered. I haven't read about it, just experimented personally. Just tried the same prompt on both versions and got very different results. xQc on huggingface becomes xQc, on the github version it's just text or random shapes. :/

    • @eternalreturnal
      @eternalreturnal 2 years ago

      @@Andee... I've noticed the results are different as well. Maybe it's the Mega full model? Dalle-E Playground seems to have this one and I can tell you it does the pop culture stuff. I've tried a prompt like "character x as a guest star in Friends" on both Mega and Mega Full and only Full understands what Friends, the show, is.

  • @piotr.kaczmarski
    @piotr.kaczmarski 1 year ago

    8:19 I get this:
    jaxlib version 0.4.13 is newer than and incompatible with jax version 0.3.25. Please update your jax and/or jaxlib packages.

  • @LauLauHip
    @LauLauHip 1 year ago

    How long does it take to download the model? I've been sitting here for... Idek how long.. Prob an hour

    • @LauLauHip
      @LauLauHip 1 year ago

      Also it only shows the star, no output after typing the api key in

  • @Kimlovecrona
    @Kimlovecrona 2 years ago +4

    I get an error with process 3 (installing jax)

    • @Brillibits
      @Brillibits  2 years ago +1

      During building the docker image or what?

    • @0xhayleydev
      @0xhayleydev 2 years ago

      @@Brillibits yep. I get the same thing:
      #6 11.65 ERROR: No matching distribution found for jaxlib==0.3.10+cuda11.cudnn82 (from jax[cuda])

    • @0xhayleydev
      @0xhayleydev 2 years ago +3

      I have solved the issue and created PR #261

    • @antunnel
      @antunnel 2 years ago +2

      @@0xhayleydev How did you solve it? Running into the same issue here.
      Edit - I realized you literally said you posted pull request 261 on the project's git. I followed that and was able to run the installation without a flaw. Thanks!

    • @randommofo123
      @randommofo123 2 years ago

      @@0xhayleydev I tried it and I'm getting WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.) on the 3rd part of the Jupyter notebook even though my GPU shows up in the nvidia-smi part.

  • @bogdanoff1723
    @bogdanoff1723 2 years ago +1

    Nice tutorial. Did you mean 32 GB of RAM? I believe you said 32 MB

    • @Brillibits
      @Brillibits  2 years ago +2

      If I said MB I definitely meant GB, my bad.

  • @xbon1
    @xbon1 1 year ago

    do u really need WSL for this? is there just some pytorch way to do this for dall-e mega?

    • @Brillibits
      @Brillibits  1 year ago

      This requires JAX with CUDA, which isn't even supported on Windows. The alternative is to build it from source, so this is much easier.

    • @xbon1
      @xbon1 1 year ago

      @@Brillibits CUDA is supported on Windows though, Pytorch can utilize it perfectly fine.

    • @mushmello526
      @mushmello526 1 year ago

      @@xbon1 But Jax is not supported on Windows

  • @barrettvelker198
    @barrettvelker198 2 years ago +1

    I've been looking for training scripts for DALL-E Mini. Maybe something that runs on Colab. Does somebody know of a repo or tutorial working on this?

  • @wizpizz
    @wizpizz 2 years ago +1

    Unfortunately I cannot run the mega model on my 3050 Ti, but mini works fine

    • @Brillibits
      @Brillibits  2 years ago

      CPU may still work

    • @xDevoneyx
      @xDevoneyx 2 years ago

      Is there a known limitation? I am just trying to install it all with an RTX 2080 Ti with 11 GB of RAM.

  • @zachroush3816
    @zachroush3816 2 years ago

    I have a laptop with only a CPU and 8 GB of RAM. Is it still possible to run the mini version?

    • @FuchsDanin
      @FuchsDanin 2 years ago +1

      It's possible CPU-only. I ran mine on an i7-8660k w/ 32 GB RAM. It took several minutes and ~21 GB of RAM. With 8 GB, not possible. You can run them on Google Colab for free though!

  • @bmrobotik828
    @bmrobotik828 2 years ago

    Hi, how can I run using CPU only?

    • @FuchsDanin
      @FuchsDanin 2 years ago

      24 GB+ of RAM and five-plus minutes per run on a recent high-end CPU; replace jax[cuda] with jax[cpu] in the Dockerfile (sketched after this thread).

    • @bmrobotik828
      @bmrobotik828 2 years ago

      @@FuchsDanin Thanks. I have a high-end system but no dGPU :(

    • @bubbadoes6225
      @bubbadoes6225 1 year ago

      @@FuchsDanin Where do you make this change? Running windows 10
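
    A sketch of that CPU-only change (the Dockerfile location is assumed from the repo layout mentioned elsewhere in the comments):

        cd ~/dalle-mini
        sed -i 's/jax\[cuda\]/jax[cpu]/' Docker/Dockerfile   # swap the CUDA build of JAX for the CPU-only one
        docker build -t dalle-mini:latest Docker
        ./run_docker_image.sh   # if this script passes a --gpus flag, that may also need removing on a GPU-less machine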

  • @wortexmkd
    @wortexmkd 2 years ago +1

    25 seconds per image on a single 1080Ti

    • @Brillibits
      @Brillibits  2 years ago

      Thanks for sharing

  • @hourglassflipper
    @hourglassflipper 1 year ago

    Hmm, I got dall-e playground to work with its docker image, but this one gives me a "ValueError: pmap got inconsistent sizes for array axes to be mapped" error at the code block where the images are. Anyone found and fixed this issue? Either way, I appreciate the fact that you made this video.
    "File /usr/local/lib/python3.8/dist-packages/jax/_src/api.py:1652, in _mapped_axis_size(tree, vals, dims, name, kws)
    1650 sizes = [x.shape[d] if d is not None else None for x, d in zip(vals, dims)]
    1651 sizes = tree_unflatten(tree, sizes)
    -> 1652 raise ValueError(msg.format(f"the tree of axis sizes is:
    {sizes}")) from None
    ValueError: pmap got inconsistent sizes for array axes to be mapped:
    the tree of axis sizes is:
    ..."

  • @arx_angel
    @arx_angel 1 year ago

    I get the following under the progress bar when running everything, and progress stays at zero percent. I'm guessing something has changed between June and today.
    EDIT: Killing the kernel shows results? I feel like the warnings are messing things up. Is there a way to suppress them?
    /usr/local/lib/python3.8/dist-packages/flax/core/lift.py:112: FutureWarning: jax.tree_flatten is deprecated, and will be removed in a future release. Use jax.tree_util.tree_flatten instead.
    scopes, treedef = jax.tree_flatten(scope_tree)
    /usr/local/lib/python3.8/dist-packages/flax/core/lift.py:729: FutureWarning: jax.tree_leaves is deprecated, and will be removed in a future release. Use jax.tree_util.tree_leaves instead.
    lengths = set(jax.tree_leaves(lengths))
    /usr/local/lib/python3.8/dist-packages/flax/core/axes_scan.py:134: FutureWarning: jax.tree_flatten is deprecated, and will be removed in a future release. Use jax.tree_util.tree_flatten instead.
    in_avals, in_tree = jax.tree_flatten(input_avals)
    /usr/local/lib/python3.8/dist-packages/flax/linen/transforms.py:249: FutureWarning: jax.tree_leaves is deprecated, and will be removed in a future release. Use jax.tree_util.tree_leaves instead.
    jax.tree_leaves(tree)))
    /usr/local/lib/python3.8/dist-packages/flax/core/axes_scan.py:146: FutureWarning: jax.tree_unflatten is deprecated, and will be removed in a future release. Use jax.tree_util.tree_unflatten instead.
    broadcast_in, constants_out = jax.tree_unflatten(out_tree(), out_flat)
    /usr/local/lib/python3.8/dist-packages/jax/_src/ops/scatter.py:87: FutureWarning: scatter inputs have incompatible types: cannot safely cast value from dtype=float16 to dtype=float32. In future JAX releases this will result in an error.
    warnings.warn("scatter inputs have incompatible types: cannot safely cast "

  • @dr.robotnikgaming9218
    @dr.robotnikgaming9218 2 years ago

    The terminal is bugging out for me when running the docker image; when I type in ./run_docker_image.sh I get this error:
    Unable to find image 'dalle-mini:latest' locally
    docker: Error response from daemon: pull access denied for dalle-mini, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
    please help asap, I'm an impatient guy

    • @aethernets9442
      @aethernets9442 2 years ago

      For me I just ran it with sudo ./run_docker_image.sh and it "worked", but then I get "docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: signal: segmentation fault, stdout: , stderr:: unknown."

    • @dr.robotnikgaming9218
      @dr.robotnikgaming9218 2 years ago

      @@aethernets9442 I don't believe you're supposed to put "sudo" anywhere unless that might be your username on your pc (for example mine is "owner" since I got mine factory reset not terribly long ago due to a trojan)

    • @aethernets9442
      @aethernets9442 2 years ago

      @@dr.robotnikgaming9218 From what I know, sudo is just an "admin" thing, like running your command with admin privileges. I'm not super knowledgeable, but I think I remember that from my small time using command lines with Ubuntu

    • @dr.robotnikgaming9218
      @dr.robotnikgaming9218 2 years ago

      @@aethernets9442 I'll have to try and see if that works for me then. I'm starting to lose interest in DALL-E, but I'll give it a shot tomorrow
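
    Two things worth checking when this error shows up (general Docker advice, not specific to this video): the build step must actually have produced an image tagged dalle-mini:latest, and on a plain WSL/Linux Docker setup the user needs to be in the docker group; with Docker Desktop's WSL integration, sudo is normally unnecessary and can even talk to a different daemon.

        docker images | grep dalle-mini   # should list a dalle-mini:latest image; if not, redo the build step
        sudo usermod -aG docker $USER     # then open a new shell so the group change takes effect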

  • @FluorescentApe
    @FluorescentApe 2 years ago +1

    I feel so stupid when trying to do these things. Some commands must be entered in PowerShell and some in Git Bash. I get stuck at 05:00 and there's an error about Jax not finding a distribution. I followed some guide online on how to build it from source, and that on its own had issues. I will paste the error below if someone has a fix for it. Thanks in advance.
    #6 6.788 Downloading typing_extensions-4.2.0-py3-none-any.whl (24 kB)
    #6 6.930 ERROR: Could not find a version that satisfies the requirement jaxlib==0.3.10+cuda11.cudnn82 (from jax[cuda]) (from versions: 0.1.32, 0.1.40, 0.1.41, 0.1.42, 0.1.43, 0.1.44, 0.1.46, 0.1.50, 0.1.51, 0.1.52, 0.1.55, 0.1.56, 0.1.57, 0.1.58, 0.1.59, 0.1.60, 0.1.61, 0.1.62, 0.1.63, 0.1.64, 0.1.65, 0.1.66, 0.1.67, 0.1.68, 0.1.69, 0.1.70, 0.1.71, 0.1.72, 0.1.73, 0.1.74, 0.1.75, 0.1.76, 0.3.0, 0.3.2, 0.3.5, 0.3.7, 0.3.8, 0.3.10)
    #6 6.990 ERROR: No matching distribution found for jaxlib==0.3.10+cuda11.cudnn82 (from jax[cuda])
    #6 ERROR: executor failed running [/bin/sh -c pip install --upgrade "jax[cuda]" -f storage.googleapis.com/jax-releases/jax_releases.html && pip install -q git+github.com/borisdayma/dalle-mini.git git+github.com/patil-suraj/vqgan-jax.git]: exit code: 1
    ------
    > [3/5] RUN pip install --upgrade "jax[cuda]" -f storage.googleapis.com/jax-releases/jax_releases.html && pip install -q git+github.com/borisdayma/dalle-mini.git git+github.com/patil-suraj/vqgan-jax.git:
    ------
    executor failed running [/bin/sh -c pip install --upgrade "jax[cuda]" -f storage.googleapis.com/jax-releases/jax_releases.html && pip install -q git+github.com/borisdayma/dalle-mini.git git+github.com/patil-suraj/vqgan-jax.git]: exit code: 1
    Daniel@DESKTOP-USPJG46 MINGW64 ~/dalle-mini/Docker (main)

    • @FluorescentApe
      @FluorescentApe 2 years ago +1

      OKAY UPDATE: I checked further down in the comments and saw someone mentioning a pull request. In it, they mentioned you have to navigate to ./dalle-mini/Docker and edit the "Dockerfile". Change the jax[cuda] URL from "jax_releases.html" to "jax_cuda_releases.html", save the file, and you can then build the Docker image.

    • @raven3696
      @raven3696 2 years ago +2

      @@FluorescentApe Absolute champ, thanks for that

    • @kintustis
      @kintustis 2 years ago +1

      life saver. thank you

  • @doubleatheman
    @doubleatheman 2 years ago

    {user}@{PC}:~/dalle-mini$ ./run_docker_image.sh
    Unable to find image 'dalle-mini:latest' locally
    docker: Error response from daemon: pull access denied for dalle-mini, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
    See 'docker run --help'. I am stuck on this step
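
    Same symptom as the thread above: "Unable to find image 'dalle-mini:latest' locally" means the image either was never built or was built under a different tag, so Docker falls back to trying to pull it from a registry. A sketch of the usual check (build location assumed):

        cd ~/dalle-mini
        docker build -t dalle-mini:latest Docker   # re-run the build and watch for errors
        docker images dalle-mini                   # confirm the tag exists before calling ./run_docker_image.sh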

  • @kepapi6985
    @kepapi6985 2 years ago

    I keep getting this error message on the very last step; appreciate any help, thanks:
    ValueError Traceback (most recent call last)
    Input In [29], in ()
    11 key, subkey = jax.random.split(key)
    12 # generate images
    ---> 13 encoded_images = p_generate(
    14 tokenized_prompt,
    15 shard_prng_key(subkey),
    16 params,
    17 gen_top_k,
    18 gen_top_p,
    19 temperature,
    20 cond_scale,
    21 )
    22 # remove BOS
    23 encoded_images = encoded_images.sequences[..., 1:]
    [... skipping hidden 4 frame]
    File /usr/local/lib/python3.8/dist-packages/jax/_src/api.py:1632, in _mapped_axis_size(tree, vals, dims, name, kws)
    1630 sizes = [x.shape[d] if d is not None else None for x, d in zip(vals, dims)]
    1631 sizes = tree_unflatten(tree, sizes)
    -> 1632 raise ValueError(msg.format(f"the tree of axis sizes is:
    {sizes}")) from None
    ValueError: pmap got inconsistent sizes for array axes to be mapped:

    • @kepapi6985
      @kepapi6985 2 years ago

      I was able to fix this by manually setting it to my main GPU and setting my MSI Afterburner to the same config I use when I mine crypto.
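
    For anyone who can't reproduce that fix: one possible trigger for "pmap got inconsistent sizes" in this notebook is JAX seeing a different number of GPUs than the inputs are sharded over, which would fit forcing everything onto a single main GPU. A sketch of the simpler test, using the standard CUDA environment variable (not something shown in the video):

        # inside the container, before launching the notebook:
        export CUDA_VISIBLE_DEVICES=0                          # expose only the primary GPU to JAX
        python3 -c "import jax; print(jax.device_count())"     # should now report 1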