Can Chain of Thought make any LLM smarter? Llama 3.2 and Anthropic Claude Sonnet 3.5 comparison

Comments • 4

  • @deucebigs9860 · 3 months ago · +1

    Thanks! Interesting strategies to get it to count that I'll keep in mind for the future.

  • @99dynasty · 3 months ago

    The type of language you give a smaller model for chain of thought would have to be researched and worded appropriately. It's highly likely you could get a very similar increase in performance if you understood the exact limitations of that model.

  • @fguerraz · 3 months ago · +1

    I mean no offence, but your prompt was not formulated in correct English for Llama, so maybe that's why it didn't do great.
    "Count the number of the letter r" is not a great formulation.

    • @StanislavKhromov · 3 months ago · +1

      Hi, thanks for watching and for the comment. Even if we phrase it in more correct English (e.g. "How many times does the letter 'r' appear in the word 'strawberry'?"), neither Llama 3.1 nor Sonnet can answer it correctly. For example, Llama 3.1 8b told me: "Let's count! The word 'strawberry' contains two occurrences of the letter 'r'."
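
      For reference, the correct answer is trivially computable; a minimal Python sketch (my own illustration, not from the video) shows why the models' answer of two is wrong:

      ```python
      # Count occurrences of 'r' in "strawberry" deterministically.
      word = "strawberry"
      count = word.count("r")  # 'r' appears at indices 2, 7, and 8
      print(count)  # prints 3, not the 2 that the models reported
      ```

      This is also why letter-counting is often cited as a task better delegated to a tool call than answered by the model directly: the tokenizer splits words into multi-character chunks, so the model never "sees" individual letters.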