Challenges in Augmenting Large Language Models with Private Data

  • Published 14 Jan 2025

Comments • 3

  • @deeliciousplum · 7 months ago

    While only 7 minutes in, I am being shown that LLMs of all ilk are sponging up beginner users' private information (more seasoned users would know not to share it) while those users employ server-side as well as locally hosted LLMs to aid them in their coding projects. As far as I know, there is no one to hold accountable if an LLM has pilfered a user's private data, data which may also include family members and everyone in their contact lists if those were used in one of the user's coding projects. There ought to be clear, safe, and ethical protocols, including a feature that lets users disable an LLM from gathering sensitive or private data. It is already next to impossible to press Google, Facebook, or other social media sites to remove content a user deems private. Imagine trying to find a human being who can be directed to delete the private data that LLMs, now integral parts of our browsers, operating systems, and coding tools, have pilfered during everyday use. Why do we rapidly roll out tech that is not ready to be used safely? Sigh.