Multi-Stream + Multi Neural Network Inferencing on Xilinx Kria SoM-KV260

  • Published on Aug 29, 2024
  • We are focused on making edge-based AI inferencing cost effective, an effort we describe as "revolutionizing AI deployment at the edge". That is why we have built this multi-stream + multi neural network inferencing implementation on the Xilinx Kria KV260 board.
    This type of implementation reduces the cost per stream (CPS) of AI deployments targeting the edge. With it, edge solutions such as smart cities, retail analytics, video analytics, and computer vision can become more cost effective while still delivering good performance in terms of power/wattage and frame rate. A minimal sketch of this style of pipeline is shown after this description.
    LogicTronix is a Certified Partner of Xilinx and a Design Service Partner for the Xilinx Kria SoM for AI/ML. For any development or solutions targeting the edge, you can contact us at info@logictronix.com or ip-sales@logictronix.com!
    #Kria #MultiML #Inferencing #NN #Video #Streams #FPGA #Edge #AI #Acceleration #Xilinx #smartcity #VideoAnalytics #ComputerVision
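
To make the idea concrete, here is a minimal, hypothetical sketch of a multi-stream + multi-model pipeline on the KV260, assuming the Vitis AI runtime (VART) Python API and OpenCV. The model file names, stream URLs, and preprocessing are placeholders and are not taken from the video; the actual LogicTronix implementation may differ (for example, it could instead use a GStreamer/VVAS pipeline).

```python
# Hypothetical sketch: run N video streams, each through its own neural
# network, on the KV260 DPU via the Vitis AI runtime (VART) Python API.
# Model paths and stream URLs below are placeholders, not from the video.
import threading
import cv2
import numpy as np
import vart
import xir


def get_dpu_subgraph(xmodel_path):
    """Load a compiled .xmodel and return its DPU subgraph."""
    graph = xir.Graph.deserialize(xmodel_path)
    subgraphs = graph.get_root_subgraph().toposort_child_subgraph()
    return [s for s in subgraphs
            if s.has_attr("device") and s.get_attr("device").upper() == "DPU"][0]


def stream_worker(stream_src, xmodel_path):
    """One thread per stream: capture frames and run them through one model."""
    runner = vart.Runner.create_runner(get_dpu_subgraph(xmodel_path), "run")
    in_tensor = runner.get_input_tensors()[0]
    out_tensor = runner.get_output_tensors()[0]
    _, height, width, _ = tuple(in_tensor.dims)  # NHWC layout assumed

    cap = cv2.VideoCapture(stream_src)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Placeholder preprocessing: resize only; a real quantized model also
        # needs normalization/scaling that matches how it was trained/quantized.
        resized = cv2.resize(frame, (width, height)).astype(np.int8)
        input_data = [np.expand_dims(resized, axis=0)]
        output_data = [np.empty(tuple(out_tensor.dims), dtype=np.int8)]

        job_id = runner.execute_async(input_data, output_data)
        runner.wait(job_id)
        # Post-processing (decoding detections/classes) would go here.
    cap.release()


if __name__ == "__main__":
    # Placeholder stream/model pairs: different networks on different streams.
    pipelines = [
        ("rtsp://camera1/stream", "yolov3_kv260.xmodel"),
        ("rtsp://camera2/stream", "resnet50_kv260.xmodel"),
    ]
    threads = [threading.Thread(target=stream_worker, args=p) for p in pipelines]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

The design choice sketched here is one thread per stream, each with its own VART runner, letting the DPU time-multiplex the different compiled models; this is one common way to share a single accelerator across several streams and lower the cost per stream.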

Comments • 1

  • @SahaParikshit • 2 years ago

    Please share a Hackster project of this implementation.