Ryo Suzuki
Joined Apr 2, 2015
Assistant Professor of Computer Science
Human-Computer Interaction Researcher
ryosuzuki.org/
x.com/ryosuzk
www.linkedin.com/in/ryosuzuki
[DIS 2024] RealityEffects: Augmenting 3D Volumetric Videos with Object-Centric Annotation and Dynamic Visual Effects
dl.acm.org/doi/10.1145/3643834.3661631
Authors:
Jian Liao, Kevin Van, Zhijie Xia, Ryo Suzuki
Abstract:
This paper introduces RealityEffects, a desktop authoring interface designed for editing and augmenting 3D volumetric videos with object-centric annotations and visual effects. RealityEffects enhances volumetric capture by introducing a novel method for augmenting captured physical motion with embedded, responsive visual effects, referred to as object-centric augmentation. In RealityEffects, users can interactively attach various visual effects to physical objects within the captured 3D scene, enabling these effects to dynamically move and animate in sync with the corresponding physical motion and body movements. The primary contribution of this paper is the development of a taxonomy for such object-centric augmentations, which includes annotated labels, highlighted objects, ghost effects, and trajectory visualization. This taxonomy is informed by an analysis of 120 edited videos featuring object-centric visual effects. The findings from our user study confirm that our direct manipulation techniques lower the barriers to editing and annotating volumetric captures, thereby enhancing interactive and engaging viewing experiences of 3D volumetric videos.
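As a minimal, hypothetical sketch of the object-centric augmentation idea described in the abstract (this is not the RealityEffects implementation; the classes, offsets, and tracking data below are invented for illustration), an annotation can be re-anchored to a tracked object's position on every frame so that it moves in sync with the captured motion:

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    """Per-frame position of a physical object in the captured volumetric scene."""
    name: str
    position: tuple  # (x, y, z) in scene coordinates

@dataclass
class LabelEffect:
    """An annotation label anchored to a tracked object with a fixed offset."""
    text: str
    offset: tuple = (0.0, 0.2, 0.0)  # float the label slightly above the object

    def place(self, obj: TrackedObject) -> tuple:
        # Re-derive the label's world position from the object's pose every
        # frame, so the annotation follows the captured motion.
        return tuple(p + o for p, o in zip(obj.position, self.offset))

# Simulated tracking data for one object across three frames.
frames = [
    TrackedObject("ball", (0.0, 1.0, 0.0)),
    TrackedObject("ball", (0.1, 1.1, 0.0)),
    TrackedObject("ball", (0.2, 1.3, 0.0)),
]

label = LabelEffect("tennis ball")
for frame_index, obj in enumerate(frames):
    print(frame_index, label.place(obj))
```

The same anchoring pattern would presumably extend to the other categories in the taxonomy (highlighted objects, ghost effects, trajectory visualization), differing mainly in what is rendered at the derived positions.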
Views: 118
Videos
[CHI 2024] InflatableBots: Inflatable Shape-Changing Mobile Robots for Large-Scale Encountered-Type Haptics in VR
1.3K views · 5 months ago
ryosuzuki.org/inflatablebots/
Authors:
Ryota Gomi, Ryo Suzuki, Kazuki Takashima, Kazuyuki Fujita, Yoshifumi Kitamura
Abstract:
We introduce InflatableBots, shape-changing inflatable robots for large-scale encountered-type haptics in VR. Unlike traditional inflatable shape displays, ...
[UIST 2023] RealityCanvas: Augmented Reality Sketching for Embedded and Responsive Scribble Animation Effects
1.1K views · 1 year ago
ryosuzuki.org/realitycanvas/
Authors:
Zhijie Xia, Kyzyl Monteiro, Kevin Van, Ryo Suzuki
Abstract:
We introduce RealityCanvas, a mobile AR sketching tool that can easily augment real-world physical motion with responsive hand-drawn animation. Recent research in AR sketching tools has ena...
[UIST 2023] Augmented Math: Authoring AR-Based Explorable Explanations by Augmenting Static Math Textbooks
1K views · 1 year ago
ryosuzuki.org/augmented-math/
Authors:
Neil Chulpongsatorn*, Mille Skovhus Lunding*, Nishan Soni, Ryo Suzuki
Abstract:
We introduce Augmented Math, a machine learning-based approach to authoring AR explorable explanations by augmenting static math textbooks without programming. To augment ...
[UIST 2023] HoloBots: Augmenting Holographic Telepresence with Mobile Robots for Tangible Remote Collaboration in Mixed Reality
1.3K views · 1 year ago
ryosuzuki.org/holobots/
Authors:
Keiichi Ihara, Mehrad Faridan, Ayumi Ichikawa, Ikkaku Kawaguchi, Ryo Suzuki
Abstract:
This paper introduces HoloBots, a mixed reality remote collaboration system that augments holographic telepresence with synchronized mobile robots. Be...
[CHI 2023] Teachable Reality: Prototyping Tangible Augmented Reality with Everyday Objects by Leveraging Interactive Machine Teaching
1.4K views · 1 year ago
ryosuzuki.org/teachable-reality
Authors:
Kyzyl Monteiro, Ritik Vatsal, Neil Chulpongsatorn, Aman Parnami, Ryo Suzuki
Abstract:
This paper introduces Teachable Reality, an augmented reality (AR) prototyping tool for creating interactive tangible AR applications wi...
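As a rough sketch of the interactive machine teaching idea (not the Teachable Reality implementation; the feature vectors, class names, and triggered actions below are made up), a few user-demonstrated examples per object state can train a nearest-centroid classifier whose live predictions drive the AR behavior:

```python
# A nearest-centroid "teachable" classifier: the user demonstrates a few
# labeled examples per object state, and live feature vectors are matched
# to the closest class centroid to trigger an AR response.

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# User-demonstrated examples: tiny feature vectors standing in for
# image embeddings of a physical object in two states.
examples = {
    "book_open":   [[0.9, 0.1], [0.8, 0.2], [0.85, 0.15]],
    "book_closed": [[0.1, 0.9], [0.2, 0.8], [0.15, 0.85]],
}
centroids = {label: centroid(vecs) for label, vecs in examples.items()}

def classify(features):
    return min(centroids, key=lambda label: distance(features, centroids[label]))

# A live frame produces a feature vector; its predicted state drives the AR scene.
state = classify([0.82, 0.18])
if state == "book_open":
    print("show AR overlay on the open page")
else:
    print("hide AR overlay")
```

A real tool would likely classify image embeddings from a pretrained vision model rather than hand-made vectors; the point here is only the few-shot, no-programming workflow that the abstract emphasizes.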
[CHI 2023] ChameleonControl: Teleoperating Real Human Surrogates through Mixed Reality Gestural Guidance for Remote Hands-on Classrooms
1.4K views · 1 year ago
ryosuzuki.org/chameleon-control
Authors:
Mehrad Faridan, Bheesha Kumari, Ryo Suzuki
Abstract:
We present ChameleonControl, a real-human teleoperation system for scalable remote instruction in hands-on classrooms. In contrast to an existing video or AR/VR-based ...
[UIST 2022] RealityTalk: Real-time Speech-driven Augmented Presentation for AR Live Storytelling
1.5K views · 2 years ago
ryosuzuki.org/realitytalk
ilab.ucalgary.ca/realitytalk/
Authors:
Jian Liao, Adnan Karim, Shivesh Jadon, Rubaiat Habib Kazi, Ryo Suzuki
Abstract:
We present RealityTalk, a system that augments real-time live presentations with speech-driven interactive virtual elements. Augmented presentations levera...
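A toy illustration of the speech-driven triggering described above (not RealityTalk's actual pipeline; the keyword-to-element mapping and the transcript are hypothetical): words from a live transcript are matched against presenter-defined keywords, and a match activates the corresponding virtual element.

```python
# Map presenter-defined keywords to the virtual elements they should summon.
# Both the keywords and the element names here are made up for illustration.
keyword_to_element = {
    "rocket": "rocket_model",
    "growth": "line_chart",
    "ocean": "wave_particles",
}

def process_transcript_word(word, active_elements):
    """Check one live-transcribed word and activate the mapped element, if any."""
    element = keyword_to_element.get(word.lower().strip(".,!?"))
    if element and element not in active_elements:
        active_elements.append(element)
        print(f"show '{element}' next to the presenter")

# Simulated real-time speech transcript from the live presentation.
transcript = "Our growth last year was like a rocket taking off".split()

active = []
for word in transcript:
    process_transcript_word(word, active)
```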
[UIST 2022] Sketched Reality: Sketching Bi-Directional Interactions Between Virtual and Physical Worlds with AR and Actuated Tangible UI
1.9K views · 2 years ago
ryosuzuki.org/sketched-reality
Authors:
Hiroki Kaimoto*, Kyzyl Monteiro*, Mehrad Faridan, Jiatong Li, Samin Farajian, Yasuaki Kakehi, Ken Nakagaki, Ryo Suzuki
Abstract:
This paper introduces Sketched Reality, an approach that combines AR sketching and actuated...
[CHI 2020] RoomShift: Room-scale Dynamic Haptics for VR with Furniture-moving Swarm Robots
4.1K views · 4 years ago
ryosuzuki.org/roomshift
Authors:
Ryo Suzuki, Hooman Hedayati, Clement Zheng, James Bohn, Daniel Szafir, Ellen Yi-Luen Do, Mark D. Gross, Daniel Leithinger
Abstract:
RoomShift is a room-scale dynamic haptic environment for virtual reality, using a small swarm of robots that can move furniture. RoomShift consists of...
[UIST 2020] RealitySketch: Embedding Responsive Graphics and Visualizations in AR through Dynamic Sketching
5K views · 4 years ago
ryosuzuki.org/realitysketch/
Authors:
Ryo Suzuki, Rubaiat Habib Kazi, Li-Yi Wei, Stephen DiVerdi, Wilmot Li, Daniel Leithinger
Abstract:
We present RealitySketch, an augmented reality interface for sketching interactive graphics and visualizations. In recent years, an increasing number of...
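A small, hypothetical sketch of binding a drawn visualization to a tracked parameter, in the spirit of the dynamic sketching described above (not the RealitySketch implementation; the tracked values and the rendering are stand-ins): each new frame appends the tracked object's parameter to the bound graph, so the sketched plot updates with the physical motion.

```python
# Each frame, read the tracked object's parameter (here, its height) and
# append it to the bound plot so the sketched graph updates with the motion.

class BoundPlot:
    """A sketched line graph whose data is bound to a tracked parameter."""
    def __init__(self, parameter_name):
        self.parameter_name = parameter_name
        self.samples = []

    def update(self, value):
        self.samples.append(value)

    def render(self):
        # Stand-in for drawing the graph in the AR view: one ASCII bar per sample.
        for value in self.samples:
            print(f"{self.parameter_name}: " + "#" * int(value * 10))

# Simulated per-frame heights of a tracked pendulum bob.
tracked_heights = [0.2, 0.5, 0.9, 0.5, 0.2]

plot = BoundPlot("height")
for height in tracked_heights:
    plot.update(height)
plot.render()
```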
Collective Shape-changing Display Concept Video [Suzuki 2017]
359 views · 4 years ago
Concept Video for Dynamic Shape Construction and Transformation with a Swarm of Collective Elements by Ryo Suzuki
ryosuzuki.org/phd-thesis/
Ryo Suzuki PhD Defense - Dynamic Shape Construction and Transformation with Collective Elements
801 views · 4 years ago
Title: Dynamic Shape Construction and Transformation with Collective Elements
ryosuzuki.org/phd-thesis
Author: Ryo Suzuki, University of Colorado Boulder, Department of Computer Science
Date: May 13th, 2020
Thesis Committee:
- Daniel Leithinger (CU Boulder, chair)
- Mark Gross (CU Boulder)
- Tom Yeh (CU Boulder)
- Hiroshi Ishii (MIT Media Lab)
- Takeo Igarashi (The University of Tokyo)
Abstract: ...
[TEI 2020] LiftTiles: Constructive Building Blocks for Prototyping Room-scale Shape-changing Interfaces
4K views · 4 years ago
ryosuzuki.org/lift-tiles/
Authors:
Ryo Suzuki, Ryosuke Nakayama, Dan Liu, Yasuaki Kakehi, Mark D. Gross, Daniel Leithinger
Abstract:
Large-scale shape-changing interfaces have great potential, but creating such systems requires substantial time, cost, space, and efforts, which hinders the res...
[UIST 2019 Adjunct] LiftTiles: Modular and Reconfigurable Room-scale Shape Displays through Retractable Inflatable Actuators
1.3K views · 5 years ago
ryosuzuki.org/lift-tiles/
Authors:
Ryo Suzuki, Ryosuke Nakayama, Dan Liu, Yasuaki Kakehi, Mark D. Gross, Daniel Leithinger
Abstract:
This paper introduces LiftTiles, a modular and reconfigurable room-scale shape display. LiftTiles consist of an array of retractable and in...
[UIST 2019] ShapeBots: Shape-changing Swarm Robots
10K views · 5 years ago
[Pacific Graphics 2018] Tabby: Explorable Design for 3D Printing Textures
318 views · 6 years ago
[CHI 2018] Reactile: Programming Swarm User Interfaces through Direct Physical Manipulation
858 views · 6 years ago
[ASSETS'17] FluxMarker: Enhancing Tactile Graphics with Dynamic Tactile Markers
562 views · 7 years ago
Sift Visualization Toolkit for Non Programmers
50 views · 7 years ago
very cool work! What device are you using for this volumetric capture?
Which mobile robotic base are those? 😊
call it inflatabots not inflatablebots
ok
Can't wait to use it in a real teaching environment :D
This is really astonishing. During my college years, I attempted to develop an AI system for Lip Reading, and I am still contemplating its integration with the VR/ AR system.
Good incentive
Awesome, could you tell me what AI/ML is using for this experience?
We might like to collaborate on this at our lab here in Kyoto, Japan.
Dude, sick.
This is amazing work Ryo! Congratulations team! Thank you for citing Ad hoc UI :) I'm unlikely to push that paper into a full version by myself now given the busy life, but rather rely on talented students to push the interaction boundary forward, stay tuned :)
My first question: How do you track the ball and the pen?
Great work mate!
i want to use this so very much! great work!
Impressive! I love the idea.
Very clever!
ok
Cute and sweet babies
Is this a product that is going to be released? is there an evaluation version? it looks like it fills a nice niche
Outstanding! Congrats for the great and meaningful job.