12 videos
192,870 views
Tassilo von Gerlach
Joined Aug 16, 2013
iOS and VisionPro app developer.
Procedural Spheres in RealityKit on visionOS
00:00 - Introduction and overview
00:53 - Rendering a simple triangle in RealityKit
02:51 - Understanding mesh resources and descriptors
04:57 - Defining triangle vertices and edges
05:42 - Introduction to spheres and cubes concept
06:53 - Explaining the TerrainFaceView
08:16 - Creating a terrain face with resolution and local up
10:54 - Calculating vertex positions on the cube
11:45 - Converting cube to sphere: The normalization step
13:42 - Experimenting with multiple terrain faces
14:46 - Introduction to the Planet entity
15:57 - Demonstrating the complete procedural sphere
16:27 - Adjusting resolution and color of the sphere
17:15 - Conclusion and future content plans
Almost in F - Tranquillity by Kevin MacLeod is licensed under a Creative Commons Attribution 4.0 license. creativecommons.org/licenses/by/4.0/
Source: incompetech.com/music/royalty-free/index.html?isrc=USUAN1100394
Artist: incompetech.com/
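The normalization step in the chapter list above (11:45) maps each vertex of a cube face onto the unit sphere by dividing it by its own length. A minimal pure-Swift sketch of that idea (the names `Vec3` and `pointOnUnitSphere`, and the axis parameters, are my own for illustration, not taken from the video):

```swift
import Foundation

// A tiny 3-component vector, so the sketch needs no Apple-only simd module.
struct Vec3 {
    var x, y, z: Double
    var length: Double { (x * x + y * y + z * z).squareRoot() }
    // Scale the vector to length 1: this is the cube-to-sphere step.
    var normalized: Vec3 { Vec3(x: x / length, y: y / length, z: z / length) }
}

// Map a grid coordinate (0...resolution-1 in each axis) on the cube face whose
// outward direction is `localUp` onto a point on the unit sphere. `axisA` and
// `axisB` are the two tangent directions spanning the face.
func pointOnUnitSphere(x: Int, y: Int, resolution: Int,
                       localUp: Vec3, axisA: Vec3, axisB: Vec3) -> Vec3 {
    // Percent across the face, in 0...1.
    let px = Double(x) / Double(resolution - 1)
    let py = Double(y) / Double(resolution - 1)
    // Point on the cube face: localUp plus offsets in -1...1 along each axis.
    let cube = Vec3(
        x: localUp.x + (px - 0.5) * 2 * axisA.x + (py - 0.5) * 2 * axisB.x,
        y: localUp.y + (px - 0.5) * 2 * axisA.y + (py - 0.5) * 2 * axisB.y,
        z: localUp.z + (px - 0.5) * 2 * axisA.z + (py - 0.5) * 2 * axisB.z
    )
    // Normalizing inflates the flat cube face onto the unit sphere.
    return cube.normalized
}
```

Running this per grid point on all six faces (one per `localUp` direction) yields the vertex positions of the procedural sphere; the resulting positions feed a RealityKit `MeshDescriptor`.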
Views: 559
Videos
Cooking with Quest 3 in Augmented Reality
122K views · 1 year ago
Please fill out this form if you want to chat about spatial computers and cooking in AR! forms.gle/JewueKeaKxYyMEB19 In this video, I try to find out how good #quest3 is for #cooking in #augmentedreality. #virtualreality #quest #meta #applevisionpro #spatialcomputing #ar #chickensoup #springonions #soup #chef #vr #ar #mixedreality #passthrough
How to Build a Vision Pro App Part 5: Particles
3.4K views · 1 year ago
Part 5 of coding a VisionPro app to create wall art and place it in your physical environment. This project integrates RealityKit, Reality Composer Pro, SwiftUI, and UIKit to offer an immersive AR experience. Support me on Patreon: patreon.com/tassilovg Introduction Video: th-cam.com/video/IefFafD8mR8/w-d-xo.html Project Repo: github.com/tracyhenry/GenerativeDoodleArt_VisionOS Follow us on Twit...
How to Build a Vision Pro App Part 4: Animations & UIKit
3.2K views · 1 year ago
Part 4 of coding a VisionPro app to create wall art and place it in your physical environment. This project integrates RealityKit, Reality Composer Pro, SwiftUI, and UIKit to offer an immersive AR experience. Support me on Patreon: patreon.com/tassilovg Introduction Video: th-cam.com/video/IefFafD8mR8/w-d-xo.html Project Repo: github.com/tracyhenry/GenerativeDoodleArt_VisionOS Follow us on Twit...
How to Build a Vision Pro App Part 3: SwiftUI Attachments, Combine, Reality Composer Pro Animations
6K views · 1 year ago
Part 3 of coding a VisionPro app to create wall art and place it in your physical environment. This project integrates RealityKit, Reality Composer Pro, SwiftUI, and UIKit to offer an immersive AR experience. Support me on Patreon: patreon.com/tassilovg Introduction Video: th-cam.com/video/IefFafD8mR8/w-d-xo.html Project Repo: github.com/tracyhenry/GenerativeDoodleArt_VisionOS Follow us on Twit...
How to Build a Vision Pro App Part 2: Reality Kit Entities & Anchors
10K views · 1 year ago
Part 2 of coding a VisionPro app to create wall art and place it in your physical environment. This project integrates RealityKit, Reality Composer Pro, SwiftUI, and UIKit to offer an immersive AR experience. Support me on Patreon: patreon.com/tassilovg Introduction Video: th-cam.com/video/IefFafD8mR8/w-d-xo.html Project Repo: github.com/tracyhenry/GenerativeDoodleArt_VisionOS Follow us on Twit...
How to Build a Vision Pro App Part 1: Window Group & Immersive Space
17K views · 1 year ago
Part 1 of coding a VisionPro app to create wall art and place it in your physical environment. This project integrates RealityKit, Reality Composer Pro, SwiftUI, and UIKit to offer an immersive AR experience. Introduction Video: th-cam.com/video/IefFafD8mR8/w-d-xo.html Project Repo: github.com/tracyhenry/GenerativeDoodleArt_VisionOS Follow us on Twitter: tvon_g tracy hen...
How to build a Vision Pro App Part 0: Project Overview
3.6K views · 1 year ago
Demo of our first VisionPro app to create wall art and place it in your physical environment. This project integrates RealityKit, Reality Composer Pro, SwiftUI, and UIKit to offer an immersive AR experience. Project Repo: github.com/tracyhenry/GenerativeDoodleArt_VisionOS Follow us on Twitter: tvon_g tracy henry Keywords: Vision Pro App Development, visionOS, Open-Source...
Unity vs Reality Kit (Swift) for Vision Pro Development
16K views · 1 year ago
Let's take a look at the two primary ways of building applications for Apple Vision Pro (Vision OS) - using Unity and Apple Native Frameworks. This video aims to provide you with a clear understanding of the differences between these two methodologies and help you make an informed decision for your specific app requirements. We discuss the key features and benefits of both Unity and Apple's nat...
Why Apple doesn't talk about Artificial Intelligence
2.1K views · 1 year ago
Follow me on Twitter: tvon_g Subscribe to my newsletter: tassilo.beehiiv.com/subscribe KEY SOURCES: th-cam.com/video/DgLrBSQ6x7E/w-d-xo.html th-cam.com/video/GYkq9Rgoj8E/w-d-xo.html th-cam.com/video/yQ16_YxLbB8/w-d-xo.html Disclaimer: This video is purely my opinion and should not be regarded as a primary source. I am not a financial advisor and this is not a recommendation to buy o...
Exploring Whisper C++ : Open Source Speech Recognition Library with iOS Swift UI Sample
9K views · 1 year ago
In this video, we dive into the open-source speech recognition library, Whisper C++, by exploring its functionality, understanding how it works, and discussing its benefits. We'll go through the code and run the SwiftUI sample app that's included in the repo to see the library in action. Whisper C++ is a lightweight, portable library that can run on various platforms such as macOS, iOS, Android, L...
Thanks very much. Exactly the inspiration, and details, I needed to implement my concept of Image Bubbles within Terrain Cubes for my visionOS project.
Loved it, please share more videos, we are waiting!
Funny, I just bought a Vision Pro the other day and I was considering making an app that helps you cook. But while talking to GPT, as opposed to typing, like "ok what do I do now?" GPT: "Chop the onions and sauté". What do you think? Is this the future? What were the shortcomings you faced with this method?
Yep using your dirty hands typing on an invisible keyboard 😊
Using your dirty hands typing on an invisible keyboard 😊
All that sticky steam getting onto your Quest and inside the gaps... did you find a hack to save it 🤔
🎯 Key points for quick navigation:
00:00 📚 Overview: Introduction to Unity and Apple native frameworks for Vision Pro app development.
00:26 🧐 Clarification: "Immersive" and "fully immersive" mean augmented reality (AR) and virtual reality (VR).
01:05 📐 UI Design: AR apps can operate in full or shared spaces with specific UI constraints.
01:30 🎨 Rendering Engine: RealityKit renders AR apps in both shared space and full space.
01:57 ⚙️ Unity Translation: Unity uses PolySpatial for translating materials and shaders into RealityKit.
02:24 🛠️ Development Tools: Apple native frameworks provide Xcode, SwiftUI previews, and Reality Composer Pro for development.
03:05 🏗️ Integrated Environment: Apple's frameworks offer a seamless, integrated development experience for AR/VR.
03:20 🔄 Unity Support: PolySpatial supports Unity's features like particle effects but not handwritten shaders.
04:14 📸 Volumetric Cameras: Essential for rendering content in shared spaces.
04:27 👀 User Interaction: Vision Pro utilizes eyes, hands, and gestures for user input within Unity.
04:55 🚀 Coming Soon: PolySpatial is not available yet, but a beta program is open for applications.
05:08 🌐 Cross-Platform: Unity's Universal Render Pipeline supports cross-platform compatibility.
05:37 🔀 Decision Factors: Choose technology based on cross-platform needs and development preferences.
07:12 🚀 Speed of Execution: Personal familiarity with development tools influences the speed of reaching an MVP.
07:41 📢 Viewer Engagement: Request for audience feedback on content preferences and future topics.
Made with HARPA AI
imagine the future where employers slap one of these on for job tutorials lol
Pass through image is improved now should try this test again
With the new Meta AI integrated, there will be a lot less typing. It will be a much better experience then
These videos are great but your audio level is so low!
Thank you. Can we build app for apple vision pro with unreal? Any updated version of this video?
The altitudes look too high to be planets... like, look at an image of Olympus Mons on Mars
it would be interesting to change the background to virtual with windows on real areas over the chrome window. how to do it?
omg so good contents series!!!
Great tutorials, Happy that i finally able to make some progress!
Hi~ This is Morrie from a standing desk company. I want to collaborate with you~ I couldn't find your email... Could you give me your email so I could send you the details?😉
nice video
Sick app! Thanks
Awesome as always. Do you plan on doing PolySpatial tutorials by any chance? I’m really good with Unity but I’m really slowly getting better at Swift / Xcode but there isn’t a ton of documentation/ support with PolySpatial so far. Either way, awesome channel
Appreciate it. I'm going to focus on Reality Kit for the foreseeable future but perhaps I'll explore Unity PolySpatial down the road
@@tassilovg yes I want more videos on Reality kit
how did you add those infinite animations
Using swift ui animations attached to the RealityView that hosts the cubes
@@tassilovg awesome thank you!
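The reply above describes driving an endless animation from SwiftUI rather than from RealityKit itself. A minimal sketch of one way to do that, assuming visionOS with a `RealityView` hosting a cube (the view name, entity, and timing values are my own illustration, not the author's code):

```swift
import SwiftUI
import RealityKit

struct SpinningCubeView: View {
    @State private var spin = false

    var body: some View {
        RealityView { content in
            // A simple cube entity hosted by the RealityView.
            let cube = ModelEntity(
                mesh: .generateBox(size: 0.1),
                materials: [SimpleMaterial(color: .cyan, isMetallic: false)]
            )
            content.add(cube)
        }
        // SwiftUI rotates the whole view; repeatForever makes it endless.
        .rotation3DEffect(.degrees(spin ? 360 : 0), axis: (x: 0, y: 1, z: 0))
        .animation(.linear(duration: 4).repeatForever(autoreverses: false),
                   value: spin)
        .onAppear { spin = true }
    }
}
```

Toggling the `spin` state once in `onAppear` is enough: the `repeatForever` timing curve keeps the rotation running without further state changes.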
Can you program an app for the AVP which will respond to the user blinking their eyes?
Good to see your cooking experience was so great
your passthrough looks so great on quest3
The only question I have is why did you keep the onion skin on the onion ?
Hi @Tassilo, the background is not visible for me in visionOS 2, and there's the color issue I mentioned in the previous comment. If possible, can you please check and fix these in a new branch and put the details in the description? It would be a great help. I'm stuck, and I really want to complete this tutorial series.
Please check and update if this is possible.
"'baseColor' was deprecated in visionOS 1.0: renamed to 'color'" — I was getting this error in the function loadImageMaterial. If I fix that to 'color', I start getting this error instead: "Cannot assign value of type 'MaterialColorParameter' to type 'SimpleMaterial.BaseColor' (aka 'PhysicallyBasedMaterial.BaseColor')"
Has anyone faced a similar issue?
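For the deprecation described above: in current RealityKit, `SimpleMaterial`'s color is a `BaseColor` value (`init(tint:texture:)`) rather than a `MaterialColorParameter`. A sketch of the adjusted helper, assuming the commenter's `loadImageMaterial` shape (the function and texture name are from the comment/tutorial context, not verified against the repo):

```swift
import RealityKit
import UIKit

// Hypothetical helper mirroring the tutorial's loadImageMaterial.
func loadImageMaterial(named name: String) -> SimpleMaterial {
    var material = SimpleMaterial()
    if let texture = try? TextureResource.load(named: name) {
        // Old API: material.baseColor = .texture(texture)   // MaterialColorParameter
        // New API: `color` takes a BaseColor(tint:texture:) value.
        material.color = SimpleMaterial.BaseColor(
            tint: .white,
            texture: MaterialParameters.Texture(texture)
        )
    }
    return material
}
```

The tint multiplies the texture, so `.white` leaves the image unchanged; this compiles only against RealityKit on Apple platforms.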
Thank you for this video.
Would you make a video, or tell us how you export the character from Unity with separated animations, please? Whatever I do, it bakes all the animations into one.
Has anyone released a dictation app based on this yet? I need one so I’ll do it myself if not.
Did you manage to do a comparison between whisper.cpp and apple Speech framework?
Thank you for organizing such a clear and easy-to-follow tutorial. Could you please share the learning path and the learning materials that you have gone through? Thx much!
4:15 I thought he would hit and damage the screen with that plate 😂😂
Whoa that was amazing! I just bought it and I love it! Now I wanna do the same thing you did lol I NEED to know how you were able to have the chat GPT on your VR? The keyboard and how did you do the multiple browsers too please!! Thank you 😊
Hello @tassilovg, I watched all your videos. They helped me very much. I have a small test project for visionOS and ran into a problem: I want to attach a USDZ file or scene to an image anchor. Can you help me?
Thanks for this series but I do have one simple question, how did you learn all this? Which resources? Because it is not really explaining to a UIKit/SwiftUI developer why something needs to be done, for people who have no background with all this SceneKit stuff. One cannot possibly learn just by reading documentation. So, from which resources did you learn, would you please share? Thanks.
I want to integrate this in my app, but the issue is I'm not getting a separate timestamp for each word. The docs you mentioned say to use max-length, like -ml 1, but I want a timestamp for each word, aligned with that word. How do I do that? I'm not using the terminal, that's fine, but I need the timestamps in SwiftUI.
How about facial animation? Is it possible with shape keys?
Anyone who is having trouble with attachments in the first 5 minutes, this code worked for me:
} attachments: {
    Attachment(id: "attachment") {
        VStack {
            Text(inputText)
                .frame(maxWidth: 600, alignment: .leading)
                .font(.extraLargeTitle2)
                .fontWeight(.regular)
                .padding(40)
                .glassBackgroundEffect()
        }
        .tag("attachment")
        .opacity(showTextField ? 1 : 0)
    }
}
I'm sure it'll be outdated again here shortly but in the meantime, here ya go :)
Thank you for sharing! It works for me now.
Hi sir, just want to know if buying a Vision Pro is going to allow me to build and test Vision Pro applications? On Apple website, it says need a Vision Pro development kit. Thanks a lot
yeah, any vision pro would work
// Enable interactions on the entity.
immersiveEntity.components.set(InputTargetComponent())
immersiveEntity.components.set(CollisionComponent(shapes: [.generateBox(width: 5, height: 10, depth: 1)]))
This is amazing. Thanks for taking the time to prepare the tutorials. Any advice on which Mac to use for Vision Pro development? Do you need a Max processor, or more memory or storage? Do you mind sharing your specs and your experience so far?
Love it!! Please keep it going!
Hi Tassilo! How did you make the "ggml-base.bin" model work? The file that I downloaded does not seem to be supported, and I've tried encrypting it in many different ways but still wasn't successful.
I have been loving this tutorial! I have a question for better understanding how the provided animations work on the assistant model. In reality composer pro, the assistant and each animation in CharacterAnimations.usda (wave_model, jump_up_model, etc) contain an identical hierarchy, common_people_idle > basic_rig > basic_rig_pelvis > skeleton > animation. If they are all seemingly the exact same, how are they playing different animations? And how does the assistant from Immersive.usda know how to play animations from CharacterAnimations? It seems like all we have to do is playAnimation and the assistant is able to play any animation from CharacterAnimations. Does all of this have something to do with the way the animations are named such as common_people@jump_down.usdz ? Sorry if this is confusing. If anyone has answers to even some of these questions that would be very appreciated. I am trying to create a similar project for my own animations, a model that plays different animations in response to tapping.