SEED - Electronic Arts
Joined Aug 2, 2019
SEED - Search for Extraordinary Experiences Division.
SEED is a pioneering group within Electronic Arts, combining creativity with applied research. We explore, build, and help determine the future of interactive entertainment.
Our research targets meaningful areas of applied innovation that can be directly expressed in new player experiences, future connected services, and advanced development techniques.
Learn more about SEED at seed.ea.com
ExFlowSions: Using ML in Game Audio Production
How do we give superpowers to game audio designers using AI?
Take a peek inside how ML research and development works at SEED with Mónica Villanueva and Jorge García. In this presentation, they walk us through implementing the ExFlowSions research project, which uses machine learning to perform style transfer on sound effects. While this research is still in its early days, this presentation is a great showcase for the ideation, workflow, and implementation of their system.
**Note: Re-uploaded with improved audio.**
Chapters:
00:00 Start
01:51 Introduction
05:43 Motivations & requirements
07:35 Research codebase
09:21 Tool design
12:17 Implementation
14:07 Testing approaches
15:11 Demos
23:14 Feedback
25:25 Future work
28:03 Conclusions
----------
Find us on:
Twitter: seed
LinkedIn: www.linkedin.com/company/seed-ea/
Views: 135
Videos
A Theory of Stabilization by Skull Carving
293 views · several months ago
This presentation was delivered at SIGGRAPH Asia 2024. Authors: Mathieu Lamarre, Patrick Anderson, Étienne Danvoye Presented by: Mathieu Lamarre Download the research paper here: www.ea.com/seed/news/siggraphasia2024-skull-carving/ Accurately stabilizing facial motion is essential for creating photo-real avatars for 3D games, virtual reality, movies, and machine-learning training data. In that ...
Gigi Lightning Talks
1.6K views · 3 months ago
This past summer, SEED brought together developers for the first-ever "Gigi Jam." Game developers from all over Electronic Arts were invited to show off their prowess using the Gigi rapid prototyping platform designed for real-time rendering (www.ea.com/seed/news/gigi). We received 13 amazing entries on topics including differentiable rendering, Perlin noise-based explosion effects, ray-traced pa...
Benchmarking Gesture Generation for Game Characters
329 views · 4 months ago
How do you evaluate something as subjective and ephemeral as human body language for natural and lifelike qualities? There are many approaches to generating gestures for game characters based on speech. But the lack of a standard way of judging the appropriateness and human-like qualities of those gestures has been a stumbling block to progress in the field. SEED’s Taras Kucherenko has been ins...
A Position-Based Material Point Method
3.8K views · 4 months ago
It's well understood that game physics simulations have to execute quickly, but the most difficult requirement is actually the stability of the simulation. Game physics simulations must operate completely autonomously, and usually, the player has some kind of input into the simulation, which can potentially be quite violent. This severely constrains the methods that we can use. The explicit Mat...
Gigi Tutorial: Rapid Prototyping Platform for Real-Time Rendering
2.6K views · 4 months ago
Gigi is a rapid prototyping and development platform for real-time rendering, developed by SEED. It's intended for professionals, researchers, students, and hobbyists. Gigi is open source, and this video is an introductory tutorial. github.com/electronicarts/gigi Chapters 00:00 Start 00:40 Downloading Gigi 01:38 Using GigiViewer 04:04 Creating a technique in GigiEdit 11:03 Applying the techniqu...
Improving Generalization in Game Agents with Imitation Learning
440 views · 5 months ago
How do we efficiently train in-game AI agents to handle new situations that they haven’t been trained on? Imitation learning is an effective approach for training game-playing agents and, consequently, for efficient game production. However, generalization - the ability to perform well in related but unseen scenarios - is an essential requirement that remains an unsolved challenge for game AI. ...
Towards Optimal Training Distribution for Photo-to-Face Parametric Models in Video Games
217 views · 6 months ago
How do we best construct game avatars from photos? There’s a great deal of interest in personalizing game avatars with photos of players’ faces. Training an ML model to predict 3D facial parameters from a photo requires abundant training data. This presentation by SEED’s Igor Brovikov discusses a work in progress with an optimized view of the training data. Igor’s presentation was delivered at ...
Beyond White Noise for Real-Time Rendering
16K views · 7 months ago
Going beyond white noise for temporal and spatial denoising in real-time rendering can produce better results with no increase in rendering time. In this presentation, SEED’s Alan Wolfe discusses the use of different types of noise for random number generation, focusing on applications in real time rendering, and includes research just published at I3D 2024. In randomized rendering algorithms, ...
Machine Learning for Game Developers
4.9K views · 11 months ago
Machine learning (ML) is a powerful tool that’s already in use across game development, supporting everything from asset creation to automated testing. As an accelerant, it allows developers to create better-looking, higher-quality games with less time and effort, and in the future, it could even enable game teams to create things that are currently impossible due to time, cost, and effort cons...
SEED's Voice2Face Named Fast Company's Next Big Thing in Tech
840 views · a year ago
We are thrilled to announce that Electronic Arts and SEED’s Voice2Face project have been named one of FastCompany’s "Next Big Things in Tech!" It is a tremendous honor to be recognized alongside other game-changing organizations for the Fast Company Tech Awards. Voice2Face is an innovative and creative tool that generates remarkably accurate lip-sync animation for game characters based on nothi...
Coverage Bitmasks for Efficient Rendering Algorithms
2.5K views · a year ago
This presentation gives the humble old bitmask the attention it deserves. Written and presented by SEED’s Martin Mittring. Bitmasks have been part of parallel hardware implementations without really being called out. They have been disguised under various names: Coverage / Visibility / Occupancy / Occlusion / Blocker [bit] masks. They enable efficient rasterization, soft shadows, antialiasing, ...
GENEA Challenge 2023: Gesture Generation in Monadic and Dyadic Settings
800 views · a year ago
This paper reports on the GENEA Challenge 2023, in which participating teams built speech-driven gesture-generation systems using the same speech and motion dataset, followed by a joint evaluation. This year’s challenge provided data on both sides of a dyadic interaction, allowing teams to generate full-body motion for an agent given its speech (text and audio) and the speech and motion of the ...
Meet Hau Nghiep Phan - SEED Technical Art Director
359 views · a year ago
There are a lot of similarities between teaching and machine learning. Meet ML researcher Hau Nghiep Phan. SEED is a pioneering group within Electronic Arts, combining creativity with applied research. We explore, build, and help determine the future of interactive entertainment. Learn more about SEED at seed.ea.com Find us on: Twitter: seed LinkedIn: www.linkedin.com/company/seed-ea/
Using a Differentiable Function for Rig Inversion
1.7K views · 2 years ago
Rig inversion is a mathematical approach that allows animators to remap an existing mesh animation onto an animation rig. This allows animators to tweak and fix up existing mesh animations, which can be difficult or impossible otherwise. The difficulty with rig inversion is finding the rig parameter vector that best approximates a given input mesh. In this paper, we propose to solve this proble...
Imitation Learning to Inform the Design of Computer Games
847 views · 2 years ago
Imitation Learning to Inform the Design of Computer Games
GENEA Challenge 2022: Co-Speech Gesture Generation
1.8K views · 2 years ago
GENEA Challenge 2022: Co-Speech Gesture Generation
Voice2Face: Audio-Driven Facial and Tongue Rig Animations
6K views · 2 years ago
Voice2Face: Audio-Driven Facial and Tongue Rig Animations
EGSR 2022: Spatiotemporal Blue Noise Masks
3.5K views · 2 years ago
EGSR 2022: Spatiotemporal Blue Noise Masks
GTC 2021: Towards Advanced Game Testing With AI
2.3K views · 3 years ago
GTC 2021: Towards Advanced Game Testing With AI
CoG 2021: Improving Playtesting Coverage via Curiosity-Driven Reinforcement Learning Agents
785 views · 3 years ago
CoG 2021: Improving Playtesting Coverage via Curiosity-Driven Reinforcement Learning Agents
CoG 2021: Adversarial Reinforcement Learning for Procedural Content Generation
2.6K views · 3 years ago
CoG 2021: Adversarial Reinforcement Learning for Procedural Content Generation
SIGGRAPH 2021: Global Illumination Based on Surfels
50K views · 3 years ago
SIGGRAPH 2021: Global Illumination Based on Surfels
SIGGRAPH 2021: Direct Delta Mush Compression
1.3K views · 3 years ago
SIGGRAPH 2021: Direct Delta Mush Compression
SIGGRAPH 2021: Swish - Neural Network Cloth Simulation in Madden NFL 21
4.6K views · 3 years ago
SIGGRAPH 2021: Swish - Neural Network Cloth Simulation in Madden NFL 21
Swish Sizzle Reel: Neural Network Cloth Simulation in Madden NFL 21
1.1K views · 3 years ago
Swish Sizzle Reel: Neural Network Cloth Simulation in Madden NFL 21
Re•Work 2021: Augmenting Automated Game Testing with Deep Reinforcement Learning
682 views · 3 years ago
Re•Work 2021: Augmenting Automated Game Testing with Deep Reinforcement Learning
At 40:30, the low frequency noise in the bottom left image looks similar to the "boiling" artifact which denoisers often produce in ray traced games. Does this suggest the boiling is caused by the use of white noise? Another question: may spatial or spatio-temporal blue noise also be used to improve diffusion models? It looks like they are using white noise currently.
Hey, yes indeed. One of the main tools in the toolbox for denoisers is blurring aka a low pass filter. That removes high frequencies and leaves low frequencies. If your rendering noise has no low frequencies, the noise goes away. If your rendering noise does have low frequencies, they are left behind after the filtering. If you do a low pass filter and are left with big lumpy blobs that slowly change over time, that is rendering noise that is low frequency over space and time. So yes, white noise is the primary culprit of boiling. The quake 2 RTX project also mentions that blue noise is better denoised than white noise. Regarding diffusion models, I definitely believe so and there is a paper about this. Give a google for "Blue noise for diffusion models" to see. Blue noise was also used in the NVIDIA paper "Filtering After Shading With Stochastic Texture Filtering", the authors mention that they believe it would help with "Random-Access Neural Compression of Material Textures", and there are various other papers that cite our noise papers. There is even a citation where they use it in echocardiograms. It's quite exciting, because it seems that real time rendering, and ML (and various other things) could use this stuff, but it isn't very well known yet.
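The low-pass intuition in that reply can be sketched numerically. As an illustrative stand-in (not SEED's actual noise pipeline), differencing white noise gives a crude "blue-ish" high-pass signal in 1D; a box blur then plays the role of a denoiser's low-pass filter, and we can measure how much residual noise each signal leaves behind.

```python
import random
from statistics import pvariance

random.seed(0)
N, K = 100_000, 16  # sample count, box-filter width

# White noise: energy at all frequencies, including the low ones.
white = [random.gauss(0.0, 1.0) for _ in range(N)]

# Crude "blue-ish" noise: a first difference acts as a high-pass,
# suppressing the low frequencies that survive a denoiser's blur.
blue = [(white[i] - white[i - 1]) / 2**0.5 for i in range(1, N)]

def box_blur(x, k):
    """Moving average: a simple low-pass filter, like a denoiser's blur."""
    s, out = sum(x[:k]), []
    for i in range(k, len(x)):
        out.append(s / k)
        s += x[i] - x[i - k]
    return out

# Residual noise left behind after the low-pass filter.
res_white = pvariance(box_blur(white, K))  # roughly 1/K
res_blue = pvariance(box_blur(blue, K))    # roughly 1/K^2, far smaller

print(res_white, res_blue)
```

The white-noise residual is the slowly changing lumpiness that reads as boiling; the blue-ish signal's residual is an order of magnitude smaller because it had almost no low-frequency energy to begin with.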
you mean high framerates
That filtering point of view was very cool to see. Neatly explains why blue noise looks clean when viewed farther away. Looking forward to learning more about FAST noise as well!
Glad you enjoyed it!
Looks insane, thanks for the talk
Great overview! The story of loot boxes reminds me of an article I read in Game AI Pro. In that article, they also found that players perceived pure random samples as "unfair." What that article did was reject certain samples if the same result was produced repeatedly too many times.
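The streak-rejection idea from that Game AI Pro anecdote can be sketched as follows. The function name, the `max_repeats` parameter, and the reroll loop are all hypothetical choices for illustration, not the article's actual implementation.

```python
import random

def fair_feeling_choice(options, history, max_repeats=2, rng=random):
    """Draw a random option, but reroll if it would extend a run of
    identical results past max_repeats, so streaks feel less 'unfair'."""
    pick = rng.choice(options)
    run = history[-max_repeats:]
    while len(run) == max_repeats and all(r == pick for r in run):
        pick = rng.choice(options)  # reject the sample and redraw
    return pick

random.seed(1)
loot = ["common", "rare", "epic"]
drops = []
for _ in range(1000):
    drops.append(fair_feeling_choice(loot, drops))

# By construction, no item ever appears more than max_repeats times in a row.
longest = run = 1
for a, b in zip(drops, drops[1:]):
    run = run + 1 if a == b else 1
    longest = max(longest, run)
print(longest)
```

The trade-off, as the talk discusses, is that clipping streaks makes the sequence feel fairer while making it strictly less random.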
Fantastic talk, thank you!
Great stuff! Enjoying using GIGI for rapid prototyping of some ideas I've wanted to try for a while. More videos like this please :)
While it is stable, is it efficient enough to be used in games, perhaps if it could take advantage of the two less-used cores on the most common 8-core CPUs? Just curious, and thanks for your explanation.
These vids popped into my head recently … from back when AI wasn't in the normie zeitgeist. I saw one where you could train AI to help make maps but don't remember if it was SEED. I remember the vid as having cute bushes, vaguely cartoonlike.
This is awesome, I can already imagine using this for prototyping various kinds of graphics techniques.
Are you selling this, or just showing it?
This is research work; it isn't meant for commercialization, but for advancing science.
@@taraskucherenko6661 Got it, very cool. Good luck with your work!
This is really COOL! Anybody you know of make a blender implementation for this yet?
MPM is the most fun class of simulation that exists imo. Would love to see this in games someday!
If this is being done on CPU at 500k particles then it sounds like GPUs should be able to go way further than that if it can be ported.
I'm puzzled why profit-hungry short-sighted companies like EA permit their researchers to publish results publicly instead of patenting and/or using it for competitive advantage.
This looks like it'd be a cool approach for a plate tectonics simulation
Very interesting to see the physics going into game development.
really awesome video! thanks for sharing
I am a touchdesigner user and i aim to use this for some interactive installations… keep up with this page ❤. You push me to become a better artist with all the knowledge you share freely here. A big thanks
Really awesome stuff. I'll defo love working on this.
Wow, your videos are the right stuff to get even more interested in game engine development. Thank you and greetings from the Walrus :D
This seems pretty fire 🔥
My friend at EA just introduced me to this, seems like an excellent tool for prototyping a number of things I've been working on in the past!
This looks cool, but sadly it doesn't build straight from GitHub. Missing the ImGui library, I think.
It's working for other people, and it does include ImGui. The repo lists a Discord channel; if you go there, we can work with you and try to figure it out together. There are also pre-built binaries (a .zip and also an installer) in the releases section of the repo.
Hopefully Vulkan support will be implemented soon, or released if it's already implemented.
Probably would not happen
@@iHR4K It isn't yet implemented, but it is planned!
@ Oh wow, my bad. Just jealous of Vulkan and its validation layers :)
Let's goo
Cool to see my old slide in the beginning! I have another anecdote about the benefits when using TAA. The application of blue noise on reflections just came about from seeing a high-quality image on the "dithering" Wikipedia article. Applying it, I noticed that it looked a little better than white noise without TAA, but with TAA the difference was much bigger. The negative correlation was specifically helpful while using TAA, specifically because of TAA's neighborhood clamping/clipping: essentially it ensured that the neighborhood AABB would always be big enough to contain a new sample, avoiding rejection, whereas with white noise, oftentimes TAA takes in the reprojected sample, sees that it's outside the random box, and tosses it. Gonna check out that FAST, now that the I3D session is online. Thanks for the cool presentation!
My journey started from seeing that inside presentation, and research into blue noise textures had been pretty much stale since the '90s with void-and-cluster, so thanks for reviving the interest. FAST is the snowball growing as it rolls downhill :)
Blue noise textures look suspiciously similar to Turing patterns
Oculus has had libraries to do this for 5+ years
Thank you, YouTube recommendation algorithm. My brain is smarter now.
I would argue that you can't have a random function that is completely fair, because it would need to be deterministic and random at the same time. That's what I get from this video.
For sure! Random is only fair "at the limit". Like, at infinity. Before then... who knows.
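"Fair at the limit" can be illustrated with a quick simulation, a sketch assuming an ideal fair coin: short runs wander far from 50%, while the long-run fraction of heads closes in on it.

```python
import random

random.seed(0)

def heads_fraction(n, rng=random):
    """Fraction of heads observed in n fair coin flips."""
    return sum(rng.random() < 0.5 for _ in range(n)) / n

short = heads_fraction(10)          # can easily come out 0.3 or 0.8
long_run = heads_fraction(100_000)  # the law of large numbers pulls this to ~0.5

print(short, long_run)
```

This is exactly the gap the reply points at: fairness is a statement about the limit, not about any finite stretch of play.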
Excellent presentation!
I really learned a lot from this video, thanks for the quality content! I didn't know that you could optimize the noise pattern for a given filter. I was wondering, from a computer vision perspective, we can use back-propagation to learn a good convolution filter. In your case, given a set of image samples, what if you optimized in stages? In other words, learn the best filter, then learn the best noise. Would that improve results?
Yeah. There is a lot of work on denoising. There is a little work on the noise side. If they were both optimized together, I do think better results are possible. Someone needs to get on it :)
This is/was a great video; worth my time.
At 4:00 the answer would realistically be somewhere in between 50% and 100%. Much closer to 50% than 100%, but the probability of the coin being unfair is to be considered. If we estimate the prior probability of the coin being unfair (landing 75% of the time on one side) to be 1% for each side, after 3 consecutive heads the probability of the coin being biased towards "heads" rises to 3.3% and the probability of it being biased towards "tails" drops to 0.1%, making the overall probability of getting "heads" next time slightly above 50.8%.
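The commenter's numbers check out. Here is the Bayesian update spelled out, using the 1% priors and 75% bias taken from the comment itself:

```python
# Hypotheses: the coin is fair, biased toward heads, or biased toward tails.
priors = {"fair": 0.98, "heads_biased": 0.01, "tails_biased": 0.01}
p_heads = {"fair": 0.50, "heads_biased": 0.75, "tails_biased": 0.25}

# Likelihood of observing three heads in a row under each hypothesis.
likelihood = {h: p_heads[h] ** 3 for h in priors}

# Bayes' rule: posterior is proportional to prior times likelihood.
evidence = sum(priors[h] * likelihood[h] for h in priors)
posterior = {h: priors[h] * likelihood[h] / evidence for h in priors}

# Probability the NEXT flip is heads, averaged over the hypotheses.
p_next_heads = sum(posterior[h] * p_heads[h] for h in priors)

print(round(posterior["heads_biased"], 3))  # 0.033
print(round(posterior["tails_biased"], 3))  # 0.001
print(round(p_next_heads, 3))               # 0.508
```

So even three heads in a row only nudges a rational bet on the next flip from 50% to about 50.8%, which is why the gambler's-fallacy framing in the talk still holds for any coin you have good reason to trust.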
Great presentation!
I wonder if a natural-looking prior, or collected statistics on the level of neighboring pixels' spatial correlation, could be used to construct a better noise (instead of using the assumption that small delta x maps to small delta y).
The autocorrelation of a blue noise texture shows a ripple of negative then positive correlation that almost looks like sinc. But the radius is very small, and it becomes uncorrelated after a radius of 5 or so. I do think what you are saying is a way to approach it, yeah.
Wolfe, Thank you for the great video. I've learned a lot from your blog and I'm glad to be able to express my gratitude on this channel.
"RNG" in gamer talk doesn't mean "random number generator", just inherent randomness or your luck with it. It's not that gamers think the generator is bad when they have bad luck, they just use the acronym as an expression for something else. "Bad luck with the RNG" got shortened to "Bad RNG" but meaning the same thing.
There is no luck. It is random noise. A predetermined probability. A more fair assertion would be that some people are luckier than others. The more realistic take is that most people will never be lucky.
@@mito._ Luck is a favourable outcome of a random (feeling) situation, the less likely the more lucky. It's just how language works.
@@Bassalicious A favorable outcome is just a specific configuration of a system. The perception of "luck" requires perception or, as you said, a "feeling" of favorableness - which is just bias towards a specific predetermined configuration.
Holy smokes, EA can't afford any audio production? Are you calling in from inside a sea can?
Ha! No, it's just me recording from my house without a very nice audio setup.
This is one of those videos I wish I could give more than just one thumbs up. Very well presented.
I would argue that from both the implementation and gameplay perspective, keeping loot tables random is the best choice. While you do "smooth out" the gameplay experience, you also lose out on more personalized experiences, for example a player having a lucky day and getting many rare drops in a row.
Yeah, other people have said that same in response to this stuff, that the unpredictability is a feature. I'd say it's a design choice, and it's nice to be able to do either.
I think the idea of a "lucky day" is purely sentimental and adds no real value to the average user experience. A lucky day is confirmation bias, amongst hundreds or thousands of unlucky days.
Blue noise rendering kinda looks like analogue photography. So cool!
Nice talk! I liked the visuals, I think that's the most colorful Rendering Equation I've ever seen!
3:40 The thing with Gambler's Fallacy is that, statistically, it WILL WORK for a PART of the people. If you look at e.g. 1000 people each doing 4 coin flips you expect "There's no way I get 4 Heads" to hold for 937 of them (=1000*(1-0.5^4)) and about 60 of those will get the HHHT fallacy scenario P.S. You can run the trials with a 1-liner in your browser's console: [...Array(1000)].map(person => [...Array(4)].map(isHead => Math.random() >= 0.5).every(isHead => isHead == true)).filter(allHeads => allHeads == false).length
Why is this one hour long when it can be explained simply in 5 minutes 😑 You don’t need to storytell the whole history of computer graphics and math to explain a simple RNG 😑
Very interesting video. Well done.
Great presentation! Do you think that blue noise is relatively easy to retrofit into existing software? I am sure that adding blue noise systems to new software moving forward will be more popular.
I think the challenge with adding blue noise or low discrepancy sequences to old software is that old software often doesn't have very many people working on it, and the people that do work on it probably don't know everything about the software. There are lots of dark corners where the person who made it doesn't work there anymore and nobody wants to touch it. So yeah, I think it is possible to add it to older software, but the usual challenges of changing older software come up. I agree with you that newer software doesn't have the same excuse, and all we can do is try to help people understand this stuff to make better implementations before the new software becomes old software and stops changing :)
Thanks!
3:32 Isn't it basically saying that no previous outcomes have an influence on the next result? The odds are ALWAYS 50/50? I had 20 times black in a row in a roulette game once. Lost me a good chunk of money that evening 😀
That is super cursed! 😂
From electronic arts, the randomness is based on the money spent on DLCs