Professor Bryce
United States
Joined Jan 10, 2020
Davidson College
Data Structures for Deviation Payoffs (AAMAS Talk)
Paper abstract:
We present new data structures for representing symmetric normal-form games. These data structures are optimized for efficiently computing the expected utility of each unilateral pure-strategy deviation from a symmetric mixed-strategy profile. The cumulative effect of numerous incremental innovations is a dramatic speedup in the computation of symmetric mixed-strategy Nash equilibria, making it practical to represent and solve games with dozens to hundreds of players. These data structures naturally extend to role-symmetric and action-graph games with similar benefits.
Paper link: arxiv.org/abs/2302.13232
Julia library with the paper's experiments: github.com/Davidson-Game-Theory-Research/gameanalysis.jl
A more practical Python library for solving role-symmetric games: github.com/egtaonline/gameanalysis
For background, see the playlist for my Algorithmic Game Theory course, especially videos 18-20: th-cam.com/play/PLgPbN3w-ia_Md9sxkXhCIAmTITSOUDJz2.html
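To make the abstract's core quantity concrete, here is a naive Python sketch of the expected utility of each unilateral pure-strategy deviation from a symmetric mixed strategy, computed by enumerating opponent configurations with multinomial weights. This is illustration only, not the paper's optimized data structures (see the linked libraries for those); the function names and the toy payoff are made up for the example.

```python
from itertools import combinations_with_replacement
from math import factorial

def opponent_configs(num_actions, num_opponents):
    """All ways to distribute the symmetric opponents over the actions."""
    for combo in combinations_with_replacement(range(num_actions), num_opponents):
        counts = [0] * num_actions
        for a in combo:
            counts[a] += 1
        yield tuple(counts)

def multinomial(counts):
    """Number of player orderings that realize a given configuration."""
    total = factorial(sum(counts))
    for c in counts:
        total //= factorial(c)
    return total

def deviation_payoffs(payoff, mixture, num_players):
    """Expected utility of each unilateral pure-strategy deviation from
    the symmetric mixed strategy `mixture`."""
    num_actions = len(mixture)
    devs = [0.0] * num_actions
    for config in opponent_configs(num_actions, num_players - 1):
        # Probability that the num_players - 1 opponents, each sampling
        # independently from `mixture`, realize this configuration.
        prob = float(multinomial(config))
        for a, c in enumerate(config):
            prob *= mixture[a] ** c
        for a in range(num_actions):
            devs[a] += prob * payoff(a, config)
    return devs

# Toy 3-player coordination game: your payoff is the number of opponents
# playing your action. At the uniform mixture both deviations pay 1.0.
u = lambda action, config: config[action]
print(deviation_payoffs(u, [0.5, 0.5], num_players=3))  # [1.0, 1.0]
```

The sketch only defines the quantity being computed; the paper's data structures are what make this computation fast enough to handle games with dozens to hundreds of players.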
Views: 574
Videos
Counterfactual Regret Minimization (AGT 26)
9K views · 1 year ago
Davidson CSC 383: Algorithmic Game Theory, S23. Week 14 - Wednesday.
Sequential (and Perfect Bayesian) Equilibrium (AGT 25)
3.4K views · 1 year ago
Davidson CSC 383: Algorithmic Game Theory, S23. Week 14 - Monday.
Subgame Perfection and Backwards Induction (AGT 24)
1.1K views · 1 year ago
Davidson CSC 383: Algorithmic Game Theory, S23. Week 13 - Wednesday.
Extensive Form Games (AGT 23)
733 views · 1 year ago
Davidson CSC 383: Algorithmic Game Theory, S23. Week 13 - Monday.
Action-Graph Games (AGT 22)
332 views · 1 year ago
Davidson CSC 383: Algorithmic Game Theory, S23. Week 12 - Wednesday.
Congestion Games (AGT 21)
910 views · 1 year ago
Davidson CSC 383: Algorithmic Game Theory, S23. Week 12 - Monday.
Data Structures for Symmetric Games (AGT 20)
306 views · 1 year ago
Davidson CSC 383: Algorithmic Game Theory, S23. Week 11 - Wednesday. This video covers ideas from my recent paper: arxiv.org/abs/2302.13232
Gradient Descent for Nash (AGT 19)
422 views · 1 year ago
Davidson CSC 383: Algorithmic Game Theory, S23. Week 11 - Monday.
Replicator Dynamics (AGT 18)
1.5K views · 1 year ago
Davidson CSC 383: Algorithmic Game Theory, S23. Week 10 - Wednesday.
Fictitious Play and Regret Matching (AGT 17)
2.1K views · 1 year ago
Davidson CSC 383: Algorithmic Game Theory, S23. Week 10 - Monday.
Complexity of Nash: PPAD (AGT 16)
527 views · 1 year ago
Davidson CSC 383: Algorithmic Game Theory, S23. Week 9 - Wednesday.
Reductions and Why Zero Sum only Helps with Two Players (AGT 15)
307 views · 1 year ago
Davidson CSC 383: Algorithmic Game Theory, S23. Week 9 - Monday.
Finding (Coarse) Correlated Equilibria with Linear Programming (AGT 14)
1.8K views · 1 year ago
Davidson CSC 383: Algorithmic Game Theory, S23. Week 7 - Wednesday.
Finding Zero Sum Nash Equilibria with Linear Programming (AGT 13)
986 views · 1 year ago
Davidson CSC 383: Algorithmic Game Theory, S23. Week 7 - Monday.
Symmetric Games and Sperner's Lemma (AGT 11)
569 views · 1 year ago
Nash's Theorem: Every Game has an Equilibrium (AGT 10)
1.7K views · 1 year ago
Equilibria with Pre-Commitment: Stackelberg & Coarse Correlated (AGT 09)
596 views · 1 year ago
Nash Refinements: Trembling Hand and Evolutionary Stability (AGT 08)
1.8K views · 1 year ago
Nash Approximation: ε-Equilibria (AGT 07)
703 views · 1 year ago
Predicting Joint Behavior with Correlated Equilibria (AGT 06)
1.7K views · 1 year ago
Predicting Strategies with Mixed Nash Equilibria (AGT 05)
758 views · 1 year ago
Predicting Actions with Dominance and Pure-Nash (AGT 04)
685 views · 1 year ago
Von Neumann-Morgenstern Utility (AGT 02)
2.6K views · 1 year ago
Approximation Algorithms (Algorithms 25)
4.3K views · 2 years ago
Thank you for the clear and concise explanation.
The best explanation ever. Thank you professor
Great video
So perfect
Thank you for this great video. Why don't you use Kakutani's fixed-point theorem, so we don't have to worry about the advantage function being a one-to-one mapping?
geniusssssssssssssssss, my heart is so full after watching this video
I can't remember the last time I commented on a YouTube video, it was much too long ago, but I just had to, because your videos on deep learning are CRIMINALLY underrated. I have yet to find another resource that explains ResNet as intuitively: you break down each concept into layman's terms and take your time explaining it. You have an amazing way of explaining concepts, and I sincerely hope your videos get all the recognition they deserve!
This is so true!
Good day, I am having difficulty understanding how to identify the independent set and the vertex cover in a graph.
@zachrayman7879 is correct that the cell (a, cf) should be (2,2). The reply is also correct: in the subgame after player 1 plays b, player 2 plays e, so (a, cf) is not subgame perfect.
I have been stuck at 6:56: why are the slack finish times 1, 11, 15, and 17? Could you please explain the calculations behind that?
Great explanation, it helped me a lot. Thank you for taking the time to make this video!
Wow amazing tutorial. Thanks ❤
Thank you very much, this is great. I'm watching this again and it's still super helpful. So delta is actually the upstream gradient, right? I just think it would be nice to mention what a computation graph is, but otherwise, super helpful.
Thanks, you are the best ❤
great tutorial!
Bro is underrated asf
very nice 🎉
thank you so so much for this video!
Wow, so clear! That was stellar, thank you!
A true legend
Shouldn't folding have a utility or expected value of zero? Since no money goes into the pot, how can it return -1 in value?
Thanks. Sorry, but the video you mentioned in the last seconds of this one is not yet uploaded (the video about the stability proof).
That was amazing! So clear and concise explanation. Thanks!
Thank you for your good explanation, it helped me a lot on my journey toward a deep understanding of all these mechanisms 😊
AMAZING!!
In this example I think Sj >= Fi, because if Sj were smaller there would be a conflict and the tasks would not be compatible; instead, Sj should come after the finish time of the last accepted request.
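(For reference, a minimal Python sketch of the compatibility test this comment describes, as used in the earliest-finish-time greedy for interval scheduling; the names and sample input are illustrative, not taken from the video.)

```python
def greedy_interval_scheduling(requests):
    """Earliest-finish-time greedy: accept a request only if it starts
    no earlier than the last accepted request finishes (s_j >= f_i)."""
    accepted = []
    last_finish = float("-inf")
    for start, finish in sorted(requests, key=lambda r: r[1]):
        if start >= last_finish:  # compatible: no overlap with last accepted
            accepted.append((start, finish))
            last_finish = finish
    return accepted

# (2, 5) conflicts with (0, 3) because 2 < 3; (4, 7) is compatible since 4 >= 3.
print(greedy_interval_scheduling([(0, 3), (2, 5), (4, 7)]))  # [(0, 3), (4, 7)]
```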
Speaking about the error value and calling it a loss value, using that term out of its original context, makes this confusing for a new learner...
thank you so much Prof
Thank you. That was quite a puzzle to me, and now I see the instructions. Much appreciated.
understood
great
Where can I find his full course?
Thank you
Great teaching, prof! Keep up the great work.
Such a great explanation. Love this!
Your explanation is great.
I am still confused. What are the prerequisites for understanding this video?
I would recommend reading the official paper "Deep Residual Learning for Image Recognition". I found the explanations there pretty clear, plus there are videos on YouTube explaining this paper.
That was amazing. I am surprised by how I understood the whole thing and didn't get bored or confused throughout. Well done.
tfw you click on a video and it's your old college professor lmao
That's awesome! I haven't seen anything by my professors other than what they shared in class, but you never know.
Professor Bryce, you sir are a god. Thank you for this series.
Free books
Thx dude u are awesome!
This is godsent, thank you
Thanks for the nice explanation, but I have one query: at 16:00, where you said "each output neuron gets input from every neuron across the depth of the previous layer", doesn't that make every neuron along the output depth the same?
Thank you Professor! This introduction is really helpful and detailed!
you saved my life
Professor Bryce, you are a splendid teacher. I enjoy watching all your course's videos, especially the explanation depth - and at the same time pinpointed simplicity - are amazing. Thank you! Have a nice day!
Isn't this similar to RNNs, where a subset of the data is used for each epoch, and in a residual network a block of layers is injected with fresh signal, much like boosting?