I watched almost all of your videos for Causal Inference, they are great!
Just wanted to say thanks a ton for such quality lectures.
It is great to learn this topic from you Brady!
a very nice and clear explanation of bounds and assumptions! thx!
In the OTS2 implication, I think it may happen that the Y(0) > Y(1) individuals have higher Y(1) than the Y(1) >= Y(0) individuals, since the two groups are separate. In that case, I think the OTS2 bound is invalid.
Maybe we also need to assume that Y(0) is independent of T? That plus OTS would give E[Y(1) | T=0].
How did we get the first inequality in the proof on the slide shown at 27:52?
I'm trying to understand this step as well. In the example, we have E[Y | T=1] = 0.9 and E[Y | T=0] = 0.2, and by the OTS assumption, E[Y(1) | T=0] < E[Y | T=0] = 0.2. So I think the inequality in the proof is verified in the example.
Wow, I'm sorry this is very late. But think of it like this: conditional on Y(1) > Y(0), we have E[Y(1) | Y(1) > Y(0)] > E[Y(0) | Y(1) > Y(0)]. But E[Y(1) | Y(1) < Y(0)] might or might not be greater than E[Y(0) | Y(0) > Y(1)]; it wouldn't be clear. On the other hand, E[Y(1) | Y(1) < Y(0)] > E[Y(1) | Y(1) > Y(0)].
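To make the thread's numbers concrete, here is a minimal sketch of the observational-counterfactual decomposition bound on E[Y(1)]. Only E[Y | T=1] = 0.9 and E[Y | T=0] = 0.2 come from the example above; the treatment probability pi and the outcome range [0, 1] are assumptions added for illustration.

```python
# Sketch: bounds on E[Y(1)] via the decomposition
#   E[Y(1)] = pi * E[Y | T=1] + (1 - pi) * E[Y(1) | T=0].
# pi and the outcome range [a, b] are illustrative assumptions.

pi = 0.3          # assumed P(T = 1); not given in the example
ey_t1 = 0.9       # E[Y | T = 1] from the example
ey_t0 = 0.2       # E[Y | T = 0] from the example
a, b = 0.0, 1.0   # assumed bounds on Y

# No-assumptions bounds: E[Y(1) | T=0] could be anywhere in [a, b].
lower_no_assum = pi * ey_t1 + (1 - pi) * a
upper_no_assum = pi * ey_t1 + (1 - pi) * b

# With the OTS-style implication E[Y(1) | T=0] <= E[Y | T=0],
# the upper bound tightens:
upper_ots = pi * ey_t1 + (1 - pi) * ey_t0

print(lower_no_assum, upper_ots, upper_no_assum)
```

The point of the sketch is just that the OTS implication shrinks the upper bound from pi*E[Y|T=1] + (1-pi)*b down to pi*E[Y|T=1] + (1-pi)*E[Y|T=0], without touching the lower bound.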
These assumptions lead to interesting bounds, but are they realistic in some contexts? E.g., when can you really make the OTS assumption?
In the Observational-Counterfactual Decomposition (and the rest of the bounds), do we need to condition on confounders to calculate pi? In other words, are we computing a propensity score, or is this simply a marginal?
In the notes about Linear Sensitivity Analysis you ask us "What assumption is violated when the data are generated by a noiseless process?" Could you clarify this for me? Thanks!
It seems to me that performing the informal benchmark (from Cinelli and Hazlett (2020)) is a fool's errand, given that not all unobserved confounders are created equal.
How do we get the bound for the counterfactual E[Y(1) | T=0] in real-life problems?
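One way to see what happens in practice: E[Y(1) | T=0] is never observed, so you don't estimate it directly; you plug in its worst cases and bound the quantity it enters. A minimal plug-in sketch on synthetic data, assuming the outcome is bounded in [0, 1] (the data-generating numbers here are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observational data (purely illustrative).
n = 10_000
t = rng.binomial(1, 0.4, size=n)                           # treatment indicator
y = np.clip(0.3 + 0.4 * t + rng.normal(0, 0.1, n), 0, 1)   # bounded outcome

# Plug-in ingredients of the decomposition
#   E[Y(1)] = pi * E[Y | T=1] + (1 - pi) * E[Y(1) | T=0].
pi_hat = t.mean()            # marginal P(T = 1)
ey_t1_hat = y[t == 1].mean() # E[Y | T = 1]
a, b = 0.0, 1.0              # assumed outcome bounds

# E[Y(1) | T=0] is unobservable; replace it by its worst cases a and b.
ey1_lower = pi_hat * ey_t1_hat + (1 - pi_hat) * a
ey1_upper = pi_hat * ey_t1_hat + (1 - pi_hat) * b
print(f"E[Y(1)] in [{ey1_lower:.3f}, {ey1_upper:.3f}]")
```

Note the width of the interval is (1 - pi_hat) * (b - a) no matter how much data you have; only extra assumptions (like OTS) tighten it further.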
the gunshot treatment. lmao