...we know that giving people various cues can make up for their poor understanding of the #logic of conditionals.
For instance, adding a scenario to a familiar conditional makes people far more likely to realize how to test it, but only if it's framed as a rule: https://dx.doi.org/10.3791/67794
"As shown in Figure 5, performance was considerably higher when the scenario was present rather than absent when [rule-like or] deontic framing was used, but there was little effect of the scenario with indicative framing."
"The main results have shown a significant main effect of the problem content (F(2, 147) = 16.60; p < 0.0001). Post hoc analyses revealed significant differences between neutral (M = 0.50) and permission content (M = 0.97), p < .0001. There was also a significant difference between permission and obligation rules (M = 0.28, p < .027). Overall, higher logical indices were obtained with permission content, and the lowest were registered with neutral rules. A significant main effect of scenario was also obtained (F(1,148) = 19.24; p < 0.0001). For all three types of content, the logical indices were higher in the condition where a Scenario was provided (M = 0.986), than in the No-Scenario condition (M = 0.464). Finally, a significant interaction between Scenario and Framing was registered: F(1,148) = 7.64; p < 0.006."