From 49b372f9671604c5e3150d77f0d4576e4787f39d Mon Sep 17 00:00:00 2001
From: Bart Moyaers
Date: Thu, 30 Jan 2020 16:21:32 +0100
Subject: [PATCH] add first exercises

---
 exercises.md | 43 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)
 create mode 100644 exercises.md

diff --git a/exercises.md b/exercises.md
new file mode 100644
index 0000000..33aa75f
--- /dev/null
+++ b/exercises.md
@@ -0,0 +1,43 @@

# Exercise 1.1: Self-Play
*Suppose, instead of playing against a random opponent, the reinforcement learning algorithm described above played against itself, with both sides learning. What do you think would happen in this case? Would it learn a different policy for selecting moves?*

It would most certainly learn a different policy for selecting moves, since its opponent is no longer the same. It would slowly learn the values of game states $V(S_t)$, and in doing so it would also face an ever tougher opponent (itself). Since it learns slowly, I guess the update rule
$$ V(S_t) \leftarrow V(S_t) + \alpha \left[ V(S_{t+1}) - V(S_t) \right] $$
will accommodate the changing opponent, and it would slowly but surely become really good at tic-tac-toe.

# Exercise 1.2: Symmetries
*Many tic-tac-toe positions appear different but are really the same because of symmetries. How might we amend the learning process described above to take advantage of this? In what ways would this change improve the learning process? Now think again. Suppose the opponent did not take advantage of symmetries. In that case, should we? Is it true, then, that symmetrically equivalent positions should necessarily have the same value?*

By taking the symmetry of the game into account, we can greatly reduce the set of game states $S$. Because the set is smaller, the learning algorithm would learn the values $v_*(S_t)$ faster. (There are fewer choices to make, and the same states receive more value updates.)

However, if the opponent is not a perfect player and does not take symmetries into account itself, some game states with a high true value may receive a lower value estimate because they are symmetric to low-value states. Suppose we have a symmetric state $S_s$ that actually comprises four different rotated game states,
$$ S_s = \{S_1, S_2, S_3, S_4\}, $$
with values
$$ v_*(S_1) = 0.5,\quad v_*(S_2) = 0,\quad v_*(S_3) = 0,\quad v_*(S_4) = 0. $$

With symmetries, the estimated value $V(S_s)$ would converge towards
$$ \frac{\sum_{i=1}^{4} v_*(S_i)}{|S_s|} = \frac{0.5}{4} = 0.125. $$
Instead of focusing on state $S_1$ with its high value, the algorithm might prefer other states with values higher than $0.125$.

When playing against imperfect players it is therefore not necessarily advantageous to take symmetries into account.

# Exercise 1.3: Greedy Play
*Suppose the reinforcement learning player was greedy, that is, it always played the move that brought it to the position that it rated the best. Might it learn to play better, or worse, than a nongreedy player? What problems might occur?*

A greedy player would never search the complete state space, so its value estimates $V(S_i)$ could stay far from the true values. It would always pick the successor state with the highest current estimate and never explore states whose estimates happen to start out low.

A greedy player would almost certainly be worse than a nongreedy player that does a *small* amount of exploration.
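To make the ideas from Exercises 1.1 and 1.3 concrete, here is a minimal Python sketch of a tabular value-learning player. It is my own illustration, not code from the book: the board representation and the names (`ValuePlayer`, `select_move`, `td_update`) are assumptions. `select_move` is greedy with probability $1 - \varepsilon$ and exploratory otherwise, so setting `epsilon = 0` gives the purely greedy player discussed above, while `td_update` applies the update rule from Exercise 1.1.

```python
import random
from collections import defaultdict

class ValuePlayer:
    """Sketch of a tabular value learner (hypothetical, for illustration only).

    A state is assumed to be any hashable board representation, e.g. a tuple
    of nine cells containing 'X', 'O' or ' '.
    """

    def __init__(self, alpha=0.1, epsilon=0.1):
        self.alpha = alpha                 # step-size parameter
        self.epsilon = epsilon             # exploration rate; 0 = greedy player (Exercise 1.3)
        self.V = defaultdict(lambda: 0.5)  # value estimates V(s), neutral initialisation

    def select_move(self, successor_states):
        """Pick the next state: greedy with probability 1 - epsilon, random otherwise."""
        if random.random() < self.epsilon:
            return random.choice(successor_states)             # exploratory move
        return max(successor_states, key=lambda s: self.V[s])  # greedy move

    def td_update(self, state, next_state):
        """V(S_t) <- V(S_t) + alpha * [V(S_{t+1}) - V(S_t)]."""
        self.V[state] += self.alpha * (self.V[next_state] - self.V[state])
```

In the self-play setting of Exercise 1.1, two such players (or one player used for both sides) would keep updating against an opponent whose value table is itself improving.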
# Exercise 1.4: Learning from Exploration
*Suppose learning updates occurred after all moves, including exploratory moves. If the step-size parameter is appropriately reduced over time (but not the tendency to explore), then the state values would converge to a different set of probabilities. What (conceptually) are the two sets of probabilities computed when we do, and when we do not, learn from exploratory moves? Assuming that we do continue to make exploratory moves, which set of probabilities might be better to learn? Which would result in more wins?*

If we learn from exploratory moves, the estimated values of above-average states would be lower; how much lower depends on the step-size parameter $\alpha$, the tendency to explore, and the values of the lower-valued states reached by exploring. Conceptually, learning from exploratory moves yields the state values of the policy that actually includes exploration, whereas not learning from them yields the values under purely greedy play.

# Exercise 2.1
*In $\varepsilon$-greedy action selection, for the case of two actions and $\varepsilon = 0.5$, what is the probability that the greedy action is selected?*

Let $A = \{A_1,\, A_2\}$ be the set of actions, where $V(A_1) > V(A_2)$. The greedy action $A_1$ is chosen outright with probability $1 - \varepsilon$, and with probability $\varepsilon$ an action is picked uniformly at random from $A$, which may again be $A_1$. The probability $P(A_1)$ of selecting the greedy action is therefore
$$ P(A_1) = (1 - \varepsilon) + \frac{\varepsilon}{|A|} = 0.5 + \frac{0.5}{2} = 0.75. $$
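As a quick sanity check of this result, the selection rule can be simulated directly. The snippet below is my own addition (the function name and structure are assumptions, not part of the exercise); it treats action `0` as the greedy action and should print a value close to $0.75$.

```python
import random

def prob_greedy_selected(epsilon=0.5, n_actions=2, trials=100_000):
    """Estimate the probability that epsilon-greedy selects the greedy action.

    With probability 1 - epsilon the greedy action (index 0) is taken outright;
    with probability epsilon an action is drawn uniformly at random, which may
    also turn out to be the greedy one.
    """
    hits = 0
    for _ in range(trials):
        if random.random() < epsilon:
            action = random.randrange(n_actions)  # exploratory: uniform over all actions
        else:
            action = 0                            # exploit: take the greedy action
        hits += (action == 0)
    return hits / trials

print(prob_greedy_selected())  # approx. 0.75 = (1 - 0.5) + 0.5 / 2
```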