How to Train your Bee
Assignment 2 Help Guide
CALCULATE AN OPTIMAL POLICY FOR
© Dreamworks, "Bee Movie"

Recap: Bellman Equation
• Bellman Equation is used to calculate the optimal value of a state
• Equation looks complicated, but it's just the highest expected reward from the best action:
  V(s) = max_a Σ_s' T(s, a, s') [ R(s, a, s') + γ V(s') ]
• T(s, a, s') is the probability of entering the next state s' given we perform action a in state s
• R(s, a, s') is the reward received for performing action a in state s and entering the next state s'
• γ is the discount factor (makes immediate rewards worth more than future rewards)
• V(s') is the value of the next state s'
• The optimal policy, π*, is the best available action that can be performed in each state
• The value of a state is given by the highest expected reward when following the optimal policy (see the sketch below)
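• As a rough sketch only (not the required interface), a single Bellman backup for one state could look like the following in Python, assuming a get_transition_outcomes(state, action) helper (discussed later in this guide) that returns (next_state, probability, reward) tuples, hashable states, and a values dictionary of current estimates:

  # Sketch of one Bellman backup (illustrative names only).
  def bellman_backup(state, actions, values, gamma):
      best_value, best_action = float('-inf'), None
      for action in actions:
          expected = 0.0
          for next_state, prob, reward in get_transition_outcomes(state, action):
              expected += prob * (reward + gamma * values[next_state])
          if expected > best_value:
              best_value, best_action = expected, action
      return best_value, best_action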
Recap: Policy Iteration
1. Arbitrarily assign a policy to each state (e.g.
action to be performed in every state is LEFT)
2. Until convergence
• Policy Evaluation: Determine the value of every state based
on the current policy
• Policy Improvement: Determine the best action to be
performed in every state based on the values of the current
policy, then update the policy based on the new best
action
• Convergence occurs if the policy between two
iterations does not change
• Tutorial 7 involves implementing Policy Iteration in
the simple Gridworld environment
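• A sketch of the loop above (illustrative names only), assuming states and actions are lists, policy_evaluation(policy, gamma) is a hypothetical helper returning a dict of state values under that policy, and get_transition_outcomes is as described later in this guide:

  def policy_iteration(states, actions, gamma):
      policy = {s: actions[0] for s in states}        # 1. arbitrary initial policy
      while True:
          values = policy_evaluation(policy, gamma)   # 2a. evaluate the current policy
          changed = False
          for s in states:                            # 2b. greedy policy improvement
              best_a = max(actions, key=lambda a: sum(
                  prob * (reward + gamma * values[ns])
                  for ns, prob, reward in get_transition_outcomes(s, a)))
              if best_a != policy[s]:
                  policy[s], changed = best_a, True
          if not changed:                             # converged: policy did not change
              return policy, values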
Computing State Space
• Both Value Iteration and Policy Iteration require us to loop through every state
• Value Iteration: to determine the current best policy and the value of a state
• Policy Iteration: to determine the new best policy based on the current value of a state
• We need a way to compute every state so we can loop through them all
• One way is to get every combination of states possible
• In BeeBot, for a given level, each state is given by the position and orientation of the bee, the position and
orientation of the widgets
• We can determine the list of all states by computing every combination of bee position and orientation, widget
position, and widget orientation
• However, this might include some invalid combinations (e.g., the widget or bee is inside a wall)
• Is there a better way we can search for possible states?
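• One common alternative (a sketch under assumptions, not the only acceptable approach) is to search outwards from the initial state so that only reachable, valid states are collected; this assumes hashable states and hypothetical helpers get_valid_actions(state) and get_transition_outcomes(state, action):

  from collections import deque

  def reachable_states(initial_state):
      seen = {initial_state}
      frontier = deque([initial_state])
      while frontier:
          state = frontier.popleft()
          for action in get_valid_actions(state):
              for next_state, prob, reward in get_transition_outcomes(state, action):
                  if next_state not in seen:      # only keep states we can actually reach
                      seen.add(next_state)
                      frontier.append(next_state)
      return seen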
Transition Outcomes
• The probabilistic aspect of this assignment means that by performing certain actions in
certain states, there might be multiple next states that can be reached, each with a different
probability and reward
• The assignment involves creating a get_transition_outcomes(state, action)
function
• Takes a (state, action) pair and for every possible next state returns the probability of ending up in that state
and the reward
• Should return a list or other data structure with all the next states, probabilities, and rewards
• This will be useful when utilising the Bellman Equation to calculate the value of a state
• When creating the transition function, there are a few things to consider:
• What are the random aspects of the BeeBot environment?
• What are the possible next states to consider from a given state?
• What are edge cases that need to be considered (e.g. moving near a wall or thorn)?
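• As a quick usage sketch (hypothetical helper names), the outcomes returned for any valid (state, action) pair should cover every possibility, so their probabilities should sum to 1; a check like the following can catch mistakes early:

  def check_transition_outcomes(states):
      for state in states:
          for action in get_valid_actions(state):    # hypothetical helper for valid actions
              outcomes = get_transition_outcomes(state, action)
              total_prob = sum(prob for _, prob, _ in outcomes)
              # probabilities over all next states should sum to 1 for a valid pair
              assert abs(total_prob - 1.0) < 1e-9, (state, action, total_prob)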
Transition Outcomes
• The transition function will usually assume a given action is valid, so we should only feed it
actions that are valid for the given state to avoid any odd behaviour
• We can cache if actions are valid to improve runtime
• perform_action(state, action)
• Might help you understand how to determine the possible next states for certain states and actions
• However, note that it only returns one possible state for a given action
• We can cache the results of the transition function to improve runtime
• Tutorial 6 involves creating a transition function for the simple Gridworld environment
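• A minimal caching sketch, assuming states and actions are hashable (functools.lru_cache would achieve a similar effect):

  _transition_cache = {}

  def cached_transition_outcomes(state, action):
      key = (state, action)
      if key not in _transition_cache:                # compute each pair at most once
          _transition_cache[key] = get_transition_outcomes(state, action)
      return _transition_cache[key]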
Terminal States
• We need to create a way to handle terminal states when calculating the values and optimal
policies of states, otherwise the agent might think it can leave the terminal states
• There are two ways we can model the terminal states to do this
• Terminal states
• Set the value of a terminal state to 0
• Skip over it without updating its value if it is encountered in a loop
• Absorbing states
• Create a new state outside of the regular state space to send the agent to once it reaches a terminal state
• If the player performs any action in the absorbing state, it remains in the absorbing state
• The reward is always 0 for the absorbing state, no matter the action performed
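• A sketch of the first option (set the value to zero and skip), assuming a hypothetical is_terminal(state) test and the bellman_backup helper from the earlier sketch:

  def value_sweep(states, actions, values, gamma):
      for state in states:
          if is_terminal(state):
              values[state] = 0.0    # no further reward can be collected from here
              continue               # never apply the Bellman update to a terminal state
          values[state], _ = bellman_backup(state, actions, values, gamma)
      return values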
Reward Function
• Rewards are not dependent on the resulting state, but on the state the agent is in and the
action it performs
• Reward functions considered up until this point have been R(s), solely based on the state
the agent is in
• For BeeBot, the expected reward function is R(s, a) – actions also give rewards that need to
be considered, as well as any possible penalties
• We can use get_transition_outcomes(state, action) to get the rewards:
• We can start by initialising a matrix of all zeroes size |S| x |A|
• Then, loop over each (state, action) pair and initialise the total expected reward to 0
• Loop over the outcomes from get_transition_outcomes(state, action) and add the (probability x
reward) to compute the expected reward over all outcomes
• i.e. R(s, a) = Σ_s' T(s, a, s') ⋅ R(s, a, s')
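• A sketch of that computation, assuming states and actions are lists, get_transition_outcomes returns (next_state, probability, reward) tuples, and indexing by list position is an illustrative choice rather than a requirement:

  import numpy as np

  def build_reward_matrix(states, actions):
      R = np.zeros((len(states), len(actions)))   # |S| x |A| matrix of zeroes
      for i, s in enumerate(states):
          for j, a in enumerate(actions):
              expected = 0.0
              for next_state, prob, reward in get_transition_outcomes(s, a):
                  expected += prob * reward        # accumulate probability-weighted reward
              R[i, j] = expected
      return R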
Value Iteration: Updating State Values
• How we choose to update states can affect the performance of our value iteration
• Batch updates use the value of the next state from the previous iteration to update the
value of the current state in the current iteration
• In-place updates use the value of the next state from the current iteration, if it has already
been calculated, to update the value of the current state in the current iteration
• If the next state has not yet been calculated in the current iteration, the value from the previous iteration is used
• In-place updates typically converge in fewer iterations
• The order in which states are updated also matters for in-place updates (e.g. starting
near the goal and working backwards may enable faster convergence); both styles are sketched below
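• A sketch of the two styles, reusing the bellman_backup helper from the earlier sketch and a dict of current state values:

  def batch_sweep(states, actions, values, gamma):
      new_values = {}
      for s in states:
          # every backup reads only the previous iteration's values
          new_values[s], _ = bellman_backup(s, actions, values, gamma)
      return new_values

  def in_place_sweep(states, actions, values, gamma):
      for s in states:
          # backups later in the sweep already see values updated this iteration
          values[s], _ = bellman_backup(s, actions, values, gamma)
      return values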
Policy Iteration: Linear Algebra
• The numpy library is allowed for this assignment, as well as built-in Python libraries
• We can use linear algebra to compute the value of states for the policy evaluation step of
Policy Iteration
• Policy evaluation solves vπ = rπ + γ Pπ vπ, i.e. vπ = (I − γ Pπ)^(-1) rπ, where:
• I is the identity matrix of size |S| x |S|
• Pπ is a matrix containing the transition probabilities based on the current policy
• rπ is a vector containing the rewards for every state based on the current policy
• numpy can be used to perform the linear algebra (see the sketch below):
• vπ = np.linalg.solve(I − γPπ, rπ)
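• A sketch of that evaluation step, assuming states and actions are lists, policy maps each state to its current action, every next state returned by get_transition_outcomes appears in the state list, and terminal/absorbing states are handled before this point:

  import numpy as np

  def policy_evaluation_lin_alg(states, policy, gamma):
      index = {s: i for i, s in enumerate(states)}
      n = len(states)
      P = np.zeros((n, n))   # P[i, j]: probability of reaching state j from state i under the policy
      r = np.zeros(n)        # r[i]: expected immediate reward in state i under the policy
      for s in states:
          i = index[s]
          for next_state, prob, reward in get_transition_outcomes(s, policy[s]):
              P[i, index[next_state]] += prob
              r[i] += prob * reward
      v = np.linalg.solve(np.identity(n) - gamma * P, r)
      return {s: v[index[s]] for s in states}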
Improving Runtime
• We need to compute the state space to calculate the value of every state
• Calculating the value of every state can be time consuming, especially for large levels
• We can remove unreachable states from the state space and only consider states the agent can reach
• This can improve runtime as we are reducing the number of states to calculate the value of
• Remember to use caching where possible
• If you are repeatedly computing something, caching can drastically improve runtime
• Remember to limit use of inefficient data structures
• If you're checking whether an element is in a list (e.g. if next state is in the states list), either modify your code
to remove the need for this, or use a data structure with more efficient lookup (e.g. a set or dictionary)
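• For example (a sketch, assuming the list of reachable states has already been computed and states are hashable):

  state_set = set(states)            # build once from the list of reachable states

  def is_known_state(state):
      return state in state_set      # average O(1) lookup, versus O(|S|) for a list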
"Flying is exhausting. Why don't you humans just run everywhere, isn't that faster?"
- Barry B. Benson
You can do it!
© Dreamworks, "Bee Movie"