Assignment 3
(Last Update: 10 Oct)
Introduction
Download the code for this assignment and then unzip the archive.
As in assignments 1 and 2, this assignment uses Python 3. Do not use Python 2. You can
work on the assignment using your favorite Python editor; we recommend VSCode.
Post any questions or issues with this assignment to our discussion forum. Alternatively, you
may also contact your TA directly.
This assignment uses auto-grading for Problem 1, Problem 2, and Problem 3. For this
assignment's first question, we rely on the random number generator producing the same
random sequence everywhere, just as in Assignment 2. This time, we will be using the
random.choices() function as follows.
import random
from collections import Counter
seed = 2
if seed != -1:
    random.seed(seed, version=1)
n = 0.1  # noise
a = 'N'  # intended action
# For each intended action: [intended, first perpendicular, second perpendicular]
d = {'N':['N', 'E', 'W'], 'E':['E', 'S', 'N'], 'S':['S', 'W', 'E'], 'W':['W', 'N', 'S']}
l = []
for _ in range(100000):
    l += random.choices(population=d[a], weights=[1 - n*2, n, n])[0]  # append the sampled action
print(Counter(l).keys())    # the distinct actions that were sampled
print(Counter(l).values())  # how often each action occurred
print(l[:5])
This will give the following output.
dict_keys(['W', 'N', 'E'])
dict_values([10025, 80059, 9916])
['W', 'W', 'N', 'N', 'E']
Note that we obtain ~80% intended actions here and ~10% for each of the two unintended
actions. Make sure that you understand the output and that you can reproduce it on your
machine before proceeding. Note that we used Anaconda Python 3.9 to obtain the above result.
Problem 1: An MDP Episode
In this part of the assignment, we will play an episode in an MDP by following a given
policy. Consider the first test case of problem 1 (available in the file
test_cases/p1/1.prob).
seed: 2
noise: 0.1
livingReward: -0.05
grid:
_ _ _ 1
_ # _ -1
S _ _ _
policy:
E E E exit
N # N exit
N W N W
The first part of this file specifies an MDP. S is the start state with four available actions (N,
E, S, W); _ is an ordinary state with the same four available actions; and 1 and -1 are states
where the only available action is exit, with reward 1 and -1 respectively. The reward for
taking an action in any other state is the living reward, -0.05. # is a wall.
Actions are not deterministic in this environment. With noise = 0.1, the intended action
succeeds 80% of the time; the remaining 20% of the time the agent moves perpendicular to the
intended direction, with equal probability (10%) for each of the two perpendicular directions.
If the agent attempts to move into a wall, it stays in the same position. Note that this MDP is
identical to the example we covered extensively in class.
The second part of this file specifies the policy to be executed.
As usual, your first task is to implement the parsing of this grid MDP in the function
read_grid_mdp_problem_p1(file_path) of the file parse.py. You may use any
appropriate data structure.
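For reference, a minimal parsing sketch is shown below; the dict-based layout and key names
are just one possible choice (an assumption, not part of the starter code), and any data
structure that captures the same information is fine.

def read_grid_mdp_problem_p1(file_path):
    # A sketch only: store the scalar settings plus the grid and policy
    # as lists of row lists (one token per cell).
    problem = {'grid': [], 'policy': []}
    target = None
    with open(file_path) as f:
        for line in f:
            line = line.strip()
            if line.startswith('seed:'):
                problem['seed'] = int(line.split(':')[1])
            elif line.startswith('noise:'):
                problem['noise'] = float(line.split(':')[1])
            elif line.startswith('livingReward:'):
                problem['livingReward'] = float(line.split(':')[1])
            elif line == 'grid:':
                target = 'grid'
            elif line == 'policy:':
                target = 'policy'
            elif line and target is not None:
                problem[target].append(line.split())
    return problem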
Next, you should implement running the episode in the function play_episode(problem)
in the file p1.py.
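The key step inside play_episode is sampling the actual action from the intended one, using
the same random.choices() call as in the introduction. A minimal sketch of that step, assuming
actions are the single-character strings 'N', 'E', 'S', 'W' plus 'exit' (the helper name
noisy_action is hypothetical):

import random

# Perpendicular alternatives for each intended action, as in the introduction.
PERPENDICULAR = {'N': ['E', 'W'], 'E': ['S', 'N'], 'S': ['W', 'E'], 'W': ['N', 'S']}

def noisy_action(intended, noise):
    # 'exit' is deterministic; a movement action succeeds with probability
    # 1 - 2*noise and goes perpendicular with probability noise each.
    if intended == 'exit':
        return 'exit'
    population = [intended] + PERPENDICULAR[intended]
    weights = [1 - 2 * noise, noise, noise]
    return random.choices(population=population, weights=weights)[0]

Remember to call random.seed(seed, version=1) once before playing the episode so that the
sampled action sequence matches the reference output.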
Below is the expected output. Note that we always use exactly 5 characters for each cell of a
printed grid and that the last line does not contain a newline.
Start state:
_ _ _ 1
_ # _ -1
P _ _ _
Cumulative reward sum: 0.0
--------------------------------------------
Taking action: W (intended: N)
Reward received: -0.05
New state:
_ _ _ 1
_ # _ -1
P _ _ _
Cumulative reward sum: -0.05
--------------------------------------------
Taking action: W (intended: N)
Reward received: -0.05
New state:
_ _ _ 1
_ # _ -1
P _ _ _
Cumulative reward sum: -0.1
--------------------------------------------
Taking action: N (intended: N)
Reward received: -0.05
New state:
_ _ _ 1
P # _ -1
S _ _ _
Cumulative reward sum: -0.15
--------------------------------------------
Taking action: N (intended: N)
Reward received: -0.05
New state:
P _ _ 1
_ # _ -1
S _ _ _
Cumulative reward sum: -0.2
--------------------------------------------
Taking action: S (intended: E)
Reward received: -0.05
New state:
_ _ _ 1
P # _ -1
S _ _ _
Cumulative reward sum: -0.25
--------------------------------------------
Taking action: N (intended: N)
Reward received: -0.05
New state:
P _ _ 1
_ # _ -1
S _ _ _
Cumulative reward sum: -0.3
--------------------------------------------
Taking action: E (intended: E)
Reward received: -0.05
New state:
_ P _ 1
_ # _ -1
S _ _ _
Cumulative reward sum: -0.35
--------------------------------------------
Taking action: E (intended: E)
Reward received: -0.05
New state:
_ _ P 1
_ # _ -1
S _ _ _
Cumulative reward sum: -0.4
--------------------------------------------
Taking action: E (intended: E)
Reward received: -0.05
New state:
_ _ _ P
_ # _ -1
S _ _ _
Cumulative reward sum: -0.45
--------------------------------------------
Taking action: exit (intended: exit)
Reward received: 1.0
New state:
_ _ _ 1
_ # _ -1
S _ _ _
Cumulative reward sum: 0.55
As you can see, in this question we don’t use any discount factor. We will introduce that in
the next question. You can also try some of the other test cases such as
test_cases/p1/8.prob.
seed: 42
noise: 0.2
livingReward: -1
grid:
# 10 #
-100 _ -100
-100 _ -100
-100 _ -100
-100 _ -100
-100 S -100
# 1 #
policy:
# exit #
exit N exit
exit N exit
exit N exit
exit N exit
exit N exit
# exit #
With correct implementation, you should be able to pass all test cases.
(base) scdirk@Dirks-Air a3 % python p1.py -8
Grading Problem 1 :
----------> Test case 1 PASSED <----------
----------> Test case 2 PASSED <----------
----------> Test case 3 PASSED <----------
----------> Test case 4 PASSED <----------
----------> Test case 5 PASSED <----------
----------> Test case 6 PASSED <----------
----------> Test case 7 PASSED <----------
----------> Test case 8 PASSED <----------
Problem 2: Policy Evaluation
In Problem 2 you will implement policy evaluation.
This time we have discounting, and we also introduce a new variable for the number of
iterations. Here is the first test case.
discount: 0.9
noise: 0.1
livingReward: 0
iterations: 10
grid:
-10 100 -10
-10 _ -10
-10 _ -10
-10 S -10
policy:
exit exit exit
exit N exit
exit N exit
exit N exit
Note that no random sampling is involved this time (the noisy transitions enter only through
their probabilities) and that we use discounting.
As usual, your first task is to implement the parsing of this grid MDP in the function
read_grid_mdp_problem_p2(file_path) of the file parse.py. You may use any
appropriate data structure.
Next, you implement value iteration for policy evaluation as discussed in class. Your
policy_evaluation(problem) function in p2.py should return the evolution of values
as follows.
V^pi_k=0
| 0.00|| 0.00|| 0.00|
| 0.00|| 0.00|| 0.00|
| 0.00|| 0.00|| 0.00|
| 0.00|| 0.00|| 0.00|
V^pi_k=1
| -10.00|| 100.00|| -10.00|
| -10.00|| 0.00|| -10.00|
| -10.00|| 0.00|| -10.00|
| -10.00|| 0.00|| -10.00|
V^pi_k=2
| -10.00|| 100.00|| -10.00|
| -10.00|| 70.20|| -10.00|
| -10.00|| -1.80|| -10.00|
| -10.00|| -1.80|| -10.00|
V^pi_k=3
| -10.00|| 100.00|| -10.00|
| -10.00|| 70.20|| -10.00|
| -10.00|| 48.74|| -10.00|
| -10.00|| -3.10|| -10.00|
V^pi_k=4
| -10.00|| 100.00|| -10.00|
| -10.00|| 70.20|| -10.00|
| -10.00|| 48.74|| -10.00|
| -10.00|| 33.30|| -10.00|
V^pi_k=5
| -10.00|| 100.00|| -10.00|
| -10.00|| 70.20|| -10.00|
| -10.00|| 48.74|| -10.00|
| -10.00|| 33.30|| -10.00|
V^pi_k=6
| -10.00|| 100.00|| -10.00|
| -10.00|| 70.20|| -10.00|
| -10.00|| 48.74|| -10.00|
| -10.00|| 33.30|| -10.00|
V^pi_k=7
| -10.00|| 100.00|| -10.00|
| -10.00|| 70.20|| -10.00|
| -10.00|| 48.74|| -10.00|
| -10.00|| 33.30|| -10.00|
V^pi_k=8
| -10.00|| 100.00|| -10.00|
| -10.00|| 70.20|| -10.00|
| -10.00|| 48.74|| -10.00|
| -10.00|| 33.30|| -10.00|
V^pi_k=9
| -10.00|| 100.00|| -10.00|
| -10.00|| 70.20|| -10.00|
| -10.00|| 48.74|| -10.00|
| -10.00|| 33.30|| -10.00|
This example should look familiar. We have covered it in chapter 2 of our lecture slides.
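The core update is the fixed-policy Bellman backup
V^pi_{k+1}(s) = R(s, pi(s)) + discount * sum over s' of T(s, pi(s), s') * V^pi_k(s'),
where exit actions collect the terminal reward and end the episode. A minimal sketch of one
synchronous sweep, assuming hypothetical helpers transitions(s, a) (yielding
(probability, next_state) pairs built from the noise model) and reward(s, a):

def policy_evaluation_sweep(V, policy, states, transitions, reward, discount):
    # One synchronous backup of all states under the fixed policy.
    V_new = {}
    for s in states:
        a = policy[s]
        if a == 'exit':
            # Exiting collects the terminal reward; the episode ends afterwards.
            V_new[s] = reward(s, a)
        else:
            expected = sum(p * V[s2] for p, s2 in transitions(s, a))
            V_new[s] = reward(s, a) + discount * expected
    return V_new

Starting from all-zero values (V^pi_k=0) and printing V after each sweep reproduces the
evolution shown above.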
Hint: The output of an individual floating point value v was done as follows
return_value += '|{:7.2f}|'.format(v)
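For example, formatting one row of the k=1 values with this pattern gives exactly the row
shown above:

row = ''
for v in [-10.0, 100.0, -10.0]:
    row += '|{:7.2f}|'.format(v)
print(row)  # prints: | -10.00|| 100.00|| -10.00|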
Finally, check the correctness of your implementation via
(base) scdirk@Dirks-Air a3 % python p2.py -7
Grading Problem 2 :
----------> Test case 1 PASSED <----------
----------> Test case 2 PASSED <----------
----------> Test case 3 PASSED <----------
----------> Test case 4 PASSED <----------
----------> Test case 5 PASSED <----------
----------> Test case 6 PASSED <----------
----------> Test case 7 PASSED <----------
Problem 3: Value Iteration
Now it is time to implement value iteration.
This time, the provided problems do not include policies:
discount: 1
noise: 0.1
livingReward: -0.1
iterations: 10
grid:
_ _ _ 1
_ # _ -1
S _ _ _
As usual, load the problem definition in the function
read_grid_mdp_problem_p3(file_path) of the file parse.py.
Next, implement value_iteration(problem) in p3.py such that it returns the following string for
the first test case. Note that this is still a non-deterministic environment, as before, with the
same stochastic motion.
V_k=0
| 0.00|| 0.00|| 0.00|| 0.00|
| 0.00|| ##### || 0.00|| 0.00|
| 0.00|| 0.00|| 0.00|| 0.00|
V_k=1
| -0.10|| -0.10|| -0.10|| 1.00|
| -0.10|| ##### || -0.10|| -1.00|
| -0.10|| -0.10|| -0.10|| -0.10|
pi_k=1
| N || N || N || x |
| N || # || N || x |
| N || N || N || N |
V_k=2
| -0.20|| -0.20|| 0.68|| 1.00|
| -0.20|| ##### || -0.20|| -1.00|
| -0.20|| -0.20|| -0.20|| -0.20|
pi_k=2
| N || N || E || x |
| N || # || W || x |
| N || N || N || S |
V_k=3
| -0.30|| 0.40|| 0.75|| 1.00|
| -0.30|| ##### || 0.32|| -1.00|
| -0.30|| -0.30|| -0.30|| -0.30|
pi_k=3
| N || E || E || x |
| N || # || N || x |
| N || N || N || S |
V_k=4
| 0.16|| 0.58|| 0.81|| 1.00|
| -0.40|| ##### || 0.43|| -1.00|
| -0.40|| -0.40|| 0.10|| -0.40|
pi_k=4
| E || E || E || x |
| N || # || N || x |
| N || N || N || S |
V_k=5
| 0.34|| 0.66|| 0.82|| 1.00|
| -0.05|| ##### || 0.49|| -1.00|
| -0.50|| -0.10|| 0.16|| -0.16|
pi_k=5
| E || E || E || x |
| N || # || N || x |
| N || E || N || W |
V_k=6
| 0.46|| 0.69|| 0.83|| 1.00|
| 0.16|| ##### || 0.51|| -1.00|
| -0.20|| 0.01|| 0.26|| -0.08|
pi_k=6
| E || E || E || x |
| N || # || N || x |
| N || E || N || W |
V_k=7
| 0.52|| 0.70|| 0.83|| 1.00|
| 0.30|| ##### || 0.52|| -1.00|
| 0.01|| 0.11|| 0.30|| 0.00|
pi_k=7
| E || E || E || x |
| N || # || N || x |
| N || E || N || W |
V_k=8
| 0.54|| 0.71|| 0.83|| 1.00|
| 0.37|| ##### || 0.52|| -1.00|
| 0.15|| 0.16|| 0.32|| 0.04|
pi_k=8
| E || E || E || x |
| N || # || N || x |
| N || E || N || W |
V_k=9
| 0.56|| 0.71|| 0.84|| 1.00|
| 0.41|| ##### || 0.52|| -1.00|
| 0.23|| 0.19|| 0.34|| 0.06|
pi_k=9
| E || E || E || x |
| N || # || N || x |
| N || E || N || W |
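For orientation, one sweep of value iteration differs from policy evaluation only in that it
maximizes over the available actions and records the argmax as the policy pi_k. A minimal
sketch, again with hypothetical helpers actions(s), transitions(s, a), and reward(s, a):

def value_iteration_sweep(V, states, actions, transitions, reward, discount):
    # One synchronous Bellman-optimality backup over all states.
    V_new, pi_new = {}, {}
    for s in states:
        best_q, best_a = None, None
        for a in actions(s):
            if a == 'exit':
                q = reward(s, a)  # terminal reward; the episode ends
            else:
                q = reward(s, a) + discount * sum(p * V[s2] for p, s2 in transitions(s, a))
            if best_q is None or q > best_q:
                best_q, best_a = q, a
        V_new[s], pi_new[s] = best_q, best_a
    return V_new, pi_new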
Once you are done, check that you can pass all test cases as follows.
(base) scdirk@Dirks-Air a3 % python p3.py -4
Grading Problem 3 :
----------> Test case 1 PASSED <----------
----------> Test case 2 PASSED <----------
----------> Test case 3 PASSED <----------
----------> Test case 4 PASSED <----------
Problem 4: Q-Value TD Learning
In the final problem of this assignment you will implement temporal difference learning of Q
values.
Your task is to apply temporal difference learning of Q values to test case 2 of problem 3
and see if you can get an optimal policy. Discount, noise, and living reward should be the
same as specified in the test case. In your solution, you should (a minimal sketch of the core
update follows the list below):
- start from initial Q values = 0
- use epsilon greedy (with decay) or exploration functions to force exploration
- implement an appropriate learning rate decay to reach an optimal policy
- stop your iteration when the solution is found (don’t compare against optimal policy
to decide when to stop)
- run your learning algorithm multiple times (don't set a fixed seed) and output how
often the optimal policy can be found, e.g., 9/10
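As a starting point, the Q-value TD update and an epsilon-greedy action choice might look
like the sketch below. Everything here is an illustrative assumption (the dict-of-(state, action)
representation and the helper names); the decay schedules for epsilon and the learning rate,
the episode loop, and the stopping criterion are left to you.

import random

def q_update(Q, s, a, r, s_next, next_actions, alpha, discount):
    # Temporal difference update: move Q(s, a) toward the one-step sample.
    # Q is assumed to be a dict (e.g. collections.defaultdict(float)) keyed
    # by (state, action) pairs, starting from all zeros.
    sample = r
    if next_actions:  # empty once the episode has terminated
        sample += discount * max(Q[(s_next, a2)] for a2 in next_actions)
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * sample

def epsilon_greedy(Q, s, actions, epsilon):
    # Explore with probability epsilon, otherwise act greedily w.r.t. Q.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])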
Note that one of the optimal policies for this example (the same grid as test case 2 of
problem 3) can be found on slide 15 of chapter 2.
Note that there could be multiple optimal policies for this particular example, or for the
examples provided in the test cases. Write your findings with a short analysis as comments at
the beginning of the file p4.py.
There are no auto-graders and no test cases provided for this question. You should provide all
parameters and the problem definition itself in the file p4.py. Also, tell us how to run your
code.
To submit your assignment to Moodle, *.zip the following files ONLY:
- p1.py
- p2.py
- p3.py
- p4.py
- parse.py
Do not zip any other files. Use the *.zip file format. Make sure that you have submitted the
correct files.