Homework2
Release v5.0 16/02/2025
CONTENTS

1 TicTacToe
2 Contents
    2.1 TicTacToe Module
    2.2 Reinforcement Learning Player
    2.3 Requirements (60 points)
    2.4 Submission Instructions
    2.5 Marking
CHAPTER ONE: TICTACTOE
Gameland is a small independent game development studio. Gameland develops 8-bit games for Android and Apple products. They have decided to create a series of games for children ages 6 to 12. Their first game will be tic-tac-toe. However, to make their version of tic-tac-toe different from their competitors', they've chosen to include an AI player as the built-in opponent. You have been hired to design and implement an agent that learns to play tic-tac-toe using Reinforcement Learning.

For this homework assignment, you will be working in pairs. Find a partner and complete the registration quiz on KEATS; make sure both of you complete the Partner Selection quiz. It's best to find someone in your lab to work with rather than someone from another lab session. A group of three can be formed if there's an odd number of students in a lab.
Tic-tac-toe is a game played on a 3 by 3 grid. Two players take turns putting an X or an O in the grid. The first player to form a horizontal, vertical, or diagonal sequence of three marks wins. Here's a sample game where player X wins. You can learn more about tic-tac-toe here.
[Figure: a sample game in which player X wins]
CHAPTER TWO: CONTENTS
2.1 TicTacToe Module
The TicTacToe module, available on KEATS, contains the functions and classes that you will need to complete this homework assignment. Be mindful that there are several classes in the module; always make sure you're editing the right segment of code. You are not allowed to modify any of the method headers or return types. Changing either will result in a failing mark. The TicTacToe module has a text-based version of tic-tac-toe already implemented. The game looks like this:

[Screenshot: the text-based tic-tac-toe game]
Each square on the tic-tac-toe board has a unique value from 0 to 8. For example, the square at the top left is zero.
For example, if player one enters 4 as his or her move, an 'X' is placed in the centre of the board.
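Putting those two examples together (the top left is 0 and the centre is 4), the squares are most likely numbered left to right, top to bottom:

 0 | 1 | 2
---+---+---
 3 | 4 | 5
---+---+---
 6 | 7 | 8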
2.1.1 Two Human Players
You'll also find sample code on KEATS. The twoHumans code demonstrates how to use the module for two human players. Download the sample and get the game running. After you have the two-human-player game working, take a look at the full API. Familiarise yourself with all the functions, classes and methods before you begin implementing the RL player.
import TicTacToe as ttt

def main():

    # Players
    player1 = ttt.createPlayer('X', ttt.HUMAN_AGENT)
    player1.name = 'Alice'

    player2 = ttt.createPlayer('O', ttt.HUMAN_AGENT)
    player2.name = "Bob"

    # Create board
    board = ttt.TicTacToe()
    board.setPlayers(player1, player2)
    board.drawBoard()

    # Run Game
    while not board.isGameOver():
        player = board.next()
        player.makeMove(board)
        board.drawBoard()

    winner = board.getWinner()

    if winner:
        print(f'Congratulations {winner.name}')

main()
2.2 Reinforcement Learning Player
The first step in implementing your reinforcement learning agent is to provide a random number seed; without one you'll have a hard time tuning your parameters. Find the following code at the top of the TicTacToe module:
RANDOM_NUMBER_SEED = 9999
random.seed(RANDOM_NUMBER_SEED)
Replace the 9999 with a unique number, for example, your birthday, 10242006.
In the RLPlayer class you will be implementing three methods: makeMove(board), rewardState(board, prevBoard=None) and getReward(board). Pseudocode is provided for makeMove(board) and rewardState(board, prevBoard=None). Notice that the self keyword is sometimes omitted in the pseudocode; it's your responsibility to add it where necessary.
The next three sections explain the methods you will have to implement in the RLPlayer class. Before entering your
code, make sure you’re in the right class.
2.2.1 Make Move
This method is responsible for making a move for the RL agent. It follows the epsilon-greedy algorithm, which balances the need for exploitation and exploration. With probability (1 - epsilon) the RL agent performs an optimal move; otherwise she randomly selects a move. However, if the agent is not training, she always performs an optimal move.
def makeMove(board):
    if mode is TRAINING_MODE then do the following
        Generate a random number, n, between 0 and 1
        if n is less than epsilon then
            anyMove = random.choice(board.remainingMoves)
            moveLegal = board.makeMove(anyMove, self.letter)
            if not moveLegal:
                print('*** WARNING ILLEGAL MOVE BY RL ***')
        else
            getRLMove(board)
        Call rewardState (make sure to pass board and previousState as arguments)
        Set previousState equal to a copy of the board
    else
        getRLMove(board)
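If it helps to see the pseudocode as Python, here is a minimal sketch of what the body of makeMove might look like. It assumes a module constant TRAINING_MODE, attributes self.mode, self.epsilon and self.previousState, and that the module imports random and copy; those names are assumptions, so adapt them to whatever the RLPlayer class actually provides.

# Sketch of RLPlayer.makeMove (place inside the RLPlayer class); assumes the
# module imports random and copy, and defines TRAINING_MODE, board.remainingMoves,
# getRLMove() and self.previousState as used in the pseudocode.
def makeMove(self, board):
    if self.mode == TRAINING_MODE:
        n = random.random()                            # random number between 0 and 1
        if n < self.epsilon:
            # Explore: pick any remaining square at random
            anyMove = random.choice(board.remainingMoves)
            moveLegal = board.makeMove(anyMove, self.letter)
            if not moveLegal:
                print('*** WARNING ILLEGAL MOVE BY RL ***')
        else:
            # Exploit: play the best move according to the value function
            self.getRLMove(board)
        # Update the value function, then remember the state just reached
        self.rewardState(board, self.previousState)
        self.previousState = copy.deepcopy(board)
    else:
        # Not training: always play the best known move
        self.getRLMove(board)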
2.2.2 Reward State
In class, we discussed the difference between the short- and long-term value of performing an action. In the example we used in class, we utilised the average reward as the long-term value of performing an action. We will take a slightly different approach for our RL playing agent. We are going to use temporal-difference learning, TD(0), which is the simplest variant of Reinforcement Learning. A full explanation of this Reinforcement Learning approach is outside the scope of this module. To learn how to play tic-tac-toe, the agent will predict the value of game states. That is, if the agent knows the value of game states, then it can perform the move that maximises the value of the next game state. For example, if the current state is S_t, the agent should make a move that maximises V(S_{t+1}), where S_{t+1} is the next game state.
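To make that idea concrete, greedy move selection over the value function could look like the sketch below. This is an illustration only; it assumes boards can be deep-copied (import copy) and that valueOfState(key) returns the current estimate for a state. The module's own getRLMove may work differently.

# Illustration of greedy move selection over the learned value function.
# Assumes boards can be deep-copied and valueOfState(key) returns the current
# estimate for a state; the module's own getRLMove may work differently.
def greedyMove(self, board):
    bestMove, bestValue = None, float('-inf')
    for move in board.remainingMoves:
        nextBoard = copy.deepcopy(board)               # simulate the move on a copy
        nextBoard.makeMove(move, self.letter)
        value = self.valueOfState(nextBoard.getKey(self.letter))
        if value > bestValue:
            bestMove, bestValue = move, value
    board.makeMove(bestMove, self.letter)              # play the best move for real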
We’re going to use the following equation to predict the value of states:
V(S_t) = V(S_t) + α[R_{t+1} + γV(S_{t+1}) − V(S_t)]     (2.1)
In Equation (2.1), the game state S_t is the previous game state and S_{t+1} is the next game state after a move has been made. Alpha, α, is the learning rate. The learning rate affects the rate of learning; it determines how much the old state value, V(S_t), will change. Gamma, γ, is the discount rate. The discount rate expresses the value of future rewards to the agent; if it is set to zero, the RL agent cares only about the immediate reward and ignores future rewards. Alpha and gamma should each be set to a number between 0 and 1.
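As a quick illustration of Eq. (2.1), here is a single update with made-up numbers (these are not suggested hyperparameter values):

# One TD(0) update with illustrative, made-up numbers.
alpha = 0.5        # learning rate
gamma = 0.9        # discount rate
v_prev = 0.0       # V(S_t), current estimate for the previous state
v_next = 1.0       # V(S_{t+1}), estimate for the state reached after the move
reward = 10        # R_{t+1}, immediate reward

v_prev = v_prev + alpha * (reward + gamma * v_next - v_prev)
print(v_prev)      # 5.45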
The RLPlayer class has an attribute, valueFunction, a dictionary, to implement the function V(S_t). The dictionary serves as a lookup table for the values of states. The dictionary keys are game states and the values are determined by Eq. (2.1). Use the method board.getKey(letter) (in the TicTacToe class) to convert the game state to a dictionary key. For example, if you have a board and would like to get the value of that board (or game state), you would execute the following statements:
1 key = board.getKey(self.letter)
2 value = self.valueOfState(key)
Assuming the board is S_t, line 2 of this code segment would give you V(S_t). Here's the pseudocode for rewardState(board, prevBoard=None):
def rewardState(board, prevBoard=None):
    if prevBoard is None then
        Get the reward for board, call getReward
        Get the key for the board (board.getKey(self.letter))
        Get the value associated with the key, call valueOfState(key)
        Update the dictionary, valueFunction, using key to: value + self.learningRate * reward
    else:
        Get the reward for board, call getReward
        Get the key for the current board (board.getKey(self.letter))
        Get the value associated with the current board key, call valueOfState(key)
        Get the key for the previous board (prevBoard.getKey(self.letter))
        Get the value associated with the previous board key, call valueOfState(previous key)
        Update the dictionary, valueFunction, using the previous board key to:
            valuePrev + self.learningRate * (reward + (self.discountRate * valueCurrent) - valuePrev)
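Again, purely as a sketch of how that pseudocode might look in Python (placed inside the RLPlayer class, with the same assumptions about valueOfState as above):

# Sketch of RLPlayer.rewardState following the pseudocode above.
def rewardState(self, board, prevBoard=None):
    reward = self.getReward(board)
    key = board.getKey(self.letter)
    value = self.valueOfState(key)

    if prevBoard is None:
        # No previous state yet: nudge the current state's value by the reward
        self.valueFunction[key] = value + self.learningRate * reward
    else:
        # TD(0) update (Eq. 2.1): move the previous state's value towards
        # reward + discounted value of the current state
        prevKey = prevBoard.getKey(self.letter)
        prevValue = self.valueOfState(prevKey)
        self.valueFunction[prevKey] = prevValue + self.learningRate * (
            reward + self.discountRate * value - prevValue)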
2.2.3 Get Reward
The reward is the immediate feedback that an agent receives about its action(s). There are no exact rules about how to formulate rewards in Reinforcement Learning. However, the reward will affect learning; a poorly designed reward scheme can slow the learning process or prevent the agent from learning altogether. For the getReward(board) method, you will have to design and implement your own reward scheme. For example, you may have a simple scheme such as: if the player wins, the reward is 10; otherwise the reward is -10. The player can also get a positive reward for drawing the game. Regardless of how your scheme works, the method must return an integer value.
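As one possible sketch of such a scheme (the values are arbitrary, and comparing the winner to self assumes board.getWinner() returns a player object, as the twoHumans example suggests):

# One possible, deliberately simple reward scheme; the values are arbitrary.
def getReward(self, board):
    winner = board.getWinner()
    if winner is self:
        return 10          # the RL agent won
    elif winner is not None:
        return -10         # the opponent won
    elif board.isGameOver():
        return 5           # the game ended in a draw
    else:
        return 0           # game still in progress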
2.2.4 Training
Congratulations, you’re almost done with your RL Agent. Now you must train it. The RL agent will learn to play by
playing hundreds of games against a random player. Download and run the training.py program.
Once you've fixed all your syntax errors, you have one final step: fine-tune your hyperparameters. The learning rate, discount rate and epsilon are set to 0.1, 0.2 and 0.3 (line 13). These are not good values. It's up to you to tweak these values until your RL agent is performing well. Also, the number of episodes, i.e. the number of games the RL agent gets to play, is only set to 500 (line 14). You can increase that number up to 50,000. It's important that you do not share the values you use with other students.
You may have noticed that every time the training script runs, it creates a file, cse_policy_hw2.txt. Do not edit this file or change its name. This file contains your RL agent's policy, its brain. You will submit this text file along with your code.
You might be wondering how to tell whether your agent is getting better. The training program produces stats that you can use to evaluate your agent. For example:

Agents      Won   Lost   Draws   Rating
RL Agent      4      6       0   1172.4
Random        6      4       0   1227.6

In these results, the RL agent won 4 games but lost 6 (she had no draws). The final value, 1172.4, is her ELO rating (learn more about the ELO rating system here). These stats tell me the RL Agent did not learn very well. First, she lost more games than she won. Second, her rating is below 1200; agents begin with an ELO of 1200 and her ELO went down. At the very least, your RL agent should do better than an agent that randomly picks moves.
 1  import TicTacToe as ttt
 2
 3  def main():
 4
 5      # Players
 6      rlAgent = ttt.createPlayer('X', ttt.RL_AGENT)
 7      rlAgent.name = 'RL Agent'
 8
 9      partner = ttt.createPlayer('O', ttt.RANDOM_AGENT)
10      partner.name = "Random"
11
12      # Training Session 1
13      rlAgent.initTraining(0.1, 0.2, 0.3)
14      ttt.train(rlAgent, partner, 1000)
15      rlAgent.save()
16
17      # Training Session 2 Optional
18      # rlAgent.initTraining(0.1, 0.2, 0.3)
19      # ttt.train(partner, rlAgent, 1000)
20
21      # Evaluation
22      rlAgent.setMode(ttt.PLAYING_MODE)
23      tournament = ttt.Tournament()
24      tournament.start(rlAgent, partner, 5)
25      tournament.start(partner, rlAgent, 5)
26      tournament.printStats([rlAgent, partner])
27
28  main()
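Because you will likely try many hyperparameter combinations, a small loop over candidate values can save time. Below is a rough sketch that reuses only the API shown above (createPlayer, initTraining, train, setMode, Tournament); the candidate values are placeholders, not recommendations, and you should still keep your final values to yourself.

import itertools
import TicTacToe as ttt

# Rough sketch of a manual hyperparameter sweep using the API shown above.
# The candidate values below are placeholders, not recommended settings.
def sweep():
    learning_rates = [0.1, 0.5]
    discount_rates = [0.5, 0.9]
    epsilons = [0.1, 0.3]
    episodes = 5000

    for alpha, gamma, eps in itertools.product(learning_rates, discount_rates, epsilons):
        rlAgent = ttt.createPlayer('X', ttt.RL_AGENT)
        rlAgent.name = 'RL Agent'
        partner = ttt.createPlayer('O', ttt.RANDOM_AGENT)
        partner.name = 'Random'

        rlAgent.initTraining(alpha, gamma, eps)
        ttt.train(rlAgent, partner, episodes)

        rlAgent.setMode(ttt.PLAYING_MODE)
        tournament = ttt.Tournament()
        tournament.start(rlAgent, partner, 5)
        tournament.start(partner, rlAgent, 5)
        print('alpha =', alpha, 'gamma =', gamma, 'epsilon =', eps)
        tournament.printStats([rlAgent, partner])

sweep()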
2.3 Requirements (60 points)
1. Implemented the 3 missing methods, makeMove(board), rewardState(board, prevBoard=None), and getReward(board), as described in the pseudocode and brief. See Sections 2.2.1, 2.2.2 and 2.2.3. Do not submit code from the Internet and expect full credit. (30 points)
2. The RL agent selects a move based on what it has learned through Reinforcement Learning. Your agent must not use a fixed policy for making decisions. (15 pts)
3. Did not add new code to the bottom of the module such as main(), testing or training code; did not add any print
statements to the module (5 pts)
4. Selected and registered a partner on KEATS (5 pts)
5. Followed all submission requirements (5 pts)
2.4 Submission Instructions
2.4.1 Nominate Someone to Submit
• Only one group member has to submit the group's assignment; you must discuss and decide which group member will be responsible for submitting the assignment.
• The person submitting the assignment. This person takes responsibility for uploading the correct files, for submitting everything before the deadline, and for adding all other group members to the submission record.
• Other group members. If you are not submitting the assignment on behalf of the group, keep an eye out for a
notification email that will let you know when your partner has submitted the assignment. The group member
submitting must add you to the submission record. Only after they have done this will you become part of the
assignment group on Gradescope.
2.4.2 Submission
1. Submit the TicTacToe module & policy file on Gradescope by Wednesday, March 26th, 16:00.
2. You must submit your modified TicTacToe module on Gradescope — the TicTacToe.py file.
3. You must submit your policy file on Gradescope — the cse_policy_hw2.txt file.
4. At the top of your program put the following comments. You must also include the hyperparameters that you
used to train your RL agent in your submission.
# [Your Full Name]
# [Your K number]
# Homework 2
#
# rlAgent.initTraining(0.1, 0.2, 0.3)   PUT YOUR NUMBERS NOT THESE!
# ttt.train(rlAgent, partner, 500)      PUT YOUR NUMBERS NOT THESE!
5. You do not need to include a cover sheet.
6. If asked for a submission title, enter your full name.
2.4.3 Late Policy
All coursework must be submitted on time. If you submit coursework late and have not applied for an extension or
have not had a mitigating circumstances claim upheld, you will have an automatic penalty applied. If you submit late,
but within 24 hours of the stated deadline, the work will be marked, and 10 raw marks will be deducted. All work
submitted more than 24 hours late will receive a mark of zero.
• Any program submitted within 24 hours after the deadline will lose 10 points
• Any program submitted after the 24 hour grace period will receive a 0
2.5 Marking
For this lab, your mark will be based on the program’s correctness and its performance. Your RL agent will play in a
tournament against another agent. This other agent plays using random moves and a fixed policy.
Correctness is worth 60% and is described under the Requirements section. Performance is worth 40% and depends on how well your agent performs in one of the following three categories: ELO, win rate, or success rate (percent of games won or drawn). The amount you'll earn for performance depends on the class average, i.e., a performance above average will earn more than a performance below average. Points will be given based on the category where your agent had the best performance. If your submission crashes (during play) you will receive a 55.
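For reference, win rate and success rate as described here are simple ratios; a quick sketch with made-up game counts:

# Win rate and success rate as described above, with made-up game counts.
won, lost, drawn = 4, 6, 0
total = won + lost + drawn

win_rate = won / total                  # 0.4 -> 40% of games won
success_rate = (won + drawn) / total    # 0.4 -> 40% of games won or drawn
print(win_rate, success_rate)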