COMP3670/6670 Programming Assignment 3 - Linear Regression
Enter Your Student ID:
Your Name:
Deadline:
Submit: Write your answers in this file, and submit a single Jupyter Notebook file (.ipynb) on
Wattle. Rename this file with your student number as 'uXXXXXXX.ipynb'. Note: you don't
need to submit the .png or .npy files.
Enter Discussion Partner IDs Below: You could add more IDs with the same markdown
format above. Please implement things by yourself. If you use any external resources, list
them here.
In [ ]:
import numpy as np
import matplotlib.pyplot as plt
The following section provides some helper functions.
In [ ]:
## GENERAL FUNCTIONS - DO NOT MODIFY ##

def lr_mle(X, y):
    # maximum likelihood (least squares) for linear regression
    XtX = np.dot(X.T, X)
    Xty = np.dot(X.T, y)
    theta = np.linalg.solve(XtX, Xty)
    return theta

def lr_map(X, y, alpha=0.1):
    # maximum a-posteriori (regularised least squares) for linear regression
    N, D = X.shape[0], X.shape[1]
    XtX = np.dot(X.T, X) + np.diag(alpha*N*np.ones(D))
    Xty = np.dot(X.T, y)
    theta = np.linalg.solve(XtX, Xty)
    return theta

def lr_bayes(X, y, alpha=0.1, noise_var=0.01):
    # exact posterior for Bayesian linear regression
    N, D = X.shape[0], X.shape[1]
    XtX = np.dot(X.T, X) + np.diag(alpha*N*np.ones(D))
    Xty = np.dot(X.T, y)
    mean = np.linalg.solve(XtX, Xty)
    # note: calling inv directly is not ideal numerically
    cov = np.linalg.inv(XtX) * noise_var
    return mean, cov

def predict_point(X, theta):
    # predict given a parameter estimate
    return np.dot(X, theta)

def predict_bayes(X, theta_mean, theta_cov):
    # predict given the parameter posterior
    mean = np.dot(X, theta_mean)
    cov = np.dot(X, np.dot(theta_cov, X.T))
    return mean, cov

def add_bias_col(x):
    # add an all-one column
    n = x.shape[0]
    return np.hstack([x, np.ones([n, 1])])

## END GENERAL FUNCTIONS ##
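As a quick aside (not part of the assignment), the posterior mean returned by lr_bayes solves exactly the same linear system as lr_map, so the Bayesian posterior mean and the MAP estimate coincide for the same alpha. A minimal check, assuming the cell above has been run:

# sanity check (illustrative only): MAP estimate equals the Bayesian posterior mean
X_demo = add_bias_col(np.random.randn(20, 1))
y_demo = np.random.randn(20, 1)
theta_map_demo = lr_map(X_demo, y_demo, alpha=0.1)
theta_mean_demo, _ = lr_bayes(X_demo, y_demo, alpha=0.1)
assert np.allclose(theta_map_demo, theta_mean_demo)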
Task 0: Warming Up
The following code block visualises the difference between different methods of performing linear regression.
In [ ]:
# load data
data = np.loadtxt("./data/ass3_data1_train.txt")
x_train, y_train = data[:, 0][:, None], data[:, 1][:, None]
data = np.loadtxt("./data/ass3_data1_valid.txt")
x_valid, y_valid = data[:, 0][:, None], data[:, 1][:, None]

# some data for visualisation
N_plot = 100
x_plot = np.linspace(-2.5, 2.5, N_plot).reshape([N_plot, 1])

# add one col to the inputs
x_train_with_bias = add_bias_col(x_train)
x_plot_with_bias = add_bias_col(x_plot)

# MLE = least squares
theta_mle = lr_mle(x_train_with_bias, y_train)
f_mle = predict_point(x_plot_with_bias, theta_mle)

# MAP = regularised least squares
alpha = 0.1
theta_map = lr_map(x_train_with_bias, y_train, alpha)
f_map = predict_point(x_plot_with_bias, theta_map)

# exact Bayesian
theta_mean, theta_cov = lr_bayes(x_train_with_bias, y_train, alpha)
f_bayes_mean, f_bayes_cov = predict_bayes(
    x_plot_with_bias, theta_mean, theta_cov)

# plot utility
def plot(x, y, x_plot, f_mle, f_map, f_bayes_mean, f_bayes_cov):
    plt.figure(figsize=(6, 4))
    plt.plot(x, y, '+g', label='train data', ms=12)
    if f_mle is not None:
        plt.plot(x_plot, f_mle, '-k', label='mle')
    if f_map is not None:
        plt.plot(x_plot, f_map, '--k', label="map", zorder=10)
    if f_bayes_mean is not None:
        plt.plot(x_plot, f_bayes_mean, '-r', label="bayes", lw=3)
        f_std = np.sqrt(np.diag(f_bayes_cov))
        upper = f_bayes_mean[:, 0] + 2*f_std
        lower = f_bayes_mean[:, 0] - 2*f_std
        # transparency value was truncated in the source; 0.3 is an assumed placeholder
        plt.fill_between(x_plot[:, 0], upper, lower, color='r', alpha=0.3)
    plt.legend()
    plt.xlabel('x')
    plt.ylabel('y')
    plt.ylim([-3, 3])

# plot the training data and predictions
plot(x_train, y_train, x_plot, f_mle, f_map, f_bayes_mean, f_bayes_cov)
Task 1: What makes a good regression?
As can be seen from the visualisation above, the regressed line seems to be far from the datapoints. Are there any ways we can improve the regression?
Task 1.1
Explain why the above linear regression fails.
-----Your answer here-----
Task 1.2
What kind of features would lead to a better result? Why?
-----Your answer here-----
Task 1.3
Implement the featurise function that takes raw datapoints as input and outputs a reasonable design matrix Φ according to the method you mentioned in Task 1.2.
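For illustration only (this is not the required answer), one common choice is a polynomial basis. A minimal sketch, where the name featurise_poly and the degree 4 are assumptions:

def featurise_poly(x, degree=4):
    # hypothetical example: powers x, x^2, ..., x^degree plus an all-one bias column
    powers = np.concatenate([x**d for d in range(1, degree + 1)], axis=1)
    return add_bias_col(powers)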
In [ ]:
def featurise(x):
    # TODO: Try to come up with proper features
    features = add_bias_col(x)  # change this!
    return features

x_train_feat = featurise(x_train)
x_valid_feat = featurise(x_valid)
x_plot_feat = featurise(x_plot)

# repeat but now with features
# MLE
theta_mle = lr_mle(x_train_feat, y_train)
f_mle = predict_point(x_plot_feat, theta_mle)

# MAP
alpha = 0.1
theta_map = lr_map(x_train_feat, y_train, alpha)
f_map = predict_point(x_plot_feat, theta_map)

# exact Bayesian
theta_mean, theta_cov = lr_bayes(x_train_feat, y_train, alpha)
f_bayes_mean, f_bayes_cov = predict_bayes(
    x_plot_feat, theta_mean, theta_cov)

plot(x_train, y_train, x_plot, f_mle, f_map, f_bayes_mean, f_bayes_cov)
Task 2: Estimating noise variance through the marginal likelihood
One commonly asked question in Bayesian linear regression is how we can define the noise level of the target. In previous questions, we set the noise variance in lr_bayes to be 0.01, a fixed constant. But intuitively, after we have observed some datapoints, the noise level can actually be inferred or estimated. This task is designed for you to investigate the marginal likelihood (a.k.a. model evidence) and how we can use it to pick the noise variance.
Task 2.1
Implement the negative log marginal likelihood, given the noise level of the likelihood,
training inputs and outputs, and the prior variance. We can pick prior_var using the
same procedure, but assume prior_var = 0.5 for this exercise. The form of the
marginal likelihood is provided in Week 7's lecture slides.
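For reference, a minimal sketch of what such a function can look like under the standard model (zero-mean Gaussian prior with variance prior_var on the weights, Gaussian observation noise with variance noise_var). The helper name is illustrative, and the formula should be checked against the lecture slides:

def negative_log_marginal_likelihood_sketch(noise_var, x, y, prior_var=0.5):
    # the marginal distribution of y is N(0, K) with K = prior_var * x x^T + noise_var * I
    N = x.shape[0]
    K = prior_var * np.dot(x, x.T) + noise_var * np.eye(N)
    _, logdet = np.linalg.slogdet(K)
    quad = float(np.dot(y.T, np.linalg.solve(K, y)))
    return 0.5 * (N * np.log(2 * np.pi) + logdet + quad)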
In [ ]:
# 2.1
def negative_log_marginal_likelihood(noise_var, x, y, prior_var=0.5):
    # TODO: implement this
    return 0

Task 2.2
Select the most appropriate noise level that minimises the negative log marginal likelihood. In practice, we can do this minimisation by gradient descent, but for this exercise, we assume we have access to a predefined set of potential noise levels and just need to pick one.
In [ ]:
# 2.2
# a predefined list
potential_noise_vars = np.logspace(-4, 1.5, 50)
## YOUR CODE HERE! ##
noise_var_estimated = potential_noise_vars[0] # change this!
#####################
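A possible selection loop (a sketch, assuming negative_log_marginal_likelihood has been implemented and x_train_feat, y_train come from the earlier cells):

# evaluate the criterion on every candidate and keep the minimiser
nlmls = [negative_log_marginal_likelihood(nv, x_train_feat, y_train)
         for nv in potential_noise_vars]
noise_var_estimated = potential_noise_vars[int(np.argmin(nlmls))]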
Task 2.3
We visualise the predictions using the estimated noise variance, and compare to those when the noise is very large or very small. Based on these graphs and the negative log marginal likelihood corresponding to these noise levels, explain why finding a proper noise level of the likelihood is important.
In [ ]:
# fit with the estimated noise variance
N = x_train_feat.shape[0]
prior_var = 0.5
alpha = noise_var_estimated / prior_var / N
theta_mean, theta_cov = lr_bayes(x_train_feat, y_train, alpha, noise_var_estimated)
f_bayes_mean, f_bayes_cov = predict_bayes(
    x_plot_feat, theta_mean, theta_cov)
plot(x_train, y_train, x_plot, None, None, f_bayes_mean, f_bayes_cov)

# fit with a very large noise
noise_var = 5
alpha = noise_var / prior_var / N
theta_mean, theta_cov = lr_bayes(x_train_feat, y_train, alpha, noise_var)
f_bayes_mean, f_bayes_cov = predict_bayes(
    x_plot_feat, theta_mean, theta_cov)
plot(x_train, y_train, x_plot, None, None, f_bayes_mean, f_bayes_cov)

# fit with a very small noise
noise_var = 0.00001
alpha = noise_var / prior_var / N
theta_mean, theta_cov = lr_bayes(x_train_feat, y_train, alpha, noise_var)
f_bayes_mean, f_bayes_cov = predict_bayes(
    x_plot_feat, theta_mean, theta_cov)
plot(x_train, y_train, x_plot, None, None, f_bayes_mean, f_bayes_cov)

-----Your answer here-----

** Task 2.4 - Optional **
The naive implementation of the negative log marginal likelihood above would require the inverse of an N by N matrix, which is of time complexity Θ(N³). This is computationally intractable for a large dataset (large N). Can we speed this up?
In [ ]:
data = np.loadtxt("./data/ass3_data1_train_large.txt")
x_large, y_large = data[:, 0][:, None], data[:, 1][:, None]
x_large_feat = featurise(x_large)

def negative_log_marginal_likelihood_v2(noise_var, x, y, prior_var=0.5):
    # TODO: implement this
    return 0
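One standard route is the Woodbury identity together with the matrix determinant lemma, which replaces every N x N solve and determinant with a D x D one (D being the number of features), bringing the cost down from roughly Θ(N³) to Θ(ND²). A sketch under the same model assumptions as before, not a prescribed solution; the function name is illustrative:

def nlml_woodbury_sketch(noise_var, x, y, prior_var=0.5):
    # computes the same quantity as the naive version, but with only D x D operations
    N, D = x.shape
    A = np.eye(D) / prior_var + np.dot(x.T, x) / noise_var   # D x D
    Xty = np.dot(x.T, y)
    # Woodbury: y^T K^{-1} y = y^T y / s - (X^T y)^T A^{-1} (X^T y) / s^2, with s = noise_var
    quad = (float(np.dot(y.T, y)) / noise_var
            - float(np.dot(Xty.T, np.linalg.solve(A, Xty))) / noise_var**2)
    # matrix determinant lemma: log|K| = log|A| + N log noise_var + D log prior_var
    _, logdetA = np.linalg.slogdet(A)
    logdet = logdetA + N * np.log(noise_var) + D * np.log(prior_var)
    return 0.5 * (N * np.log(2 * np.pi) + logdet + quad)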
Task 3: Regularisation
In machine learning, regularisation is an important technique to reduce overfitting. Regularisation also tends to improve generalisation. This task aims to show how regularisation affects the parameter estimates.
Task 3.1
Implement L1 and L2. Both functions take the weights as input and output the regularisation value and the gradient of the regularisation term (NOT THE GRADIENT OF THE ENTIRE OBJECTIVE FUNCTION).
In [ ]:
def L1(theta):
    # TODO: implement this
    return 0, np.zeros_like(theta)  # change this

def L2(theta):
    # TODO: implement this
    return 0, np.zeros_like(theta)  # change this

def data_fit(theta, x, y):
    diff = y - np.dot(x, theta)  # N x 1
    f = np.mean(diff**2)  # 1 x 1
    df = - 2 * np.dot(diff.T, x).T / x.shape[0]
    return f, df

def objective(theta, x, y, alpha, l2=True):
    reg_func = L2 if l2 else L1
    reg, dreg = reg_func(theta)
    fit, dfit = data_fit(theta, x, y)
    obj = fit + alpha * reg
    dobj = dfit + alpha * dreg
    return obj, dobj
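For orientation, a minimal sketch of the usual choices, the sum of absolute values with its sign subgradient and the sum of squares with gradient 2θ. The _sketch names are illustrative and this is not presented as the required answer:

def L1_sketch(theta):
    # value: sum_i |theta_i|; a subgradient: sign(theta)
    return np.sum(np.abs(theta)), np.sign(theta)

def L2_sketch(theta):
    # value: sum_i theta_i^2; gradient: 2 * theta
    return np.sum(theta**2), 2 * theta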
Task 3.2
We now run gradient descent and plot the predictions. Comment on the results.
In [ ]:
D = x_train_feat.shape[1]
theta_l2_sgd_init = np.random.randn(D, 1)

theta_l2_sgd = theta_l2_sgd_init.copy()  # copy so every run starts from the same initialisation
no_iters = 2000
learning_rate = 0.1
alpha = 0.1
l2 = True
for i in range(no_iters):
    obj, dobj = objective(theta_l2_sgd, x_train_feat, y_train, alpha, l2)
    theta_l2_sgd -= learning_rate * dobj
    if i % 100 == 0:
        print(i, obj)
f_l2_sgd = predict_point(x_plot_feat, theta_l2_sgd)

theta_l1_sgd = theta_l2_sgd_init.copy()
l2 = False
for i in range(no_iters):
    obj, dobj = objective(theta_l1_sgd, x_train_feat, y_train, alpha, l2)
    theta_l1_sgd -= learning_rate * dobj
    if i % 100 == 0:
        print(i, obj)
f_l1_sgd = predict_point(x_plot_feat, theta_l1_sgd)

# Without any regularisation
theta_noreg_sgd = theta_l2_sgd_init.copy()
for i in range(no_iters):
    obj, dobj = objective(theta_noreg_sgd, x_train_feat, y_train, 0, l2)
    theta_noreg_sgd -= learning_rate * dobj
    if i % 100 == 0:
        print(i, obj)
f_noreg_sgd = predict_point(x_plot_feat, theta_noreg_sgd)
theta_map = lr_map(x_train_feat, y_train, alpha)
f_map = predict_point(x_plot_feat, theta_map)
# plot utility
plt.figure(figsize=(6, 4))
plt.plot(x_train, y_train, '+g', label='train data', ms=12)
plt.plot(x_plot, f_l2_sgd, '-k', lw=3, label='sgd l2')
plt.plot(x_plot, f_l1_sgd, '-r', lw=2, label='sgd l1')
plt.plot(x_plot, f_map, '--o', label="map", zorder=10, lw=1)
plt.plot(x_plot, f_noreg_sgd, '-x', label="no reg", lw=2)
plt.legend()
plt.xlabel('x')
plt.ylabel('y')
plt.ylim([-3, 3])
plt.show()

-----Your answer here-----