Final Project of MATP6610/4820
(Due at midnight of May-01-2020 to )
Instructions: Choose one of the three topics below. You should write a report in LaTeX showing how you derive your solver (include all gradient formulas if any gradient is used), all results you obtain, and all observations you make. To earn full credit, you need to make your report well written, include in it all results required under the topic you choose, and make sure your code works correctly and produces correct solutions. Please pack your report and code in a single zip file. Do NOT zip the provided data file. Send a PDF file of your report and the zip file to . Your PDF and zip files should both be named in the format “MATP4820 FinalProject FirstName LastName” or “MATP6610 FinalProject FirstName LastName”, depending on whether you are a 4820 or 6610 student. If you work with another student, include the names of both students in the names of your files.
Teamwork: You can work individually or with at most one other student. A 4820 student can only group with another 4820 student, and a 6610 student can only group with another 6610 student.
Submission: Each group only needs to make one submission. If you work with another student, write both names in the report and explain how you collaborated and what each student contributed. Both students in a group will get the same grade, but I want to know each one's contribution.
Bonus: If you do two topics, you can earn up to 40/(number of group members) percent extra credit, and if you do all three topics, you can earn up to 50/(number of group members) percent extra credit.
Topic I: Portfolio optimization
Assume that we have one unit of capital and $n$ assets to invest in. The $i$-th asset has an expected return rate $\xi_i \in \mathbb{R}$ for each $i$. Our goal is to find a portfolio with the maximum expected return such that the risk is controlled. This problem can be formulated as
\[
\max_{x\in\mathbb{R}^n}\ \xi^\top x, \quad \text{s.t.}\ \sum_{i=1}^n x_i \le 1,\quad x^\top Q x \le \alpha,\quad x_i \ge 0,\ i=1,\dots,n, \tag{1}
\]
where $\xi = [\xi_1, \xi_2, \dots, \xi_n]^\top$ denotes the vector of expected return rates, and $Q$ is the covariance matrix of the returns.
Requirements
Include every item below and your code in a single report. Use the provided datasets to test
your code.
1. Develop a solver for (1) by the augmented Lagrangian method (ALM). You can put the two inequality constraints $\sum_{i=1}^n x_i \le 1$ and $x^\top Q x \le \alpha$ into the augmented Lagrangian function, and then, to solve the ALM subproblem, you need the projection onto the nonnegativity constraint set $\{x \in \mathbb{R}^n : x_i \ge 0,\ i=1,\dots,n\}$. Alternatively, you can put only the inequality constraint $x^\top Q x \le \alpha$ into the augmented Lagrangian function, and then you will need the projection onto the set $\{x \in \mathbb{R}^n : \sum_{i=1}^n x_i \le 1,\ x_i \ge 0,\ i=1,\dots,n\}$. The latter projection is doable but not simple, so you will first need to figure out how to compute it (a sketch is given after this list).
2. Either way, you will need an iterative method to solve the ALM subproblem. Develop a subroutine for the ALM subproblems by the projected gradient method or its accelerated version (see the projected gradient sketch after this list). Report the update you perform and the stopping condition you adopt.
3. Write down the update of your ALM solver and also the stopping condition you use to terminate it (the multiplier update in the sketch after this list is one common template).
4. Test your solver using the provided data set. Report the violation of primal feasibility, dual feasibility, and the complementarity condition at all outer iterates (the KKT system of (1) is written out after this list). You can tune the parameter $\alpha > 0$ and report a few sets of results corresponding to different values of $\alpha$.
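For concreteness, one standard way to set this up (an illustrative sketch, not the only valid choice) is to rewrite (1) as a minimization of $-\xi^\top x$ over $x \ge 0$ and to penalize the two inequality constraints with multipliers $y_1, y_2 \ge 0$ and penalty parameter $\beta > 0$:
\[
\mathcal{L}_\beta(x, y) = -\xi^\top x
+ \frac{1}{2\beta}\Big[\max\big(0,\, y_1 + \beta\big(\textstyle\sum_{i=1}^n x_i - 1\big)\big)^2 - y_1^2\Big]
+ \frac{1}{2\beta}\Big[\max\big(0,\, y_2 + \beta\,(x^\top Q x - \alpha)\big)^2 - y_2^2\Big].
\]
Each outer iteration approximately minimizes $\mathcal{L}_\beta(\cdot, y)$ over $\{x : x \ge 0\}$ and then updates the multipliers by
\[
y_1 \leftarrow \max\big(0,\, y_1 + \beta\big(\textstyle\sum_{i=1}^n x_i - 1\big)\big), \qquad
y_2 \leftarrow \max\big(0,\, y_2 + \beta\,(x^\top Q x - \alpha)\big).
\]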
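Below is a minimal NumPy sketch of the two building blocks mentioned in items 1 and 2. The function names (proj_sum_le_one, projected_gradient), the fixed step size, and the default tolerances are my own illustrative choices; the projection uses the standard sorting-based simplex projection (Duchi et al., 2008).

    import numpy as np

    def proj_sum_le_one(v):
        # Projection onto {x : sum(x) <= 1, x >= 0}. If clipping v at zero
        # already satisfies the sum constraint, that is the projection;
        # otherwise the sum constraint is active and we project onto the
        # simplex {x : sum(x) = 1, x >= 0}.
        w = np.maximum(v, 0.0)
        if w.sum() <= 1.0:
            return w
        u = np.sort(v)[::-1]                     # entries in decreasing order
        css = np.cumsum(u)
        j = np.arange(1, v.size + 1)
        rho = np.nonzero(u - (css - 1.0) / j > 0.0)[0][-1]
        tau = (css[rho] - 1.0) / (rho + 1.0)     # shift that enforces sum(x) = 1
        return np.maximum(v - tau, 0.0)

    def projected_gradient(grad, proj, x0, step, tol=1e-8, maxit=10000):
        # Generic projected gradient loop for the ALM subproblem: grad is the
        # gradient of the augmented Lagrangian in x, proj is the projection
        # above (or onto the nonnegative orthant), step is a fixed step size.
        x = proj(x0)
        for _ in range(maxit):
            x_new = proj(x - step * grad(x))
            if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
                return x_new
            x = x_new
        return x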
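For item 4, the KKT conditions of (1) (written for the equivalent minimization of $-\xi^\top x$, with multipliers $y_1, y_2 \ge 0$ for the two inequality constraints and $z \in \mathbb{R}^n_+$ for $x \ge 0$) are one natural way to measure the reported violations:
\[
-\xi + y_1 \mathbf{1} + 2 y_2 Q x - z = 0, \qquad
\sum_{i=1}^n x_i \le 1,\quad x^\top Q x \le \alpha,\quad x \ge 0, \qquad
y_1, y_2 \ge 0,\quad z \ge 0,
\]
\[
y_1 \Big(\sum_{i=1}^n x_i - 1\Big) = 0, \qquad
y_2 \big(x^\top Q x - \alpha\big) = 0, \qquad
z_i x_i = 0,\ i = 1, \dots, n.
\]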
Topic II: Robust principal component analysis
Let $X$ be composed of a sparse matrix $S$ and a low-rank matrix $L$. The robust PCA [2] aims at finding $S$ and $L$, given $X$. Using the $\ell_1$ norm to promote sparsity and the nuclear norm to promote low-rankness, robust PCA can be modeled as
\[
\min_{L,S}\ \|L\|_* + \lambda\|S\|_1, \quad \text{s.t.}\ L + S = X. \tag{2}
\]
Here, $\|L\|_*$ denotes the nuclear norm of $L$ and equals the sum of its singular values, and $\|S\|_1 = \sum_{i,j} |s_{ij}|$, where $s_{ij}$ denotes the $(i,j)$-th entry of $S$.
Requirements
Include every item below in a single report and attach your code. Use the provided datasets
to test your code.
1. Develop two solvers for (2): one by the quadratic penalty method or the augmented Lagrangian method (ALM), and the other by the alternating direction method of multipliers (ADMM). [Do NOT do both the quadratic penalty method and the ALM; just one of them.]
2. For the quadratic penalty method or the ALM, you will need to approximately solve subproblems of the form
\[
\min_{L,S}\ \|L\|_* + \lambda\|S\|_1 + \frac{\beta}{2}\|L + S - Z\|_F^2, \tag{3}
\]
where $\beta > 0$ is the penalty parameter. In the above, $Z = X$ for the quadratic penalty method, and $Z$ will involve $X$ and also the Lagrangian multiplier for the ALM. Develop a subroutine by the proximal gradient method. Report how you compute the proximal mappings of the non-differentiable terms, the update you perform, and the stopping condition you adopt. Refer to [1, Theorem 2.1] for the proximal mapping of the nuclear norm (both proximal mappings are sketched after this list). The paper is included in LMS for your convenience.
3. For the ADMM, you will update $L$ and $S$ separately. Formulate each subproblem and report how you solve it (you should have a closed-form solution for each subproblem; see the ADMM sketch after this list). Give the iterative update of the ADMM and report the stopping condition you use.
4. Compare the two solvers that you develop using the provided synthetic data and also the Escalator video dataset. Note that the video dataset is in 3D format: it contains 200 frames of size $130 \times 160$. You will first need to reshape each frame into a column vector and form a $20800 \times 200$ matrix $X$. Plot the objective value and constraint violation at all iterates of the two solvers. For each solver, after you obtain the solution $(L, S)$, reshape each column of $L$ and $S$ into a $130 \times 160$ image and use imshow to display a few selected columns of $L$ and $S$. You can also use implay to see how the foreground and background of the video are separated. Report what you observe.
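For reference, here is a minimal NumPy sketch of the two proximal mappings needed in item 2: entrywise soft-thresholding for $\lambda\|\cdot\|_1$ and singular value thresholding for $\|\cdot\|_*$ (the latter along the lines of [1, Theorem 2.1]), followed by one proximal gradient step on (3). The step size $1/(2\beta)$ reflects that the gradient of the smooth term in (3), taken jointly in $(L, S)$, is $2\beta$-Lipschitz; all function names are illustrative.

    import numpy as np

    def soft(A, tau):
        # Proximal mapping of tau*||.||_1: entrywise soft-thresholding.
        return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

    def svt(A, tau):
        # Proximal mapping of tau*||.||_*: threshold the singular values.
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    def prox_grad_step(L, S, Z, beta, lam):
        # One proximal gradient step on (3); the smooth part is
        # (beta/2)*||L + S - Z||_F^2, with the same gradient G in L and S.
        t = 1.0 / (2.0 * beta)
        G = beta * (L + S - Z)
        return svt(L - t * G, t), soft(S - t * G, t * lam)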
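And a matching ADMM sketch for item 3, reusing soft and svt from the previous block. With multiplier $Y$ and penalty $\rho > 0$, both subproblems have closed-form solutions; the default $\rho$ and the stopping rule below are placeholder choices.

    def rpca_admm(X, lam, rho=1.0, tol=1e-6, maxit=500):
        # ADMM for (2); uses soft() and svt() from the previous sketch.
        L = np.zeros_like(X)
        S = np.zeros_like(X)
        Y = np.zeros_like(X)
        for _ in range(maxit):
            L = svt(X - S - Y / rho, 1.0 / rho)    # closed-form L-subproblem
            S = soft(X - L - Y / rho, lam / rho)   # closed-form S-subproblem
            R = L + S - X                          # primal residual
            Y = Y + rho * R                        # multiplier update
            if np.linalg.norm(R) <= tol * max(1.0, np.linalg.norm(X)):
                break
        return L, S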
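For the reshaping in item 4: the imshow/implay calls suggest MATLAB, where reshape(V, [], 200) and reshape(L(:, k), 130, 160) play the two roles; if you work in NumPy instead, a consistent pair of reshapes would look like the following, assuming the provided array V stores the frame index along its last axis.

    # Flatten each 130 x 160 frame of the video array V (shape 130 x 160 x 200)
    # into a column of X, and recover frame k from a column of the solution L.
    X = V.reshape(130 * 160, 200)
    frame_k = L[:, k].reshape(130, 160)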
Topic III: Neyman-Pearson Classification
Given data points $\{(x_i, y_i)\}_{i=1}^N$ with $x_i \in \mathbb{R}^p$ and $y_i \in \{+1, -1\}$ for each $i$, let $\mathcal{N}_+ = \{i \in [N] : y_i = 1\}$ and $\mathcal{N}_- = \{i \in [N] : y_i = -1\}$ be the sets of positive and negative points, respectively. The Neyman-Pearson classification (NPC) [3] minimizes the false negative error while controlling the false positive error at a level $\alpha > 0$ through solving the problem
\[
\min_{w,b}\ \frac{1}{N_+} \sum_{i\in\mathcal{N}_+} \ell(w, b; x_i, y_i), \quad \text{s.t.}\ \frac{1}{N_-} \sum_{i\in\mathcal{N}_-} \ell(w, b; x_i, y_i) \le \alpha, \tag{4}
\]
where $N_+$ and $N_-$ denote the cardinalities of $\mathcal{N}_+$ and $\mathcal{N}_-$, and $\ell$ is a certain error function. For this project, we let $\ell$ be the error function used in logistic regression, i.e.,
\[
\ell(w, b; x_i, y_i) = \log\big(1 + \exp\big(-y_i(w^\top x_i + b)\big)\big). \tag{5}
\]
Requirements
1. Develop a solver for (4) with $\ell$ given in (5) by the augmented Lagrangian method (ALM). Your solver should accept $(X, y, \alpha)$ and also a few optional parameters $(z_0, \mathrm{tol}, \mathrm{maxit})$ as input. Here, $X \in \mathbb{R}^{p\times N}$ and $y \in \mathbb{R}^N$ with the $i$-th column of $X$ being the $i$-th data point, and $z_0$ is the initial iterate (a skeleton with this interface is sketched after this list).
2. In your report, explicitly formulate the subproblem that is solved within each outer iteration and explain how you solve it, including the optimization method you choose, the iterative update formula, and the stopping condition (the gradient formulas for (5) are collected after this list).
3. Report the stopping condition you use to terminate your ALM solver.
4. For the training data sets provided by the instructor, run your ALM solver. Save and
report (by figure or table) the violation of primal feasibility and also dual feasibility at
each outer iteration.
5. Based on the output solution, classify the testing data and report the false positive and false negative errors you obtain. You can tune the model parameter $\alpha$; report the errors together with the corresponding value of $\alpha$ you use.
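Since the report must include all gradient formulas, here is what a direct computation on (5) gives; writing $\sigma(t) = 1/(1 + e^{-t})$,
\[
\nabla_w \ell(w, b; x_i, y_i) = -y_i\, \sigma\big(-y_i(w^\top x_i + b)\big)\, x_i, \qquad
\frac{\partial \ell}{\partial b}(w, b; x_i, y_i) = -y_i\, \sigma\big(-y_i(w^\top x_i + b)\big).
\]
Writing $f(w, b)$ and $g(w, b)$ for the objective and the constraint function of (4), one standard (illustrative, not the only valid) augmented Lagrangian with multiplier $\lambda \ge 0$ and penalty $\beta > 0$ is
\[
\mathcal{L}_\beta(w, b, \lambda) = f(w, b) + \frac{1}{2\beta}\Big[\max\big(0,\, \lambda + \beta\,(g(w, b) - \alpha)\big)^2 - \lambda^2\Big],
\]
with multiplier update $\lambda \leftarrow \max\big(0,\, \lambda + \beta\,(g(w, b) - \alpha)\big)$ after each (approximate) subproblem solve.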
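A minimal NumPy skeleton matching the required interface follows. The inner solver (plain gradient descent), its fixed step size, the penalty value, and the stopping rules are placeholder choices for illustration, not prescribed by the assignment.

    import numpy as np

    def sigmoid(t):
        return 1.0 / (1.0 + np.exp(-t))

    def npc_alm(X, y, alpha, z0=None, tol=1e-6, maxit=100):
        # ALM sketch for (4)-(5); z packs (w, b) with w = z[:p], b = z[p].
        p, N = X.shape
        pos, neg = (y > 0), (y < 0)
        z = np.zeros(p + 1) if z0 is None else np.asarray(z0, float).copy()
        lam, beta = 0.0, 1.0

        def avg_loss_and_grad(z, mask):
            # Average logistic loss over the masked points and its gradient.
            w, b = z[:p], z[p]
            m = -y[mask] * (w @ X[:, mask] + b)        # margins
            s = sigmoid(m)                             # d/dm log(1 + e^m)
            val = np.mean(np.logaddexp(0.0, m))
            gw = X[:, mask] @ (-y[mask] * s) / mask.sum()
            gb = np.mean(-y[mask] * s)
            return val, np.concatenate([gw, [gb]])

        for _ in range(maxit):
            for _ in range(1000):                      # inner gradient descent
                _, gf = avg_loss_and_grad(z, pos)
                g, gg = avg_loss_and_grad(z, neg)
                mult = max(0.0, lam + beta * (g - alpha))
                grad = gf + mult * gg                  # gradient of the AL in z
                if np.linalg.norm(grad) <= tol:
                    break
                z -= 0.1 * grad                        # placeholder step size
            g = avg_loss_and_grad(z, neg)[0]
            lam_new = max(0.0, lam + beta * (g - alpha))
            if max(g - alpha, 0.0) <= tol and abs(lam_new - lam) <= tol:
                lam = lam_new
                break
            lam = lam_new
        return z, lam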
References
[1] J.-F. Cai, E. J. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956–1982, 2010.
[2] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? Journal of the ACM (JACM), 58(3):1–37, 2011.
[3] P. Rigollet and X. Tong. Neyman-Pearson classification, convexity and stochastic constraints. Journal of Machine Learning Research, 12(Oct):2831–2855, 2011.
