
Homework 1b: Linear Regression, Part 2
EE425X – Machine Learning: A Signal Processing Perspective

Homework 1 focused on learning the parameter θ for linear regression. In this homework we will first understand how to use the learnt parameter to predict the output for a given query input. We will also understand the bias-variance tradeoff and how to decide the model dimension when limited training data is available. This HW will rely heavily on the code from the previous homework.

Generate Data Code: Generate m + m_test data points satisfying

    y = θ^T x + e,

with θ being ONE fixed n-length vector for all of them. Use n = 100, θ = [100, 99, 98, 97, ..., 1]^T, σ_e^2 = 0.01 ||θ||_2^2, e ~ N(0, σ_e^2), x ~ N(0, I), and assume mutual independence of the different inputs and noise values (e).

1. Use code from Homework 1 (using any one approach is okay) to learn θ. Vary m and show a plot of the estimation error in θ,

       ||θ̂ − θ||_2^2 / ||θ||_2^2,

   and a second plot of the Monte Carlo estimate of the prediction error on the test data (test data MSE):

       Normalized-Test-MSE := E[(y_test − ŷ)^2] / E[y_test^2],  with  ŷ := θ̂^T x_test.

   Monte Carlo estimate means: compute (y_test − ŷ)^2 for m_test different input-output pairs and then average the result.

   (a) Vary m: use m = 80, m = 100, m = 120, m = 400. If your code is unable to return an estimate of θ, you can report the errors to be ∞ (and for the plot just use a large value, say 100000, to replace ∞).

   (b) Repeat this experiment with σ_e^2 = 0.1 ||θ||_2^2.

   Thus this part will produce four plots.

2. In this second part, suppose you have only m = 80 training data points satisfying y = θ^T x + e, with n = 100. Notice n is the same as in the first part. (I had a typo earlier which has now been fixed.)

   What you will have concluded from part 1 is that you cannot learn θ correctly in this case because m is even smaller than n.

   Let us assume you do not have the option to increase m. What can you do? All you can do is reduce n to a value n_small ≪ m. Experiment with different values of n_small to come up with the best one. Do this experiment for two values of σ_e^2: σ_e^2 = 0.01 ||θ||_2^2 and σ_e^2 = 0.1 ||θ||_2^2.

   How to decide which entries of x to throw away? For now, just throw away the last n − n_small entries (entries n_small + 1 through n). So for n_small = 1, let x_small be just the first entry, and so on. For n_small = 30, for example, x_small will be the first 30 entries of x. There are many other, better ways, which we will learn about later in the course.

   Start with n_small = 1 and keep increasing its value, and each time compute Normalized-Test-MSE by learning a value of θ first (using m = 80, of course). Obtain a plot. Use the plot and what you learn in class to decide what value of n_small is best.

3. Interpret your results based on the bias-variance tradeoff discussion. See Section 11 of the Summary-Notes and what will be taught in the next few classes.
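A minimal NumPy sketch of part 1 is below. It assumes the Homework 1 estimator is ordinary least squares (any of the HW1 approaches could be substituted), and the function names (generate_data, run_experiment), the choice of m_test = 10000, and the RNG seed are illustrative, not prescribed by the assignment.

```python
import numpy as np
import matplotlib.pyplot as plt

n = 100
theta = np.arange(100, 0, -1).astype(float)   # theta = [100, 99, ..., 1]^T
m_test = 10000                                # Monte Carlo test-set size (your choice)

def generate_data(m, sigma2_e, rng):
    """Draw m pairs (x, y) with y = theta^T x + e, x ~ N(0, I), e ~ N(0, sigma2_e)."""
    X = rng.standard_normal((m, n))
    e = rng.standard_normal(m) * np.sqrt(sigma2_e)
    return X, X @ theta + e

def run_experiment(m, sigma2_e, rng):
    X, y = generate_data(m, sigma2_e, rng)
    Xt, yt = generate_data(m_test, sigma2_e, rng)
    # Least-squares estimate of theta. Note lstsq returns a minimum-norm solution
    # even when m < n; alternatively, report the errors as infinity in that case.
    theta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    est_err = np.sum((theta_hat - theta) ** 2) / np.sum(theta ** 2)
    y_pred = Xt @ theta_hat
    test_mse = np.mean((yt - y_pred) ** 2) / np.mean(yt ** 2)  # Normalized-Test-MSE
    return est_err, test_mse

rng = np.random.default_rng(0)
m_values = [80, 100, 120, 400]
for c in (0.01, 0.1):                          # sigma_e^2 = c * ||theta||_2^2
    sigma2_e = c * np.sum(theta ** 2)
    errs, mses = zip(*(run_experiment(m, sigma2_e, rng) for m in m_values))
    plt.figure(); plt.plot(m_values, errs, "o-")
    plt.xlabel("m"); plt.ylabel("estimation error"); plt.title(f"sigma_e^2 = {c}*||theta||^2")
    plt.figure(); plt.plot(m_values, mses, "o-")
    plt.xlabel("m"); plt.ylabel("Normalized-Test-MSE"); plt.title(f"sigma_e^2 = {c}*||theta||^2")
plt.show()
```

This produces the four plots asked for in parts (a) and (b): estimation error and Normalized-Test-MSE versus m, for each of the two noise levels.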

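For part 2, a possible sketch of the n_small sweep is given below, again assuming a least-squares fit and that x_small keeps only the first n_small coordinates of x. The grid of n_small values and the test-set size are illustrative choices.

```python
import numpy as np
import matplotlib.pyplot as plt

n, m, m_test = 100, 80, 10000
theta = np.arange(100, 0, -1).astype(float)
rng = np.random.default_rng(1)

def normalized_test_mse(n_small, sigma2_e):
    # Full-dimensional data is generated; only the first n_small entries of x are
    # used for fitting and prediction. (One could instead fix a single training
    # set and reuse it across the whole sweep.)
    X = rng.standard_normal((m, n))
    y = X @ theta + rng.standard_normal(m) * np.sqrt(sigma2_e)
    Xt = rng.standard_normal((m_test, n))
    yt = Xt @ theta + rng.standard_normal(m_test) * np.sqrt(sigma2_e)
    theta_small = np.linalg.lstsq(X[:, :n_small], y, rcond=None)[0]
    y_pred = Xt[:, :n_small] @ theta_small
    return np.mean((yt - y_pred) ** 2) / np.mean(yt ** 2)

n_small_grid = range(1, m)                     # keep n_small below m = 80
for c in (0.01, 0.1):
    sigma2_e = c * np.sum(theta ** 2)
    mses = [normalized_test_mse(k, sigma2_e) for k in n_small_grid]
    plt.plot(list(n_small_grid), mses, label=f"sigma_e^2 = {c}*||theta||^2")
plt.xlabel("n_small"); plt.ylabel("Normalized-Test-MSE"); plt.legend(); plt.show()
```

For interpreting the resulting plot in part 3: since the kept coordinates carry the largest entries of θ, one would expect the error to fall at first as the bias shrinks, and then to grow again as n_small approaches m and the variance of the estimate dominates; the details are what the bias-variance discussion in class addresses.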