FIT1045 Algorithms and programming in Python, S2-2020
Programming Assignment

Assessment value: 22% (10% for Part 1 + 12% for Part 2)
Due: Week 6 (Part 1), Week 11 (Part 2)

Objectives

The objectives of this assignment are:
- To demonstrate the ability to implement algorithms using basic data structures and operations on them.
- To gain experience in designing an algorithm for a given problem description and implementing that algorithm in Python.
- To explain the computational problems and the challenges they pose, as well as your chosen strategies and programming techniques to address them.

Submission Procedure

You are going to create a single Python module file regression.py based on the provided template. Submit a first version of this file at the due date of Part 1, Friday midnight of Week 6. Submit a second version of this file at the due date of Part 2, Friday midnight of Week 11.

For each of the two versions:
1. All tasks have to be solved and documented within the file regression.py.
2. Submit this regression.py file to Moodle.

Important Note: Please ensure that you have read and understood the university's policies on plagiarism and collusion available at https://www.monash.edu.au/students/policies/academic-integrity.html. You will be required to agree to these policies when you submit your assignment.

A common mistake students make is to use Google to find solutions to the questions. Once you have seen a solution it is often difficult to come up with your own version. The best way to avoid making this mistake is to avoid using Google. You have been given all the tools you need in workshops. If you find you are stuck, feel free to ask for assistance on Moodle, ensuring that you do not post your code.

Marks and Module Documentation

This assignment will be marked both by the correctness of your module and by your analysis of it, which has to be provided as part of your function documentation. Each of the assessed functions of your module must contain a docstring that contains, in this order:

1. A one-line short summary of what the function is computing
2. A formal input/output specification of the function
3. Some usage examples of the function (in doctest style); feel free to augment and alter the examples for the assessed functions that are already provided in the template if you feel they are incomplete or otherwise insufficient (provide some explanations in this case)
4. A paragraph explaining the challenges of the problem that the function solves and your overall approach to address these challenges
5. A paragraph explaining the specific choices of programming techniques that you have used
6. [only for Part 2] A paragraph analysing the computational complexity of the function; this does not imply that a specific computational complexity such as O(n log n) is required, just that the given complexity is understood and analysed

For example, if scaled(row, alpha) from Lecture 6 was an assessed function (in Part 2, and you are a FIT1045 student), its documentation in the submitted module could look as follows:

def scaled(row, alpha):
    """
    Provides a scaled version of a list with numeric elements.

    Input : list with numeric entries (row), scaling factor (alpha)
    Output: new list (res) of same length with res[i]==row[i]*alpha

    For example:
    >>> scaled([1, 4, -1], 2.5)
    [2.5, 10.0, -2.5]
    >>> scaled([], -23)
    []

    This is a list processing problem where an arithmetic operation has to be
    carried out for each element of an input list of unknown size.
    This means that a loop is required to iterate over all elements and compute
    their scaled version. Importantly, according to the specification, the
    function has to provide a new list instead of mutating the input list. Thus,
    the scaled elements have to be accumulated in a new list.

    In my implementation, I chose to iterate over the input list with a for-loop
    and to accumulate the computed scaled entries with an augmented assignment
    statement (+=) in order to avoid the re-creation of each intermediate list.
    This solution allows a concise and problem-related code with no need to
    explicitly handle list indices.

    The computational complexity of the function is O(n) for an input list of
    length n. This is because the for-loop is executed n times, and in each
    iteration there is a constant time operation of computing the scaled element
    and a constant time operation of appending it to the result list (due to the
    usage of the augmented assignment statement instead of a regular assignment
    statement with list concatenation).
    """
    res = []
    for x in row:
        res += [alpha*x]
    return res

Beyond the assessed functions, you are highly encouraged to add additional functions to the module that you use as part of your implementation of the assessed functions. In fact, such a decomposition is essential for a readable implementation of the assessed functions, which in turn forms the basis of a coherent and readable analysis in the documentation. All these additional functions also must contain a minimal docstring with the items 1-3 from the list above. Additionally, you can also use their docstrings to provide additional information, which you can refer to from the analysis paragraphs of the assessed functions.

This assignment has a total of 22 marks and contributes 22% of your final mark. For each day an assignment is late, the maximum achievable mark is reduced by 10% of the total. For example, if the assignment is late by 3 days (including weekends), the highest achievable mark is 70% of 22, which is 15.4. Assignments submitted 7 days after the due date will normally not be accepted.

Mark breakdowns for each of the assessed functions can be found in parentheses after each task section below. Marks are subtracted for inappropriate or incomplete discussion of the functions in their docstrings. Again, readable code with appropriate decomposition and variable names is the basis of a suitable analysis in the documentation.

Overview

In this assignment we create a Python module to perform some basic data science tasks. While the instructions contain some mathematics, the main focus is on implementing the corresponding algorithms and finding a good decomposition into subproblems and functions that solve these subproblems. In particular, we want to implement a method that is able to estimate relationships in observational data and make predictions based on these relationships. For example, consider a dataset that measures the average life expectancy for various countries together with other factors that potentially have an influence on the life expectancy, such as the average body mass index (BMI), the level of schooling in that country, or the gross domestic product (GDP); see Table 1 for an example. Can we predict the life expectancy of Rwanda?
And can we predict how the life expectancy of Liberia will change if it improved its schooling to the level of Austria?

Country   Adult Mortality   Infant Deaths   BMI    GDP         Schooling   Life expectancy
Austria   10.77             4.32            25.4   4.55e+11    15.7        81.1
Georgia   18.23             15.17           27.1   1.76e+10    13.5        74.5
Liberia   29.95             74.52           24.0   3.264e+09   9.8         61.1
Mexico    30.73             17.29           28.1   1.22e+12    12.9        76.6
Rwanda    35.31             64.04           22.1   9.509e+09   10.8        ?

Table 1: Example dataset on life expectancy.

To answer these questions, we have to find the relationships between the target variable (in case of this example the life expectancy) and the explanatory variables (in this example Adult Mortality, Infant Deaths, etc.); see Figure 1. For this we want to develop a method that finds linear relationships between the explanatory variables and the target variable and use it to understand those relationships and make predictions on the target variable. This method is called a linear regression. The main idea of linear regression is to use data to infer a prediction function that explains a target variable through linear effects of one or more explanatory variables. Finding the relationship between one particular explanatory variable and the target variable (e.g., the relationship between schooling and life expectancy) is called a univariate regression. Finding the joint relationship of all explanatory variables with the target variable is called a multivariate regression.

Figure 1: Visualization of the relationship between explanatory variables and the target variable. (a) Adult mortality vs. life expectancy. (b) Schooling vs. life expectancy.

In this assignment, you will develop functions that perform univariate regression (Part 1) and multivariate regression (Part 2) and use them to answer questions on a dataset on life expectancy, similar to the example above.

To help you to visually check and understand your implementation, a module for plotting data and linear prediction functions is provided.

Part 1: Univariate Regression (10%, due in Week 6)

The first task is to model a linear relationship between one explanatory variable and the target variable.

The data in this assignment is always represented as a table containing m rows and n columns and a list of length m containing the corresponding target variables. The table is represented as a list with m elements, each being a list of n values, one for each explanatory variable. An example dataset with one explanatory variable x and a target variable y would be

x = [1, 2, 3]
y = [1, 4, 6]

and an example dataset with two explanatory variables x(1), x(2) would be

data = [[1, 1], [2, 1], [1, 3]]
y = [3, 4, 7]

Task A: Optimal Slope (2 Marks)

Let us now start by modelling the linear relationship between one explanatory variable x and the target variable y based on the data of m observations (x1, y1), . . . , (xm, ym). For that, let's start out simple by modelling the relation of x and y as

y = ax ,   (1)

i.e., a straight line through the origin. For this model we want to find an optimal slope parameter a that fits the given data as well as possible. A central concept to solve this problem is the residual vector defined as

r = (y1 - ax1, y2 - ax2, . . . , ym - axm) ,

i.e., the m-component vector that contains for each data point the difference of the target value and the corresponding predicted value. Intuitively, for a good fit, all components of the residual vector should have a small magnitude (being all 0 for a perfect fit).
A popular approach is to determine the optimal parameter value a as the one that minimises the sum of squared components of the residual vector, i.e., the quantity

r · r = r1^2 + r2^2 + · · · + rm^2 ,

which we also refer to as the sum of squared residuals. With some math (that is outside the scope of this unit) we can show that for the slope parameter a that minimises the sum of squared residuals it holds that the explanatory data is orthogonal to the residual vector, i.e.,

x · r = 0 .   (2)

Here, · refers to the dot product of two vectors, which in this case is

x · r = x1r1 + x2r2 + · · · + xmrm .

Plugging in the definition of r into Equation 2 yields

(x · x) a = x · y   (3)

from which we can easily find the optimal slope a.

Instructions

Based on Equation 3, write a function slope(x, y) that computes the optimal slope for the simple regression model in Equation 1.

Input: Two lists of numbers (x and y) of equal length, representing data of an explanatory and a target variable.
Output: The optimal least squares slope (a) with respect to the given data.

For example:

def slope(x, y):
    """
    Computes the slope of the least squares regression line
    (without intercept) for explaining y through x.

    >>> slope([0, 1, 2], [0, 2, 4])
    2.0
    >>> slope([0, 2, 4], [0, 1, 2])
    0.5
    >>> slope([0, 1, 2], [1, 1, 2])
    1.0
    >>> slope([0, 1, 2], [1, 1.2, 2])
    1.04
    """

If you want to visualize the data and predictor, you can use the plotting functions provided in plotting.py. For example

>>> X = [0, 1, 2]
>>> Y = [1, 1, 2]
>>> a = slope(X, Y)
>>> b = 0
>>> linear(a, b, x_min=0, x_max=2)
>>> plot2DData(X, Y)
>>> plt.show()

will produce the plot given in Figure 2.

Figure 2: Example plot generated with the functions provided in plotting.py.

Task B: Optimal Slope and Intercept (3 Marks)

Figure 3: Linear regression with and without an intercept (models y = ax and y = ax + b) for the three points (0, 1), (1, 1.2), and (2, 2).

Consider the example regression problem in Figure 3. Even with an optimal choice of a, the regression model y = ax of Equation 1 does not fit the data well. In particular, the point (0, 1) is problematic because the model given in Equation 1 is forced to run through the origin and thus has no degree of freedom to fit a data point with x = 0. To improve this, we have to consider regression lines that are not forced to run through the origin by extending the model equation to

y = ax + b   (4)

where in addition to the slope a we also have an additive term b called the intercept. Now we have two parameters a and b to optimise, but it turns out that a simple modification of Equation 3 lets us solve this more complicated problem.

Instead of the residual being orthogonal to the explanatory data as is, we now require orthogonality to the centred version of the data. That is, x̄ · r = 0, where x̄ denotes the centred data vector defined as

x̄ = (x1 - μ, x2 - μ, . . . , xm - μ)

with μ = (x1 + x2 + · · · + xm)/m being the mean value of the explanatory data. Again, we can rewrite the above condition as a simple linear equation involving two dot products

(x̄ · x̄) a = x̄ · y   (5)

which again allows us to easily determine the slope a. However, now we also need to find the optimal intercept b corresponding to this value of a. For this we can simply find the average target value minus the average fitted value when only using the slope (or in other words, the average residual):

b = ((y1 - ax1) + · · · + (ym - axm))/m .   (6)
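
The following minimal sketch shows one way Equations (3), (5), and (6) could be turned into code. It is an illustration only, not a reference solution: the dot helper is our own, and the required docstrings are omitted.

def dot(u, v):
    """Dot product of two equal-length lists of numbers (helper)."""
    return sum(u_i * v_i for u_i, v_i in zip(u, v))

def slope(x, y):
    """Optimal slope of y = ax, following Equation (3): (x . x) a = x . y."""
    return dot(x, y) / dot(x, x)

def line(x, y):
    """Optimal slope and intercept of y = ax + b, following Equations (5) and (6)."""
    mu = sum(x) / len(x)                        # mean of the explanatory data
    centred = [x_i - mu for x_i in x]           # centred data vector
    a = dot(centred, y) / dot(centred, centred)                  # Equation (5)
    b = sum(y_i - a * x_i for x_i, y_i in zip(x, y)) / len(y)    # Equation (6)
    return a, b

On the examples used in this document, this sketch reproduces the expected values up to floating-point rounding (e.g., slope([0, 1, 2], [1, 1.2, 2]) evaluates to 1.04).
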
Instructions

Using Equations (5) and (6) above, write a function line(x, y) that computes the optimal slope and intercept for the regression model in Equation 4.

Input: Two lists of numbers (x and y) of equal length, representing data of an explanatory and a target variable.
Output: A tuple (a, b) where a is the optimal least squares slope and b the optimal intercept with respect to the given data.

For example:

>>> line([0, 1, 2], [1, 1, 2])
(0.5, 0.8333333333333333)

Task C: Best Single Variable Predictor (4 Marks)

We are now able to determine a regression model that represents the linear relationship between a target variable and a single explanatory variable. However, in usual settings like the one given in the introduction, we observe not one but many explanatory variables (e.g., in the example GDP, Schooling, etc.). As an abstract description of such a setting we consider n variables x(1), x(2), . . . , x(n) such that for each j with 1 ≤ j ≤ n we have measured m observations x(j)1, . . . , x(j)m. These conceptually correspond to the columns of a given data table. The individual rows of our data table then become n-dimensional data points that we again denote by x1, . . . , xm, but now each xi represents not a single number but a vector.

A general, i.e., multi-dimensional, linear predictor is then given by an n-dimensional weight vector a and an intercept b that together describe the target variable as

y = (a · x) + b ,   (7)

i.e., we generalise Equation 4 by turning the slope a into an n-component linear weight vector and replacing simple multiplication by the dot product (the intercept b is still a single number). Part 2 of the assignment will be about finding such general linear predictors. In this task, however, we will start out simply by finding the best univariate predictor and then represent it using a multivariate weight vector a. Thus, we need to answer two questions:

1. How to find the best univariate predictor (i.e., the one with the smallest sum of squared residuals)?
2. How to represent it as a multivariate predictor (i.e., in the form of Equation (7))?

Table 2: Example dataset with n = 2 explanatory variables x(1) and x(2), and a target variable y with m = 3 observations.

x(1)   x(2)   y
1      0      2
2      3      1
4      2      2

Figure 4: Example dataset with two explanatory variables x(1), x(2), and target variable y with m = 3 observations given in Table 2. We can see that the model y = -0.29x(2) + 2.14 is more accurate than the model y = 0.07x(1) + 1.5.

Consider the simple example from Figure 4 with two explanatory variables (the data is shown in Table 2). Here, the univariate model

y = -0.29x(2) + 2.14 ,

i.e., a(2) = -0.29, b(2) = 2.14, is better than

y = 0.07x(1) + 1.5 ,

i.e., a(1) = 0.07, b(1) = 1.5. Hence, our multivariate model in the form of Equation (7) is

y = ((0, -0.29) · x) + 2.14 .

Of course, for a general dataset with n explanatory variables x(1), . . . , x(n) and a target variable y we have to compare n different univariate predictors and embed the best one into an n-dimensional weight vector.
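
As an illustration of this selection step (not a reference solution), the sketch below fits one regression line per column, measures each candidate by its sum of squared residuals, and embeds the winner into a weight vector. It redefines a line(x, y) helper as in Task B so that the sketch is self-contained.

def line(x, y):
    """Univariate least squares fit y = a*x + b (as in Task B)."""
    mu = sum(x) / len(x)
    centred = [x_i - mu for x_i in x]
    a = sum(c * y_i for c, y_i in zip(centred, y)) / sum(c * c for c in centred)
    b = sum(y_i - a * x_i for x_i, y_i in zip(x, y)) / len(y)
    return a, b

def best_single_predictor(data, y):
    """Picks the best univariate model and embeds it in an n-dimensional weight vector."""
    n = len(data[0])
    best = None                                   # (squared error, column index, slope, intercept)
    for j in range(n):
        column = [row[j] for row in data]         # j-th explanatory variable
        a_j, b_j = line(column, y)                # univariate fit for this column
        error = sum((y_i - (a_j * x + b_j)) ** 2 for x, y_i in zip(column, y))
        if best is None or error < best[0]:
            best = (error, j, a_j, b_j)
    _, j, a_j, b_j = best
    weights = [0.0] * n
    weights[j] = a_j
    return weights, b_j

On the example given in the instructions below, this reproduces the documented output up to floating-point rounding.
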
Instructions

Based on the functions for fitting univariate regression lines, write a function best_single_predictor(data, y) that computes a general linear predictor (i.e., one that is defined for multi-dimensional data points) by picking the best univariate prediction model from all available explanatory variables.

Input: Table data with m > 0 rows and n > 0 columns, representing data of n explanatory variables, and a list y of length m containing the values of the target variable corresponding to the rows in the table.
Output: A pair (a, b) of a list a of length n and a number b, representing a linear predictor with weights a and intercept b with the following properties:

a) There is only one non-zero element of a, i.e., there is one i in range(n) such that a[j]==0 for all indices j!=i.
b) The predictor represented by (a, b) has the smallest possible squared error among all predictors that satisfy property a).

For example (see Figure 4):

>>> data = [[1, 0],
...         [2, 3],
...         [4, 2]]
>>> y = [2, 1, 2]
>>> weights, b = best_single_predictor(data, y)
>>> weights, b
([0.0, -0.2857142857142858], 2.1428571428571432)

Task D: Regression Analysis (1 Mark)

You have now developed the tools to carry out a regression analysis. In this task, you will perform a regression analysis on the life expectancy dataset, an excerpt of which was used as an example in the overview. The dataset is provided in the file /data/life_expectancy.csv.

Instructions

Find the best single predictor for the life expectancy dataset and use this predictor to answer the following two questions:

1. What life expectancy does your model predict for Rwanda?
2. What life expectancy does your model predict for Liberia, if Liberia improved its schooling to the level of Austria?

You can perform the analysis in a function regression_analysis or dynamically in the console. Provide the results of your analysis in the documentation.
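
One possible outline of such an analysis is sketched below. It is only an illustration and rests on several assumptions: that the CSV file has a header row and the column layout of Table 1 (country name first, life expectancy last, schooling as the last explanatory column), that a missing life expectancy is marked by an empty field or "?", and that a working best_single_predictor from Task C is available. Check the actual file (and adjust the path) before relying on any of this.

import csv

def predict(weights, b, row):
    """Value of the linear predictor (a . x) + b for a single data row."""
    return sum(w * v for w, v in zip(weights, row)) + b

def regression_analysis(path="data/life_expectancy.csv"):
    """Illustrative outline; column layout and missing-value marker are assumptions."""
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    body = rows[1:]                                        # skip the header row
    countries = [r[0] for r in body]
    table = [[float(v) for v in r[1:-1]] for r in body]    # explanatory columns only
    # Fit the predictor only on the rows where the life expectancy is known.
    known = [i for i, r in enumerate(body) if r[-1] not in ("", "?")]
    data = [table[i] for i in known]
    y = [float(body[i][-1]) for i in known]
    weights, b = best_single_predictor(data, y)            # from Task C
    # Question 1: prediction for Rwanda.
    rwanda = table[countries.index("Rwanda")]
    print("Rwanda:", predict(weights, b, rwanda))
    # Question 2: Liberia with Austria's level of schooling
    # (schooling is assumed to be the last explanatory column, as in Table 1).
    liberia = list(table[countries.index("Liberia")])
    liberia[-1] = table[countries.index("Austria")][-1]
    print("Liberia with Austria's schooling:", predict(weights, b, liberia))
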
Part 2: Multivariate Regression (12%, due in Week 11)

In Part 1 we have developed a method to find a univariate linear regression model (i.e., one that models the relationship between a single explanatory variable and the target variable), as well as a method that picks the best univariate regression model when multiple explanatory variables are available. In this part, we develop a multivariate regression method that models the joint linear relationship between all explanatory variables and the target variable.

Task A: Greedy Residual Fitting (6 Marks)

We start using a greedy approach to multivariate regression. Assume a dataset with m data points x1, . . . , xm where each data point x has n explanatory variables x(1), . . . , x(n), and corresponding target variables y1, . . . , ym. The goal is to find the slopes for all explanatory variables that help predicting the target variable. The strategy we use greedily picks the best predictor and adds its slope to the list of used predictors. When all slopes are computed, it finds the best intercept.

Figure 5: Continuation of the example given in Figure 4. The plane depicts the multivariate regression obtained from greedy residual fitting. The error of this model is lower than that of the individual regressions on x(1) and x(2).

For that, recall that a greedy algorithm iteratively extends a partial solution by a small augmentation that optimises some selection criterion. In our setting, those augmentation options are the inclusion of a currently unused explanatory variable (i.e., one that currently still has a zero coefficient). As selection criterion, it makes sense to look at how much a previously unused explanatory variable can improve the data fit of the current predictor. For that, it should be useful to look at the current residual vector, because it specifies the part of the target variable that is still not well explained. Note that the slope of a predictor that predicts this residual well is a good option for augmenting the current solution. Also, recall that an augmentation is used only if it improves the selection criterion. In this case, a reasonable selection criterion is again the sum of squared residuals.

What is left to do is to compute the intercept for the multivariate predictor. This can be done similarly to Equation 6 as

b = ((y1 - a · x1) + · · · + (ym - a · xm))/m .

The resulting multivariate predictor can then be written as in Equation 7. An example multivariate regression is shown in Figure 5.

Instructions

Write a function greedy_predictor(data, y) that computes a multivariate predictor using the greedy strategy similar to the one described above.

Input: A data table data of explanatory variables with m rows and n columns and a list of corresponding target variables y.
Output: A tuple (a, b) where a is the weight vector and b the intercept obtained by the greedy strategy.

For example (the first example is visualized in Figure 5):

>>> data = [[1, 0],
...         [2, 3],
...         [4, 2]]
>>> y = [2, 1, 2]
>>> weights, intercept = greedy_predictor(data, y)
>>> weights, intercept
([0.21428571428571436, -0.2857142857142858], 1.642857142857143)
>>> data = [[0, 0],
...         [1, 0],
...         [0, -1]]
>>> y = [1, 0, 0]
>>> weights, intercept = greedy_predictor(data, y)
>>> weights, intercept
([-0.49999999999999994, 0.7499999999999999], 0.7499999999999999)

Task B: Optimal Least Squares Regression (6 Marks)

Figure 6: Continuation of the example given in Figure 5. The plot depicts the optimal least squares regression and compares it to the greedy solution (for reference, the best single predictors are shown as well). The least squares regression has the smallest error.

Recall that the central idea for finding the slope of the optimal univariate regression line (with intercept) was Equation 2, which stated that the residual vector has to be orthogonal to the values of the centred explanatory variable. For multivariate regression we have many variables, and it is not surprising that for an optimal linear predictor a · x + b, it holds that the residual vector is orthogonal to each of the centred explanatory variables (otherwise we could change the predictor vector a bit to increase the fit). That is, instead of a single linear equation, we now end up with n equations, one for each data column x(i):

(x̄(i) · x̄(1)) a1 + (x̄(i) · x̄(2)) a2 + · · · + (x̄(i) · x̄(n)) an = x̄(i) · y   (8)

where x̄(j) denotes the centred version of data column x(j) and a1, . . . , an are the components of the weight vector a. For the weight vector a that satisfies these equations for all i = 0, . . . , n - 1, you can again simply find the matching intercept b as the mean residual when using just the weights a for fitting:

b = ((y1 - a · x1) + · · · + (ym - a · xm))/m .

In summary, we know that we can simply transform the problem of finding the least squares predictor to solving a system of linear equations, which we can solve by Gaussian elimination as covered in the lecture. An illustration of such a least squares predictor is given in Figure 6.
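
As an illustration of how the coefficients and right-hand side of Equation (8) could be computed for one column i, consider the following sketch (not a reference solution; the column and centred helpers are our own, and docstrings are abbreviated):

def column(data, j):
    """The j-th column of a data table, as a list (helper)."""
    return [row[j] for row in data]

def centred(v):
    """The centred version of a list of numbers, i.e. with its mean subtracted (helper)."""
    mu = sum(v) / len(v)
    return [v_i - mu for v_i in v]

def equation(i, data, y):
    """Coefficients and right-hand side of Equation (8) for data column i."""
    n = len(data[0])
    x_i = centred(column(data, i))
    coefficients = [sum(a * b for a, b in zip(x_i, centred(column(data, j))))
                    for j in range(n)]
    rhs = sum(a * b for a, b in zip(x_i, y))
    return coefficients, rhs
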
Instructions

1. Write a function equation(i, data, y) that produces the coefficients and the right-hand side of the linear equation for explanatory variable x(i) (as specified in Equation (8)).

Input: Integer i with 0 ≤ i < n, data matrix data with m rows and n columns such that m > 0 and n > 0, and a list of target values y of length m.
Output: Pair (c, d) where c is a list of coefficients of length n and d is a float, representing the coefficients and the right-hand side of Equation (8) for data column i.

For example (see Figure 6):

>>> data = [[1, 0],
...         [2, 3],
...         [4, 2]]
>>> target = [2, 1, 2]
>>> equation(0, data, target)
([4.666666666666666, 2.3333333333333326], 0.3333333333333326)
>>> equation(1, data, target)
([2.333333333333333, 4.666666666666666], -1.3333333333333335)

2. Write a function least_squares_predictor(data, y) that finds the optimal least squares predictor for the given data matrix and target vector.

Input: Data matrix data with m rows and n columns such that m > 0 and n > 0, and a list of target values y of length m.
Output: Optimal predictor (a, b) with weight vector a (len(a)==n) and intercept b such that a, b minimise the sum of squared residuals.

For example:

>>> data = [[0, 0],
...         [1, 0],
...         [0, -1]]
>>> y = [1, 0, 0]
>>> weights, intercept = least_squares_predictor(data, y)
>>> weights, intercept
([-0.9999999999999997, 0.9999999999999997], 0.9999999999999998)
>>> data = [[1, 0],
...         [2, 3],
...         [4, 2]]
>>> y = [2, 1, 2]
>>> weights, intercept = least_squares_predictor(data, y)
>>> weights, intercept
([0.2857142857142855, -0.4285714285714285], 1.7142857142857149)
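
To show how the pieces could fit together, here is a final sketch: the n equations from Equation (8) form a linear system that is solved by a generic Gaussian elimination routine with partial pivoting (a standard textbook version, not necessarily the one from the lecture), after which the intercept is computed as the mean residual. The equation function is repeated in compact form so that the sketch runs on its own; none of this is a reference solution.

def equation(i, data, y):
    """Coefficients and right-hand side of Equation (8) for column i (compact version of the earlier sketch)."""
    n = len(data[0])
    cols = [[row[j] for row in data] for j in range(n)]
    means = [sum(c) / len(c) for c in cols]
    centred = [[v - mu for v in c] for c, mu in zip(cols, means)]
    coefficients = [sum(a * b for a, b in zip(centred[i], centred[j])) for j in range(n)]
    rhs = sum(a * b for a, b in zip(centred[i], y))
    return coefficients, rhs

def solve_by_gaussian_elimination(A, b):
    """Solves the square linear system A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    # Work on the augmented matrix so that row operations act on A and b together.
    M = [row[:] + [b_i] for row, b_i in zip(A, b)]
    for col in range(n):
        # Partial pivoting: move the row with the largest entry in this column into the pivot position.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate the entries below the pivot.
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [m_r - factor * m_c for m_r, m_c in zip(M[r], M[col])]
    # Back substitution on the upper triangular system.
    x = [0.0] * n
    for col in range(n - 1, -1, -1):
        x[col] = (M[col][n] - sum(M[col][j] * x[j] for j in range(col + 1, n))) / M[col][col]
    return x

def least_squares_predictor(data, y):
    """Sets up Equation (8) for every column and solves the resulting system."""
    n = len(data[0])
    system = [equation(i, data, y) for i in range(n)]     # one (coefficients, rhs) pair per column
    A = [coefficients for coefficients, _ in system]
    rhs = [d for _, d in system]
    a = solve_by_gaussian_elimination(A, rhs)
    residuals = [y_i - sum(a_j * x_j for a_j, x_j in zip(a, row))
                 for row, y_i in zip(data, y)]
    b = sum(residuals) / len(residuals)                   # intercept as the mean residual
    return a, b

Up to floating-point rounding, this reproduces the example outputs shown above.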