
The University of Adelaide, School of Computer Science
Introduction to Statistical Machine Learning
Semester 2, 2020

Assignment 2: Implementation of AdaBoost

Submission

Instructions and submission guidelines: You must sign an assessment declaration coversheet and submit it with your assignment. Submit your assignment via Canvas (MyUni).

Reading

With this assignment, you will see how AdaBoost works on a classification task. The AdaBoost algorithm is described in class, and more information on AdaBoost can be found on the web page: https://en.wikipedia.org/wiki/AdaBoost

Please read "A Short Introduction to Boosting" by Yoav Freund and Robert E. Schapire, which can be found here:

If you find this paper difficult to understand, you may read other tutorial/survey papers on the same webpage. If and only if you want to know more about Boosting methods, you are encouraged to read the following papers on Boosting (optional):

Coding

You are provided with training data (xi, yi), i = 1, ..., n, belonging to two classes, with binary labels yi (if yi is not in {+1, -1}, you need to convert the labels into {+1, -1} first). You should use these training data to train an AdaBoost classifier.

Please implement the AdaBoost algorithm as given on page 3 of the Freund and Schapire paper. The algorithm requires that you train a weak learner on data sampled from the training set. While I expect you to design your AdaBoost program in such a way that you can plug in any weak learner, I would like you to use decision stumps for this assignment. Decision stumps are simply one-level decision trees: the learner selects an attribute for the root of the tree and immediately classifies examples based on their values for that attribute. Refer to: https://en.wikipedia.org/wiki/Decision_stump

To simplify the task, I have also provided a Matlab implementation of a decision stump (build_stump.m). This is for reference only; please be aware that you may need to rewrite/modify the decision stump code for your own needs.

There is a combinatorially large number of experiments that you could run and, likewise, a large number of measures/settings that you can report against (training time, prediction on the testing set, test time, number of boosting rounds, depth of weak learners; your implementation only has to provide stumps, but you can compare against Matlab/Python versions with deeper weak learners for AdaBoost. If you want, you can extend your code to have trees of some greater depth as weak learners). This assignment is deliberately open-ended and flexible, meaning that you can follow to some extent what interests you, but it also tests your ability to think strategically and work out what might be the most informative, interesting, and efficient things that you could do (and report on).

Please be aware that there is a law of diminishing returns. Loosely put, if you do a great job you will get 9/10, and if you do an amazing job you will get 10/10. However, for the 10% extra marks you may well have done 400% more work.

Please start early. This might be a tough algorithm to implement and debug. You can choose either Matlab, Python, or C/C++ to implement AdaBoost. I would personally suggest Matlab or Python. Your code should not rely on any 3rd-party toolbox; only Matlab's built-in APIs or Python/C/C++'s standard libraries are allowed.
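As a starting point, the following is a minimal sketch of the reweighting form of AdaBoost with exhaustively searched decision stumps, written in standard-library Python only (consistent with the no-toolbox rule above). All names here (train_stump, adaboost_train, and so on) are illustrative and are not taken from the provided build_stump.m.

    import math

    def train_stump(X, y, w):
        """Exhaustively search (feature, threshold, polarity) for the stump
        minimising the weighted 0/1 error.
        X: list of feature vectors, y: labels in {+1, -1}, w: weights summing to 1."""
        n, d = len(X), len(X[0])
        best = (float("inf"), 0, 0.0, 1)   # (weighted error, feature, threshold, polarity)
        for j in range(d):
            values = sorted(set(row[j] for row in X))
            # Candidate thresholds: midpoints between consecutive distinct values.
            for t in ((a + b) / 2.0 for a, b in zip(values, values[1:])):
                for pol in (1, -1):
                    err = 0.0
                    for i in range(n):
                        pred = 1 if pol * (X[i][j] - t) > 0 else -1
                        if pred != y[i]:
                            err += w[i]
                    if err < best[0]:
                        best = (err, j, t, pol)
        return best

    def stump_predict(stump, x):
        _, j, t, pol = stump
        return 1 if pol * (x[j] - t) > 0 else -1

    def adaboost_train(X, y, T=50):
        """AdaBoost as in Freund & Schapire: reweight examples each round and
        combine T stumps with votes alpha_t = 0.5 * ln((1 - eps_t) / eps_t)."""
        n = len(X)
        w = [1.0 / n] * n
        ensemble = []
        for _ in range(T):
            stump = train_stump(X, y, w)
            eps = max(stump[0], 1e-12)       # guard against a perfect stump
            alpha = 0.5 * math.log((1.0 - eps) / eps)
            ensemble.append((alpha, stump))
            # Reweight: up-weight misclassified examples, then renormalise.
            w = [wi * math.exp(-alpha * yi * stump_predict(stump, xi))
                 for wi, xi, yi in zip(w, X, y)]
            s = sum(w)
            w = [wi / s for wi in w]
        return ensemble

    def adaboost_predict(ensemble, x):
        """Sign of the weighted vote of all stumps."""
        score = sum(alpha * stump_predict(stump, x) for alpha, stump in ensemble)
        return 1 if score >= 0 else -1

Note that this sketch reweights the full training set each round rather than resampling from it; both variants are discussed in the Freund and Schapire paper, and either design is acceptable as long as you can justify it in your report.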
When you submit your code, please report your algorithm's training/test error on the given datasets.

You are also required to submit a report (10 pages, in PDF format), which should have the following sections (the report contributes 45% to the mark; the code 55%):

- An algorithmic description of the AdaBoost method. (5%)
- Your understanding of AdaBoost (anything that you believe is relevant to this algorithm). (5%)
- Some analysis of your implementation. You should include the training/test error curves against the number of iterations on the provided data sets in this part (see above; this part is open-ended). (20% for master students and 25% for undergraduate students)
- A comparison of your performance with a built-in package (such as fitensemble in Matlab: https://au.mathworks.com/help/stats/fitensemble.html). (5% for master students and 10% for undergraduate students)
- You may also train an SVM and compare the results of the SVM with AdaBoost. What do you observe? (10% for master students; this task is optional for undergraduate students)

In summary, you need to submit (1) the code that implements AdaBoost and (2) a report in PDF.

Data

You will use the Wisconsin Diagnostic Breast Cancer dataset to test your model. All the data points are stored in the file wdbc_data.csv. The explanation of the data fields is given in wdbc_names.txt. You need to predict the diagnosis of each sample based on the real-valued features.

There are 569 samples in wdbc_data.csv. You will use the first 300 samples for training and the remaining samples for testing.
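The following sketch shows one way to load the data and reproduce this split in standard-library Python. It assumes each row of wdbc_data.csv is an ID, a diagnosis letter ('M' or 'B'), and then the real-valued features (check wdbc_names.txt for the exact field layout), and it reuses the illustrative adaboost_train / adaboost_predict names from the sketch in the Coding section.

    import csv

    def load_wdbc(path="wdbc_data.csv"):
        """Parse the CSV into feature vectors and {+1, -1} labels."""
        X, y = [], []
        with open(path, newline="") as f:
            for row in csv.reader(f):
                if not row:
                    continue
                y.append(1 if row[1] == "M" else -1)   # malignant -> +1, benign -> -1
                X.append([float(v) for v in row[2:]])
        return X, y

    X, y = load_wdbc()
    X_train, y_train = X[:300], y[:300]    # first 300 samples for training
    X_test, y_test = X[300:], y[300:]      # remaining 269 samples for testing

    ensemble = adaboost_train(X_train, y_train, T=100)
    train_err = sum(adaboost_predict(ensemble, x) != t
                    for x, t in zip(X_train, y_train)) / float(len(y_train))
    test_err = sum(adaboost_predict(ensemble, x) != t
                   for x, t in zip(X_test, y_test)) / float(len(y_test))
    print("train error: %.4f  test error: %.4f" % (train_err, test_err))

Recording train_err and test_err over a range of T values gives the error-versus-iterations curves requested in the report.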
