DS 5230 Unsupervised Machine Learning and Data Mining
Spring 2021 Homework 1

Submission Instructions

- It is recommended that you complete these exercises in Python 3 and submit your solutions as a Jupyter notebook.
- You may use any other language, as long as you include a README with simple, clear instructions on how to run (and, if necessary, compile) your code.
- Please upload all files (code, README, written answers, etc.) to GradeScope in a single zip file.

Exercise 1: Understanding Apriori and FP-Growth

1. Consider a dataset for frequent set mining as in the following table, where we have 6 binary features and each row represents a transaction.

TID   Items

a. Illustrate the first three passes of the Apriori algorithm (set sizes 1, 2 and 3) for a support threshold of 3 transactions. For each stage, list the candidate sets Ck and the frequent sets Lk. What are the maximal frequent sets discovered in the first 3 levels? (A code sketch of these passes follows this exercise.)

b. Pick one of the maximal sets that has more than 1 item, and check whether any of its subsets form association rules with support (i.e. frequency) at least 0.3 and confidence at least 0.6. Please explain your answer and show your work.

2. Given the following transaction database, let min support = 2 and answer the following questions.

a. Construct the FP-tree from the transaction database and draw it here.

b. Draw d's conditional FP-tree, and find the frequent patterns (i.e. itemsets) based on d's conditional FP-tree.
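To make the mechanics of part 1a concrete, here is a minimal pure-Python sketch of the first three Apriori passes. The transactions are made-up placeholders (substitute the rows of the table above), `frequent` is a hypothetical helper name, and the support threshold is the absolute count of 3 used in the question.

```python
# Pure-Python sketch of Apriori passes 1-3 (Exercise 1.1a). The transactions
# below are made-up placeholders: substitute the rows of the table above.
from itertools import combinations

transactions = [
    {"a", "b", "c"},
    {"a", "c", "d"},
    {"b", "c", "e"},
    {"a", "b", "c", "e"},
    {"b", "e", "f"},
]
MIN_SUPPORT = 3  # absolute support threshold, in transactions

def frequent(candidates):
    """Keep the candidate itemsets contained in >= MIN_SUPPORT transactions."""
    return {c for c in candidates
            if sum(1 for t in transactions if c <= t) >= MIN_SUPPORT}

items = {i for t in transactions for i in t}
L = [frequent({frozenset([i]) for i in items})]          # L1 from C1
for k in (2, 3):
    # Join step: unions of two frequent (k-1)-sets that have size k.
    Ck = {a | b for a in L[-1] for b in L[-1] if len(a | b) == k}
    # Prune step (Apriori property): every (k-1)-subset must be frequent.
    Ck = {c for c in Ck
          if all(frozenset(s) in L[-1] for s in combinations(c, k - 1))}
    L.append(frequent(Ck))                               # Lk from Ck

for k, Lk in enumerate(L, 1):
    print(f"L{k}:", sorted(sorted(s) for s in Lk))
```

Listing each Ck before the prune and frequency filters gives the candidate sets the question asks for; the maximal frequent sets are the members of L1, L2 and L3 that have no frequent superset among them.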
Exercise 2: Probability Basics

1. Let X and Y be two independent random variables with densities p(x) and p(y), respectively. Show the following two properties:

   $E_{p(x,y)}[X + aY] = E_{p(x)}[X] + a\,E_{p(y)}[Y]$    (1)
   $\mathrm{Var}_{p(x,y)}[X + aY] = \mathrm{Var}_{p(x)}[X] + a^2\,\mathrm{Var}_{p(y)}[Y]$    (2)

for any scalar constant $a \in \mathbb{R}$. Hint: use the definitions of expectation and variance,

   $E_{p(x)}[X] = \int x\,p(x)\,dx$    (3)
   $\mathrm{Var}_{p(x)}[X] = E_{p(x)}[X^2] - \left(E_{p(x)}[X]\right)^2$    (4)

2. Let X be a random variable with a Beta distribution,

   $p(x; \alpha, \beta) = \dfrac{x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha, \beta)}$    (5)

where $B(\alpha, \beta)$ is the Beta function. Prove that

   $E[X] = \dfrac{\alpha}{\alpha + \beta}$    (6)
   $\mathrm{Var}[X] = \dfrac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}$    (7)

3. Suppose we observe N i.i.d. data points $D = \{x_1, x_2, \ldots, x_N\}$, where each $x_n \in \{1, 2, \ldots, K\}$ is a random variable with a categorical (discrete) distribution parameterized by $\theta = (\theta_1, \theta_2, \ldots, \theta_K)$, i.e.,

   $x_n \sim \mathrm{Cat}(\theta_1, \theta_2, \ldots, \theta_K), \quad n = 1, 2, \ldots, N$    (8)

In detail, this distribution means that for a specific n, the random variable $x_n$ follows $P(x_n = k) = \theta_k$, $k = 1, 2, \ldots, K$. Equivalently, we can also write the density function of a categorical distribution as

   $p(x_n) = \prod_{k=1}^{K} \theta_k^{I[x_n = k]}$    (9)

where $I[x_n = k]$ is the indicator function, defined as

   $I[x_n = k] = 1$ if $x_n = k$, and $0$ otherwise.    (10)

a. Now we want to prove that the joint distribution of multiple i.i.d. categorical variables is a multinomial distribution. Show that the density function of $D = \{x_1, x_2, \ldots, x_N\}$ is

   $p(D \mid \theta) = \prod_{k=1}^{K} \theta_k^{N_k}$    (11)

where $N_k = \sum_{n=1}^{N} I[x_n = k]$ is the number of random variables belonging to category k. In other words, $D = \{x_1, x_2, \ldots, x_N\}$ follows a multinomial distribution.

b. We often call $p(D \mid \theta)$ the likelihood function, since it indicates how probable it is that we observe this dataset given the model parameters $\theta$. By Bayes' rule, we can rewrite the posterior as

   $p(\theta \mid D) = \dfrac{p(D \mid \theta)\,p(\theta)}{p(D)}$    (12)

where $p(\theta)$ is the prior distribution, which encodes our prior knowledge about the model parameters, and $p(D)$ is the distribution of the observations (data), which is constant w.r.t. the posterior. Thus we can write

   $p(\theta \mid D) \propto p(D \mid \theta)\,p(\theta)$    (13)

If we assume a Dirichlet prior on $\theta$, i.e.,

   $p(\theta; \alpha_1, \alpha_2, \ldots, \alpha_K) = \mathrm{Dir}(\theta; \alpha_1, \alpha_2, \ldots, \alpha_K) = \dfrac{1}{B(\alpha)} \prod_{k=1}^{K} \theta_k^{\alpha_k - 1}$    (14)

where $B(\alpha)$ is the Beta function and $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_K)$, now try to derive the joint distribution $p(D, \theta)$, ignoring the terms constant w.r.t. $\theta$. Show that the posterior is in fact also Dirichlet, parameterized as follows:

   $p(\theta \mid D) = \mathrm{Dir}(\theta; \alpha_1 + N_1, \alpha_2 + N_2, \ldots, \alpha_K + N_K)$    (15)

[In fact, this nice property is called conjugacy in machine learning. A general statement is: if the prior distribution is conjugate to the likelihood, then the posterior will follow the same distribution as the prior. Search "conjugate prior" and "exponential family" for more detail if you are interested.]
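For problem 2, a hint-level worked step (not the full proof): the mean follows from recognizing another Beta integral, assuming the standard identity $B(a, b) = \Gamma(a)\Gamma(b)/\Gamma(a+b)$; the variance is analogous via $E[X^2]$.

```latex
% Hint-level sketch for Exercise 2.2: the mean of a Beta random variable.
\begin{align*}
E[X] &= \int_0^1 x \cdot \frac{x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha,\beta)}\, dx
      = \frac{B(\alpha+1, \beta)}{B(\alpha, \beta)} \\
     &= \frac{\Gamma(\alpha+1)\,\Gamma(\beta)}{\Gamma(\alpha+\beta+1)}
        \cdot \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\,\Gamma(\beta)}
      = \frac{\alpha}{\alpha+\beta}.
\end{align*}
```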
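For problem 3b, the core of the requested derivation is a single multiplication of likelihood (11) by prior (14); a sketch of that step:

```latex
% Sketch of the key step for Exercise 2.3b: likelihood (11) times prior (14).
\begin{align*}
p(\theta \mid D)
  &\propto p(D \mid \theta)\, p(\theta)
   = \prod_{k=1}^{K} \theta_k^{N_k}
     \cdot \frac{1}{B(\alpha)} \prod_{k=1}^{K} \theta_k^{\alpha_k - 1} \\
  &\propto \prod_{k=1}^{K} \theta_k^{(\alpha_k + N_k) - 1},
\end{align*}
% which is the kernel of Dir(theta; alpha_1 + N_1, ..., alpha_K + N_K),
% matching equation (15) after normalizing.
```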
Before you work on the implementation exercises, you need to install Jupyter and PySpark by reading "Instructions on PySpark Installation.pdf".

Exercise 3: Exploratory Analysis and Data Visualization

In this exercise, we will be looking at a public citation dataset from Aminer (https://aminer.org/), a free online service used to index and search academic social networks. You will perform some exploratory analysis and data visualization for this dataset. The dataset covers publications up to the year 2012 and can be downloaded in the attached file called q3 dataset.txt. We show an example item format in README.txt.

The ArnetMiner public citation dataset is a real-world dataset containing lots of noise. For example, you may see a venue name like "The Truth About Managing People...And Nothing But the Truth". However, you are not expected to do data cleaning in this phase.

1. Count the number of distinct authors, publication venues (conferences and journals), and publications in the dataset. (A parsing sketch follows this exercise.)

a. List each of the counts.

b. Are these numbers likely to be accurate? As an example, look up all the publication venue names associated with the conference Principles and Practice of Knowledge Discovery in Databases [1].

c. In addition to the problem in 1.b, what other problems arise when you try to determine the number of distinct authors in a dataset?

2. We will now look at the publications associated with each author and venue.

a. For each author, construct the list of publications. Plot a histogram of the number of publications per author (use a logarithmic scale on the y axis).

b. Calculate the mean and standard deviation of the number of publications per author. Also calculate the Q1 (1st quartile [2]), Q2 (2nd quartile, or median) and Q3 (3rd quartile) values. Compare the median to the mean, and explain the difference between the two values based on the standard deviation and the 1st and 3rd quartiles.

c. Now construct a list of publications for each venue. Plot a histogram of the number of publications per venue. Also calculate the mean, standard deviation, median, Q1 and Q3 values. What is the venue with the largest number of publications in the dataset?

3. Now construct the list of references (that is, the cited publications) for each publication. Then in turn use this set to calculate the number of citations for each publication (that is, the number of times a publication is cited).

a. Plot a histogram of the number of references and citations per publication. What is the publication with the largest number of references? What is the publication with the largest number of citations?

b. Calculate the so-called impact factor for each venue: compute the total number of citations for the publications in the venue, and then divide this number by the number of publications for the venue. Plot a histogram of the results. (See the impact-factor sketch after this exercise.)

c. What is the venue with the highest apparent impact factor? Do you believe this number?

d. Now repeat the calculation from part b, but restrict the calculation to venues with at least 10 publications. How does your histogram change? List the citation counts for all publications from the venue with the highest impact factor. How does the impact factor (mean number of citations) compare to the median number of citations?

e. Finally, construct a list of publications for each publication year. Use this list to plot the average number of references and average number of citations per publication as a function of time. Explain the differences you see in the trends.

[1] https://en.wikipedia.org/wiki/ECML_PKDD
[2] https://en.wikipedia.org/wiki/Quartile
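A minimal parsing sketch for parts 1 and 2a/2b. It assumes the record tags commonly used by ArnetMiner citation files ('#*' title, '#@' semicolon-separated authors, '#c' venue); the actual tags and separators are defined in README.txt, so verify them before relying on the counts.

```python
# Sketch for Exercise 3, parts 1 and 2a/2b. Assumed ArnetMiner-style records:
#   #*  title (one per publication)
#   #@  authors, separated by semicolons
#   #c  venue name
# Verify these tags against README.txt before relying on the counts.
from collections import defaultdict

import matplotlib.pyplot as plt
import numpy as np

authors, venues, n_pubs = set(), set(), 0
pubs_per_author = defaultdict(int)

with open("q3 dataset.txt", encoding="utf-8") as f:   # adjust the path as needed
    for line in f:
        line = line.strip()
        if line.startswith("#*"):                     # a new publication
            n_pubs += 1
        elif line.startswith("#@"):                   # its author list
            for name in line[2:].split(";"):
                name = name.strip()
                if name:
                    authors.add(name)
                    pubs_per_author[name] += 1
        elif line.startswith("#c"):                   # its venue
            venue = line[2:].strip()
            if venue:
                venues.add(venue)

print(f"{n_pubs} publications, {len(authors)} authors, {len(venues)} venues")

# Parts 2a/2b: histogram (log y-axis) and quartiles of publications per author.
counts = np.array(list(pubs_per_author.values()))
print("mean", counts.mean(), "std", counts.std(),
      "quartiles", np.percentile(counts, [25, 50, 75]))
plt.hist(counts, bins=100, log=True)
plt.xlabel("publications per author")
plt.ylabel("number of authors (log scale)")
plt.show()
```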
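For parts 3b and 3d, once you have a venue-to-publications map and a per-publication citation count, the impact factor is a one-line aggregation. The structures and values below are dummy placeholders for illustration only.

```python
# Sketch for Exercise 3, parts 3b and 3d. `pubs_by_venue` and `citation_count`
# stand in for the structures you build while parsing (venue -> publication
# ids, publication id -> number of times it is cited); dummy data below.
pubs_by_venue = {"VLDB": ["p1", "p2"], "Tiny Workshop": ["p3"]}
citation_count = {"p1": 10, "p2": 4, "p3": 50}

# Impact factor: total citations of a venue's publications / publication count.
impact = {venue: sum(citation_count[p] for p in pubs) / len(pubs)
          for venue, pubs in pubs_by_venue.items() if pubs}
print(impact)  # {'VLDB': 7.0, 'Tiny Workshop': 50.0}

# Part 3d: small venues can dominate the ranking, so filter before comparing.
impact_10plus = {venue: f for venue, f in impact.items()
                 if len(pubs_by_venue[venue]) >= 10}
```

The dummy numbers already hint at part 3c: a venue with a single highly cited paper tops the unfiltered ranking.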
Exercise 4: Market Basket Analysis of Academic Communities

In this problem, you will try to apply frequent pattern mining techniques to the real-world bibliographic dataset from Aminer (https://aminer.org/). One thing worth noting is that you are required to consider the whole dataset, instead of running on only part of it. You may use any Apriori or FP-growth implementation that is made available in existing libraries. We recommend that you use the implementation in Spark (https://spark.apache.org/).

1. The dataset included with this problem is q4 dataset.txt. Parse this data, and comment on how it differs from the previous file (q3 dataset.txt) in terms of the number of publications, authors, venues, references, and years of publication.

2. Coauthor discovery: please use FP-Growth to analyze coauthor relationships, treating each paper as a basket of authors. (A PySpark sketch follows this exercise.)

a. What happens when you successively decrease the support threshold using the values {1e-4, 1e-5, 0.5e-5, 1e-6}?

b. Keep threshold = 0.5e-5 and report the top 5 co-authors, according to frequency, for these researchers: Rakesh Agrawal, Jiawei Han, Zoubin Ghahramani and Christos Faloutsos.

3. Academic community discovery: in computer science, communities tend to organize around conferences. Here are 5 key conferences for areas of data science:

- Machine learning: NIPS/NeurIPS (Neural Information Processing Systems) [3]
- Data mining: KDD (Conference on Knowledge Discovery and Data Mining)
- Databases: VLDB (Very Large Data Bases)
- Computer networks: INFOCOM (International Conference on Computer Communications)
- Natural language processing: ACL (Association for Computational Linguistics)

a. We will now use FP-Growth to analyze academic communities. To do so, represent each author as a basket in which the items are the venues in which the author has at least one publication. What happens as you decrease the support threshold using the values {1e-3, 5e-4, 1e-4}?

b. Keep the threshold = 5e-4 and report results. For each of those 5 key conferences, please report the top 3 venues that authors also publish in.

[3] NIPS is today abbreviated as NeurIPS, but this dataset only contains references to NIPS.
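For Exercise 4.2, a minimal PySpark sketch using Spark ML's FP-growth. The two baskets here are placeholders for the one-basket-of-distinct-authors-per-paper data you parse from q4 dataset.txt; note that Spark's FPGrowth requires the items within a basket to be unique, and that minSupport is a fraction of the total number of baskets.

```python
# Sketch for Exercise 4.2 with Spark's FP-growth implementation.
# The two baskets below are placeholders: build one basket of distinct
# author names per paper by parsing q4 dataset.txt.
from pyspark.sql import SparkSession
from pyspark.ml.fpm import FPGrowth

spark = SparkSession.builder.appName("coauthors").getOrCreate()

baskets = [["A. Author", "B. Author"], ["A. Author", "C. Author"]]
df = spark.createDataFrame([(b,) for b in baskets], ["items"])

# minSupport is a fraction of the total basket count; rerun with the
# assignment's thresholds 1e-4, 1e-5, 0.5e-5 and 1e-6 for part (a).
fp = FPGrowth(itemsCol="items", minSupport=0.5e-5, minConfidence=0.1)
model = fp.fit(df)

# freqItemsets has columns `items` and `freq`; for part (b), filter for
# itemsets containing a given researcher and rank by freq.
model.freqItemsets.orderBy("freq", ascending=False).show(20, truncate=False)
```

The same pipeline covers Exercise 4.3: swap in one basket per author containing that author's venues, and use the thresholds given there.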