
STATS 731, 2020, Semester 1
Assignment 3 (5%)
Due: 5pm Friday 29th May, as a Canvas file upload

Question 1 [18 marks]

A few years ago, some colleagues and I measured the masses of the black holes in the centres of some galaxies using a technique called reverberation mapping. The file black_hole_masses.csv has the measurements. These are actually the log10 of the mass measurements in solar masses, so 6 means one million suns, 7 means ten million suns, and so on. For simplicity, we'll just call them measurements, and the true log-masses we'll just call masses. The stdev column is an estimate of the likely size of the measurement error, such that

    measurement[i] ~ Normal(true_mass[i], stdev[i]^2).    (1)

In this question you will use two BUGS models, which we'll call the simple model and the hierarchical model. The simple model is:

model
{
    for(i in 1:length(measurement))
    {
        # Casual wide prior for each true mass
        true_mass[i] ~ dnorm(0, 1/1000^2)
        measurement[i] ~ dnorm(true_mass[i], 1/stdev[i]^2)
    }
}

The hierarchical model is:

model
{
    # Casual wide priors now apply to the hyperparameters
    mu ~ dnorm(0, 1/1000^2)
    log_sigma ~ dnorm(0, 1/10^2)
    sigma <- exp(log_sigma)

    for(i in 1:length(measurement))
    {
        true_mass[i] ~ dnorm(mu, 1/sigma^2)
        measurement[i] ~ dnorm(true_mass[i], 1/stdev[i]^2)
    }
}

In the hierarchical model, mu and sigma are thought to describe the overall population of black holes, for which the observed ones can be considered a representative sample.

(a) [6 marks] Draw a PGM for the simple model and a PGM for the hierarchical model. For the latter, I don't mind whether you explicitly include the deterministic nodes, or merge sigma and log_sigma into one node for presentation purposes.

(b) [2 marks] Run the simple model for a lot of iterations and obtain the posterior distribution for the true log-mass of the first black hole. Summarise it using the posterior mean ± the posterior standard deviation, which for a normal posterior is a 68% central credible interval.
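Under the simple model, each true_mass[i] has a conjugate normal prior and a normal likelihood, so its posterior is available in closed form and can be used to check MCMC output. A minimal sketch of that update (the measurement value 6.5 and stdev 0.3 below are made-up illustrative numbers, not values from black_hole_masses.csv):

```python
# Conjugate normal-normal update for one black hole under the simple model.
# Prior: true_mass ~ Normal(0, 1000^2); likelihood: measurement ~ Normal(true_mass, stdev^2).
# The measurement and stdev here are hypothetical, not from black_hole_masses.csv.
measurement = 6.5
stdev = 0.3

prior_mean, prior_sd = 0.0, 1000.0

# Precisions (inverse variances) add, matching the dnorm parameterisation in BUGS.
post_prec = 1.0 / prior_sd**2 + 1.0 / stdev**2
post_mean = (prior_mean / prior_sd**2 + measurement / stdev**2) / post_prec
post_sd = post_prec**-0.5

# With such a wide prior, the prior precision is negligible, so the posterior is
# essentially Normal(measurement, stdev^2) -- hence "obvious in hindsight" in (b).
print(post_mean, post_sd)
```

Because the prior standard deviation (1000) dwarfs the measurement error, the posterior mean and standard deviation here come out indistinguishable from the measurement and its stdev.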
The result should be obvious in hindsight.

(c) [2 marks] The hierarchical model's prior has dependence between mu, sigma, and the true mass parameters. Modify the model so that it expresses exactly the same prior assumptions, but the prior has independence between all stochastic nodes. Hint: This is the pre-whitening idea that we saw for the starling ANOVA model.

(d) [4 marks] Run either version of the hierarchical model for a lot of iterations and summarise the posterior distributions for mu, sigma, and true_mass[1] using any summaries you think are appropriate.

(e) [4 marks] Explain why: (i) sigma has a greater than 50% posterior probability of being smaller than sqrt(mean((data$measurement - mean(data$measurement))^2)); (ii) the true mass of the first black hole is more likely to be above its measurement than below it; and (iii) the uncertainty on the true mass of the first black hole is smaller with the hierarchical model than with the simple model.[1]

Question 2 [18 marks]

Consider the following simulated data, which I generated in R using either runif() or rexp().

x = c(0.610164901707321, 1.99984208494425, 1.50817369576544, 0.707493807654828,
      1.49413506453857)

In this question, perform model averaging/selection to try to infer whether I used runif() or rexp(). Let U be the proposition that it was runif() and E be the proposition that it was rexp(). If U is true, then the prior and sampling distribution are

    log(b) ~ Normal(0, 1)    (2)
    x_i | b ~ Uniform(0, b)    (3)

If E is true, then the prior and sampling distribution are

    log(lambda) ~ Normal(0, 1)    (4)
    x_i | lambda ~ Exponential(lambda)    (5)

(a) [3 marks] Both models E and U imply prior predictive distributions for the data, and hence the data mean x̄. Would the two prior predictive distributions for x̄ be the same or different? Explain your answer.

(b) [2 marks] Part (a) implies that learning only x̄ would provide some information about whether E or U is true.
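One way to get intuition for (a) and (b) is to simulate x̄ from both prior predictive distributions: draw the parameter from its prior, draw five data points, and average. A Monte Carlo sketch (the seed and number of repetitions are arbitrary choices, not part of the assignment):

```python
import math
import random

random.seed(0)
n, reps = 5, 20000  # five data points per draw, as in the assignment's x

def xbar_U():
    # U: log(b) ~ Normal(0, 1), then x_i | b ~ Uniform(0, b)
    b = math.exp(random.gauss(0.0, 1.0))
    return sum(random.uniform(0.0, b) for _ in range(n)) / n

def xbar_E():
    # E: log(lambda) ~ Normal(0, 1), then x_i | lambda ~ Exponential(lambda)
    lam = math.exp(random.gauss(0.0, 1.0))
    return sum(random.expovariate(lam) for _ in range(n)) / n

draws_U = [xbar_U() for _ in range(reps)]
draws_E = [xbar_E() for _ in range(reps)]

# The prior predictive means differ: E[xbar | U] = E[b]/2 = exp(1/2)/2, while
# E[xbar | E] = E[1/lambda] = exp(1/2), so xbar alone carries information about U vs E.
print(sum(draws_U) / reps, sum(draws_E) / reps)
```

The lognormal moment identities in the final comment follow from E[exp(Z)] = exp(1/2) for Z ~ Normal(0, 1).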
Does this seem reasonable to you?

(c) [4 marks] Write down analytical expressions for the marginal likelihoods p(x | U) and p(x | E). Retain all constant factors.

(d) [4 marks] Numerically find the values of the two marginal likelihoods.

(e) [2 marks] Find the Bayes Factor (either way around) and also the posterior probabilities of U and E, assuming prior probabilities of 1/2 each.

(f) [3 marks] If p(b | U) were made much wider, the Bayes Factor would strongly favour E. Explain why this occurs.

[1] The phenomenon in (i) is known as shrinkage, and (ii) and (iii) are sometimes described as one unknown quantity borrowing strength from measurements of others.
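The marginal likelihoods in (c)-(d) are one-dimensional integrals of the likelihood over the prior, so one way to sanity-check an analytical answer is simple Monte Carlo: draw the parameter from its Normal(0, 1) prior on the log scale and average the likelihood. A sketch of that approach, not a worked answer (the seed and number of draws are arbitrary):

```python
import math
import random

random.seed(1)

x = [0.610164901707321, 1.99984208494425, 1.50817369576544,
     0.707493807654828, 1.49413506453857]
n, xmax, xsum = len(x), max(x), sum(x)

draws = 200000

def lik_U(b):
    # Uniform(0, b) likelihood for the whole sample: b^(-n) if b exceeds max(x), else 0.
    return b**-n if b > xmax else 0.0

def lik_E(lam):
    # Exponential(lam) likelihood for the whole sample: lam^n * exp(-lam * sum(x)).
    return lam**n * math.exp(-lam * xsum)

# Marginal likelihood = prior expectation of the likelihood, estimated by
# averaging over draws of exp(z) with z ~ Normal(0, 1).
mZ_U = sum(lik_U(math.exp(random.gauss(0.0, 1.0))) for _ in range(draws)) / draws
mZ_E = sum(lik_E(math.exp(random.gauss(0.0, 1.0))) for _ in range(draws)) / draws

print(mZ_U, mZ_E, mZ_U / mZ_E)  # the ratio is the Bayes Factor in favour of U
```

Plain Monte Carlo like this is adequate here because both integrals are one-dimensional and the likelihoods are bounded; for the assignment itself, quadrature on the analytical expressions from (c) would also work.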
