• Subject Code : ASB-4408
  • Subject Name : Financial econometrics

Wald, Likelihood Ratio and Lagrange Multiplier

Question B2

a. Describe briefly the three hypothesis testing procedures that are available under maximum likelihood estimation. Which is likely to be calculated in practice and why?

The three hypothesis testing procedures are the Wald, Likelihood Ratio (LR) and Lagrange Multiplier (LM) tests.

The Wald test approximates the LR test, but has the advantage that it only requires estimating one model. The Wald test works by testing the null hypothesis that a set of parameters is equal to some value. In the model here, the null hypothesis is that the two coefficients of interest are simultaneously equal to zero. If the test fails to reject the null hypothesis, this suggests that removing the variables from the model will not substantially harm the fit of that model, since a predictor with a coefficient that is very small relative to its standard error is generally not doing much to help predict the dependent variable.
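As an illustration only, here is a minimal numpy/scipy sketch of a Wald test of the joint restriction that two regression coefficients are zero; the data, design matrix and restriction matrix R are entirely hypothetical, and the chi-squared form of the statistic is used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: y depends on x1 only; x2 and x3 are irrelevant.
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
y = 1.0 + 0.5 * X[:, 1] + rng.normal(size=n)

# OLS estimates and their covariance matrix.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
sigma2 = resid @ resid / (n - X.shape[1])
V = sigma2 * np.linalg.inv(X.T @ X)

# H0: beta2 = beta3 = 0, written as R beta = r.
R = np.array([[0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
r = np.zeros(2)

diff = R @ beta_hat - r
W = diff @ np.linalg.inv(R @ V @ R.T) @ diff   # Wald statistic, ~ chi2(2) under H0
p_value = 1 - stats.chi2.cdf(W, df=2)
print(f"Wald statistic = {W:.3f}, p-value = {p_value:.3f}")
```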

The LR test is performed by estimating two models and comparing the fit of one model with the fit of the other. Removing predictor variables from a model will almost always make the model fit less well, but it is necessary to test whether the observed difference in model fit is statistically significant. The LR test does this by comparing the log-likelihoods of the two models; if this difference is statistically significant, then the less restrictive model (the one with more variables) is said to fit the data significantly better than the more restrictive model. If one has the log-likelihoods from the models, the LR test is fairly simple to calculate. The formula for the LR test statistic is:

LR = −2·ln( L(m1) / L(m2) ) = 2·( loglik(m2) − loglik(m1) )
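Given the two maximised log-likelihoods, the statistic is trivial to compute and is compared against a chi-squared distribution with degrees of freedom equal to the number of restrictions. A minimal sketch, assuming the two log-likelihood values (made up here) have already been obtained from the fitted restricted model m1 and unrestricted model m2:

```python
from scipy import stats

# Hypothetical maximised log-likelihoods from the two fitted models.
loglik_restricted = -1420.7    # m1: fewer parameters
loglik_unrestricted = -1415.2  # m2: more parameters
n_restrictions = 2             # number of parameters set to zero under H0

LR = 2 * (loglik_unrestricted - loglik_restricted)
p_value = 1 - stats.chi2.cdf(LR, df=n_restrictions)
print(f"LR = {LR:.3f}, p-value = {p_value:.3f}")
```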

The Lagrange Multiplier test requires estimating only a single model. The difference is that, with the Lagrange Multiplier test, the model estimated does not include the parameter(s) of interest. This means that, in our example, we can use the Lagrange Multiplier test to test whether adding the extra variables (science and math) to the model would result in a significant improvement in model fit. The test statistic is calculated from the slope of the likelihood function at the observed values of the variables in the model. This estimated slope, or "score", is the reason the Lagrange Multiplier test is sometimes called the score test. The scores are then used to estimate the improvement in model fit if the additional variables were included in the model.
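One common regression form of the score test (for omitted variables, under homoskedasticity) is computed from an auxiliary regression of the restricted model's residuals on the full set of regressors, giving LM = n·R². The sketch below is illustrative only, with simulated data standing in for the variables mentioned above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 500

# Hypothetical data: the two candidate "extra" regressors do matter here.
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)   # candidate omitted variable
x3 = rng.normal(size=n)   # candidate omitted variable
y = 1.0 + 0.4 * x1 + 0.3 * x2 + 0.2 * x3 + rng.normal(size=n)

def ols_resid_r2(y, X):
    """Return OLS residuals and R-squared from regressing y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return resid, r2

# Step 1: estimate the restricted model (x2 and x3 excluded).
X_restricted = np.column_stack([np.ones(n), x1])
resid_r, _ = ols_resid_r2(y, X_restricted)

# Step 2: auxiliary regression of the restricted residuals on ALL regressors.
X_full = np.column_stack([np.ones(n), x1, x2, x3])
_, r2_aux = ols_resid_r2(resid_r, X_full)

LM = n * r2_aux                       # score statistic, ~ chi2(2) under H0
p_value = 1 - stats.chi2.cdf(LM, df=2)
print(f"LM = {LM:.3f}, p-value = {p_value:.3f}")
```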

b. What stylized fact of financial data cannot be explained using linear trend models?

  • Frequency: Stock market prices are measured every time there is a trade or somebody posts a new quote, so the frequency of the data is often very high.
  • Non-stationarity: Financial data (asset prices) are covariance non-stationary; but if we assume that we are talking about returns from here on, then we can validly treat them as stationary.
  • Linear independence: Returns typically show little evidence of linear (autoregressive) dependence, especially at low frequency.
  • Non-normality: They are not normally distributed; they are fat-tailed.
  • Volatility clustering and asymmetries in volatility: Returns exhibit volatility clustering and leverage effects (see the diagnostic sketch after this list).
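A minimal diagnostic sketch for checking some of these features on a return series follows; the `returns` array here is just placeholder Gaussian noise, whereas on real asset returns one would typically find positive excess kurtosis and clearly positive autocorrelation in the squared returns.

```python
import numpy as np

def lag1_autocorr(x):
    """First-order sample autocorrelation."""
    x = x - x.mean()
    return (x[1:] @ x[:-1]) / (x @ x)

def stylized_facts(returns):
    r = np.asarray(returns, dtype=float)
    excess_kurtosis = np.mean((r - r.mean()) ** 4) / r.var() ** 2 - 3
    return {
        "mean": r.mean(),
        "excess_kurtosis": excess_kurtosis,     # > 0 => fat tails
        "acf1_returns": lag1_autocorr(r),       # usually close to 0
        "acf1_squared": lag1_autocorr(r ** 2),  # usually clearly > 0 (volatility clustering)
    }

# Placeholder data only -- substitute a real return series here.
rng = np.random.default_rng(2)
returns = rng.normal(0, 0.01, size=2000)
print(stylized_facts(returns))
```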

c. Which of these features can be modelled using a GARCH (1,1) process?

GARCH models are designed to capture the volatility clustering effects in the returns (GARCH(1,1) can model the dependence in the squared returns, or squared residuals), and they can also capture some of the unconditional leptokurtosis, so that even if the residuals of a linear model of the required form, the ut's, are leptokurtic, the standardised residuals from the GARCH estimation are likely to be less leptokurtic. Standard GARCH models cannot, however, account for leverage effects.
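A minimal simulation sketch of a GARCH(1,1) process (with illustrative, made-up parameter values) shows both effects: the simulated disturbances are leptokurtic, while the standardised residuals ut/σt are approximately Gaussian.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative GARCH(1,1) parameters (alpha1 + beta < 1 for covariance stationarity).
alpha0, alpha1, beta = 0.0001, 0.15, 0.8
T = 5000

z = rng.standard_normal(T)                 # i.i.d. N(0,1) innovations
sigma2 = np.empty(T)
u = np.empty(T)
sigma2[0] = alpha0 / (1 - alpha1 - beta)   # start at the unconditional variance

for t in range(T):
    if t > 0:
        sigma2[t] = alpha0 + alpha1 * u[t - 1] ** 2 + beta * sigma2[t - 1]
    u[t] = np.sqrt(sigma2[t]) * z[t]

def excess_kurtosis(x):
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2 - 3

print("excess kurtosis of u_t:          ", round(excess_kurtosis(u), 2))                     # > 0
print("excess kurtosis of u_t / sigma_t:", round(excess_kurtosis(u / np.sqrt(sigma2)), 2))   # ~ 0
```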

d. Why, in empirical research, have researchers preferred GARCH(1,1) models to pure ARCH(p) models?

Comparing the GARCH(1,1) model with the ARCH(q) model: when we estimate an ARCH model, we require αi ≥ 0, i = 1, 2, ..., q (since the variance cannot be negative), and the GARCH(1,1) model goes some way towards getting around this. The GARCH(1,1) model has only three parameters in the conditional variance equation, compared with q+1 for the ARCH(q) model, so it is more parsimonious. Since there are fewer parameters than in a typical qth-order ARCH model, it is less likely that the estimated values of one or more of these three parameters will be negative than for all q+1 ARCH parameters. Also, the GARCH(1,1) model can usually still capture all of the significant dependence in the squared returns, since it is possible to write the GARCH(1,1) model as an ARCH(∞), so that lags of the squared residuals back into the infinite past help to explain the current value of the conditional variance, ht.
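The ARCH(∞) representation referred to here is σt² = α0/(1−β) + α1·Σ β^j·ut−1−j² (summing over j ≥ 0), so the weights on lagged squared residuals decay geometrically. A tiny sketch of those implied weights, using illustrative parameter values:

```python
# Implied ARCH(infinity) weights of a GARCH(1,1): weight on u^2_{t-1-j} is alpha1 * beta**j.
alpha0, alpha1, beta = 0.0001, 0.15, 0.8

intercept = alpha0 / (1 - beta)
weights = [alpha1 * beta ** j for j in range(10)]

print("intercept:", round(intercept, 6))
for j, w in enumerate(weights):
    print(f"weight on u^2_(t-1-{j}): {w:.4f}")
```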

e. Describe two extensions to the original GARCH model. What additional characteristics of financial data might they be able to capture?

The conditional variance equations for the EGARCH and GJR models are, respectively:

log(σt²) = ω + β·log(σt−1²) + γ·ut−1/√(σt−1²) + α·[ |ut−1|/√(σt−1²) − √(2/π) ]

σt² = α0 + α1·ut−1² + β·σt−1² + γ·ut−1²·It−1

where It−1 = 1 if ut−1 < 0, and 0 otherwise.

For a leverage effect, we would expect to see γ < 0 in the EGARCH model and γ > 0 in the GJR model, so that negative shocks increase the conditional variance by more than positive shocks of the same magnitude. The EGARCH model also has the added benefit that it is expressed in terms of log(σt²), so that even if the parameters are negative, the conditional variance will always be positive. We do not therefore have to artificially impose non-negativity constraints.

One form of the GARCH-M model can be written

yt = µ + (other terms) + δ·σt−1 + ut,   ut ~ N(0, σt²)

σt² = α0 + α1·ut−1² + β·σt−1²

so that the model allows the lagged value of the conditional volatility to affect the return. In other words, our best current estimate of the total risk of the asset influences the return, so that we would expect a positive coefficient for δ. Note that some authors use δ·σt (the contemporaneous value) in the mean equation instead.
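A minimal sketch of the GJR news-impact asymmetry follows: with γ > 0, a negative shock raises next period's conditional variance by more than a positive shock of the same size. The parameter and shock values are illustrative only.

```python
# GJR-GARCH(1,1) one-step-ahead variance for positive vs negative shocks of equal size.
alpha0, alpha1, beta, gamma = 0.0001, 0.10, 0.80, 0.10

def gjr_next_var(u_prev, sigma2_prev):
    indicator = 1.0 if u_prev < 0 else 0.0
    return alpha0 + alpha1 * u_prev ** 2 + beta * sigma2_prev + gamma * u_prev ** 2 * indicator

sigma2_prev = 0.001
for shock in (+0.03, -0.03):
    print(f"u_{{t-1}} = {shock:+.2f}:  sigma^2_t = {gjr_next_var(shock, sigma2_prev):.6f}")
```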

f. Consider the following GARCH (1,1) model

yt = μ + ut,   ut ~ N(0, σt²)

σt² = α0 + α1·ut−1² + β·σt−1²

If yt is a financial return series, what ranges of values are the coefficients μ, α0, α1 and β likely to take?

Since yt are financial returns, we would expect their mean value to be positive and small. We are not told the frequency of the data, but suppose that we had a year of daily returns data; then µ would be the average daily percentage return over the year, which might be, say, 0.05 (per cent). We would also expect the value of α0 to be small, say 0.0001, or something of that order. The unconditional variance of the disturbances would be given by α0 / (1 − (α1 + β)). Typical values for α1 and β are 0.15 and 0.8 respectively. All three parameters in the conditional variance equation must be positive, and the sum of α1 and β would be expected to be less than, but close to, unity, with β > α1.
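With those illustrative values, the implied unconditional variance and the usual non-negativity and stationarity checks work out as follows:

```python
# Illustrative GARCH(1,1) coefficients from the discussion above.
mu, alpha0, alpha1, beta = 0.05, 0.0001, 0.15, 0.80

uncond_var = alpha0 / (1 - (alpha1 + beta))      # = 0.002
print("unconditional variance:", uncond_var)
print("non-negativity satisfied:", alpha0 > 0 and alpha1 >= 0 and beta >= 0)
print("covariance stationary (alpha1 + beta < 1):", alpha1 + beta < 1)
```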

Question B3

a. Briefly outline Johansen’s methodology for testing for cointegration between a set of variables in the context of a VAR. (10 marks)

The Johansen test is computed in the following way. Suppose we have p variables that we think might be cointegrated. First, ensure that all the variables are of the same order of integration, and in fact are I(1), since it is unlikely that the variables will be of a higher order of integration. Stack the variables that are to be tested for cointegration into a p-dimensional vector, called, say, yt.

Then construct a p×1 vector of first differences, Δyt, and form and estimate the following VAR (in vector error correction form):

Δyt = Π·yt−k + Γ1·Δyt−1 + Γ2·Δyt−2 + … + Γk−1·Δyt−(k−1) + ut

The test then centres on the rank of the long-run matrix Π: the number of its eigenvalues that are significantly different from zero gives the number of cointegrating vectors, and this is determined sequentially using the trace and maximal eigenvalue (λmax) test statistics.
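For reference, the whole procedure is implemented in statsmodels; the following is a minimal sketch, assuming three simulated placeholder series and that one lagged difference (k_ar_diff=1) and a constant (det_order=0) are appropriate choices.

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

# Placeholder data: three I(1) series, two of which share a common stochastic trend.
rng = np.random.default_rng(4)
T = 500
trend = np.cumsum(rng.normal(size=T))
data = np.column_stack([
    trend + rng.normal(size=T),
    0.5 * trend + rng.normal(size=T),
    np.cumsum(rng.normal(size=T)),
])

# det_order=0: constant term; k_ar_diff=1: one lagged difference in the VECM.
result = coint_johansen(data, det_order=0, k_ar_diff=1)

print("max-eigenvalue statistics:", np.round(result.lr2, 3))
print("5% critical values:       ", np.round(result.cvm[:, 1], 3))
print("trace statistics:         ", np.round(result.lr1, 3))
print("5% critical values:       ", np.round(result.cvt[:, 1], 3))
```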

b. Explain how the Engle-Granger test for cointegration works. Can this test allow for more than one co-integrating relationship? (10 marks)

The Engle-Granger approach to cointegration involves first ensuring that the variables are individually unit root processes. Then a regression of one of the series on the other is conducted and the residuals from that regression collected. These residuals are then subjected to a Dickey-Fuller or augmented Dickey-Fuller test. If the null hypothesis of a unit root in the DF test regression residuals is not rejected, it would be concluded that a stationary combination of the non-stationary variables has not been found, and therefore that there is no cointegration. On the other hand, if the null is rejected, it would be concluded that a stationary combination of the non-stationary variables has been found, and therefore that the variables are cointegrated. Because the procedure is based on a single estimated residual series from a single equation, the Engle-Granger test can identify at most one cointegrating relationship; Johansen’s procedure is needed to test for more than one cointegrating vector.
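A minimal two-step sketch using statsmodels follows, on simulated placeholder data. Note that the standard Dickey-Fuller critical values are not strictly valid when the test is applied to estimated residuals, which is why the `coint` helper (which uses Engle-Granger critical values) is also shown.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller, coint

# Placeholder data: x is a random walk and y is cointegrated with it.
rng = np.random.default_rng(5)
T = 500
x = np.cumsum(rng.normal(size=T))
y = 2.0 + 0.8 * x + rng.normal(size=T)

# Step 1: static cointegrating regression of y on x.
ols = sm.OLS(y, sm.add_constant(x)).fit()
residuals = ols.resid

# Step 2: (augmented) Dickey-Fuller test on the residuals.
# NB: the usual DF critical values are not strictly valid for estimated residuals;
# the coint() helper below uses the appropriate Engle-Granger critical values.
adf_stat = adfuller(residuals)[0]
print("ADF statistic on residuals:", round(adf_stat, 3))

t_stat, p_value, crit = coint(y, x)
print("Engle-Granger test statistic:", round(t_stat, 3), " p-value:", round(p_value, 3))
```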

c. A researcher uses Johansen’s procedure and obtains the following test statistics (and critical values):

r     λmax     95% critical values

0     38.962     33.178

1     29.148     27.169

2     16.304     20.278

3     8.861       14.036

4     1.994        3.962

Determine the number of cointegrating vectors. 

Null Hypothesis     Alternative Hypothesis     λmax       95% critical value

r=0                 r=1                        38.962     33.178

r=1                 r=2                        29.148     27.169

r=2                 r=3                        16.304     20.278

r=3                 r=4                        8.861      14.036

r=4                 r=5                        1.994      3.962

For the first row, the test statistic is greater than the critical value, so we reject the null hypothesis that there are no cointegrating vectors. The same is true for the second row (that is, we reject the null hypothesis of one cointegrating vector in favour of the alternative that there are two). Looking at the third row, we cannot reject (at the 5% level) the null hypothesis that there are two cointegrating vectors, and this is our conclusion: there are two independent linear combinations of the variables that are stationary.
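The sequential decision rule can be written down directly; a tiny sketch using the λmax statistics and 5% critical values from the table above:

```python
# lambda-max statistics and 5% critical values, in order r=0,1,2,3,4 (from the table above).
lambda_max = [38.962, 29.148, 16.304, 8.861, 1.994]
crit_5pct  = [33.178, 27.169, 20.278, 14.036, 3.962]

r = 0
for stat, cv in zip(lambda_max, crit_5pct):
    if stat > cv:
        r += 1          # reject H0 of r cointegrating vectors, move on
    else:
        break           # first non-rejection: stop
print("number of cointegrating vectors:", r)   # -> 2
```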
