In this post, I am going to show you how to run simple as well as multiple regression analysis using SPSS. Let's illustrate this with an example. Imagine that you are conducting a survey and collecting data on the following variables from respondents:
1. Gender
2. Age
3. Income in Dollars ($)
4. Brand Loyalty (Perceived)
5. Service Quality (Experienced/Perceived)
First of all, you need to input the above-mentioned variables into SPSS. For details on how to enter the variables in SPSS, please see my previous posts. Once the variables and values are entered, you are ready to enter the data. And once the data is entered in the SPSS Data View window, you are ready to run descriptive statistics, a correlation test, or regression analysis (discussed in this post).
In this example, the variable of brand loyalty is measured on a Likert scale, where 1 refers to very low brand loyalty, 2 refers to low, 3 refers to neutral, 4 refers to high, and 5 refers to very high brand loyalty. The variable of service quality is also based on a Likert scale, where 1 refers to very poor service quality, 2 refers to poor, 3 refers to neutral, 4 refers to good, and 5 refers to very good service quality. Since regression analysis is normally run when the data are continuous (here, Likert-scale data treated as continuous), the data qualify for running a regression analysis.
The command for running regression analysis is as under:
Analyze > Regression > Linear
Now a dialogue box will open in front of you. In this box, on the left-hand side, you can see all of the variables. Select the dependent variable and transfer it into the Dependent box. Similarly, select the independent variable and transfer it into the Independent(s) box. In our example, we transfer brand loyalty into the Dependent box and service quality into the Independent(s) box. Then, just press OK.
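If you prefer working with syntax instead of the menus, the same analysis can be run from a syntax window. The sketch below assumes the two variables were named BrandLoyalty and ServiceQuality in Variable View; substitute whatever names you actually used (clicking Paste in the dialogue box will generate the exact syntax for your own variable names).

* Simple linear regression: brand loyalty regressed on service quality.
REGRESSION
  /STATISTICS COEFF R ANOVA
  /DEPENDENT BrandLoyalty
  /METHOD=ENTER ServiceQuality.

Running either the menu steps or this syntax produces the same output.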
The results will appear in a new Output window containing four tables.
Here is how you will interpret these results. The first table, with the heading 'Variables Entered/Removed', is only for information: it shows the names of the variables entered into the regression analysis. Normally, we ignore this table. In the second table, with the heading 'Model Summary', the values of R, R Square, and Adjusted R Square are important for us. The value of R shows the correlation or association between the variables. In this case, the value of R is .431, which shows that brand loyalty and service quality are positively associated (43.1%). Similarly, the value of R Square is .186, which means that the independent variable (service quality) explains 18.6% of the variation, or change, in the dependent variable (brand loyalty). The value of Adjusted R Square is an improved form of R Square, as it adjusts for the degrees of freedom, and its interpretation is similar to that of R Square. Thus, Adjusted R Square is considered a better indicator of model fit than simple R Square.
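As a quick check, R Square is simply the square of R: .431 × .431 ≈ .186. If you ever need to compute Adjusted R Square by hand, the usual formula is Adjusted R Square = 1 - (1 - R Square) × (n - 1) / (n - k - 1), where n is the number of respondents and k is the number of independent variables (here k = 1).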
Then, in the third table, with the heading 'ANOVA', a few values are important for us. The value of F shows how well the overall model fits. In this example, the value of F is 4.10. Next to the F value there is the Sig value, which in this example is 0.058; this shows that the F statistic is significant (it is significant because the p value is less than 0.10), so the overall model can be considered satisfactory.
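As a side note, when there is only one independent variable, the F statistic is simply the square of the t statistic reported in the Coefficients table below: 2.026 × 2.026 ≈ 4.10, which is why the two share the same Sig value of 0.058.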
In the last, or fourth, table, with the heading 'Coefficients', there is a range of information. Mostly, we are interested in the value of B (the unstandardized coefficient, often loosely called beta) for both the constant and the independent variable. Normally, we do not give much importance to the constant; its interpretation is that even if every independent variable is zero, the dependent variable still takes some value. Thus, in this example, the B value of 2.16 for the constant shows that if service quality were zero, there would still be 2.16 units of brand loyalty. The B value of .407 in the row for service quality shows that a one-unit increase in service quality raises brand loyalty by .407 units. Another important piece of information is the t value, which can be used to see whether the effect of the independent variable on the dependent variable is significant or insignificant. In this example, the t value of 2.026 shows that the result is significant (as a rule of thumb, it is significant because it is greater than 2). We can also check significance by looking at the Sig value located next to the t value. The Sig value is 0.058, which is less than 0.10; therefore, we can conclude that the effect of service quality on brand loyalty is significant, and any hypothesis constructed about this effect can be accepted (supported) on the basis of the significant t or Sig value.
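Putting the two coefficients together, the estimated regression equation from this example is:

Brand Loyalty = 2.16 + 0.407 × Service Quality

For illustration, a respondent who rates service quality as 4 (good) would be predicted to have a brand loyalty of about 2.16 + 0.407 × 4 ≈ 3.79 on the five-point scale.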
Summing up, we normally report the following things, in this order, in a report, thesis, or article:
1. Constant's B value, t value, and Sig value
2. Independent variable's B value, t value, and Sig value
3. R value
4. R Square value
5. Adjusted R Square value
6. F value with its Sig value
Multiple Regression Analysis:
In multiple regression analysis, everything is similar to the simple regression analysis discussed above, with the exception that there is more than one independent variable. So, when entering the variables in the regression dialogue box, you simply transfer several independent variables into the Independent(s) box and run the test, as in the sketch below.
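As a minimal sketch, and again assuming the variable names used above plus Income from our survey as a second (illustrative) independent variable, the syntax version simply lists all predictors after METHOD=ENTER:

* Multiple regression: two independent variables entered together.
REGRESSION
  /STATISTICS COEFF R ANOVA
  /DEPENDENT BrandLoyalty
  /METHOD=ENTER ServiceQuality Income.

The interpretation of the output follows the same steps as above, except that the Coefficients table now contains one row (with its own B, t, and Sig values) for each independent variable.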
SPSS Practice File is as under
For details on online SPSS classes at an economical fee, please contact the following:
Email: onlinetrainingsolution@gmail.com
Skype ID: faireast1
Mobile: +92-3239256994