Scaffolding Technology, Educational Blog for Teachers and Learners

Q1. In SPSS, where do you enter variable names, labels, and measurement scales?

a) Data View

b) Variable View

c) Output Viewer

d) Syntax Window

Answer: b) Variable View

Explanation: In Variable View, you define variable properties (name, label, type, measure, etc.). In Data View, you enter actual data values.

Q2. You want to enter the marks of 50 students in SPSS. Which view is used to enter the marks?

a) Variable View

b) Data View

c) Output Viewer

d) Chart Editor

Answer: b) Data View

Explanation: Data View is like an Excel sheet where rows represent cases (students) and columns represent variables (marks, age, etc.).

Q3. In SPSS, which of the following measurement levels is NOT available?

a) Nominal

b) Ordinal

c) Interval

d) Ratio

Answer: d) Ratio

Explanation: SPSS provides only Nominal, Ordinal, and Scale (interval/ratio combined). It does not separately list “Ratio”.

Q4. Suppose you have entered Gender as a variable in SPSS. Which type should you choose for this variable?

a) Numeric

b) String

c) Date

d) Dollar

Answer: a) Numeric (with value labels)

Explanation: Gender can be coded numerically (e.g., 1 = Male, 2 = Female) and then assigned Value Labels. SPSS prefers numeric coding for analysis.
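
For illustration, here is a minimal syntax sketch of this coding (the variable name Gender and the codes are hypothetical):

* Attach labels to the numeric codes and set the measurement level to Nominal.
VALUE LABELS Gender 1 'Male' 2 'Female'.
VARIABLE LEVEL Gender (NOMINAL).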

Q5. You want to compute a new variable “Percentage” from “Marks Obtained” and “Total Marks”. Which menu option should you use?

a) Transform → Recode into Different Variables

b) Transform → Compute Variable

c) Data → Split File

d) Analyze → Descriptive Statistics

Answer: b) Transform → Compute Variable

Explanation: The Compute Variable option lets you create new variables using formulas (e.g., Percentage = (MarksObtained / TotalMarks) * 100).

Q6. Which SPSS procedure would you use to check the mean, median, mode, and standard deviation of students’ scores?

a) Analyze → Compare Means → Independent-Samples T-Test

b) Analyze → Descriptive Statistics → Frequencies

c) Analyze → Correlate → Bivariate

d) Analyze → Regression → Linear

Answer: b) Analyze → Descriptive Statistics → Frequencies

Explanation: The Frequencies option gives mean, median, mode, SD, variance, etc.
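
A minimal syntax sketch, assuming the scores are stored in a variable named Marks:

* Request central tendency and dispersion statistics along with the frequency table.
FREQUENCIES VARIABLES=Marks
  /STATISTICS=MEAN MEDIAN MODE STDDEV VARIANCE
  /HISTOGRAM.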

Q7. After running a correlation test in SPSS, you get Pearson Correlation = 0.85, p < 0.01. What does this mean?

a) Weak negative correlation

b) Strong positive correlation (significant)

c) No correlation

d) Causal relationship

Answer: b) Strong positive correlation (significant)

Explanation: A value of 0.85 indicates a strong positive relationship, and p < 0.01 shows it is statistically significant.

Q8. In SPSS, which test is used to compare means of two independent groups (e.g., male vs female scores)?

a) Paired Samples T-Test

b) One-Way ANOVA

c) Independent Samples T-Test

d) Chi-Square Test

Answer: c) Independent Samples T-Test

Explanation: The Independent Samples T-Test checks whether the mean difference between two independent groups is significant.
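
A minimal syntax sketch, assuming Gender is coded 1/2 and the outcome variable is Score:

* Compare mean Score between the two Gender groups.
T-TEST GROUPS=Gender(1 2)
  /VARIABLES=Score.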

Q9. You want to check if education level (Primary/Secondary/Graduate) influences job preference (Govt/Private/Business). Which test is best in SPSS?

a) Correlation

b) Chi-Square Test of Independence

c) Paired T-Test

d) ANOVA

Answer: b) Chi-Square Test of Independence

Explanation: Both variables are categorical. The Chi-Square test checks if they are associated.
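
A minimal syntax sketch, assuming coded categorical variables named EduLevel and JobPref:

* Crosstabulate the two categorical variables and request the chi-square test.
CROSSTABS
  /TABLES=EduLevel BY JobPref
  /STATISTICS=CHISQ
  /CELLS=COUNT EXPECTED.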

Q10. You ran a Linear Regression in SPSS and got: R = 0.70, R² = 0.49, p < 0.05.
What does R² = 0.49 mean?

a) Predictor explains 49% of the variance in the dependent variable

b) 49% error in prediction

c) Weak correlation

d) The result is not significant

Answer: a) Predictor explains 49% of the variance in the dependent variable

Explanation: R² (coefficient of determination) tells us how much variance in the dependent variable is explained by the independent variable.

Q11. You have developed a questionnaire with 20 Likert-scale items. To check internal consistency in SPSS, which test will you use?

a) Factor Analysis

b) Cronbach’s Alpha (Reliability Analysis)

c) Independent Samples T-Test

d) ANOVA

Answer: b) Cronbach’s Alpha (Reliability Analysis)

Explanation: In SPSS: Analyze → Scale → Reliability Analysis. Cronbach’s Alpha > 0.7 indicates good internal consistency.

Q12. If SPSS gives a Cronbach’s Alpha = 0.35 for your scale, what should you do?

a) Accept it as reliable

b) Revise/remove problematic items

c) Use Chi-Square instead

d) Convert items into categorical variables

Answer: b) Revise/remove problematic items

Explanation: An alpha of 0.35 shows poor reliability. You should check “Item-Total Statistics” in SPSS and drop items that lower reliability.
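
A minimal syntax sketch (item1 TO item20 assumes the 20 items sit next to each other in the data file):

* Cronbach's Alpha with item-total statistics to spot items that lower reliability.
RELIABILITY
  /VARIABLES=item1 TO item20
  /MODEL=ALPHA
  /SUMMARY=TOTAL.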

Q13. Before running Factor Analysis in SPSS, you should check sampling adequacy. Which test is used?

a) KMO and Bartlett’s Test

b) Chi-Square Test

c) ANOVA

d) Paired T-Test

Answer: a) KMO and Bartlett’s Test

Explanation: KMO > 0.6 and Bartlett’s Test significant (p < 0.05) indicate data is suitable for Factor Analysis.

Q14. In SPSS Factor Analysis, which method is commonly used to extract factors?

a) Stepwise Regression

b) Principal Component Analysis (PCA)

c) Descriptive Statistics

d) Cross-tabulation

Answer: b) Principal Component Analysis (PCA)

Explanation: PCA is the most widely used extraction method to reduce data dimensions in Factor Analysis.
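
A minimal syntax sketch combining the KMO/Bartlett check, PCA extraction, and Varimax rotation (item names are hypothetical):

* Factor analysis: print KMO and Bartlett's test, extract by PCA, rotate with Varimax.
FACTOR
  /VARIABLES=item1 TO item20
  /PRINT=INITIAL KMO EXTRACTION ROTATION
  /EXTRACTION=PC
  /ROTATION=VARIMAX.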

Q15. You run a Factor Analysis and see that one item has factor loading = 0.25. What does this mean?

a) Item strongly belongs to the factor

b) Item weakly relates to the factor, should be removed

c) Item has perfect reliability

d) The factor is invalid

Answer: b) Item weakly relates to the factor, should be removed

Explanation: Factor loadings below 0.40 usually suggest the item does not contribute well and can be removed.

Q16. Which SPSS analysis would you use to predict students’ exam scores based on hours studied and motivation level?

a) Chi-Square Test

b) Independent Samples T-Test

c) Multiple Linear Regression

d) Factor Analysis

Answer: c) Multiple Linear Regression

Explanation: Multiple regression predicts one dependent variable (exam score) using multiple independent variables (hours studied, motivation).
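
A minimal syntax sketch, assuming predictors named HoursStudied and Motivation and an outcome named ExamScore:

* Multiple linear regression predicting ExamScore from two predictors.
REGRESSION
  /STATISTICS COEFF R ANOVA
  /DEPENDENT ExamScore
  /METHOD=ENTER HoursStudied Motivation.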

Q17. In SPSS regression output, if p-value for the predictor = 0.80, what does this mean?

a) Predictor is significant

b) Predictor is not significant

c) Strong correlation exists

d) Regression is invalid

Answer: b) Predictor is not significant

Explanation: A predictor with p > 0.05 is not statistically significant in predicting the dependent variable.

Q18. If your data is not normally distributed, which SPSS test should you use instead of an Independent Samples T-Test?

a) Chi-Square Test

b) Mann-Whitney U Test

c) One-Way ANOVA

d) Pearson Correlation

Answer: b) Mann-Whitney U Test

Explanation: Mann-Whitney is a non-parametric alternative to the independent t-test for comparing two groups.
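
A minimal syntax sketch (Score and a Gender variable coded 1/2 are hypothetical names):

* Mann-Whitney U test comparing Score across the two Gender groups.
NPAR TESTS /M-W= Score BY Gender(1 2).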

Q19. Which SPSS test would you use to compare more than two groups on a non-normal variable?

a) One-Way ANOVA

b) Kruskal-Wallis Test

c) Pearson Chi-Square

d) Paired T-Test

Answer: b) Kruskal-Wallis Test

Explanation: Kruskal-Wallis is the non-parametric alternative to One-Way ANOVA.
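
A minimal syntax sketch, assuming a grouping variable Stream coded 1 to 3:

* Kruskal-Wallis test comparing Score across three groups.
NPAR TESTS /K-W=Score BY Stream(1 3).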

Q20. In SPSS, where do you find graphs and tables after running any analysis?

a) Variable View

b) Data View

c) Output Viewer

d) Syntax Window

Answer: c) Output Viewer

Explanation: All results (tables, charts, tests) are displayed in the Output Viewer, from where they can be copied to Word/Excel.

Q21. You want to compare the mean scores of three groups (Arts, Science, Commerce students) in SPSS. Which test will you use?

a) Independent Samples T-Test

b) Paired Samples T-Test

c) One-Way ANOVA

d) Regression

Answer: c) One-Way ANOVA

Explanation: One-Way ANOVA compares means across 3 or more independent groups.

Q22. In a One-Way ANOVA output, the p-value (Sig.) = 0.001. What does this indicate?

a) No significant difference among groups

b) At least one group mean is significantly different

c) All group means are equal

d) The test failed

Answer: b) At least one group mean is significantly different

Explanation: If p < 0.05, reject the null hypothesis → at least one group differs significantly.

Q23. After a One-Way ANOVA, you want to know which groups differ. Which test should you run in SPSS?

a) Correlation

b) Chi-Square Test

c) Post Hoc Test (e.g., Tukey, Scheffé)

d) Mann-Whitney U

Answer: c) Post Hoc Test (e.g., Tukey, Scheffé)

Explanation: Post Hoc tests identify exact group differences after ANOVA shows significance.
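
A minimal syntax sketch that runs the One-Way ANOVA with descriptives, Levene's homogeneity test, and a Tukey post hoc (variable names hypothetical):

* One-Way ANOVA of Score across Stream groups with homogeneity test and Tukey post hoc.
ONEWAY Score BY Stream
  /STATISTICS DESCRIPTIVES HOMOGENEITY
  /POSTHOC=TUKEY ALPHA(0.05).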

Q24. If the assumption of homogeneity of variance is violated in ANOVA, which Post Hoc test is recommended in SPSS?

a) Tukey

b) Bonferroni

c) Games-Howell

d) Kruskal-Wallis

Answer: c) Games-Howell

Explanation: Games-Howell is used when variances are unequal (Levene’s Test significant).

Q25. Which SPSS option tests the strength and direction of the relationship between two continuous variables?

a) Chi-Square Test

b) Bivariate Correlation (Pearson’s r)

c) One-Way ANOVA

d) Regression

Answer: b) Bivariate Correlation (Pearson’s r)

Explanation: Pearson’s correlation measures linear relationship strength (–1 to +1).

Q26. In SPSS, you find Pearson r = –0.62, p < 0.01. What does this mean?

a) Strong positive correlation (significant)

b) Strong negative correlation (significant)

c) No correlation

d) Weak negative correlation (not significant)

Answer: b) Strong negative correlation (significant)

Explanation: –0.62 means strong negative relation, and p < 0.01 means it is statistically significant.

Q27. You want to find the relationship between two categorical variables (e.g., Gender × Career Choice). Which SPSS test is best?

a) Pearson’s r

b) Chi-Square Test of Independence

c) Independent Samples T-Test

d) Paired T-Test

Answer: b) Chi-Square Test of Independence

Explanation: For categorical × categorical variables, Chi-Square checks association.

Q28. In SPSS, what is the purpose of the Syntax Editor?

a) Only to enter raw data

b) To write and run SPSS commands instead of menus

c) To display output results

d) To create graphs

Answer: b) To write and run SPSS commands instead of menus

Explanation: Syntax allows saving commands for reproducibility, automation, and advanced analysis.

Q29. Which command in SPSS Syntax would compute a new variable?

a) FREQUENCIES

b) COMPUTE

c) CROSSTABS

d) FACTOR

Answer: b) COMPUTE

Explanation: Example:

COMPUTE Percentage = (Marks/Total) * 100.

EXECUTE.

Q30. Suppose you want SPSS to run a correlation between age and salary using syntax. Which command is correct?

a) T-TEST VARIABLES=age salary

b) CORRELATIONS VARIABLES=age salary

c) REGRESSION VARIABLES=age salary

d) CROSSTABS VARIABLES=age salary

Answer: b) CORRELATIONS VARIABLES=age salary

Explanation: The syntax is:

CORRELATIONS VARIABLES=age salary.

Q31. Which test in SPSS would you use if you want to see the effect of teaching method (Traditional vs Activity-based) on both students’ achievement scores and motivation scores simultaneously?

a) One-Way ANOVA

b) MANOVA

c) Paired Samples T-Test

d) Multiple Regression

Answer: b) MANOVA

Explanation: MANOVA is used when there are multiple dependent variables tested across groups.
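
A minimal syntax sketch using the GLM procedure with two dependent variables (variable names hypothetical):

* MANOVA: Achievement and Motivation analysed together across teaching methods.
GLM Achievement Motivation BY TeachingMethod
  /PRINT=DESCRIPTIVE
  /DESIGN=TeachingMethod.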

Q32. In MANOVA output, Wilks’ Lambda = 0.45, p < 0.01. What does this mean?

a) No difference between groups

b) Significant multivariate effect of the independent variable

c) Only univariate differences matter

d) Data not suitable for analysis

Answer: b) Significant multivariate effect of the independent variable

Explanation: A low Wilks’ Lambda with p < 0.05 suggests a significant group effect on the combined dependent variables.

Q33. You want to predict whether students pass (1) or fail (0) based on study hours and attendance. Which analysis should you use?

a) Multiple Linear Regression

b) Logistic Regression

c) MANOVA

d) Chi-Square

Answer: b) Logistic Regression

Explanation: Logistic regression is used when the dependent variable is categorical (binary: pass/fail, yes/no).
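
A minimal syntax sketch, assuming Pass is coded 0/1 and the predictors are StudyHours and Attendance:

* Binary logistic regression with 95% confidence intervals for Exp(B).
LOGISTIC REGRESSION VARIABLES Pass
  /METHOD=ENTER StudyHours Attendance
  /PRINT=CI(95).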

Q34. In SPSS Logistic Regression output, an Odds Ratio (Exp(B)) = 2.5 for “Study Hours” means:

a) Every 1-unit increase in study hours makes students 2.5 times more likely to pass

b) Study hours have no effect

c) Study hours are negatively correlated with passing

d) Passing is reduced by 2.5 times

Answer: a) Every 1-unit increase in study hours makes students 2.5 times more likely to pass

Explanation: An odds ratio (Exp(B)) greater than 1 indicates a positive effect: each one-unit increase in the predictor multiplies the odds of the outcome by that amount (here, the odds of passing are multiplied by 2.5 per extra study hour). An odds ratio below 1 indicates a negative effect.

Q35. Which assumption is important for Logistic Regression in SPSS?

a) Normal distribution of dependent variable

b) Linear relationship between independent and dependent variables

c) No multicollinearity among predictors

d) Homogeneity of variance

Answer: c) No multicollinearity among predictors

Explanation: Logistic regression does not require normality of DV, but predictors should not be highly correlated.

Q36. Which SPSS method would you use to group students into clusters (e.g., High achievers, Moderate, Low achievers) based on multiple performance indicators?

a) Logistic Regression

b) Chi-Square Test

c) Cluster Analysis

d) MANOVA

Answer: c) Cluster Analysis

Explanation: Cluster Analysis groups cases into clusters based on similarity in variables.

Q37. In SPSS Cluster Analysis, which method creates clusters by minimizing within-group differences and maximizing between-group differences?

a) Hierarchical Clustering

b) K-Means Clustering

c) Factor Analysis

d) ANOVA

Answer: b) K-Means Clustering

Explanation: K-Means aims to create compact, well-separated clusters.
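
A minimal syntax sketch requesting three clusters from hypothetical performance indicators:

* K-Means clustering into 3 groups; cluster membership is saved as a new variable.
QUICK CLUSTER MathScore ReadingScore ScienceScore
  /CRITERIA=CLUSTER(3)
  /SAVE CLUSTER.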

Q38. You want to check whether your clusters are meaningful. Which statistic/technique can be used in SPSS?

a) Descriptive Statistics of clusters

b) ANOVA across clusters

c) Cross-tabulation with external variables

d) All of the above

Answer: d) All of the above

Explanation: Validating clusters involves checking ANOVA (mean differences), descriptives, and association with external variables.

Q39. If you want to test the stability of a questionnaire over time (same group answering after 2 weeks), which reliability method in SPSS is best?

a) Split-Half Reliability

b) Test-Retest Reliability

c) Cronbach’s Alpha

d) Factor Analysis

Answer: b) Test-Retest Reliability

Explanation: Test-Retest checks consistency over time by correlating two sets of scores.

Q40. In SPSS, you split a test into two halves (odd vs even items) and correlate them. Which reliability measure is this?

a) Cronbach’s Alpha

b) Split-Half Reliability (Spearman-Brown)

c) KMO Test

d) Intraclass Correlation

Answer: b) Split-Half Reliability (Spearman-Brown)

Explanation: Split-Half method checks internal consistency by dividing items and adjusting correlation using Spearman-Brown coefficient.

Q41. You want to predict which stream (Science, Commerce, Arts) a student belongs to based on scores in Math, Language, and Reasoning tests. Which SPSS method is suitable?

a) Logistic Regression

b) MANOVA

c) Discriminant Analysis

d) Factor Analysis

Answer: c) Discriminant Analysis

Explanation: Discriminant Analysis classifies cases into predefined groups using predictor variables.
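
A minimal syntax sketch, assuming Stream is coded 1–3 and three predictor tests:

* Discriminant analysis classifying students into streams; TABLE prints the classification results.
DISCRIMINANT
  /GROUPS=Stream(1 3)
  /VARIABLES=Math Language Reasoning
  /ANALYSIS ALL
  /STATISTICS=TABLE.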

Q42. In SPSS Discriminant Analysis, what does a canonical correlation = 0.82 indicate?

a) Weak relationship between predictors and group membership

b) Strong relationship between predictors and group membership

c) No relationship

d) Data is not valid

Answer: b) Strong relationship between predictors and group membership

Explanation: Canonical correlation measures association between discriminant function and group membership; values close to 1 indicate strong classification.

Q43. Which SPSS method is used to analyze time until an event occurs (e.g., dropout from school, recovery from illness)?

a) ANOVA

b) Survival Analysis (Kaplan-Meier)

c) Logistic Regression

d) MANOVA

Answer: b) Survival Analysis (Kaplan-Meier)

Explanation: Kaplan-Meier survival curves estimate the probability of surviving beyond a certain time point.

Q44. In Survival Analysis, censoring means:

a) Removing missing data

b) When an event has not occurred during the study period

c) Transforming variables

d) A type of reliability test

Answer: b) When an event has not occurred during the study period

Explanation: Censoring occurs when participants leave the study or the event (e.g., dropout) has not yet happened.

Q45. Which test in SPSS compares survival distributions of two or more groups?

a) Levene’s Test

b) Chi-Square Test

c) Log-Rank Test

d) Tukey’s Post Hoc

Answer: c) Log-Rank Test

Explanation: The Log-Rank test checks if survival curves differ significantly across groups.
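
A minimal syntax sketch covering both the Kaplan-Meier curves and the Log-Rank comparison (variable names hypothetical; Dropout = 1 marks the event):

* Kaplan-Meier survival curves by Group with a Log-Rank test of the difference.
KM TimeToDropout BY Group
  /STATUS=Dropout(1)
  /TEST LOGRANK.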

Q46. SEM (Structural Equation Modeling) in SPSS is mostly performed through:

a) SPSS Core

b) AMOS (Add-on to SPSS)

c) Excel Plugin

d) Output Viewer

Answer: b) AMOS (Add-on to SPSS)

Explanation: SPSS AMOS (Analysis of Moment Structures) is an add-on for SEM, path analysis, and confirmatory factor analysis (CFA).

Q47. In SEM, which indicator shows the goodness of fit of a model?

a) Cronbach’s Alpha

b) RMSEA, CFI, GFI

c) T-Test

d) Chi-Square

Answer: b) RMSEA, CFI, GFI

Explanation: Model fit is assessed by indices like RMSEA (<0.08), CFI (>0.90), GFI (>0.90).

Q48. If in AMOS output, Chi-Square/df = 1.8, RMSEA = 0.045, and CFI = 0.92, the model fit is:

a) Poor

b) Acceptable

c) Excellent

d) Invalid

Answer: b) Acceptable

Explanation: Chi-Square/df < 2 and RMSEA < 0.05 indicate very good fit, but a CFI of 0.92 falls in the acceptable range (0.90–0.95); a CFI of 0.95 or higher is usually expected before the fit is described as excellent.

Q49. Which SPSS syntax command saves all outputs (tables, charts) automatically?

a) SAVE OUTFILE

b) OMS (Output Management System)

c) EXECUTE

d) GET FILE

Answer: b) OMS (Output Management System)

Explanation: OMS automatically routes SPSS output into external files (e.g., HTML, XML, text, or SPSS data files) as analyses run.
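
A minimal syntax sketch (the output path is hypothetical):

* Route all pivot tables produced between OMS and OMSEND into an HTML file.
OMS /SELECT TABLES
  /DESTINATION FORMAT=HTML OUTFILE='C:\Results\tables.htm'.
FREQUENCIES VARIABLES=Marks.
OMSEND.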

Q50. You want to run the same regression on 10 different datasets in SPSS automatically. Which feature helps best?

a) Split File

b) Loop command in Syntax

c) Compute Variable

d) Graphs

Answer: b) Loop command in Syntax

Explanation: SPSS Syntax supports repetition constructs such as LOOP–END LOOP and DO REPEAT, and macros (DEFINE–!ENDDEFINE) that can open each data file with GET FILE and run the same regression block, automating repetitive analyses across datasets.
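
A minimal sketch of the repetition idea using DO REPEAT (variable names hypothetical); looping over separate data files would normally combine GET FILE with a macro:

* Apply the same computation to several variables in one pass.
DO REPEAT raw = test1 test2 test3
        / pct = pct1 pct2 pct3.
  COMPUTE pct = raw / 50 * 100.
END REPEAT.
EXECUTE.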

Q51. In SPSS, which menu path would you use to generate frequency tables?

a) Analyze → Compare Means

b) Analyze → Descriptive Statistics → Frequencies

c) Transform → Compute Variable

d) Analyze → Correlate

Answer: b) Analyze → Descriptive Statistics → Frequencies

Explanation: To see how often each value occurs in a variable, use Analyze → Descriptive Statistics → Frequencies.

Q52. Which of the following can NOT be generated from the Frequencies dialog box in SPSS?

a) Histograms

b) Pie charts

c) Standard deviation

d) Bar charts

Answer: c) Standard deviation

Explanation: Histograms, bar charts, and pie charts are requested directly from the Charts button. The basic frequency table does not report the standard deviation; it must be requested separately via the Statistics button, whereas Descriptives reports it by default.

Q53. To calculate the mean, variance, standard deviation, minimum, and maximum of a dataset, which option should you use?

a) Transform → Recode

b) Analyze → Descriptive Statistics → Descriptives

c) Analyze → Compare Means → Means

d) Graphs → Legacy Dialogs

Answer: b) Analyze → Descriptive Statistics → Descriptives

Explanation: Descriptives is specifically designed to calculate summary statistics such as the mean, variance, standard deviation, minimum, and maximum. (The median and mode require Frequencies → Statistics.)

Q54. In SPSS, which transformation option is best if you want to recode negative values into positive values?

a) Compute Variable → ABS(variable)

b) Recode Into Different Variables → System Missing

c) Compute Variable → LN(variable)

d) Transform → Rank

Answer: a) Compute Variable → ABS(variable)

Explanation: The ABS() function in Compute Variable returns the absolute value (always positive).

Q55. If you want to transform a variable such that each value is squared, which SPSS function would you use?

a) SQRT(var)

b) LN(var)

c) EXP(var)

d) var ** 2

Answer: d) var ** 2

Explanation: In SPSS Compute Variable, ** is the exponentiation operator, so var**2 (or equivalently var*var) squares each value.

Q56. What happens when you choose “Recode Into Same Variables” in SPSS?

a) Creates a new variable with recoded values

b) Overwrites the original variable

c) Produces a missing value table

d) Creates dummy variables automatically

Answer: b) Overwrites the original variable

Explanation: Recode Into Same Variables permanently changes the existing variable. Safer option is Recode Into Different Variables.

Q57. Which method is commonly used in SPSS to handle missing data by replacing them with the mean?

a) Transform → Compute Variable

b) Analyze → Missing Value Analysis

c) Transform → Replace Missing Values

d) Transform → Rank

Answer: c) Transform → Replace Missing Values

Explanation: The Replace Missing Values procedure substitutes missing entries with mean, median, or a linear trend.
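
A minimal syntax sketch of mean substitution (Score is a hypothetical variable; Score_1 is the new filled-in copy):

* Replace missing values of Score with the series mean, stored in Score_1.
RMV /Score_1=SMEAN(Score).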

Q58. If you want to check for outliers and extreme values, which SPSS procedure is most appropriate?

a) Analyze → Descriptive Statistics → Explore

b) Analyze → Descriptive Statistics → Frequencies

c) Analyze → Compare Means → Means

d) Transform → Compute

Answer: a) Analyze → Descriptive Statistics → Explore

Explanation: Explore provides boxplots, outlier detection, and normality checks.
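
A minimal syntax sketch of the Explore procedure (EXAMINE is its syntax name):

* Boxplot, normality plots, descriptives, and the most extreme values of Score.
EXAMINE VARIABLES=Score
  /PLOT BOXPLOT NPPLOT
  /STATISTICS DESCRIPTIVES EXTREME.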

Q59. Which SPSS procedure would you use to standardize scores into z-scores?

a) Analyze → Descriptive Statistics → Descriptives

b) Transform → Compute Variable → z(variable)

c) Analyze → Correlate → Bivariate

d) Analyze → Regression

Answer: a) Analyze → Descriptive Statistics → Descriptives

Explanation: In Descriptives, checking the box “Save standardized values as variables” creates z-scores.
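
A minimal syntax sketch; the /SAVE subcommand adds a standardized copy of the variable (ZScore for a variable named Score):

* Descriptive statistics plus saved z-scores for Score.
DESCRIPTIVES VARIABLES=Score
  /SAVE.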

Q60. In SPSS, when you choose “Exclude cases listwise” for missing values, what happens?

a) Only cases with complete data for all variables are analyzed

b) Cases are excluded only for the variable being analyzed

c) Missing values are automatically replaced with zero

d) SPSS ignores missing values

Answer: a) Only cases with complete data for all variables are analyzed

Explanation: Listwise exclusion drops entire rows with any missing values across the selected variables.

Q61. In SPSS, the Data View primarily displays:

a) Variable properties

b) Case-by-case raw data values

c) Output results of analysis

d) Syntax commands

Answer: b) Case-by-case raw data values

Explanation: Data View is like a spreadsheet where rows represent cases and columns represent variables.

Q62. Which tab in SPSS allows you to define variable names, labels, types, and measurement scales?

a) Data View

b) Variable View

c) Output Viewer

d) Chart Editor

Answer: b) Variable View

Explanation: Variable View defines metadata of variables such as type, width, decimals, labels, values, and scale.

Q63. A variable measured on a scale of Male = 1, Female = 2 should be defined as:

a) Scale

b) Nominal

c) Ordinal

d) Ratio

Answer: b) Nominal

Explanation: Gender is a categorical variable with labels; hence nominal.

Q64. Which measurement scale in SPSS is used for variables like income in dollars or height in cm?

a) Nominal

b) Ordinal

c) Scale

d) Binary

Answer: c) Scale

Explanation: Continuous variables with equal units of measurement are defined as Scale in SPSS.

Q65. In SPSS, assigning numeric codes to categories (e.g., 1 = Urban, 2 = Rural) is done through:

a) Value Labels

b) Compute Variable

c) Transform → Recode

d) Define Variable Type

Answer: a) Value Labels

Explanation: Value Labels allow you to assign meaningful names to coded numbers.

Q66. To create a new variable as a function of existing ones (e.g., BMI = Weight / Height²), you use:

a) Transform → Recode

b) Analyze → Descriptive

c) Transform → Compute Variable

d) File → New Variable

Answer: c) Transform → Compute Variable

Explanation: Compute Variable creates new variables using mathematical expressions.

Q67. What happens if you define a variable as String in Variable View?

a) It accepts only numeric data

b) It accepts alphanumeric/text values

c) It is automatically treated as Scale

d) It cannot be used in analysis

Answer: b) It accepts alphanumeric/text values

Explanation: String variables are for text entries (e.g., names, IDs).

Q68. The “Measure” column in Variable View allows you to set:

a) Type of graph to display

b) Level of measurement (Nominal, Ordinal, Scale)

c) Value labels

d) Data transformation rules

Answer: b) Level of measurement (Nominal, Ordinal, Scale)

Explanation: Correct measurement ensures appropriate statistical tests.

Q69. Which option lets you quickly check the mean, standard deviation, minimum, and maximum of a variable?

a) Analyze → Compare Means

b) Analyze → Descriptive Statistics → Descriptives

c) Analyze → Correlate → Bivariate

d) Transform → Compute Variable

Answer: b) Analyze → Descriptive Statistics → Descriptives

Explanation: This menu provides quick descriptive statistics like mean, SD, min, max.

Q70. In SPSS, which column in Variable View specifies how many decimal places are shown?

a) Width

b) Measure

c) Decimals

d) Values

Answer: c) Decimals

Explanation: The “Decimals” column controls the display precision of numeric data.

Q71. In SPSS, which option is used to compute Cronbach’s Alpha for scale reliability?

a) Analyze → Compare Means → Means

b) Analyze → Scale → Reliability Analysis

c) Analyze → Descriptive Statistics → Frequencies

d) Analyze → Correlate → Bivariate

Answer: b) Analyze → Scale → Reliability Analysis

Explanation: Cronbach’s Alpha is accessed through Scale → Reliability Analysis for measuring internal consistency.

Q72. A Cronbach’s Alpha value of 0.82 indicates:

a) Poor reliability

b) Excellent reliability

c) Acceptable reliability

d) No reliability

Answer: b) Excellent reliability

Explanation: Alpha above 0.80 is generally regarded as good to excellent reliability; 0.70–0.79 is acceptable, and values below 0.70 are questionable.

Q73. The Split-Half Reliability method in SPSS divides items into:

a) Odd and Even numbers

b) First half and second half

c) Random halves

d) All of the above

Answer: d) All of the above

Explanation: SPSS allows multiple methods (odd-even, random, first-second half) to calculate split-half reliability.

Q74. The Kaiser-Meyer-Olkin (KMO) Measure tests:

a) Sampling adequacy for factor analysis

b) Reliability of items

c) Normality of variables

d) Linearity between variables

Answer: a) Sampling adequacy for factor analysis

Explanation: KMO checks if partial correlations are small enough for factor analysis. A value >0.6 is good.

Q75. Bartlett’s Test of Sphericity is significant when:

a) p > 0.05

b) p < 0.05

c) p = 1

d) p = 0.5

Answer: b) p < 0.05

Explanation: Significant Bartlett’s test (p < 0.05) indicates variables are correlated enough for factor analysis.

Q76. In SPSS, Principal Component Analysis (PCA) is performed using:

a) Analyze → Descriptive Statistics → Explore

b) Analyze → Dimension Reduction → Factor

c) Analyze → Regression → Linear

d) Analyze → Scale → Reliability Analysis

Answer: b) Analyze → Dimension Reduction → Factor

Explanation: PCA and factor analysis are under Dimension Reduction → Factor.

Q77. Eigenvalues greater than 1 are usually retained according to:

a) Bartlett’s Criterion

b) Kaiser’s Criterion

c) Scree Test

d) Varimax Rule

Answer: b) Kaiser’s Criterion

Explanation: Kaiser’s rule suggests keeping factors with eigenvalues >1.

Q78. In SPSS factor analysis, Varimax rotation is used for:

a) Maximizing correlation between factors

b) Simplifying factor loadings (orthogonal rotation)

c) Testing reliability of factors

d) Reducing variables to one factor

Answer: b) Simplifying factor loadings (orthogonal rotation)

Explanation: Varimax is the most common orthogonal rotation, making interpretation easier.

Q79. A factor loading of 0.65 indicates:

a) Weak relationship with factor

b) Moderate relationship with factor

c) Strong relationship with factor

d) No relationship

Answer: c) Strong relationship with factor

Explanation: Factor loadings above 0.60 are considered strong, 0.40–0.59 moderate, and loadings below 0.40 weak (weak items are usually candidates for removal).

Q80. If KMO = 0.45 and Bartlett’s Test is not significant, what should you do?

a) Proceed with factor analysis

b) Drop variables or increase sample size

c) Accept results as final

d) Use regression instead

Answer: b) Drop variables or increase sample size

Explanation: KMO <0.50 and non-significant Bartlett’s Test indicate data is unsuitable for factor analysis.

Q81. In SPSS, which menu path is correct for running Pearson’s correlation?

a) Analyze → Compare Means → Correlation

b) Analyze → Correlate → Bivariate

c) Analyze → Regression → Linear

d) Transform → Compute → Correlation

Answer: b) Analyze → Correlate → Bivariate

Explanation: Pearson correlation is found under Analyze → Correlate → Bivariate, where you can also select Spearman and Kendall correlations.

Q82. What is the valid range of a Pearson correlation coefficient?

a) –2 to +2

b) –1 to +1

c) 0 to +1

d) –0.5 to +0.5

Answer: b) –1 to +1

Explanation: Correlation coefficients range from –1 (perfect negative) to +1 (perfect positive).

Q83. When should Spearman’s correlation be used instead of Pearson’s?

a) When variables are nominal

b) When data are ordinal or not normally distributed

c) When data are interval and normally distributed

d) When testing regression assumptions

Answer: b) When data are ordinal or not normally distributed

Explanation: Spearman’s rho is a non-parametric test for ordinal/rank data or non-normal distributions.

Q84. Which SPSS output table gives the p-value for correlation tests?

a) Descriptive Statistics

b) Correlations

c) Coefficients

d) ANOVA

Answer: b) Correlations

Explanation: The Correlations table shows Pearson/Spearman coefficients and their Sig. (2-tailed) p-values.

Q85. In SPSS, where do you access Simple Linear Regression?

a) Analyze → Correlate → Regress

b) Analyze → Regression → Linear

c) Analyze → Compare Means → Regression

d) Transform → Regression → Linear

Answer: b) Analyze → Regression → Linear

Explanation: Linear regression is found under Analyze → Regression → Linear.

Q86. Which statistic in SPSS output indicates how much variance in the dependent variable is explained by predictors in regression?

a) Beta

b) R²

c) p-value

d) Standard Error

Answer: b) R²

Explanation: The R² value (Coefficient of Determination) shows the proportion of variance explained by the independent variable(s).

Q87. Which of the following is not an assumption of linear regression?

a) Linearity

b) Homoscedasticity

c) Normality of residuals

d) Multicollinearity must be high

Answer: d) Multicollinearity must be high

Explanation: In fact, regression assumes low multicollinearity (independent predictors). High collinearity is problematic.

Q88. In SPSS regression output, which column of the Coefficients table gives the statistical significance of predictors?

a) Beta

b) t

c) Sig.

d) Standard Error

 Answer: c) Sig.

Explanation: The Sig. column shows the p-value, indicating whether each predictor is statistically significant.

Q89. In SPSS, which statistic is used to check multicollinearity among predictors?

a) Variance Inflation Factor (VIF)

b) R²

c) Durbin-Watson

d) Standardized Residuals

Answer: a) Variance Inflation Factor (VIF)

Explanation: VIF and Tolerance in SPSS help detect multicollinearity. A VIF > 10 indicates serious multicollinearity.
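
A minimal syntax sketch showing how the collinearity diagnostics are requested (variable names hypothetical):

* TOL and COLLIN add Tolerance/VIF and collinearity diagnostics to the regression output.
REGRESSION
  /STATISTICS COEFF R ANOVA COLLIN TOL
  /DEPENDENT ExamScore
  /METHOD=ENTER HoursStudied Motivation Attendance.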

Q90. Which test in SPSS regression output checks for autocorrelation of residuals?

a) ANOVA test

b) Durbin-Watson statistic

c) Bartlett’s test

d) KMO test

Answer: b) Durbin-Watson statistic

Explanation: Durbin-Watson values close to 2 suggest no autocorrelation. Values < 1 or > 3 indicate serious problems.

Q91. The main advantage of MANOVA over ANOVA is that:

a) It requires fewer assumptions

b) It controls Type I error across multiple dependent variables

c) It is easier to interpret

d) It does not require post-hoc tests

Answer: b) It controls Type I error across multiple dependent variables

Explanation: MANOVA simultaneously tests multiple DVs, reducing inflated Type I error.

Q92. Which of the following is NOT an assumption of MANOVA?

a) Multivariate normality

b) Homogeneity of covariance matrices

c) Independence of observations

d) No multicollinearity among independent variables

Answer: d) No multicollinearity among independent variables

Explanation: Multicollinearity is a problem in regression, not a formal MANOVA assumption.

Q93. In MANOVA, Wilks’ Lambda is used to:

a) Test equality of variances

b) Measure multivariate effect size

c) Test the overall difference between groups

d) Detect outliers

Answer: c) Test the overall difference between groups

Explanation: Wilks’ Lambda tests whether group means differ across multiple dependent variables.

Q94. Logistic regression is preferred when:

a) Dependent variable is continuous

b) Dependent variable is categorical (binary or multinomial)

c) Independent variables are categorical only

d) Data is normally distributed

Answer: b) Dependent variable is categorical (binary or multinomial)

Explanation: Logistic regression is used for categorical DV (e.g., pass/fail, yes/no).

Q95. In binary logistic regression, the regression coefficients are interpreted in terms of:

a) Raw scores

b) Odds ratios

c) Mean differences

d) Standard deviations

Answer: b) Odds ratios

Explanation: Logistic regression coefficients represent the log odds; exponentiating gives odds ratios.

Q96. In logistic regression, the Hosmer–Lemeshow test is used to check:

a) Normality of residuals

b) Linearity assumption

c) Goodness-of-fit of the model

d) Equality of variances

Answer: c) Goodness-of-fit of the model

Explanation: Hosmer–Lemeshow assesses how well predicted values match observed outcomes.

Q97. Cluster Analysis groups cases based on:

a) Regression weights

b) Similarity or distance measures

c) Factor loadings

d) Random assignment

Answer: b) Similarity or distance measures

Explanation: Clustering is based on similarity/distance metrics such as Euclidean distance.

Q98. In hierarchical cluster analysis, the dendrogram is used to:

a) Show regression coefficients

b) Display clusters and linkage distances

c) Test normality of data

d) Summarize factor loadings

Answer: b) Display clusters and linkage distances

Explanation: A dendrogram is a tree diagram showing how clusters are merged step by step.

Q99. K-Means cluster analysis differs from hierarchical clustering because:

a) It does not require a predefined number of clusters

b) It requires a predefined number of clusters

c) It uses factor analysis as a prerequisite

d) It can only handle categorical variables

Answer: b) It requires a predefined number of clusters

Explanation: K-Means requires the researcher to specify the number of clusters (k) in advance.

Q100. Which statistic is often used to determine the optimal number of clusters in K-Means clustering?

a) Eigenvalues

b) Wilks’ Lambda

c) Silhouette coefficient or elbow method

d) Cronbach’s Alpha

Answer: c) Silhouette coefficient or elbow method

Explanation: The elbow method and silhouette coefficient help determine the ideal number of clusters.

Q101. In SPSS, which menu path is used to run a One-Way ANOVA?

a) Analyze → Compare Means → Independent-Samples T Test

b) Analyze → Compare Means → One-Way ANOVA

c) Analyze → Regression → Linear

d) Analyze → Descriptive Statistics → Explore

Answer: b) Analyze → Compare Means → One-Way ANOVA

Explanation: One-Way ANOVA is found under Analyze → Compare Means → One-Way ANOVA.

Q102. Which assumption is required for One-Way ANOVA?

a) Homogeneity of variance

b) Independence of observations

c) Normally distributed dependent variable

d) All of the above

Answer: d) All of the above

Explanation: ANOVA assumes normality, independence, and equal variances across groups.

Q103. In SPSS, the Levene’s Test in ANOVA output tests for:

a) Normality of distribution

b) Equality of means

c) Homogeneity of variances

d) Linearity of relationship

Answer: c) Homogeneity of variances

Explanation: Levene’s test checks whether group variances are equal, which is an ANOVA assumption.

Q104. If the Levene’s Test is significant (p < 0.05), which Post Hoc test is more appropriate?

a) Tukey HSD

b) Scheffé

c) Games-Howell

d) Bonferroni

Answer: c) Games-Howell

Explanation: Games-Howell is recommended when variances are unequal.

Q105. In a Two-Way ANOVA, besides main effects, which additional effect can be tested?

a) Covariate effect

b) Regression slope

c) Interaction effect

d) Reliability

Answer: c) Interaction effect

Explanation: Two-Way ANOVA allows testing of main effects and interaction effects between factors.

Q106. In SPSS, the option Post Hoc is available for which ANOVA type?

a) One-Way ANOVA

b) Two-Way ANOVA

c) MANOVA

d) Repeated Measures ANOVA

Answer: a) One-Way ANOVA

Explanation: Post Hoc options are available in One-Way ANOVA when you have more than two groups.

Q107. In SPSS One-Way ANOVA output, the “Between Groups” sum of squares represents:

a) Variance within each group

b) Variance explained by group differences

c) Variance unexplained by model

d) Total variance in the dataset

Answer: b) Variance explained by group differences

Explanation: Between Groups variance shows how much of the total variance is explained by differences among group means.

Q108. Which statistic in SPSS ANOVA output indicates whether group means are significantly different?

a) Mean Square Within

b) F-ratio and p-value

c) Levene’s Test statistic

d) Partial Eta Squared

Answer: b) F-ratio and p-value

Explanation: The F-ratio and its associated p-value tell if group means differ significantly.

Q109. In MANOVA, instead of one dependent variable, you analyze:

a) Two or more categorical variables

b) Two or more dependent variables simultaneously

c) Two or more independent variables simultaneously

d) Interaction between categorical and continuous variables

Answer: b) Two or more dependent variables simultaneously

Explanation: MANOVA (Multivariate ANOVA) is used when there are multiple dependent variables.

Q110. In SPSS MANOVA, which statistic is commonly reported to test overall significance?

a) Cronbach’s Alpha

b) Wilks’ Lambda

c) Chi-Square

d) Phi Coefficient

Answer: b) Wilks’ Lambda

Explanation: Wilks’ Lambda is the most commonly reported statistic in MANOVA output.
