Research article

Self-Perceived Loneliness and Depression During the COVID-19 Pandemic: A Two-Wave Replication Study

Authors
  • Alessandro Carollo orcid logo (Department of Psychology and Cognitive Science, University of Trento, Rovereto, Italy)
  • Andrea Bizzego orcid logo (Department of Psychology and Cognitive Science, University of Trento, Rovereto, Italy)
  • Giulio Gabrieli orcid logo (School of Social Sciences, Nanyang Technological University, Singapore, Singapore)
  • Keri Ka-Yee Wong orcid logo (Department of Psychology and Human Development, University College London, London, UK)
  • Adrian Raine orcid logo (Departments of Criminology, Psychiatry, and Psychology, University of Pennsylvania, Philadelphia, PA, USA)
  • Gianluca Esposito orcid logo (Department of Psychology and Cognitive Science, University of Trento, Rovereto, Italy)

This is version 2 of this article; the published version can be found at: https://doi.org/10.14324/111.444/ucloe.000051

Abstract

The global COVID-19 pandemic has forced countries to impose strict lockdown restrictions and mandatory stay-at-home orders, with varying impacts on individuals' health. Combining a data-driven machine learning paradigm with a statistical approach, our previous paper documented a U-shaped pattern in levels of self-perceived loneliness in both the UK and Greek populations during the first lockdown (17 April to 17 July 2020). The current paper aimed to test the robustness of these results by focusing on data from the first and second lockdown waves in the UK. First, we tested a) the impact of the chosen model on the identification of the most time-sensitive variable across the period spent in lockdown. Two new machine learning models, namely a support vector regressor (SVR) and a multiple linear regressor (MLR), were adopted to identify the most time-sensitive variable in the UK dataset from Wave 1 (n = 435). In the second part of the study, we tested b) whether the pattern of self-perceived loneliness found in the first UK national lockdown generalised to the second wave of the UK lockdown (17 October 2020 to 31 January 2021). To do so, data from Wave 2 of the UK lockdown (n = 263) were used to conduct a graphical inspection of the week-by-week distribution of self-perceived loneliness scores. In both the SVR and MLR models, depressive symptoms emerged as the most time-sensitive variable during the lockdown period. Statistical analysis of depressive symptoms by week of lockdown revealed a U-shaped pattern between weeks 3 and 7 of Wave 1 of the UK national lockdown. Furthermore, although the week-by-week sample size in Wave 2 was too small to permit meaningful statistical inference, a graphical U-shaped distribution between weeks 3 and 9 of lockdown was observed. Consistent with past studies, these preliminary results suggest that self-perceived loneliness and depressive symptoms may be two of the most relevant symptoms to address when imposing lockdown restrictions.

Keywords: COVID-19, depression, lockdown, loneliness, global study, machine learning, SARS-CoV-2

Rights: © 2022 The Authors.


Published on
03 Nov 2022
Peer Reviewed

 Open peer review from GIULIA BALBONI

Review

Review information

DOI: 10.14293/S2199-1006.1.SOR-SOCSCI.AZRWIM.v1.RWTTIW
License:
This work has been published open access under Creative Commons Attribution License CC BY 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Conditions, terms of use and publishing policy can be found at www.scienceopen.com.

ScienceOpen disciplines: Psychology, Clinical Psychology & Psychiatry, Public health
Keywords: loneliness, COVID-19, machine learning, global study, lockdown, SARS-CoV-2, Health, depression

Review text

Just one comment:

[R3.8] Line 196, please justify the use of the non-parametric statistical test (or any other tests that will be used) and compute the effect size for any statistically significant results.

Why was a non-parametric test used rather than a parametric one? Please also clarify this in the paper. Thanks.



Note:
This review refers to round 2 of peer review.

 Open peer review from GIULIA BALBONI

Review

Review information

DOI: 10.14293/S2199-1006.1.SOR-SOCSCI.AKD3GL.v1.RWMIQY
License:
This work has been published open access under Creative Commons Attribution License CC BY 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Conditions, terms of use and publishing policy can be found at www.scienceopen.com.

ScienceOpen disciplines: Psychology, Clinical Psychology & Psychiatry, Public health
Keywords: loneliness, machine learning, COVID-19, global study, SARS-CoV-2, lockdown, Health, depression

Review text

I enjoyed reading the paper and think that this may be an excellent opportunity to present the machine learning approach and its utility in the field of mental health.

I would suggest the Authors emphasize this uniqueness. This paper is an excellent opportunity to introduce this method and show its advantage compared to the methods usually used in the field.

Nevertheless, for this aim, the machine learning approach must be described in depth, and all of its assumptions and characteristics must be made explicit in appropriate scientific language that can be easily understood.

Line 178, what are the differences between the models used, the random forest and the support vector regressor? Why might it be interesting to study whether two different models produce the same results?

Line 186, Please describe the Mean Squared Error. Is there any cutoff or value range that may allow the reader to understand the present study's findings?

Line 194, please describe the parameter C. What does it represent? Is there any cutoff or value range that may allow the reader to understand the present study's findings?

Line 224, Figure 1, please describe the metric used for the importance scores.

Line 259, on the basis of which data can it be said that depression symptoms were the best at predicting lockdown duration in weeks?

Line 102, was the order of the questionnaires randomised?

Lines 137 and 163, please also describe the age range.

Line 196, please justify using the non-parametric statistical test (or any other tests that will be used) and compute the effect size for any statistically significant results found.
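A minimal sketch of what the requested effect-size computation could look like for a Kruskal-Wallis result, using epsilon-squared. The data are synthetic and the formula is one common convention for this test, not something drawn from the paper under review:

```python
# Epsilon-squared effect size for a Kruskal-Wallis H statistic.
# Groups here are synthetic stand-ins for scores from different lockdown weeks.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
groups = [rng.normal(loc=m, size=25) for m in (9.5, 7.0, 9.5)]

h, p = stats.kruskal(*groups)
n = sum(len(g) for g in groups)            # total sample size across groups
epsilon_sq = h / ((n**2 - 1) / (n + 1))    # epsilon-squared, bounded in [0, 1]
print(round(epsilon_sq, 3))
```

Reporting such a value alongside the p-value would let readers judge whether a significant week effect is also a practically meaningful one.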

What is the utility of having found a U-shape?

Might it be interesting to verify the invariance of the results across age or gender?

I think that the sample size for each week in the second wave is too small to allow any comparison, even with a non-parametric test.



Note:
This review refers to round 1 of peer review and may pertain to an earlier version of the document.

 Open peer review from CLARISSA FERRARI

Review

Review information

DOI: 10.14293/S2199-1006.1.SOR-SOCSCI.ABACKK.v1.RVDNBU
License:
This work has been published open access under Creative Commons Attribution License CC BY 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Conditions, terms of use and publishing policy can be found at www.scienceopen.com.

ScienceOpen disciplines: Psychology, Clinical Psychology & Psychiatry, Public health
Keywords: loneliness, machine learning, COVID-19, global study, SARS-CoV-2, lockdown, Health, depression

Review text

The manuscript addresses a topical issue regarding the interrelations between mental health assessment and lockdown duration. The gathered data are of great interest and constitute a strong point of the study. In addition, the application of machine learning techniques in such a context represents an added value. However, many methodological problems considerably mitigate my enthusiasm. My major concerns regard the poor readability of the study and the choice to model lockdown duration as the outcome variable. The poor readability is mainly due to a lack of specifications, descriptions and details that makes the analyses hard to reproduce. A paramount purpose of a scientific paper should be the reliability and reproducibility of the results through a detailed description of the applied models and methods (it would be very useful to add, perhaps in supplementary materials, the code or pseudo-code used for the analysis). The second concern regards the SVR approach, and in particular the choice to predict lockdown duration. The rationale for which the mental health variables should predict lockdown duration is not clear! It would be reasonable to assess the reverse relation, i.e. the relation between the duration of lockdown (as predictor) and the mental health variables (as outcomes/dependent variables).

Other major and minor comments and suggestions are reported below.

Abstract

The abstract is quite difficult to read. It is the first part on which a reader focuses his/her attention, so it has to convey the main information clearly. Please try to re-edit the abstract with well-separated subsections for background, methods, results and conclusions.

  • lines 18-19: Please clarify here that this study excludes the Greek sample
  • line 19: aim a) is not clear, dependence on...what? Please specify
  • line 27: the most important variable in... predicting…what? please clarify

Introduction

  • lines 74-65. From a statistical point of view, the sentence "found a statistically significant U-shaped pattern" does not make sense without further specifications. Did the authors test the U-shape with a Kolmogorov test for distributions?
  • Lines 81-83. Aim a) seems to be a validation of a previously applied method and, as such, it should have been done in the previous paper. Paramount purposes of a scientific paper are the reliability and robustness of results (i.e. results should be robust in terms of the methodological approach or model used). If the Authors find different results in this study than in their previous one, they are contradicting themselves. Please explain better or provide a justification for this controversial aim.
  • Lines 86-87: please change the phrase "unique opportunity". Actually, every researcher should be able to replicate a previous study; this is a prerequisite of a scientific paper!

Table 1

  • to improve the readability and interpretability of the assessment scales, please provide the range for each of them in Table 1
  • it is not clear why some instruments have a Cronbach's alpha value and others do not. Please explain. Moreover, the use of Cronbach's alpha (for evaluating internal consistency) should be described somewhere in the Data analysis section.

Participants section

  • lines 127-132 should be part of a methodological/data analysis section and not of the participants section
  • lines 135-138: the sentences reported in these lines allude to the presence of demographic features in Table 2; please re-edit (the same holds for lines 160-163)

Data Analysis section

  • line 169: a different model with respect to... which one?
  • line 169: data-driven not data-drive
  • Line 170: most influential in... what? Maybe influential in explaining (or for) self-perceived loneliness and depression, I guess. Please explain.
  • Lines 179-185. Paramount targets of a scientific paper are readability and (mainly) reproducibility. The authors should provide all the details and explanations necessary to: i) easily understand the purpose and the results of each applied method, and ii) replicate the analyses. Please explain: 1) the final purpose of the SVR, 2) the choice of 10x15 values for the cross-validation, and 3) the choice of a 75% vs 25% split for the training and test sets. Moreover, the output/dependent variable of the SVR is specified nowhere.
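To illustrate the level of detail being requested, here is a minimal, hypothetical sketch of an SVR pipeline with a 75/25 split and a cross-validated grid search over C. All data, grid values and settings are illustrative assumptions, not the authors' actual choices:

```python
# Hypothetical SVR pipeline: 75/25 train/test split, 5-fold CV grid over C.
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(435, 12))                    # 12 mental-health predictors
y = rng.integers(1, 14, size=435).astype(float)   # weeks into lockdown (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

grid = GridSearchCV(
    make_pipeline(StandardScaler(), SVR(kernel="linear")),
    param_grid={"svr__C": np.logspace(-2, 2, 10)},  # candidate C values
    cv=5,
    scoring="neg_mean_squared_error",
)
grid.fit(X_tr, y_tr)
mse = mean_squared_error(y_te, grid.predict(X_te))
print(round(mse, 2))
```

Reporting each of these choices (dependent variable, split, grid, selection criterion) would make the analysis reproducible.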
  • lines 189-192 are unclear, please explain.
  • Lines 196-202. The Kruskal-Wallis test is the non-parametric counterpart of the ANOVA test for comparing independent samples. Here the Authors declare that they compare variable changes over time, i.e. that they compare correlated data (?). If so, the Kruskal-Wallis test is not the right one to use. If the Authors want to compare the same variable, evaluated on the same sample, across time, they have to use the Friedman test. If, instead, the Authors want to compare independent samples, this should be better explained.
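The distinction drawn above can be sketched with scipy; the data here are synthetic and the week labels are only illustrative:

```python
# Kruskal-Wallis vs Friedman: which test fits which sampling design.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
week3, week5, week7 = (rng.normal(loc=m, size=30) for m in (10, 7, 10))

# Different participants observed in each week -> independent samples:
# Kruskal-Wallis is appropriate.
h_stat, p_kw = stats.kruskal(week3, week5, week7)

# The SAME participants measured at each time point -> correlated data:
# the Friedman test is the non-parametric repeated-measures alternative.
chi2, p_fr = stats.friedmanchisquare(week3, week5, week7)

print(round(p_kw, 4), round(p_fr, 4))
```

Stating which design applies would resolve the ambiguity the review points out.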

Results

-lines 221-222. I am sorry, but I cannot see a clear U-shape in Figure 2. Please explain.

Discussion

-line 254. The reader has to reach the Discussion section to learn which outcome variable is under investigation with the SVR: the lockdown duration. Moreover, the rationale for which the mental health variables should predict lockdown duration is not clear. It would be reasonable to assess the reverse relation, i.e. the relation between the duration of lockdown (as predictor) and the mental health variables (as outcomes/dependent variables). In fact, it seems to me that the Authors' true intention to assess the reverse relation is revealed by the statement in lines 304-306. This point is worth noting and appears crucial. The nature of the variables cannot be ignored. The lockdown duration cannot be a random variable, since it is measured without error and is the same for all subjects involved in the survey. Conversely, the mental health variables are random variables because they vary among subjects. In light of this, the whole paper's modelling should be rethought by considering the mental health variables as the main outcomes (the target variables) in relation with/affected by lockdown duration.



Note:
This review refers to round 1 of peer review and may pertain to an earlier version of the document.

 Open peer review from YOUYOU WU

Review

Review information

DOI: 10.14293/S2199-1006.1.SOR-SOCSCI.AOQOQB.v1.RNSZQE
License:
This work has been published open access under Creative Commons Attribution License CC BY 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Conditions, terms of use and publishing policy can be found at www.scienceopen.com.

ScienceOpen disciplines: Psychology, Clinical Psychology & Psychiatry, Public health
Keywords: loneliness, machine learning, COVID-19, global study, SARS-CoV-2, lockdown, Health, depression

Review text

The paper has two goals: the first is to replicate a previous finding using the same dataset but a different machine learning model. The previous finding was that "perceived loneliness", among 12 mental health indicators, is the most related to time into a COVID lockdown in the UK. The second goal is to confirm a U-shaped relationship between perceived loneliness and weeks into the lockdown, using a different dataset from the second national lockdown.

My biggest concern is that there is little discussion of effect size. We only learn the MSE of the overall model and that "perceived loneliness" is relatively more related to time into lockdown than the other variables (but not by how much). The authors mentioned in their previous paper (CITE) that the overall performance is poor, with which I'd agree even without comparing the MSE or R2 with other similar machine learning tasks. Therefore, among a collection of highly correlated mental health variables that together are not strongly related to time into a lockdown, does it really matter that we identify the one that's slightly more related to time? I'd like to see more justification of how this analysis is meaningful, taking effect sizes into account.
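One concrete way to make an MSE interpretable, as the point above implies, is to compare it against a trivial baseline that always predicts the mean; the resulting R²-style ratio acts as an effect size for the whole model. The sketch below uses synthetic data with a deliberately weak signal:

```python
# Contextualising an SVR's MSE against a mean-only DummyRegressor baseline.
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = rng.normal(size=(435, 12))
y = X[:, 0] * 0.3 + rng.normal(size=435)   # weak signal on purpose

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
mse_model = mean_squared_error(y_te, SVR().fit(X_tr, y_tr).predict(X_te))
mse_base = mean_squared_error(y_te, DummyRegressor().fit(X_tr, y_tr).predict(X_te))

r2_like = 1 - mse_model / mse_base  # > 0 means the model beats the baseline
print(round(r2_like, 3))
```

A value near zero would support the review's worry that the variables are, collectively, only weakly related to time into lockdown.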

Now, assuming the purpose of the analysis is justified, I move on to the mechanics of the machine learning task. The analysis is based on a sample of 435 participants, which is admittedly quite a lot for a longitudinal study but small for a machine learning task. The authors are quite right about the need to replicate the effect using a different model given the small sample. Going down that route, I'd recommend going as far as replicating it with multiple models beyond the SVR to see if they agree. Having said that, I'd argue it's more important to replicate the finding across different data sources than with a different model. I hope the authors can search for other longitudinal data sources with similar variables and replicate the findings. At the very least, it would be good to know from the paper that there is no other suitable data source for this question and that the finding based on this one dataset is preliminary.
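The multi-model replication recommended above could be sketched as follows: fit several regressors on the same data and check whether a model-agnostic importance measure (permutation importance is used here as an illustration, not as the authors' method) picks out the same top variable. Models and data are purely illustrative:

```python
# Do several different models agree on the most important feature?
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 2] + rng.normal(scale=0.5, size=200)  # feature 2 carries the signal

top = []
for model in (SVR(), LinearRegression(), RandomForestRegressor(random_state=0)):
    model.fit(X, y)
    imp = permutation_importance(model, X, y, random_state=0).importances_mean
    top.append(int(np.argmax(imp)))  # index of the most important feature

print(top)
```

If the models disagree on the top feature, the "most time-sensitive variable" claim would rest on the choice of model rather than on the data.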

If I am reading Table 2 correctly, the sample sizes seem incredibly small (5 participants from week 3, and 2, 3, and 1 participants from weeks 4, 5, and 6) for the second analysis. The week-by-week comparison would not be meaningful at all given the small sample. Hence the data from the second wave are not suitable for confirming or rejecting the U-shape finding from the first wave.



Note:
This review refers to round 1 of peer review and may pertain to an earlier version of the document.