Plenty. If you scroll down, there are dozens of research articles linked. You just have to click on the circles for most of them :-)
Here’s an excerpt from the bottom of the article:
The most conclusive long-term study on sleep training to date is a 2012 randomized controlled trial on 326 infants, which found no difference on any measure—negative or positive—between children who were sleep trained and those who weren’t after a 5 year follow up. The study includes measurements of sleep patterns, behavior, cortisol levels, and, importantly, attachment.
That’s an interesting point. But maybe there are some substances that can induce a state that fools people who’ve never tried psychoactive compounds? I’ve heard of studies using heavy water as a placebo for alcohol, as it induces some of the same effects:
Like ethanol, heavy water temporarily changes the relative density of cupula relative to the endolymph in the vestibular organ, causing positional nystagmus, illusions of bodily rotations, dizziness, and nausea. However, the direction of nystagmus is in the opposite direction of ethanol, since it is denser than water, not lighter.
To a certain extent I agree, but I also think it’s a tricky topic that deals a fair bit with the ethics of medicine. The Atlantic has a pretty good article with arguments for and against: https://web.archive.org/web/20230201192052/https://www.theatlantic.com/health/archive/2011/12/the-placebo-debate-is-it-unethical-to-prescribe-them-to-patients/250161/
Yes, in your three situations, I’d agree that option C is the best one. But you’re disregarding a major component of any drug: side effects. Presumably ecstasy has some non-negligible side effects, so just looking at the improvement on the treated disease might not show the full picture
I agree that it’s a shame that it’s so difficult to eliminate the placebo effect from psychoactive drugs. There are probably alternative ways of teasing out the effect, if any, of MDMA therapy, but human studies take a long time and, consequently, cost a lot of money. I’d imagine the researchers would love to do the studies but don’t have the resources for it
I think the critique about conflicts of interest seems a bit misguided. It’s not the scientists who don’t want to move further with this. It’s the FDA
But if they know they’re getting ecstasy, the improvement might stem from the placebo effect, which means they’re not actually getting better from the ecstasy. They’re just getting better because they think they should be getting better
That’s a super cool link. Thanks for sharing!
I think those are all good questions that I don’t think anyone really has conclusive answers to (yet). Hopefully the researchers will have the funds in the future to investigate those and more!
From the article:
Squeezed in alongside their main projects, the investigation took eight years and included dozens of participants. The results, published in 2016, were revelatory [1]. Two to three months after giving birth, multiple regions of the cerebral cortex were, on average, 2% smaller than before conception. And most of them remained smaller two years later. Although shrinkage might evoke the idea of a deficit, the team showed that the degree of cortical reduction predicted the strength of a mother’s attachment to her infant, and proposed that pregnancy prepares the brain for parenthood.
I think that hypothesis still holds, as it has always assumed training data of sufficient quality. This study is more saying that the places where we’ve traditionally harvested training data are beginning to be polluted by low-quality, machine-generated data
From the article:
To demonstrate model collapse, the researchers took a pre-trained LLM and fine-tuned it by training it using a data set based on Wikipedia entries. They then asked the resulting model to generate its own Wikipedia-style articles. To train the next generation of the model, they started with the same pre-trained LLM, but fine-tuned it on the articles created by its predecessor. They judged the performance of each model by giving it an opening paragraph and asking it to predict the next few sentences, then comparing the output to that of the model trained on real data. The team expected to see errors crop up, says Shumaylov, but were surprised to see “things go wrong very quickly”, he says.
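The generational loop the excerpt describes can be illustrated with a toy stand-in for an LLM. This is purely a hypothetical sketch using a bigram model, not the paper’s actual setup, but it shows the same mechanism: each generation is trained only on text its predecessor generated, so the model can never recover words or patterns its predecessor stopped producing.

```python
import random

def train_bigram(text):
    """Build a bigram table: word -> list of observed next words."""
    words = text.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, n, rng):
    """Sample up to n words by walking the bigram table."""
    out = [start]
    for _ in range(n - 1):
        nxt = table.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

rng = random.Random(0)
# A tiny hypothetical "real" corpus standing in for Wikipedia text.
corpus = ("the cat sat on the mat the dog ran in the park "
          "a bird flew over the tall green tree near the river")

vocab_sizes = []
data = corpus
for gen in range(5):
    model = train_bigram(data)
    vocab_sizes.append(len(set(data.split())))
    # The next generation is trained only on its predecessor's output.
    data = generate(model, "the", 40, rng)

print(vocab_sizes)  # vocabulary can only shrink or stay the same across generations
```

Because generated text contains only words the current model already knows, diversity is monotonically non-increasing, which is one face of the “things go wrong very quickly” collapse the researchers observed.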
What they see as “bad research” is looking at an older cohort without taking into consideration their earlier drinking habits - that is, were they previously alcoholics or did they generally have other problems with their health?
If you don’t correct for these things, you might find that people who are not drinking seem less healthy than people who are. BUT, that’s not because they’re not drinking, it’s just because of their preexisting conditions. Their peers who drink a little bit tend not to have these preexisting conditions (on average)
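You can see this “sick quitter” confounding in a quick simulation. All the numbers here are made up for illustration: illness makes people both more likely to abstain and less healthy, while drinking itself has zero effect, yet the naive comparison makes drinkers look healthier.

```python
import random

rng = random.Random(42)

people = []
for _ in range(10_000):
    sick_before = rng.random() < 0.2          # preexisting condition
    # The sick are far more likely to abstain (the "sick quitter" effect).
    drinks = rng.random() < (0.2 if sick_before else 0.7)
    # Health depends only on the preexisting condition, not on drinking.
    health = rng.gauss(50.0 if sick_before else 70.0, 5.0)
    people.append((sick_before, drinks, health))

def mean_health(group):
    return sum(h for _, _, h in group) / len(group)

drinkers   = [p for p in people if p[1]]
abstainers = [p for p in people if not p[1]]

# Naive comparison: drinkers appear noticeably healthier.
print("naive difference:", mean_health(drinkers) - mean_health(abstainers))

# Stratify by preexisting condition and the "benefit" disappears.
for sick in (False, True):
    d = [p for p in drinkers if p[0] == sick]
    a = [p for p in abstainers if p[0] == sick]
    print(f"stratum sick={sick}:", mean_health(d) - mean_health(a))
```

Within each stratum the difference is roughly zero, which is exactly what correcting for earlier drinking habits and preexisting conditions is meant to reveal.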
Here’s an actual explanation of the ‘sneaked reference’:
However, we found through a chance encounter that some unscrupulous actors have added extra references, invisible in the text but present in the articles’ metadata, when they submitted the articles to scientific databases. The result? Citation counts for certain researchers or journals have skyrocketed, even though these references were not cited by the authors in their articles.
Yeah, not the clearest title for what the article is about hahah
Is it possible for you to somehow quantify traffic originating from AdNauseum? If so, how?
But that study was done on people aged 65+ for 11 weeks? I mean, sure, they didn’t measure any significant changes to the brain, but that doesn’t rule out changes over a longer timescale. 11 weeks is not long to practice a language
When talking about measurements, “precision” and “accuracy” have slightly different meanings. See here
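A small sketch of the distinction, using hypothetical repeated readings of the same quantity: accuracy is closeness of the average to the true value (low bias), while precision is repeatability (low spread). A measurement can have one without the other.

```python
import statistics

true_value = 100.0

# Hypothetical repeated measurements of the same quantity.
accurate_imprecise = [95.0, 104.0, 99.0, 106.0, 96.0]  # centered on truth, scattered
precise_inaccurate = [90.1, 90.2, 89.9, 90.0, 90.1]    # tightly clustered, but biased

for name, readings in [("accurate but imprecise", accurate_imprecise),
                       ("precise but inaccurate", precise_inaccurate)]:
    bias = statistics.mean(readings) - true_value  # accuracy: closeness to truth
    spread = statistics.stdev(readings)            # precision: repeatability
    print(f"{name}: bias={bias:+.2f}, spread={spread:.2f}")
```

The first instrument is accurate (bias near zero) but imprecise (large spread); the second is precise but inaccurate.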
Ahh that’s wack. The article it’s based on is open-access: https://www.nature.com/articles/s41586-024-07856-5