July 14, 2025

Dating Wishlists: Are we happier when we get what we want in a mate?


Loyal, funny, hot — you’ve probably got a wish list for your dream partner. But does checking all your boxes actually lead to happily ever after? In this episode, we dive into a massive global study that put the “ideal partner” hypothesis to the test. Do people really know what they want, and does getting it actually make them happier? We explore surprising statistical insights from over 10,000 romantics in 43 countries, from mean-centering and interaction effects to the good-catch confounder. Along the way, we dig into dessert metaphors, partner boat-count regression models, and the one trait that people say doesn’t matter — but secretly makes them happiest.


Statistical topics

  • Regression
  • Random Slopes and Intercepts (Random Effects) in Regression
  • Standardized Beta Coefficients in Regression
  • Interaction Effects in Regression
  • Mean Centering
  • Exploratory Analyses


Methodological morals

“Good science bares it all.”

“When the world isn't one size fits all, don't fit just one line; use random slopes and intercepts.”



References

  • Eastwick PW, Sparks J, Finkel EJ, Meza EM, Adamkovič M, Adu P, Ai T, Akintola AA, Al-Shawaf L, Apriliawati D, Arriaga P, Aubert-Teillaud B, Baník G, Barzykowski K, Batres C, Baucom KJ, Beaulieu EZ, Behnke M, Butcher N, Charles DY, Chen JM, Cheon JE, Chittham P, Chwiłkowska P, Cong CW, Copping LT, Corral-Frias NS, Ćubela Adorić V, Dizon M, Du H, Ehinmowo MI, Escribano DA, Espinosa NM, Expósito F, Feldman G, Freitag R, Frias Armenta M, Gallyamova A, Gillath O, Gjoneska B, Gkinopoulos T, Grafe F, Grigoryev D, Groyecka-Bernard A, Gunaydin G, Ilustrisimo R, Impett E, Kačmár P, Kim YH, Kocur M, Kowal M, Krishna M, Labor PD, Lu JG, Lucas MY, Małecki WP, Malinakova K, Meißner S, Meier Z, Misiak M, Muise A, Novak L, O J, Özdoğru AA, Park HG, Paruzel M, Pavlović Z, Püski M, Ribeiro G, Roberts SC, Röer JP, Ropovik I, Ross RM, Sakman E, Salvador CE, Selcuk E, Skakoon-Sparling S, Sorokowska A, Sorokowski P, Spasovski O, Stanton SCE, Stewart SLK, Swami V, Szaszi B, Takashima K, Tavel P, Tejada J, Tu E, Tuominen J, Vaidis D, Vally Z, Vaughn LA, Villanueva-Moya L, Wisnuwardhani D, Yamada Y, Yonemitsu F, Žídková R, Živná K, Coles NA. A worldwide test of the predictive validity of ideal partner preference matching. J Pers Soc Psychol. 2025 Jan;128(1):123-146. doi: 10.1037/pspp0000524
  • Love Factually Podcast: https://www.lovefactuallypod.com/


Kristin and Regina’s online courses: 

Demystifying Data: A Modern Approach to Statistical Understanding  

Clinical Trials: Design, Strategy, and Analysis 

Medical Statistics Certificate Program  

Writing in the Sciences 


Programs that we teach in:

Epidemiology and Clinical Research Graduate Certificate Program 


Find us on:

Kristin -  LinkedIn & Twitter/X

Regina - LinkedIn & ReginaNuzzo.com


  • (00:00) - Intro
  • (04:57) - Actual dating profile wishlists vs study wishlists
  • (09:12) - Juicy paper details
  • (18:31) - What the study actually asked – wishlist, partner resume, relationship satisfaction
  • (24:10) - Linear regression illustrated through number of boats your partner has
  • (30:37) - Standardized regression coefficients illustrated through spouse height concordance
  • (34:52) - Good catch confounder: We all just want the same high-quality ice cream / mate
  • (39:46) - Does your personalized wishlist matter? Results
  • (42:01) - Wishlist regression interaction effects: like chocolate and peanut butter
  • (45:51) - Partner traits result in happiness bonus points
  • (49:51) - What do we say we want – and what really makes us happy? Surprise
  • (54:10) - Gender stereotypes and whether they held up
  • (56:51) - Random effects models and boats again
  • (59:30) - Other cool things they did
  • (01:00:41) - One-minute paper summary
  • (01:02:23) - Wrap-up, rate the claim, methodological morals



[Regina] (0:00 - 0:04)
I came up with a weird analogy that might help illustrate the problem.


[Kristin] (0:04 - 0:05)
Is it sex related, Regina?


[Regina] (0:06 - 0:10)
Surprisingly, it is not. It is dessert related.


[Kristin] (0:10 - 0:40)
Well, you know, those are not mutually exclusive. Welcome to Normal Curves. This is a podcast for anyone who wants to learn about scientific studies and the statistics behind them.


It's like a journal club, except we pick topics that are fun, relevant, and sometimes a little spicy. We evaluate the evidence and we also give you the tools that you need to evaluate scientific studies on your own. I'm Kristin Sainani.


I'm a professor at Stanford University.


[Regina] (0:40 - 0:46)
And I'm Regina Nuzzo. I'm a professor at Gallaudet University and part-time lecturer at Stanford.


[Kristin] (0:46 - 0:52)
We are not medical doctors. We are PhDs, so nothing in this podcast should be construed as medical advice.


[Regina] (0:52 - 1:04)
Also, this podcast is separate from our day jobs at Stanford and Gallaudet University. Kristen, today we're going to talk about dating and romance. It's been a while since we've done something on dating and romance.


[Kristin] (1:04 - 1:08)
Oh, good. We're getting back to our roots because that's one of the themes of this podcast.


[Regina] (1:08 - 1:48)
We are going to talk about this idea that when we date, we usually have this wish list of priorities for the sort of person that we want. Maybe I want someone who is ambitious and confident and successful, or maybe I'm the sort of person who wants, like, a warm, loyal, good listener. Right.


We all have certain things that we think we want. Right. And the assumption is that we know what we want and that we are happier when we get what we want.


This idea has a name in psychology research. It's called the matching hypothesis, but I don't like that name. So, I am thinking of it as the dating wish list hypothesis.


[Kristin] (1:48 - 1:51)
Oh, that is a much better name. Yes, Regina.


[Regina] (1:52 - 2:08)
Thank you. So, that's the claim we're looking at today, is that this is correct, that the more our real life partner matches this ideal partner we have in our head, the more satisfied we are with the relationship. It's kind of like, do we actually want what we say we want?


[Kristin] (2:08 - 2:25)
I mean, it seems like that should be true. And, Regina, isn't this the whole premise of online dating? That if they can match you up closely enough to what you say you want, you're going to end up in a better relationship.


That's all the fancy algorithms, right? That is the dream, at least, in online dating.


[Regina] (2:26 - 3:56)
But it's weird because psychology researchers are not even sure if this idea is true, that we know what we want and that it makes us happy. It's been studied for about the past 25 years and the results have been mixed. Sometimes studies show these big effects and sometimes nothing at all.


And today we're going to look at the evidence for this wish list hypothesis from just one study, but it is a very large study, very rigorous, statistically interesting. There's a lot to talk about. But, Kristin, it's a special paper for another reason that is relevant to you and me, and that is because the first and third authors, Paul Eastwick and Eli Finkel, in addition to being psychology professors and lovers of statistics, are co-hosts of a podcast called Love Factually.


Oh, that's a great podcast. It's so much fun. I love it.


So, they take romantic comedies, rom-com movies, and then they analyze them through the lens of psychology research. So, they did Barbie and how to navigate the friend zone, you know, bringing all the science in there. Kristin, I realized it's kind of like the inverse of what we do, actually, because you and I start with the science and the stats and the paper, and then somehow we accidentally end up talking about Sex and the City or HotOrNot.com, right?


Then we get to the rom-coms.


[Kristin] (3:56 - 4:18)
Regina, I think they have it a little bit easier, though, because they are love researchers and we are statisticians. So, they are already closer to a podcast than we are. I think we have a harder job.


We do have a harder job. Speaking of starting with the statistics first, Regina, you mentioned that there are good, juicy stats in this paper. So, can you say a little bit about what we're talking about on the statistics front today?


[Regina] (4:18 - 4:37)
Yeah, really cool things. Mean centering, for one, and a lot of nifty things in regression models, including interaction effects, standardized beta coefficients, and random slopes and intercepts. And all of those are much more interesting than they sound, actually, I promise.


[Kristin] (4:37 - 4:48)
Oh, well, actually, this is my wish list for the podcast, Regina, not my wish list for dating. Random slopes and intercepts, it just warms my heart. Is there random slopes and intercepts in dating, too?


[Regina] (4:49 - 4:55)
I don't know what that would be. Slopes and intercepts, that sounds kind of dirty, actually.


[Kristin] (4:57 - 5:02)
Okay, so, Regina, do you have a wish list for dating?


[Regina] (5:02 - 5:13)
I do, actually. At least, I always fill out the part on the dating app where it asks you what you're looking for, and I went back to look at one of them to see what I actually wrote.


[Kristin] (5:14 - 5:15)
Oh, and are you going to share it with us?


[Regina] (5:15 - 5:42)
Yeah, I was afraid you were going to ask that, and this is a bit personal, but whatever, here goes. I only had 140 characters, so it had to be very to the point, and it's looking for smart, generous, curious, engaging, outdoorsy, sensual, playful partner who knows how to communicate and smells great when he's sweaty.


[Kristin] (5:42 - 6:00)
Oh, I love your writing, Regina. I noticed you slipped in the smells great when sweaty. This is going back to the pheromones episode.


Exactly, the sweaty t-shirt study. Regina, this was not checkboxes then, though. This was free text, like you had to put your wish list in tweet form.


[Regina] (6:00 - 6:03)
It was exactly 140 characters.


[Kristin] (6:04 - 6:13)
But how does this dating app use this to match you? Are they using AI or text mining to match you based on this description? How does that work?


[Regina] (6:13 - 6:21)
Oh, that sounds more sophisticated than what they are actually doing. I think this is just what the potential dates see. This is what they see on my profile.


[Kristin] (6:21 - 6:22)
Got it.


[Regina] (6:22 - 6:35)
Sometimes they do respond and tell me if they think they fit my list or how well they fit it. One recently said that he found it to be a very challenging list, but that he was digging my writing style.


[Kristin] (6:35 - 7:36)
Digging, I like that verb. What about you, though, Kristen? Do you have a wish list?


It's been a long time since I dated, and I think when I was young, I didn't really have an explicit wish list. I had a type, though, like exotic, dark, cute, bad boy vibe, and also not too short because I'm tall. And probably in the back of my head, I also had some basics like not homeless or unemployed.


That bar was kind of low, I guess. Regina, I am older and wiser now, though, so if I was making a wish list today, I would definitely prioritize differently. I think it would be like supportive, honest, emotionally mature, accountable, generous.


And I know that sounds kind of boring, but this is my advice for young listeners. That is what you should be going for, and I have learned this through hard life lessons. Aw.


All right, Regina, getting back to the paper, though, I'm sure they didn't have participants fill out a text box because that is awful data to analyze. So how did they measure the wish list in this study?


[Regina] (7:36 - 8:06)
They had a set of 35 core traits. So attractiveness, or nice body, or honesty, patience, intelligence, sensitivity, supportive, religious, dresses well, financially secure, ambitious, sexy. It's like a whole wide net.


I'll put them all in the show notes. People had to rate each of these 35 traits on a scale of 1 to 11 on how important each of these is for you in a mate.


[Kristin] (8:06 - 8:23)
Oh, I see. So this is not a binary, yes, I want financially secure, or no, I don't. It's a rating scale.


So you might rate good dresser as a 6, but intelligence as an 11, for example. And it's how you prioritize these things. Right.


All right. So, Regina, tell me more about the details of the paper now.


[Regina] (8:24 - 8:31)
The title is, get ready, A Worldwide Test of the Predictive Validity of Ideal Partner Preference Matching.


[Kristin] (8:32 - 8:43)
That is your typical boring academic title. Ideal Partner Preference Matching, is that their fancy academic name for the wish list hypothesis, Regina?


[Regina] (8:43 - 8:50)
That is exactly what it is, yes. Maybe they were going for Google Scholar search engine optimization points.


[Kristin] (8:51 - 9:04)
Could be. I feel like we need to give this paper a more exciting name, Regina, in honor of the Love Factually podcast. I think we should give this paper a movie title name, right?


What would this paper's name be if it were a movie?


[Regina] (9:04 - 9:06)
Oh, a movie, like a rom-com?


[Kristin] (9:07 - 9:07)
Yes. What would it be?


[Regina] (9:09 - 9:11)
Be Careful What You Wish For.


[Kristin] (9:12 - 9:35)
Oh, I love it. Yeah, see, this paper would have got so much more attention if they named it Be Careful What You Wish For instead of this boring academic title. Regina, I'm also thinking in honor of the Love Factually podcast, at the beginning of every episode, they do this summary of the movie in under a minute.


I want to give, by the end of this episode, the one-minute movie summary of this paper. What would the paper be if it were a movie?


[Regina] (9:35 - 9:41)
It's a long paper, so it might be like the never-ending story. That's the challenge, though, is that you have to put it in 60 seconds.


[Kristin] (9:41 - 9:44)
That's what they do on the Love Factually podcast, so we're going to do it at the end.


[Regina] (9:45 - 9:57)
Okay, I can't wait. Let me tell you more about the paper, then. It was published in the Journal of Personality and Social Psychology.


It will actually be published in October of this year, available early.


[Kristin] (9:57 - 10:04)
Oh, so we're covering a paper that's not even officially published yet. We're getting a scoop here. We are so timely.


I'm proud of us, Regina.


[Regina] (10:04 - 10:09)
We are so timely, considering that our first episode was on a paper 35 years old.


[Kristin] (10:10 - 10:13)
We're covering all the bases, classic papers and contemporary.


[Regina] (10:14 - 10:27)
This might be an orgy, because there were 100 authors. Just listing all of their names, affiliations, and what role they played in the manuscript took up over four pages.


[Kristin] (10:28 - 10:35)
So I'm guessing, Regina, that this might be what we call big science, like having a lot of teams come together to do a big project.


[Regina] (10:36 - 10:54)
Yes. Let's talk more about how they pulled that off later, though. The research team did a lot of other things that we like, statistically.


Not only did they pre-register the study, Kristen, you and I talk a lot about that, but this is what is called a registered report. I love a registered report.


[Kristin] (10:55 - 10:58)
You need to explain what a registered report is for our audience, Regina.


[Regina] (10:58 - 11:36)
Registered report is a bad name because it makes it sound so blah, registered report. But really, it's kind of like shiny and sparkly. It's a special type of paper in a journal, and it means that the researchers submitted their entire protocol to the journal before they even started the study.


And that protocol was peer-reviewed. That's the important thing. And the journal agreed to publish the resulting paper no matter what the outcome was.


So there's no hiding. This has become a really big thing in the past 15 years. It's catching on in psychology, where they really needed this extra rigor, and I wrote about registered reports for Nature.


[Kristin] (11:36 - 11:42)
We'll put a link in the show notes to that paper. Regina, did they also have open data and code?


[Regina] (11:42 - 11:50)
All of their data and the data analysis code they used to analyze the data are available online on the Open Science Framework.


[Kristin] (11:50 - 11:56)
That is fantastic. So more data that I can use in class along with the hookworm therapy data.


[Regina] (11:57 - 12:13)
You're so hip. So, Kristen, it's a very long paper, plus a 31-page supplemental file. They put everything in there, all kinds of analyses.


And to make sure that this episode is not 10 hours long, I will just be presenting a few results today.


[Kristin] (12:14 - 12:36)
Regina, I love supplements because they are great for statistical sleuthing. Sometimes there's some additional analyses that I want to see, and I can find it in a supplement that makes me very happy. I do have to say, Regina, that this paper is a little unwieldy, and it's the opposite of something I frequently complain about with published papers, which is salami slicing.


Salamis?


[Regina] (12:39 - 12:41)
That sounds dirty, so please explain that.


[Kristin] (12:41 - 13:33)
Salami slicing is when researchers, this is an academic term, when researchers take a single data set or single study, and then they chop it into a bunch of thin papers. Like, we're going to look at pain in the left knee in one paper, but then in a separate paper, we're looking at pain in the right knee. And they do this, of course, because they want to get as many papers as possible.


They want to pad their publication record. But the problem is that with all of these little papers, you're losing the context. You don't see the whole picture.


And also, it's just cluttering the literature often with a lot of small and flimsy studies. Charcuterie. I'm going to call it the charcuterie effect.


Charcuterie effect, yes. But this paper we are talking about today is the exact opposite of that. They have crammed everything in this one paper, and it's like one massive redwood tree thick log of salami.


[Regina] (13:34 - 13:35)
I'm not even going to go there.


[Kristin] (13:35 - 13:42)
I'm not even going to say anything about your massive thick log. I'm talking totally academic here, Regina.


[Regina] (13:42 - 13:44)
Totally. Not Freudian at all.


[Kristin] (13:44 - 13:45)
Exactly.


[Regina] (13:46 - 14:00)
Well, Kristen, salamis aside, I feel like you need to be careful what you wish for, because you think you know what will make you happy in journal papers. But will your journal paper wish list really make you satisfied?


[Kristin] (14:01 - 14:37)
It is true. I do have a pretty set wish list for journal papers, unlike for dating. Open data, pre-registered, not salami sliced, good statistics.


And they are hitting a lot on my checklist. But I have to admit, it almost went too far on the anti-salami slicing, because this paper is very dense and a little overwhelming. Yes, be careful what you wish for.


Too big. Yes. Some things can be too big.


Regina, speaking of big, how big was this study? What was the sample size?


[Regina] (14:39 - 14:44)
The N was very, very big. 10,000 participants, actually.


[Kristin] (14:44 - 14:53)
Wow. That is a big sample size. And that is also on my journal paper wish list.


I'm not sure it's on my dating wish list.


[Regina] (14:56 - 15:10)
It gets better and better. This was global, 43 countries. China, Slovenia, Turkey, Malaysia, Nigeria, United Arab Emirates, 60 research sites altogether, 22 different languages.


[Kristin] (15:10 - 15:31)
This explains why they have so many authors, right? All of these different teams, hence a lot of authors. But Regina, who were these participants?


Were these all college students in a Psych 101 class looking for extra credit, like we talked about back in that red dress effect episode? We talked about that study with only 23 college students.


[Regina] (15:32 - 15:57)
This is about as far from that as you can get. So not only 10,000, two-thirds of the sites included non-college folks. So much better.


Average age was 29, which seems young. But then when you look at the standard deviation, it was 12 years. So that means they got some older folks, too.


They got two-thirds women, one-third men, a good mix of sexual orientations and educational levels, too.


[Kristin] (15:57 - 15:59)
That's impressive. It's really good for generalizability.


[Regina] (16:00 - 16:21)
Here is something that these researchers did that most previous studies did not do, and that is include single people in here as well. The researchers here thought, maybe there's a difference when we're talking about wish lists if you're already stuck with someone in a relationship versus out there on the market looking at potential partners.


[Kristin] (16:22 - 16:23)
I think you're right about that, Regina.


[Regina] (16:24 - 16:42)
Yes. So what the researchers did for these single people, they asked them to think of a person they wish they could be with, a crush or a love interest. The instructions were to name the person with whom you would most desire to have a romantic relationship.


[Kristin] (16:43 - 16:54)
Oh, interesting. I think we should call this person the target crush. And, Regina, does it have to be somebody from your everyday life or could it be someone like Brad Pitt, for example?


[Regina] (16:55 - 16:56)
No, it had to be a real person that you know.


[Kristin] (16:56 - 17:00)
No, I'm not sure who I would pick for this. It's actually kind of a hard question.


[Regina] (17:01 - 17:16)
I can do this pretty easily, I think. I thought about this going through the paper. I'm thinking of maybe an ex where you were compatible, but you were too young and life took you in different directions, maybe.


Okay. Or maybe it's a crush and you like them, but they don't like you. Right.


[Kristin] (17:16 - 17:17)
So you're not dating.


[Regina] (17:18 - 17:38)
Yep, yep. Been there. Or more tragically, but also more frequently, a nice friend who seems like a good catch, except that good catch already got caught by someone else first.


They're already taken. I find this last one happens more frequently as time goes by. I've got a lot of those.


[Kristin] (17:38 - 17:42)
Oh, that sounds specific, Regina. Are you thinking of somebody specific? Do tell.


[Regina] (17:44 - 17:51)
Kristen, I shared my dating profile with you. I am definitely not sharing my secret married crushes.


[Kristin] (17:52 - 18:02)
Okay, fine. All right. So Regina, once they have all of these people, both partnered and single across 43 countries, what exactly did they ask them to do in this study?


[Regina] (18:02 - 18:16)
Each individual research site was in charge of logistics for their own participants. So it's not like Paul and Eli were sitting around emailing surveys to 10,000 people around the world in between podcasting about Pretty Woman.


[Kristin] (18:17 - 18:19)
Oh, they have an episode about Pretty Woman? That's such a classic.


[Regina] (18:20 - 18:29)
Oh, it's such a good one, right? So this episode, they talked about stereotypes and how transactional mindsets can undermine relationships.


[Kristin] (18:29 - 18:31)
Oh, I'm going to have to go back and listen to that.


[Regina] (18:31 - 18:48)
So, okay, research teams, they send online surveys to the participants. And the first thing the participants did, rate how important each of those 35 traits was in an ideal partner. 1 to 11, not at all desirable to extremely desirable.


[Kristin] (18:49 - 19:03)
These are the traits like attractiveness, good dresser, honesty, all the things on the wishlist. But Regina, I'm wondering why 1 to 11? I'm used to like rating things from 1 to 10.


So is there some special statistical thing that they picked 1 to 11?


[Regina] (19:03 - 19:14)
Good question. I don't know other than 1 to 10 does not give you a neutral point. So at least 1 to 11 lets you choose 6 as the meh, I don't care one way or the other point.


[Kristin] (19:14 - 19:18)
Right. 6 would be right in the middle, so it can be entirely neutral. That makes sense.


[Regina] (19:18 - 19:31)
So the average for most of the traits was about 8 or 9 out of 11. Funny, for example, humorous was 9 on average. The lowest was religious, and that was about 5 on average.


Oh, okay.


[Kristin] (19:32 - 19:39)
That makes sense. Some people don't want religious in a partner, but it's good that there were some differences in the preferences. It wasn't just like everybody was 11.


[Regina] (19:40 - 19:47)
Then after the wishlist, that's when the researchers asked them to give the first name of their romantic partner or target crush.


[Kristin] (19:47 - 19:55)
So you had to put a name down. I hope that they omitted the names when they put this into an open data set online, publicly available.


[Regina] (19:56 - 20:21)
Properly anonymized, properly anonymized, yes. So we've got the wishlist, the name of their partner or target crush, and now the participants had to rate the partner or target crush on each of those same traits. Again, a scale of 1 to 11.


1 is the person does not really have the trait at all. 11 means they really have it. We can call this the partner resume.


[Kristin] (20:21 - 20:39)
Ah, I guess this is why you can't use a celebrity like Brad Pitt, because you have to know them personally to be able to judge how sensitive or how honest they are. But, Regina, this is still really tricky for single people. How do you rate someone that you wish you were dating?


You may not know them well enough.


[Regina] (20:40 - 21:03)
True. This is a problem. And honestly, I think this is a weakness of the study.


Anyway, now we have the wishlist and the partner resume. Those are the two predictor variables. Now we need the key outcome variable of interest, and that is how happy are you in the relationship?


The researchers called this romantic evaluation.


[Kristin] (21:04 - 21:11)
Romantic evaluation. That's kind of a vague term. Regina, can you explain it and also maybe give it a less jargony name?


[Regina] (21:11 - 21:39)
The researchers used a set of questions that measure relationship satisfaction, like how happy or satisfied are you with your partner? So we could just call it relationship satisfaction. But, Kristen, we have to remember that there are single people in the sample, and single people have no actual real partner.


So it's like imaginary relationship satisfaction for them.


[Kristin] (21:40 - 21:45)
Relationship satisfaction. I like that name. What kinds of questions did they ask to get at this, Regina?


[Regina] (21:45 - 22:14)
Researchers asked the participants to rate, on a scale of 1 to 11, how much they agreed with statements like the following, with their target crush's or partner's name filled in: Adam is the first person I would turn to if I had a problem. It is important to me to see or talk with Adam regularly.


So if I'm a single person and I'm talking about my target married crush, assuming that I have a close relationship with Adam and that Adam's wife is completely cool with this.


[Kristin] (22:14 - 22:30)
Right. I can see how this is a tricky question for somebody who's single. Maybe you want Adam to be the first person you turn to, but he's probably not already.


And, Regina, I happen to know that you have an Adam in your X files. So now this is making me wonder, is that your target crush? Are you revealing that to us now?


[Regina] (22:31 - 22:39)
First of all, you know me too well. And second of all, Adam might just be a random name that I picked out of the blue.


[Kristin] (22:39 - 23:09)
Right, starting with the A's. Sure. All right.


So, Regina, just to recap, if I was in this study, the first thing I would be doing is rating those 35 traits to say how important they are to me in a partner. And then I'd be naming some person, either an actual partner or a person I'd like to date. And then I would be rating that person on all 35 traits.


And then finally, I would be reporting how satisfied I am with this relationship in either a real or imaginary sense.


[Regina] (23:10 - 23:16)
Exactly. And then you do a little statistical wizardry and you get some really cool results out at the end.


[Kristin] (23:17 - 23:22)
Well, I can't wait to hear about their statistical analyses and their results, but let's take a short break first.


[Regina] (23:31 - 23:37)
Kristen, we've talked about your course, Writing in the Sciences on Coursera. Maybe you could tell people a little bit more about it.


[Kristin] (23:37 - 24:09)
It's a self-paced course for anyone who needs to write scientific papers. And I give a lot of practical demonstrations for how to improve your writing to make it much more clear and concise. And you can earn a certificate from Coursera.


You can find a link to that course on our website, NormalCurves.com. Welcome back to Normal Curves. Today, we're looking at the claim that what we think we want in a partner actually makes us happier.


This is the wishlist hypothesis.


[Regina] (24:10 - 24:50)
And now I'm going to talk about four main results that I found most interesting. Three of them directly tested this wishlist hypothesis. The fourth was exploratory, but really fascinating, so I'm including it anyway.


The first analysis calculated a match score by correlating each participant's wishlist with their partner's resume, basically seeing how closely these pairs of ratings matched across all 35 traits. And that correlation is what I am calling the match score. But, Kristen, you'll enjoy this.


They called it the pattern metric.
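
For listeners who want to see the mechanics, here is a toy sketch in Python of how a per-person match score could be computed: correlate a participant's 35 importance ratings with the ratings they gave their partner. This is an illustration with invented numbers, not the authors' actual analysis code (which is on the Open Science Framework).

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data for one participant: importance ratings (the wishlist) and
# ratings of the partner (the partner resume), both on a 1-to-11 scale,
# across 35 traits.
n_traits = 35
wishlist = rng.integers(1, 12, size=n_traits)        # how much I want each trait
partner_resume = rng.integers(1, 12, size=n_traits)  # how much my partner has it

# The match score (what the paper calls the pattern metric): the correlation
# between the two profiles across the 35 traits.
match_score = np.corrcoef(wishlist, partner_resume)[0, 1]
print(f"Match score for this participant: {match_score:.2f}")
```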


[Kristin] (24:52 - 25:12)
Pattern metric makes me think of a quilt and a stopwatch. It's classic academic jargon. So, yes, let's just call it the match score.


And this match score, Regina, it's basically what online dating companies try to calculate, right? To figure out who's a good fit for you? Yeah, pretty much.


All right, so they've calculated the match score, and what do they do with that?


[Regina] (25:13 - 25:32)
They use linear regression to see if that match score predicts relationship satisfaction, basically how happy you are. Kristen, you and I have talked about regression before, but maybe right now we could do a little statistical detour and help explain the basics of regression intuitively.


[Kristin] (25:33 - 26:29)
Yes, let's do a deep dive on linear regression. And, Regina, I want to use a simple example. Would it be okay if I used a snippet from a dating story that you were recently recounting to me and my kids because it really cracked us up?


Uh-oh. It just makes a good variable for my simple example. Go ahead.


So you were telling us about a date who was basically reading off his partner resume to you, and he mentioned that he has seven boats. And for some reason, that just cracked my kids up. Seven boats.


It got me wondering, though, does the number of boats that your partner owns predict relationship happiness? So let's use those two variables for our regression example. Number of boats will be our x variable.


That's the variable that goes on the horizontal axis. And relationship happiness score will be our y variable that goes on the vertical axis. And imagine we plot a bunch of people's data.


[Regina] (26:29 - 26:32)
We must be taking a random sample from the yacht club, right?


[Kristin] (26:33 - 26:49)
Well, clearly this is not a representative sample when we are talking about people having multiple boats, yes. But we have this scatter plot. We plot the data, and then we fit the line that best fits the data.


And that line is going to help tell us whether the number of boats predicts relationship happiness.


[Regina] (26:50 - 27:10)
I am not sure for me that would actually be a straight line, Kristen. Happiness might go up with one or two boats. I'll give you that.


But then it might hit a peak and then plateau or maybe even go down. Is there a point at which more boats is just too many boats? You don't want a boat hoarder.


[Kristin] (27:10 - 27:32)
You don't want a hoarder in anything, Regina. Trust me on that one. Yes, you're right.


This may not be linear, but for the purposes of this fake example, we are going to pretend that a straight line fits the data well. And when we say that we are finding the line that best fits the data, what we mean is that we are finding the equation of that line. And what is the equation of a line, Regina?


[Regina] (27:32 - 27:38)
Y equals mx plus b. And we're going to talk about each component now, right?


[Kristin] (27:38 - 27:56)
Yes. In our example, y is our relationship happiness, and x is the number of boats. And then m and b are the two values that define the line.


m is the slope of the line, b is the y-intercept. And those are the two values we are solving for in a linear regression. We're trying to find the slope and the intercept.


[Regina] (27:57 - 28:02)
I think we're giving people flashbacks to high school here. Yes, probably.


[Kristin] (28:02 - 28:14)
For this fake example, I'm going to make up some values out of thin air, but imagine that we find that the slope is positive one and the y-intercept is three. What would that mean, Regina? A little quiz here.


[Regina] (28:15 - 28:38)
Oh, I love these quizzes. Let's start with y-intercept. That just tells us where the line is hitting the y-axis.


When x is zero, what is y? This means that when a man has zero boats, your relationship happiness is, on average, a three. So that's your baseline happiness when your dude has zero boats.


[Kristin] (28:38 - 28:48)
Often we don't care much about the y-intercept, Regina. It doesn't tell us anything about the relationship between x and y, but we do need that value to anchor the line. You can't have a line without b.


[Regina] (28:49 - 28:54)
That sounds very philosophical. What would a line be without a b?


[Kristin] (28:55 - 29:03)
It would be a line that goes through the origin, Regina, of course. But back to slope now, the m part. Can you interpret that for us, Regina?


[Regina] (29:03 - 29:35)
Remember, slope is rise over run. How much does y change as x changes? And in this case, the value is positive one, which would mean that for every extra boat your partner has, your happiness goes up by one point.


So for my boat guy date, this would predict that he would give me relationship happiness of 10. That is a baseline of 3 plus 7 times 1 equals 10 happiness points.
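
If you like to see the numbers run, here is a minimal sketch of that made-up boat regression in Python. The data points are invented to roughly follow a slope of 1 and an intercept of 3; nothing here comes from the paper.

```python
import numpy as np

# Invented data: number of boats a partner owns (x) and relationship
# happiness on a 1-to-11 scale (y).
boats     = np.array([0, 0, 1, 2, 3, 4, 5, 6, 7])
happiness = np.array([3, 4, 4, 5, 6, 7, 8, 9, 10])

# Ordinary least squares fit of a straight line, y = m*x + b.
m, b = np.polyfit(boats, happiness, deg=1)
print(f"slope m = {m:.2f} happiness points per boat, intercept b = {b:.2f}")

# Predicted happiness for the seven-boat date: b + m*7.
print(f"predicted happiness with 7 boats: {b + m * 7:.1f}")
```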


[Kristin] (29:35 - 29:43)
And this is obviously an artificial made-up example. Regina, as long as we're being philosophical, do you know why do they call the slope m?


[Regina] (29:43 - 29:57)
I actually looked it up, Kristin, and no one knows. The theory that I like best is that it comes from the French word monter, which means to climb or to mount, which shares the same Latin root as mountain.


[Kristin] (29:57 - 30:36)
Oh, m for mountain. That's a good way to remember it, because the slope is the steepness of the line like a mountain. You know, they use m in geometry and algebra, but in statistics, we give the slope an even fancier name.


We give it a Greek letter. It's called the beta coefficient. Regina, one thing to keep in mind about slope, the value depends totally on the units.


And I chose nice units here, happiness points, and number of boats. But you can't interpret slope if you don't know the units. Let's get back to the paper now.


That match score variable, it is not like number of boats. It's not a tangible unit. So what units did they use, Regina?


[Regina] (30:37 - 30:49)
Here they did something nice. They gave the slope as a standardized beta, which means that they converted the units into standard deviation units. It's a kind of universal currency.


[Kristin] (30:49 - 31:14)
Right, we've seen standardized effect sizes before. In the red dress effect episode, we learned about Cohen's d, which is a measure of how much two groups differ. Cohen's d is different than standardized betas.


Standardized betas answer a different statistical question. They're asking about the strength of the relationship between two variables. But both of these share the same universal units.


They both have units of standard deviation. Right.


[Regina] (31:15 - 31:44)
And here the standardized beta was 0.37. Statistically significant, and it is positive, which means the higher the match score with your partner, the more satisfied you are with your partner. But now let's talk about that value of 0.37. What does it mean? It means that for every one standard deviation increase in match score, your satisfaction goes up by 37% of a standard deviation.
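
Another illustrative sketch, not the study's code: a standardized beta is just the slope you get after converting both variables to z-scores, so it is measured in standard deviation units. Here we simulate data with a built-in moderate relationship and recover it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data for 10,000 people, built so the true standardized
# relationship between match score and satisfaction is about 0.37.
n = 10_000
match_score = rng.normal(size=n)
satisfaction = 0.37 * match_score + rng.normal(scale=0.93, size=n)

# Convert each variable to z-scores (mean 0, standard deviation 1).
def zscore(x):
    return (x - x.mean()) / x.std()

# The slope of the regression on the z-scored variables is the standardized
# beta: a 1-SD increase in match score predicts a beta-SD increase in
# satisfaction.
beta_std, _ = np.polyfit(zscore(match_score), zscore(satisfaction), deg=1)
print(f"standardized beta: {beta_std:.2f}")
```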


[Kristin] (31:44 - 32:01)
This is hard to interpret if you're not a statistician. Standard deviation, let alone 37% of a standard deviation. But Regina, in the red dress effect episode, you helped us to get a feel for Cohen's d using an analogy with heights.


Do you have something similar here in your statistical bag of analogy tricks?


[Regina] (32:03 - 32:29)
I did look up something to use as an analogy for standardized beta. It's not perfect, but let's just go with it. OK.


Anyway, let's talk about height between spouses, first of all. So that relationship tends to have a standardized beta of 0.1, 0.2, something like that, pretty small. This means if you are taller than average, then your spouse is slightly more likely to be taller than average also.


[Kristin] (32:29 - 32:33)
Well, that makes sense because these do tend to go together, but not a huge amount.


[Regina] (32:33 - 33:04)
Yeah, not huge. Let's talk about something that is a little stronger. Heights of two siblings of the same sex.


So two sisters or two brothers. The standardized beta here is about 0.5 now. This just means, again, if you are taller than average, then your sibling is also likely to be taller than average.


If you're shorter than average, they are likely to be shorter than average. It makes sense that this relationship would be stronger than it was with spouses because siblings share about 50% of their DNA, unlike spouses.


[Kristin] (33:05 - 33:09)
Sure. Siblings tend to be similar in height, but they're not exactly the same height. Yeah.


[Regina] (33:09 - 33:22)
Now let's go super duper big and talk about the height of identical twins. Just to put it into context, the standardized beta is going to be at least 0.9 because identical twins share 100% of their DNA.


[Kristin] (33:22 - 33:28)
Right. They tend to be very similar in height. There might be a little bit of variation because of diet or something, but they're very close.


[Regina] (33:29 - 33:48)
I think all of this puts into context our 0.37 standardized beta for the link between match score and relationship satisfaction. It's not as strong as the concordance you'd see in height with siblings, but stronger than the concordance you'd see in height with spouses.


[Kristin] (33:48 - 34:14)
0.37 is actually considered a medium effect. Remember, with Cohen's d, we talked about these rough rules of thumb for small, medium, and large. The values, of course, are different for standardized betas.


0.1 is considered a small effect, 0.3 is medium, and 0.5 is large. As we said with Cohen's d, these should not be taken as gospel. They are just really rough guidelines.


Exactly.


[Regina] (34:15 - 34:29)
Let's talk now about what this means to actual people. We have this medium-sized effect of match score actually having some impact on your relationship satisfaction, and this is good.


[Kristin] (34:29 - 34:40)
Yeah, it seems good for online dating companies, Regina. Actually, if this is true, can we get some of those companies, like Match.com or eHarmony, to advertise on our podcast? I think we should try.


[Regina] (34:41 - 34:47)
Technically, this is Paul and Eli's paper, not ours, so maybe their podcast is going to get the advertisers.


[Kristin] (34:47 - 34:52)
But we have the cooler title for the paper, so I think the companies should advertise with us, too.


[Regina] (34:52 - 34:58)
But, Kristen, I must confess there is a problem with this analysis, which I kind of buried.


[Kristin] (34:58 - 34:58)
Uh-oh.


[Regina] (34:58 - 35:18)
We need to talk about it. The thing is, we all generally want the same kind of thing in our partner. Researchers have looked at this.


We all want someone who is kind and honest and attractive and emotionally stable. I came up with a weird analogy that might help illustrate the problem.


[Kristin] (35:18 - 35:19)
Is it sex-related, Regina?


[Regina] (35:20 - 35:24)
Surprisingly, it is not. It is dessert-related.


[Kristin] (35:24 - 35:26)
Well, you know, those are not mutually exclusive.


[Regina] (35:27 - 35:30)
Good point. This is just about ice cream. No sex.


[Kristin] (35:30 - 35:31)
Oh, well, I love ice cream.


[Regina] (35:31 - 35:37)
Okay, we're going to stick with vanilla ice cream, which I know is boring gastronomically and sexually.


[Kristin] (35:37 - 35:39)
Okay, but vanilla ice cream, let's go with it.


[Regina] (35:39 - 35:48)
So, Kristen, what is your wish list for vanilla ice cream? If you could have the perfect scoop of vanilla ice cream, what are its characteristics?


[Kristin] (35:49 - 36:01)
All right, dense, creamy, not crystallized, sweet, full fat, real vanilla, smooth texture, no weird aftertaste from artificial ingredients.


[Regina] (36:02 - 36:26)
I feel like that's my wish list for a man, too. Okay, Kristen, that is your personal wish list, but I bet most people are similar, right? No one wants the icy, artificial, weird textured stuff, right?


There are small differences, probably. Some people want those little vanilla bean specks. Some don't.


[Kristin] (36:26 - 36:27)
I do not.


[Regina] (36:27 - 36:59)
Okay, but overall, I'm betting the wish lists are pretty similar. I would guess, yes. Okay, here is the issue.


Let's say we interview 100 people to get their ice cream wish list, just like I did with you, and then I partner each person with a random brand of ice cream. They have to rate its traits, like smooth, creamy, dense, whatever, and then we can calculate a match score between their wish list and the ice cream resume. Can you see where I'm going with this?


[Kristin] (36:59 - 37:23)
I think I see where you're going with this. Since most people have similar wish lists, creamy, dense, etc., the match score will just end up being high whenever someone gets objectively good ice cream, like Ben & Jerry's or Haagen-Dazs, and the match score will be low whenever they get the crappy stuff like Kroger's brand. So the match score isn't really capturing personal matching.


It's just about the overall quality of the ice cream.


[Regina] (37:24 - 38:00)
Exactly. Perfect. So when we look at how happy people are with their ice cream, it looks like that match score predicts happiness, but really, people are just happier when they get the expensive good stuff, and Kristen, the same goes for dating.


Some partners might just be objectively better and higher quality, just like ice cream, and the lucky people who win the partner lottery and end up with these objectively better partners will have both high match scores and be happier, but it's not the match that causes the happiness. It's just the quality of the partner.


[Kristin] (38:01 - 38:19)
Right. We have a confounder here, and I think, Regina, we should call it the good catch confounder. Oh, I love that name.


Yes. And you're saying that 0.37, it looks big, but in fact, that is just potentially due to this good catch confounding, and therefore, we haven't really shown the wish list hypothesis to be true. It's too crude of an analysis.


[Regina] (38:19 - 38:37)
Yep, that is where we come to analysis number two, and this is where they tried to correct for that good catch confounder. They came up with a personalized match score, and it controls for the fact that everyone wants attractiveness and humor and creaminess in their partner.


[Kristin] (38:38 - 38:51)
Kristen, they called it the corrected pattern metric. Uh, I think we should stick with the personalized match score. That's a better name.


So it's trying to get at whether your partner matches your unique preferences, not just the things that everybody wants.


[Regina] (38:52 - 38:58)
Perfect. And how did they do it? They used what's called mean centering over the entire sample.


[Kristin] (38:58 - 39:08)
Oh, mean centering. All right. That means they took the average or mean for a trait, say, funny, humorous.


What was the average for funny in the sample, Regina? Nine.


[Regina] (39:08 - 39:09)
Nine out of 11.


[Kristin] (39:09 - 39:26)
Okay. And they subtracted that mean from everybody's individual pick for funny. If you said funny was an 11 on your wishlist, Regina, then they're going to subtract 11 minus nine, and you would get a two for funny, meaning that you like funny more than the average person in the sample.


[Regina] (39:27 - 39:45)
Right. So now it's not just about having a funny partner. It's about whether you want funny more than most people do and whether your partner delivers on that.


Right. So they did that for each of the 35 traits, got that personalized match score overall, and then re-ran the same analysis.
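
Here is a minimal sketch of the mean-centering idea with invented data, plus one plausible way to turn it into a personalized match score (the paper's corrected pattern metric may be computed somewhat differently): subtract each trait's sample-wide average importance from every individual's rating, so a score reflects caring about a trait more or less than the average person does.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented ratings from 1,000 participants on 35 traits (1-to-11 scale);
# rows are people, columns are traits.
importance = rng.integers(1, 12, size=(1000, 35)).astype(float)  # wishlists
partner    = rng.integers(1, 12, size=(1000, 35)).astype(float)  # partner resumes

# Mean-center each trait column. Example: if "funny" averages 9 and you
# rated it 11, your centered value is +2 (you want funny more than most).
importance_centered = importance - importance.mean(axis=0)
partner_centered    = partner - partner.mean(axis=0)

# One person's personalized match score: the correlation between their
# centered wishlist and their centered partner resume.
def profile_corr(a, b):
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

personalized = [profile_corr(importance_centered[i], partner_centered[i])
                for i in range(len(importance))]
print(f"mean personalized match score: {np.mean(personalized):.3f}")
```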


[Kristin] (39:46 - 39:53)
Right. So they ran a regression. Relationship satisfaction is still the outcome variable, but now our predictor is this personalized match score.


And what did they find?


[Regina] (39:53 - 40:18)
The standardized beta dropped dramatically. It went down from 0.37 before now to 0.19. So about half the effect. And Kristen, if we bring back those height analogies from before, now we're talking about concordance of height between spouses.


I mean, it's not nothing, but it is definitely not a smack you in the face noticeable kind of thing.


[Kristin] (40:19 - 40:21)
Right. It's closer to a small effect rather than a medium effect.


[Regina] (40:21 - 40:22)
Yep.


[Kristin] (40:22 - 40:24)
So this is the real test of the wishlist hypothesis.


[Regina] (40:25 - 40:35)
Yeah. Now we have isolated whether your unique preferences matter for how happy you are. And apparently the answer is a little, but not much.


[Kristin] (40:36 - 40:43)
Interesting. Regina, you did mention that they had single people and partnered people. Did they look at those separately? Was it different for those two groups?


[Regina] (40:44 - 40:45)
Pretty much no difference. Oh, really?


[Kristin] (40:46 - 40:55)
Interesting. So for both groups, the effect size is not overwhelming. I guess that's good news because it's saying that even if you didn't get matched through some online dating wizardry, you could still be happy.


[Regina] (40:56 - 41:14)
From a self-knowledge point of view, I think it's also interesting because maybe this wishlist I have on my profile doesn't actually matter as much as I hoped. I took all this time to figure out the different characteristics I wanted, but maybe I just need to be open to happiness coming from unexpected places.


[Kristin] (41:15 - 41:15)
I like that, Regina.


[Regina] (41:16 - 41:28)
Maybe I need to erase everything and change it to surprise me. Some friends pointed out if I do that, though, I might get some unsolicited photographs. Yeah.


[Kristin] (41:28 - 41:37)
I wouldn't toss out your wishlist completely. I mean, they did find some evidence for the wishlist hypothesis. It's just that it was a smaller, more subtle effect than we might have expected.


Good point.


[Regina] (41:37 - 42:00)
Now, that was looking at overall wishlist matching across all 35 traits. But what's really interesting is looking at the individual traits. Ambition, good listener, nice body.


Now I want to dig into this third analysis, the juicy stuff, and get down to that specific trait level and isolate them. Kristen, they gave this one a jargony name.


[Kristin] (42:00 - 42:01)
Oh, surprise, surprise.


[Regina] (42:01 - 42:46)
But I propose we call it bonus points analysis. I'll explain why in a moment. Okay, good.


Sounds interesting. They used regression models again. Now, though, we're going to have a separate regression model for each of the 35 traits.


Let's just say we're talking about funny in this one. The outcome Y is still relationship satisfaction, but now what they did is change the predictors, the X variables. They included three things as their predictors.


The first is how highly I rated funny as being important in my wishlist for my ideal partner. The second is the partner resume, how highly I rated my partner or target crush on actually being funny. The third is the cool thing: they included an interaction term between the two.


[Kristin] (42:47 - 42:53)
Oh, interaction terms. I teach a lot about interactions in my classes. And Regina, these are actually quite tricky.


How do you teach interaction?


[Regina] (42:54 - 43:18)
Yeah, super difficult. These are. I always think of interaction effects as the magic of synergy between two things.


And my go-to example is chocolate and peanut butter. Chocolate and peanut butter, they are great separately, but you put them together. I cannot stop eating them.


It's addictive as crack. And that is the synergy between chocolate and peanut butter. That is interaction effect.


[Kristin] (43:19 - 43:35)
Oh, Regina, you're making me hungry with the ice cream and the chocolate peanut butter cups. But remember, as we talked about in the sugar sag episode, chocolate peanut butter cups have a lot of AGEs, advanced glycation end products, because the peanuts are roasted and then plus you add the sugar. I don't care.


[Regina] (43:35 - 43:43)
I think that it is absolutely worth it to have my face fall down to my knees as long as I get some peanut butter cups and French fries.


[Kristin] (43:44 - 43:45)
Good analogy, though, for interactions.


[Regina] (43:45 - 44:15)
Thank you. Thank you. So this analysis, again, is checking, for each trait, whether the people who care about it are essentially weighting it more heavily.


It's like for humor, you've got the chocolate, and that is me caring about humor. I put it down as an 11. And then you got the peanut butter, the partner who makes me laugh all the time with his bad jokes.


I rate him as a 10. And, mmm, is that yummy. Like, great.


It's synergism there. And I am extra happy because he knew me.


[Kristin] (44:15 - 44:16)
That combination important.


[Regina] (44:16 - 44:35)
Yes, the combination. Exactly. Of course, we're looking at the entire sample, right?


What people do on average, not just me. But if the answer is yes, that in general, we are giving bonus points for partners who match the traits we care about most, then we will see in the regression model a strong interaction effect pop out.


[Kristin] (44:35 - 45:50)
If there is an interaction, that tells us that the wish list hypothesis is true for that specific trait. Take funny, for example. If we do find an interaction that supports the wish list hypothesis for funny, it means people who care more about humor get a bigger happiness boost from having a funny partner.


And when I teach interactions, I like to simplify the problem by splitting one of the variables into two groups. It just makes it easier to picture. And let's use religion because that breaks down into two groups pretty nicely.


We can divide people into those who really care about religion and those who don't care much about religion. If we fit a line between the religiousness of the partner and relationship happiness among people who really care about religion, that line may have a very steep slope. These people get much happier when their partner shares their religious views.


But for people who don't care much about religion, if we fit a line for them separately, that line might be flat or even have a negative slope. A more religious partner doesn't make them happier and it might even make them less happy. The key is that the slopes of those two lines are different.


That's what an interaction is. It tells us that the slopes of the lines are different in different groups. In this case, it tells us that the effect of partner religiousness on happiness depends on how much you personally care about religion.
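
For readers who want to see what such a model could look like in code, here is a hedged sketch of a per-trait regression with an interaction term, using the statsmodels formula interface on fake data. The variable names are ours, the data are simulated with a built-in synergy, and none of the coefficients mean anything.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1_000

# Fake data for one trait, say "funny":
#   importance     = how much I said I want a funny partner (wishlist, 1-11)
#   partner_rating = how funny my partner actually is (partner resume, 1-11)
#   satisfaction   = relationship satisfaction
importance = rng.integers(1, 12, size=n)
partner_rating = rng.integers(1, 12, size=n)
satisfaction = (3
                + 0.2 * partner_rating
                + 0.05 * importance * partner_rating   # built-in synergy
                + rng.normal(scale=2, size=n))

df = pd.DataFrame({"importance": importance,
                   "partner_rating": partner_rating,
                   "satisfaction": satisfaction})

# The '*' in the formula expands to both main effects plus their interaction;
# the interaction term is the "bonus points" effect.
model = smf.ols("satisfaction ~ importance * partner_rating", data=df).fit()
print(model.params)  # look at the importance:partner_rating coefficient
```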


[Regina] (45:51 - 46:32)
That, Kristen, was an excellent way of describing it. Thank you. Let's talk about the results now.


In general, the effect sizes for these individual traits were pretty small, which makes sense because we're looking at each trait in isolation, so no single trait is likely to have a big impact. And since the overall effect was small, as we just talked about, we would expect that the effects for individual traits would be even smaller. But they did find some interesting traits where that interaction effect, that peanut butter cup effect, really popped out more than others.


And the one that had the biggest interaction, standardized beta of 0.13, that was religion.


[Kristin] (46:33 - 46:54)
That actually makes sense to me, Regina, because you can imagine religion being something where some people really care and others don't. And so matching on this trait might be more important than matching on some of the more generic traits. Yep.


Second strongest, extroversion. Oh, that's interesting. Maybe if you're an extroverted person and you really want somebody else who's extroverted to party with, then you're happier if you get a partier.


[Regina] (46:55 - 47:40)
Yeah, you know, researchers talk about these as vertical and horizontal traits, and it's an interesting way of looking at it. So vertical traits are those that are generally socially desirable, like I want someone financially secure and attractive. We tend to want to climb that ladder to maximize these, right?


That's why they're vertical. Everyone wants them. But in contrast, not everyone is going to want to maximize religion or partying or ambition.


These are the horizontal traits. And I think it's a helpful way to think about dating. When I'm making my own wish list, what are the things that everyone wants versus the things that I personally want?


[Kristin] (47:41 - 47:52)
That is a good way of thinking about it, Regina. And it really matches what we're seeing in the data that the traits that are more horizontal, like religion, those are the ones where the wish list hypothesis is really borne out. Right.


[Regina] (47:52 - 47:59)
So other ones that came up with strong effects: ambition, sporty, and creative and unconventional.


[Kristin] (48:00 - 48:05)
Oh, which again, makes sense, because some people may value unconventional more than others. Yep.


[Regina] (48:06 - 48:15)
Kristen, I think we are now ready to talk about the exploratory data analysis, realizing, of course, that it's exploratory, but super cool and fun.


[Kristin] (48:15 - 48:36)
Oh, I can't wait to hear about this. But let's take a short break first. Regina, I've mentioned before on this podcast, our clinical trials course on Stanford Online.


It's called Clinical Trials, Design, Strategy, and Analysis. I want to give our listeners a little bit more information about that course. It's a self-paced course.


[Regina] (48:36 - 48:44)
We cover some really fun case studies designed for people who need to work with clinical trials, including interpreting, running, and understanding them.


[Kristin] (48:45 - 49:16)
You can get a Stanford professional certificate, as well as CME credit. You can find a link to that course on our website, normalcurves.com. And our listeners get a discount.


The discount code is normalcurves10. That's all lowercase. Welcome back to Normal Curves.


Today, we're looking at the claim that we know what we want in a romantic partner and that getting it makes us happier. We were about to look at the results from some bonus exploratory analyses.


[Regina] (49:16 - 49:31)
Yes, Kristin, as we have talked about before, it's important when you see exploratory results to treat them with just a bit more skepticism, because if you're just poking around the data, it's easy to find interesting things just by random chance. Right.


[Kristin] (49:31 - 49:39)
There's nothing wrong with exploring your data, but you need to be transparent and open and tell your reader, hey, this is exploratory, so they can take it with a grain of salt. Yep.


[Regina] (49:40 - 49:45)
These results were also what we call descriptive, meaning no p-values.


[Kristin] (49:45 - 49:50)
We talked about descriptive analyses in the male equipment episode. We don't always need p-values.


[Regina] (49:51 - 50:09)
So, what they did here was pretty clever, because they are getting at this same question, do we really know what makes us happy, but in a different way. And first, they wanted to see what traits people said were most important as a group. You know, what traits made the top five list?


[Kristin] (50:10 - 50:17)
Okay, so they looked at that 1 to 11 rating on people's wish list, and they looked at which traits had the highest average.


[Regina] (50:17 - 50:44)
Yep, exactly. And for this, they just wanted to see the relative ranking. So, what was the number one most desired trait on average, the second, etc.


So, drum roll. The top five most wish-listed traits: we say we want someone who is loyal, honest, supportive, understanding, and a good listener.


Aw, that's so sweet. It feels like a dog. Dogs are loyal and supportive.


[Kristin] (50:44 - 50:47)
My dog is a very good listener and very supportive, yes.


[Regina] (50:47 - 50:49)
He is. He's good with a ball, too.


[Kristin] (50:49 - 50:49)
Yes, he is.


[Regina] (50:51 - 51:07)
Okay, then the researchers tried to figure out, on average, what really secretly makes us happy. Not the things we say make us happy, but the partner traits that actually most strongly correlate with relationship satisfaction.


[Kristin] (51:07 - 51:18)
This is clever. So, they looked at the beta coefficients between the partner resume, meaning the partner's actual traits, and relationship happiness. And these beta coefficients tell us what actually makes us happy.


Ooh, I like it.
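
To make that concrete, here is a rough sketch of the descriptive comparison, with simulated data, a handful of made-up trait names, and simple per-trait regressions; the real analysis covers 35 traits, more than 10,000 people, and a more careful model, so treat this only as the mechanics.

```python
# Sketch: compare "what people SAY they want" (wish-list means) with
# "what tracks satisfaction" (standardized betas for actual partner traits).
# Data and trait names are simulated; the study's analysis is more elaborate.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 500
traits = ["loyal", "honest", "good_lover", "smells_good", "ambitious"]

# Wish-list importance ratings (1-11) and partner "resume" ratings (1-11)
wishlist = pd.DataFrame({t: rng.integers(1, 12, n) for t in traits})
resume = pd.DataFrame({t: rng.integers(1, 12, n) for t in traits})

# Simulated satisfaction that secretly loads most on good_lover and smells_good
satisfaction = (
    0.5 * resume["good_lover"] + 0.3 * resume["smells_good"]
    + 0.2 * resume["loyal"] + rng.normal(0, 2, n)
)

# Ranking 1: stated importance (highest average wish-list rating first)
stated_rank = wishlist.mean().sort_values(ascending=False)

# Ranking 2: standardized beta from a simple regression of satisfaction on each
# partner trait (both z-scored, so the betas are comparable across traits)
z = lambda s: (s - s.mean()) / s.std()
betas = {t: sm.OLS(z(satisfaction), sm.add_constant(z(resume[t]))).fit().params[t]
         for t in traits}
revealed_rank = pd.Series(betas).sort_values(ascending=False)

print("Stated importance:\n", stated_rank)
print("\nStandardized betas:\n", revealed_rank)
```

In this toy version, good_lover tops the revealed ranking even though nothing forces it to top the stated one, which is exactly the kind of gap the results below are about.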


[Regina] (51:18 - 51:30)
Exactly. What did they find? This part was fascinating.


The top five traits that secretly make us most happy are, number one, a good lover. No way.


[Kristin] (51:30 - 51:35)
Really? Good lover? That wasn't even in the top five of what people said they wanted.


[Regina] (51:35 - 51:41)
Right? A good lover was actually ranked 12th out of 35 when people said what they wanted. Wow.


[Kristin] (51:41 - 51:54)
So, this implies that we really want a good lover, but either we don't know that or we just can't admit it. And maybe there's some social desirability bias going on here. What were the other top five traits that actually make us happy, Regina?


[Regina] (51:55 - 52:12)
Number two and three were loyal and supportive. So, we still do want them to be nice and dog-like, right? I'm thinking maybe if they're a very good lover, you really want them loyal.


Even more. Guess what number four was? Smells good.


[Kristin] (52:12 - 52:22)
No way. Pheromones. We talked about pheromones in our first full episode.


So, maybe there's something to smell, even if it isn't specifically about those MHC genetics.


[Regina] (52:22 - 52:30)
Right. When people said what they wanted, smells good was ranked 15th out of 35. So, when it comes to body odor, we really don't know.


[Kristin] (52:30 - 52:33)
Might be some social desirability bias for that one, too. I'm thinking.


[Regina] (52:33 - 52:37)
I feel justified, though, in including smells great when he's sweaty on my dating profile.


[Kristin] (52:37 - 52:38)
Oh, that's right. You have that there.


[Regina] (52:39 - 52:41)
Yeah. Okay. Number five was honest.


[Kristin] (52:41 - 52:41)
Okay.


[Regina] (52:41 - 52:57)
So, let me put all of this together. It looks like the person who actually makes us most happy is great in bed with great body odor, and they are not lying or having sex with anyone else, and they're cleaning the kitchen and doing all the cooking while you're working.


[Kristin] (52:57 - 53:01)
And where do I find this man? I'm not sure such a man exists, Regina.


[Regina] (53:02 - 53:17)
I'm thinking maybe you could get three or four out of five of those, but not all of them. The researchers separated out men and women, and for women, you know what made top five? Sexy.


Really? Not for men, just for women. Wow.


[Kristin] (53:17 - 53:32)
Well, but sexy had to knock something else out of the top five. So, what did it knock out? Honesty.


Oh, well, you know, I'm going to dispute that one. I think that honesty is important, too, and you really need both. You definitely don't want a dishonest partner.


You do not.


[Regina] (53:32 - 53:34)
I'm going to give you six traits, then.


[Kristin] (53:34 - 53:36)
Yeah, I'm going to need honesty in that top.


[Regina] (53:36 - 53:55)
Yeah, it was fascinating because women ranked sexy 23rd out of 35 when they said what they wanted. So, it was really down at the bottom, but then it shot up to number five when you look at what really matters. So, hey, guys, being a sexy good lover matters.


Don't forget to be good in bed. Take care of her.


[Kristin] (53:55 - 54:00)
You know, Regina, maybe you should add this to your dating profile, looking for a good lover.


[Regina] (54:02 - 54:03)
I guess I would get a lot of dates.


[Kristin] (54:03 - 54:09)
I think you would. Probably, though, a bunch of overconfident men that maybe aren't as good as they think they are in bed.


[Regina] (54:10 - 54:23)
The researchers dug a little deeper in here to check out gender stereotypes, too. Yep. They looked at overall animal attraction.


Okay. Yeah, they made it a combination of attractive, nice body, and sexy.


[Kristin] (54:23 - 54:23)
Okay.


[Regina] (54:24 - 54:43)
And they also looked at overall earning potential, which was a combination of ambitious, financially secure, and a good job. Now, for animal attraction, they found that, true to stereotypes, men said they wanted that more than women did. But in reality, animal attraction was just as important for women's happiness as for men's.
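
As a tiny illustration of those combined scores, here is a sketch of building an equal-weight "animal attraction" composite and checking the stereotype on the stated side; the column names and data are hypothetical, and the paper's exact scoring may differ.

```python
# Sketch: build a composite "animal attraction" score from three component
# ratings and compare how much men vs. women SAY they want it.
# Column names and data are hypothetical; the paper's scoring may differ.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "gender": rng.choice(["man", "woman"], n),
    "attractive": rng.integers(1, 12, n),   # wish-list ratings, 1-11
    "nice_body": rng.integers(1, 12, n),
    "sexy": rng.integers(1, 12, n),
})

# Standardize each component, then average: a simple equal-weight composite
zscore = lambda s: (s - s.mean()) / s.std()
df["animal_attraction_stated"] = (
    df[["attractive", "nice_body", "sexy"]].apply(zscore).mean(axis=1)
)

# Stereotype check on the stated side: do men report wanting this more?
print(df.groupby("gender")["animal_attraction_stated"].mean())
```

The revealed side works the same way as in the earlier sketch: build the composite from the partner-resume ratings and look at its standardized beta for satisfaction, separately by gender.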


[Kristin] (54:43 - 54:50)
Oh, wow. That's interesting. Well, you know, maybe women don't want to admit this because, again, social desirability bias.


Yep.


[Regina] (54:50 - 55:08)
The other combined score, earning potential, again, true to stereotypes, women said they wanted this more than men did. But in reality, earning potential was just as important to men's happiness as women's. Wow.


That's surprising. So, Kristin, I think this means men like ambitious, successful women.


[Kristin] (55:08 - 55:09)
Really?


[Regina] (55:09 - 55:15)
They just cannot admit it. I feel like this is good news for a lot of women like us on the dating market.


[Kristin] (55:15 - 55:29)
Well, except that they don't know that they want it. So that might be the problem. The other thing, Regina, is if you're a strong, successful woman, you've got to be careful that you don't end up with a man who just lets you do all the work while he sits around and does nothing because that is a risk.


[Regina] (55:30 - 55:34)
In that case, I guess he really does need to be a good lover.


[Kristin] (55:34 - 55:35)
Yes.


[Regina] (55:35 - 56:36)
And honest and loyal and supportive and cooking dinner and cleaning house. Well, that would work. Okay, Kristin, now I want to take just a moment to do some fangirling over Paul and Eli and talk about some cool things that they did with their data.


Because I actually knew Paul and Eli back in my science journalism days when I was writing about dating. I interviewed both of them and thought they were very quotable. But now with my stats hat on, I can see that they're also very good quantitative researchers.


Yeah, there's a lot to like in this paper, Regina. Yep. Okay, first cool thing was how they managed to pull this study off with 60 teams, 100 co-authors, 43 countries.


They did it through an organization called Psychological Science Accelerator, which is pretty cool. It brings together small individual teams of researchers to create these more powerful mega research teams. It kind of looks to me like matchmaking, but for researchers.


[Kristin] (56:37 - 56:41)
Do each of the sites have their wish list for their ideal collaborators?


[Regina] (56:41 - 56:42)
Oh, I am sure they do.


[Kristin] (56:43 - 56:50)
Yeah, this is great because it allows researchers to get to huge sample sizes, which is something we love. Like this study had 10,000 participants.


[Regina] (56:51 - 57:04)
Yep. Okay, another cool thing they did, Kristin, that I am sure you liked, knowing you. They included a special thing in their regression models called random effects.


So how about you talk a little bit about why they are so important?


[Kristin] (57:05 - 59:22)
Yes, I did notice that in the paper, and I was happy to see this because this is something I teach about. So let's actually do a quick statistical detour on this, Regina. Remember when we talked about fitting a line, y equals mx plus b?


The goal is to find the slope and the intercept. But here's the thing. This study had data from 43 countries and 60 different research sites.


So the big question is, should we really be fitting just one line for everyone? And I want to go back to the boat example, your boat guy date, Regina. So imagine we're trying to predict relationship happiness based on number of boats, and we are using data from 60 different research sites across the world.


In some places, maybe people are really happy, even if their partner has zero boats. But in other places, people might be sad if their partner has no boats. And that's your intercept.


That's the starting level of happiness when boat count is zero, but it might differ from site to site. And maybe in some places, owning more boats makes people a lot happier, whereas in other places, accumulating boats matters less. And that's your slope, how much happiness goes up with each additional boat.


And again, that may vary from site to site. Recognizing this variation, we could try to fit 60 different lines for the 60 different research sites, but that's messy and inefficient, and then we're back to small sample sizes for each line.


We could also just pretend that everyone's the same, and we could fit one global line for everyone, but that ignores potential real differences between the different sites. So random effects are this compromise between those two extreme options. They let each research site have its own line, its own intercept and slope, but they assume that those lines are related, that they come from a family or distribution of lines.


This means that some lines start higher and some rise faster, but they all come from the same family, and that family is defined by two normal curves. Our namesake, Regina. There's a normal curve for intercepts and a normal curve for slopes.


And the power of random effects is that to estimate a normal curve, you only need to estimate a few additional parameters. So with just a few extra estimates, your model can capture both the overall trend and the site-to-site quirks. We're not overcomplicating things, but we're also not oversimplifying things.
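
Here is a minimal sketch of that compromise in code, in the spirit of the boat example: each site gets its own intercept and slope, but both are drawn from shared normal curves. The data are simulated and the variable names (happiness, boats, site) are ours, not the study's.

```python
# Sketch: random intercepts and slopes by research site (the boat example).
# Simulated data; this is not the study's model or its variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_sites, n_per_site = 60, 150

rows = []
for site in range(n_sites):
    # Each site's own line, drawn from two normal curves (the "family" of lines)
    site_intercept = rng.normal(5.0, 1.0)   # happiness at zero boats
    site_slope = rng.normal(0.3, 0.2)       # extra happiness per boat
    boats = rng.integers(0, 5, n_per_site)
    happiness = site_intercept + site_slope * boats + rng.normal(0, 1, n_per_site)
    rows.append(pd.DataFrame({"site": site, "boats": boats, "happiness": happiness}))
df = pd.concat(rows, ignore_index=True)

# Mixed model: one overall (fixed) line, plus a random intercept and a random
# slope on boats for each site. re_formula="~boats" adds the random slope.
model = smf.mixedlm("happiness ~ boats", data=df, groups=df["site"], re_formula="~boats")
result = model.fit()
print(result.summary())
```

The fixed effects are the family-average line; the variance terms estimate the spread of the two normal curves, which is the handful of extra parameters Kristin mentions.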


[Regina] (59:23 - 59:30)
Exactly. I think of it as allowing each site their own personality, mathematically speaking.


[Kristin] (59:30 - 59:57)
I like that, Regina. Another thing I really like about this paper is that they include the regression equations written out in equation form in the footnotes of their tables. I love this because then I don't have to go hunting around in the rest of the paper to try to figure out what they put in their model.


And, you know, sometimes you can't even find the models written out anywhere in the paper, and that's really annoying because I'm just guessing what's in the model. So it's super helpful to readers if you put the equations right there as part of the table.


[Regina] (59:57 - 1:00:40)
Yep. What I also like about Paul and Eli is that they do not overcomplicate life. I hate it when researchers jump automatically straight to the fanciest models because they sometimes think, you know, the more complicated the analysis, the more accurate and impressive your results must be.


But that's not true. The fancier the model, the more assumptions are hidden under the covers, and that means more uncertainty in the results. What Paul and Eli did was make it just as complicated as it needed to be and no more.


So I love it. When researchers present something like this, a clear, solid, simple, well-thought-out analysis, I am more impressed by how smart they are, not less.


[Kristin] (1:00:41 - 1:02:15)
Oh, absolutely. When you are clear and straightforward, you appear smarter. This is true not only in data analysis, but in writing.


And there's actually a paper that we're going to talk about in a future episode which shows that if you write more simply, you actually sound smarter to people. Regina, I think we're ready to wrap this up. But first, as promised, I want to try my hand at the under-one-minute movie summary of this paper.


I can't wait. So I am going to summarize this paper as if it were a movie in under one minute in honor of the Love Factually podcast. I've got my stopwatch ready.


Okay, here we go. So 10,000 people across 43 countries make a wish list for their perfect partner. Funny, good in bed, smart, good listener, whatever.


Then they rate a real person they know, current partner, crush, friend with mixed signals, or maybe that guy from accounting, and say how happy that real or imagined relationship would be. Enter the scientists, 100 of them, nerdy, noble, slightly over-caffeinated. Think Avengers, but with clipboards and regression models.


Their mission? Figure out if getting what you say you want actually makes you feel more sparkly about someone. And yes, kinda, but the effect is not huge and only applies to some traits.


And then there's a major plot twist. People overestimated the importance of patience and listening skills and underestimated how much they care about attraction and sexual chemistry. The heart wants what it wants, and apparently it's hot and good in bed.


So no, they didn't end up with the perfect partner on paper. They ended up with someone who smelled amazing, kissed like they meant it, and made them forget all about their little spreadsheet of hopes and dreams. And somehow, despite the random slopes and interaction terms, they lived happily ever after.


[Regina] (1:02:18 - 1:02:22)
Bang on. I got 58 seconds. Nice job.


[Kristin] (1:02:23 - 1:02:29)
Thank you, Regina. All right. Now we're ready to rate the strength of the evidence for the claim.


And what was our claim today?


[Regina] (1:02:29 - 1:02:49)
The claim is that this wish list hypothesis is correct. And technically, it's that the more our real-life partner matches the ideal partner that we have in our head, the more satisfied we are with the relationship. But I like to think of it as, are we actually happier if we get what we say we want in a partner?


[Kristin] (1:02:50 - 1:03:00)
And how do we evaluate the strength of the evidence? It's with our very scientific smooch rating scale, where one smooch means little to no evidence, and five means very strong evidence. So, Regina, kiss it or diss it.


[Regina] (1:03:00 - 1:03:40)
I am going to give this one several small friendly kisses. This was a very well-done study, very rigorous. So, we have more trust in the results, and they did find effects that support this wish list hypothesis, but they were pretty small overall.


So, maybe getting a perfect match can make us a little bit happier. I'm thinking now we're just all out there fighting each other for these high-quality, good catch mates, and it has nothing to do with us personally. I am not even convinced that we know what makes us happy, and maybe it's all just a big crapshoot.


So, four friendly pecks on the cheek for this one.


[Kristin] (1:03:41 - 1:04:12)
Okay, you're going with four. I'm going to nudge it up slightly, 4.5, because if the question is simply, are people at least a little happier when they get what they say they want, then, yeah, I think there's solid evidence for that. Again, the effect isn't huge, it's stronger for some traits versus others, but the study makes a pretty convincing case that the effect is at least there to some extent.


And honestly, I'd be surprised if this weren't at least a little true. Actually, I'm pretty surprised that the effect isn't bigger here. All right, what about methodologic morals?


[Regina] (1:04:12 - 1:04:27)
I loved their transparency, the pre-registration, the fact that they put everything online and put all of their results in the paper. So, here's mine. Good science bares it all.


Ooh, I love it, and that goes along nicely with the theme of our podcast.


[Kristin] (1:04:27 - 1:05:01)
I couldn't decide between interaction terms and random effects for my methodologic moral, but I went with random effects. So, here it is.


When the world isn't one size fits all, don't fit just one line; use random slopes and intercepts. Kind of nerdy. Ooh, I love it, though.


It works. It's very true. Regina, thank you so much.


This has been fascinating, and I think we have done a big favor to a lot of people because, as we said, this is a very thick and dense paper, and you should still read the whole thing, but now we have summarized it, so maybe you don't have to read every word.


[Regina] (1:05:02 - 1:05:05)
We did the reading, so you don't have to.


[Kristin] (1:05:05 - 1:05:07)
Or you could just do the one-minute summary.


[Regina] (1:05:09 - 1:05:14)
Okay, I'm going off to erase everything on my dating profile. Uh-oh. Yep, surprise me.


[Kristin] (1:05:15 - 1:05:21)
And we will hear the outcome of that in a future episode, I hope. All right, thanks, Regina.


[Regina] (1:05:21 - 1:05:21)
Thanks, Kristin.


[Kristin] (1:05:25 - 1:05:26)
Bye-bye.