Nature, in Code

# Social Conformity Vs. Distinctiveness

Yesterday, I came across an intriguing paper titled "Social conformity despite individual preferences for distinctiveness" by Paul E. Smaldino and Joshua M. Epstein (OA paper here). The gist of the paper is that individuals in a population may converge to a situation where they all have the same trait, even if each of them individually prefers to be distinct.

I decided to recreate the basic model in JavaScript to see if I could replicate the findings.

#### The model

The paper looks at a few versions of the same basic model. In a nutshell, individuals in a population have some trait $$x_i$$, and a distinctiveness preference $$d_i$$. It is assumed that everyone knows the mean $$\bar{x}$$ of the trait in the population, and its standard deviation $$\sigma$$. Then, at each timestep t, individuals update their trait in the following way:

$$x_i(t+1)=x_i(t)+k[x^*_i(t)-x_i(t)]$$
where k is an adjustment parameter, and $$x^*_i$$ is the ideal position of individual i, which itself is given by

$$x^*_i(t)=\bar{x}(t)+d_i\sigma(t)$$
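To make the update rule concrete, here is one worked update step in JavaScript. All the numbers below are made up for illustration (in particular, k = 0.5 is not a value used in this post):

```javascript
// One update step of the model, with illustrative numbers.
var k = 0.5;    // adjustment parameter (illustrative value)
var mean = 10;  // current population mean of the trait
var sd = 2;     // current standard deviation of the trait
var trait = 9;  // this individual's trait x_i(t)
var d = 1.5;    // this individual's distinctiveness preference d_i

// Ideal position: x*_i(t) = mean + d_i * sd
var ideal = mean + d * sd;              // 10 + 1.5 * 2 = 13

// Update: x_i(t+1) = x_i(t) + k * (x*_i(t) - x_i(t))
var next = trait + k * (ideal - trait); // 9 + 0.5 * (13 - 9) = 11

console.log(ideal, next); // 13 11
```

So this individual, who wants to sit 1.5 standard deviations above the mean, moves halfway from 9 toward its ideal position of 13.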
They start implementing the model with each individual having the exact same distinctiveness preference, and find that everyone converges to the same trait. Then, they extend the model by assuming individuals have different distinctiveness preferences, which I find more realistic.

It turns out that the population still converges to conformity, even if everyone starts with a random trait:

This confirms the main finding of the paper, which is quite interesting.
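As a sanity check, here is a compact, self-contained version of the simulation (same parameters as the full code below: N = 100, k = 0.01, spread_distinctiveness = 1, 2000 generations) that verifies the collapse of the trait standard deviation:

```javascript
// Compact check: with spread_distinctiveness = 1, the population conforms.
var N = 100, k = 0.01, spread = 1, generations = 2000;
var traits = [], prefs = [];
for (var i = 0; i < N; i++) {
  traits.push(Math.random());                 // random initial trait
  prefs.push(spread * (Math.random() - 0.5)); // distinctiveness preference
}

function sd(xs) {
  var m = xs.reduce(function (a, b) { return a + b; }, 0) / xs.length;
  var v = xs.reduce(function (a, b) { return a + (b - m) * (b - m); }, 0) / xs.length;
  return Math.sqrt(v);
}

for (var t = 0; t < generations; t++) {
  var m = traits.reduce(function (a, b) { return a + b; }, 0) / traits.length;
  var s = sd(traits);
  for (var i = 0; i < N; i++) {
    var ideal = m + prefs[i] * s;
    traits[i] += k * (ideal - traits[i]);
  }
}

console.log(sd(traits) < 0.01); // true: the population has (almost) conformed
```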

It turns out that there are some interesting dynamics in this model. In the above example, the distinctiveness preferences of the individuals are different, but only to a certain extent. Concretely, these values are generated using the following code:

spread_distinctiveness * (Math.random() - 0.5)

In other words, they are random values between spread_distinctiveness * -0.5 and spread_distinctiveness * 0.5. Thus, by increasing spread_distinctiveness, which was set to 1 in the example above, we can increase the variance in individual distinctiveness preferences.
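As a quick sketch, we can sample many such values and check their spread empirically. For a uniform value on [-s/2, s/2], the standard deviation is s/√12:

```javascript
// Sample distinctiveness preferences and measure their standard deviation.
var spread_distinctiveness = 3;
var samples = [];
for (var i = 0; i < 100000; i++) {
  samples.push(spread_distinctiveness * (Math.random() - 0.5));
}

var mean = samples.reduce(function (a, b) { return a + b; }, 0) / samples.length;
var variance = samples.reduce(function (a, b) {
  return a + (b - mean) * (b - mean);
}, 0) / samples.length;

console.log(Math.sqrt(variance));          // close to...
console.log(3 / Math.sqrt(12));            // ...0.8660...
```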

It turns out that the model starts behaving very differently when the variance is large.

For example, at spread_distinctiveness = 2, we still observe the conformity phenomenon:

At spread_distinctiveness = 3, things start to get interesting:

And at spread_distinctiveness = 3.5, there is no more conformity, and sometimes the traits go off in all directions (reload this simulation to see the various results).

Beyond spread_distinctiveness = 3.5, the population will always diverge.

Presumably, this result is related to the logistic map, but I haven't put too much thought into it yet. Feedback welcome.
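One back-of-the-envelope observation, which is my own speculation and not from the paper: if the traits become perfectly aligned with the preferences, each deviation from the mean is proportional to $$d_i - \bar{d}$$, and the standard deviation then evolves roughly as $$\sigma \rightarrow \sigma[(1-k)+k \cdot sd(d)]$$, which shrinks when sd(d) < 1 and grows when sd(d) > 1. For d uniform on [-s/2, s/2], sd(d) = s/√12:

```javascript
// Speculative: sd(d) = s / sqrt(12) crosses 1 at s = sqrt(12),
// which would be the critical spread under the argument above.
var critical_spread = Math.sqrt(12);
console.log(critical_spread); // ≈ 3.464, close to the observed threshold of 3.5
```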

#### Code


var N = 100;
var population = [];
var number_of_generations = 2000;
var k = 0.01; // adjustment parameter
var spread_distinctiveness = 3; // change this to see the effects described above
var data = [];

function Individual(trait, distinctiveness) {
    this.trait = trait;
    this.distinctiveness = distinctiveness;
}

function init_population() {
    for (var i = 0; i < N; i++) {
        // Each individual starts with a random trait, and a distinctiveness
        // preference drawn uniformly from
        // [-spread_distinctiveness / 2, spread_distinctiveness / 2].
        population.push(new Individual(
            Math.random(),
            spread_distinctiveness * (Math.random() - 0.5)
        ));
        data.push([]);
    }
}

function calculate_trait_mean() {
    var sum = 0;
    for (var i = 0; i < N; i++) {
        sum += population[i].trait;
    }
    return sum / N;
}

function calculate_trait_standard_deviation(mean) {
    var sum = 0;
    for (var i = 0; i < N; i++) {
        sum += Math.pow(mean - population[i].trait, 2);
    }
    return Math.sqrt(sum / N);
}

function run_generation() {
    var current_mean = calculate_trait_mean();
    var current_standard_deviation = calculate_trait_standard_deviation(current_mean);
    for (var i = 0; i < N; i++) {
        var current_individual = population[i];
        // Ideal position: x*_i = mean + d_i * standard deviation
        var ideal_position = current_mean +
            current_individual.distinctiveness * current_standard_deviation;
        // Update: x_i += k * (x*_i - x_i)
        current_individual.trait += k * (ideal_position - current_individual.trait);
        data[i].push(current_individual.trait);
    }
}

init_population();
for (var i = 0; i < number_of_generations; i++) {
    run_generation();
}
draw_line_chart(data, "time step", "trait", []);

Note: the draw_line_chart function is built with D3.js and can be found here.