to update-strategy
  set cooperator? [cooperator-last?] of one-of my-neighbors with-max [payoff]
end

to update-strategy
  ifelse random-float 1 < prob-mutation
    [ set cooperator? one-of [true false] ]
    [ set cooperator? [cooperator-last?] of one-of my-neighbors with-max [payoff] ]
end

to go
  ifelse synchronous-updating?
    [
      ask patches [play]
      ask patches [set cooperator-last? cooperator?]
      ask patches [update-strategy]
    ]
    [ ] ;; asynchronous updating, still to be filled in
  tick
  ask patches [update-color]
end

to go
  ifelse synchronous-updating?
    [
      ask patches [play]
      ask patches [set cooperator-last? cooperator?]
      ask patches [update-strategy]
    ]
    [
      ask patches [
        play
        set cooperator-last? cooperator?
        update-strategy
      ]
    ]
  tick
  ask patches [update-color]
end
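For readers who want to check the difference between the two updating schemes outside NetLogo, here is a minimal Python sketch (not part of the model; the copy-your-left-neighbor rule is just a hypothetical example of an update rule):

```python
def step(states, update, synchronous):
    """Apply one round of updates, either all at once from a snapshot
    (as in the synchronous branch) or one agent at a time (asynchronous)."""
    if synchronous:
        snapshot = list(states)  # plays the role of cooperator-last?
        return [update(i, snapshot) for i in range(len(states))]
    for i in range(len(states)):
        states[i] = update(i, states)  # later agents see earlier updates
    return states

# Hypothetical update rule: copy the left neighbor (with wrap-around).
copy_left = lambda i, s: s[i - 1]

print(step([1, 0, 0], copy_left, synchronous=True))   # [0, 1, 0]
print(step([1, 0, 0], copy_left, synchronous=False))  # [0, 0, 0]
```

The same initial state can thus end up in different configurations depending only on the updating scheme, which is why the model exposes it as a switch.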

set my-neighbors (patch-set neighbors self)

set my-neighbors neighbors

set cooperator? [cooperator-last?] of one-of my-neighbors with-max [payoff]

set cooperator? [cooperator-last?] of one-of (patch-set my-neighbors self) with-max [payoff]

- The (square) payoff matrix will be introduced by the user in an "input box".
- The number of strategies (*n-of-strategies*) will be inferred from the number of rows in the payoff matrix.
- The initial number of players (patches) playing each strategy will be introduced by the user in an "input box". The total number of players must equal the number of patches in the grid (number of patches in the NetLogo world).

- the three usual buttons for the *to setup* and *to go* procedures,
- a new multiline input box for a global variable that we can name *payoffs*,
- the slider in the range 0-1 for the *prob-mutation* variable,
- the Switch for the *synchronous-updating?* variable,
- a new input box for the initial number of players (patches) playing each strategy, corresponding to a global variable that we can name *n-of-agents-for-each-strategy*,
- a Monitor that will show the number of patches in the world,
- the NetLogo world, and
- the *Strategy Distribution* graph.

patches-own [
  cooperator?
  cooperator-last?
  payoff
  my-neighbors-and-me
  my-coplayers
  num-my-coplayers
]

to setup
  clear-all
  ask patches [
    set payoff 0
    set cooperator? false
    set cooperator-last? false
    set my-neighbors-and-me (patch-set neighbors self)
    set my-coplayers ifelse-value include-yourself?
      [ my-neighbors-and-me ] [ neighbors ]
    set num-my-coplayers (count my-coplayers)
  ]
  ask n-of (round (initial-%-of-C-players * count patches / 100)) patches [
    set cooperator? true
    set cooperator-last? true
  ]
  ask patches [update-color]
  reset-ticks
end

to go
  ifelse synchronous-updating?
    [
      ask patches [play]
      ask patches [set cooperator-last? cooperator?]
      ask patches [update-strategy]
    ]
    [
      ask patches [
        play
        set cooperator-last? cooperator?
        update-strategy
      ]
    ]
  tick
  ask patches [update-color]
end

to play
  let n-of-cooperators count my-coplayers with [cooperator?]
  set payoff n-of-cooperators * ifelse-value cooperator? [CC-payoff] [DC-payoff]
    + (num-my-coplayers - n-of-cooperators) * ifelse-value cooperator? [CD-payoff] [DD-payoff]
end

to update-strategy
  ifelse random-float 1 < prob-mutation
    [ set cooperator? one-of [true false] ]
    [ set cooperator? [cooperator-last?] of one-of my-neighbors-and-me with-max [payoff] ]
end

to update-color
  set pcolor ifelse-value cooperator?
    [ ifelse-value cooperator-last? [blue] [lime] ]
    [ ifelse-value cooperator-last? [yellow] [red] ]
end

globals [ payoff-matrix n-of-strategies strategy-frequencies num-agents ]

patches-own [ strategy strategy-last payoff my-neighbors-and-me my-coplayers num-my-coplayers ]

to setup
  clear-all
  setup-payoffs
  setup-agents
  setup-graph
  reset-ticks
  update-graph
  ask patches [update-color]
end

to setup-payoffs
  set payoff-matrix read-from-string payoffs
  set n-of-strategies length payoff-matrix
end
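The effect of read-from-string can be mimicked outside NetLogo; the following Python sketch is only illustrative (NetLogo does this parsing natively) and assumes single spaces between the numbers in the string:

```python
import ast

def parse_payoffs(payoffs):
    # "[[1 2][3 4]]" -> [[1, 2], [3, 4]]
    # (insert commas between rows and between numbers, then parse the literal)
    return ast.literal_eval(payoffs.replace('][', '],[').replace(' ', ','))

matrix = parse_payoffs("[[1 2][3 4]]")
n_of_strategies = len(matrix)  # inferred from the number of rows, as in the model
```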

- Create a variable *initial-distribution* to store the list of values that must have been written (as a string) by the user in the input box *n-of-agents-for-each-strategy*.
- Check that the number of values in *initial-distribution* coincides with the number of strategies (rows in the payoff matrix), and show a warning message otherwise.
- Check that the sum of values in *initial-distribution* coincides with the number of patches in the NetLogo world (number of players), and show a warning message otherwise.
- Give initial values for the properties of patches. The *strategy* property in the population of patches will take an integer value between 0 (for the first strategy) and the number of strategies minus 1 (for the last strategy), and it must be distributed according to the values on the *initial-distribution* list: the first value on the list corresponds to the number of patches playing strategy 0; the second number, to the number of patches playing strategy 1; and so on.
- Make the global variable *num-agents* take as value the number of patches in the NetLogo world.

to setup-agents
  let initial-distribution read-from-string n-of-agents-for-each-strategy

  if length payoff-matrix != length initial-distribution [
    user-message (word "The number of items in n-of-agents-for-each-strategy (i.e. "
      length initial-distribution "):\n" n-of-agents-for-each-strategy
      "\nshould be equal to the number of rows of the payoff matrix (i.e. "
      length payoff-matrix "):\n" payoffs)
  ]

  if sum initial-distribution != count patches [
    user-message (word "The total number of agents in n-of-agents-for-each-strategy (i.e. "
      sum initial-distribution "):\n" n-of-agents-for-each-strategy
      "\nshould be equal to the number of patches (i.e. " count patches ")")
  ]

  ask patches [set strategy false]
  let i 0
  foreach initial-distribution [ x ->
    ask n-of x (patches with [strategy = false]) [
      set payoff 0
      set strategy i
      set strategy-last i
      set my-neighbors-and-me (patch-set neighbors self)
      set my-coplayers ifelse-value include-yourself?
        [ my-neighbors-and-me ] [ neighbors ]
      set num-my-coplayers (count my-coplayers)
    ]
    set i (i + 1)
  ]
  set num-agents count patches
end
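Taken together, *to setup-agents* first checks the user's input and then hands out strategy *i* to exactly as many agents as the *i*-th entry of *initial-distribution* says. Here is a Python sketch of that non-spatial bookkeeping (the function names are ours, not NetLogo primitives):

```python
import random

def check_initial_distribution(initial_distribution, payoff_matrix, n_of_patches):
    """Return a list of warning messages, mirroring the two user-message checks."""
    warnings = []
    if len(initial_distribution) != len(payoff_matrix):
        warnings.append("the number of items should equal the number of rows "
                        "of the payoff matrix")
    if sum(initial_distribution) != n_of_patches:
        warnings.append("the total number of agents should equal the number "
                        "of patches")
    return warnings

def assign_strategies(initial_distribution):
    """Give strategy i to initial_distribution[i] agents, in random order,
    mirroring NetLogo's random n-of choice of the patches for each strategy."""
    strategies = [i for i, n in enumerate(initial_distribution) for _ in range(n)]
    random.shuffle(strategies)
    return strategies
```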

to setup-graph
  set-current-plot "Strategy Distribution"
  foreach (n-values n-of-strategies [ i -> i ]) [ i ->
    create-temporary-plot-pen (word i)
    set-plot-pen-mode 1
    set-plot-pen-color 25 + 40 * i
  ]
end

- Substitute the new property names *strategy* and *strategy-last* for the old ones *cooperator?* and *cooperator-last?*.
- Include a call to the *to update-graph* procedure after the *tick* command, since in this case the commands that plot the fraction of agents using each strategy after every tick are not included in the graph itself, but in the *to update-graph* procedure.

to go
  ifelse synchronous-updating?
    [
      ask patches [play]
      ask patches [set strategy-last strategy]
      ask patches [update-strategy]
    ]
    [
      ask patches [
        play
        set strategy-last strategy
        update-strategy
      ]
    ]
  tick
  update-graph
  ask patches [update-color]
end

- Creating a list that contains the number of coplayers playing each strategy (the *count-of-coplayers-with-strategy-?* list).
- Considering the payoff that the patch gets when playing with each of the possible strategies. This information is the corresponding row of the payoff matrix:

item strategy payoff-matrix

- Adding the element-by-element products of both lists.

to play
  let count-of-coplayers-with-strategy-? n-values n-of-strategies [ i ->
    count my-coplayers with [strategy = i] ]
  let my-payoffs (item strategy payoff-matrix)
  set payoff sum (map * my-payoffs count-of-coplayers-with-strategy-?)
end
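The payoff computation is just a dot product between the patch's row of the payoff matrix and the vector of coplayer counts. A Python sketch of the same arithmetic (the 2x2 matrix below is only an example):

```python
def play(strategy, coplayer_strategies, payoff_matrix):
    n_of_strategies = len(payoff_matrix)
    # count-of-coplayers-with-strategy-?
    counts = [coplayer_strategies.count(i) for i in range(n_of_strategies)]
    my_payoffs = payoff_matrix[strategy]  # item strategy payoff-matrix
    # sum (map * my-payoffs count-of-coplayers-with-strategy-?)
    return sum(p * c for p, c in zip(my_payoffs, counts))

# Example: 4 coplayers, three playing strategy 0 and one playing strategy 1.
print(play(0, [0, 0, 0, 1], [[2, 0], [3, 1]]))  # 3 * 2 + 1 * 0 = 6
```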

to update-strategy
  ifelse random-float 1 < prob-mutation
    [ set strategy (random n-of-strategies) ]
    [ set strategy [strategy-last] of one-of my-neighbors-and-me with-max [payoff] ]
end

to update-graph
  let strategy-numbers (n-values n-of-strategies [ n -> n ])
  set strategy-frequencies map [ n ->
    count patches with [strategy = n] / num-agents ] strategy-numbers
  set-current-plot "Strategy Distribution"
  let bar 1
  foreach strategy-numbers [ n ->
    set-current-plot-pen (word n)
    plotxy ticks bar
    set bar (bar - (item n strategy-frequencies))
  ]
  set-plot-y-range 0 1
end
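The stacked-bar trick in *to update-graph* plots, for each strategy in turn, the height of the bar that remains above it, starting at 1 and subtracting each strategy's frequency. A Python sketch of that bookkeeping (names are ours):

```python
def stacked_bar_tops(strategies, n_of_strategies):
    """Return (frequencies, tops): tops[n] is the y-value plotted for pen n,
    starting at 1 and decreasing by each strategy's frequency in turn."""
    num_agents = len(strategies)
    frequencies = [strategies.count(n) / num_agents
                   for n in range(n_of_strategies)]
    tops, bar = [], 1.0
    for n in range(n_of_strategies):
        tops.append(bar)
        bar -= frequencies[n]
    return frequencies, tops

print(stacked_bar_tops([0, 0, 1, 2], 3))  # ([0.5, 0.25, 0.25], [1.0, 0.5, 0.25])
```

Because each pen is drawn in bar mode down to the x-axis, later pens overpaint the lower part of earlier ones, producing the stacked distribution.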

to update-color
  set pcolor 25 + 40 * strategy
end

globals [
  payoff-matrix
  n-of-strategies
  strategy-frequencies
  num-agents
]

patches-own [
  strategy
  strategy-last
  payoff
  my-neighbors-and-me
  my-coplayers
  num-my-coplayers
]

to setup
  clear-all
  setup-payoffs
  setup-agents
  setup-graph
  reset-ticks
  update-graph
  ask patches [update-color]
end

to setup-payoffs
  set payoff-matrix read-from-string payoffs
  set n-of-strategies length payoff-matrix
end

to setup-agents
  let initial-distribution read-from-string n-of-agents-for-each-strategy
  if length payoff-matrix != length initial-distribution [
    user-message (word "The number of items in n-of-agents-for-each-strategy (i.e. "
      length initial-distribution "):\n" n-of-agents-for-each-strategy
      "\nshould be equal to the number of rows of the payoff matrix (i.e. "
      length payoff-matrix "):\n" payoffs)
  ]
  if sum initial-distribution != count patches [
    user-message (word "The total number of agents in n-of-agents-for-each-strategy (i.e. "
      sum initial-distribution "):\n" n-of-agents-for-each-strategy
      "\nshould be equal to the number of patches (i.e. " count patches ")")
  ]
  ask patches [set strategy false]
  let i 0
  foreach initial-distribution [ x ->
    ask n-of x (patches with [strategy = false]) [
      set payoff 0
      set strategy i
      set strategy-last i
      set my-neighbors-and-me (patch-set neighbors self) ;; see logit-on-grid for more neighborhoods
      set my-coplayers ifelse-value include-yourself?
        [ my-neighbors-and-me ] [ neighbors ]
      set num-my-coplayers (count my-coplayers)
    ]
    set i (i + 1)
  ]
  set num-agents count patches
end

to setup-graph
  set-current-plot "Strategy Distribution"
  foreach (n-values n-of-strategies [ i -> i ]) [ i ->
    create-temporary-plot-pen (word i)
    set-plot-pen-mode 1
    set-plot-pen-color 25 + 40 * i
  ]
end

to go
  ifelse synchronous-updating?
    [
      ask patches [play]
      ask patches [set strategy-last strategy]
      ask patches [update-strategy]
    ]
    [
      ask patches [
        play
        set strategy-last strategy
        update-strategy
      ]
    ]
  tick
  update-graph
  ask patches [update-color]
end

to play
  let count-of-coplayers-with-strategy-? n-values n-of-strategies [ i ->
    count my-coplayers with [strategy = i] ]
  let my-payoffs (item strategy payoff-matrix)
  set payoff sum (map * my-payoffs count-of-coplayers-with-strategy-?)
end

to update-strategy
  ifelse random-float 1 < prob-mutation
    [ set strategy (random n-of-strategies) ]
    [ set strategy [strategy-last] of one-of my-neighbors-and-me with-max [payoff] ]
end

to update-graph
  let strategy-numbers (n-values n-of-strategies [ n -> n ])
  set strategy-frequencies map [ n ->
    count patches with [strategy = n] / num-agents ] strategy-numbers
  set-current-plot "Strategy Distribution"
  let bar 1
  foreach strategy-numbers [ n ->
    set-current-plot-pen (word n)
    plotxy ticks bar
    set bar (bar - (item n strategy-frequencies))
  ]
  set-plot-y-range 0 1
end

to update-color
  set pcolor 25 + 40 * strategy
end

Tools are mainly developed for one-dimensional CAs; for two- or higher-dimensional CAs, "run and watch" is the primary technique to study the behavior. Bhattacharjee, K., Naskar, N., Roy, S., et al. (2018). A survey of cellular automata: types, dynamics, non-uniformity and applications. Natural Computing. https://doi.org/10.1007/s11047-018-9696-8

- An A-player with (AA) neighbours who switches to B breaks 4 (directed) AA links.
- An A-player with (AB) neighbours who switches to B breaks 2 AA links.
- A B-player with (AA) neighbours who switches to A creates 4 AA links.
- A B-player with (AB) neighbours who switches to A creates 2 AA links.

- a population of agents,
- a game that is recurrently played by the agents,
- an assignment rule, which determines how revision opportunities are assigned to agents, and
- a revision protocol, which specifies how individual agents update their (pure) strategies when they are given the opportunity to revise.

|  | Player 2: Keep | Player 2: Transfer |
|---|---|---|
| **Player 1: Keep** | 1000 , 1000 | 3000 , 0 |
| **Player 1: Transfer** | 0 , 3000 | 2000 , 2000 |

- Every agent obtains a payoff by selecting another agent at random and playing the game.
- With probability prob-revision, individual agents are given the opportunity to revise their strategies. The revision rule, called **"imitate if better"**, reads as follows: look at another (randomly selected) agent and adopt her strategy if and only if her payoff was greater than yours.
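A minimal Python sketch of the "imitate if better" rule (the function name and data layout are ours, not NetLogo primitives):

```python
import random

def imitate_if_better(my_strategy, my_payoff, others):
    """others: list of (strategy, payoff) pairs for the other agents.
    Look at one of them at random; adopt her strategy if and only if
    her payoff was strictly greater than mine."""
    observed_strategy, observed_payoff = random.choice(others)
    return observed_strategy if observed_payoff > my_payoff else my_strategy
```

Note that ties leave the revising agent's strategy unchanged, since the rule requires a strictly greater payoff.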

- Three buttons:
  - one button to run the procedure to setup,
  - one button to run the procedure to go once, and
  - one button to run the procedure to go indefinitely.

In the Code tab, write the procedures to setup and to go, without including any code inside for now.

to setup
  ;; empty for now
end

to go
  ;; empty for now
end

In the Interface tab, create a button and write setup in the "commands" box. This will make the procedure to setup run whenever the button is pressed. Create another button for the procedure to go (i.e., write go in the commands box) with display name to emphasize that pressing the button will run the procedure to go just once. Finally, create another button for the procedure to go, but this time tick the "forever" option. When pressed, this button will make the procedure to go run repeatedly until the button is pressed again. - A slider to let the user select the number of players.
Create a slider for global variable n-of-players. You can choose limit values 2 (as the minimum) and 1000 (as the maximum), and an increment of 1.
- An input box where the user can write a string of the form [[*A₀₀* *A₀₁*] [*A₁₀* *A₁₁*]] containing the payoffs *Aᵢⱼ* that an agent playing strategy *i* obtains when meeting an agent playing strategy *j* (*i*, *j* ∈ {0, 1}).
  Create an input box with associated global variable payoffs. Set the input box type to "String (reporter)". Note that the content of payoffs will be a string (i.e. a sequence of characters) from which we will need to extract the payoff numeric values.
- A slider to let the user select the probability of revision.
Create a slider with associated global variable prob-revision. Choose limit values 0 and 1, and an increment of 0.01.
- A plot that will show the evolution of the number of agents playing each strategy.
Create a plot and name it Strategy Distribution. Since we are not going to use the 2D view (i.e. the large black square in the interface) in this model, you may want to overlay it with the newly created plot.

globals [payoff-matrix]

breed [players player]

players-own [ strategy payoff ]

- To clear everything up. We initialize the model afresh using the primitive clear-all:
`clear-all`

- To transform the string of characters the user has written in the payoffs input box (e.g. "[[1 2][3 4]]") into a list (of 2 lists) that we can use in the code (e.g. [[1 2][3 4]]). This list of lists will be stored in the global variable named payoff-matrix. To do this transformation (from string to list, in this case), we can use the primitive read-from-string as follows:
set payoff-matrix read-from-string payoffs

- To create n-of-players players and set their individually-owned variables to an appropriate initial value. At first, we set the value of payoff and strategy to 0:[footnote]By default, user-defined variables in NetLogo are initialized with the value 0, so there is no actual need to explicitly set the initial value of individually-owned variables to 0, but it does no harm either.[/footnote]
create-players n-of-players [
  set payoff 0
  set strategy 0
]

ask AGENTSET [set strategy 1]

where`AGENTSET`

should be a random subset of players. To randomly select a certain number of agents from an agentset (such as players), we can use the primitive n-of (which reports another, usually smaller, agentset):

ask (n-of SIZE players) [set strategy 1]

where`SIZE`

is the number of players we would like to select. Finally, to generate a random integer between 0 and n-of-players we can use the primitive random:

random (n-of-players + 1)

The resulting instruction will be:

ask n-of (random (n-of-players + 1)) players [set strategy 1]
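In Python-like terms (purely illustrative, with an example value for the slider), the instruction above amounts to:

```python
import random

n_of_players = 10  # example value for the n-of-players slider
strategies = [0] * n_of_players
# random (n-of-players + 1) yields an integer in 0..n-of-players;
# n-of then picks that many distinct players and gives them strategy 1.
k = random.randint(0, n_of_players)
for i in random.sample(range(n_of_players), k):
    strategies[i] = 1
```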

- To initialize the tick counter. At the end of the setup procedure, we should include the primitive reset-ticks, which resets the tick counter to zero (and also runs the "plot setup commands", the "plot update commands" and the "pen update commands" in every plot, so the initial state of the model is plotted):
`reset-ticks`

globals [ payoff-matrix ]

breed [players player]

players-own [
  strategy
  payoff
]

to setup
  clear-all
  set payoff-matrix read-from-string payoffs
  create-players n-of-players [
    set payoff 0
    set strategy 0
  ]
  ask n-of random (n-of-players + 1) players [set strategy 1]
  reset-ticks
end

to go
end

ask players [play]
ask players [
  if (random-float 1 < prob-revision) [update-strategy]
]

The condition (random-float 1 < prob-revision) will be true with probability prob-revision. Having the agents go once through the code above will mark an evolution step (or generation), so, to keep track of these cycles and have the plots in the interface automatically updated at the end of each cycle, we include the primitive tick at the end of to go.

`tick`

let mate one-of other players

item strategy payoff-matrix

In a similar fashion, the payoff to extract from this sublist is determined by the strategy of the running player's mate (i.e., [strategy] of mate). Thus, the payoff obtained by the running agent is:

item ([strategy] of mate) (item strategy payoff-matrix)

Finally, to make the running agent store her payoff, we can write:

set payoff item ([strategy] of mate) (item strategy payoff-matrix)
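The nested item calls are just a double list index; in Python terms (illustrative only, using the example matrix from before):

```python
payoff_matrix = [[1, 2], [3, 4]]  # parsed from the example string "[[1 2][3 4]]"
my_strategy = 1     # the running agent plays strategy 1
mates_strategy = 0  # her mate plays strategy 0
# item ([strategy] of mate) (item strategy payoff-matrix)
payoff = payoff_matrix[my_strategy][mates_strategy]
print(payoff)  # 3
```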

let observed-agent one-of other players

if ([payoff] of observed-agent) > payoff [ set strategy ([strategy] of observed-agent) ]

globals [ payoff-matrix ]

breed [players player]

players-own [
  strategy
  payoff
]

to setup
  clear-all
  set payoff-matrix read-from-string payoffs
  create-players n-of-players [
    set payoff 0
    set strategy 0
  ]
  ask n-of random (n-of-players + 1) players [set strategy 1]
  reset-ticks
end

to go
  ask players [play]
  ask players [
    if (random-float 1 < prob-revision) [update-strategy]
  ]
  tick
end

to play
  let mate one-of other players
  set payoff item ([strategy] of mate) (item strategy payoff-matrix)
end

to update-strategy
  let observed-agent one-of other players
  if ([payoff] of observed-agent) > payoff [
    set strategy ([strategy] of observed-agent)
  ]
end

Edit the plot by right-clicking on it, choose a color and a name for the pen showing the number of agents with strategy 0, and in the “pen update commands” area write:

plot count players with [strategy = 0]

Add a second pen to show the number of players with strategy 1.

Exercise 2. Consider a Stag hunt game with payoffs [[3 0][2 1]] where strategy 0 is "Stag" and strategy 1 is "Hare". Does the strategy with greater initial presence tend to displace the other strategy?

P.S. You can explore this model's (deterministic) mean dynamic approximation with this program.

Exercise 3. Consider a Hawk-Dove game with payoffs [[0 3][1 2]] where strategy 0 is "Hawk" and strategy 1 is "Dove". Do all players tend to choose the same strategy? Reduce the number of players to 100 and observe the difference in behavior (press the setup button after changing the number of players). Reduce the number of players to 10 and observe the difference. P.S. You can explore this model's (deterministic) mean dynamic approximation with this program.

Exercise 4. Reimplement the procedure to update-strategy so the revising agent uses the imitative pairwise-difference protocol that we saw in section 0.1.

The essence of agent-based modeling

The defining feature of the agent-based modeling approach is that it establishes a direct and explicit correspondence

- between the individual units in the target system to be modeled and the parts of the model that represent these units (i.e. the agents), and also
- between the interactions of the individual units in the target system and the interactions of the corresponding agents in the model (figure 1).

Thus, the individual units of the target system are *explicitly* and *individually* represented in the model (Edmonds, 2001).[footnote]These three videos by Bruce Edmonds, Michael Price and Uri Wilensky nicely describe what ABM is about.[/footnote] Beyond this, no further assumptions are made in agent-based modeling.

- Agents' heterogeneity. Since agents are explicitly represented in the model, they can be as heterogeneous as the modeler deems appropriate.
- Interdependencies between processes (e.g. demographic, economic, biological, geographical, technological) that have been traditionally studied in different disciplines, and are not often analyzed together. There is no restriction on the type of rules that can be implemented in an agent-based model, so models can include rules that link disparate aspects of the world that are often studied in different disciplines.
- Out-of-equilibrium dynamics. Dynamics are inherent to ABM. Running a simulation consists in applying the rules that define the model over and over, so agent-based models almost invariably include some notion of time within them. Equilibria are never imposed a priori: they may emerge as an outcome of the simulation, or they may not.
- The micro-macro link. ABM is particularly well suited to study how global phenomena emerge from the interactions among individuals, and also how these emergent global phenomena may constrain and shape back individuals' actions.
- Local interactions and the role of physical space. The fact that agents and their environment are represented explicitly in ABM makes it particularly straightforward and natural to model local interactions (e.g. via networks).

In my personal (albeit biased) view, the best simulations are those which just peek over the rim of theoretical understanding, displaying mechanisms about which one can still obtain causal intuitions. Probst (1999)

- strategy (a number)
- payoff (a number)
- my-coplayers (the set of agents with whom this agent plays the game)
- color (the color of this agent)

- to play (play a certain game with my-coplayers and set the payoff appropriately)
- to update-strategy (revise strategy according to a certain revision protocol)
- to update-color (set color according to strategy)

Individually-owned variables and instructions

In this model, agent's individually-owned variables are:

- color, which can be either blue or orange,
- (xcor, ycor), which determine the agent's location on the grid, and
- happy?, which indicates whether the agent is happy or not.

- to move, to change the agent's location to a random empty cell, and
- to update-happiness, to update the agent's individually-owned variable happy?.

- Choosing the side of the road on which to drive.
- Choosing WhatsApp or Facebook Messenger as your default text messaging app.
- Choosing which restaurant to go in the following situation:
Imagine that over breakfast your partner and you decided to have lunch together, but you did not specify where. Right before lunch time you discover that you left your cellphone at home, and there is no other way to contact your partner quickly. Fortunately, you are not completely lost: there are only two restaurants that you both love in town. Which one should you go to?

- the set of individuals who interact (called *players*),
- the different choices available to each of the individuals when they are called upon to act (named actions or *pure strategies*),
- the *information* individuals have at the time of making their decisions,
- and a *payoff* function that assigns a value to each individual for each possible combination of choices made by every individual (Fig. 1). In most cases, payoffs represent the preferences of each individual over each possible outcome of the social interaction,[footnote]A common misconception about game theory relates to the roots of players' preferences. There is no assumption in game theory that dictates that players' preferences are formed in complete disregard of each other's interests. On the contrary, preferences in game theory are assumed to account for anything, i.e. they may include altruistic motivations, moral principles, and social constraints (see e.g. Colman (1995, p. 301), Vega-Redondo (2003, p. 7), Binmore and Shaked (2010, p. 88), Binmore (2011, p. 8) or Gintis (2014, p. 7)).[/footnote] though there are some evolutionary models where payoffs represent Darwinian fitness.

|  | Player 2 chooses A | Player 2 chooses B |
|---|---|---|
| **Player 1 chooses A** | 1 , 1 | 0 , 0 |
| **Player 1 chooses B** | 0 , 0 | 2 , 2 |

- The players would be you and your partner.
- Each of you may choose restaurant A or restaurant B.
- Neither of you has any information about the other's choice at the time of making your decision.
- Both of you prefer eating together rather than being alone, no matter where, and you both prefer restaurant B over restaurant A. Thus, the best possible outcome for both would be to meet at restaurant B; second best would be meeting at restaurant A; and any lack of coordination would be equally awful.[footnote]Note that there is no inconsistency in being indifferent about outcomes {A, B} and {B, A}, even if you prefer restaurant B. It is sufficient to assume that you care about your partner as much as about yourself.[/footnote]

- At the initial round *t* = 1, play B.
- At round *t* > 1, play B if the other player chose B at time *t* - 1. Otherwise play A.
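This decision rule is simple enough to write down explicitly; a Python sketch (the function name is ours):

```python
def restaurant_choice(t, others_previous_choice=None):
    """Play B in the first round; afterwards, play B only if the other
    player chose B in the previous round, and A otherwise."""
    if t == 1:
        return "B"
    return "B" if others_previous_choice == "B" else "A"
```

Two players who both follow this rule coordinate on B from the first round onwards.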

- Players (who most often represented non-human animals) were assumed to be pre-programmed to play one given strategy, i.e. players were seen as mere carriers of a particular fixed strategy that had been genetically endowed to them and could not be changed during the course of the player's lifetime. As for the number of players, the main interest in early EGT was to study *large populations* of animals, where the actions of one single individual could not significantly affect the overall success of any strategy in the population.
- Strategies, therefore, were not assumed to be selected by players, but rather hardwired in the animals' genetic make-up. Strategies were, basically, phenotypes.
The concept is couched in terms of a 'strategy' because it arose in the context of animal behaviour. The idea, however, can be applied equally well to any kind of phenotypic variation, and the word strategy could be replaced by the word phenotype; for example, a strategy could be the growth form of a plant, or the age at first reproduction, or the relative numbers of sons and daughters produced by a parent. Maynard Smith (1982, p. 10)

- Since strategies are not consciously chosen by players, but they are simply hardwired, information at the time of making the decision plays no significant role.
- Payoffs did not represent any order of preference, but Darwinian fitness, i.e. the expected reproductive contribution to future generations.

As many more individuals of each species are born than can possibly survive; and as, consequently, there is a frequently recurring struggle for existence, it follows that any being, if it vary however slightly in any manner profitable to itself, under the complex and sometimes varying conditions of life, will have a better chance of surviving, and thus be naturally selected. From the strong principle of inheritance, any selected variety will tend to propagate its new and modified form. Darwin (1859, p. 5)

The essence of this simple and groundbreaking idea is that, if:

- More offspring are produced than can survive and reproduce, and
- variation within populations:
- affects the fitness (i.e. the expected reproductive contribution to future generations) of individuals, and
- is heritable,

then evolution by natural selection occurs.

- Players using different strategies obtain different payoffs, and
- they occasionally revise their strategies (by e.g. imitation or direct reasoning over gathered information), preferentially switching to strategies that provide greater payoffs,

then the frequency of strategies with greater payoffs will tend to increase (and this change in strategy frequencies may alter the future relative success of strategies).

- A population of agents,
- a game that is recurrently played by the agents,
- a procedure by which revision opportunities are assigned to agents, and
- a revision protocol, which dictates how individual agents choose their strategy when they are given the opportunity to revise.

- Under the *imitative pairwise-difference protocol* (Helbing (1992), Hofbauer (1995), Schlag (1998)), a revising agent looks at another individual at random and imitates her strategy only if that strategy yields a higher expected payoff than his current strategy; in this case he switches with probability proportional to the payoff difference. It can be proved that the dynamics of this protocol in large populations will tend to approach the state where every agent plays action B if the initial proportion of B-players is greater than ⅓, and will tend to approach the state where every agent plays action A if the initial proportion of B-players is less than ⅓ (Fig. 2).[footnote]To prove this statement, note that the mean dynamic of this revision protocol is the well-known replicator dynamic (Taylor and Jonker, 1978).[/footnote]

  Figure 2. Mean dynamic of the imitative pairwise-difference protocol in a coordination game.

  The following video shows some NetLogo simulations that illustrate these dynamics. In this book, we will learn to implement this model.[footnote]See exercise 4 in section 1.0.[/footnote]

- Under the *best experienced payoff protocol* (Osborne and Rubinstein (1998), Sethi (2000), Sandholm et al. (2017), Izquierdo et al. (2018b)), a revising agent tests each of the two strategies against a random agent, with each play of each strategy being against a newly drawn opponent. The revising agent then selects the strategy that obtained the greater payoff in the test, with ties resolved at random. It can be proved that the dynamics of this protocol in large populations will tend to approach the state where every agent plays action B for any initial condition (Fig. 3).[footnote]This statement is a direct application of proposition 4.5 in Izquierdo et al. (2018b).[/footnote]

  Figure 3. Mean dynamic of the best experienced payoff protocol in a coordination game.

  The following video shows some NetLogo simulations that illustrate these dynamics. We will learn to implement this model in this book.[footnote]See exercise 5 in section 1.0.[/footnote]
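The ⅓ threshold for the imitative pairwise-difference protocol can be checked directly from the replicator dynamic mentioned in the footnote. The following Python sketch (a simple illustration, not the book's code) computes the drift of the fraction of B-players for the coordination game with payoffs [[1 0][0 2]]:

```python
def replicator_drift(x_b):
    """Replicator dynamic for the fraction x_b of B-players in the
    coordination game [[1 0][0 2]]: dx_b/dt = x_b * (pi_b - pi_bar)."""
    pi_a = 1 * (1 - x_b)                     # expected payoff of A-players
    pi_b = 2 * x_b                           # expected payoff of B-players
    pi_bar = (1 - x_b) * pi_a + x_b * pi_b   # population average payoff
    return x_b * (pi_b - pi_bar)

# The drift vanishes at x_b = 1/3, is negative below it and positive above it,
# so populations with more than 1/3 of B-players move towards all-B.
```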

Evolutionary Game Theory and Engineering

Many engineering infrastructures are becoming increasingly complex to manage due to their large-scale distributed nature and the nonlinear interdependences between their components (Quijano et al., 2017). Examples include communication networks, transportation systems, sensor and data networks, wind farms, power grids, teams of autonomous vehicles, and urban drainage systems. Controlling such large-scale distributed systems requires the implementation of decision rules for the interconnected components that guarantee the accomplishment of a collective objective in an environment that is often dynamic and uncertain. To achieve this goal, traditional control theory is often of little use, since distributed architectures generally lack a central entity with access or authority over all components (Marden and Shamma, 2015).
The concepts developed in EGT can be very useful in such situations. The analogy between distributed systems in engineering and the social interactions analyzed in EGT has been formally established in various engineering contexts (Marden and Shamma, 2015). In EGT terms, the goal is to identify revision protocols that will lead to desirable outcomes using limited local information only. As an example, at least in the coordination game discussed above, the best experienced payoff protocol is more likely to lead to the most efficient outcome than the imitative pairwise-difference protocol.

- EGT tends to study *large* populations of *small* agents, who interact *anonymously*. This implies that players' payoffs only depend on the distribution of strategies in the population, and also that any one player's actions have little or no effect on the aggregate success of any strategy at the population level. In contrast, TLG is mainly concerned with the analysis of small groups of players who repeatedly play a game among them, each of them in her particular role.
- The revision protocols analyzed in EGT tend to be fairly simple and use information about the current state of the population only. By contrast, the sort of algorithms analyzed in TLG tend to be more sophisticated, and make use of the history of the game to decide what action to take.

- Overviews:
- Introductory: Colman (1995).
- Advanced: Vega-Redondo (2003).

- Traditional game theory:
- Introductory: Dixit and Nalebuff (2008).
- Intermediate: Osborne (2004).
- Advanced: Fudenberg and Tirole (1991), Myerson (1997), Binmore (2007).

- Evolutionary game theory:
- Introductory: Maynard Smith (1982), Gintis (2009), Sandholm (2009).
- Advanced: Hofbauer and Sigmund (1988), Weibull (1995), Samuelson (1997), Sandholm (2010).
- Recent literature review: Newton (2018).

- The theory of learning in games:
- Introductory: Vega-Redondo (2003, chapters 11 and 12).
- Advanced: Fudenberg and Levine (1998), Young (2004).

NetLogo is a well-written, easy-to-install, easy-to-use, easy-to-extend, and easy-to-publish-online environment. The entry level is simple enough and the tutorials provided in the package are straightforward and clear enough that anyone who can read and is comfortable using a keyboard and mouse could create their own models in a short time, with little or no additional instruction. Sklar (2007, p. 7)

NetLogo (Wilensky, 1999) is a modeling environment designed for coding and running agent-based simulations.[footnote]NetLogo was created by Uri Wilensky and is under continuous development at Northwestern's Center for Connected Learning and Computer-Based Modeling. It is also important to acknowledge Seth Tisue, who "

NetLogo stands out as the quickest to learn and the easiest to use. Gilbert (2007, p. 49)

The language used to code models within NetLogo –which is also called NetLogo– has been designed following a "Low Threshold, No Ceiling" philosophy (Wilensky and Rand, 2015). All reviews of the software highlight how easy it is to learn. To be concrete, we would estimate that an average scholar without previous coding experience can learn the basics of the language and be in a position to write a simple agent-based model after 2-4 days of work. Someone with programming experience could reduce the estimated time to 1-2 days. One characteristic that makes the NetLogo language easy to learn is that it is remarkably close to natural language. As a matter of fact, the NetLogo language could perfectly well be used as pseudo-code to communicate algorithms implemented in other languages.

Since NetLogo was designed to be easily readable, we believe that NetLogo code is about as easy to read as any pseudo-code we would have used. NetLogo also has the big advantage over pseudo-code of being executable, so the user can run and test the examples. (Wilensky and Rand, 2015, p. xiv)

The NetLogo language is definitely simpler to use than e.g. Java or Objective-C, and can often reduce programming efforts significantly when compared with other languages.

NetLogo has become a standard platform for agent-based simulation, yet there appears to be widespread belief that it is not suitable for large and complex models due to slow execution. Our experience does not support that belief. Railsback et al. (2017, abstract)

NetLogo is powerful in that it can accommodate reasonably large and complex simulations, and its execution speed is more than acceptable for most purposes. NetLogo can easily run simulations with several tens of thousands of agents.

NetLogo is by far the most professional platform in its appearance and documentation. Railsback et al. (2006, p. 613)

One of the reasons why NetLogo is so easy to learn is that it is very well documented. The user manual includes three tutorials to help beginners get started, an excellent programming guide, and a comprehensive dictionary with the definitions of all NetLogo primitives, including examples of how to use them. NetLogo also comes with an extensive library of models from different disciplines (e.g. art, biology, chemistry, computer science, mathematics, networks, philosophy, physics, psychology, and other social sciences) and several code examples which succinctly illustrate particular features and coding techniques.

- By modifying parameter values at runtime, with immediate effect on the simulation. This feature is very convenient to assess the impact of different assumptions in the model and conduct exploratory work.
- By running commands in the middle of a run to e.g. create new agents, remove others, or make a subset of them take some action.
- By probing agents to see –and potentially set– the value of any of their individually-owned variables at any time.

This reviewer, who has used NetLogo for both research and teaching at several levels, highly recommends it for instructors from elementary school to graduate school and for researchers from a wide range of fields. Sklar (2007, p. 8)

- The rnd extension, which provides efficient primitives to make random weighted selections, with or without replacement.
- The nw extension, which adds many primitives to generate networks, compute several network-related metrics, and import and export network data.
- The matrix extension, which adds a matrix data structure to NetLogo and several primitives to operate with it.
- The GIS (Geographic Information Systems) extension.

NetLogo is now a powerful tool widely used in science and we recommend it strongly, especially for those new to modeling and programming but also for serious scientists with software experience. Lytinen and Railsback (2012)

NetLogo can be linked with advanced software tools like R (R Core Team, 2019), Python (Python Software Foundation, 2019), Mathematica (Wolfram Research, Inc., 2019) or Matlab (The MathWorks, Inc., 2019). Specifically, using an R package called RNetLogo (Thiele, 2014; Thiele et al., 2012a, 2012b, 2014), it is possible to run and control NetLogo models from R, execute NetLogo commands, and obtain any information from a NetLogo model. The connector PyNetLogo (Jaxa-Rozen and Kwakkel, 2018) provides the same functionality for Python, and the so-called Mathematica link (Bakshy and Wilensky, 2007) for Mathematica. The Mathematica link comes bundled as part of the latest NetLogo releases. Conversely, one can also call R, Python and Matlab commands from within NetLogo using the R-Extension (Thiele and Grimm, 2010), the NetLogo Python extension (Head, 2018) and MatNet (Biggs and Papin, 2013) respectively.

- Download and install NetLogo following the instructions at https://ccl.northwestern.edu/netlogo/. In this book we will be using NetLogo version 6.0.2.[footnote]Please, make sure you download version 6.0.2 or greater, since NetLogo syntax changed significantly in version 6.0.[/footnote]
- Go through the three tutorials in the NetLogo user manual, i.e.

- Every agent obtains a payoff by selecting another agent at random and playing the game.
- With probability prob-revision, individual agents are given the opportunity to revise their strategies. The revision rule –called **“imitate if better”**– reads as follows: Look at another (randomly selected) agent and adopt her strategy if and only if her payoff was greater than yours.

- Make the payoffs input box bigger and let its input contain several lines.
In the Interface tab, select the input box (by right-clicking on it) and make it bigger. Then edit it (by right-clicking on it) and tick the "Multi-Line" box.
- Remove the "pens" in the Strategy Distribution plot. Since the number of strategies is unknown until the payoff matrix is read, we will need to create the required number of "pens" via code.
In the Interface tab, edit the Strategy Distribution plot and delete both pens.

globals [ payoff-matrix n-of-strategies ]

to setup
  clear-all
  set payoff-matrix read-from-string payoffs
  create-players n-of-players [
    set payoff 0
    set strategy 0
  ]
  ask n-of random (n-of-players + 1) players [set strategy 1]
  reset-ticks
end

to setup
  clear-all
  setup-payoffs
  setup-players
  setup-graph
  reset-ticks
  update-graph
end

to setup-players
  create-players n-of-players [
    set payoff 0
    set strategy (random n-of-strategies)
  ]
end

set-current-plot "Strategy Distribution"

Then, for each strategy:

- Create a pen with the name of the strategy. For this, we use the primitive create-temporary-plot-pen to create the pen, and the primitive word to turn the strategy number into a string.
create-temporary-plot-pen (word i)

- Set the pen mode to 1 (bar mode) using set-plot-pen-mode. We do this because we plan to create a stacked bar chart for the distribution of strategies.
set-plot-pen-mode 1

- Choose a color for each pen. See how colors work in NetLogo.
set-plot-pen-color 25 + 40 * i

range n-of-strategies

The final code for the procedure to setup-graph is then:

to setup-graph
  set-current-plot "Strategy Distribution"
  foreach (range n-of-strategies) [ i ->
    create-temporary-plot-pen (word i)
    set-plot-pen-mode 1
    set-plot-pen-color 25 + 40 * i
  ]
end

let strategy-numbers (range n-of-strategies)

To compute the (relative) strategy frequencies, we apply to each element of the list strategy-numbers, i.e. to each strategy number, the operation that calculates the fraction of players using that strategy. To do this, we use primitive map. Remember that map requires as inputs a) the function to be applied to each element of the list and b) the list containing the elements on which you wish to apply the function. In this case, the function we wish to apply to each strategy number (implemented as an anonymous procedure) is:

[n -> ( count (players with [strategy = n]) ) / n-of-players]

In the code above, we first identify the subset of players that have a certain strategy (using with), then we count the number of players in that subset (using count), and finally we divide by the total number of players n-of-players. Thus, we can use the following code to obtain the strategy frequencies, as a list:

map [n -> count players with [strategy = n] / n-of-players] strategy-numbers

Finally, to build the stacked bar chart, we begin by plotting a bar of height 1, corresponding to the first strategy. Then we repeatedly draw bars on top of the previously drawn bars (one bar for each of the remaining strategies), with the height diminished each time by the relative frequency of the corresponding strategy. The final code of procedure to update-graph will look as follows:

to update-graph
  let strategy-numbers (range n-of-strategies)
  let strategy-frequencies map [ n ->
    count players with [strategy = n] / n-of-players
  ] strategy-numbers
  set-current-plot "Strategy Distribution"
  let bar 1
  foreach strategy-numbers [ n ->
    set-current-plot-pen (word n)
    plotxy ticks bar
    set bar (bar - (item n strategy-frequencies))
  ]
  set-plot-y-range 0 1
end
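For readers more familiar with conventional languages, the same frequency and bar-height logic can be mirrored in Python (our own sketch; the player data is made up):

```python
# Made-up data: one strategy number per player.
strategies = [0, 1, 0, 2]
n_of_players = len(strategies)
n_of_strategies = 3

# Analogue of:
#   map [n -> count players with [strategy = n] / n-of-players] strategy-numbers
frequencies = [strategies.count(n) / n_of_players
               for n in range(n_of_strategies)]

# Analogue of the foreach loop: each pen plots at height `bar`, and `bar`
# shrinks by one frequency each time, producing a stacked bar chart.
bar = 1.0
heights = []
for f in frequencies:
    heights.append(bar)
    bar -= f

print(frequencies)  # [0.5, 0.25, 0.25]
print(heights)      # [1.0, 0.5, 0.25]
```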

to go
  ask players [play]
  ask players [
    if (random-float 1 < prob-revision) [update-strategy]
  ]
  tick
  update-graph
end

globals [
  payoff-matrix
  n-of-strategies
]

breed [players player]

players-own [
  strategy
  payoff
]

to setup
  clear-all
  setup-payoffs
  setup-players
  setup-graph
  reset-ticks
  update-graph
end

to setup-payoffs
  set payoff-matrix read-from-string payoffs
  set n-of-strategies length payoff-matrix
end

to setup-players
  create-players n-of-players [
    set payoff 0
    set strategy (random n-of-strategies)
  ]
end

to setup-graph
  set-current-plot "Strategy Distribution"
  foreach (range n-of-strategies) [ i ->
    create-temporary-plot-pen (word i)
    set-plot-pen-mode 1
    set-plot-pen-color 25 + 40 * i
  ]
end

to go
  ask players [play]
  ask players [
    if (random-float 1 < prob-revision) [update-strategy]
  ]
  tick
  update-graph
end

to play
  let mate one-of other players
  set payoff item ([strategy] of mate) (item strategy payoff-matrix)
end

to update-strategy
  let observed-player one-of other players
  if ([payoff] of observed-player) > payoff [
    set strategy ([strategy] of observed-player)
  ]
end

to update-graph
  let strategy-numbers (range n-of-strategies)
  let strategy-frequencies map [ n ->
    count players with [strategy = n] / n-of-players
  ] strategy-numbers
  set-current-plot "Strategy Distribution"
  let bar 1
  foreach strategy-numbers [ n ->
    set-current-plot-pen (word n)
    plotxy ticks bar
    set bar (bar - (item n strategy-frequencies))
  ]
  set-plot-y-range 0 1
end

Exercise 2. Consider a game with payoff matrix [[1 1 0][1 1 1][0 1 1]]. Set the probability of revision to 0.1. Press the button and run the model for a while (then press the button again to change the initial conditions). Can you explain what happens?

- using the primitive repeat instead of foreach.
- using the primitive while instead of foreach.
- using the primitive loop instead of foreach.

- The possibility of setting initial conditions explicitly. This is an important feature because initial conditions can be very relevant for the evolution of a system.
- The possibility that revising agents select a strategy at random with a small probability. This type of noise in the revision process may account for experimentation or errors in economic settings, or for mutations in biological contexts. The inclusion of noise in a model can sometimes change its dynamical behavior dramatically, even creating new attractors. This is important because dynamic characteristics of a model –such as attractors, cycles, repellors, and other patterns– that are not robust to the inclusion of small noise may not correspond to relevant properties of the real-world system that we aim to understand. Besides, as a positive side-effect, adding small amounts of noise to a model often makes the analysis of its dynamics easier to undertake.

- Every agent obtains a payoff by selecting another agent at random and playing the game.
- With probability prob-revision, individual agents are given the opportunity to revise their strategies. In that case, with probability noise, the revising agent will adopt a random strategy; and with probability (1 - noise), the revising agent will choose her strategy following the **“imitate if better”** protocol: Look at another (randomly selected) agent and adopt her strategy if and only if her payoff was greater than yours.

- Create an input box to let the user set the initial number of players using each strategy.
In the Interface tab, add an input box with associated global variable n-of-players-for-each-strategy. Set the input box type to “String (reporter)”.
- Note that the total number of players (which was previously set using a slider with associated global variable n-of-players) will now be computed by totaling the items of the list n-of-players-for-each-strategy. Thus, we should remove the slider, and include the global variable n-of-players in the Code tab.
globals [ payoff-matrix n-of-strategies n-of-players ]

- Add a monitor to show the total number of players. This number will be stored in the global variable n-of-players, so the monitor must show the value of this variable.
In the Interface tab, create a monitor. In the "Reporter" box write the name of the global variable n-of-players.
- Create a slider to choose the value of parameter noise.
In the Interface tab, create a slider with associated global variable noise. Choose limit values 0 and 1, and an increment of 0.001.

let initial-distribution read-from-string n-of-players-for-each-strategy

if length initial-distribution != length payoff-matrix [
  user-message (word "The number of items in\n"
    ;; "\n" is used to jump to the next line
    "n-of-players-for-each-strategy (i.e. "
    length initial-distribution "):\n"
    n-of-players-for-each-strategy
    "\nshould be equal to the number of rows\n"
    "in the payoff matrix (i.e. "
    length payoff-matrix "):\n"
    payoffs)
]
;; It is not necessary to show the user the value of
;; n-of-players-for-each-strategy and payoffs again,
;; but when creating an error message, it is good practice
;; to give the user as much information as possible,
;; so the error can be easily corrected.

let i 0
foreach initial-distribution [ j ->
  create-players j [
    set payoff 0
    set strategy i
  ]
  set i (i + 1)
]

set n-of-players count players

to update-strategy
  let observed-player one-of other players
  if ([payoff] of observed-player) > payoff [
    set strategy ([strategy] of observed-player)
  ]
end

ifelse CONDITION
  [ COMMANDS EXECUTED IF CONDITION IS TRUE ]
  [ COMMANDS EXECUTED IF CONDITION IS FALSE ]

In our case, the

`CONDITION`

should be true with probability noise. Bearing all this in mind, the final code for procedure to update-strategy could be as follows:
to update-strategy
  ifelse random-float 1 < noise  ;; the condition is true with probability noise
    [ ;; code to be executed if there is noise
      set strategy (random n-of-strategies)
    ]
    [ ;; code to be executed if there is no noise
      let observed-player one-of other players
      if ([payoff] of observed-player) > payoff [
        set strategy ([strategy] of observed-player)
      ]
    ]
end

globals [
  payoff-matrix
  n-of-strategies
  n-of-players
]

breed [players player]

players-own [
  strategy
  payoff
]

to setup
  clear-all
  setup-payoffs
  setup-players
  setup-graph
  reset-ticks
  update-graph
end

to setup-payoffs
  set payoff-matrix read-from-string payoffs
  set n-of-strategies length payoff-matrix
end

to setup-players
  let initial-distribution read-from-string n-of-players-for-each-strategy
  if length initial-distribution != length payoff-matrix [
    user-message (word "The number of items in\n"
      "n-of-players-for-each-strategy (i.e. "
      length initial-distribution "):\n"
      n-of-players-for-each-strategy
      "\nshould be equal to the number of rows\n"
      "in the payoff matrix (i.e. "
      length payoff-matrix "):\n"
      payoffs)
  ]
  let i 0
  foreach initial-distribution [ j ->
    create-players j [
      set payoff 0
      set strategy i
    ]
    set i (i + 1)
  ]
  set n-of-players count players
end

to setup-graph
  set-current-plot "Strategy Distribution"
  foreach (range n-of-strategies) [ i ->
    create-temporary-plot-pen (word i)
    set-plot-pen-mode 1
    set-plot-pen-color 25 + 40 * i
  ]
end

to go
  ask players [play]
  ask players [
    if (random-float 1 < prob-revision) [update-strategy]
  ]
  tick
  update-graph
end

to play
  let mate one-of other players
  set payoff item ([strategy] of mate) (item strategy payoff-matrix)
end

to update-strategy
  ifelse random-float 1 < noise
    [ set strategy (random n-of-strategies) ]
    [ let observed-player one-of other players
      if ([payoff] of observed-player) > payoff [
        set strategy ([strategy] of observed-player)
      ]
    ]
end

to update-graph
  let strategy-numbers (range n-of-strategies)
  let strategy-frequencies map [ n ->
    count players with [strategy = n] / n-of-players
  ] strategy-numbers
  set-current-plot "Strategy Distribution"
  let bar 1
  foreach strategy-numbers [ n ->
    set-current-plot-pen (word n)
    plotxy ticks bar
    set bar (bar - (item n strategy-frequencies))
  ]
  set-plot-y-range 0 1
end

Exercise 2. Consider a Rock-Paper-Scissors game with payoff matrix [[0 -1 1][1 0 -1][-1 1 0]]. Set prob-revision to 0.1 and noise to 0. Set the initial number of players using each strategy, i.e. n-of-players-for-each-strategy, to [100 100 100]. Press the button and run the model for a while. While it is running, click on the noise slider to set its value to 0.001. Can you explain what happens?

Exercise 3. Consider a game with payoff matrix [[1 1 0][1 1 1][0 1 1]]. Set prob-revision to 0.1, noise to 0.05, and the initial number of players using each strategy, i.e. n-of-players-for-each-strategy, to [500 0 500]. Press the button and run the model for a while (then press the button again to change the initial conditions). Can you explain what happens?

Exercise 4. Consider a game with

- The computer simulation approach consists in a) running many simulations –i.e. sampling the model many times– and, with the data thus obtained, b) trying to infer general patterns and properties of the model. In terms of types of reasoning, this approach involves logical deduction first (when the data is generated by the computer following the algorithmic rules that define the model) and induction later (when general patterns are inferred from the generated data).
- Mathematical approaches do not look at individual simulation runs, but instead analyze the rules that define the model directly, and try to derive their logical implications. Mathematical approaches use deductive reasoning only, so their conclusions follow with logical necessity from the assumptions embedded in the model (and in the mathematics employed).

[ "prob-revision" 0.01 0.05 0.1 ]If, in addition, we would like to explore the values of noise 0, 0.01, 0.02 ... 0.1, we could use the syntax for loops, [

[ "noise" [0 0.01 0.1] ] ;; note the additional bracketsIf we made the two changes described above, then the new computational experiment would comprise 33000 runs, since NetLogo would run 1000 simulations for each combination of parameter values (i.e. 3 × 11). The original experiment shown in figure 2, which consists of 1000 simulation runs only, takes a couple of minutes to run. Once it is completed, we obtain a

- The number of states is too large.[footnote]As an example, in a 4-strategy game with 1000 players, the number of possible states in our model is $ \binom{1000 + 4 - 1}{4-1} = 167668501$.[/footnote]
- The transition probabilities are hard to derive.
- The model has too many parameters.
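Regarding the first point, the number of states grows combinatorially: with $N$ players and $n$ strategies there are $\binom{N+n-1}{n-1}$ possible strategy distributions (a stars-and-bars count). A quick Python check of the figure given in the footnote (the function name is ours):

```python
from math import comb

def n_of_states(n_players, n_strategies):
    # Stars and bars: number of ways to distribute n_players
    # indistinguishable players among n_strategies strategies.
    return comb(n_players + n_strategies - 1, n_strategies - 1)

print(n_of_states(1000, 4))  # 167668501, matching the footnote
```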

$\dot{x} = x (x-1)\left(x^2-3 x+1\right)$

where $x$ stands for the fraction of 0-strategists.[footnote]For details, see Izquierdo and Izquierdo (2013).[/footnote][footnote]One unit of clock time is defined in such a way that each player expects to receive one revision opportunity per unit of clock time. In this particular example, since prob-revision = 0.01, one unit of clock time corresponds to 100 ticks (i.e. 1 / prob-revision).[/footnote] The solution of this ordinary differential equation with initial condition $x_0 = 0.5$ is shown in figure 4 below. It is clear that the mean dynamic provides a remarkably good approximation to the average transient dynamics plotted in figures 1 and 3. [caption id="attachment_1309" align="aligncenter" width="600"] Figure 4. Trajectory of the mean dynamic of the example in section 2, showing the proportion of 0-strategists as a function of time (rescaled to match figures 1 and 3).[/caption] Naturally, the mean dynamic can be solved for many different initial conditions, providing an overall picture of the transient dynamics of the model when the population is large. Figure 5 below shows an illustration, created with the following Mathematica code:

Plot[
 Evaluate[
  Table[
   NDSolveValue[
     {x'[t] == x[t] (x[t] - 1) (x[t]^2 - 3 x[t] + 1), x[0] == x0},
     x, {t, 0, 10}][t/100],
   {x0, 0, 1, 0.01}]
 ], {t, 0, 1000}]
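The single trajectory of figure 4 can also be cross-checked numerically outside Mathematica. Below is a minimal Python sketch (our own code, assuming scipy is available; it is not part of the NetLogo model) that integrates the mean dynamic from $x_0 = 0.5$:

```python
from scipy.integrate import solve_ivp

# Mean dynamic of the example: x' = x (x - 1) (x^2 - 3 x + 1),
# where x is the fraction of 0-strategists.
def mean_dynamic(t, x):
    return x * (x - 1) * (x ** 2 - 3 * x + 1)

# Integrate over 10 units of clock time (1000 ticks in this example,
# since prob-revision = 0.01).
sol = solve_ivp(mean_dynamic, (0, 10), [0.5], rtol=1e-8, atol=1e-10)

# Starting from x0 = 0.5, the trajectory approaches the stable rest point x = 1.
print(sol.y[0, -1])
```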

$\dot{x} = x (1-x)(1-2 x)$

where $x$ stands for the fraction of 0-strategists, i.e. "Hawk" agents.[footnote]For details, see Izquierdo and Izquierdo (2013).[/footnote] Solving the mean dynamic reveals that most large-population simulations starting with at least one "Hawk" and at least one "Dove" will tend to approach the state where half the population play "Hawk" and the other half play "Dove", and stay around there for a long time. Figure 6 below shows several trajectories for different initial conditions. [caption id="attachment_1472" align="aligncenter" width="600"] Figure 6. Trajectories of the mean dynamic of an imitate-if-better Hawk-Dove game, showing the proportion of "Hawks" as a function of time for different initial conditions.[/caption] Naturally, simulations do not get stuck in the half-and-half state, since agents keep revising their strategy in a stochastic fashion (see figure 7). To understand this stochastic flow of agents between strategies near equilibria, it is necessary to go beyond the mean dynamic. Sandholm (2003) shows that –under rather general conditions– stochastic finite-population dynamics near rest points can be approximated by a diffusion process, as long as the population size $N$ is large enough. He also shows that the standard deviations of the limit distribution are of order $\frac{1}{\sqrt{N}}$. To illustrate this order $\frac{1}{\sqrt{N}}$, we set up one simulation run starting with 10 agents playing "Hawk" and 10 agents playing "Dove". This state constitutes a so-called "equilibrium", since the expected change in the strategy distribution is zero. However, the stochasticity in the revision protocol and in the matching process imply that the strategy distribution is in perpetual change. In the simulation shown in figure 7, we modify the number of players at runtime. At tick 10000, we increase the number of players by a factor of 10 up to 200 and, after 10000 more ticks, we set n-of-players to 2000 (i.e., a factor of 10, again).
The standard deviation of the fraction of players using strategy "Hawk" (or "Dove") during each of the three stages in our simulation run was: 0.1082, 0.0444 and 0.01167 respectively. As expected, these numbers are related by a factor of approximately $\frac{1}{\sqrt{10}}$.[footnote]A good approximation for these standard deviations over long time spans when the number of agents is large can be exactly computed (see Izquierdo et al. (2018a, example 3.1)).[/footnote] [caption id="attachment_1482" align="aligncenter" width="700"] Figure 7. A simulation run of an imitate-if-better Hawk-Dove game, set up with 20 agents during the first 10000 ticks, then 200 agents during the following 10000 ticks, and finally 2000 agents during the last 10000 ticks. Payoffs: [[0 3][1 2]]; prob-revision: 0.01; noise 0; initial conditions [10 10].[/caption]

- the set of agents with whom agent A plays the game, and
- the set of agents that agent A may observe at the time of revising his strategy.

- Every agent plays the game with all his neighbors (once with each neighbor) and with himself (Moore neighborhood). The total payoff for the player is the sum of the payoffs in these encounters.
- All agents *simultaneously* revise their strategy according to the following rule: Consider the set of all your neighbors plus yourself; then adopt the strategy of one of the agents in this set who has obtained the greatest payoff. If there is more than one agent with the greatest payoff, choose one of them at random to imitate.
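The two rules above can be sketched outside NetLogo as well. The following Python/numpy code is our own stand-alone illustration: the payoff values are made up, the grid is assumed to wrap around in both directions (a torus), and ties are broken deterministically in favor of C, whereas the NetLogo model breaks them at random. It runs one synchronous update of the whole grid:

```python
import numpy as np

# Payoff values for the outcomes CC, CD, DC, DD (made-up numbers).
CC, CD, DC, DD = 1.0, 0.0, 1.3, 0.1

# The 8 shifts of a Moore neighborhood.
SHIFTS = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
          if (di, dj) != (0, 0)]

def step(C):
    """One synchronous update. C[i, j] is True iff patch (i, j) is a C-player."""
    # Number of C-players among each patch's 8 neighbors plus itself.
    n_C = C.astype(float).copy()
    for di, dj in SHIFTS:
        n_C += np.roll(C, (di, dj), axis=(0, 1))
    # Total payoff: each patch plays with its 8 neighbors and with itself.
    payoff = np.where(C, n_C * CC + (9 - n_C) * CD,
                         n_C * DC + (9 - n_C) * DD)
    # Best payoff achieved by a C-player / D-player in each patch's
    # neighborhood (including the patch itself).
    pay_C = np.where(C, payoff, -np.inf)
    pay_D = np.where(C, -np.inf, payoff)
    best_C, best_D = pay_C.copy(), pay_D.copy()
    for di, dj in SHIFTS:
        best_C = np.maximum(best_C, np.roll(pay_C, (di, dj), axis=(0, 1)))
        best_D = np.maximum(best_D, np.roll(pay_D, (di, dj), axis=(0, 1)))
    # Adopt the strategy of a best performer (ties broken in favor of C here).
    return best_C >= best_D
```

Note that a homogeneous grid is a fixed point of this map: if every patch plays the same strategy, all payoffs are equal and nobody switches.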

- The 2D view of the NetLogo world (i.e. the large black square in the interface), which is made up of patches. This view is already on the interface by default when creating a new NetLogo model.
Choose the dimensions of the world by clicking on the "Settings..." button on the top bar, or by right-clicking on the 2D view and choosing Edit. A window will pop up, which allows you to choose the number of patches by setting the values of min-pxcor, max-pxcor, min-pycor and max-pycor. You can also determine the patches' size in pixels, and whether the grid wraps horizontally, vertically, both or none (see Topology section). You can choose these parameters as in figure 2 below: [caption id="attachment_2678" align="aligncenter" width="447"] Figure 2. Model settings.[/caption]
- Three buttons:
- One button which runs the procedure to setup.
- One button which runs the procedure to go.
- One button which runs the procedure to go indefinitely.

In the Code tab, write the procedures to setup and to go, without including any code inside for now. Then, create the buttons, just like we did in the previous chapter. Note that the interface in figure 1 has an extra button. You may wish to include it now. The code that goes inside this button is proposed as Exercise 2.
- Four sliders, to choose the payoffs for each possible outcome (CC, CD, DC, DD).
Create the four sliders, with global variable names CC-payoff, CD-payoff, DC-payoff, and DD-payoff. Remember to choose a range, an interval and a default value for each of them. You can choose minimum 0, maximum 2 and increment 0.01.
- A slider to let the user select the initial percentage of C-players.
Create a slider for global variable initial-%-of-C-players. You can choose limit values 0 (as the minimum) and 100 (as the maximum), and an increment of 1.
- A plot that will show the evolution of the number of agents playing each strategy.
Create a plot and name it Strategy Distribution.

- Whether the patch is a C-player or D-player. For efficiency and code readability we can use a boolean variable to this end, which we can call C-player? and which will take the value true or false.
- Whether the patch was a C-player or a D-player before it revised and updated its strategy. This is needed because we want to model synchronous updating, i.e. we want all agents to change their strategy at the same time. To do this, we need to keep in memory the strategy of every agent before the first agent switches strategy. For this purpose, we may use the boolean variable C-player-last?.
- The total payoff obtained by the patch playing with its neighbours. We can call this variable payoff.
- For efficiency, it will also be useful to endow each patch with the set of neighbouring patches plus itself. The reason is that this set will be used many times, and it never changes, so it can be computed just once at the beginning and stored in memory. We will store this set in a variable named my-nbrs-and-me.
- The following variable is also defined for efficiency reasons. Note that the payoff of a patch depends on the number of C-players and D-players in its set my-nbrs-and-me. To spare the operation of counting D-players, we can calculate it as the number of players in my-nbrs-and-me (which does not change in the whole simulation) minus the number of C-players. To this end, we can store the number of players in the set my-nbrs-and-me of each patch as an individually-owned variable that we naturally name n-of-my-nbrs-and-me.

patches-own [ C-player? C-player-last? payoff my-nbrs-and-me n-of-my-nbrs-and-me ]

- Clear everything up, so we initialize the model afresh, using the primitive clear-all:
`clear-all`

- Set initial values for the variables that we have associated to each patch. We can set the payoff to 0,[footnote]By default, user-defined variables in NetLogo are initialized with the value 0, so there is no actual need to explicitly set the initial value of individually-owned variables to 0, but it does no harm either.[/footnote] and both C-player? and C-player-last? to false (later we will ask some patches to set these values to true). To set the value of my-nbrs-and-me, NetLogo primitives neighbors and patch-set are really handy.
ask patches [
  set payoff 0
  set C-player? false
  set C-player-last? false
  set my-nbrs-and-me (patch-set neighbors self)
  set n-of-my-nbrs-and-me (count my-nbrs-and-me)
]

- Ask a certain number of randomly selected patches to be C-players. That number depends on the percentage initial-%-of-C-players chosen by the user and on the total number of patches, and it must be an integer, so we can calculate it as:
round (initial-%-of-C-players * count patches / 100)

To randomly select a certain number of agents from an agentset (such as patches), we can use the primitive n-of (which reports another –usually smaller– agentset). Thus, the resulting instruction will be:

ask n-of (round (initial-%-of-C-players * count patches / 100)) patches [
  set C-player? true
  set C-player-last? true
]

- Color patches according to the four possible combinations of values of C-player? and C-player-last?. The color of a patch is controlled by the NetLogo built-in patch variable pcolor. A first (and correct) implementation of this task could look like:
ask patches [
  ifelse C-player?
    [ ifelse C-player-last?
        [set pcolor blue]
        [set pcolor lime] ]
    [ ifelse C-player-last?
        [set pcolor yellow]
        [set pcolor red] ]
]

ask patches [
  set pcolor ifelse-value C-player?
    [ifelse-value C-player-last? [blue] [lime]]
    [ifelse-value C-player-last? [yellow] [red]]
]

- Reset the tick counter using reset-ticks.

- Points 2 and 3 above are about setting up the players, so, to keep our code nice and modular, we could group them into a new procedure called to setup-players. This will make our code more elegant, easier to understand, easier to debug and easier to extend, so let us do it!
- The operation described in point 4 above will be conducted every tick, so we should create a separate procedure to this end that we can call to update-color, to be run by individual patches. Since this procedure is rather secondary (i.e. our model could run without this), we have a slight preference to place it at the end of our code, but feel free to do it as you like, since the order in which NetLogo procedures are written in the Code tab is immaterial.

```
patches-own [
  C-player?
  C-player-last?
  payoff
  my-nbrs-and-me
  n-of-my-nbrs-and-me
]

to setup
  clear-all
  setup-players
  ask patches [update-color]
  reset-ticks
end

to setup-players
  ask patches [
    set payoff 0
    set C-player? false
    set C-player-last? false
    set my-nbrs-and-me (patch-set neighbors self)
    set n-of-my-nbrs-and-me (count my-nbrs-and-me)
  ]
  ask n-of (round (initial-%-of-C-players * count patches / 100)) patches [
    set C-player? true
    set C-player-last? true
  ]
end

to go
end

to update-color
  set pcolor ifelse-value C-player?
    [ifelse-value C-player-last? [blue] [lime]]
    [ifelse-value C-player-last? [yellow] [red]]
end
```

- To play with its neighbours in order to calculate its payoff. For modularity and clarity purposes, we should do this in a new procedure named to play.
- To store the current value of C-player? (i.e. the player's current strategy) in the variable C-player-last?. In this way, the variable C-player-last? will keep the strategy with which the current payoff has been obtained, and we can update the value of C-player? without losing that information, which will be required by neighboring players when they update their strategy.
- To update its strategy (i.e. the value of C-player?). To keep our code nice and modular, we will do this in a separate new procedure called to update-strategy.
- To update its color according to its new C-player? and C-player-last? values.

```
to go
  ask patches [play]
  ask patches [set C-player-last? C-player?]
  ask patches [update-strategy]
  tick
  ask patches [update-color]
end
```

```
let n-of-C-players count my-nbrs-and-me with [C-player?]
```

Note that if the calculating patch is a C-player, the payoff obtained when playing with another C-player is CC-payoff, while if the calculating patch is a D-player, the payoff obtained when playing with a C-player is DC-payoff. Thus, in general, the payoff obtained when playing with a C-player can be obtained using the following code:

```
ifelse-value C-player? [CC-payoff] [DC-payoff]
```

Similarly, the payoff obtained when playing with a D-player is:

```
ifelse-value C-player? [CD-payoff] [DD-payoff]
```

Taking all this into account, we can implement procedure to play as follows:

```
to play
  let n-of-C-players count my-nbrs-and-me with [C-player?]
  set payoff
    n-of-C-players * ifelse-value C-player? [CC-payoff] [DC-payoff] +
    (n-of-my-nbrs-and-me - n-of-C-players) * ifelse-value C-player? [CD-payoff] [DD-payoff]
end
```
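The payoff calculation above is just a weighted sum over the neighborhood: each C-coplayer contributes one payoff value and each D-coplayer the other. A minimal Python sketch of the same formula (the payoff values in the usage note below are illustrative, not taken from the text):

```python
def total_payoff(is_C_player, n_of_coplayers, n_of_C_players,
                 cc_payoff, cd_payoff, dc_payoff, dd_payoff):
    """Payoff from playing once against every coplayer:
    C-coplayers contribute one payoff value, D-coplayers the other."""
    vs_C = cc_payoff if is_C_player else dc_payoff
    vs_D = cd_payoff if is_C_player else dd_payoff
    return n_of_C_players * vs_C + (n_of_coplayers - n_of_C_players) * vs_D
```

For example, with the classic Nowak–May values CC = 1, CD = DD = 0 and DC = 1.7, a cooperator with 3 cooperating coplayers out of 8 obtains a payoff of 3, while a defector in the same position obtains 3 × 1.7 = 5.1.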

```
one-of (my-nbrs-and-me with-max [payoff])
```

Now remember that strategy updating in this model is synchronous, so the imitating patch should copy the strategy with which the observed payoff was obtained, i.e. the value stored in C-player-last?:

```
to update-strategy
  set C-player? [C-player-last?] of one-of my-nbrs-and-me with-max [payoff]
end
```

```
patches-own [
  C-player?
  C-player-last?
  payoff
  my-nbrs-and-me
  n-of-my-nbrs-and-me
]

to setup
  clear-all
  setup-players
  ask patches [update-color]
  reset-ticks
end

to setup-players
  ask patches [
    set payoff 0
    set C-player? false
    set C-player-last? false
    set my-nbrs-and-me (patch-set neighbors self)
    set n-of-my-nbrs-and-me (count my-nbrs-and-me)
  ]
  ask n-of (round (initial-%-of-C-players * count patches / 100)) patches [
    set C-player? true
    set C-player-last? true
  ]
end

to go
  ask patches [play]
  ask patches [set C-player-last? C-player?]
  ask patches [update-strategy]
  tick
  ask patches [update-color]
end

to play
  let n-of-C-players count my-nbrs-and-me with [C-player?]
  set payoff
    n-of-C-players * ifelse-value C-player? [CC-payoff] [DC-payoff] +
    (n-of-my-nbrs-and-me - n-of-C-players) * ifelse-value C-player? [CD-payoff] [DD-payoff]
end

to update-strategy
  set C-player? [C-player-last?] of one-of (my-nbrs-and-me with-max [payoff])
end

to update-color
  set pcolor ifelse-value C-player?
    [ifelse-value C-player-last? [blue] [lime]]
    [ifelse-value C-player-last? [yellow] [red]]
end
```

- Blue if it is occupied by a C-player and in the previous period it was also occupied by a C-player.
- Green if it is occupied by a C-player and in the previous period it was occupied by a D-player.
- Red if it is occupied by a D-player and in the previous period it was occupied by a D-player.
- Yellow if it is occupied by a D-player and in the previous period it was occupied by a C-player.

To complete the Interface tab, edit the graph and create the pens as in the image below:
Figure 3. Plot settings.

```
ask patch 0 0 [set C-player? false]
```

If you now click on the go button, you should see the following beautiful patterns: [video width="406" height="406" mp4="https://wisc.pb.unizin.org/app/uploads/sites/28/2018/09/0-2x2-imitate-best-nbr-video-2.mp4" poster="https://wisc.pb.unizin.org/app/uploads/sites/28/2018/09/0-2x2-imitate-best-nbr-video-2.jpg"][/video]

With a small probability ε, each player errs and chooses evenly between strategies C and D; with probability 1-ε, the player follows the Nowak and May update rule. You may wish to rerun the sample run above with a small value for ε. You may also want to replicate the experiment shown in Mukherji et al. (1996, fig. 1).

During each period, players fail to update their previous strategy with a small probability, θ. You may wish to rerun the sample run above with a small value for θ. You may also want to replicate the experiment shown in Mukherji et al. (1996, fig. 1).

After following the Nowak and May update rule, each cooperator has a small independent probability, ϕ, of cheating by switching to defection. You may wish to rerun the sample run above with a small value for ϕ. You may also want to replicate the experiment shown in Mukherji et al. (1996, fig. 1).

- The program must generate an instance of a G(n-of-nodes, link-prob) Erdős–Rényi random graph model, where the number of nodes (*n-of-nodes*) and the probability of each possible link (*link-prob*) are chosen by the user. The network can be generated by adding new nodes sequentially and creating links from each newly added node to the existing ones, each possible link being created with probability *link-prob*.
- The NetLogo world will show a representation of the network.
- The program will calculate and show the accessibility distribution, the clustering coefficient and the size of the greatest component.
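The sequential construction described above can be sketched in Python (the integer node labels and the RNG argument are choices made for the sketch, not part of the specification):

```python
import random

def erdos_renyi_links(n_of_nodes, link_prob, rng):
    """Add nodes one at a time; each new node links to every
    previously created node independently with probability link_prob."""
    links = set()
    for new_node in range(n_of_nodes):
        for old_node in range(new_node):
            if rng.random() < link_prob:
                links.add((old_node, new_node))
    return links
```

With link_prob = 1 every one of the n(n-1)/2 possible links is created; with link_prob = 0, none is.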

- Two sliders to let the user select the number of nodes (*n-of-nodes*) and the link probability (*link-prob*).
- A command button to create and show the graph. This is the "create-Erdos-Reny-graph" button in the picture, which will call a procedure with the same name (to be created).
- A button to change the graphical representation of the network. This is the "relax-network" button in the picture, which will call a procedure named *to do-layout* (to be created).
- A button to activate the drag and drop option for nodes on the graph. This is the "drag-and-drop" button, which will call a procedure with the same name (to be created).
- A chooser to select the number of steps to consider when calculating node accessibility. This defines a global variable that we can call *accessibility-steps*.
- A command button to update the node accessibility values after the value of *accessibility-steps* is changed. This button will call the procedure *to plot-accessibility* (to be created).
- A plot to show a histogram of the node accessibility values. This is the "Accessibility Distribution" plot in the picture.
- Three monitors. One to show the average accessibility (whose reporter will be the global variable *avg-accessibility*, to be created in the code), one to show the clustering coefficient (whose reporter will be the global variable *clustering-coefficient*, to be created in the code) and one to show the size of the largest component (whose reporter will be named *size-of-greatest-component*, to be created in the code).

```
globals [
  avg-accessibility
  clustering-coefficient
]

breed [nodes node]

nodes-own [
  node-clustering-coefficient
]
```

- To clear any results, graphs and values from previous simulations.
- To give nodes a circle shape.
- To create the required number of nodes.
- To create links with the required probability. Note that, as nodes are created, the set of nodes created before any given node can be obtained as `other nodes with [self < myself]`, and each possible link with those previous nodes must be created with probability *link-prob*.
- To draw the accessibility histogram. For this we will create a procedure named *to plot-accessibility*.
- To calculate the clustering coefficient of the created network. For this we will create a procedure named *to find-clustering-coefficient*.


```
to create-Erdos-Reny-graph
  clear-all
  set-default-shape nodes "circle"
  create-nodes n-of-nodes [
    create-links-with other nodes with [
      (self < myself) and (random-float 1.0 < link-prob)
    ]
    ;; note that each node is asking only nodes with "who" number less than its own.
    ;; Thus, each possible pair is considered only once (as it should be).
  ]
  plot-accessibility
  find-clustering-coefficient
end
```

```
to-report accessibility [num-steps]
  let step 1
  let reachable-nodes link-neighbors
  let old-length 0
  let new-length count reachable-nodes
  while [new-length != old-length and step < num-steps] [
    set old-length new-length
    set reachable-nodes turtle-set remove-duplicates sort (turtle-set reachable-nodes [link-neighbors] of reachable-nodes)
    set new-length (count reachable-nodes)
    set step (step + 1)
  ]
  report new-length - ifelse-value (member? self reachable-nodes) [1] [0]
end
```
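The logic of this reporter — grow the set of reachable nodes one link at a time until it stops growing or the step budget runs out — can be mirrored in Python (the adjacency-dict representation is an assumption of the sketch):

```python
def accessibility(node, neighbors, num_steps):
    """Number of OTHER nodes reachable from `node` in at most
    num_steps links; `neighbors` maps each node to its neighbor set."""
    reachable = set(neighbors[node])
    for _ in range(num_steps - 1):
        grown = reachable.union(*(neighbors[r] for r in reachable))
        if grown == reachable:  # no new nodes were added: stop early
            break
        reachable = grown
    return len(reachable - {node})  # do not count the node itself
```

On the path graph 0–1–2–3, for instance, node 0 reaches 1 node in one step, 2 nodes in two steps, and all 3 other nodes in three or more steps.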

```
to plot-accessibility
  let steps accessibility-steps
  if accessibility-steps = "Infinity" [set steps n-of-nodes]
  let accessibility-list [accessibility steps] of nodes
    ;; list of the accessibility value of each node
  let max-accessibility max accessibility-list  ;; for the x-range of the plot
  set-current-plot "Accessibility Distribution"
  set-plot-x-range 0 (max-accessibility + 1)
    ;; + 1 to make room for the width of the last bar
  histogram accessibility-list
  set avg-accessibility mean accessibility-list
end
```

```
to-report size-of-greatest-component
  report 1 + max [accessibility (count nodes)] of nodes
end
```

- Wilensky, U. (2005). NetLogo Small Worlds model. http://ccl.northwestern.edu/netlogo/models/SmallWorlds. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL.

```
to-report in-neighborhood? [hood]
  report (member? end1 hood and member? end2 hood)
end

to find-clustering-coefficient
  ifelse all? nodes [count link-neighbors <= 1]
  [ ;; it is undefined
    set clustering-coefficient "undefined"
  ]
  [
    let total 0
    ask nodes with [count link-neighbors <= 1] [
      set node-clustering-coefficient "undefined"
    ]
    ask nodes with [count link-neighbors > 1] [
      let hood link-neighbors
      set node-clustering-coefficient (2 * count links with [in-neighborhood? hood] / ((count hood) * (count hood - 1)))
      ;; find the sum for the value at nodes
      set total total + node-clustering-coefficient
    ]
    ;; take the average
    set clustering-coefficient total / count nodes with [count link-neighbors > 1]
  ]
end
```
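The per-node formula used above — existing links among a node's neighbors divided by the k(k-1)/2 possible ones — can be checked with a small Python sketch (again using a plain adjacency dict, an assumption of the sketch):

```python
from itertools import combinations

def node_clustering(node, neighbors):
    """Local clustering coefficient of `node`; None when the node
    has fewer than 2 neighbors, mirroring "undefined" in NetLogo."""
    hood = neighbors[node]
    k = len(hood)
    if k <= 1:
        return None
    # count the links that exist between pairs of neighbors
    links_in_hood = sum(1 for a, b in combinations(hood, 2) if b in neighbors[a])
    return 2 * links_in_hood / (k * (k - 1))
```

In a triangle every pair of neighbors is linked, so the coefficient is 1; in the middle of a path no pair of neighbors is linked, so it is 0.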

```
to do-layout
  ;; the number 3 here is arbitrary; more repetitions slows down the
  ;; model, but too few gives poor layouts
  repeat 3 [
    ;; the more nodes we have to fit into the same amount of space,
    ;; the smaller the inputs to layout-spring we'll need to use
    let factor sqrt count nodes
    ;; numbers here are arbitrarily chosen for pleasing appearance
    layout-spring (nodes with [any? link-neighbors]) links (1 / factor) (7 / factor) (3 / factor)
    display  ;; for smooth animation
  ]
  ;; don't bump the edges of the world
  let x-offset max [xcor] of nodes + min [xcor] of nodes
  let y-offset max [ycor] of nodes + min [ycor] of nodes
  ;; big jumps look funny, so only adjust a little each time
  set x-offset limit-magnitude x-offset 0.1
    ;; limit-magnitude is a reporter procedure with 2 arguments, defined below
  set y-offset limit-magnitude y-offset 0.1
  ask nodes [setxy (xcor - x-offset / 2) (ycor - y-offset / 2)]
end

to-report limit-magnitude [number limit]
  ;; if |number| > limit, number is trimmed
  if number > limit [report limit]
  if number < (- limit) [report (- limit)]
  report number
end
```

```
to drag-and-drop
  if mouse-down? [
    let candidate min-one-of nodes [distancexy mouse-xcor mouse-ycor]
    if [distancexy mouse-xcor mouse-ycor] of candidate < 1 [
      ;; The WATCH primitive puts a "halo" around the watched turtle.
      watch candidate
      while [mouse-down?] [
        ;; If we don't force the view to update, the user won't
        ;; be able to see the turtle moving around.
        display
        ;; The SUBJECT primitive reports the turtle being watched.
        ask subject [setxy mouse-xcor mouse-ycor]
      ]
      ;; Undoes the effects of WATCH. Can be abbreviated RP.
      reset-perspective
    ]
  ]
end
```

- An Erdős–Rényi random graph model, where the probability of each possible link (link-prob) is chosen by the user. These networks typically present a small average shortest path length and a small clustering coefficient.
- A small-world model using the Watts–Strogatz mechanism, where the mean degree (avg-degree-small-world) and the rewiring probability (rewiring-probability) are chosen by the user. Small-world networks typically present a small average shortest path length but a large clustering coefficient.
- A scale-free model using the preferential attachment Barabási–Albert mechanism. This model will create an initial connected network with a number of nodes equal to (pref-attachment-links + 1), and new nodes (up to the total number n-of-nodes) will be sequentially added to the network. Each newly added node will create pref-attachment-links links to the nodes in the network, with the probability of creating a link to any node in the network being proportional to the current degree of the node. Scale-free graphs have a power-law degree distribution.
- A cycle graph or bidirectional ring.
- A star graph.
- A lattice graph or grid with ... (grid-radius)

- As the links corresponding to some random node are removed from the network (failure of a random node)
- As the links of one of the nodes with maximum degree are removed from the network (failure of one of the most connected nodes, or "targeted attack")

- A slider to let the user select the number of nodes (n-of-nodes).
- A command button to create and show the graph. This will call the *to startup* procedure (to be created). In NetLogo, if there is a procedure with this name, it will also be run when the model is first loaded.
- A chooser to select the network-generating algorithm. The options will be called "Erdos-Reny", "small-world", "preferential attachment", "cycle", "star", "grid 4 nbrs" and "grid 8 nbrs".
- A slider to select the link probability (link-prob) needed for the Erdős–Rényi model.
- Two sliders to select the mean degree (avg-degree-small-world) and the probability of rewiring (rewiring-probability) needed for the small-world Watts–Strogatz mechanism.
- A slider to select the number of links (pref-attachment-links) added by any additional node when using the preferential attachment Barabási–Albert mechanism.
- A slider to select ...
- The "relax-network" and "drag-and-drop" buttons that we already have from the previous model, as well as the existing interface elements related to accessibility, and the other existing monitors.
- An additional monitor to show the current number of links (whose reporter will be the global variable num-links, to be created in the code).
- An additional monitor to show what fraction of the total number of possible links have been created.
- A plot...
- Four buttons...

Shakarian, P., Roos, P., & Johnson, A. (2012). A review of evolutionary graph theory with applications to game theory. Biosystems, 107(2), 66–80. https://doi.org/10.1016/j.biosystems.2011.09.006

However, it is important that your code be readable, so others can understand it. In the end, computer time is cheap compared to human time. Therefore, it should be noted that, whenever there is a possibility of trade-off, clarity of code should be preferred over efficiency. Wilensky and Rand (2015, pp. 219–20)

Our personal opinion is that this decision is best made case by case, taking into account the objectives and constraints of the whole modelling exercise in the specific context at hand. Our hope is that, after reading this book, you will be prepared to make these decisions by yourself in any specific situation you may encounter.

The number of players in the simulation can be changed at runtime with immediate effect on the dynamics of the model, using parameter n-of-players:

- If n-of-players is reduced, the necessary number of (randomly selected) players are killed.
- If n-of-players is increased, the necessary number of (randomly selected) players are cloned.

```
if (random-float 1 < prob-revision) [update-strategy]
```

And parameter noise is used only in procedure to update-strategy, in the following line:

```
ifelse (random-float 1 < noise)
```

Whenever NetLogo reads the two lines of code above, it uses the current values of the two parameters. Because of this, we can modify the parameters' values on the fly and immediately see how that change affects the dynamics of the model. By contrast, changing the value of parameter n-of-players-for-each-strategy at runtime will have no effect whatsoever. This is so because parameter n-of-players-for-each-strategy is only used in procedure to setup-players, which is executed at the beginning of the simulation –triggered by procedure to setup– and never again. To enable the user to modify the population size at runtime, we should create a slider for the new parameter n-of-players. Before doing so, we have to remove the declaration of the global variable n-of-players in the Code tab, since the creation of the slider implies the definition of the variable as global.

```
globals [
  payoff-matrix
  n-of-strategies
;;  n-of-players   <== We remove this line
]
```

```
to update-n-of-players
  let diff (n-of-players - count players)
  if diff != 0 [
    ifelse diff > 0
      [ repeat diff [ ask one-of players [hatch-players 1] ] ]
      [ ask n-of (- diff) players [die] ]
  ]
end
```
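The same clone-or-kill logic can be sketched in Python over a plain list of strategies, one entry per player (the list representation and the RNG argument are assumptions of the sketch):

```python
import random

def update_n_of_players(strategies, target, rng):
    """Clone random players if the population must grow;
    remove random players if it must shrink."""
    diff = target - len(strategies)
    if diff > 0:
        # clone: each new player copies the strategy of a random existing one
        strategies += [rng.choice(strategies) for _ in range(diff)]
    elif diff < 0:
        # kill: drop -diff randomly chosen players
        doomed = set(rng.sample(range(len(strategies)), -diff))
        strategies = [s for i, s in enumerate(strategies) if i not in doomed]
    return strategies
```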

```
to go
  update-n-of-players  ;; <== New line
  ask players [play]
  ask players [
    if (random-float 1 < prob-revision) [update-strategy]
  ]
  tick
  update-graph
end
```

- Computations that we conduct but we do not use at all.
- Computations that we conduct several times despite knowing that their outputs will not change.

```
to go
  update-n-of-players
  ;; ask players [play]   <== We remove this line
  ask players [
    if (random-float 1 < prob-revision) [update-strategy]
  ]
  tick
end

to update-strategy
  let observed-player one-of other players
  play                        ;; <== New line
  ask observed-player [play]  ;; <== New line
  if ([payoff] of observed-player) > payoff [
    set strategy ([strategy] of observed-player)
  ]
end
```

We conduct the operation `other players` several times in every tick, but we could conduct it just once for each agent in each simulation. To be sure, we conduct that operation every time an agent computes her payoff in to play:

```
let mate one-of other players
```

And also every time an agent revises her strategy in to update-strategy:

```
let observed-agent one-of other players
```

This computation may not sound very expensive, but if the number of agents is large, it may well be (see exercise 3 below). To make the model run faster, we could create an individually-owned variable named e.g. other-players, as follows:

```
players-own [
  strategy
  payoff
  other-players
]
```

ask players [ set other-players other players ]

```
to update-n-of-players
  let diff (n-of-players - count players)
  if diff != 0 [
    ifelse diff > 0
      [ repeat diff [ ask one-of players [hatch-players 1] ] ]
      [ ask n-of (- diff) players [die] ]
    ask players [set other-players other players]
  ]
end
```

Finally, everywhere we previously wrote `other players` we should now write other-players instead. These changes will make simulations with many players run faster.

To measure the time it takes to run the operation `other players`, we could write the following reporter:

```
to-report time-other-players
  reset-timer
  ask players [let temporary-var other players]
  report timer
end
```

extensions [profiler]

```
to show-profiler-report
  setup                  ;; set up the model
  profiler:start         ;; start profiling
  repeat 1000 [ go ]     ;; run something you want to measure
  profiler:stop          ;; stop profiling
  print profiler:report  ;; print the results
  profiler:reset         ;; clear the data
end
```

`show-profiler-report`

follows:
```
BEGIN PROFILING DUMP
Sorted by Exclusive Time
Name                  Calls   Incl T(ms)  Excl T(ms)  Excl/calls
PLAY                 119130     2804.330    2804.330       0.024
UPDATE-STRATEGY       60131     4441.429    1637.099       0.027
UPDATE-GRAPH           1000      231.718     231.718       0.232
GO                     1000     4823.320     147.693       0.148
UPDATE-N-OF-PLAYERS    1000        2.480       2.480       0.002

Sorted by Inclusive Time
GO                     1000     4823.320     147.693       0.148
UPDATE-STRATEGY       60131     4441.429    1637.099       0.027
PLAY                 119130     2804.330    2804.330       0.024
UPDATE-GRAPH           1000      231.718     231.718       0.232
UPDATE-N-OF-PLAYERS    1000        2.480       2.480       0.002

Sorted by Number of Calls
PLAY                 119130     2804.330    2804.330       0.024
UPDATE-STRATEGY       60131     4441.429    1637.099       0.027
GO                     1000     4823.320     147.693       0.148
UPDATE-GRAPH           1000      231.718     231.718       0.232
UPDATE-N-OF-PLAYERS    1000        2.480       2.480       0.002
END PROFILING DUMP
```

- Simulations spend most of the time executing procedure to play (2804.330 ms) and procedure to update-strategy (1637.099 ms).
- The procedure that is called the greatest number of times is to play, which is called 119130 times. This makes sense, since there were 600 agents in this simulation, prob-revision was 0.1, a revision requires a play by the agent and by the opponent he observes, and we ran the model 1000 ticks (600 × 0.1 × 2 × 1000 = 120000).
- Our implementation to allow the user to modify the number of agents at runtime hardly takes any computing time (just 2.480 ms).
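The back-of-the-envelope count of calls to to play in the second bullet can be written out explicitly. Note that it is an expected value; the realized count in the profiler report (119130) is random and happens to fall slightly below it:

```python
n_of_players = 600
prob_revision = 0.1
n_of_ticks = 1000

# each revision triggers two calls to `play`: one by the revising agent
# and one by the opponent she observes
expected_play_calls = n_of_players * prob_revision * 2 * n_of_ticks
```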

```
globals [
  payoff-matrix
  n-of-strategies
]

breed [players player]

players-own [
  strategy
  payoff
  other-players
]

to setup
  clear-all
  setup-payoffs
  setup-players
  setup-graph
  reset-ticks
  update-graph
end

to setup-payoffs
  set payoff-matrix read-from-string payoffs
  set n-of-strategies length payoff-matrix
end

to setup-players
  let initial-distribution read-from-string n-of-players-for-each-strategy
  if length initial-distribution != length payoff-matrix [
    user-message (word "The number of items in\n"
      "n-of-players-for-each-strategy (i.e. " length initial-distribution "):\n"
      n-of-players-for-each-strategy
      "\nshould be equal to the number of rows\n"
      "in the payoff matrix (i.e. " length payoff-matrix "):\n"
      payoffs
    )
  ]
  let i 0
  foreach initial-distribution [ j ->
    create-players j [
      set payoff 0
      set strategy i
    ]
    set i (i + 1)
  ]
  set n-of-players count players
  ask players [set other-players other players]
end

to setup-graph
  set-current-plot "Strategy Distribution"
  foreach (range n-of-strategies) [ i ->
    create-temporary-plot-pen (word i)
    set-plot-pen-mode 1
    set-plot-pen-color 25 + 40 * i
  ]
end

to go
  update-n-of-players
  ask players [
    if (random-float 1 < prob-revision) [update-strategy]
  ]
  tick
  update-graph
end

to play
  let mate one-of other-players
  set payoff item ([strategy] of mate) (item strategy payoff-matrix)
end

to update-strategy
  ifelse (random-float 1 < noise)
    [ set strategy (random n-of-strategies) ]
    [
      let observed-player one-of other-players
      play
      ask observed-player [play]
      if ([payoff] of observed-player) > payoff [
        set strategy ([strategy] of observed-player)
      ]
    ]
end

to update-graph
  let strategy-numbers (range n-of-strategies)
  let strategy-frequencies map [n -> count players with [strategy = n] / n-of-players] strategy-numbers
  set-current-plot "Strategy Distribution"
  let bar 1
  foreach strategy-numbers [ n ->
    set-current-plot-pen (word n)
    plotxy ticks bar
    set bar (bar - (item n strategy-frequencies))
  ]
  set-plot-y-range 0 1
end

to update-n-of-players
  let diff (n-of-players - count players)
  if diff != 0 [
    ifelse diff > 0
      [ repeat diff [ ask one-of players [hatch-players 1] ] ]
      [ ask n-of (- diff) players [die] ]
    ask players [set other-players other players]
  ]
end
```

Exercise 2. In this section we have reduced the number of times procedure to play is called (as long as prob-revision is less than 0.5). To illustrate this, compare the number of times this procedure is called in a 1000-tick simulation with 600 agents and prob-revision 0.1, before and after our efficiency improvement. Can you compute the number of times procedure to play is called in the general case?

Exercise 3. In this section we have avoided the repeated computation of `other players` by creating an individually-owned variable (named other-players). To compare these two approaches, write a short NetLogo program where 10000 agents conduct this operation.
```
ask patch 0 0 [set C-player? false]
ask patches [update-color]
```

```
to update-strategy
  set C-player? ifelse-value (random-float 1 < epsilon)
    [ one-of [true false] ]
    [ [C-player-last?] of one-of my-nbrs-and-me with-max [payoff] ]
end
```

ask patches [update-strategy]

ask patches [if random-float 1.0 < (1 - theta) [update-strategy]]

if random-float 1.0 < phi [set C-player? false]

- Three buttons:
Write in the Code tab the procedures to setup and to go, without including any code inside for now.

```
ask turtles-on neighbors [set label "Hi!"]
```

You will have to type these instructions on the bottom line of the window that pops up when you inspect a turtle (see figure 5). To inspect a turtle, right-click on it, select the name of the turtle (e.g. turtle 21), and click on "inspect". Alternatively, you can just type the following instruction in the command center:

`inspect turtle 21`

- With 1000 agents, simulations will tend to approach the state where half the population play "Hawk" and the other half "Dove", and stay around there for very long. Eventually, the simulation will end up in an absorbing state (most likely, the H-state), but we would not recommend waiting for that.
- With 100 agents, we will observe dynamics qualitatively similar to those with 1000 agents, but the variability around the rest point of the mean dynamics increases considerably. Sandholm (2003) shows that –under rather general conditions– standard deviations of the limit distribution near the rest point in large populations are of order $\frac{1}{\sqrt{N}}$, where $N$ is the population size.
- With 10 agents, it is very likely that we will observe the asymptotic dynamics of the model pretty soon, i.e., most simulations will end up in the state where all agents use the "Hawk" strategy.
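The 1/√N scaling mentioned above is easy to see in a simple closed-form benchmark: if each of N agents independently played "Hawk" with probability 1/2, the standard deviation of the population share would be √(p(1-p)/N). The quick Python check below illustrates the scaling only; it is not Sandholm's derivation:

```python
import math

def share_std(n, p=0.5):
    """Standard deviation of the FRACTION of 'Hawk' players when each
    of n agents plays Hawk independently with probability p."""
    return math.sqrt(p * (1 - p) / n)
```

Multiplying the population by 100 divides this benchmark deviation by 10, which is why the fluctuations around the rest point look so much wider with 100 agents than with 1000.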

```
to update-strategy
  ;; imitative pairwise-difference protocol
  let observed-player one-of other players
  let payoff-diff ([payoff] of observed-player - payoff)
  if random-float 1 < (payoff-diff / max-payoff-difference) [
    set strategy [strategy] of observed-player
  ]
  ;; If your strategy is the better one, payoff-diff is negative,
  ;; so you are going to stick with it.
  ;; If it's not, you switch with probability
  ;; (payoff-diff / max-payoff-difference)
end
```
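The decision rule in to update-strategy can be mirrored in Python; note how a negative payoff difference makes the switching probability negative, so the comparison with a uniform draw in [0, 1) can never succeed (the function and argument names are choices made for the sketch):

```python
import random

def revised_strategy(my_strategy, my_payoff,
                     obs_strategy, obs_payoff,
                     max_payoff_difference, rng):
    """Imitative pairwise-difference rule: switch to the observed
    strategy with probability proportional to the payoff gap."""
    payoff_diff = obs_payoff - my_payoff
    # rng.random() is uniform in [0, 1): a negative ratio never wins
    if rng.random() < payoff_diff / max_payoff_difference:
        return obs_strategy
    return my_strategy
```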

```
globals [
  payoff-matrix
  max-payoff-difference  ;; <== New line
]
```

```
to setup
  clear-all
  set payoff-matrix read-from-string payoffs
  create-players n-of-players [
    set payoff 0
    set strategy 0
  ]
  ask n-of random (n-of-players + 1) players [set strategy 1]
  reset-ticks
  ;; New line below
  set max-payoff-difference (max-of-matrix payoff-matrix) - (min-of-matrix payoff-matrix)
end
```

;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ;;; SUPPORTING PROCEDURES ;;; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

;;;;;;;;;;;;;;;; ;;; Matrices ;;; ;;;;;;;;;;;;;;;;

```
to-report max-of-matrix [m]
  report max reduce sentence m
end
```

```
to-report min-of-matrix [m]
  report min reduce sentence m
end
```

```
to update-strategy
  ;; best-experienced-payoff protocol with 2 strategies

  ;; play strategy 0 against a random agent
  set strategy 0
  play
  let payoff-strategy-0 payoff

  ;; play strategy 1 against a random agent
  set strategy 1
  play
  let payoff-strategy-1 payoff

  ;; select the strategy that gave you the greater payoff,
  ;; breaking ties at random
  if (payoff-strategy-0 > payoff-strategy-1) [set strategy 0]
  if (payoff-strategy-0 < payoff-strategy-1) [set strategy 1]
  if (payoff-strategy-0 = payoff-strategy-1) [set strategy one-of [0 1]]
end
```

```
n-values n-of-strategies [n -> n]
```

where

`n`

is just the name of the input variable of the anonymous procedure, so we could use pretty much any other symbol (as long as it does not conflict with the rest of the code).
Solution to Exercise 1.1.4.
- One possible implementation is the following:

```
to setup-graph
  set-current-plot "Strategy Distribution"
  let i 0                          ;; <== New line
  repeat n-of-strategies [         ;; <== Modified line
    create-temporary-plot-pen (word i)
    set-plot-pen-mode 1
    set-plot-pen-color 25 + 40 * i
    set i (i + 1)                  ;; <== New line
  ]
end
```

- One possible implementation is the following:

```
to setup-graph
  set-current-plot "Strategy Distribution"
  let i 0                          ;; <== New line
  while [i < n-of-strategies] [    ;; <== Modified line
    create-temporary-plot-pen (word i)
    set-plot-pen-mode 1
    set-plot-pen-color 25 + 40 * i
    set i (i + 1)                  ;; <== New line
  ]
end
```

- One possible implementation is the following:

```
to setup-graph
  set-current-plot "Strategy Distribution"
  let i 0                              ;; <== New line
  loop [                               ;; <== Modified line
    create-temporary-plot-pen (word i)
    set-plot-pen-mode 1
    set-plot-pen-color 25 + 40 * i
    set i (i + 1)                      ;; <== New line
    if i = n-of-strategies [stop]      ;; <== New line
  ]
end
```

```
to update-strategy
  let observed-players n-of 5 other players
  let observed-player-with-highest-payoff max-one-of observed-players [payoff]
  set strategy [strategy] of observed-player-with-highest-payoff
end
```

```
to update-strategy
  let strategy-of-observed-player [strategy] of one-of other players
  let payoffs-for-each-strategy n-values n-of-strategies
    [s -> item strategy-of-observed-player (item s payoff-matrix)]
  set strategy position (max payoffs-for-each-strategy) payoffs-for-each-strategy
  ;; note that, in case of tie, we will
  ;; select the strategy with the lowest index,
  ;; since position reports the FIRST position
  ;; in the list
end
```
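The same argmax-with-lowest-index logic looks like this in Python, where list.index(max(...)) plays the role of NetLogo's position, which also reports the first matching position (the payoff matrix values in the test are illustrative):

```python
def best_reply(observed_strategy, payoff_matrix):
    """Strategy earning the highest payoff against the observed one;
    ties resolve to the lowest index, like `position` in NetLogo."""
    payoffs = [row[observed_strategy] for row in payoff_matrix]
    return payoffs.index(max(payoffs))
```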

- The vertices of the simplex are not absorbing anymore.
- The state where all strategies are equally represented is a globally asymptotically stable state.

n = 10; s = 2; DiscretePlot[ PDF[MultinomialDistribution[n, Table[1/s, s]], {x, n - x}] , {x, 0, n} , PlotRange -> All]

n = 10; s = 3; DiscretePlot3D[ PDF[MultinomialDistribution[n, Table[1/s, s]], {x, y, n-x-y}] , {x, 0, n}, {y, 0, n} , RegionFunction -> Function[{x, y}, x + y <= n] ]

```
to setup
  clear-all
  setup-payoffs
  setup-players
  ;; setup-graph   <== We comment this line out
  reset-ticks
  ;; update-graph  <== We comment this line out
end
```

```
to go
  ask players [play]
  ask players [
    if (random-float 1 < prob-revision) [update-strategy]
  ]
  tick
  ;; update-graph  <== We comment this line out
end
```

```
to show-profiler-report
  profiler:start         ;; start profiling
  setup                  ;; set up the model
  repeat 1000 [ go ]     ;; run something you want to measure
  profiler:stop          ;; stop profiling
  print profiler:report  ;; print the results
  profiler:reset         ;; clear the data
end
```

```
BEGIN PROFILING DUMP
Sorted by Exclusive Time
Name              Calls   Incl T(ms)  Excl T(ms)  Excl/calls
PLAY             600000    22216.983   22216.983       0.037
UPDATE-STRATEGY   59766     1490.643    1490.643       0.025
GO                 1000    24383.186     420.217       0.420
UPDATE-GRAPH       1001      264.426     264.426       0.264
SETUP-PLAYERS         1       35.202      35.202      35.202
SETUP                 1       63.559      10.942      10.942
SETUP-GRAPH           1        6.374       6.374       6.374
SETUP-PAYOFFS         1        1.958       1.958       1.958

Sorted by Inclusive Time
GO                 1000    24383.186     420.217       0.420
PLAY             600000    22216.983   22216.983       0.037
UPDATE-STRATEGY   59766     1490.643    1490.643       0.025
UPDATE-GRAPH       1001      264.426     264.426       0.264
SETUP                 1       63.559      10.942      10.942
SETUP-PLAYERS         1       35.202      35.202      35.202
SETUP-GRAPH           1        6.374       6.374       6.374
SETUP-PAYOFFS         1        1.958       1.958       1.958

Sorted by Number of Calls
PLAY             600000    22216.983   22216.983       0.037
UPDATE-STRATEGY   59766     1490.643    1490.643       0.025
UPDATE-GRAPH       1001      264.426     264.426       0.264
GO                 1000    24383.186     420.217       0.420
SETUP                 1       63.559      10.942      10.942
SETUP-PLAYERS         1       35.202      35.202      35.202
SETUP-GRAPH           1        6.374       6.374       6.374
SETUP-PAYOFFS         1        1.958       1.958       1.958
END PROFILING DUMP
```

BEGIN PROFILING DUMP
Sorted by Exclusive Time
Name                 Calls    Incl T(ms)  Excl T(ms)  Excl/calls
PLAY                 118244   3059.806    3059.806    0.026
UPDATE-STRATEGY      59686    4854.632    1794.826    0.030
UPDATE-GRAPH         1001     308.082     308.082     0.308
GO                   1000     5335.569    177.336     0.177
SETUP-PLAYERS        1        64.468      64.468      64.468
SETUP                1        102.047     21.755      21.755
SETUP-GRAPH          1        5.659       5.659       5.659
UPDATE-N-OF-PLAYERS  1000     3.784       3.784       0.004
SETUP-PAYOFFS        1        1.901       1.901       1.901
Sorted by Inclusive Time
GO                   1000     5335.569    177.336     0.177
UPDATE-STRATEGY      59686    4854.632    1794.826    0.030
PLAY                 118244   3059.806    3059.806    0.026
UPDATE-GRAPH         1001     308.082     308.082     0.308
SETUP                1        102.047     21.755      21.755
SETUP-PLAYERS        1        64.468      64.468      64.468
SETUP-GRAPH          1        5.659       5.659       5.659
UPDATE-N-OF-PLAYERS  1000     3.784       3.784       0.004
SETUP-PAYOFFS        1        1.901       1.901       1.901
Sorted by Number of Calls
PLAY                 118244   3059.806    3059.806    0.026
UPDATE-STRATEGY      59686    4854.632    1794.826    0.030
UPDATE-GRAPH         1001     308.082     308.082     0.308
GO                   1000     5335.569    177.336     0.177
UPDATE-N-OF-PLAYERS  1000     3.784       3.784       0.004
SETUP                1        102.047     21.755      21.755
SETUP-PLAYERS        1        64.468      64.468      64.468
SETUP-GRAPH          1        5.659       5.659       5.659
SETUP-PAYOFFS        1        1.901       1.901       1.901
END PROFILING DUMP

- The inefficient version needs 24 383.186 ms to run the 1000 ticks, while the efficient version only takes 5 335.569 ms.
- The inefficient version needs slightly less time to set up than the efficient version (63.559 ms as opposed to 102.047 ms), due to the different ways players are set up.
- The inefficient version makes 600 000 calls to procedure to play, many of which are unnecessary. The efficient version only makes 118 244 calls to this procedure.
- Our implementation to allow the user to modify the number of agents at runtime hardly takes any computing time (just 3.784 ms).

breed [players player]
players-own [ other-players ]

to setup
  clear-all
  create-players 10000 [ set other-players other players ]
end

to-report time-other-players
  setup
  reset-timer
  ask players [let temporary-var other players]
  report timer
end

to-report time-other-players-efficient
  setup
  reset-timer
  ask players [let temporary-var other-players]
  report timer
end
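The contrast between the two reporters above, recomputing `other players` on every call versus reading the cached `other-players` variable, can be mimicked in plain Python (a sketch of ours; the names and sizes are illustrative, not taken from the NetLogo model):

```python
import timeit

population = list(range(10000))

def recompute_others(me):
    # analogous to evaluating "other players" every time:
    # rebuild the whole collection on each call
    return [p for p in population if p is not me]

# analogous to storing "other players" once in a player-owned variable
cache = {me: [p for p in population if p is not me]
         for me in population[:10]}

t_recompute = timeit.timeit(lambda: recompute_others(population[0]), number=100)
t_cached = timeit.timeit(lambda: cache[population[0]], number=100)
print(t_cached < t_recompute)  # reading the cache is far cheaper
```

The absolute times depend on the machine, but the ordering is the same one the NetLogo timers reveal: paying the construction cost once at setup beats paying it on every tick.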

players-own [
  strategy
  payoff
  other-players
  played?   ;; <== new line
]

to setup-players
  ;; ... some lines of code ... ;;
  foreach initial-distribution [ j ->
    create-players j [
      set payoff 0
      set strategy i
      set played? false   ;; <== new line
    ]
    set i (i + 1)
  ]
  set n-of-players count players
  ask players [set other-players other players]
end

to go
  update-n-of-players
  ask players [set played? false]   ;; <== new line
  ask players [
    if (random-float 1 < prob-revision) [update-strategy]
  ]
  tick
  update-graph
end

to play
  let mate one-of other-players
  set payoff item ([strategy] of mate) (item strategy payoff-matrix)
  set played? true   ;; <== new line
end

to update-strategy
  ifelse (random-float 1 < noise)
    [ set strategy (random n-of-strategies) ]
    [
      let observed-player one-of other-players
      if not played? [ play ]                        ;; <== modified line
      ask observed-player [if not played? [ play ]]  ;; <== modified line
      if ([payoff] of observed-player) > payoff [
        set strategy ([strategy] of observed-player)
      ]
    ]
end
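The played? flag is a form of lazy evaluation: a payoff is computed only when some player actually needs it, and at most once per tick. A minimal Python sketch of the same idea (the class and names are ours, for illustration only):

```python
import random

class Player:
    def __init__(self):
        self.payoff = None
        self.played = False    # analogous to the played? flag

    def play(self):
        # stand-in for the real payoff computation
        self.payoff = random.random()
        self.played = True

    def payoff_lazily(self):
        if not self.played:    # compute only on first request...
            self.play()
        return self.payoff     # ...then reuse the stored value

p = Player()
print(p.played)                # False: nothing computed yet
v = p.payoff_lazily()
print(p.played, v == p.payoff) # computed once; later calls reuse it
```

Resetting `played` to `False` at the start of each tick (as the new line in to go does) makes every payoff fresh again without recomputing payoffs nobody observes.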

- If you are interested in learning to analyze finite-population evolutionary models, and in their relation with other models in Evolutionary Game Theory, but **you do not wish to program**, then you should skip all the sections preceded by the label "CODE".
- If you want to become proficient in coding agent-based models, but **you are not interested in learning how to analyze these models**, please do reconsider your preference. If your preference persists after careful reflection, you may want to skip the sections titled "Analysis of these models", which you will find at the end of most chapters.

| Very basics | More advanced | Final polishing |
|---|---|---|
| The three tabs | Ask | Consistency within procedures |
| Types of agents | Lists | Breeds |
| Instructions | Agentsets | Ticks and Plotting |
| Variables | Synchronization | Skeleton of many NetLogo models |
| | | The code for Schelling-Sakoda model |

- **Turtles**: Turtles are agents that can move.
- **Patches**: The NetLogo world is two-dimensional and is divided up into a grid of patches. Each patch is a square piece of "ground" over which turtles can move.
- **Links**: Links are agents that connect two turtles. Links can be directed (from one turtle to another turtle) or undirected (one turtle with another turtle).
- **The observer**: There is only one observer and it does not have a location. You can think of the observer as the conductor of the whole NetLogo orchestra.

- Whether the instruction is **implemented by the user** (*procedures*), **or** whether it is **built into NetLogo** (*primitives*). Once you define a procedure, you can use it elsewhere in your program. The NetLogo Dictionary has a complete list of built-in instructions (i.e. primitives). The following code is an example of the implementation of procedure to setup:

      to setup            ;; comments are written after semicolon(s)
        clear-all         ;; clear everything
        create-turtles 10 ;; make 10 new turtles
      end                 ; (one semicolon is enough, but I like two)

- Whether the instruction produces an **output** (*reporters*) **or not** (*commands*).
  - A reporter computes a result and **reports** it. Most reporters are nouns or noun phrases (e.g. "average-wealth", "most-popular-girl"). These names are preceded by the keyword to-report. The keyword end marks the end of the instructions in the procedure.

        to-report average-wealth          ;; this reporter returns the
          report mean [wealth] of turtles ;; average wealth in the
        end                               ;; population of turtles

  - A command is an action for an agent to carry out. Most commands begin with verbs (e.g. "create", "die", "jump", "inspect", "clear"). These verbs are preceded by the keyword to (instead of to-report). The keyword end marks the end of the procedure.

        to go
          ask turtles [
            forward 1        ;; all turtles move forward one step
            right random 360 ;; and turn a random amount
          ]
        end

- Whether the instruction takes an **input** (or several inputs) **or not**. Inputs are values that the instruction uses in carrying out its actions.

      to-report absolute-value [number] ;; number is the input
        ifelse number >= 0      ;; if number is already non-negative
          [ report number ]     ;; return number (a non-negative value).
          [ report (- number) ] ;; Otherwise, return the opposite, which
      end                       ;; is then necessarily positive.

**Global variables**: If a variable is a global variable, there is only one value for the variable, and every agent can access it. You can declare a new global variable either *in the Interface tab* –by adding a switch, a slider, a chooser or an input box– *or in the Code tab* –by using the globals keyword at the beginning of your code, like this:

    globals [ n-of-strategies ]

**Turtle, patch, and link variables**: Each turtle has its own value for every turtle variable, each patch has its own value for every patch variable, and each link has its own value for every link variable. Turtle, patch, and link variables can be *built-in* or *defined by the user*.

- **Built-in** variables: For example, all turtles and all links have a color variable, and all patches have a pcolor variable. If you set this variable, the corresponding turtle, link or patch changes color. Other built-in turtle variables are xcor, ycor, and heading. Other built-in patch variables include pxcor and pycor. Other built-in link variables are end1, end2, and thickness. You can find the complete list in the NetLogo Dictionary.
- **User-defined** turtle, patch and link variables: You can also define new turtle, patch or link variables using the turtles-own, patches-own, and links-own keywords respectively, like this:

      turtles-own [ energy ]    ;; each turtle has its own energy
      patches-own [ roughness ] ;; each patch has its own roughness
      links-own [ weight ]      ;; each link has its own weight

**Local variables**: A local variable is defined and used only in the context of a particular procedure or part of a procedure. To create a local variable, use the let command. You can use this command anywhere. If you use it at the top of a procedure, the variable will exist throughout the procedure. If you use it inside a set of square brackets, for example inside an ask, then it will exist only inside those brackets.

    to swap-colors [turtle1 turtle2] ;; turtle1 and turtle2 are inputs
      let temp ([color] of turtle1)  ;; store the color of turtle1 in temp
      ask turtle1 [ set color ([color] of turtle2) ]
        ;; set turtle1's color to turtle2's color
      ask turtle2 [ set color temp ]
        ;; now set turtle2's color to turtle1's (original) color,
    end ;; which was conveniently stored in local variable "temp"
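The temporary-variable swap used in swap-colors is a classic pattern in any language. For comparison (a Python sketch of ours, with made-up values), the same exchange can be written with an explicit temporary, or with tuple unpacking:

```python
a, b = "red", "blue"

# explicit temporary, exactly as in the NetLogo procedure
temp = a        # store a's value before overwriting it
a = b
b = temp
print(a, b)     # blue red

# Python's tuple unpacking does the same swap in one line
a, b = b, a
print(a, b)     # red blue
```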

ask turtle 5 [ show color ]     ;; turtle 5 shows its color
ask turtle 5 [ set color blue ] ;; turtle 5 becomes blue

show [color] of turtle 5 ;; observer shows turtle 5's color

Finally, a turtle can read and set the variables of the patch it is standing on directly, e.g.

ask turtles [ set pcolor red ]

to setup
  clear-all            ;; clear everything
  create-turtles 100   ;; create 100 new turtles with random heading
  ask turtles [        ;; ask them
    set color red      ;; to turn red and
    forward 50         ;; to move 50 steps forward
  ]
  ask patches [        ;; ask patches
    if (pxcor > 0) [   ;; with pxcor greater than 0
      set pcolor green ;; to turn green
    ]
  ]
end

to setup
  clear-all                   ;; clear the world
  create-turtles 3            ;; make 3 turtles
  ask turtle 0 [ fd 10 ]      ;; tell the first one to go forward 10 steps
  ask turtle 1 [              ;; ask the second turtle (with who number 1)
    set color green           ;; ... to become green
  ]
  ask patch 2 -2 [            ;; ask the patch at (2,-2)...
    set pcolor blue           ;; ... to become blue
  ]
  ask turtle 0 [              ;; ask the first turtle (with who number 0)
    create-link-to turtle 2   ;; to link to turtle with who number 2
  ]
  ask link 0 2 [              ;; ask the link between turtles 0 and 2...
    set color blue            ;; ... to become blue
  ]
  ask turtle 0 [              ;; ask the turtle with who number 0
    ask patch-at 1 0 [        ;; ... to ask the patch to her east
      set pcolor red          ;; ... to become red
    ]
  ]
end

set my-list [2 4 6 8]

set my-random-list list (random 10) (random 20)

show (list random 10)
show (list (random 10) (turtle 3) "a" 30) ;; inner () are not necessary

set fitness-list ([fitness] of turtles)
  ;; list containing the fitness of each turtle (in random order)
show [pxcor * pycor] of patches

set my-list [2 7 5 "Bob" [3 0 -2]]    ;; my-list is now [2 7 5 "Bob" [3 0 -2]]
set my-list replace-item 2 my-list 10 ;; my-list is now [2 7 10 "Bob" [3 0 -2]]

[ [input-1 input-2 ...] -> code of the procedure ]
;; this syntax was different in versions before NetLogo 6.0

[ [x] -> abs x ] ;; you can use any symbol instead of x
[ x -> abs x ]   ;; if there is just one input,
                 ;; you do not need the square brackets

foreach [1.2 4.6 6.1] [ n ->
  show (word n " rounded is " round n) ]
;; output: "1.2 rounded is 1" "4.6 rounded is 5" "6.1 rounded is 6"

map [ element -> round element ] [1.2 2.2 2.7] ;; returns [1 2 3]

map round [1.2 2.2 2.7] ;; (see Anonymous procedures in Programming Guide)

(map [[el1 el2] -> el1 + el2] [1 2 3] [10 20 30]) ;; returns [11 22 33]
(map + [1 2 3] [10 20 30]) ;; a shorter way of writing the same
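For readers coming from other languages, NetLogo's map with anonymous procedures corresponds closely to map with lambdas elsewhere. A side-by-side Python sketch of the two NetLogo examples above (note that Python's round uses banker's rounding, which happens to agree with NetLogo on these particular values):

```python
# NetLogo: map [ element -> round element ] [1.2 2.2 2.7]
print(list(map(round, [1.2, 2.2, 2.7])))
# -> [1, 2, 3]

# NetLogo: (map [[el1 el2] -> el1 + el2] [1 2 3] [10 20 30])
print(list(map(lambda a, b: a + b, [1, 2, 3], [10, 20, 30])))
# -> [11, 22, 33]
```

In both languages, passing the named operation directly (round, or + in NetLogo) is a shorter way of writing the same thing as the anonymous procedure.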

turtles with [color = red]         ;; all red turtles
patches with [pxcor > 0]           ;; patches with positive pxcor
[my-out-links] of turtle 0         ;; all links outgoing from turtle 0
turtles in-radius 3                ;; all turtles three or fewer patches away
other turtles-here with-min [size] ;; other turtles with min size on my patch
(patch-set self neighbors4)        ;; von Neumann neighborhood of a patch

- Use ask to make the agents in the agentset do something.
- Use any? to see if the agentset is empty.
- Use all? to see if every agent in an agentset satisfies a condition.
- Use count to find out exactly how many agents are in the set.

ask one-of turtles [ set color green ] ;; one-of reports a random agent from an agentset

ask (max-one-of turtles [wealth]) [ donate ] ;; max-one-of agentset [reporter] reports an agent in the ;; agentset that has the highest value for the given reporter

show mean ([wealth] of turtles with [gender = male]) ;; Use of to make a list of values, one for each agent in the agentset.

show (turtle-set turtle 0 turtle 2 turtle 9 turtles-here) ;; Use turtle-set, patch-set and link-set reporters to make new ;; agentsets by gathering together agents from a variety of sources

show (turtles with [gender = male]) = (turtles with [wealth > 10]) ;; Check whether two agentsets are equal using = or !=

show member? (turtle 0) turtles with-min [wealth] ;; Use member? to see if an agent is a member of an agentset.

if all? turtles [color = red] ;; use all? to see if every agent in the [ show "every turtle is red!" ] ;; agentset satisfies a certain condition

ask turtles [ create-links-to other turtles-here ;; on same patch as me, not me, with [color = [color] of myself] ;; and with same color as me. ]

show [([color] of end1) - ([color] of end2)] of links ;; check everything’s OK

ask turtles [
  forward random 10 ;; move forward a random number of steps (0–9)
  wait 0.5          ;; wait half a second
  set color blue    ;; set your color to blue
]

ask turtles [ forward random 10 ]
ask turtles [ wait 0.5 ]        ;; note that you will have to wait
ask turtles [ set color blue ]  ;; (0.5 * number-of-turtles) seconds

set my-list-of-agents sort-by [[t1 t2] -> [size] of t1 < [size] of t2] turtles
;; This sets my-list-of-agents to a list of turtles sorted in
;; ascending order by their turtle variable size. For simple orderings
;; like this, you can use sort-on, e.g.: sort-on [size] turtles

foreach my-list-of-agents [ ag ->
  ask ag [             ;; each agent undertakes the list of commands
    forward random 10  ;; (forward, wait, and set) without being
    wait 0.5           ;; interrupted, i.e. the next agent does not
    set color blue     ;; start until the previous one has finished.
  ]
]
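The two execution orders seen above, one agent completing the whole command block before the next starts (one ask, or foreach over a list) versus all agents completing each command before anyone moves on (several consecutive asks), can be mimicked with plain nested loops. A Python sketch of ours, with made-up agent and command labels:

```python
agents = ["a", "b"]
commands = ["forward", "recolor"]
log_split, log_block = [], []

# split-ask style: every agent finishes each command
# before any agent starts the next command
for cmd in commands:
    for ag in agents:
        log_split.append((ag, cmd))

# one-block style (single ask / foreach): each agent runs
# the whole command list uninterrupted before the next agent
for ag in agents:
    for cmd in commands:
        log_block.append((ag, cmd))

print(log_split)  # [('a','forward'), ('b','forward'), ('a','recolor'), ('b','recolor')]
print(log_block)  # [('a','forward'), ('a','recolor'), ('b','forward'), ('b','recolor')]
```

Which order you want depends on the model: asynchronous updating wants the one-block style, while (approximately) simultaneous action wants the split style.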

to setup
  create-turtles 10
  forward 1
end

to setup
  create-turtles 10
  show xcor
end
;; Here we would obtain the error:
;; "You can't use XCOR in an observer context, because XCOR is turtle-only"

to setup
  create-turtles 10
  show pxcor
end
;; Here we would obtain the error:
;; "You can't use PXCOR in an observer context, because PXCOR is turtle/patch-only"

to setup
  create-turtles 10
  show end1
end
;; Here we would obtain the error:
;; "You can't use END1 in an observer context, because END1 is link-only"

breed [plural-name singular-name]

breed [sellers seller] breed [buyers buyer]

- we initialize the model from scratch using the primitive clear-all,
- we set up all initial conditions (this often implies creating several agents), and
- we finish with the primitive reset-ticks.

globals [ … ]     ;; global variables (also defined with sliders, …)
turtles-own [ … ] ;; user-defined turtle variables (also <breeds>-own)
patches-own [ … ] ;; user-defined patch variables
links-own [ … ]   ;; user-defined link variables (also <link-breeds>-own)
…

to setup
  clear-all
  …
  setup-patches ;; procedure where patches are initialized
  …
  setup-turtles ;; procedure where turtles are created
  …
  reset-ticks
end
…

to go
  conduct-observer-procedure
  …
  ask turtles [conduct-turtle-procedure]
  …
  ask patches [conduct-patch-procedure]
  …
  tick ;; this will update every plot and every pen in every plot
end
…

to-report a-particular-statistic
  …
  report the-result-of-some-formula
end

ask turtles-on neighbors [set label "Hi!"]

You will have to type these instructions on the bottom line of the window that pops up when you inspect a turtle (see figure 5). To inspect a turtle, right-click on it, select the name of the turtle (e.g. turtle 21), and click on "inspect". Alternatively, you can just type the following instruction in the command center:

inspect turtle 21

Developing these skills will be useful, since programming in NetLogo usually involves frequent visits to the NetLogo Dictionary and testing short snippets of code. Once you have understood most of the code below, we can start building our first agent-based evolutionary model in the next chapter!

;;;;;;;;;;;;;;;;;
;;; VARIABLES ;;;
;;;;;;;;;;;;;;;;;

turtles-own [ happy? ]

;;;;;;;;;;;;;;;;;;;;;;;;
;;; SETUP PROCEDURES ;;;
;;;;;;;;;;;;;;;;;;;;;;;;

to setup
  clear-all
  setup-agents
  reset-ticks
end

to setup-agents
  set-default-shape turtles "person"
  ask n-of number-of-agents patches [
    sprout 1 [set color cyan]
  ]
  ask n-of (number-of-agents / 2) turtles [
    set color orange
  ]
  ask turtles [update-happiness]
end

;;;;;;;;;;;;;;;;;;;;;;
;;; MAIN PROCEDURE ;;;
;;;;;;;;;;;;;;;;;;;;;;

to go
  if all? turtles [happy?] [stop]
  ask one-of turtles with [not happy?] [move]
  ask turtles [update-happiness]
  tick
end

;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;; TURTLES' PROCEDURES ;;;
;;;;;;;;;;;;;;;;;;;;;;;;;;;

to move
  move-to one-of patches with [not any? turtles-here]
end

to update-happiness
  let my-nbrs (turtles-on neighbors)
  let n-of-my-nbrs (count my-nbrs)
  let similar-nbrs (count my-nbrs with [color = [color] of myself])
  set happy? similar-nbrs >= (%-similar-wanted * n-of-my-nbrs / 100)
end
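The happiness rule is just a threshold on the number of same-color neighbors. A Python sketch of the same test (variable names mirror the NetLogo code; the neighbor colors below are made-up inputs, not model output):

```python
def happy(my_color, neighbor_colors, pct_similar_wanted):
    """True iff at least pct_similar_wanted % of my neighbors
    share my color (the %-similar-wanted rule)."""
    similar_nbrs = sum(1 for c in neighbor_colors if c == my_color)
    return similar_nbrs >= pct_similar_wanted * len(neighbor_colors) / 100

# one orange neighbor out of three = 33.3% similar
print(happy("orange", ["orange", "cyan", "cyan"], 30))  # True:  33.3% >= 30%
print(happy("orange", ["orange", "cyan", "cyan"], 50))  # False: 33.3% <  50%
```

Note that, as in the NetLogo version, an agent with no neighbors is trivially happy (0 >= 0).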