As mentioned on the page “Interaction of alpha and beta”, we've been using ‘population correlation’ to refer to what is often called ‘effect size’ in experimental design. This substitution is valid because we've limited our discussion to linear regressions and significance testing.
Talking about effect size can be somewhat confusing, since most of the time we don't know the population correlation; if we did, we wouldn't need to look for a sample correlation at all. Yet we need to know the effect size in order to determine the statistical power of an analysis.
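To make that relationship concrete, here is a minimal sketch of the power calculation for a correlation test, using the Fisher z approximation. The one-sided test, the function names, and the example values are assumptions for illustration rather than anything fixed by the figures.

```python
# A minimal sketch of the power calculation for a correlation test,
# using the Fisher z approximation; a one-sided test is assumed.
from math import atanh, sqrt, tanh
from scipy.stats import norm

def critical_r(n, alpha=0.05):
    """Smallest sample correlation that reaches significance."""
    return tanh(norm.ppf(1 - alpha) / sqrt(n - 3))

def power(effect_size, n, alpha=0.05):
    """Chance of a significant result when the population
    correlation equals `effect_size`."""
    se = 1 / sqrt(n - 3)                    # spread of the curves
    z_cut = atanh(critical_r(n, alpha))     # cutoff on the z scale
    # Area of the sample probability curve beyond the cutoff:
    return 1 - norm.cdf((z_cut - atanh(effect_size)) / se)

print(round(power(effect_size=0.3, n=85), 2))  # 0.88
```

Read this way: with an assumed effect size of .3 and a sample of 85, roughly 88% of samples would yield a statistically significant correlation.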
In Figure 01 the statistical power is the shaded area under the sample probability curve on the left side of the graph (where “under” means between the curve and the vertical baseline). The size of that area is determined by two things: the shape of the probability curves, and the distance between the lines bisecting the curves.
The two curves are identical in shape, since the shape of each is determined by the sample size, which is the same for both.
The line that bisects the population probability curve on the right side of the graph marks the minimum statistically significant sample correlation. The location of that curve, and thus of its bisecting line, is determined by the sample size and the value of alpha.
The line that bisects the sample probability curve on the left side of the graph marks the effect size; in this case the line itself determines where the curve sits. You typically estimate the effect size from previous studies of the same, or similar, phenomena.
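To see how that right-hand line responds to the quantities that determine it, the short sketch below (same Fisher-z assumptions as above, with arbitrary example values) prints the minimum significant correlation for a few combinations of sample size and alpha.

```python
# How the right-hand bisecting line (the minimum significant r)
# moves with sample size and alpha; same assumptions as above.
from math import sqrt, tanh
from scipy.stats import norm

for n, alpha in [(30, 0.05), (100, 0.05), (100, 0.01)]:
    r_min = tanh(norm.ppf(1 - alpha) / sqrt(n - 3))
    print(f"n={n:>3}, alpha={alpha}: minimum significant r = {r_min:.3f}")
```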
Figure 02 shows two examples that are identical except for the effect size. The change in effect size shifts the curve on the left side of the graph, and with it the relative proportions of statistical power and beta for the analysis.
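The same calculation reproduces the pattern in Figure 02: holding the sample size and alpha fixed while changing only the effect size reallocates the area between power and beta. The sample size of 60 and the two effect sizes below are illustrative choices, not values taken from the figure.

```python
# Two analyses identical except for the assumed effect size,
# mirroring Figure 02; n = 60 and alpha = .05 are my own choices.
from math import atanh, sqrt
from scipy.stats import norm

def power(effect_size, n, alpha=0.05):
    se = 1 / sqrt(n - 3)               # Fisher-z standard error
    z_cut = norm.ppf(1 - alpha) * se   # critical line, z scale
    return 1 - norm.cdf((z_cut - atanh(effect_size)) / se)

for r in (0.2, 0.4):
    p = power(r, n=60)
    print(f"effect size {r}: power = {p:.2f}, beta = {1 - p:.2f}")
# effect size 0.2: power = 0.45, beta = 0.55
# effect size 0.4: power = 0.94, beta = 0.06
```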
Of course, you can't really adjust the effect size to your liking. So, to manipulate the statistical power, you have to adjust either the sample size (see: “Impact of sample size”) or alpha (see: “Interaction of alpha and beta”).
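As a closing illustration, the calculation can also be turned around: assuming an effect size, search for the smallest sample size that reaches a target power. This is a sketch under the same Fisher-z assumptions as above; the 80% target is a common convention, not a requirement.

```python
# Searching for the smallest sample size that reaches a target
# power for an assumed effect size; same Fisher-z assumptions,
# and the 80% target is a convention, not a requirement.
from math import atanh, sqrt
from scipy.stats import norm

def required_n(effect_size, target_power=0.8, alpha=0.05):
    for n in range(4, 10_000):
        se = 1 / sqrt(n - 3)
        z_cut = norm.ppf(1 - alpha) * se
        if 1 - norm.cdf((z_cut - atanh(effect_size)) / se) >= target_power:
            return n
    return None  # target not reachable in the search range

print(required_n(0.3))  # 68 observations for r = .3 at 80% power
```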