# BIG Experiment

Auto move:
Scores:

📗 Editor mode:
📗 Number of influencers (2 - 5):
➭ Target positions:

📗 Number of receivers (1 - 5):
📗 Icon size:
📗 Green marks the valid positions for the influencers.
📗 Gray marks the invalid positions for the influencers (but not for the receivers).
➭ Vertices (add a line with at least two points to add a polygon):

➭ Drag red circles to move vertices.
➭ Click on the plus signs to add a point.
➭ Drag the green square to move all vertices.
➭ Drag the green circle to rotate the vertices.
➭ (Technical note: all polygons must be convex, and gray polygons should be non-overlapping and in the interior of the green polygons for the demo to work properly.)
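📗 A minimal sketch of these checks in Python (an illustration, not the demo's own code), assuming each polygon is a list of \((x, y)\) vertices given in order:

```python
# Sketch of the convexity and containment checks described in the technical
# note above. Polygons are lists of (x, y) vertices in order (either orientation).

def cross(o, a, b):
    """Cross product of vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def is_convex(poly):
    """True when every turn along the polygon boundary has the same orientation."""
    n = len(poly)
    signs = set()
    for k in range(n):
        c = cross(poly[k], poly[(k + 1) % n], poly[(k + 2) % n])
        if c != 0:
            signs.add(c > 0)
    return len(signs) <= 1

def point_in_convex(p, poly):
    """True if point p is inside (or on the boundary of) a convex polygon."""
    n = len(poly)
    signs = set()
    for k in range(n):
        c = cross(poly[k], poly[(k + 1) % n], p)
        if c != 0:
            signs.add(c > 0)
    return len(signs) <= 1

def polygon_inside(inner, outer):
    """A convex polygon lies inside a convex polygon exactly when all of its
    vertices do, which is the gray-inside-green condition above."""
    return all(point_in_convex(v, outer) for v in inner)
```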
📗 Players (influencer:target):
📗 0 for human, 1 for enumeration, 2 for projected gradient descent.
➭ Enumeration size (total of \(n^{2}\) sample points): \(n\) = .
➭ Projected gradient maximum number of steps: .
➭ Projected gradient initial learning rate: \(\alpha\) = (the step size in iteration \(t\) is \(\dfrac{\alpha}{\sqrt{t}}\)).
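📗 A minimal sketch of the enumeration player under assumed interfaces (`objective` and `is_valid` are hypothetical callables, not part of the demo): place an \(n \times n\) grid over the bounding box of the green region, giving the \(n^{2}\) sample points above, and keep the best valid candidate under the influencer's objective defined below.

```python
import numpy as np

def enumerate_best(objective, is_valid, x_min, x_max, y_min, y_max, n):
    """Evaluate an n-by-n grid of candidate positions (n^2 sample points in
    total) over the bounding box and return the valid candidate with the
    smallest objective value. `objective` and `is_valid` are hypothetical
    callables; `is_valid` should test membership in the green region."""
    best_point, best_value = None, np.inf
    for x in np.linspace(x_min, x_max, n):
        for y in np.linspace(y_min, y_max, n):
            p = np.array([x, y])
            if is_valid(p):
                value = objective(p)
                if value < best_value:
                    best_point, best_value = p, value
    return best_point
```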
📗 Influencer weights: \(v\) =
📗 Influencer \(i\) minimizes \(\displaystyle\sum_{j} v_{i j} \left\|\hat{x}_{j} - t_{i}\right\|^{2}\) where \(\hat{x}_{j}\) is the final position of receiver \(j\) and \(t_{i}\) is the target position of influencer \(i\).
➭ Since \(\hat{x}_{j} = \displaystyle\sum_{i} w_{j i} x_{i}\), the gradient used by projected gradient descent is \(\nabla_{x_{i}} \displaystyle\sum_{j} v_{i j} \left\|\hat{x}_{j} - t_{i}\right\|^{2} = 2 \displaystyle\sum_{j} v_{i j} w_{j i} \left(\hat{x}_{j} - t_{i}\right)\).
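📗 A minimal sketch of this gradient and of one projected gradient step with the \(\dfrac{\alpha}{\sqrt{t}}\) step size, assuming NumPy arrays: `X` holds the influencer positions (one row per influencer), `W` the receiver weights \(w_{j i}\), `V` the influencer weights \(v_{i j}\), `T` the targets, and `project` is a hypothetical helper that maps a point back onto the green region.

```python
import numpy as np

def influencer_gradient(i, X, W, V, T):
    """Gradient of sum_j v_ij * ||x_hat_j - t_i||^2 with respect to x_i,
    where x_hat = W @ X (receivers are weighted combinations of influencers)."""
    X_hat = W @ X                      # (r, 2) final receiver positions
    diff = X_hat - T[i]                # x_hat_j - t_i for every receiver j
    coeff = V[i] * W[:, i]             # v_ij * w_ji, shape (r,)
    return 2.0 * (coeff[:, None] * diff).sum(axis=0)

def pgd_step(i, X, W, V, T, project, alpha, t):
    """One projected gradient descent step for influencer i: move against the
    gradient with step size alpha / sqrt(t), then project back onto the
    valid (green) region. `project` is a hypothetical helper."""
    X = X.copy()
    g = influencer_gradient(i, X, W, V, T)
    X[i] = project(X[i] - (alpha / np.sqrt(t)) * g)
    return X
```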
📗 Receivers:
📗 Receiver weights: \(w\) =
📗 Receiver \(j\) moves to position \(\hat{x}_{j} = \displaystyle\sum_{i} w_{j i} x_{i}\) where \(x_{i}\) is the position of influencer \(i\); \(w_{j \cdot}\) is not normalized to sum to 1.
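📗 In matrix form this update is a single product with no row normalization, assuming the same array layout as in the sketch above:

```python
import numpy as np

def receiver_positions(W, X):
    """x_hat_j = sum_i w_ji * x_i for every receiver j; the rows of W are
    used as given and are not rescaled to sum to 1."""
    return W @ X   # (r, m) @ (m, 2) -> (r, 2)
```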

Last Updated: June 06, 2025 at 8:54 AM