The mechanisms by which the brain orchestrates multi-limb joint action have yet to be elucidated, and few computational sensorimotor (SM) learning approaches have addressed the acquisition of bimanual affordances. We propose a series of bidirectional (forward/inverse) SM maps, with their associated learning processes, that generalize naturally from uni- to bimanual interaction (and affordances), reinforcing the motor equivalence property. The SM maps range from a fully sensorimotor nature to a purely sensory one: full-body control, delta SM control (through small action changes), and delta sensory covariation (how body-related perceptual cues covary with object-related ones). We make several contributions to how these SM maps are learned: (1) Context- and Behavior-Based Babbling: generalizing goal babbling to the interleaving of absolute and local goals, including the guidance of reflexive behaviors; (2) Event-Based Learning: learning steps driven by visual and haptic events; and (3) Affordance Gradients: the vectorial field gradients along which an object can be manipulated. Our modeling of bimanual affordances is in line with current robotic research on forward visuomotor mappings and visual servoing, enforces the motor equivalence property, and is consistent with neurophysiological findings such as the multiplicative encoding scheme.
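To make contribution (1) concrete, the following is a minimal, illustrative sketch of a goal-babbling loop that interleaves absolute goals (sampled over the whole sensory range) with local goals (small offsets from the last observed outcome). It is not the authors' context- and behavior-based method: the scalar plant, the fixed interleaving schedule, and the simple gain update are all hypothetical stand-ins chosen only to show the structure of the loop.

```python
import random

def forward(action):
    # Hypothetical toy plant: maps a motor command to a sensory outcome.
    return 2.0 * action

class GoalBabbler:
    """Interleaves absolute goals (drawn over the whole sensory range) with
    local goals (small offsets from the last outcome), refining a scalar
    inverse-model gain from observed (action, outcome) pairs."""

    def __init__(self):
        self.gain = 1.0          # inverse-model estimate; true inverse gain is 0.5
        self.last_outcome = 0.0

    def inverse(self, goal):
        return self.gain * goal

    def step(self, absolute):
        if absolute:
            goal = random.uniform(-1.0, 1.0)                      # absolute goal
        else:
            goal = self.last_outcome + random.uniform(-0.1, 0.1)  # local goal
        action = self.inverse(goal)
        outcome = forward(action)
        self.last_outcome = outcome
        if outcome != 0.0:
            # Move the inverse gain toward the observed action/outcome ratio.
            self.gain += 0.5 * (action / outcome - self.gain)

random.seed(0)
babbler = GoalBabbler()
for i in range(200):
    babbler.step(absolute=(i % 2 == 0))  # alternate absolute and local goals
```

Under this toy plant the inverse gain converges to 0.5; the point of the sketch is only the alternation between exploration in the full goal space and refinement around recently reached outcomes.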