Bret Battey blogging sundry ideas, favorable events, works in progress, and miscellaneous solutions in digital music and video-music research and creation. I see this as a subsidiary of my web site, BatHatMedia.com.
Monday, August 27, 2012
Forms (process) by Memo Akten + Quayola
Forms (process) from Memo Akten on Vimeo.
This provides some exceptional food for thought regarding the khyāl gesture mapping challenge. Akten and team demonstrate powerfully that there is a sweet spot to be found between a direct mapping of body motion and a more abstracted visualisation.
Why is this so effective?
The multiple layers of material originate from the same physical motion, but have independence from each other and from that motion.
The original motion is, in a sense, providing the impulse that shapes the behavior of different systems. The resulting system behaviors don't just "track" the motion directly, but also clearly reflect the velocity and trajectory of that motion. These velocity patterns cause changes in the behaviors that echo past the time sequences that shape them. Like the "memory trail" one might have of the original motion, the systems emphasize that the dynamics of the motion have perceptual implications beyond the specific narrow windows where they occur.
Because these layered systems both have their own behaviors and are impulsed by the same original motion, there is an engaging "counterpoint" arising between the different layers, all pointing clearly back to the character of the originating motion — even while that original motion has been eliminated from the picture. (I have a long-standing fascination with the idea that audio-visual complexes could achieve coherence due to their relationship to an underlying system that is neither audio nor visual and is not directly perceived).
It is interesting that the left particle system helps emphasise the body mass and its distribution, while the other systems seem to emphasise more the trajectories of points on that mass. I find having both there quite compelling; they tell different aspects of the story.
In the right-hand system, the spline curves seem to shift even after they have been drawn, perhaps even more so with wide arcs of high velocity. It seems likely that those points simply take on an initial velocity and trajectory, and that this velocity decays over time. Lovely idea.
I'm still scratching my head a bit on the center system — though some kind of spring model seems at play.
If one applies some similar techniques to the khyāl gesturing, it seems that there is a risk that the time independence of the visual systems could blur, rather than highlight, the motion-to-music relationship. But this is certainly worth experimenting with.
Further, it is worth considering what other higher-order aspects (beyond velocity) in the originating motion could fruitfully be applied to shaping system behavior.
Monday, August 13, 2012
Clonal Colonies in Cartes Flux 2012
Clonal Colonies Movement I will be screening in Cartes Flux, Espoo, Finland, 15-22 October 2012.
Friday, August 10, 2012
Duplicating Objects in Blender 2.6 Python Scripting
Duplicating objects with Blender 2.6 Python scripting is not quite as straightforward as one might hope, and for me a web search failed to return a simple, clear solution. I found my best solution by searching through the Blender scripts addons folder for files containing the 'copy()' function.
One has to create a new mesh, then copy the data of the source object, then link to the scene. Here's the short function I'm using to do this.
# The following function is adapted from
# Nick Keeline's "Cloud Generator" addNewObject
# in object_cloud_gen.py (an addon that comes with the Blender 2.6 package)

import bpy

def duplicateObject(scene, name, copyobj):
    # Create new mesh
    mesh = bpy.data.meshes.new(name)

    # Create new object associated with the mesh
    ob_new = bpy.data.objects.new(name, mesh)

    # Copy data block from the old object into the new object
    ob_new.data = copyobj.data.copy()
    ob_new.scale = copyobj.scale
    ob_new.location = copyobj.location

    # Link new object to the given scene and select it
    scene.objects.link(ob_new)
    ob_new.select = True

    return ob_new
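A minimal usage sketch (the source object name "Cube" here is hypothetical):

import bpy

src = bpy.data.objects["Cube"]  # hypothetical source object name
dup = duplicateObject(bpy.context.scene, "Cube.copy", src)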
Thursday, August 9, 2012
Towards Essential Body Relationships
Though I am intending to map the motions of Tofail Ahmed to an abstract visualisation, I want to ensure that important perceptual aspects of the body motion translate effectively. So it is helpful to analyze the performance motions. My intent is to identify high-priority relationships or parameters that I will seek to honour in the abstract visualisation. What "honouring" can and will mean in practice remains to be seen.
Gesture Types
Borrowing terms from Martin Clayton's study of gesture in Khyāl performance (2007), the performances I recorded of Mr Ahmed contained physical gestures that serve as "markers", "illustrators" and "emblems".
Marker gestures indicate a specific time point in musical structure, such as beating a pulse, identifying a downbeat, etc.
Illustrators appear analogous to the melodic flow/motion. The vast majority of the motion falls into this category. It is one vast territory that covers a lot that is very interesting — and very difficult to talk about analytically. This, perhaps, is precisely why it is so valuable!
Emblems, or symbolic gestures, are more based on cultural convention and can be translated relatively readily into a verbal equivalent. An example would be indicating approval with a hand wave.
Emblems would be the most problematic type of gesture given my intent. An emblem is at high risk of disappearing in an abstraction, given the very precise body arrangement and audience reading it entails. Fortunately, there are very few emblems in the performance I am working with. The closest is an invocation-type emblem. This general position — palms close together, often in front of the face — is important. It creates an impression of focus and preparation, and occurs at the beginning of the first and last phrases of the performance. It also appears at the start of 11 others of the 60 total phrases. The invocation emblem thus often also acts as a marker of the start of a phrase; it does dual duty.
Invocation: Hand position at start of phrase 1
I am wagering that the element of proximity/closedness is as crucial here as the fact that this can be read as a sign of invocation/gathering.
If the invocation emblem often marks the start of a phrase, the most common marker of a phrase start involves the fingers oriented towards each other horizontally, at mid-body level. The example below appears at the start of phrase 3:
Rest Position: Hand position at start of phrase 3
Though there are true full-rest positions of hands on knees (at start) or hands in lap (at end), these seem like outliers that won't be useful as a base position, since none of the Illustrator gestures operate in those spaces.
Measures
Given the above clues, I created some measures to help explore what body relationships might be highly correlated to the structure of the music. Basically, I am assuming that if my abstract visualisations at least clearly carry some of the large-scale articulators of phrasing, the details within will "take care of themselves".
Proximity to Rest Position (PRP): Clayton (2007) used distance of hand from a rest position in his analysis. I am taking a similar approach, measuring the distance from each finger tip to the neutral position and summing those distances. [Aug 10: However, I might consider using a different term, like neutral position, since Clayton appeared to use rest position to apply to true resting of hands on the legs.]
I did this in Blender by creating an Empty at the rest position and an Empty to represent the distance. I animated the Z axis of the latter with a Driver:
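For anyone scripting this rather than building the Driver by hand, a minimal sketch of the same setup might look like the following; the object names ("DistanceIndicator", "RestPosition", "FingerTip.L") are hypothetical:

import bpy

ob = bpy.data.objects["DistanceIndicator"]
fcurve = ob.driver_add("location", 2)   # drive the Z axis
drv = fcurve.driver
drv.type = 'SCRIPTED'

var = drv.variables.new()
var.name = "d"
var.type = 'LOC_DIFF'                   # distance between the two targets
var.targets[0].id = bpy.data.objects["FingerTip.L"]
var.targets[1].id = bpy.data.objects["RestPosition"]

drv.expression = "d"                    # Z position now reflects the distance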
Finger Tip Proximity (FP) and Thumb+Finger Tip Proximity (FTP): The finger tips are in close proximity in both the invocation and the neutral position. So it could be of value to simply measure the distance between the finger tips.
A more refined measurement of hand-tip proximity takes the distance between the finger tips and the distance between the thumb tips, and averages them.
As it turns out, close finger tip proximity is often closely related to the beginning of phrases. However, the finger tips also draw into close proximity fairly often during other mid-phrase events.
Shoulder to Hand Proximity (SHP): The close-to-body versus far-from-body contrast also seems important. As the arms straighten, the hands move away from the body. So a simple measure of the degree of far-from-body is the distance between the shoulder and the hand. This measure sums the left and right shoulder-to-wrist distances.
Hand Closedness (HC): This sums the left and right finger-tip to wrist distances, to provide a measure of the open versus closed state of the hand.
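As a compact summary, these measures might be expressed as follows in Python, using Blender's mathutils vectors; the per-frame point positions are assumed inputs:

from mathutils import Vector

def prp(finger_tips, rest):
    # Proximity to Rest Position: summed fingertip distances to the neutral point
    return sum((tip - rest).length for tip in finger_tips)

def fp(l_finger, r_finger):
    # Finger Tip Proximity: distance between the two finger tips
    return (l_finger - r_finger).length

def ftp(l_finger, r_finger, l_thumb, r_thumb):
    # Thumb+Finger Tip Proximity: average of fingertip and thumb-tip distances
    return ((l_finger - r_finger).length + (l_thumb - r_thumb).length) / 2

def shp(l_shoulder, l_wrist, r_shoulder, r_wrist):
    # Shoulder to Hand Proximity: summed left/right shoulder-to-wrist distances
    return (l_shoulder - l_wrist).length + (r_shoulder - r_wrist).length

def hc(l_tip, l_wrist, r_tip, r_wrist):
    # Hand Closedness: summed left/right fingertip-to-wrist distances
    return (l_tip - l_wrist).length + (r_tip - r_wrist).length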
Video
The above video shows the first 18 phrases, with graphic representation of the above measures. This is enough to convince me that Proximity to Rest Position is an excellent candidate to focus on. (But how?) I could spend a lot of time now analyzing the relationships of these measurements to the music. But that probably will not actually help me attain my immediate goals, so I might have to set that aside for another time.
References
Clayton, M. (2007) "Time, Gesture and Attention in Khyāl Performance". Asian Music v.38 n.2.
Sunday, August 5, 2012
Tofail Motion Capture Mapping Test "simple-03"
This test makes me confident that the hierarchical motion mapping idea is worth pursuing:
This is a very simple mapping of the upper body motion of Tofail Ahmed as he sings a khyāl alāp in raag Bhairavi.
The spheres follow the middle finger and thumb endpoints, with the camera in the viewer position facing Tofail.
The planks are linked in a parenting hierarchy and receive local rotations of bones in the skeleton. (The capacity to do this is easily missed in Blender: when identifying the target object of a Copy Rotation constraint, one indicates the armature, and then a second drop-down will appear that lets one indicate the target bone.)
Also, to smooth out issues I was having with sudden shifts in the thorax and pelvis rotations, I substituted the head in at the thorax point and made that the root. So the line is head : clavicle : humerus : radius : <hand : finger> <thumb>.
An empty provides parenting to the root, and the empty is rotated 180° on each axis across the whole performance. This provides a foundation of slow, continuous motion to match, if you will, the fundamental drone. The head block, then, applies the rotations from the head as an offset to this base angle.
I'm intrigued by the juxtaposition of the direct position mapping (the balls) with the hierarchical mapping (the planks).
Saturday, August 4, 2012
Importing Vicon IQ Motion Capture into Blender
In the Fused Media Lab at De Montfort University's Faculty of Technology, I used a Vicon multi-camera infrared tracking system to capture the upper-body, arm and hand motions of Tofail Ahmed while he sang khyāl alāps. The software was Vicon IQ. IQ is no longer supported by Vicon, and its export formats are not widely recognized any more.
Therefore, I explored a multitude of dead-ends in trying to get the motion capture data into Blender 2.6x. Here's the solution I ultimately developed. One probably wouldn't want to go through this for high volumes of motion capture sessions for different subjects, but it is a reasonable solution for transferring one session.
Export Data
First, export the skeleton joint movement and rotation data from Vicon, using the CSV (comma separated value) format, with global rather than local orientation. It contains world-space rotations and translations for each joint. The joint angles are Euler angles, specified by X, Y and Z rotations in degrees, applied in that order (as they appear in the spreadsheet).
Create The Blender Skeleton
The next step is to manually create a skeleton of armatures in Blender matching the calibrated skeleton used in the motion capture. One can guide the process by looking at the calibrated skeleton file — the Vicon .vsk file. The .vsk is in XML format, so it can be opened in a text editor. The first section is the <KinematicModel> section. Within that, the <Parameters> section lists the names and values of parameters used in the construction of the skeleton. After that, the <Skeleton> section defines each "segment" or bone and the hierarchy of relationships between the bones. The hierarchy is modeled within the XML hierarchy itself. For example, in my skeleton, pelvis is the parent of thorax, which is the parent of head, lclavicle and rclavicle. So this part of the XML, simplified, is arranged as follows (… indicates stuff left out). Notice how the parameters defined above now appear in the skeleton definition, usually in defining the position of joints (and hence the length of bones):
<Skeleton>
<Segment NAME="pelvis" POSITION="0 0 0" …>
…
<Segment NAME="thorax" POSITION="-50 0 Back" …>
…
<Segment NAME="head" POSITION="0 0 Neck" …>
…
</Segment>
…
<Segment NAME="lclavicle" POSITION="0 0 Neck" …>
… (whole left arm descends from here)
…
</Segment>
<Segment NAME="rclavicle" POSITION="0 0 Neck" …>
… (whole right arm descends from here)
…
</Segment>
</Segment>
Notice that the position of each joint is defined relative to its parent rather than in global coordinates. We will need global coordinates to create the skeleton in Blender.
Further, the Vicon coordinate system is Y pointing left, X pointing back, Z pointing up, while the Blender coordinate system is X pointing right, Y pointing back, Z pointing up. So we need to map Vicon X to Blender Y, the negative of Vicon Y to Blender X, and Vicon Z to Blender Z.
I built an Excel spreadsheet to take the Vicon local coordinates and convert them to Blender coordinates, then, based on the hierarchy, accumulate the values into global coordinates:
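In Python, the same axis conversion and hierarchical accumulation might look like this sketch; the offsets and hierarchy here are illustrative placeholders, not the actual .vsk values:

def vicon_to_blender(v):
    # Blender X = -Vicon Y, Blender Y = Vicon X, Z is unchanged
    x, y, z = v
    return (-y, x, z)

# Local joint offsets in Vicon coordinates: name -> (offset, parent).
# The numbers stand in for .vsk parameters such as Back and Neck.
local = {
    "pelvis":    ((0, 0, 0),     None),
    "thorax":    ((-50, 0, 430), "pelvis"),
    "head":      ((0, 0, 170),   "thorax"),
    "lclavicle": ((0, 0, 170),   "thorax"),
}

def global_position(joint):
    offset, parent = local[joint]
    bx, by, bz = vicon_to_blender(offset)
    if parent is None:
        return (bx, by, bz)
    px, py, pz = global_position(parent)
    return (px + bx, py + by, pz + bz)

for joint in local:
    print(joint, global_position(joint))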
So, using these absolute/global values, one can create the skeleton in Blender. To be safe, I took care to ensure that the roll was set to 0 for all bones. This skeleton will be huge by the standards of Blender units. Scaling will come later.
Import Motion Data
To import the Vicon IQ motion data from the CSVs, I used Hans P.G.'s CSV F-Curve Importer Blender addon (much thanks to Hans). This requires some preparation, however…
Convert Frame Rate
My data was captured at 120 fps, so it had to be converted to 30 fps by throwing away 3 out of every 4 rows. I did this by using Excel's "Advanced Filter". First I added a column next to the 'frames' column containing a MOD(frame, 4) formula applied to the frame number (using Fill Down to copy the formula to all cells)… then I added a second sheet and placed the search criteria for a filtering operation there. We want to select any row where FrameMod = 1. Notice in the formula bar that one has to enter ="=1" for this to work.
Using Data > Advanced Filter…, designate "Copy to another location". The list range will be the range of the original data, criteria range will be the two search criteria cells, and destination should be a starting cell somewhere below the original data. The filtered data will appear here.
The frame numbers will need to be resequenced. I copied frame numbers from the original data and pasted to the new, filtered data.
I then deleted the FrameMod column.
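For reference, here is a minimal Python sketch of the same downsample-and-resequence operation, assuming a hypothetical export file named global.csv with the frame number in the first column:

import csv

with open("global.csv") as src, open("global_30fps.csv", "w", newline="") as dst:
    reader, writer = csv.reader(src), csv.writer(dst)
    writer.writerow(next(reader))  # keep the header row for now; it is removed in a later step
    kept = 0
    for row in reader:
        if int(row[0]) % 4 == 1:   # same selection as FrameMod = 1
            kept += 1
            row[0] = str(kept)     # resequence the frame numbers
            writer.writerow(row)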
Convert Axis Orientation
The Y values of the translation parameters (NOT the angle parameters) need to be multiplied by -1. One way to do this is to place a -1 in an empty cell and copy the cell. Then select the column that needs to be altered and choose Edit > Paste Special… > Multiply. (The translation parameters are indicated by the <t-X>, <t-Y>, and <t-Z> column heads.)
Convert to Headerless CSV
The header row (the row giving the names for each column) needs to be removed. Now save to a CSV file.
It will be useful to copy that header row into another Excel file for easy reference during the import step, since we will need to know which column number is which. The columns will be referred to by 0-based count, so I found it useful to number them:
Setup Locator Empties for the Joints
Create an Empty for each joint. These are what will be keyframed…
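A minimal Python sketch of this step, assuming joint names matching the .vsk segments (the "_loc" suffix is my hypothetical naming convention):

import bpy

joints = ["pelvis", "thorax", "head", "lclavicle", "rclavicle"]
for name in joints:
    empty = bpy.data.objects.new(name + "_loc", None)  # None makes it an Empty
    bpy.context.scene.objects.link(empty)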
Import the Joint Locations
I used the CSV F-Curve Importer v0_7_alpha1 to import the joint location data one joint at a time. Select a single joint-locator Empty and run the importer to keyframe its location.
I had to comment out the following in the v0_7_alpha1 code to get it to run in Blender 2.63. That may not be necessary now:
* import unittest
* the def main()... block
* the class Test_FCurvePointAdder... block
The spreadsheet now contains Vicon global X -Y Z for each joint. I swapped this to -Y X Z during the input to map properly. I assigned a single Action Name to each XYZ set. Here is an example of reading in the Head location from 0-based column numbers 13, 14 and 15, indexed 1 0 2 to do the axis swap (notice that the F-Curve Importer pane shows up under Scene Properties):
Constrain the Bones to the Joint Locators
Working in Pose Mode, one can apply bone constraints. The root of the skeleton (Pelvis, in my case) should have a Copy Location constraint tying it to the Pelvis joint locator Empty. Then add a TrackTo constraint pointing at the next joint locator (Thorax, in my case). [Edit 22 Sep 2015: a better solution for all of the tracking of bones to locators is to use the StretchTo constraint rather than TrackTo, with no volume effect; Plane = Z was my preference.]
Each bone after this point requires only one constraint: a TrackTo to the next joint location:
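Scripted, the same constraint chain might look like this sketch, assuming an armature object named "Skeleton" and locator Empties following the hypothetical "_loc" naming above:

import bpy

arm = bpy.data.objects["Skeleton"]  # hypothetical armature name

# Root bone: Copy Location plus TrackTo
pelvis = arm.pose.bones["pelvis"]
copy_loc = pelvis.constraints.new('COPY_LOCATION')
copy_loc.target = bpy.data.objects["pelvis_loc"]
track = pelvis.constraints.new('TRACK_TO')
track.target = bpy.data.objects["thorax_loc"]

# Every subsequent bone needs only a TrackTo to the next joint's locator
for bone_name, next_loc in [("thorax", "head_loc")]:
    c = arm.pose.bones[bone_name].constraints.new('TRACK_TO')
    c.target = bpy.data.objects[next_loc]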
This will likely move the skeleton way off to some other location in Blender space, but it should now be animated.
N.B. the above may not treat the location of free joints (pelvis, thorax, and head in this example) precisely correctly. Comparing the global and local joint export files from IQ, I was not able to come up with a consistent interpretation of how to handle these. For example, it seems to me that free joints below the root joint imply variable bone lengths, which does not make much sense to me – and I suspect can't be implemented in Blender. However, the above worked well enough for my purposes.
Add Head Rotations
So we are able to get this far without having to import any actual rotation data. The head provides an exception. We now know its location, but not its rotation. This will need to be imported from the Global CSV. But this requires some more prep. The Vicon CSV gives angles in degrees, but Blender's internal routines work in radians (even if the interface displays degrees). So the CSV angle data needs to be converted to radians.
One way to do this is to copy the column of data to another spreadsheet. Then fill the next column with an =RADIANS formula. Select and copy the resulting numbers, and paste over the data in the original spreadsheet using Paste Special > Values.
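Alternatively, a small Python sketch could do the conversion directly on the CSV; the file names and the column indices for the head's rotation angles below are hypothetical:

import csv, math

with open("global.csv") as src, open("global_rad.csv", "w", newline="") as dst:
    reader, writer = csv.reader(src), csv.writer(dst)
    for row in reader:
        for i in (16, 17, 18):  # hypothetical 0-based columns of the head angles
            row[i] = str(math.radians(float(row[i])))
        writer.writerow(row)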
Select the Head locator empty and run the F-Curve importer to import the X, Y and Z rotations. As with the translation import above, these should be indexed 1 0 2 in order to swap the X and Y axes.
In my case, it made sense to add a block to represent the head, and add a Copy Location constraint and Copy Rotation constraint, both tied to the Head locator.
Parent, Reposition and Scale
I created an empty at the exact origin of the pelvis, then parented all locator Empties, the head block, and the skeleton to this Empty. This empty serves as the root of the whole bundle, providing one point of control for positioning, rotation and scaling. A scale of 0.01 brought my figure down to something closer to normal Blender working scale.
Optional Joint Rotations
If one needs to make joint locators also reflect joint rotation, one could add a Copy Rotation constraint to a locator, select the Armature as the target, then — in the bone indicator that will appear — indicate the bone. [Edit 22 Sep 2015: this is a bad idea, actually. It creates a circular definition between the skeleton constraints and the locator, yielding a 'dependency cycle' error.]