This weekend I spent some time implementing a rough version of the texture projection for MakeHuman alpha7. The ultimate goal is to be able to create a texture from the different reference pictures used to create a model. Currently it only uses the current reference picture, even when you have several pictures assigned to different views. While the projection code from Saturday still had a lot of seams in the rendering, the one from Sunday fixes this, as you can see in the rendering below.
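The core idea of projecting a reference picture onto the mesh can be sketched roughly like this: each vertex is projected onto the picture plane, and the resulting 2D position becomes its texture coordinate. This is a minimal, hypothetical illustration (the function and its assumptions are mine, not the actual MakeHuman code), using a simple orthographic front projection:

```python
# Hypothetical sketch of a planar (orthographic) texture projection.
# Each vertex (x, y, z) is flattened onto the front picture plane by
# dropping z; the mesh's x/y bounding box is then mapped onto the
# image rectangle to produce per-vertex pixel coordinates.

def project_vertices(vertices, img_width, img_height):
    """Return (u, v) pixel coordinates for each vertex, front-projected."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    min_x, max_x = min(xs), max(xs)
    min_y, max_y = min(ys), max(ys)
    uvs = []
    for x, y, z in vertices:
        u = (x - min_x) / (max_x - min_x)        # 0..1 across the image
        v = 1.0 - (y - min_y) / (max_y - min_y)  # image y grows downward
        uvs.append((u * (img_width - 1), v * (img_height - 1)))
    return uvs
```

A perspective projection would divide by depth instead of simply dropping z, but the bounding-box mapping above captures the basic mechanism.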
(In case you are wondering what image I used to project, it's one of the wallpapers from the local burger chain: http://www.mos.co.jp/)
There are still a few points which can be improved. The most noticeable one in the rendering below is the texture sampling. I currently use nearest-neighbor instead of bilinear sampling, so the result looks a bit aliased in places where the texture resolution is much lower than the detail of the object surface. With a higher-resolution picture this shouldn't be much of a problem.
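For reference, the difference between the two sampling methods comes down to this: nearest-neighbor rounds the lookup position to the closest texel, while bilinear sampling interpolates between the four surrounding texels. A minimal sketch of the bilinear case (my own illustration, operating on a 2D list of grayscale values; the same logic applies per channel for RGB):

```python
def sample_bilinear(image, u, v):
    """Interpolate a value at fractional position (u, v).
    `image` is a list of rows; u indexes columns, v indexes rows.
    Nearest-neighbor would simply return image[round(v)][round(u)]."""
    h, w = len(image), len(image[0])
    x0 = min(int(u), w - 2)   # clamp so x0 + 1 stays inside the image
    y0 = min(int(v), h - 2)
    fx, fy = u - x0, v - y0   # fractional offsets within the texel cell
    top = image[y0][x0] * (1 - fx) + image[y0][x0 + 1] * fx
    bottom = image[y0 + 1][x0] * (1 - fx) + image[y0 + 1][x0 + 1] * fx
    return top * (1 - fy) + bottom * fy
```

The interpolation smooths out the blocky artifacts that nearest-neighbor produces when a low-resolution picture is stretched over a dense surface.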
Another area for improvement is speed. Since 99% of the projection is implemented in Python, it is rather slow at the moment. However, I plan to write an OpenGL version once I have some time to add FBO (framebuffer object) support to our render engine.
Besides these two technical problems, there is the GUI aspect. As I mentioned before, we will need to combine projections, for example front and side views, to obtain a more usable texture, since polygons orthogonal to the projection plane currently end up with very washed-out texturing.
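One natural way to combine two projections, sketched below purely as an illustration of the idea (the weighting scheme is my assumption, not a description of planned MakeHuman code), is to weight each projection by how squarely a polygon faces that projection's camera. A polygon orthogonal to the front plane then takes most of its color from the side picture, and vice versa:

```python
# Hypothetical per-polygon blend of a front and a side projection,
# weighted by the alignment of the face normal with each camera axis.

def blend_weight(normal, view_dir):
    """Cosine of the angle between face normal and projection direction,
    clamped to zero so back-facing polygons contribute nothing."""
    dot = sum(n * d for n, d in zip(normal, view_dir))
    return max(dot, 0.0)

def blend_projections(color_front, color_side, normal):
    """Blend two RGB samples using the face normal (assumed unit length)."""
    w_front = blend_weight(normal, (0.0, 0.0, 1.0))  # front camera along +z
    w_side = blend_weight(normal, (1.0, 0.0, 0.0))   # side camera along +x
    total = (w_front + w_side) or 1.0                # avoid division by zero
    return tuple((cf * w_front + cs * w_side) / total
                 for cf, cs in zip(color_front, color_side))
```

Faces angled between the two views get a smooth mix of both pictures, which is exactly where a single projection produces the washed-out stretching.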
We will also need to add a way to mask areas in the source image so that not the entire image gets projected. Right now you need to erase the unneeded regions in GIMP or Photoshop and save the image with an alpha channel.
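The alpha-channel masking amounts to a simple rule during projection: fully transparent source texels are skipped, and partially transparent ones are blended over whatever is already in the texture. A minimal sketch of that rule (function names and data layout are mine, for illustration only):

```python
def sample_with_mask(image, alpha, u, v, dest_pixel):
    """Project one source texel, respecting the alpha mask.
    `image` holds RGB tuples, `alpha` the matching 0..255 mask;
    `dest_pixel` is the texel already present in the target texture."""
    x, y = int(u), int(v)
    a = alpha[y][x] / 255.0
    if a == 0.0:
        return dest_pixel  # masked out: leave the existing texel untouched
    src = image[y][x]
    # Standard alpha blend of the projected color over the destination.
    return tuple(s * a + d * (1 - a) for s, d in zip(src, dest_pixel))
```

With an in-application masking brush, the same alpha values could be painted directly instead of prepared in an external editor.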
I haven't tried the projection with a real photograph of a face yet, as I was more interested in coding the projection than actually using it. But I'm sure some of you will try it soon.