When playing back content from my 3D camcorder, if you aren't wearing shutter glasses the result is pretty blurry and bland looking. Once you put the glasses on, the scene jumps to life. So while putting together a sample photo to promote how it works, I wanted to superimpose a better 2D image, stretched over the existing image of a screen in the photo. It had been a while since I'd done anything real with math, so I took a half hour, figured out a couple of equations to do the mapping, and wrote a program to translate each pixel into the quasi-3D space of the monitor screen, ending up with this result:

(The superimposed image doesn't look that believable here, but when I resized the picture down a bit, it looked better.)
The routine works by mapping each pixel from the source image into the proper x/y coordinate in the destination space, seen above. And it works fine *except* that where the image gets stretched a bit too much, moiré-like effects pop up. You can see them at the right side of the image above as lines of speckles.
Here are the formulas that do the mapping to create the result seen above:
destX = ((lrx - llx) * (srcX / srcWid) + llx - ((urx - ulx) * (srcX / srcWid) + ulx)) * (srcY / srcHt) + (urx - ulx) * (srcX / srcWid) + ulx
destY = ((lry - ury) * (srcY / srcHt) + ury - ((lly - uly) * (srcY / srcHt) + uly)) * (srcX / srcWid) + (lly - uly) * (srcY / srcHt) + uly
And here's what all my crazy variable names mean:
srcWid = source image width
srcHt = source image height
srcX = source X coordinate
srcY = source Y coordinate
destX = destination X coordinate
destY = destination Y coordinate
Coordinates that define the location of the destination image:
ulx = upper left corner of the image, x coordinate
uly = upper left corner of the image, y coordinate
urx = upper right corner of the image, x coordinate
ury = upper right corner of the image, y coordinate
llx = lower left corner of the image, x coordinate
lly = lower left corner of the image, y coordinate
lrx = lower right corner of the image, x coordinate
lry = lower right corner of the image, y coordinate
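In other words, if you let u = srcX / srcWid and v = srcY / srcHt (each running from 0 to 1 across the source image), the two formulas above are just bilinear interpolation between the four corners:

destX = (1 - v) * ((1 - u) * ulx + u * urx) + v * ((1 - u) * llx + u * lrx)
destY = (1 - u) * ((1 - v) * uly + v * lly) + u * ((1 - v) * ury + v * lry)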
So basically if you supply a source x/y, you can find what the appropriate destination x/y should be, and plot the color from one to the other. So the code iterates through all the source pixels and plots each one to a destination pixel. But it would be cleaner, and often faster, to go the other way: iterate through all the destination pixels and find what source pixel (if any) matches up to each point. In other words, the reverse of the above equations. (That would also get rid of the speckles, since every destination pixel inside the screen area would get a color, instead of some being skipped where the image stretches.) The real sticky part is that each of the equations depends not just on srcX but also on srcY. And each point on the destination could conceivably map to *two* source pixels (but not more, since the highest power expressed here is effectively a squared term).
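For reference, here's roughly what that source-to-destination loop looks like. This is a minimal sketch in Python using Pillow, not my actual program; the file names and corner coordinates are just placeholders:

from PIL import Image

# Corner coordinates of the screen quadrilateral in the destination photo.
# (Placeholder values; in reality these come from eyeballing the photo.)
ulx, uly = 120, 80     # upper left
urx, ury = 520, 60     # upper right
llx, lly = 110, 400    # lower left
lrx, lry = 540, 430    # lower right

src = Image.open("superimpose.png").convert("RGB")   # image to paste in
dst = Image.open("photo.png").convert("RGB")         # photo with the screen
srcWid, srcHt = src.size
srcPix = src.load()
dstPix = dst.load()

# Walk every source pixel and plot its color at the mapped destination spot.
for srcY in range(srcHt):
    for srcX in range(srcWid):
        u = srcX / srcWid   # fraction of the way across the source
        v = srcY / srcHt    # fraction of the way down the source
        # Interpolate along the top and bottom edges, then between them.
        topX = ulx + (urx - ulx) * u
        botX = llx + (lrx - llx) * u
        destX = topX + (botX - topX) * v
        # Same idea for y, using the left and right edges.
        leftY = uly + (lly - uly) * v
        rightY = ury + (lry - ury) * v
        destY = leftY + (rightY - leftY) * u
        dstPix[int(destX), int(destY)] = srcPix[srcX, srcY]

dst.save("result.png")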
Anyone out there a math whiz who can think of an easy way to "go the other way", in other words solve the two equations above for srcX and srcY? I found it kinda challenging. A fun brain challenge more than anything else. My hotshot math whiz brother may find the answer, who knows.
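For anyone who wants to take a crack at it, here's a sketch of one way to set the algebra up (same u and v as above; just a setup, not a worked answer). Multiplying everything out puts both formulas in the same form:

destX = a0 + a1*u + a2*v + a3*u*v
destY = b0 + b1*u + b2*v + b3*u*v

where

a0 = ulx, a1 = urx - ulx, a2 = llx - ulx, a3 = lrx - llx - urx + ulx
b0 = uly, b1 = ury - uly, b2 = lly - uly, b3 = lry - lly - ury + uly

Solving the first equation for u gives

u = (destX - a0 - a2*v) / (a1 + a3*v)

and substituting that into the second and collecting terms leaves a quadratic in v alone:

(a2*b3 - a3*b2)*v^2 + (a2*b1 - a1*b2 + a3*(destY - b0) - b3*(destX - a0))*v + (a1*(destY - b0) - b1*(destX - a0)) = 0

The quadratic formula then gives at most two candidate values of v, which matches the two-source-pixels observation above; each one plugs back in to give u, and then srcX = u * srcWid and srcY = v * srcHt.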