Just checked out the Wikipedia articles on Video Synthesizers (https://en.wikipedia.org/wiki/Video_synthesizer) and Video Art (https://en.wikipedia.org/wiki/Video_art) and learned some things:
- Doing things in “real time” is what distinguishes video synths from 3D renderers. Video synthesis, in analog and even digital form, was done before computers.
- In Chicago, there were video experiment sessions called “Electronic Visualization Events”
- “position, brightness, and color were completely interchangeable and could be used to modulate each other during … This led to various interpretations of the multi-modal synesthesia of these aspects of the image in dialogues that extended the McLuhanesque language of film criticism of the time”
- “Today, address based distortions are more often accomplished by blitter operations moving data in the memory, rather than changes in video hardware addressing patterns.”
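The quote above can be made concrete with a toy sketch (my own illustration, not from the article): treat a flat list as a framebuffer scanned row by row, and implement a classic address-based distortion, a per-row horizontal shear, as a blitter-style copy in memory instead of a change to scanout addressing.

```python
# Toy framebuffer: an 8x8 grayscale image stored as a flat list,
# the way video memory is scanned row by row.
W, H = 8, 8
frame = [x + y * W for y in range(H) for x in range(W)]  # gradient test pattern

def blit_rows_shifted(src, shift_per_row):
    """Software 'blitter' version of an address-based distortion:
    instead of altering the hardware's addressing pattern, copy each
    row into the destination with a growing horizontal offset
    (wrapping around), producing a shear across the image."""
    dst = [0] * len(src)
    for y in range(H):
        offset = (y * shift_per_row) % W
        for x in range(W):
            dst[y * W + (x + offset) % W] = src[y * W + x]
    return dst

sheared = blit_rows_shifted(frame, 1)  # each row shifted one pixel more than the last
```

The point of the sketch is that the blit only moves data around: every pixel of the source survives, just at a new address, which is exactly what the old hardware addressing tricks did at scanout time.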
****
Cool article on glitches http://beza1e1.tuxen.de/lore/sparkling_tile.html
- The experts solving the problem are often faced with hard-to-reproduce glitches. The solution is often patience, systematically observing unexpected correlations, and deeply understanding how the systems work. Having logs is important for seeing what happened right before the crash.
*****
Who is my audience? I am reading lots of posts from Hacker News; am I now trying to appeal to a larger, more engineering-oriented, tech-literate crowd on the internet?
******
Some mind-bending perspective 2D/3D video games:
Viewfinder
Superliminal
********
Stockhausen
And a great series of videos by Hainbach, including some rudimentary vinyl recording devices, bar code card samplers, flanging with two synchronized tape players, and a “Stockhausen speed-run” reproducing beautiful sounds with essentially just a sine oscillator, a splicer, and a tape recorder. I realized that this is exactly what I was going for with my early prototypes of the video samplers: trying to construct compositions out of simple parts, and giving one a hands-on experience of a time-based medium.
*****
Checked out the world of hardware video samplers and discovered that most people use their laptops, so this may be an opportunity!
The OG is the Korg Kaptivator.
Snapbeat https://snapbeat.net/snapbeat-simple-lofi-hardware-sampler/
Here is an open-source one called the r_e_c_u_r, based on a Raspberry Pi:
www.20.piksel.no/2020/11/21/r_e_c_u_r-an-open-diy-video-sampler/
And Gieskes' Arduino video sampler: https://gieskes.nl/visual-equipment/?file=gvs1#p3
****
I have been looking at indie games and their construction. Small-scale video games marry tech and art in a unique way.
On the Rain World game design (youtube.com/watch?v=sVntwsrjNe4): the artist/developer Joar Jakobsson advocates using programming to create illusions, finding pragmatic rather than purist solutions. For their creatures, a physics + AI engine comes first, and then a computationally expensive, more complex stylized layer is added on top of that. Performance-optimization concerns appear to help the creative development of the game. Another insight: because they are working with made-up creatures, no one can fault their representations (we could tell if a horse was moving funny).
On the Inside rendering (youtube.com/watch?v=RdN06E6Xn9E&t=1885s): the talk covers various tricks to create the illusion of fog, quickly render flashlight cones, etc., often in a struggle against 8-bit values that leave bands on the screen. The tricks involve adding different kinds of noise, down/up sampling, and using smooth minimum functions for lighting to eliminate the banding. The rendering pipeline, with its bandwidth constraints, and the artistic atmosphere are intertwined. Interestingly, this 2.5D game fixes the camera, which lets the artists working on the game know exactly what the player will see. They also mention how many effects they “stole” from other game developers, and they share theirs in an open-source ethos.
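The talk's actual shader code isn't reproduced here, but the two ideas I noted can be sketched in a few lines (my own illustration, under the assumption that "smooth minimum" refers to the common polynomial smin, and that the anti-banding noise is sub-quantization-step dither):

```python
import random

def smooth_min(a, b, k=0.1):
    """Polynomial smooth minimum: behaves like min(a, b) when the inputs
    are far apart, but rounds the transition over a width of roughly k,
    avoiding the hard crease that a plain min() leaves in lighting."""
    h = max(0.0, min(1.0, 0.5 + 0.5 * (b - a) / k))
    return b + (a - b) * h - k * h * (1.0 - h)

def quantize_with_dither(value, levels=256, rng=random.Random(0)):
    """Quantize a [0,1] value to `levels` steps, adding noise smaller than
    one step first. Without the noise, a slow gradient collapses into
    visible 8-bit bands; with it, the bands dissolve into fine grain."""
    noisy = value + (rng.random() - 0.5) / levels
    q = round(max(0.0, min(1.0, noisy)) * (levels - 1))
    return q / (levels - 1)
```

Note how smooth_min dips slightly below both inputs where they cross; that slight overshoot is exactly the rounded blend that hides the seam between two overlapping lights.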