I find it interesting to observe the difference between what we see with the eye and what the camera records. They’re different mechanisms, different algorithms. For example…
To the eye at sunset the other night, looking south toward the Santa Monica Mountains, I saw the bright light of the setting sun reflecting off a handful of windows that happened to be tilted in just the right direction. They were like golden jewels, pinpoints. The mountains were still lit at their tops while below them the world slipped into shadow. On the valley floor I could see red & white pinpoints, the traffic along Valley Circle Boulevard coming and going as folks got home from the day.
It was quite the scene. I grabbed my camera.
The camera still sees the golden glints and the traffic, but what stands out is what my brain had completely ignored and filtered out: the maze of power lines and the giant pole right in the middle. Sure, they were there when I was looking. But I wasn’t focused on them, optically or mentally; I was focused on what was in the distance. Unconsciously my brain filtered out all of those objects. Yet when the camera locks them into place in a two-dimensional image, they’re impossible to ignore.
What else is right there in front of us, but ignored, filtered out because we’re focused on something shiny beyond it?
Rather than beat you over the head with the analogy, the philosophical implications are left as an exercise for the student…
Funny. I was out in the forest this morning at 7 am for a bird survey, but with my camera too, as it was a fantastic morning. For the main view I took, I had to wait several times because the stream of traffic on the dual carriageway in the distance was reflecting the morning sun.
Maybe they are still more eyecatching than the lovely still pond in the foreground 🙂