The way we handle screens is all wrong.
Looking at all the “Black Friday” advertisements, I’m struck by how cheap big-screen TVs have become. Of course, this is a trend that has been going on for a long time. Just as Moore’s law predicts that transistor counts will double every two years, I suspect there is a similar “law” at work for displays.
If the data were available, I would love to chart the dropping cost per pixel over the last 30 years. Bottom line: more and more of our visual world will be dynamically controlled.
At this point I usually go off thinking about laser retinal displays, but we still have a few good years of playing with old-fashioned screens before then.
The problem right now is that we are unreasonably committed to the principle that one device has one screen, period.
- What happens when I’ve found a video on my phone that I want to show everyone on the big-screen TV? (Answer: Try to find the same site and perform the same search on whatever computer is hooked up to the TV. Ugh.)
- What happens when I’m in a presentation and want to show everyone a chart that’s on my laptop? I’d like to quickly throw it up on the projector, but that entails unplugging cables and figuring out resolution/DVI/VGA issues. Not worth it for one image, so instead I hold the laptop above my head and hope everyone can see it. Lame.
- What happens when the person sitting next to me has a screenful of information that I need a copy of? It’s ridiculously hard to get that information to move the 30 inches over to my machine. Email it? Send it via chat?
- What happens when I’m walking by one of those advertisement screens by NYC subways and I want more info about the product? Right now I have to remember a web page and maybe look it up later. Why can’t I grab information right onto my phone?
What we *want* is to separate the concept of the screen from the concept of the device. When sitting in your living room, there are several media sources (cable TV, game systems, phones, browsers, other applications) and there are several screens (on phones, on laptops, on TVs, on refrigerators, etc). It should be a seamless gesture to move content from one display to another.
Talking with Ben Brightwell the other night, I realized that this is tantamount to a hardware implementation of the MVC paradigm. For you non-nerds, MVC stands for “model-view-controller”, and has come to be standard good practice for software. It means that you separate data (the model) from the display of that data (the view) from the manipulation of that data (the controller). If you build your software application this way, it’s no problem to have the same data showing up on (for example) a web page, an iPhone app, a desktop application, or a refrigerator.
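For the nerds, here’s a toy Python sketch of the idea (all the names are mine, chosen to fit the living-room example): one model, any number of views attached to it, and a controller that changes the data without ever touching a screen.

```python
class Model:
    """Holds the data and notifies attached views whenever it changes."""
    def __init__(self):
        self._channel = None
        self._views = []

    def attach(self, view):
        self._views.append(view)

    @property
    def channel(self):
        return self._channel

    @channel.setter
    def channel(self, value):
        self._channel = value
        for view in self._views:  # every screen redraws itself
            view.render(self)


class PhoneView:
    """Renders the model on a small screen."""
    def render(self, model):
        self.last_drawn = f"[phone] now watching: {model.channel}"
        print(self.last_drawn)


class TvView:
    """Renders the *same* model on the big screen."""
    def render(self, model):
        self.last_drawn = f"[TV] now watching: {model.channel}"
        print(self.last_drawn)


class RemoteControl:
    """The controller: manipulates the model, never touches a view."""
    def __init__(self, model):
        self.model = model

    def change_channel(self, value):
        self.model.channel = value


model = Model()
phone, tv = PhoneView(), TvView()
model.attach(phone)
model.attach(tv)  # "throwing" content onto another screen = attaching a view
RemoteControl(model).change_channel("cooking show")
```

Note that adding the refrigerator display is one `attach()` call; the model and controller don’t change at all. That’s the property we want from the hardware.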
What we have now in hardware is similar to what used to happen in software: the model, view, and controller are all so tangled together that it is near impossible to change anything.
It’s not clear what the perfect solution will be. High speed wireless will help. Bluetooth will probably play a part. Maybe one day all screens will come with wireless connections and there will be a smart protocol that lets them simply “announce” to the rest of your network that they are available for display. Gestures will need to be standardized for moving windows (or whatever) from one screen to another.
I’m happy that the Oblong guys are re-thinking interfaces, including screens. It used to be one computer for one person, sitting in front of one little screen, dragging around one little mouse, running one little program. Those days are gone, and it’s time that the hardware caught up.