JudyKay Posted January 20, 2016

This is not really a PTE software question; rather, it is a best-practice question. I have not been future-proofing my slideshows well. Yes, I zip them up, usually with 16:9 aspect ratio, 8-bit, 1920 x 1080 images, and have been for years. Some of those shows don't scale well or look ideal on my high-res laptop. Not bad at all, just less than they could have been if I had future-proofed better. Nothing to worry about yet. PTE does its job perfectly, but I have not done as well.

But now that 4K screens are ubiquitous and 8K is beginning to appear, now that we have 10-bit color with Rec. 2020 and the DCI-P3 color space on the horizon, and now that we are hearing about HDR10 video with a brightness range of 0.05-1000 nits or more, I wonder about a backup strategy that might future-proof shows better than what I describe above. Of course I have all those old images at original resolution, 16-bit originals stored safely in Lightroom, but I don't fancy finding and re-exporting them and then matching them to the correct filenames (some of which I altered for numerical ordering) in PTE to recreate a show.

Your thoughts? Just go with the current times? The past was beautiful, but the present and future are only getting better, so should we just shrug and let the past be the past? Should we go ahead and save images at 32 bit and export them at 8K, 7680 x 4320? Good grief, our Canon 5Ds produce only a paltry 5760 x 3840, and most of us are capturing originals at 12 or 14 bit. That's all we have to start with. Should I swing by B&H and pick up a new Hasselblad and shoot 60 MP at 8956 x 6708? If 8-bit images hold about 11 stops of dynamic range and the human eye can see 10-14 stops, how much are we and our grandchildren going to miss?
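To put the resolution gap in concrete terms, here is a quick back-of-the-envelope comparison (a minimal Python sketch; the pixel dimensions are the ones quoted in the post above):

```python
# Megapixel counts of the display targets and cameras mentioned above.
resolutions = {
    "Full HD export":      (1920, 1080),
    "4K UHD":              (3840, 2160),
    "8K UHD":              (7680, 4320),
    "Canon 5D capture":    (5760, 3840),
    "60 MP medium format": (8956, 6708),
}

for name, (w, h) in resolutions.items():
    print(f"{name:22s} {w} x {h} = {w * h / 1e6:5.1f} MP")
```

Running this shows the nub of the question: an 8K frame needs 33.2 MP, so a 22.1 MP capture cannot fill it pixel for pixel, while 4K (8.3 MP) is comfortably within reach of current cameras.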
Lin Evans Posted January 21, 2016

LOL - I wouldn't bother producing any show with images at a resolution higher than about 8 megapixels (3504x2336) unless doing a very deep zoom on an image, to 1:1 or greater. As for dynamic range, even though the human eye may be able to detect up to 14 stops or even more, it doesn't do that all at once, but rather in multiple scans over time. In a single view, without the eye changing its internal, brain-wired "f-stop", we can only detect about ten stops at most, and today's printers can handle less than that range. Throw in any significant animation and we begin to reach the limits of present-day systems to render smoothly when we use ultra-high-resolution images.

Even though displays may offer 8K resolution, it's mostly wasted: unless we are using huge wall-sized displays, as long as the pixels are smaller than the eye/brain can detect as discrete points of light, we have reached the limits of visual resolution. This is why images look so good even at lower resolution on cell phone displays; the same resolution displayed on, say, a 60-inch LCD might look pretty jagged. I believe we have long since reached the "practical" resolution limits of conventional photography. Higher-resolution cameras are now used more to encompass more real estate in a single capture than to render greater detail of smaller objects. The new ultra-resolution spy cameras take super-high-resolution video with which the observer can cover a wide geographic area and zoom to a resolution of perhaps a couple of inches per pixel, rather than see blackheads on the human nose.

There is little value, in my opinion, in rendering greater resolution than the unaided human eye can digest. When we stand face to face and talk with one another, we do not see what we would in a magnifying mirror, so why would we want this level of detail in our slides? If we do, then unless we are photographing moving subjects, we can simply take overlapping telephoto frames and stitch multiple images, giving us the option of zooming into the subject beyond what we could see with the unaided eye from a few feet away.

I think high-resolution sensors will soon be used for holographic projection rather than super-resolution 2D frames. Once we perfect sensors that work more like the Foveon, rather than the much more common Bayer color filter array (CFA) which interpolates color, we can eliminate the halos and chromatic aberrations that are the bane of present-day digital captures, and present sensor resolution will be more than sufficient to produce razor-sharp images at reasonable pixel dimensions, rather than the megapixel race that presently dominates marketing. It is exceedingly expensive to produce flawless, very large silicon wafers, and because of the costs, cameras such as those using the 100-megapixel Sony sensor in the new Phase One back are never going to be in common production. The demands on system resources already exceed the value realized by these super-high-resolution systems, IMHO...

Best regards,

Lin
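Lin's argument that extra pixels are wasted once they drop below the eye's angular resolution can be checked with a rough calculation. A sketch, assuming the common ~1 arcminute rule of thumb for 20/20 acuity and a 3 m viewing distance (both assumptions, not hard physiological limits):

```python
import math

def resolvable_pitch_mm(viewing_distance_m, acuity_arcmin=1.0):
    # Smallest pixel pitch a viewer can distinguish at this distance,
    # assuming ~1 arcminute of angular resolution (rule of thumb).
    theta = math.radians(acuity_arcmin / 60.0)
    return viewing_distance_m * 1000.0 * math.tan(theta)

def pixel_pitch_mm(diagonal_in, h_pixels):
    # Physical pixel pitch of a 16:9 panel of the given diagonal.
    width_mm = diagonal_in * 25.4 * 16.0 / math.hypot(16.0, 9.0)
    return width_mm / h_pixels

# A 60-inch 1080p panel viewed from 3 m (assumed viewing distance):
print(f"panel pitch: {pixel_pitch_mm(60, 1920):.2f} mm")  # ~0.69 mm
print(f"eye limit:   {resolvable_pitch_mm(3.0):.2f} mm")  # ~0.87 mm
```

Since the panel's ~0.69 mm pitch is already below the ~0.87 mm the eye can separate at 3 m, resolution beyond 1080p on that screen at that distance is, as Lin says, largely invisible; move closer or enlarge the screen and the balance shifts.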
Contentawarephil Posted January 21, 2016

Interesting topic. Some of what you propose is already doable. I can make a 6000x3375 16:9 EXE file from images straight out of my camera, and at 100% quality it is difficult if not impossible to tell the difference between the JPEG and the 16-bit RAW conversion from which it was created. The downside is the size of the EXE: 100 images come in at about 1.5 GB. If a 50 MP DSLR is used, then your goal of "8K" is a possibility. This leaves no room for pans and zooms, etc., but in some cases that is not required. Using 6000x4000 (or bigger) images in a "4K" EXE would allow for some animation.

Maybe now would be a good time to start making 3840x2160 projects? They can be made to play on all monitors up to 3840x2160 but still play as normal on 1920x1080 and similar monitors.

Regarding bit depth, we can only work with what is available to us at any point in time. I have never used anything other than 14-bit RAW in camera, but this always converts to an 8-bit JPEG. Will that change any time soon?

Phil
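Phil's 1.5 GB figure implies roughly 15 MB per 6000x3375 JPEG at 100% quality. A rough estimator (the 0.75 bytes-per-pixel figure is an assumption fitted to his number, not a measured constant):

```python
def show_size_gb(n_images, width, height, bytes_per_pixel=0.75):
    # Crude EXE size estimate; 0.75 B/px roughly matches a
    # 100%-quality JPEG of photographic content (assumed value).
    return n_images * width * height * bytes_per_pixel / 1e9

print(f"{show_size_gb(100, 6000, 3375):.1f} GB at 6000 x 3375")  # ~1.5 GB
print(f"{show_size_gb(100, 3840, 2160):.1f} GB at 3840 x 2160")  # ~0.6 GB
```

By this estimate, dropping from 6000x3375 to the 3840x2160 projects Phil suggests cuts the show to well under half the size while still filling a 4K display.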
Urmas Posted January 24, 2016

There is definitely a trend of old films available on DVD being remastered to full HD quality. Sure, they look better compared to the meager DVD resolution. On the other hand, resolution is not the only thing that matters. If the content is rubbish, then even gigapixels can't save it. And if the content has a wow factor, I can still enjoy "old" DVD resolution.

Regarding multimedia slideshows made with PTE: I personally prefer the 16:10 aspect ratio. It is pretty close to the golden ratio and works better for my photos. I have remastered some of my 4:3 shows (almost all early projectors had a 4:3 aspect) to 16:10. I like to use the full screen, probably because so far projector resolution has been somewhat limiting. I remember the days when we had only 1024 pixels to play with. 1280 is better, and 1920 is much better. On a (small) computer screen that might already be reasonable. On the other hand, I just upgraded my 1920-pixel-wide 24-inch monitor to a 2560-pixel-wide 27-inch display (109 ppi). I wouldn't go back.

If you present 1920 pixels of projector image on a 5 m wide screen, you will have pretty lousy resolution. If my calculations are right, it will be only 9.7 pixels per inch. The human eye can resolve much more, so the resolution is limiting. There is an interesting analysis of human vision and 4K here. Their conclusion is that 4K resolution is required to produce maximally sharp and seemingly continuous pixels for a majority of viewers. I tend to agree. However, 8K seems to be overkill for most uses except on extra-large screens.

I am pretty sure that in the future we will enjoy 4K slideshows, especially when 4K projectors become reasonably priced and available to ordinary people like us. If the content is worth it, there will be reasons to remaster it for 4K. And if the content is worth viewing (has the wow factor), we can still enjoy it "as it was".
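Urmas's projection figure checks out. A one-liner to verify it (screen width and pixel counts are the ones from his post):

```python
def screen_ppi(h_pixels, screen_width_m):
    # Pixels per inch when h_pixels span a screen of this width.
    return h_pixels / (screen_width_m / 0.0254)

print(f"{screen_ppi(1920, 5.0):.1f} ppi")  # ~9.8 ppi, matching his ~9.7
print(f"{screen_ppi(3840, 5.0):.1f} ppi")  # a 4K projector doubles it to ~19.5
```

Even at 4K, ~19.5 ppi on a 5 m screen is far below what the eye can resolve up close, which is why viewing distance does the rest of the work in projection setups.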