WnSoft Forums

Lin Evans

Moderator
  • Posts: 8,206
  • Joined
  • Last visited
  • Days Won: 12

Everything posted by Lin Evans

  1. Hi Lyn, Thanks so much for the feedback. It's amazing how something as seemingly unrelated as an issue with the sound card software can cause such headaches. But in retrospect, we have sometimes had even mp3 files which were not "standard" format cause no sound at all when even one of many was defective. The simple "fix" was to normalize them by loading them in a sound editor such as Audacity and exporting them again. Was it the "Conexant - CHDRT64.sys" driver file which was the problem? Best regards, Lin

Sound Devices
-------------
Description: Speakers (Conexant SmartAudio HD)
Default Sound Playback: Yes
Default Voice Playback: Yes
Hardware ID: HDAUDIO\FUNC_01&VEN_14F1&DEV_506E&SUBSYS_17AAA001&REV_1000
Manufacturer ID: 1
Product ID: 100
Type: WDM
Driver Name: CHDRT64.sys
Driver Version: 8.54.0047.0000 (English)
Driver Attributes: Final Retail
WHQL Logo'd: n/a
Date and Size: 9/3/2012 06:26:02, 1609376 bytes
Other Files:
Driver Provider: Conexant
HW Accel Level: Basic
Cap Flags: 0x0
Min/Max Sample Rate: 0, 0
Static/Strm HW Mix Bufs: 0, 0
Static/Strm HW 3D Bufs: 0, 0
HW Memory: 0
Voice Management: No
EAX 2.0 Listen/Src: No, No
I3DL2 Listen/Src: No, No
Sensaura ZoomFX: No
  2. Thanks Jean-Cyprien, That's a perfect example of how multiple small png sections can be used and manipulated to build a curved surface. It takes lots of time to build and adjust but the effect is very, very realistic. Very nice example of this... Best regards, Lin
  3. There is a way to actually simulate curved surfaces such as the edge of a coin using multiple PNG sections which work well if the intent is to leave the coin in such a position as to have the edge surface face the observer. It's done by breaking the photo of the coin edge into about five pieces consisting of one long and four shorter sections. The shorter sections are then "hinged" with their centers of rotation moved in such a manner as to simulate a curve when viewed straight on or even at a relatively shallow angle. Which way to proceed depends on how realistic you want to make the animation and actually how quickly the coin is flipped. For even a reasonably fast flip it's generally not necessary to even bother with the "thickness" because the coin passes through the zero position so quickly that the human eye can't even discern the fact that it had zero thickness at that exact time. The value of looking at the cube tutorial is to learn how pan Z works and to see the value of using frames in the construction for controlling placement and motion while acting as the "glue" which holds the components together. Best regards, Lin
  4. Hi Brian, You have chosen a project which will have a bit of a learning curve, but which I believe will be a fun adventure for you to pursue. To understand how 3D space in PicturesToExe works, I think you might be helped a great deal if you download and study my tutorial on creating and manipulating a cube with PTE. Essentially, what you will have to do for this animation, the way you want to see it, is to create the two faces for your coin (heads and tails) as PNG objects with transparency, and also create the edge thickness object as a PNG with transparency. You may want to actually photograph a rather thick coin from the edge, then size and manipulate that photo into a PNG transparency. Once you have these components, you will essentially create a three dimensional object in the PicturesToExe Objects and Animations screen using these components in very much the same manner as in the creation of a cube. Think of the coin as a very thin "cube" which has a slightly different shape and with parts of it invisible. In the 3D space of PTE the object will still have six sides even though it's actually round and has only two important faces rather than six. The other four faces are the coin edge as it would be viewed if you rotated the coin in your hand. The heads and tails portions will be adjusted (positioned) in space via the "pan Z" parameter with the edge PNG file occupying the area between. You will use a construct very similar to the one I describe for the cube in the tutorial linked below, being mindful to check off the proper view for the front and back (you will understand this after watching the tutorial). You will also have to adjust the spacing between the front and back of the coin to suit the thickness you decide on for the edge views. Here's the link to the cube tutorial. A very similar construct can be created for your coin using the controlling frame and sub frames. 
It's possible to modify the construct to a four rather than six sided affair, but you will discover the fun of experimenting and which way will best suit your own animation as you conceive it for your project. Scroll down to the bottom portion past the red descriptives to number 18 and download the AVI tutorial. The construct for the coin will involve a great deal more experimentation and use of additional objects such as black rectangles, etc., for masking the edges, and the use of changing transparency for the edges as the coin is rotated. It's not a simple animation to get perfect, but you will have fun experimenting. http://www.picturestoexe.com/forums/index.php?/topic/7901-pte-made-easy-tutorials-continuously-updated/ Best regards, Lin
  5. Hi Lynda, Tell us a bit about the laptop - brand, amount of RAM etc., Generally you can get this information by running a Windows resident file called "dxdiag.exe" which Windows "should" be able to find if you just right click on the white Windows icon (the one with four little square white divisions) then from the dropdown menu choose "Run" and then enter the word "dxdiag" without the quotation marks and press the Enter button. What will be of interest is what the Display and System tabs reveal... Best regards, Lin
  6. Hi Lynda, The first question I would have is do each of these copied files on the "pendrive" (I'm assuming this pendrive is a USB flash memory stick?) run from that drive correctly when it's connected to the desktop? The next question is, were these all single executable files or were some of them created using the "Safe Executable File for Internet" feature of PTE? If they are not all single exe files and some were created as Safe Executable Files, are both the data file and exe file resident on the pen drive and were both transferred to the laptop? Finally, does the laptop have sufficient resources to run the executables in terms of dedicated or shared resource video card, etc.? Once these questions are answered perhaps we can chase down the source of the error(s). Best regards, Lin
  7. Just to clarify - you don't need to actually add the AVI file at all to the objects list, just add the video as an audio then follow Dave's advice to use the offset to match where you want the video's audio to begin. You can also visually "drag" the audio track of the video where you want it to begin from the timeline which, in turn, changes the offset automatically. You are treating the video as if it were an audio file only (which it actually is via PTE's feature). Best regards, Lin
  8. Hi John, If the mask is rectangular (such as the internal rectangle mask in PTE) it's very easy to do this. Just use a solid "rectangle" and put it on the layer beneath the mask and stretch it slightly larger than the mask rectangle. Screen capture below shows how to construct this: In this case I just put the identical map into the mask, enlarged and moved to get to the display desired then created the solid rectangle and placed it on the layer beneath the mask and sized and positioned it to suit... Lin
  9. Hi Robert, Dave has pointed you to the manual which describes the procedure but so others who read this thread understand, there is no envelope for audio in a video. To get an envelope you must perform two steps. First mute the audio on the video, then add the video as an audio which is one of the two choices when you choose "Add an audio file" from the "Audio Tab" in Project Options. Where you see "Files of type," the default being "Audio files" click on the little blue box to the right with the small down arrow icon and you will see an additional choice "Video files." Click on this choice, navigate to the video and click and only then will you have the envelope and complete sound control over the video sound. You have muted the audio of the video then added the video as an audio and PTE then extracts the sound track from the video and provides complete envelope control. Best regards, Lin
  10. Another way to do this is to use the great Bezier Curve software written by one of our talented French users Michel Pouchin. It works in a somewhat similar fashion to the way Dave describes in terms of creating a route, but actually can generate literally thousands of keyframes nearly instantly and makes creating a tiny executable map code portion as simple as falling off a log!! Jean Cyprien has written an incredible user guide which I only wish I could translate into English. It's available as an EXE and I have no clue how Jean Cyprien does some of the examples he programs with PTE and Bezier Curve but it's absolutely brilliant. The basics of using Bezier Curve can be learned in five minutes but the more esoteric features are a bit harder to digest. I only really tried using this software last night but was able to create a very simple map route example in just a few minutes. I know Igor was considering adding a Bezier Curve feature to a future version of PTE. This software is something anyone interested in route generation might want to look into I think. It doesn't leave a "trace" color along the way, but allows moving a PNG object smoothly along virtually any track and curve. I was amazed at how quickly it's possible to accomplish this. Of course the program has many, many more uses and some very interesting complexities I've yet to fully understand, but I think users interested in moving an object in complex ways should definitely look into the myriad possibilities. Best regards, Lin
  11. Interesting - I've never really thought about the small differences between the nominal 3:2 aspect ratios. I suspect that Nikon probably didn't either, possibly thinking that in the vast majority of cases, such as in prints, the image would be cropped to a different aspect ratio anyway. It's nice though that the newer DX cameras have a true 3:2 even if they haven't yet implemented that for the FX models. Yes, Canon took a bit of a beating when the DXO folks demonstrated that their new top-of-line 50 mp sensor didn't even measure up to the DX Nikon models for IQ determiners. I'm perfectly happy with my D7200 for my own personal purposes. I may eventually get another 36mp such as the D810 when prices drop and if I have the money. For the majority of my own use, I prefer the crop sensor for the telephoto boost, and for my landscapes I use the Sigma Merrill cameras which give me the equivalent of 30 mp in resolution but with far better sharpness across the full image than I can achieve with any current Nikon or Sigma lens on my Nikons. For several years I used medium format digital but the costs became prohibitive since I've retired. I would love a new Hassy 80 mp but it's far out of reach these days. Best regards, Lin
  12. Hi Dave, I've got a bunch of things I have to do today so won't get a chance to test it. Here's a link to one of my D5300 nef files if you want to try it. NX2 doesn't work with my d7200 files.... http://www.lin-evans.org/dave/DSC_004.NEF Best regards, Lin
  13. Yes, even though it appears intuitively that 6016 by 4016 "should" be a perfect 3:2, it's actually 3:2.003. I'm unsure why Nikon needs the extra pixels on the sensor. Seems like they would have made it 6000x4000 just like the D5300, D7100 and D7200... if by no other way, just by masking off the additional pixels. Since it's not possible in a practical sense (actually you can have sub-pixels) to have a fractional portion of a pixel, I guess they couldn't have made it 6016 x 4010.6. Who knows what their engineers were drinking before final design... probably some kind of sake - LOL Best regards, Lin
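The arithmetic above is easy to check. A short Python sketch (the function name is mine; the pixel counts are the ones discussed in this thread):

```python
# Express a sensor's aspect ratio as 3:x, so a true 3:2 sensor yields x == 2.
def aspect_vs_3_2(width_px, height_px):
    return 3 * height_px / width_px

# D600: 6016 x 4016 pixels
print(round(aspect_vs_3_2(6016, 4016), 4))  # 2.0027, i.e. roughly 3:2.003
# D5300 / D7100 / D7200: 6000 x 4000 pixels
print(round(aspect_vs_3_2(6000, 4000), 4))  # 2.0, an exact 3:2
```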
  14. Hi Dave, Sorry - I've been away for a couple hours - I'll have to think about this one for a bit and get back... Later: I would assume that in order to crop precisely, either complete remapping or perhaps another technique might have to be used, such as is done in Photoshop with "Edit" "Fill" "Content Aware," where those pixels around the sides which are affected by the crop are remapped, such as by borrowing the values from adjacent pixels and duplicating those values. This is honestly only a wild guess because I really have no idea how the designer of the tool would proceed. One has to wonder whether perhaps the entire image is resized then cropped, or only a content aware fill is used such as in the suggestion above. Since only a scant few pixels are necessarily involved, and since all the changes can be done on the extreme periphery, it's difficult to know how designers of the firmware or software might elect to proceed. Interestingly - my two 24 megapixel Nikons (D5300 and D7200) each produce a 6000x4000 pixel image which works out precisely to 3:2. However, when I bring one of my full sized images into PTE with the aspect ratio set for 3:2, use a white background and expand the view to 1000% (type it in manually from 500%) and look at the periphery of the image, I see approximately a couple pixels of white around the right and left sides. The bounding rectangle sits slightly outside of this view. When I set it to Automatic, everything seems to line up perfectly. At 500% view there is a small amount of background visible on the left but not on the right. Assuming that the original image is being sized appropriately, that leaves either the background in question, or the bounding rectangle math may not be sufficiently precise for the enlargement. So the answer to your question may have to do with more than just the original image's pixel count on the horizontal... 
Looking back at my calculations, it appears that for the D600 I substituted 6015 pixels on the horizontal for the 6016 which dPReview gives as the horizontal pixel count. If that's the case and there are 4016 pixels on the vertical, doesn't that work out to a perfect 3:2 aspect ratio?? I'm not certain what's actually happening now because it appears to be a correct number of both horizontal and vertical pixels to make a perfect 3:2 aspect ratio.... I looked up your D700 specs and dPReview gives 4256 horizontal by 2832 vertical, which works out to an aspect ratio of 3 to 1.99624, so the D700 definitely isn't a perfect 3:2, but the D600 appears to be a perfect 3:2... (EDIT) Actually it isn't a perfect 3:2 but rather 3:2.003... bummer.... L
  15. Eric, The differences, as Judy was explaining, concern the amount of data being processed. Your camera is a 12 megapixel model with a tiny sensor, and the file size which must be processed is tiny compared with, say, a Nikon or Sony 36 megapixel dSLR, or even one of the Canon 50 megapixel files which has over four times the pixel count of the consumer model you are discussing. For example, RAW files from my Sigma DP2 Merrill are close to 60 megabytes in size. Processing these internally in the camera is a bit different than processing the relatively tiny files from the 12 mp FZ-150. Best regards, Lin
  16. Hi Dave, Yep, that should do it for an exact 3:2 ... L
  17. Hi Dave, Isn't that why Igor included the "Low Quality of Resizing" check box in the Properties Tab of O&A? It seems we had some discussions about this a long time ago - can't remember exactly. Yes there are microscopic differences in aspect ratio between Canon and Nikon and as it relates to PTE it's a good point. On the other hand I've never worried much about it - I just size the image visually in PTE and usually drag it out to a very slight crop for normal display. Of course that won't work so well when one is doing precise geometry for animations so I see where you're coming from. Best regards, Lin
  18. Hi Eric, Not really - there are myriad differences and many types of "cameras." There are Color Filter Array (CFA) sensors some with and some without anti-aliasing filters. There are Foveon sensors which rather than using a CFA have a three layer sensor with no color filter array or AA filter which detects RGB differentially on each of the three layers of silicon. There are various types of "film," etc. There are large arrays of sensors in some cameras and tiny sensors in others. I would say absolutely no - a "camera" is not a "camera." Saying "a camera is a camera" is like saying a boat is a ship... there's a bit of difference between a row-boat and the Queen Elizabeth. Most would agree on this... Best regards, Lin
  19. Hi Dave, I guess we would have to look at a large number of Nikons to be certain, but for example this is the reported aspect ratio of my older D800E which I traded off a few months back, per dPReview, which shows both 5:4 and 3:2 aspect ratios (improperly, in my opinion, referred to on dPReview's chart as "other resolutions"):

Other resolutions: 6144 x 4912, 6144 x 4080, 5520 x 3680, 4800 x 3200, 4608 x 3680, 4608 x 3056, 3680 x 2456, 3600 x 2400, 3072 x 2456, 3072 x 2040, 2400 x 1600
Image ratio w:h: 5:4, 3:2
Sensor size: Full frame (35.9 x 24 mm)
Sensor type: CMOS
Processor: Expeed 3
Color space: sRGB, Adobe RGB
Color filter array: Primary Color Filter

Crop factor and aspect ratios are somewhat related - of course not exactly the same thing, but the sensor is so close to 36x24 mm that for all practical purposes I would refer to it as 3:2 myself. The D600 specifications are given below. It also appears that Nikon (where these figures come from) considers it 3:2, so let's see how the math works out. Edited later: 3/6015 as X/4016 ... solving for "X" we get 12048/6015 = 2.0029925187032418952618453865337. That's probably about as close to 3:2 as we can get with little light wells I suspect.... ??? (this is the wrong horizontal pixel count - looking again I should have used 6016, which then works out to a perfect 3:2 aspect ratio...) Best regards, Lin

D600: 6016 x 4016
Other resolutions: 4512 x 3008, 3936 x 2624, 3008 x 2008, 3008 x 1688, 2944 x 1968
Image ratio w:h: 3:2
Effective pixels: 24 megapixels
Sensor photo detectors: 25 megapixels
Sensor size: Full frame (35.9 x 24 mm)
Sensor type: CMOS
Processor: Expeed 3
Color space: sRGB, Adobe RGB
Color filter array: Primary Color Filter
  20. Hi Dave, Some are, some are not - it depends on the Nikon... At least the DX Nikon cameras I have are 1.5x crop according to Nikon... Could you explain further? My DX Nikons are D7000, D5300 and D7200 - they have 15.8 x 23.6 mm sensors, while 35mm film and FX digital sensors measure 24 x 36mm. Doesn't that equal a 1.5x crop? Best regards, Lin
  21. Hi Eric, But this is about dSLR's - the consumer cameras like the FZ150 are a totally different animal... Best regards, Lin
  22. Crop Factor Sensor VS Full Frame (36mm x 24mm) so called 35mm Sensor

This is just a brief explanation of a number of issues and questions facing the digital photographer today. I will include an easy “formula” for determining the actual number of pixels painting a subject when a so called “Full Frame” sensor image is cropped to the field of view (FOV) of various so called “Crop Factor” sensors. Also I will discuss advantages and disadvantages of each in a simplified fashion.

Today, there are a number of different “sensors” for digital cameras, ranging from quite expensive professional model dSLR (digital single lens reflex) models to much less expensive consumer models. What I want to concentrate on are primarily the dSLR models rather than the mirrorless and consumer cameras. The reason for this explanation is that there appears to be a lot of confusion about which, if either, is “best,” and why some cost so much more than others…

A so called full frame dSLR has a sensor approximately the size of 35mm film, which is 36mm in width and 24mm in height. This works out to an aspect ratio of 3:2 and is found in the most expensive dSLR models made by the major camera manufacturers. 36x24 mm (approximate) full frame sensors are found variously in dSLR models by Canon, Nikon, Pentax and Sony. Below are some typical sensor sizes:

Canon: 1X (FF), 1.3X and 1.6X
Nikon: 1X (FF), 1.5X
Pentax: 1X (FF), 1.5X
Sony: 1X (FF), 1.5X
Sigma: 1.7X, 1.5X
Olympus: 2X

Above are typical dSLR’s and their various iterations of sensors. There are also 2.7X sensors such as found in the Nikon 1 series and in a few other major manufacturer models, and of course a host of 5x, 6x, etc., which are found in consumer model digital cameras. So what does this all actually mean? There are two main aspects to digital sensors which are commonly used in advertising. First there is the term “megapixel.” For the purposes of this explanation, we will assume that a megapixel is one million pixels. 
So a camera which is called a ten megapixel camera has ten million pixels used to capture the image, while a camera which has thirty six megapixels uses thirty six million pixels to capture an image. The actual size of the sensor pixel, which is the tiny “well” which gathers light on the silicon, determines what is called pixel pitch. For example, a tiny sensor the size of perhaps your smallest fingernail might have eighteen million pixels packed onto this tiny space. This would represent a very dense “pixel pitch,” while a full frame sensor on say a Canon camera could also have eighteen million pixels but would have comparatively much larger pixels. So what is the difference between larger and smaller pixels? Larger pixels gather more light. Having more light means, generally, a better “signal to noise” ratio, which in practical terms means better low light performance, allowing greater amplification of the signal without a corresponding huge increase in noise. So a full frame dSLR with an eighteen megapixel sensor will generally allow the photographer to shoot in lower light conditions using higher ISO numbers and still get superior images which can be enlarged a great deal more than the same image from, for example, a tiny sensor in a consumer level digital camera. So we come to the first advantage of the larger sensor: better low light performance with correspondingly less noise in the image. So what then is the advantage of the smaller sensor, such as the so called “crop factor” sensors in the dSLR lines, if any? Primarily what is referred to as “telephoto boost.” A 100mm lens attached to a full frame dSLR gives a 100mm field of view. But when this same lens is attached to say a 1.6x crop factor Canon dSLR, it gives a 160mm field of view which would encompass the same geography as a 160mm lens mounted on a FF camera and shot from the identical position at the identical subject. 
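To make the pixel-pitch idea concrete, here is a rough Python sketch. The pixel counts and sensor widths below are ballpark figures I have assumed purely for illustration, not manufacturer specifications:

```python
# Rough pixel pitch in microns: sensor width divided by horizontal pixel count.
def pixel_pitch_microns(sensor_width_mm, horizontal_pixels):
    return sensor_width_mm * 1000.0 / horizontal_pixels

# 18 MP full frame sensor, roughly 5184 pixels across 36 mm (assumed figures):
print(round(pixel_pitch_microns(36.0, 5184), 2))  # about 6.94 microns per pixel
# 18 MP small consumer sensor, roughly 4896 pixels across 6.2 mm (assumed figures):
print(round(pixel_pitch_microns(6.2, 4896), 2))   # about 1.27 microns per pixel
```

With these assumed numbers, the full frame pixel wells come out more than five times wider, which is the "bigger pixels gather more light" point in a nutshell.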
The actual focal length has not changed, it’s still 100mm, and the true “magnification” has not changed, but the subject appears much closer because a large proportion of the field of view as would be seen from the full frame camera has been lost to the crop. Because the sensor itself is smaller, the light falling on this crop frame sensor from the circle of light gathered by the lens does not include much of the geography which would be seen had there been a 36x24 mm sensor rather than the perhaps 22.3 x 14.9 mm 1.6x Canon crop sensor. Assuming the same number of pixels, but a greater pixel density (more pixels squeezed into a smaller space), there will be the same number of pixels painting a smaller subject area. But because the pixels are smaller and therefore gather less light, there will be more noise and correspondingly less image quality, especially when shooting in low light conditions. Advances in sensor technology and electronic processing have closed the gap a great deal between the actual quality of image which can be captured by the full frame sensor versus the dSLR crop factor sensor. There are much smaller differences today than in the past, and because of the telephoto “boost” many photographers who shoot distant or small subjects (wildlife, birds, etc.) often use one of the crop factor sensors rather than a full frame sensor. On the other hand, landscape photographers and portrait studios often favor the full frame sensor because of the admittedly now smaller advantages in image quality, as well as the correspondingly better ability to control depth of field and better wide angle performance of the full frame sensor. Before getting into the math on how to determine pixel counts when cropping full frame images to crop factor fields of view, let me briefly talk about advantages to some wildlife photographers when using crop factor sensors with dSLR’s. I am primarily a wildlife photographer who also shoots landscapes. 
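The field-of-view arithmetic above reduces to a single multiplication; a minimal Python sketch (the function name is mine):

```python
# Full-frame equivalent field of view: true focal length times crop factor.
def equivalent_focal_mm(focal_mm, crop_factor):
    return focal_mm * crop_factor

print(round(equivalent_focal_mm(100, 1.6), 1))  # 160.0 - 100mm lens on a 1.6x Canon body
print(round(equivalent_focal_mm(600, 1.5), 1))  # 900.0 - 600mm lens on a 1.5x Nikon body
```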
I work in very rugged terrain at high altitudes and often spend a number of days in the back country shooting at altitudes above 13,000 feet and sometimes above 14,000 feet. I must carry my camera equipment plus my survival and camping equipment all on my back while climbing over boulders, traversing dangerous forty five degree angle slippery scree fields where one misstep could mean death or serious injury. Sometimes I will walk ten to fourteen miles in a day of shooting then camp and begin again before sunup and I may do this for a week or so. This is extremely hard physically and the older I get, the more difficult it has become. When I was a young man and very strong, I thought nothing of hiking twenty five or even thirty miles per day, but even then, because of the altitude and difficult terrain, every pound of equipment I had to carry was very important. Let’s put this into photographic perspective. A good full frame dSLR may weigh around three pounds. To get the images I want, I need at least 800mm focal length. A Sigma 300-800mm F5.6 lens weighs more than twelve pounds. To use this lens it must be mounted on a sturdy tripod with a relatively large and heavy tripod head. All together this combined weight (tripod, head, lens, camera) is around 23 pounds as a conservative estimate and it can’t be used effectively without the tripod. To complicate things, often I have less than five seconds to raise my camera and press the shutter before my elusive prey disappears so I must generally work hand-held. My “go to” camera these days is a 24 megapixel Nikon D7200 and if I couple this with a Sigma 150-600 mm stabilized lens, I have about an eight pound combination which gets me an equivalent 900mm hand held focal length. I shave about fifteen pounds off the weight I must carry plus a huge bulk difference and gain the ability to hand hold and quickly raise my camera and get the frame before my subject disappears. 
In addition, I get my full 24 megapixels vested in the 1.5x cropped image. Just for a few seconds, let’s calculate what I would get in terms of the number of actual pixels on the subject should I use the same lens and say a full frame 36 megapixel Nikon D810 camera and a 1.4x telephoto converter which gives me 840mm. I’m going to keep it simple with the math and let the reader see how the calculations are done: So we take the full frame size of 36x24 mm as a ratio to 36 megapixels. 36 times 24 is 864, so we have 864/36. Now when we crop this to the 1.5X we have to take the size of my 1.5x crop sensor as a ratio to “X” to determine the number of actual pixels we would have on the subject. This is 23.5 x 15.6 mm which when multiplied equals 366.6. So solving the ratio and proportion we have 864/36 is as 366.6/X. Solving for “X” we get 864X = (366.6 x 36), or X = 13197.6/864 = 15.275 megapixels painting the subject. So, using the Sigma 300-800mm lens I lose 8.725 million pixels without the 1.4x teleconverter and I lose sixty mm of focal length with it. Using the Sigma 150-600mm lens with the FF Nikon I lose a stop of light and the ability to get reasonably good hand held frames. Which then is the better tool for me? For me, experience with both tells me that the 1.5X crop factor D7200 in this scenario far outstrips the 36 megapixel D810 for my purposes. I get my full 24 megapixel resolution, and I don't have the one stop light loss of the 1.4x teleconverter plus some image quality loss because of the teleconverter. But primarily, I don't have to carry the big tripod, head and much heavier and larger lens and I don't have to set it all up to get an image of an elusive subject. So in this case, my 24 megapixel D7200 for wildlife at high altitudes is the better choice because of weight, bulk, resolution and photo quality for enlargement. Were I shooting landscapes, undoubtedly the D810 with the Sigma 300-800 f/5.6 would be the far better choice. 
My main point here is not to decide for anyone which is the better way to go, but to explain how to do the relatively simple calculations. You just take the full frame sensor size of 36 mm by 24 mm, which when multiplied gives 864, and use this as a ratio over your resolution in megapixels. Then find the actual size of your crop factor sensor in mm, multiply the width by the height, and put this number over the unknown represented by “X”. Then solve the simple ratio/proportion and you will have the actual number of pixels the full frame sensor will give you at the same field of view as your crop factor sensor. So the actual "formula" is 864/(number of full frame megapixels) as a ratio to (length times width of your crop factor sensor in mm)/"X". Solve for "X", which gives you the actual number of pixels painting the subject from the FF sensor when cropped to this crop field of view. An example below: 864/36 (full frame 36 mp sensor) is as 366.6/X (Nikon 1.5x crop), so 864X = 366.6 times 36, and X = 13197.6/864 = 15.275. So when you crop a 36 mp Nikon FF sensor to the field of view of a 1.5x crop factor, it paints that FOV with 15.275 megapixels. Hopefully this will somewhat simplify the mystery for some of how to calculate these values. Best regards, Lin
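The ratio-and-proportion recipe above can be wrapped up in a few lines of Python (a sketch; the function name is mine, and the sensor dimensions are the ones used in the post):

```python
FULL_FRAME_AREA_MM2 = 36 * 24  # 864 square mm

# Megapixels a full-frame sensor puts on the field of view of a given crop
# sensor: scale the full-frame megapixel count by the ratio of sensor areas.
def pixels_on_subject(ff_megapixels, crop_width_mm, crop_height_mm):
    crop_area = crop_width_mm * crop_height_mm
    return ff_megapixels * crop_area / FULL_FRAME_AREA_MM2

# A 36 MP full frame cropped to a Nikon 1.5x DX field of view (23.5 x 15.6 mm):
print(round(pixels_on_subject(36, 23.5, 15.6), 3))  # 15.275 megapixels
```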
  23. That's great Dave - you now have the best of the best. The only limits with this program are the imagination of the user! I'm glad it's all working correctly for you now! Best regards, Lin
  24. Hi Dave, Yes, you do need the DeLuxe version for that... Best regards, Lin
  25. Hi Dave, Go to Project Options "Advanced Tab" and simply uncheck Synchronize Soundtrack and Slides. Best regards, Lin