One of the coolest things about Panasonic’s Micro 4/3 lens system is that it allows virtually any third-party lens to be used. Here is my AF-100 with a couple of antique Leica screw-mount lenses.
As you can see in this ungraded test clip, the old lens looks perfectly clean … There’s no discernible difference between the video shot by these 50+ year-old examples of Steampunk-looking hand-craftsmanship and the much more modern Canon FD glass I shoot with primarily. Of course, that’s an untested hypothesis; I’ll be shooting a comparison and posting it soon.
After doing some preliminary testing to get the basic feel of the Panasonic AF100, I put it through some more strenuous testing: real shoots.
One of the strengths of the 5D Mark II is its ability to capture beautiful shots under heavily backlit conditions. I was pleasantly surprised to see that – with a decent lens on the camera (in this case, the EOS 16-35mm f/2.8) – the AF100 performs beautifully. Color shift in the highlights seems to be more severe than on the 5D, but with a little bit of color grading, I was quite pleased with the shots I captured on a golf course at “golden hour.” I was also delighted to see how smooth the 60p footage looks. It is gorgeous!
The only issue I ran into with backlit shots was that the EOS adapter for the AF100 does add a fair bit of vignetting (especially at wide angles), and this can translate into fairly severe banding in the sky, as in this exterior shot. The best solution I could come up with was to simply blow out the affected part of the sky in the color grade. Also note that, in this shot, I really should have adjusted the white balance in the camera. However, despite the shot being significantly too blue, it easily color graded to a lovely, warm tone.
The next setup called for bread & butter shots: sit-down interviews. While it was a pleasure to be able to plug a microphone directly into the camera, as opposed to hassling with dual-system sound, I must say that I was a little nonplussed to see how skintones were rendered by the AF100. With the Skintone Detail level set to the factory default of “off,” and the other detail options set to “0,” skintones did indeed seem to have zero detail. Even at 60 IRE – nowhere near overexposure – the subjects’ skin has a plastic, color-shifted, very video-ish look that (at least to my eye) looks far inferior to the smooth tones I’ve gotten used to from the 5D.
When color grading the interview clips, I verified that the skintones were not overexposed, they just looked that way because of how they were rendered by the camera. I’ll be doing some testing with the Skintone Detail function to see if I can ameliorate this issue. During the color grade, I found myself pulling the midtones farther down than I wanted to, because it seemed as though any skintones over about 70 IRE were starting to look clipped and overexposed. Also, the AF100 seemed to render skin as noticeably pink, so I compensated (perhaps a little too much) by pulling them towards orange.
Since my initial tests found the AF100 to be much less tolerant of bright highlights than the 5D Mark II, I decided to try all the different gamma presets that the camera offers to see which provided the most detail.
After shooting a spotlight on a gray background, and a group of differently-textured toys (including a stuffed platypus), I determined that “Cine-Like D” was the closest thing to a “flat” setting that offered maximum dynamic range. The difference between most of the other settings was subtle, to say the least. (It’s easier to see it in the video than in these freeze frames.)
I also noticed that the banding in my spotlight shots was awful. Not surprising, given that this has always been a problem for electronic imaging, but rarely have I had reason to see it this up close and personal. (This is a PNG file exported directly from the raw footage, by the way, so you’re not seeing JPG compression.)
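Banding like this is mostly a bit-depth problem, and the arithmetic behind it is easy to sketch. The 10% figure below is an illustrative assumption, not a measurement from this shot:

```python
# Why smooth gradients band: an 8-bit codec offers only 256 code values
# per channel, so a gentle sky gradient that spans a small slice of the
# tonal range has very few discrete steps available to describe it.
levels_8bit = 256
gradient_span = 0.10  # assume the sky gradient covers ~10% of the tonal range

steps = int(levels_8bit * gradient_span)
print(steps)  # -> 25 distinct values stretched across the whole gradient
```

Twenty-five values spread across a large area of sky is exactly the kind of staircase that shows up in a spotlight falloff or a clear-sky exterior.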
My next step was to shoot the same setup with the 5D Mark II, and then compare the two clips. I was pleased to see that – apples to apples – the AF100 looks pretty darn good.
I then used Colorista II to color-grade the AF100 footage to match the 5D Mark II clip. Basically, I lowered the blacks, highlights & saturation, and raised & warmed the midtones (the AF100 footage seems to have a greener hue, even though both cameras were set to a 3200K white balance).
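For readers who haven’t used a three-way color corrector, the blacks/midtones/highlights moves described above map roughly onto the classic lift/gamma/gain model. Here’s a minimal sketch of that model – generic math, not Colorista II’s actual implementation:

```python
def lift_gamma_gain(x, lift=0.0, gamma=1.0, gain=1.0):
    """Apply a simple lift/gamma/gain transform to a normalized value (0..1).
    lift shifts the blacks, gamma bends the midtones, gain scales the highlights.
    This is a generic textbook formulation, not any plugin's exact math."""
    v = gain * (x + lift * (1.0 - x))
    v = min(max(v, 0.0), 1.0)  # clamp to legal range before the gamma bend
    return v ** (1.0 / gamma)

# Lower the blacks slightly and brighten the midtones, leaving white alone:
graded = [round(lift_gamma_gain(v, lift=-0.05, gamma=1.2), 3) for v in (0.0, 0.5, 1.0)]
print(graded)  # -> [0.0, 0.538, 1.0]
```

Note that black and white stay pinned while the midtone rises – which is why a midtone lift can warm and brighten a shot without blowing out the highlights.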
Just based on that simple test (keep in mind that I didn’t grade or transcode the 5D footage), I am confident that footage from these two cameras can be cut together virtually seamlessly.
[UPDATE: If you’d like to see similar projects I’ve shot with the 5D2 and the AF100, click here.]
The AF100, with its (removable) handle and shotgun mic mount, hand-strap and fairly traditional compact camcorder profile is, by design, far better suited to motion picture work than any DSLR. With that said, the plastic body of the AF100 feels a little flimsy after 18 months of man-handling the metal-clad 5D2.
Perhaps because I had a traditional film-school education, I quite enjoy working with still-photo lenses. The fact that the AF100’s Micro 4/3 mount accommodates a huge array of third-party lenses really appeals to me. After purchasing a few adapters (mostly in the $50 range from China-based merchants on eBay), not only can I use all my EOS glass on the AF100, but also my Canon FD (manual aperture) prime lenses, the Peleng 8mm that I bought years ago to use with a Krasnogorsk 16mm film camera, and even a couple of antique Leica lenses that I happen to have tucked in my closet.
The downside, of course, is that the Micro 4/3 crop factor is severe: the field of view of any given lens is just about half what it is on a full-frame sensor. My 20mm prime worked beautifully on the 5D2 for hand-held work and cramped interior locations. Now, when attached to the AF100, it has the field of view of a 40mm lens, but retains the depth of field of a 20mm lens. The most convenient lens I have for the AF100 is a Sigma 12-24, which behaves like a 24-48 on the AF100. The problem with that lens is that I bought it for architectural interiors, and it only opens up to f/4.0. Fine for sunny, outdoor shoots, but a challenge for available-light interiors.
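The crop-factor math is worth making explicit. Micro 4/3 has roughly a 2x factor relative to full frame, so the equivalent focal length and angle of view fall out of a couple of lines. This is a hypothetical helper – the sensor widths are nominal round numbers:

```python
import math

def equivalent_focal_length(focal_mm, crop_factor=2.0):
    """Full-frame-equivalent focal length on a cropped sensor."""
    return focal_mm * crop_factor

def horizontal_fov(focal_mm, sensor_width_mm):
    """Horizontal angle of view in degrees for a given sensor width."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

# A 20mm prime frames like a 40mm lens on the AF100's ~2x crop:
print(equivalent_focal_length(20))         # -> 40.0
# Same lens, very different coverage: full frame (~36mm wide) vs Micro 4/3 (~17.3mm):
print(round(horizontal_fov(20, 36.0), 1))  # -> 84.0 degrees on full frame
print(round(horizontal_fov(20, 17.3), 1))  # -> 46.8 degrees on Micro 4/3
```

The depth-of-field caveat in the paragraph above is the part the arithmetic doesn’t capture: the glass is still a 20mm, so it renders like one even though it frames like a 40mm.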
On the plus side, I finally get to use my Peleng 8mm. An 8mm lens is ridiculously fisheye-distorted on a full-frame sensor, but on the AF100, it looks like a pretty cool ultra-wide lens. It has a tad more distortion than I care for, but until I find a good substitute for my 20mm, this will be my go-to wide-angle lens on the Panny. UPDATE: The diffraction in the 8mm lens renders the image unacceptably soft. The lens also catches a ridiculous amount of flare in virtually every situation. The Sigma 12-24 is a much better bet.
It’s very possible to get shallow depth of field shots with the AF100. A 50mm lens at f/1.4 still delivers razor-thin DOF, just with more of a closeup feel than I’m used to. Ditto a 135mm lens wide open at f/2.5.
Under The Hood
I’m very pleased to see that one of my favorite geeky functions on the 5D2 – manual white-balance – has been incorporated into the AF-100 (where it’s called “Variable White Balance”). Panasonic actually made it extremely convenient to access too, which is a pleasant surprise.
After being able to easily dial in the various scene setting components (contrast, saturation, etc.) in the 5D2, I do find the AF-100 profile settings a little limiting. I’m going to have to do some experimenting to see which of the various gamma curves is most useful in real-world settings, and I’m not sure I have much control over anything else.
Anything besides framerate, that is! The ability to effortlessly dial in any framerate up to 60 fps on the AF-100 feels like a decadent luxury. I can’t wait to shoot some slo-mo!
Since the 5D2 is a still camera, the shutter speed controls are front and center. Many’s the time I’ve inadvertently tweaked the shutter speed off of 1/50 or 1/60 without realizing it. On the AF-100, shutter speed is more of a “set it and forget it” function. You can either set it manually (e.g. to 1/50), or select the “180 degree” function, and then never worry about it again, secure in the knowledge that no cinema purist will be able to criticize your motion blur, regardless of your framerate.
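The “180 degree” setting just automates the classic rule of thumb: the shutter stays open for half of each frame interval, whatever the framerate. A quick sketch of the arithmetic:

```python
def shutter_from_angle(fps, shutter_angle=180.0):
    """Shutter speed in seconds for a given frame rate and shutter angle.
    A 180-degree shutter exposes each frame for half the frame interval."""
    return (shutter_angle / 360.0) / fps

for fps in (24, 30, 60):
    seconds = shutter_from_angle(fps)
    print(f"{fps} fps -> 1/{round(1 / seconds)} s")
# 24 fps -> 1/48 s
# 30 fps -> 1/60 s
# 60 fps -> 1/120 s
```

That’s why the camera can safely “set it and forget it”: as the framerate changes, the shutter speed tracks it, and the proportion of motion blur per frame stays constant.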
I’ve more or less gotten used to looking at the 5D2’s histogram to check my exposure, but I’ve gotta say that the waveform pop-up on the AF-100 was like a breath of fresh air. Finally, I can actually expose a shot with confidence!
With that said, I will say that the Cine-Like D gamma that the AF100 offers will take some getting used to. Given the same scene at the same exposure (see test footage video above), the 5D2 tended to lose detail in the shadows and retain it in the highlights, while the AF100 did the opposite. Mental note: if it glows, it blows … So err on the side of underexposure! UPDATE: I have tweaked the AF100 scene settings to my taste, with excellent results. See this post.
One of the biggest complaints I had about the 5D2 was the horrible aliasing it created on textures like pinstriped or ribbed clothing, brick walls, fences, etc. Panasonic claims that their camera is far superior, but honestly, I’ve been a little disappointed in my preliminary tests. Shooting a scene with a bunch of fences and air grates, the 5D2 delivered the usual vibrating-verticals that we’ve come to know and hate, but – kinda – so did the AF100. I will say that, at 100% on my studio monitor, the AF100 footage IS cleaner and has less aliasing. But by the time it gets to web resolution, it’s fairly hard to see a clear advantage.
One of my other big complaints about the 5D2 was the footage hassle. Working on Final Cut Pro, I had to either transcode or re-render everything I shot. Panasonic’s AVCHD codec might not be perfect, but at least I won’t have to transcode the files. UPDATE: Unfortunately, with Final Cut Studio, I STILL have to transcode the files to ProRes!
Based on my preliminary tests, the AF100 footage looks, overall, a little softer than the 5D2 footage, but it’s also largely free of the noise and artifacts that I’ve come to expect from the 5D2. So, in my opinion, it’s darn good video, any way you slice it. UPDATE: The softness in the footage I initially observed was due to the diffraction effect of shooting at apertures above f/16. Shooting through the “best part of the lens” – f/5.6, f/8 – the AF100 actually looks sharper than the 5D.
On this subject, there’s absolutely no comparison. Dealing with audio on the 5D2 is always a pain in the butt. Even when everything works perfectly – either with dual-system sound, an onboard microphone, or an external mixer – there’s always the stress of being unable to monitor the audio with headphones, or even to keep an eye on the levels while recording. The AF-100 is a real video camera, with XLR inputs, a headphone jack, and built-in phantom power. I was a little surprised to see that it doesn’t have an auto-levels feature. I know the audio geeks hate auto-levels, but many’s the time on hectic shoots when I’ve been glad that my Sony HDV cameras offered that option. When you’re shooting b-roll in some unpredictable environment, the knowledge that your camera is getting audio that is at the very least usable is very comforting. On the plus side, the AF100’s on-screen levels are fairly prominent, so it’s easy to tell if something is going way low or way high.
The big caveat to that, however, is that in “VFR” (Variable Frame Rate) mode, the AF100 will give you audio levels and headphone output, but will not record any audio to the card. That’s a big issue, since you could be shooting at a standard framerate (e.g. 24fps or 30fps), while in VFR mode, thinking that you’re in fixed 24p or 30p, and get home only to find that you have no audio. Panasonic should either disable the audio confidence monitors in VFR, or put some kind of display on the viewfinder. Right now, without pushing the “Dial Select” button, you can’t tell if you’re in VFR mode or not. UPDATE: Panasonic released a firmware update that puts a small “no audio” icon in the viewfinder. Problem solved.
As I shoot more with the Panasonic AF100, I’m sure I’ll find plenty of things that bug me. Certainly dealing with multiple lens adapters is not something I’m looking forward to. And, when I need oodles of shallow depth-of-field goodness, the 5D2 will always be the go-to tool. However, for day-to-day video work, I can’t wait to stop worrying about dual-system sound and crazy aliasing, and start using my AF100!
If you’re expecting some insights into work/life balance or following your bliss, sorry, I’m not talking about that kind of motivation. In the world of film and video production, motivation simply means having a justification or a reason for why something exists in the world that the viewer is experiencing.
The traditional actor’s question, “What’s my motivation?” has become a bit of a joke, and of course it refers to dramatic motivation. But every component of a video project – including sound, and particularly lighting – should be motivated as well. If an actress has a lovely hair-light, it reads as being a lot more natural and real if there’s some visible motivation – for example, a window – in the frame. Of course, one of the advantages of video is that you can tell a story in multiple shots, so as long as you see a window in one shot, it motivates light in every subsequent shot.
“Behind the scenes” decisions need to be motivated as well. In “The Pelican Brief,” there’s a quiet scene in which Denzel Washington sits on a park bench. In one shot, the White House is visible in the distant background. In a subsequent shot, Denzel is in the same position, but the director and DP have used a much longer lens, so the White House is now much larger, looming over his shoulder. This doesn’t happen by accident. In the context of the scene, Denzel’s character is feeling increasing pressure by the government. Making the White House visually more prominent as the scene progresses is motivated by the story.
The aforementioned example was actually shown to me in film school. At the time, it seemed very unlikely that anyone would put that much thought into a shot. We just zoomed in or out to get what we wanted. In fact, I remember that one of my classmates joked that we “could probably find the same thing in ‘Porky’s.'” But, you know what? We were wrong. Because now, ten years later, I absolutely consider the composition and compression of space in my shots, and I’m sure the director of “The Pelican Brief” did too.
Shot selection and structure should be motivated as well. Very often, a moving shot – such as Brian DePalma’s lengthy opening move in “Snake Eyes” – is motivated by a character who is walking through an environment. On a deeper level, shots can be motivated by the underlying narrative of the piece. Michael Curtiz, the director of “Casablanca,” liked to start scenes on close-ups, instead of the traditional wide, establishing shot. So, for example, the first time we see the character of Rick in “Casablanca,” we don’t see Humphrey Bogart’s face, we see his hand signing a check, “OK, Rick,” next to a chess board. Curtiz was motivated by a desire to give the viewer information about Rick’s personality, his position of authority, and his name, and he did it all with one shot and no dialogue.
Sound is a little different, because viewers are very used to sound design that is “non-diegetic,” i.e. that is not caused by anything that exists within the story. In the case of a traditional soundtrack, the background music is motivated only by the narrative or emotional content of the scene. Diegetic sound, by contrast, is actually motivated by a device within the story – for example, the scene in “V For Vendetta” in which V and Evey listen to a record of Julie London singing “Cry Me A River” is a perfect use of diegetic sound. The fact that the characters are experiencing the music imbues the scene with a totally different meaning than if it were accompanied by an orchestra that only the viewer can hear.
Viewers are generally very forgiving about mixing diegetic and non-diegetic sound. Certainly the genre of movie musicals – in which characters sing diegetic lyrics to non-diegetic accompaniment – makes no logical sense, but is perfectly understandable to the audience. Indeed, filmmakers often play with the confusion between diegetic and non-diegetic sound for comic effect. For example, in “Mr. and Mrs. Smith” (and other action movies) the intense soundtrack of a gun battle is interrupted by elevator music when the main characters spend a few moments in an elevator. That type of gag usually gets a chuckle from the audience. One of the earliest examples of what has now become a bit of a comedy cliché appears in Mel Brooks’ “High Anxiety.” Here, the protagonists react with confusion when the soundtrack begins, and begin looking around nervously. In a moment, it is revealed that they are hearing the soundtrack as well, because the music is actually being played by an orchestra in a passing bus.
Whether you’re looking for comic relief or not, the fundamental principle here is that being aware of motivation – regarding all the elements in your project – will make your video more effective and more cinematic. So, instead of making choices based simply instinct or aesthetics, ask yourself whether you’d be able to explain to a skeptical crew member why you want something the way you want it. Why this lens? Why this framing? Why this depth of field? Why this prop? By challenging yourself in this way, you’ll think more about the creative choices you make. And, the more thought out and well-considered your production is, the higher its overall quality will be.
If you’re just getting into – or considering getting into – video production, how you edit your projects will be just as important as how you shoot them. And, just as you need to be aware of the strengths and weaknesses of the camera you use to capture your footage, you need to consider carefully the platform on which you’ll be assembling that footage.
For years, my advice to video production newbies was the same: Get a Mac, and put Final Cut Pro on it. The software and hardware are made by the same company, so there are no compatibility issues, and you don’t have to buy any additional hardware.
The basic premise – that Apple software and hardware work together to provide a less error-prone editing environment – is still valid, but the big picture isn’t as simple as it was a few years ago.
Traditionally, PCs have been for “suits,” and Macs have been for “creatives.” However, just as the line between suits and creatives has become blurred in the past few years, the demographics of Mac owners and PC owners have become less homogenous.
Perhaps the most telling indication of this shift is that Adobe’s new creative suite of programs – CS5 – actually runs faster on out-of-the-box PCs than on out-of-the-box Macs. Moreover, not only do Apple computers cost more than PCs of equivalent horsepower, but the nVidia video cards that accelerate the performance of CS5 aren’t even available as an option on new Macs, meaning that they have to be purchased aftermarket, driving up the machine cost even more.
I bring up Adobe CS5 for two reasons. First, almost every video editor has to use Photoshop at some point, and usually quite frequently, which makes its performance an issue. Secondly, CS5 includes Premiere Pro, a traditional runner-up to Final Cut Pro, which now outperforms Final Cut on several key metrics, most notably render time and native support of HDSLR footage (which has to be transcoded to be effectively used by Final Cut Pro).
Only Steve Jobs’ inner circle knows what’s planned for Final Cut, but since Apple has spent the last few years focusing the vast majority of its resources on developing the iPod/iPhone/iPad family of gadgets, pro users have been left scratching their heads and wondering whether vague promises of “awesome” updates will come to fruition, or whether Final Cut Studio will be left to wither on the vine while Apple pursues higher volume, higher profit-margin consumer electronics.
Another new element in the equation is the emergence of truly high-quality open-source (free) software. The 3D modeling and animation program Blender 3D includes a full-featured video editor, complete with node-based keyer and effects, and the Academy Award- and Emmy-winning editing software Lightworks (used on blockbusters like “Shutter Island”) will soon be released in an open-source version. Numerous other open-source editing programs are being developed by smaller enterprises as well. Any one of these programs (particularly Lightworks, in my opinion) could prove to be a real threat to Final Cut Studio and its $1,000 pricetag.
While Blender and some of the smaller open-source editors run on Mac, Lightworks will initially only be available on PC, and a number of other open-source editors run exclusively on Linux workstations, which are usually built from PC hardware.
As of now, there’s no clear winner. Apple has synergy and a long track record on its side, while PCs are more cost-effective, and offer more software options, including the unleashed Adobe Premiere Pro. The good news is that this means you can stick with whatever you already have, and make it work. The bad news is that if you’re looking to invest time and money into a system you’ll be able to use for years to come, you may as well flip a coin. Video editing is complex, and every software package has its own way of approaching all the different moving parts. There’s a substantial learning curve associated with switching platforms, and it becomes difficult (or impossible) to access projects created in a different system.
I’m not ready to jump ship from Apple yet, but I’m keeping a close eye on the horizon. If Apple leap-frogs Premiere with a truly impressive Final Cut Studio update, I’ll be happy to stay where I am. On the other hand, if Lightworks or Premiere has a substantial advantage in power, quality, or flexibility by the time I’m ready to buy another computer, I will seriously consider switching sides. My advice to novices would be to avoid paying too much for anything at this point: if you have a Mac, use iMovie, or get an old copy of Final Cut on eBay. If you’re on a PC, pick up a “lite” version of Premiere and see how you like it. Either way, don’t get too comfortable, because the next year or two are likely to bring substantial changes to the editing playing field.
I recently received an email from a person who couldn’t afford to buy a new mic or mixer, and was thinking about buying used equipment on eBay.
I would be wary of buying used microphones on eBay … People tend to hold on to microphones until they start to deteriorate, and then sell them. Depending on what you’re doing, my suggestion to you would be to not economize on both a mixer and a microphone, but to get a decent shotgun mic now, and then a mixer or stand-alone recorder later. Without the amplification from an external mixer, the audio recorded by a DSLR won’t be as clean, but I think you’re better off buying a microphone that you’ll be able to use for years than spending almost as much on a used mic that’s going to crap out on you in short order.
One big warning though: DSLR cameras do not provide phantom power, so many shotgun microphones simply will not work when plugged directly into the camera. You have to make sure to buy a microphone (new or used) that will accept batteries.
I’m a big fan of Audio-Technica microphones, because they provide excellent value. Here’s a video (not done by me) that compares three different low-cost AT mics. This is a more detailed version of what I would tell you myself.
If you decide to buy a used mic and mixer to avoid the phantom power issue (virtually any mixer will provide phantom power), I would suggest looking for any Shure mixer that’s in decent shape. Three inputs are fine, and whether the meters are analog or digital doesn’t really matter (it’s more of a style choice than a functional consideration).
3.5 years ago, I paid $10,000 for a 1/3″ HDV camera (Sony S270U) that I’ve barely used since buying the 5D2.
The fact that the 35mm video revolution is starting at a price point under $5K is unbelievable.
Sure, I was also a little nonplussed to discover that I wouldn’t be able to use my EOS glass with the AF-100 I just pre-ordered. So, I got on eBay and bought a small set of lovely Canon FD primes for a couple hundred bucks, plus two adapters (at $49 each) that will allow me to use said primes, plus the lenses from my antique Leica cameras, on the micro four-thirds system.
If Sony includes their much-hyped EVIL autofocus technology on their answer to the AF-100, it’ll be awesome. But in this industry, you can either be first or best. Panasonic is the first to market with a 35mm video camera, and I can’t wait to get my hands on it.
HDSLRs won’t be “over,” but they will not be the only option. Nor will the AF-100. Sony, et al. will surely bring innovations to market with their competitive products.
For surreptitious b-roll, ultra-low-light work, and student projects, HDSLRs are a great choice. For narrative and commercial work, let’s be honest: aside from the image quality, they suck. I want XLR inputs, I want proper monitoring, and I don’t want to have to worry about aliasing, moiré or dual-system sound. If the Atomos Ninja ProRes recorder delivers as advertised (on-board ProRes 422 recording for $995), the codec issue will be a moot point.
The bottom line is that this thing is a Panavision mini-Genesis. $5K for a camera that delivers features you couldn’t touch for under $100K a few years ago is a steal.
I’ll kill the suspense by answering the title question right now. Yes, Scribus is the open-source InDesign. It is a full-featured page layout program capable of handling anything from a business card design to a lengthy, hyperlink-filled, interactive PDF publication. It lacks the fit and finish of its commercial brethren QuarkXPress and Adobe InDesign, particularly in the areas of precision and flexibility, but overall it’s an exceptionally impressive piece of software. And, of course, you can’t beat the price … Scribus is available free of charge for Mac, Windows and Linux systems.
Here you see Scribus 1.3.7, as it appears on the 10.1″ screen of my HP Mini 5103. The document you see there is a Montessori elementary curriculum outline that I transcribed from a hand-written original for my wife. Although Scribus does have the capability to create tables, this particular diagram was so complex that I decided to create it from scratch. I also figured that this would be a better test of the program’s real-world performance than a brochure or other document with relatively few separate elements. For this project, each line is a separate element, and each subject heading and curriculum component is a separate text box.
At this point, I should mention that I make no claims to significant design ability. In college I worked as a graphic designer for a Sir Speedy print shop, and doubled as the layout monkey for the college newspaper I edited, but for the past twelve years, I have focused on commercial video and photography, so I don’t exactly have a design portfolio. However, I’ve been using page layout programs since the early 1990s (since about PageMaker 3.0, if memory serves), and am familiar with the toolset of both Quark and InDesign, as well as the general workflow of document design.
The learning curve for Scribus is quite reasonable. The program feels as though it is designed by the people who use it, meaning that the most commonly-needed navigation tools (e.g. layers, zoom in/out) are accessible and conveniently located. Drop-down menus conform to industry standards, and provide the functionality I expected. (One exception is the Step and Repeat or Multiple Paste edit function, which Scribus called “Multiple Duplicate” and puts under the Item menu).
In Quark or InDesign, you can use a control palette that dynamically adjusts to context: if you select an image box, it will show you image controls, and if you select a text box, it will show you text controls. Scribus doesn’t have this feature. Instead, it uses a multi-function Properties box that requires a lot of manual switching between tabs (geometry, text, line, etc.). On the plus side, virtually every control function is available through this one window, which is actually a lot more convenient than the multiple palettes and tiny sub-menus I’m used to working with in InDesign.
Working with text in Scribus – particularly numerous small text boxes, as in this test document – is a bit of a chore. Scribus has a story editor function, which probably works very well for long-form text, but since I was only typing a couple of words at a time, I didn’t want to use it. Typing directly in a text box though, I found that my cursor would disappear if I used the arrow keys to move backwards (to correct a typo, for example). More annoying still, font attributes set for text would disappear if the text was overwritten, meaning that I had to continually reapply styles, or take care to leave a letter of the old word in place before typing the new one.
I also noticed that the frame edges of selected items disappeared when I tried to align guides to them (for example, once I got one text box into position, I wanted to snap a guide to it, so that I could snap all the other text boxes in that row to the guide). This is a minor detail, but since guides are one of the few elements that I didn’t see any way to position numerically, it caused some headaches.
Scribus offers character and paragraph styles, but they’re a tad buggy. For example, I created a paragraph style for the curriculum text boxes, and assigned a hotkey to it. When I selected some text and pressed the hotkey, nothing happened. When I clicked on the style in the style palette, nothing happened again. Finally, when I went into the Properties dialog, clicked on the Text tab, and selected the style, it was applied.
The biggest shortcoming of Scribus is precision. In page layout, a millimeter is a mile, and – to Scribus’ credit – the program offers numeric positioning of elements to four decimal places. Unfortunately, for everything other than manually typing in coordinates, there’s a degree of slop that will be quite disconcerting to anyone accustomed to InDesign or Quark. For example, in the image below, the rules had all been snapped to the guides, and the text had been centered. Notice that the rules aren’t actually on the guides at all, and the text is not actually centered.
Trying to get text to center properly was really frustrating. In some cases, a space left at the end of a line would throw it off visually (InDesign and Quark compensate for this automatically), but in other cases, there were no spaces at the end of the lines, and the text still refused to center properly. Also, Scribus doesn’t offer any function to align text vertically within its box, which meant that each row of text boxes for this chart had to be more or less eyeballed into place. For this project, I can live with it, but for a more demanding client, this lack of precision could easily turn into a nightmare.
Once the project was finished, I was very pleased to see how easy it was to export. Scribus offers extensive pre-press options, and comprehensive PDF capabilities. The option to export as an image directly from the main menu is most welcome as well.
All in all, I was very impressed with Scribus, and look forward to using it more. Everything important – i.e. the ability to freely manipulate text and graphics – is in place, and I suspect that the technical shortcomings I struggled with will be ironed out in future updates. Indeed, as I mentioned, I’m running the Linux version of Scribus on a netbook, so I don’t know if these bugs appear at all on other platforms, or on more powerful systems.
As of right now, because of the precision issues, I would still classify Scribus as the low-budget alternative to the big pro apps, rather than as a fully competitive alternative. But that shouldn’t be taken as a slight or a dismissal; Adobe and Quark have spent many years and untold dollars developing their software packages. The fact that a group of volunteers has gotten together and made something that’s 90% as good is quite remarkable. More to the point, for the many students and aspiring artists who can’t afford name-brand products, Scribus offers a truly viable means to realize design concepts. Some kid in the middle of the Third World, working on an ancient school computer, can now download some free fonts and a full-featured page layout program and put together a design portfolio. In an industry in which tools have largely defined the quality of work for 500 years, the fact that the barriers to entry have fallen to the level of a basic computer and a few minutes of internet time is extraordinary. I commend the Scribus team for their hard work, and I would encourage anyone who needs page layout software to check it out, and to spread the word about Scribus.
To try Scribus for yourself, visit the official website: http://www.scribus.net.
I was recently sent the following question:
In your video you talk about possibly having a Shotgun mic and 3 wireless mics hooked into the Azden FMX-42 field mixer at the same time, and the Azden mixer hooked into your Canon 5D. Is the 5D capable of recording 4 channels of sound? Will it import into Final Cut Pro with all 4 channels? If so, what cables do we need?
Great questions. Unfortunately, the 5D – like most video cameras – only records two channels of audio.
Indeed, the Azden – and the vast majority of other field mixers – mix all their inputs down to two channels of output: “left” and “right” channels. This is because, until very recently, ALL cameras only recorded two channels of audio. One of the functions of a field mixer has historically been to “mix” – i.e. blend – multiple inputs (the standard used to be three, but now it’s four) down to two outputs, so that a camcorder could capture sound from more than two sources.
Traditionally, the breakdown has been some variation of the following, depending on the setup and the type of program:
– Boom on one channel, lav mics on the other
– Camera mic on one channel, off-camera mics on the other (not an option with the 5D)
– Most important speaker (e.g. host) on one channel, all other audio (e.g. guests) on the other
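Incidentally, if you ever end up with separate recordings instead of a hardware mixer, the same two-channel mixdown can be approximated in software after the fact. This is just a rough sketch using ffmpeg’s amerge filter; the filenames are hypothetical, and the generated test tones simply stand in for real boom and lav recordings:

```shell
# Software stand-in for a field mixer's two-channel mixdown (hypothetical filenames).
# These two one-second sine tones simulate a boom track and a lav mix:
ffmpeg -y -f lavfi -i "sine=frequency=440:duration=1" -ac 1 boom.wav
ffmpeg -y -f lavfi -i "sine=frequency=880:duration=1" -ac 1 lavs.wav

# amerge interleaves the two mono inputs into one stereo file:
# boom becomes the left channel, the lav mix becomes the right.
ffmpeg -y -i boom.wav -i lavs.wav \
  -filter_complex "[0:a][1:a]amerge=inputs=2[out]" -map "[out]" stereo_mix.wav
```

This mirrors the first breakdown above: boom on one channel, lavs on the other.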
Now, with all that said, there are some cameras that record four channels of audio – my Sony S270U, for example. However, I have never been able to get Final Cut to recognize or import channels 3 and 4, so I’ve found the capacity to be of limited utility! (In fairness, I have seen Final Cut digest 4-channel Panasonic data just fine.)
For my bigger 5D shoots, I hire a friend of mine who runs audio for a number of nationally-televised reality shows. He uses a $6,000 Sound Devices mixer/recorder that has 8 analog inputs and records a total of 12 tracks to independent WAV files on an internal hard drive. With that kind of hardware, there’s no issue: I take a stereo feed from the mixer to the camera as a safety/reference track, and use Pluraleyes to sync it to the WAV files in Final Cut.
There are some other – less pricey – alternatives to the top-of-the-line Sound Devices, but there’s still a pretty big gap between the regular two-channel-output field mixers that cost a few hundred dollars, and the ones that do internal recording.
If you can get away with mixing your four inputs down to two outputs, you’re good to go. If you can’t, you might want to look into hiring a local audio operator with a Sound Devices HD recorder (or something similar). I wouldn’t recommend trying to rent a high-end mixer to use yourself, as the learning curve on a machine like that is pretty steep. Indeed, on any shoot where the budget allows, you should hire a professional audio op.
Perhaps my idea of taking a SUSE Linux Enterprise netbook and seeing how effective it can be as an open-source creative workstation wasn’t such a good one. But, to be honest, it’s going fairly well.
When I first got my HP Mini 1503, I was appalled by the tiny text, choppy internet video playback, lack of any available software, and general uselessness.
That was three days ago. Since then, I’ve learned a lot.
1) To avoid the feeling of going blind when I used the default display settings on the 10.1″ screen, I changed two things: the font size (under Control Center – Appearance – Fonts – set all to 13 point), and the display brightness (Screensaver – Power Management – On Battery Power – Reduce Backlight Brightness – uncheck the check box). Websites still have to be manually magnified with ctrl++, and I’m sure the brighter display setting drains the battery faster, but at least the OS interface is comfortable now.
2) The seeming lack of available applications stemmed from the fact that, during initial setup, the computer had prompted me to sync with Novell Customer Center. Since the wireless internet hadn’t been set up yet, this process failed. Once I did this step manually, I gained access to a bunch of useful apps.
In Linux, programs aren’t installed the way they are under Mac or Windows. First, you add a “repository” that contains the software you want; the repository acts sort of like a catalog or index, and from it you select the applications you actually want to install on your system. SUSE Linux Enterprise is a close cousin of OpenSUSE (another flavor of Linux), so the repositories for OpenSUSE work fairly well with it. Unfortunately, as I learned about this process, I wound up installing so much garbage that my system started crashing, and I had to do a system recovery. About five times.
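On SUSE, that repository-then-package flow is driven by the zypper package manager. A minimal sketch of the three steps, where the repository URL and alias are placeholders rather than a real address, and which would normally be run as root:

```shell
# Register a repository (URL and alias are placeholders, not a real repo):
zypper addrepo http://example.com/repo/ my-repo
# Re-read the catalogs of all enabled repositories:
zypper refresh
# Install a package (and its dependencies) from those catalogs:
zypper install vlc
```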
3) The biggest obstacle I encountered was dealing with “restricted formats.” Linux is open-source, and in order to stay free, it can’t contain any copyrighted intellectual property, such as the licensed code required to play common video files. In theory, installing a media player and associated codec libraries would be easy. It isn’t.
After spending about two days following directions from various forums, and struggling to install the interdependent bits of code that would allow me to do something as simple as play an MP4 file, I stumbled across the work of Mr. Petersen, who has thoughtfully compiled a repository specifically for SLED machines. He has spent a lot of time on this, and (quite reasonably) requests a $20 donation to access the fruits of his labors. It is absolutely worth it!
Within 5 minutes of the PayPal transaction, I had successfully installed the full version of ffmpeg (the software that most media players use to encode and decode video; previously, I had only been able to get it to play the audio portion of MP4 files) and a hassle-free VLC media player, along with the Cinelerra video editor I had been looking for, the Scribus page layout app, and Blender 3D, all of which worked perfectly. Finally, I was able to work with conventional video files!
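For anyone in the same spot, a quick way to sanity-check that a full ffmpeg build really handles video (and not just the audio track) is to round-trip a generated test clip. The filenames here are arbitrary, and the native mpeg4 encoder used below ships with ffmpeg itself:

```shell
# Generate a one-second test pattern and wrap it in an MP4 container:
ffmpeg -y -f lavfi -i "testsrc=duration=1:size=320x240:rate=24" -c:v mpeg4 clip.mp4

# Decode it back out; if this succeeds, MP4 demuxing and decoding are working:
ffmpeg -y -i clip.mp4 -c:v mpeg4 roundtrip.avi
```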
While I was vainly trying to get multimedia working, I did have some fun working with one of the most distinctive features of Linux: the command line interface. Clearly, I have a lot to learn about this, but the power and granular control of a good old-fashioned terminal interface appeals to me greatly.
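For a small taste of that power, this is the kind of three-command pipeline the terminal makes trivial; the folder and files below are just a made-up example:

```shell
# Build a tiny example folder, then count the MP4 files anywhere inside it:
mkdir -p demo/clips
touch demo/clips/a.mp4 demo/clips/b.mp4 demo/notes.txt
find demo -name '*.mp4' | wc -l    # prints 2
```

Chaining small single-purpose tools like this is exactly the granular control a point-and-click interface can’t match.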
My next challenge will be getting GIMP – the open-source equivalent to Photoshop – to work. I’ve tried to install it twice, and both times it installed but failed to open. I’ll also be test-driving some HTML editors and an FTP program or two. My goal is to be able to do as much work as possible on this little computer, using exclusively open-source software. So far, so good!