The major benefit of twins, the capability to return to base after an engine failure, was initially problematic for a carrier-based airplane. Pilots made the approach power-on, in level flight, at the lowest safe speed and minimum altitude, cutting power just as the airplane was about to settle onto the deck. This was essentially the same technique used for a short-field landing ashore, where it assured a touchdown very close to the approach end of the runway, leaving the maximum distance for stopping.
The pilot of a twin-engine propeller-driven airplane with one engine inoperative had to take into account the minimum control speed for that situation. Since the engines were almost always mounted out on the wing, when one failed the other produced a significant yawing moment that had to be counteracted by the rudder. Rudder effectiveness varied with airspeed, so below some speed the pilot could no longer keep the airplane from turning with the good engine at full power. Worse, the turn generated a roll, because the wing on the outside of the turn moved faster, and therefore generated more lift, than the slower-moving wing on the inside. Since the ailerons also lost effectiveness with decreasing airspeed, loss of control in roll would follow as well if engine power was not immediately reduced.
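The moment balance described here can be sketched numerically. What follows is a deliberately simplified illustration, not a certification calculation: it balances only the good engine's yawing moment against the maximum rudder moment, ignoring bank angle, sideslip, and propeller effects, and every number is made up purely to show the scale; the function name and all parameters are hypothetical.

```python
import math

def vmc_estimate(thrust_n, engine_offset_m, rho, wing_area_m2, span_m, cn_rudder_max):
    """Estimate minimum control speed from a simple yawing-moment balance.

    With one engine failed, the good engine yaws the airplane with a moment of
    thrust * lateral offset. The rudder's opposing moment scales with dynamic
    pressure (0.5 * rho * V^2), so the speed at which the two moments just
    balance is the minimum control speed for this simplified model.
    """
    # thrust * offset = Cn_max * 0.5 * rho * V^2 * S * b  ->  solve for V
    return math.sqrt(2.0 * thrust_n * engine_offset_m /
                     (rho * wing_area_m2 * span_m * cn_rudder_max))

# Illustrative (invented) numbers, roughly the scale of a WWII-era twin:
v = vmc_estimate(thrust_n=8000.0, engine_offset_m=3.0,
                 rho=1.225, wing_area_m2=45.0, span_m=15.0, cn_rudder_max=0.04)
print(f"estimated Vmc: {v:.1f} m/s")
```

Because the rudder moment grows with the square of airspeed while the engine moment does not, the formula also shows why more thrust or a wider engine spacing pushes the minimum control speed higher.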
The tyranny of minimum control speed was also imposed on an airplane taking off, but it was more draconian at sea than ashore. If a pilot taking off from a runway lost an engine while still below minimum control speed, he would simply close the throttle on the good engine and reject the takeoff. The outcome varied with the length of the runway and, if it came to that, the landscape beyond its end, but it was rarely as dire as that faced by the pilot of an airplane less than one hundred feet above the sea, launched from an aircraft carrier at less than minimum control speed with full power on the operating engine. If he reduced power on that engine to maintain control, he almost certainly would not have enough for level flight, much less to climb or accelerate to a speed at which he could use all the power available.
Having a second engine was therefore not as good a deal for a pilot flying from an aircraft carrier as it was for one flying from an airport. Although it enabled him to divert to a land base or get back to friendly ships and ditch if an engine was lost in flight, it doubled the risk of an engine failure during a critical, albeit brief, phase of takeoff and landing. Twin-engine airplanes also tended to be bigger than singles, whereas compactness was a virtue on an aircraft carrier.
Nevertheless, there were benefits beyond the ability to continue flight after an engine failure. The easy way to improve the performance of fighter airplanes is to incorporate more powerful engines in new or existing designs. Increasing the power of a piston engine basically meant adding more and/or bigger cylinders and supercharging. By the late 1930s, engine manufacturers were approaching the limits of existing technology: incremental horsepower increases were yielding ever-smaller gains in speed at the cost of greater engine complexity. The obvious next step was the twin-engine fighter, which doubled the power available without the time and expense of developing a new engine.
The U.S. Navy solicited proposals for a twin-engine carrier-based fighter in 1937, but none of the submissions were deemed acceptable. In 1938, the Navy had Lockheed modify an Electra Junior with a fixed tricycle landing gear and a tail hook. Designated XJO-3, it was delivered in October 1938. On 30 August 1939, Navy pilots made 11 takeoffs and landings aboard Lexington (CV-2) to evaluate it from both the twin-engine and tricycle-landing-gear standpoints.
In spite of having as much or a little more installed horsepower than Vought's XF4U Corsair, Grumman's twin-engine XF5F Skyrocket was slower and couldn't climb as high, although its rate of climb through 20,000 feet was essentially the same. As a result, the Navy elected to proceed with the F4U for development and production. Nevertheless, the Bureau of Aeronautics continued to be interested in a twin-engine carrier-based fighter. On 30 June 1941, Grumman received a contract for two XF6Fs and two XF7Fs. The F7F program suffered from the priority given to F6F Hellcat development, but the prototype Tigercat finally flew for the first time on 3 November 1943.
As soon as Grumman test pilots flew the XF7F-1, they realized that it did not have a big enough fin and rudder for an acceptable minimum control speed in the event of an engine failure on takeoff or during a wave-off. Design of a bigger fin and rudder was initiated, and it was introduced on the F7F-3. Although all models of the F7F were carrier qualified, concern about the need for a single-engine wave-off must have been low, since the flight manual contained no description of the technique for a single-engine carrier landing. In any event, the Tigercat never deployed with an air group on a carrier, probably due to its size as much as anything else.
One of the Navy’s first carrier-based jets, the McDonnell FD-1 Phantom, was a twin mainly because the Westinghouse engine that powered it wasn’t very big. It grew into the F2H Banshee, the first twin-engine airplane to regularly deploy on carriers. Two of its contemporaries, the Douglas F3D Skyknight and the North American AJ Savage, were also multi-engined; the AJ had three engines, two turning propellers and one jet. Like the F7F Tigercat, the Skyknight was primarily operated by the Marines and made very few deployments. The Savage did deploy, because of its critical long-range nuclear-strike mission, but because of its size it was generally held in readiness at nearby naval air stations during a carrier’s deployment. These jets were less limited by minimum control speed in a one-engine-inoperative situation than previous twin-engine propeller-driven airplanes because their engines were located close to the centerline; the AJ was slightly better off if one of its piston engines failed, because its jet engine was located on the centerline.
However, North American was concerned about minimum control speed, as evidenced by the size of the AJ’s original fin and rudder, made even bigger because carrier basing necessitated a fairly short airplane, which reduced the tail’s moment arm. Unfortunately, the rudder proved to be too big for high-speed flight and caused a fatal accident when the tail broke off in a flight-test maneuver. The empennage was redesigned to increase the size of the fin, reduce the size of the rudder, and delete the dihedral in the horizontal stabilizer.
The Navy did regularly deploy one twin-engine propeller-driven airplane at sea for more than two decades, beginning in the mid-1950s on axial-deck carriers, in part because Grumman had learned a lot about operating twin-engine airplanes from aircraft carriers with the F7F program. Its S2F (S-2) was as short-coupled as carrier airplanes get, so in order to size the rudder both for the single-engine takeoff and wave-off condition and, relatively speaking, for high-speed flight, it had a two-piece rudder. Up and away, the forward portion was used only for directional trim and only the aft portion moved with the rudder pedals. For takeoffs and landings, the two portions could be selected to move as a unit, doubling the width of the rudder and reducing the S2F's minimum control speed to one suitable for carrier launches and wave-offs.
The introduction of steam catapults, angled decks, and descending, constant angle of attack approaches also reduced the degree of difficulty of one-engine-inoperative takeoffs and landings.
By the time Grumman engineers designed the F-14, they felt confident enough in their handling qualities analysis to widely separate its engines to provide a "tunnel" where two of the big Phoenix missiles could be carried side-by-side.
Finally, click HERE for a great tale of how a second engine and a naval aviator saved an airplane...