This assessment is based almost entirely on a 1/50th scale model in the Grumman History Archives on Long Island.
The empty weight proposed for the F-111 was even more optimistic than usual in winner-take-all paper competitions. As is customary, Grumman and General Dynamics initiated a two-pronged F-111B weight reduction study effort, the Super Weight Improvement Program and the Colossal Weight Improvement Program, even before first flight. Roughly speaking, the ground rules for the SWIP were to reduce the weight but not significantly depart from the design and mission requirements. The CWIP allowed a great deal more flexibility, basically tossing out anything imposed only by the Air Force low-altitude strike mission and preferences like the crew escape capsule.
For the CWIP configuration, Grumman engineers deleted the bomb bay and escape capsule and reduced the volume required for the main landing gear by not allowing for the large high-flotation tires required for operation from unprepared fields. That enabled them to shorten the forward fuselage by about five feet. The shorter forward fuselage presumably allowed them to delete the ventral fins, with the original vertical fin now adequate for directional stability even at high angles of attack. However, the horizontal stabilizers were slightly increased in size, presumably for improved low-speed handling qualities for the carrier approach.
All six Phoenix missiles were now carried on the fuselage, four semi-submerged and two on short pylons on the lower sides of the fuselage. This arrangement eliminated both the wing pylons and swivel mechanisms required to keep the missiles aligned when the wings were swept. I haven't yet found any information on the main landing gear configuration change required by putting two missiles on the centerline of the belly, but presumably it resembled that on either the Grumman F11F Tiger or the North American A3J Vigilante.
The center fuselage, with its engine inlets and wing mounting structure, was basically unchanged except for the main landing gear bay. The wings were also unchanged. The engines appear to have been moved forward by about two feet to restore the center of gravity after the nose was shortened.
Although the canopy appears to be bulged upwards, my preliminary assessment is that the visibility over the nose was no better than it was on the original F-111B, which was determined to be unsatisfactory. However, the lower weight would have resulted in a lower angle of attack for the same lift, possibly providing the same over-the-nose visibility improvement as the raised cockpit that was eventually required.
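For readers who want the arithmetic behind that last point, here is a minimal sketch of the lift balance; the notation is generic and assumed for illustration, not taken from any Grumman analysis:

% In steady flight (or on a constant glideslope) lift equals weight, so a lighter
% airplane at the same approach speed needs a lower lift coefficient.
\[
  W \;=\; L \;=\; \tfrac{1}{2}\,\rho\,V^{2}\,S\,C_{L}
  \quad\Longrightarrow\quad
  C_{L} \;=\; \frac{2W}{\rho\,V^{2}\,S}
\]
% In the linear range the lift coefficient scales with angle of attack, so a lower
% required C_L means a lower angle of attack and a lower nose attitude on approach.
\[
  C_{L} \;\approx\; C_{L_{\alpha}}\,(\alpha - \alpha_{0})
  \quad\Longrightarrow\quad
  \alpha \;\approx\; \alpha_{0} \;+\; \frac{2W}{\rho\,V^{2}\,S\,C_{L_{\alpha}}}
\]

Nothing here is specific to the F-111B; it is simply the standard reason a lighter airplane at the same approach speed flies at a lower angle of attack.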
The government program team elected to incorporate most if not all of the SWIP changes in the 12th F-111A and the 4th F-111B. The CWIP specification changes stayed on the drawing board until Grumman was able to apply them to what became the F-14.
By Tommy H. Thomason
Friday, November 19, 2010
Thursday, November 4, 2010
One if by Land, Two if by Sea
“One if by land, and two if by sea.” That line from Longfellow’s poem commemorating Paul Revere’s famous ride in 1775 was one of the justifications used by the Navy in 1976 to select the twin-engine McDonnell F-18 over the single-engine F-16 for its VFAX program. Some viewed it as dissembling on the Navy's part, since the desirability, much less the necessity, of twin-engine carrier-based aircraft had not been very evident up until then. In fact, although single-engine airplanes were in the minority in the air wings at the time, that was a relatively recent change from past practice. A year earlier there was still an air wing aboard Hancock (CV-19) composed almost entirely of single-engine aircraft. Now, of course, there are no single-engine airplanes in the carrier air wings.
The major benefit of twins, the capability to return to base after an engine failure, was initially problematic for a carrier-based airplane. In those days the pilot approached power-on, in level flight at the lowest safe speed and minimum altitude, cutting the throttle just as the airplane reached the point where it would settle onto the deck. This was essentially the same technique used for a short-field landing ashore, where it assured a touchdown very close to the approach end of the runway, allowing the maximum distance for stopping.
The pilot of a twin-engine propeller-driven airplane with one engine inoperative had to take the minimum control speed in that situation into account. Since the engines were almost always placed out on the wing, when one failed the other produced a significant yawing moment that had to be counteracted by the rudder. Since rudder effectiveness varied with airspeed, below some speed the pilot could no longer stop the airplane from turning with the operating engine at full power. What’s worse, the yaw generated a roll, because the wing on the outside of the turn was moving faster, and therefore producing more lift, than the one on the inside. Since the ailerons also lost effectiveness with decreasing airspeed, a loss of control in roll would result as well if the engine power was not immediately reduced.
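To make that concrete, here is a rough, simplified sketch of where a minimum control speed comes from; the notation is generic stability-and-control shorthand assumed for illustration, not anything taken from the flight manuals discussed below:

% One engine out: the asymmetric-thrust yawing moment (thrust T acting at a lateral
% offset y_e from the centerline) must be held by the rudder, whose yawing moment
% grows with dynamic pressure, i.e. with the square of airspeed.
\[
  T\,y_{e} \;\le\; C_{n_{\delta r}}\,\delta_{r_{max}}\;\tfrac{1}{2}\,\rho\,V^{2}\,S\,b
\]
% The slowest speed at which full rudder can still hold the airplane straight:
\[
  V_{mc} \;=\; \sqrt{\frac{2\,T\,y_{e}}{\rho\,S\,b\,C_{n_{\delta r}}\,\delta_{r_{max}}}}
\]

Everything in the numerator (engine power and how far outboard it acts) raises the minimum control speed; everything in the denominator (rudder power and fin size) lowers it, which is why inboard-mounted engines and bigger rudders keep recurring in the story that follows.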
Unfortunately, the minimum-control speed at the power required to climb with the gear and flaps down was almost certainly higher than the required approach speed dictated by arresting gear, which made a successful wave-off an iffy proposition.
The tyranny of minimum-control speed was also imposed on an airplane taking off, but it was more draconian at sea than ashore. If the pilot taking off from a runway lost an engine while still below minimum-control speed, he would simply close the throttle on the good engine and reject the takeoff. The outcome varied with the length of the runway and, if it came to that, the landscape beyond its end, but it was rarely as dire as that faced by the pilot of an airplane less than one hundred feet above the sea after being launched from an aircraft carrier at less than the minimum-control speed with full power on the operating engine. If he reduced power on that engine to maintain control, he almost certainly would not have enough for level flight, much less to climb or accelerate to a speed at which he could use all the power available.
Having a second engine was therefore not as good a deal for a pilot flying from an aircraft carrier as it was for one flying from an airport. Although it enabled one to divert to a land base or get back to friendly ships and ditch if an engine was lost in flight, it doubled the risk of an engine failure during a critical, albeit short, time during takeoff and landing. Twin-engine airplanes also tended to be bigger than singles whereas compactness was a virtue on an aircraft carrier.
Nevertheless, there were benefits beyond the ability to continue flight after an engine failure. The easy way to improve the performance of fighter airplanes is to incorporate more powerful engines in new or existing designs. Increasing power in piston engines basically meant adding more and/or bigger cylinders and supercharging. By the late 1930s, the engine manufacturers were beginning to approach the limits of existing technology, and incremental horsepower increases were yielding progressively smaller gains in speed at the cost of greater engine complexity. The obvious next step was the twin-engine fighter, a doubling of the power available without the time and expense of a new engine development.
The U.S. Navy solicited proposals for a twin-engine carrier-based fighter in 1937, but none of the submittals were deemed to be acceptable. In 1938, the Navy had Lockheed modify an Electra Junior to have a fixed tricycle landing gear and a tail hook. It was designated the XJO-3 and delivered in October 1938. On 30 August 1939, Navy pilots made 11 takeoffs and landings from Lexington (CV-2) to evaluate it from both the twin-engine and tricycle-landing-gear standpoints.
In parallel with this research program, the 1938 competition for a new fighter was opened to both single- and twin-engine designs. This time Grumman's design, the G-34, was considered worthy of evaluation by the Navy as the XF5F, along with single-engine designs from Vought, the XF4U-1 powered by the big new P&W R-2800, and Bell, the XFL-1, a derivative of the Army Air Corps P-39.
The XF5F, probably in consideration of the one-engine-inoperative requirement, had the engines mounted as far inboard as possible and twin vertical fins, one in each engine’s slipstream. One-engine-inoperative wave-offs were evaluated at altitude: "(A wave-off) might be accomplished (on one engine) provided the airspeed is about 80 knots or more and no more than 1/2 power on the operative engine were used." The "proper" approach speed based on stall speed, however, was defined as about 74 knots.
In spite of having as much as or a little more installed horsepower than the XF4U, the XF5F was slower and couldn’t climb as high, although its rate of climb through 20,000 feet was essentially the same. As a result, the Navy elected to proceed with the F4U for development and production. Nevertheless, the Bureau of Aeronautics continued to be interested in a twin-engine carrier-based fighter. On 30 June 1941, Grumman received a contract for two XF6Fs and two XF7Fs. The F7F program suffered from the priority on F6F Hellcat development, but the prototype Tigercat finally flew for the first time on 3 November 1943.
As soon as Grumman test pilots flew the XF7F-1, they realized that it did not have a big enough fin and rudder for an acceptable minimum control speed in the event of an engine failure on takeoff or during a wave-off. Design of a bigger fin and rudder was initiated and introduced on the F7F-3. Although all models of the F7F were carrier qualified, the perceived likelihood of needing a single-engine wave-off, and/or the concern about whether one could be flown successfully, must have been low, since the flight manual contained no description of the technique for a single-engine carrier landing. In any event, the Tigercat never deployed with an air group on a carrier, probably due to its size as much as anything else.
One of the Navy’s first carrier-based jets, the McDonnell FD-1 Phantom, was a twin, mainly because the Westinghouse-provided engine wasn’t very big. It grew to become the F2H Banshee, the first twin-engine airplane to regularly deploy on carriers. Two of its contemporaries, the Douglas F3D Skyknight and the North American AJ Savage, were also multi-engined. The AJ had three engines, two turning propellers and a jet. Like the F7F Tigercat, the Skyknight was primarily operated by the Marines and made very few deployments. The Savage did deploy because of its critical mission of long-range nuclear strike, but because of its size it generally was held in readiness at nearby naval air stations during a carrier’s deployment. These jets were less limited from a minimum control speed standpoint in the event of a one-engine-inoperative situation than previous twin-engine propeller-driven airplanes because their engines were located close to the centerline; the AJ was slightly better off if one of its piston engines failed because the jet engine was located on its centerline.
However, North American was concerned about minimum control speed, as evidenced by the size of the AJ’s original fin and rudder, made even bigger because carrier basing necessitated a fairly short airplane. Unfortunately, the rudder proved to be too big for high-speed flight and resulted in a fatal accident when the tail broke off during a flight-test maneuver. The empennage was redesigned to increase the size of the fin, reduce the size of the rudder, and delete the dihedral in the horizontal stabilizer.
The lack of U.S. Navy concern about engine failures in the late 1940s was evidenced by the initiation of single-engine airplane programs, the Douglas F4D Skyray and the McDonnell F3H Demon, to replace the twin-engine all-weather Banshee. That was still true in 1958, when the Navy had to choose between the single-engine Vought F8U-3 and the twin-engine McDonnell F4H. The safety record of twin- versus single-engine airplanes was examined and determined not to be a deciding factor. In fact, the only twin-engine airplane in the deployed carrier air groups at the time was the Douglas A3D Skywarrior, which had two engines because it was too big to be powered by only one. The F4H was selected because it had a dedicated radar operator, not because it had two engines.
The Navy did regularly deploy one twin-engine propeller-driven airplane at sea for more than two decades beginning in the mid-1950s on axial-deck carriers, in part because Grumman had learned a lot about operating twin-engine airplanes from aircraft carriers with the F7F program. Its S2F (S-2) was as short-coupled as carrier airplanes get, so in order to size the rudder both for the single-engine takeoff and wave-off condition and, relatively speaking, for high-speed flight, it had a two-piece rudder. Up and away, the forward portion of the rudder was used only for directional trim, and only the aft portion moved with the rudder pedals. For takeoffs and landings, the forward and aft portions could be selected to move as a unit, doubling the width of the rudder and reducing the S2F's minimum control speed to one suitable for carrier launches and wave-offs.
The introduction of steam catapults, angled decks, and descending, constant angle of attack approaches also reduced the degree of difficulty of one-engine-inoperative takeoffs and landings.
By the time Grumman engineers designed the F-14, they felt confident enough in their handling qualities analysis to widely separate its engines to provide a "tunnel" where two of the big Phoenix missiles could be carried side-by-side.
However, minimum control speed would still prove fatal to the unwary: Hultgreen Crash
Finally, click HERE for a great tale of how a second engine and a naval aviator saved an airplane...