Budget Weeds Out Poor Performers: 15 Programs Get the Ax

Published: February 02, 2004

By MOLLIE ZIEGLER

Last year, when the Bush administration said it would get tough on nonperforming federal programs, only one of the thousand or so that exist was zeroed out in the president’s budget proposal. This year the number is up to 15.

Clay Johnson, deputy director for management at the Office of Management and Budget, said Jan. 29 there is no single reason for closing down the 15 programs.

“Primarily, it’s because the programs were shown to be ineffective or couldn’t demonstrate that they were effective,” Johnson said.

The 15 programs include eight that failed to demonstrate results when measured by OMB’s Program Assessment Rating Tool (PART):

  • Small Business Innovation Research Program, Commerce, an estimated $4 million in 2004.
  • Occupational and Employment Information, Education, $9 million in 2004.
  • Tech-Prep Education State Grants, Education, $107 million in 2004.
  • Nuclear Energy Research Initiative, Energy, $11 million in 2004.
  • Metropolitan Medical Response System, Homeland Security, $50 million in 2004.
  • State Criminal Alien Assistance Program, Justice, $297 million in 2004.
  • Environmental Education, Environmental Protection Agency, $9 million in 2004.
  • Business Information Centers, Small Business Administration, $14 million in 2004.

Five were rated ineffective when measured:

  • Even Start, Education, $247 million in 2004.
  • Federal Perkins Loans, Education, $99 million in 2004.
  • HOPE VI, Housing and Urban Development, $149 million in 2004.
  • Juvenile Accountability Block Grants, Justice, $59 million in 2004.
  • Migrant and Seasonal Farmworkers, Justice, $77 million in 2004.

Two of the terminated programs were rated adequate:

  • Advanced Technology Program, Commerce, $171 million in 2004.
  • Comprehensive School Reform, Education, $234 million in 2004.

OMB has increasingly pressed for performance accountability in determining the administration’s budget priorities. Last year, 234 federal programs (20 percent of all federal programs) were analyzed using PART. Those that were unable to measure their effectiveness (118, or just over half) suffered financially, receiving average funding increases of only 1.2 percent. One program was terminated.

But programs that could measure their results — regardless of whether those results were good or bad — fared much better. Funding for programs that measured results averaged budget boosts of 11.7 percent.

For the 2005 budget, OMB evaluated twice as many programs: about 400. Johnson said the effort has begun to pay off, with fewer programs unable to demonstrate results than last year, though he did not provide numbers.

“Agencies started paying attention,” he said. “But it’s still a challenge in that a lot can’t demonstrate results or results are unsatisfactory.”

And this year, Johnson pledged that OMB officials will work more closely with Congress to focus on program performance as legislators write the appropriations bills that actually determine funding.

“The challenge over the last couple of years was getting agencies comfortable with performance,” Johnson said. “Agencies own this now, and they’re moving forward. The challenge now is to work with Congress.”

A key to the system is that poor results do not necessarily mean a budget cut. On the contrary, poor performance may actually demonstrate increased funding needs, Johnson emphasized. “But you only get an increase in funding if there’s a good reason why you can’t demonstrate results. And that’s a hard story to sell.”

Education Department adult literacy programs, which were rated ineffective or unable to demonstrate results last year, will not lose funding this year because their goal is too important, Johnson said.

But budgets were cut when managers couldn’t demonstrate the value of their programs or suggest how they could be improved, or when policy analysts decided the program’s mission was not a government mission, Johnson said.

“The whole budget and performance integration piece is looking at whether programs work or not,” Johnson said. “If it doesn’t produce results, what are you going to do?”

At least one lawmaker agrees. Rep. Todd Platts, R-Pa., who chairs the House Government Reform subcommittee on efficiency and financial management, is concerned that unless the program assessment process is codified in law, it may disappear after this administration leaves office.

Basing budget decisions on programs’ effectiveness was the intent behind the 1993 Government Performance and Results Act, he said.

“I plan to work with the administration and explore legislation that would codify the requirement to review all federal programs,” Platts said in a Jan. 29 statement. “This effort will ensure that future administrations are required to assess programmatic effectiveness.”

Platts’ subcommittee is awaiting a General Accounting Office report on OMB’s program assessments and plans hearings at which OMB officials will explain how well the assessments are working.


Will Congress Care?

Johnson holds up NASA as a model for how to craft a budget. NASA moved to a corporate accounting process, which allocates salaries, support goods and services, and other overhead costs by program. NASA also breaks funding down by mission objective and results. Agency leaders asked congressional appropriators to allocate funding based on the revised budget.

“It took a lot of work to help the [appropriations] committee deliberate differently,” Johnson said. “But [NASA is] a model for the rest of government.”

John Mercer wrote the 1993 Government Performance and Results Act, the first effort by Congress to put a spotlight on the results of federal programs. Now, he said, it is time to hold lawmakers accountable for how well they oversee agency performance.

“I spent 13 years working in the Congress, on both the House and Senate sides, with most of that time spent almost exclusively on general oversight issues,” Mercer said. “I am acutely aware of how little really good oversight is conducted by most congressional committees.”

So on Jan. 21, Mercer released a scorecard for evaluating how well congressional committees monitor agency programs. Called the Congressional Oversight of Management and Performance Accountability Scorecard, it evaluates how well lawmakers cover 12 oversight issues during hearings, including:

  • Long-term strategic goals.
  • Annual performance goals and past results.
  • Effectiveness of strategies for reaching performance targets.
  • Linking budget amounts to the achievement of specific goals.
  • Ways to ensure the accountability of program managers.
  • The quality of performance data and financial systems.
  • Major concerns raised by General Accounting Office and inspector general reports.

Mercer said his scorecard follows in the footsteps of the agency performance reports, the president’s management agenda scorecard and OMB’s Program Assessment Rating Tool.

“Now I think it’s Congress’s turn to be scored on its contribution to performance accountability,” Mercer said.

Johnson said Congress is already focusing more on results, and OMB is prepared to help.

“More and more, their conversations with us are being informed by results,” he said. “We want to help them make the transition to what NASA and others think is a better way. We are committed to do the work to make it easier. That’s our responsibility.”

Copyright 2004 Federal Times.