In the end we crafted a seven-server solution with load-balanced web front ends, separate logic servers for data crunching, and a set of very powerful database servers (in 2005, fully loaded HP ProLiant 570s were best of breed).
At UAT the solution surpassed technical and usability expectations and was well suited to handle over 3,000 users. So why is this solution (still actively used) not fit for purpose? Because the number of actual users is in the hundreds, not thousands. The client overestimated the adoption rate of the solution. The next year the business processes changed, and the organization chose not to invest in reconfiguring the solution to bring the full 3,000 users on board.
On the surface it appears that the architect is not at fault. But should the architect have thought about user adoption? Should the solution have been designed with fewer servers from the start? In this case, taking the risk of building for maximum size was the right choice, because this organization is large and slow moving: procurement takes months and budgets are often unpredictable. Yet to anyone who doesn't know the history of this solution, it is clearly overkill and not fit for purpose (a waste of computational resources). A potential way out is to scale down through virtualization, but that's a story for another day.