When a process in Linux has issued an IO and is waiting for the response, there are basically two possibilities. The good case is that other work is pending and the CPU can continue to run it. If there is no work left to do, the CPU changes its state to idle, and the time until either new work arrives or the IO completes is accounted as %iowait in tools like vmstat, iostat or sar. So you can view "iowait" as a shade of "idle".
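The counter behind that %iowait column is visible in the first line of /proc/stat. As a minimal sketch (not part of the original text), the following program samples that line twice and prints the share of CPU time spent in iowait in between, which is essentially what vmstat, iostat and sar compute:

```c
/*
 * Sketch: sample the aggregate "cpu" line of /proc/stat twice and report
 * the iowait share of total CPU time between the two samples.
 */
#include <stdio.h>
#include <unistd.h>

/* Fields: user nice system idle iowait irq softirq steal; t[4] is iowait. */
static int read_cpu_ticks(unsigned long long t[8])
{
    FILE *f = fopen("/proc/stat", "r");
    if (!f)
        return -1;
    int n = fscanf(f, "cpu %llu %llu %llu %llu %llu %llu %llu %llu",
                   &t[0], &t[1], &t[2], &t[3], &t[4], &t[5], &t[6], &t[7]);
    fclose(f);
    return n == 8 ? 0 : -1;
}

int main(void)
{
    unsigned long long a[8], b[8];
    if (read_cpu_ticks(a))
        return 1;
    sleep(1);
    if (read_cpu_ticks(b))
        return 1;

    unsigned long long total = 0, iowait = b[4] - a[4];
    for (int i = 0; i < 8; i++)
        total += b[i] - a[i];

    printf("iowait: %.1f%%\n", total ? 100.0 * iowait / total : 0.0);
    return 0;
}
```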
From the perspective of the hypervisor, such an idle CPU can be put to good use, e.g. by dispatching it to another guest that is in need of CPU. This is why, from the virtualization layer's perspective, guest CPUs in iowait are usually accounted as idle.
For problem determination it would help to add an iowait measure to the hypervisor as well, since it would help detect problems that are created by cloning inefficient guests, e.g. 100 servers all doing sync IO where async IO would have been possible.
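To illustrate the sync-versus-async distinction (this sketch is my addition, not from the original text, and "some.file" is just a placeholder path): a synchronous read blocks the thread, and if nothing else is runnable the guest CPU ends up in iowait, whereas an asynchronous read, here via POSIX AIO, lets the same thread do useful work while the device is busy.

```c
/* Sketch: the same read done synchronously vs. with POSIX AIO.
 * Build with: gcc example.c -lrt */
#include <aio.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static char buf[4096];

int main(void)
{
    int fd = open("some.file", O_RDONLY);   /* placeholder file */
    if (fd < 0)
        return 1;

    /* Synchronous variant: the thread blocks until the data arrives. */
    ssize_t n = pread(fd, buf, sizeof buf, 0);
    printf("sync read: %zd bytes\n", n);

    /* Asynchronous variant: submit the read, keep working, collect later. */
    struct aiocb cb;
    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf = buf;
    cb.aio_nbytes = sizeof buf;
    cb.aio_offset = 0;
    if (aio_read(&cb) != 0)
        return 1;

    /* ... other useful work would run here instead of idling ... */

    const struct aiocb *list[1] = { &cb };
    aio_suspend(list, 1, NULL);   /* wait only once nothing else is left to do */
    printf("async read: %zd bytes\n", aio_return(&cb));

    close(fd);
    return 0;
}
```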