Imagine the following: you have a WebSphere Application Server infrastructure and one or more JVM instances (server, nodeagent, dmgr, etc.) reporting the following error in their configured log file (the default is SystemOut.log):
[4/22/16 10:26:50:128 CEST] 00000000 FileDocument E ADMR0104E: The system is unable to read document : java.io.IOException: No space left on device
If you start your investigation from this message, your next command on Linux/Unix will probably be:
df -m
which produces output like the following (or something similar):
Filesystem 1M-blocks Used Available Use% Mounted on
…
/dev/mapper/VolGroup00-ibmopt 1952 783 1068 43% /opt
…
See? Disk utilization is only 43%, yet the JVM keeps telling me there is no space left on the device. What could the problem be? After examining many aspects of the system (permissions, temp storage, the TMP environment variable), I still could not find and fix the cause.
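For reference, the quick checks for those look something like this (the /opt/WebSphere path is only an example from this environment, substitute your own installation directory):

ls -ld /opt/WebSphere            # ownership and permissions of the installation directory
df -h /tmp                       # free space on the temp filesystem
echo "TMP=$TMP TMPDIR=$TMPDIR"   # temp-related environment variables the JVM may pick up

None of these showed anything suspicious in my case.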
The idea that had not crossed my mind until then was that the Linux/Unix filesystem might be running out of inodes. So I executed the following command, and a big hallelujah happened:
df -i /opt
…
1245184 1244320 864 100% /opt/WebSphere
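That 100% in the IUse% column tells the real story: the filesystem ran out of inodes even though most of its blocks are still free, which usually means some directory is packed with a huge number of small files. A quick way to find the culprit is a per-directory file count; here is a rough sketch using GNU find on Linux (the /opt/WebSphere starting point matches my mount point, adjust it to yours):

find /opt/WebSphere -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head

Each output line is a count and a directory, so the top few lines show where all the inodes went.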
So, the next time you see a “no space left on device” error message in your logs and you are 100% sure that you have sufficient disk space, check the inodes as well.
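If you want to catch this before the JVMs start failing, here is a minimal sketch of a threshold check (GNU coreutils df on Linux; the 90% threshold is an arbitrary choice):

df -i --output=ipcent,target 2>/dev/null | awk 'NR>1 && $1+0 >= 90 {print "inode usage " $1 " on " $2}'

Dropping something like this into a cron job or your monitoring tool means the next 100% will not come as a surprise.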
I hope it helped! 🙂