Gluster bricks are offline and errors appear in logs
- by Roman Newaza
I substituted all the IP addresses with hostnames and renamed the configs (IP to hostname) in /var/lib/glusterd using a shell script. After that I restarted the Gluster daemon and the volume. Then I checked that all the peers are connected:
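For reference, the substitution was done along these lines. This is a minimal sketch, not the exact script: the IP→hostname map is illustrative, and it works on a scratch copy rather than /var/lib/glusterd directly.

```shell
#!/bin/sh
set -e
# Sketch of the IP -> hostname rewrite. WORKDIR stands in for
# /var/lib/glusterd; the IP/hostname pairs below are made up.
WORKDIR=$(mktemp -d)
mkdir -p "$WORKDIR/peers"
printf 'hostname1=10.0.0.2\n' > "$WORKDIR/peers/uuid-1b"  # stand-in peer config

# Hypothetical map: one "IP hostname" pair per line.
while read -r ip host; do
    # Rewrite the IP inside every file under the state directory.
    # (grep exits non-zero when an IP has no matches; the pipeline
    # status comes from the while loop, so set -e is not tripped.)
    grep -rl "$ip" "$WORKDIR" | while read -r f; do
        sed -i "s/$ip/$host/g" "$f"
    done
done <<EOF
10.0.0.1 gluster-1a
10.0.0.2 gluster-1b
10.0.0.3 gluster-2a
10.0.0.4 gluster-2b
EOF

cat "$WORKDIR/peers/uuid-1b"   # -> hostname1=gluster-1b
```

A script like this should be run with glusterd stopped on all nodes, and only after backing up /var/lib/glusterd, since glusterd keeps peer and volume state there.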
root@GlusterNode1a:~# gluster peer status
Number of Peers: 3
Hostname: gluster-1b
Uuid: 47f469e2-907a-4518-b6a4-f44878761fd2
State: Peer in Cluster (Connected)
Hostname: gluster-2b
Uuid: dc3a3ff7-9e30-44ac-9d15-00f9dab4d8b9
State: Peer in Cluster (Connected)
Hostname: gluster-2a
Uuid: 72405811-15a0-456b-86bb-1589058ff89b
State: Peer in Cluster (Connected)
I could see the mounted volume's size change on all the nodes when I executed the df command, so new data is coming in. But recently I noticed error messages in the application log:
copy(/storage/152627/dat): failed to open stream: Structure needs cleaning
readfile(/storage/1438227/dat): failed to open stream: Input/output error
unlink(/storage/189457/23/dat): No such file or directory
Finally, I found out that some bricks are offline:
root@GlusterNode1a:~# gluster volume status
Status of volume: storage
Gluster process Port Online Pid
------------------------------------------------------------------------------
Brick gluster-1a:/storage/1a 24009 Y 1326
Brick gluster-1b:/storage/1b 24009 N N/A
Brick gluster-2a:/storage/2a 24009 N N/A
Brick gluster-2b:/storage/2b 24009 N N/A
Brick gluster-1a:/storage/3a 24011 Y 1332
Brick gluster-1b:/storage/3b 24011 N N/A
Brick gluster-2a:/storage/4a 24011 N N/A
Brick gluster-2b:/storage/4b 24011 N N/A
NFS Server on localhost 38467 Y 24670
Self-heal Daemon on localhost N/A Y 24676
NFS Server on gluster-2b 38467 Y 4339
Self-heal Daemon on gluster-2b N/A Y 4345
NFS Server on gluster-2a 38467 Y 1392
Self-heal Daemon on gluster-2a N/A Y 1402
NFS Server on gluster-1b 38467 Y 2435
Self-heal Daemon on gluster-1b N/A Y 2441
What can I do to fix this?
Note: CPU and network usage on all four nodes is about the same.