Setting up a Kubernetes cluster on a single server is really easy from a maintenance point of view, because you do not need to worry about Persistent Volumes (PV), Persistent Volume Claims (PVC), or Pods deployed across different zones and regions (e.g. eu-north-1, us-west-1).
The issue is not only about different zones: when you start scaling your Kubernetes setup for a production environment, you also need to take care of many other elements (databases, Persistent Volume size, and the mapping between Persistent Volumes and Persistent Volume Claims).
In this blog we'll cover different types of Kubernetes configuration errors associated with zones, PVs, and PVCs:
- Kubernetes Pod Warning: 1 node has volume node affinity conflict
- Insufficient Capacity Size or Resource
- The accessModes of your Persistent Volume and Persistent Volume Claim are inconsistent
- The number of PersistentVolumeClaims is greater than the number of PersistentVolumes
1. Kubernetes Pod Warning: 1 node has a volume node affinity conflict
To understand the volume node affinity conflict error, let's first take an example Kubernetes cluster setup:
- You have set up a Kubernetes cluster running in AWS in the eu-north-1 (Stockholm) region
- For the same Kubernetes cluster, you have defined a PV (Persistent Volume) and a PVC (Persistent Volume Claim), but in a different AWS region, eu-west-1 (Ireland)
- Now you are trying to schedule a Pod in eu-north-1 (Stockholm) using the PV and PVC that live in the eu-west-1 (Ireland) region
- So whenever you try to schedule a Pod in a different zone than its volume, scheduling will always fail and result in - Kubernetes Pod Warning: 1 node has volume node affinity conflict (the sketch below shows why)
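A PV backed by a zonal disk such as an EBS volume carries a node affinity rule that pins it to the zone where the disk lives. Below is a minimal, hypothetical sketch of such a PV (the name, volume ID, and zone are assumptions, not taken from a real setup); a Pod whose nodes are all in another zone can never satisfy this rule, which is exactly what the warning reports.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-test-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-0123456789abcdef0   # placeholder EBS volume ID
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - eu-west-1a              # only nodes in this zone can use the volume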
How to fix this?
The first troubleshooting step is to check which zones your Kubernetes cluster uses and in which zone you have defined your Persistent Volume (PV) and Persistent Volume Claim (PVC).
The zones can easily be verified in the web console provided by your cloud service provider.
But to verify the PV and PVC, run the following commands:
$ kubectl get pv
$ kubectl describe pv
$ kubectl get pvc
$ kubectl describe pvc
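If your cluster uses the standard topology labels, you can also compare the zones of your nodes with the zone the PV is pinned to directly from the CLI (my-test-pv is the hypothetical volume name used in this example):

$ kubectl get nodes -L topology.kubernetes.io/zone
$ kubectl get pv my-test-pv -o jsonpath='{.spec.nodeAffinity}'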
Fix 1: Delete the PV and PVC
To fix the issue, you can delete the PV and PVC from the other zone and recreate the same resources in the zone where you are trying to schedule the Pod.
Here are the commands for deleting the PV and PVC:
$ kubectl delete pvc my-test-pvc
$ kubectl delete pv my-test-pv
Now after deleting the PV and PVC, recreate them in the same zone where you're running your Pod:
$ kubectl apply -f my-test-pv.yaml
$ kubectl apply -f my-test-pvc.yaml
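If the volume source allows it, the only zone-related change in my-test-pv.yaml compared to the earlier sketch is the nodeAffinity value, which should now name a zone where your Pod's nodes actually run (the zone below is an assumption):

              values:
                - eu-north-1a             # a zone in the Pod's region

Keep in mind that a zonal disk such as an EBS volume lives in a single zone, so the underlying volume also has to exist (or be recreated) in that zone.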
Now, after creating the above PV and PVC, you can schedule the Pod or create your Kubernetes Deployment in the same zone, and the error Kubernetes Pod Warning: 1 node has volume node affinity conflict should be fixed.
Fix 2: Move the Pod to the same zone along with the PV and PVC
Since this issue is all about the Pod, PV, and PVC running in different zones, you can instead move the Pod to the zone where the PV and PVC have already been created.
Let’s assume your Deployment is running in ZONE-1 and your PV and PVC live in ZONE-2. First, delete the Deployment from ZONE-1:
$ kubectl delete deployment my-deployment
Then create a new Deployment that targets ZONE-2:
$ kubectl apply -f my-deployment.yaml
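For the Deployment to actually land in ZONE-2, my-deployment.yaml can pin its Pods to that zone with a nodeSelector. This is a minimal, hypothetical sketch (image, labels, and claim name are assumptions; eu-west-1a stands in for ZONE-2):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      nodeSelector:
        topology.kubernetes.io/zone: eu-west-1a   # ZONE-2, where the PV and PVC live
      containers:
        - name: my-app
          image: nginx:1.25
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-test-pvc                # the claim already created in ZONE-2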
Once the Deployment is running in ZONE-2 along with the PV and PVC, the Kubernetes Pod Warning: 1 node has volume node affinity conflict issue should be fixed.
(Note: if you are using a storage class, it is always recommended to create the storage class and the deployment in the same zone.)
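If you provision volumes dynamically, a common way to avoid these zone mismatches is a StorageClass that delays volume binding until the Pod has been scheduled, so the volume is created in whatever zone the Pod ends up in. A sketch, assuming the AWS EBS CSI driver and a hypothetical name:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zonal-aware-sc                      # hypothetical name
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer     # create the volume in the Pod's zone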
2. Insufficient Capacity Size or Resource
The next error we'll cover is unable to locate a PV with sufficient capacity. As the name suggests, it is a problem with a PVC (Persistent Volume Claim) that requests more storage than the PV (Persistent Volume) provides.
Here is an example configuration to understand this issue:
1. Create a Persistent Volume (PV) with 1Gi of storage
2. Create a Persistent Volume Claim (PVC) with 3Gi of storage (here we are requesting more storage than the Persistent Volume provides; both manifests are sketched after these steps)
3. First, apply the PV (Persistent Volume) configuration:
$ kubectl apply -f test-pv.yml
4. After that, apply the PVC (Persistent Volume Claim) configuration with 3Gi:
$ kubectl apply -f test-pvc.yml
5. Now verify the PVC:
$ kubectl describe pvc test-pvc
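For reference, here is roughly what the two manifests from steps 1 and 2 might look like (a minimal sketch using hostPath storage; the exact volume source in the original is an assumption):

test-pv.yml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
spec:
  capacity:
    storage: 1Gi          # the PV offers 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  hostPath:
    path: /mnt/data

test-pvc.yml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  resources:
    requests:
      storage: 3Gi        # more than the PV offers, so the claim stays Pending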
How to fix this?
To fix the issue, your PVC should always request storage less than or equal to the PV’s capacity.
So update your test-pvc.yml, change the storage from 3Gi to 1Gi, and re-apply the PVC configuration.
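The only change needed in test-pvc.yml is the requested size:

  resources:
    requests:
      storage: 1Gi        # now within the 1Gi capacity that test-pv offers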
1. Delete the old PVC:
$ kubectl delete pvc test-pvc
2. Re-apply updated PVC:
$ kubectl apply -f test-pvc.yml
3. Verify the status of the PVC:
$ kubectl describe pvc test-pvc
3. The accessModes of Your Persistent Volume and Persistent Volume Claim Are Inconsistent
The next error we'll cover is incompatible accessModes. Whenever you create a PV (Persistent Volume) or a PVC (Persistent Volume Claim), you set the accessModes in its configuration.
As a rule of thumb, you should always set the same accessMode for both the PV (Persistent Volume) and the PVC (Persistent Volume Claim). If there is a mismatch in the accessMode, your PVC (Persistent Volume Claim) will not be able to bind to the PV (Persistent Volume).
Let’s take an example (the relevant manifest snippets are shown after these steps):
1. Create a PV (Persistent Volume) with accessMode: ReadWriteMany
2. Apply the above PV configuration:
$ kubectl apply -f test-pv.yml
3. Verify the status of test-pv:
$ kubectl describe pv test-pv
4. Create a PVC (Persistent Volume Claim) with accessMode: ReadWriteOnce
5. Apply the above PVC configuration:
$ kubectl apply -f test-pvc.yml
6. Verify the status of test-pvc:
$ kubectl describe pvc test-pvc
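For reference, the relevant part of each manifest looks like this (all other fields are the same as in the earlier sketches):

test-pv.yml:

  accessModes:
    - ReadWriteMany       # the volume is offered for many nodes to read and write

test-pvc.yml:

  accessModes:
    - ReadWriteOnce       # the claim asks for a mode that is not in the PV's list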
How to fix this?
The fix for this problem is pretty simple: you have to use the same accessMode for both the PV and the PVC.
In our example, let us change the accessMode of the PV to ReadWriteOnce and re-apply both configurations once again:
$ kubectl apply -f test-pv.yml
$ kubectl apply -f test-pvc.yml
Verify the status of PVC:
$ kubectl describe pvc test-pvc
4. The Number of PersistentVolumeClaims Is Greater Than the Number of PersistentVolumes
The next error we are going to talk about concerns how many PVCs (Persistent Volume Claims) you map to a single PV (Persistent Volume).
A PV (Persistent Volume) can be bound to only one PVC (Persistent Volume Claim) at a time, so if more than one PVC tries to use the same PV, you will face a FailedBinding issue.
Let’s take an example:
- Create one PV (Persistent Volume): test-pv.yml
- Create two PVC (Persistent Volume Claim): test-pvc.yml, test-pvc-2.yml
- In both PVCs (test-pvc.yml, test-pvc-2.yml), use the same PV (Persistent Volume), i.e. test-pv
Here is the test-pv.yml:
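(The original manifest is not reproduced here, so treat this as a minimal sketch with the same name: a 1Gi hostPath-backed volume.)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  hostPath:
    path: /mnt/data

$ kubectl apply -f test-pv.yml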
Here is the first test-pvc.yml:
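(Again a minimal sketch; pinning the claim to a specific volume via spec.volumeName is one way to make sure it uses test-pv.)

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  volumeName: test-pv      # explicitly claim the test-pv volume
  resources:
    requests:
      storage: 1Gi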
$ kubectl apply -f test-pvc.yml
Here is the second PVC, test-pvc-2.yml:
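(The second claim is identical except for its name, and it points at the same test-pv volume; again a sketch.)

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-2
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  volumeName: test-pv      # test-pv is already bound to test-pvc
  resources:
    requests:
      storage: 1Gi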
$ kubectl apply -f test-pvc-2.yml
After applying the second PVC, verify its status:
$ kubectl describe pvc test-pvc-2
As you can see, the problem is that the second PVC (Persistent Volume Claim) cannot bind to the Persistent Volume (PV) test-pv, because it has already been claimed by the first PVC, test-pvc.
How to fix this?
Well, you should first start by deleting the PVC (Persistent Volume Claim) where you faced the issue. In the above example, we faced the issue with our second PVC, i.e. test-pvc-2.
$ kubectl delete pvc test-pvc-2
After deleting:
- Create a new persistent volume
- Map the newly created persistent volume to test-pvc-2
- Re-apply the test-pvc-2 configuration (a sketch of both changes follows)
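A minimal sketch of those three steps (the new volume's name and path are assumptions):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv-2
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  hostPath:
    path: /mnt/data-2

Then, in test-pvc-2.yml, point the claim at the new volume:

  volumeName: test-pv-2    # instead of test-pv

$ kubectl apply -f test-pv-2.yml
$ kubectl apply -f test-pvc-2.yml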