For an on-premises node, should we buy a VPS or a dedicated server that provides a private IP CIDR? Do VPS providers even offer private IPs? #672

@arjunthazhath2001

Description

I'm trying to set up a hybrid EKS cluster using a VPS I purchased from SSD Nodes, but I'm facing critical issues related to private IP networking and CNI stability. Here's a detailed breakdown:


📌 Setup Context

  • I am building a hybrid EKS setup, where my on-prem node (VPS) should join an Amazon EKS control plane.

  • As part of cluster creation, EKS requires two non-overlapping private CIDRs (see the cluster-creation sketch after this list):

    • remoteNodeNetworkCIDR
    • remotePodNetworkCIDR
  • I purchased a VPS from SSD Nodes, which provided a dedicated public IP, but no private IP.
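
  For reference, here is roughly the shape of the cluster-creation call I mean. This is a sketch, not my exact command: the names, ARNs, and CIDR values are placeholders, and it assumes the AWS CLI's --remote-network-config option for EKS hybrid nodes (verify the exact shape against the current EKS docs).

    # Sketch: create an EKS cluster with remote (on-prem) node and pod networks.
    # 10.200.0.0/16 and 10.201.0.0/16 are placeholders; they must not overlap
    # with the VPC CIDR or with each other.
    aws eks create-cluster \
      --name hybrid-demo \
      --role-arn arn:aws:iam::111122223333:role/eksClusterRole \
      --resources-vpc-config subnetIds=subnet-0aaa,subnet-0bbb \
      --remote-network-config '{
        "remoteNodeNetworks": [{"cidrs": ["10.200.0.0/16"]}],
        "remotePodNetworks":  [{"cidrs": ["10.201.0.0/16"]}]
      }'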


🚨 Issue Faced

  • When I run nodeadm init -c file://nodeConfig.yaml, I receive the error:

    The IP address "X.X.X.X" is not within the configured remoteNodeNetworkCIDR.
    

    (Where X.X.X.X is the public IP of the VPS; a sketch of the kind of nodeConfig.yaml involved follows this list.)

  • To bypass this, I ran:

    nodeadm init --skip-node-validation
    

    This did register the node with the EKS control plane successfully.
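
For context, the nodeConfig.yaml I feed to nodeadm has roughly the following shape. This is a sketch with placeholder values, assuming SSM-based hybrid activation; the exact schema is in the nodeadm docs.

    # Sketch: minimal hybrid NodeConfig, written out and passed to nodeadm.
    # Cluster name, region, and activation values are placeholders.
    cat <<'EOF' > nodeConfig.yaml
    apiVersion: node.eks.aws/v1alpha1
    kind: NodeConfig
    spec:
      cluster:
        name: hybrid-demo
        region: us-east-1
      hybrid:
        ssm:
          activationCode: "<activation-code>"
          activationId: "<activation-id>"
    EOF
    nodeadm init -c file://nodeConfig.yaml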


🧪 What I Tried

  1. Manually assigned a private IP (from the remoteNodeNetworkCIDR range) to the VPS using ip addr add, but the system still uses the public IP for outbound traffic and cluster communication (see the routing sketch after this list).
  2. Set up a site-to-site VPN using strongSwan; both tunnel endpoints show "UP" status (see the traffic-selector sketch after this list).
  3. Installed various CNIs (Cilium, Calico, Flannel), but the CNI pods keep crashing and restarting (see the Cilium sketch after this list).
  4. Only kube-proxy reaches a READY state; CoreDNS fails to come up because of the CNI failures.
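
On point 1: adding an address with ip addr add does not change which source address the kernel selects for outbound traffic; that is decided by the routing table. A minimal sketch of pinning the private address as the source for cluster-bound traffic (all CIDRs and interface names below are hypothetical placeholders for my setup):

    # Placeholders: adjust to the actual interfaces and CIDRs in use.
    PRIVATE_IP=10.200.0.10      # an address inside remoteNodeNetworkCIDR
    VPC_CIDR=172.16.0.0/16      # the EKS cluster's VPC CIDR
    TUNNEL_IF=vti0              # route-based IPsec tunnel interface

    # Put the private address on the tunnel interface instead of eth0.
    sudo ip addr add "${PRIVATE_IP}/32" dev "$TUNNEL_IF"

    # Route VPC-bound traffic through the tunnel, pinning the source address.
    sudo ip route add "$VPC_CIDR" dev "$TUNNEL_IF" src "$PRIVATE_IP"

    # Check which source address the kernel now picks for a VPC destination.
    ip route get 172.16.0.10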
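
On point 2: as far as I understand, an IPsec tunnel showing "UP" only means the IKE negotiation succeeded; traffic is actually forwarded only for destinations covered by the child SA's traffic selectors. A sketch of a swanctl.conf child whose selectors cover both the node CIDR and the pod CIDR (all names, addresses, and CIDRs are placeholders; IKE/auth settings omitted for brevity):

    # Sketch: the traffic selectors must include the remote *pod* CIDR too,
    # or pod traffic is dropped even though the tunnel itself is up.
    sudo tee /etc/swanctl/conf.d/eks-hybrid.conf <<'EOF'
    connections {
      eks-vpc {
        local_addrs  = 203.0.113.10      # VPS public IP (placeholder)
        remote_addrs = 198.51.100.20     # AWS VPN endpoint (placeholder)
        # (IKE proposal and authentication settings omitted here.)
        children {
          eks {
            local_ts  = 10.200.0.0/16,10.201.0.0/16  # node + pod CIDRs
            remote_ts = 172.16.0.0/16                # EKS VPC CIDR
            start_action = start
          }
        }
      }
    }
    EOF
    sudo swanctl --load-all
    sudo swanctl --list-sas   # verify the installed traffic selectors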
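
On points 3–4: my understanding is that for hybrid nodes the CNI has to allocate pod IPs out of the remotePodNetworkCIDR rather than learn them from the VPC. A sketch of installing Cilium with cluster-pool IPAM pinned to that range (the CIDR is a placeholder; the EKS hybrid-nodes docs list the currently recommended chart values):

    # Sketch, assuming Helm and the upstream Cilium chart; values are placeholders.
    helm repo add cilium https://helm.cilium.io
    helm repo update
    helm install cilium cilium/cilium \
      --namespace kube-system \
      --set ipam.mode=cluster-pool \
      --set ipam.operator.clusterPoolIPv4PodCIDRList='{10.201.0.0/16}'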

🤔 Root Cause Suspected

It appears the VPS lacks proper private IP support. Even after assigning one manually, the node continues to use its public IP for outbound traffic, which is incompatible with what hybrid EKS networking expects (specifically the VPN and pod-routing logic). This is likely why the CNIs fail. The diagnostic sketch below shows how I'd confirm this.
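
A quick diagnostic sketch (the VPC address is a placeholder; the kubelet-flag idea at the end is my assumption about nodeadm's NodeConfig, not something I've verified):

    # Which source address does the kernel pick for a cluster-bound packet?
    ip route get 172.16.0.10        # placeholder VPC address; look at "src"

    # Which InternalIP did the node register with?
    kubectl get nodes -o wide

    # If it shows the public IP, explicitly pinning kubelet's --node-ip may
    # help. (Assumption: I believe nodeadm's NodeConfig accepts kubelet flags
    # under spec.kubelet.flags -- verify against the nodeadm docs.)
    #   spec:
    #     kubelet:
    #       flags:
    #         - --node-ip=10.200.0.10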


❓ My Questions

  1. Did I make the wrong choice by purchasing a VPS instead of a true dedicated server that provides private IP ranges?

  2. Is there any way to configure a VPS to use custom private IPs properly for internal routing and communication?

  3. If not, should I purchase from a provider that offers true private IP allocation (RFC 1918 ranges) and proper control over networking?

    • If yes, could you recommend such providers?
  4. Despite VPN tunnel status being "UP", is it possible that the lack of private IP support on this VPS is the core reason why CNI pods are failing?


📌 Summary

Although I could register the node using --skip-node-validation, CNI pods crash due to networking/routing issues, likely caused by the node’s reliance on its public IP in a setup that expects private CIDR-based routing. I'm trying to understand whether this is a fundamental limitation of the VPS I’m using, or if it can be resolved via configuration.

Would appreciate your guidance.

TIRED OF HAVING MULTIPLE CALLS WITH AWS SUPPORT TEAM. They themselves are not aware of how this thing works.
