vpc

AckeeCZ/vpc/gke

Terraform Module HCL GKE

GCP GKE module: provisions a GKE cluster together with the underlying infrastructure

Install
module "vpc" {
  source  = "AckeeCZ/vpc/gke"
  version = "11.11.0"
}
README

Terraform Google Kubernetes Engine VPC-native module

Terraform module for provisioning a GKE cluster with VPC-native nodes and support for private networking (no public IP addresses).

Private networking

Private GKE cluster creation is divided into a few parts:

Private nodes
Enabled with the parameter private; all GKE nodes are created without a public IP address and therefore without a route to the internet.

Cloud NAT gateway and Cloud Router
Creating a GKE cluster with private nodes means the nodes have no internet connection. Creating the NAT gateway is no longer part of this module. You can use the upstream Google Terraform module like this:

resource "google_compute_address" "outgoing_traffic_europe_west3" {
  name    = "nat-external-address-europe-west3"
  region  = var.region
  project = var.project
}

module "cloud-nat" {
  source = "te
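The README's NAT example is cut off in the source above. A complete sketch using the upstream terraform-google-modules/cloud-nat module might look like the following; the version constraint, router name, and network value are assumptions for illustration, not taken from this README:

```hcl
resource "google_compute_address" "outgoing_traffic_europe_west3" {
  name    = "nat-external-address-europe-west3"
  region  = var.region
  project = var.project
}

module "cloud-nat" {
  source  = "terraform-google-modules/cloud-nat/google"
  version = "~> 5.0" # assumed version constraint

  project_id    = var.project
  region        = var.region
  create_router = true
  router        = "nat-router" # assumed router name
  network       = "default"    # assumed network; match the module's `network` input

  # Route outgoing traffic through the static address reserved above
  nat_ips = [google_compute_address.outgoing_traffic_europe_west3.self_link]
}
```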

Inputs (47)
Name | Type | Description | Default
vault_secret_path | string | Path to secret in local vault, used mainly to save gke credentials | required
project | string | GCP project ID | required
ci_sa_email | string | Email of Service Account used for CI deploys | "[email protected]
image_streaming | bool | Enable GKE image streaming feature. | false
location | string | Default GCP zone | "europe-west3-c"
auto_upgrade | bool | Allow auto upgrade of node pool | false
traefik_version | string | Version number of helm chart | "1.7.2"
traefik_custom_values | list(object({ name = stri | Traefik Helm chart custom values list | [ { "name": "ssl.enabled", "va
private_master_subnet | string | Subnet for private GKE master. There will be peering routed to VPC created with | "172.16.0.0/28"
managed_prometheus_enable | bool | Configuration for Managed Service for Prometheus. | false
cluster_admins | list(string) | List of users granted admin roles inside cluster | []
region | string | GCP region | "europe-west3"
services_ipv4_cidr_block | string | Optional IP address range of the services IPs in this cluster. Set to blank to h | ""
node_pools | map(any) | Definition of the node pools, by default uses only ackee_pool | {}
monitoring_config_enable_components | list(string) | The GKE components exposing logs. SYSTEM_COMPONENTS and in beta provider, both S | null
enable_cert_manager | bool | Enable cert-manager helm chart | false
cert_manager_version | string | Version number of helm chart | "v1.6.1"
node_pool_location_policy | string | Node pool load balancing location policy | "BALANCED"
enable_traefik | bool | Enable traefik helm chart for VPC | false
network | string | Name of VPC network we are deploying to | "default"
maintenance_window_time | string | Time when the maintenance window begins. | "01:00"
dns_nodelocal_cache | bool | Enable NodeLocal DNS Cache. This is disruptive operation. All cluster nodes are | false
… and 7 more inputs
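As a sketch of how these inputs compose, a minimal invocation might look like the following; only the input names come from the table above, while the specific values (project ID, vault path, admin email) are illustrative assumptions:

```hcl
module "vpc" {
  source  = "AckeeCZ/vpc/gke"
  version = "11.11.0"

  project           = "my-gcp-project"       # assumed project ID
  vault_secret_path = "secret/gke/my-cluster" # assumed vault path
  region            = "europe-west3"
  location          = "europe-west3-c"
  network           = "default"

  # Grant cluster-admin inside the cluster to these users (assumed email)
  cluster_admins = ["admin@example.com"]
}
```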
Outputs (7)
cluster_ipv4_cidr — The IP address range of the Kubernetes pods in this cluster in CIDR notation
endpoint — Cluster control plane endpoint
node_pools — List of node pools associated with this cluster
client_certificate — Client certificate used in kubeconfig
client_key — Client key used in kubeconfig
cluster_ca_certificate — Cluster CA certificate used in kubeconfig
access_token — Client access token used in kubeconfig
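The kubeconfig-oriented outputs above are typically consumed by a Kubernetes provider. A sketch, assuming the module is instantiated as `module "vpc"` and that `cluster_ca_certificate` is base64-encoded (as GKE returns it; verify against the module's actual output):

```hcl
provider "kubernetes" {
  host                   = "https://${module.vpc.endpoint}"
  token                  = module.vpc.access_token
  cluster_ca_certificate = base64decode(module.vpc.cluster_ca_certificate)
}
```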
Resources (10)
google_compute_firewall, google_container_cluster, google_container_node_pool, google_gke_hub_feature, google_gke_hub_membership, google_project_service, helm_release, kubernetes_cluster_role_binding, kubernetes_namespace, vault_generic_secret
Details
FrameworkTerraform Module
LanguageHCL
Version11.11.0
Cloud GKE
★ Stars0
Forks4
Total downloads11.9k
Inputs47
Outputs7
Resources10
NamespaceAckeeCZ