Kubernetes Service does not get its external IP address

When I build a Kubernetes service in two steps (1. replication controller; 2. expose the replication controller) my exposed service gets an external IP address: initially: NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE app-1 10.67.241.95 80/TCP app=app-1 7s and after about 30s: NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE app-1 10.67.241.95 104.155.93.79 80/TCP app=app-1 35s But when I do
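For comparison, a minimal sketch of a Service of type LoadBalancer, the type that normally triggers external IP allocation on GKE; the names mirror the question's output but the manifest itself is illustrative, not taken from the question:

```yaml
# Hypothetical Service; only type: LoadBalancer services receive an EXTERNAL_IP.
apiVersion: v1
kind: Service
metadata:
  name: app-1
spec:
  type: LoadBalancer   # omit this (default ClusterIP) and EXTERNAL_IP stays empty
  selector:
    app: app-1
  ports:
    - port: 80
      targetPort: 80
```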

Kubernetes Elastic Google Container Engine cluster?

When you create a Google Container Engine (GKE) cluster you specify the number and types of machines you want to use in the cluster. Is it possible to auto-scale the number of cluster machines based on (for example) CPU load? If this is not supported, is there a reason why, or is Google working on something like this for the future?

Kubernetes Is there a way to add arbitrary records to kube-dns?

I will use a very specific way to explain my problem, but I think it is better to be specific than to explain it in an abstract way... Say there is a MongoDB replica set outside of a Kubernetes cluster but inside the network. The IP addresses of all members of the replica set are resolved via /etc/hosts on the app servers and DB servers. In an experiment/transition phase, I need to access those MongoDB servers from Kubernetes pods. However, Kubernetes doesn't seem to allow adding custom entries to /etc/ho
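One commonly suggested workaround for this class of problem is the pod-level hostAliases field, which injects static entries into the pod's /etc/hosts; a sketch with made-up IPs and hostnames:

```yaml
# Illustrative only: hostAliases adds static entries to the pod's /etc/hosts.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-mongo-hosts
spec:
  hostAliases:
    - ip: "10.0.0.11"
      hostnames:
        - "mongo-rs-0.example.internal"
    - ip: "10.0.0.12"
      hostnames:
        - "mongo-rs-1.example.internal"
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "cat /etc/hosts && sleep 3600"]
```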

Kubernetes pods are restarting with a new ID

The pods I am working with are being managed by Kubernetes. When I use the docker restart command to restart a pod, sometimes the pod gets a new ID and sometimes it keeps the old one. When the pod gets a new ID, its state first goes from running -> error -> CrashLoopBackOff. Can anyone please tell me why this is happening? Also, how frequently does Kubernetes do the health check

Kubernetes "x509: certificate signed by unknown authority" when running kubelet

I'm trying to install Kubernetes with kubelet 1.4.5 on CoreOS beta (1192.2.0). I'm using a slightly modified version of the controller and worker install scripts from https://github.com/coreos/coreos-kubernetes/tree/master/multi-node/generic, so in general I created the certificates on Gentoo Linux using the following bash script: #!/bin/bash export MASTER_HOST=coreos-2.tux-in.com export K8S_SERVICE_IP=10.3.0.1 export WORKER_IP=10.79.218.3 export WORKER_FQDN=coreos-3.tux-in.com openssl genrsa -out

Create a deployment from a pod in kubernetes

For a use case I need to create deployments from a pod when a script is executed from inside the pod. I am using Google Container Engine for my cluster. How do I configure the container inside the pod to be able to run commands like kubectl create deployment.yaml? P.S. I'm a bit clueless about it at the moment.
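A rough sketch of the RBAC wiring such a pod would typically need: a ServiceAccount bound to a Role that may create Deployments, with kubectl inside the pod picking up the mounted service-account token. All names here are hypothetical:

```yaml
# Hypothetical ServiceAccount + Role + RoleBinding letting a pod create Deployments.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: deployer
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-creator
  namespace: default
rules:
  - apiGroups: ["apps", "extensions"]
    resources: ["deployments"]
    verbs: ["create", "get", "list", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployer-can-create-deployments
  namespace: default
subjects:
  - kind: ServiceAccount
    name: deployer
    namespace: default
roleRef:
  kind: Role
  name: deployment-creator
  apiGroup: rbac.authorization.k8s.io
# The pod then sets spec.serviceAccountName: deployer; kubectl inside the
# container authenticates with the auto-mounted service-account token.
```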

How to run e2e tests on a custom cluster within Kubernetes

https://github.com/kubernetes/community/blob/master/contributors/devel/e2e-tests.md#testing-against-local-clusters I have been following the above guide, but I keep getting this error: 2017/07/12 09:53:58 util.go:131: Step './cluster/kubectl.sh version --match-server-version=false' finished in 20.604745ms 2017/07/12 09:53:58 util.go:129: Running: ./hack/e2e-internal/e2e-status.sh WARNING: The bash deployment for AWS is obsolete. The v1.5.x releases are the last to support cluster/kube-up.sh w

Kubernetes kubectl pull image from gitlab unauthorized: HTTP Basic: Access denied

I am trying to configure GitLab CI to deploy an app to Google Compute Engine. I have successfully pushed the image to the GitLab registry, but after applying the Kubernetes deployment config I see the following error in kubectl describe pods: Failed to pull image "registry.gitlab.com/proj/subproj/api:v1": rpc error: code = 2 desc = Error response from daemon: {"message":"Get https://registry.gitlab.com/v2/proj/subproj/api/manifests/v1: unauthorized: HTTP Basic: Access denied"} Here is my deployment gitlab-ci j
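The usual fix for this class of error is a registry pull secret referenced from the pod spec; a sketch, assuming a docker-registry secret named gitlab-registry already exists in the namespace (the secret name is an assumption):

```yaml
# Sketch: reference an existing docker-registry secret so the kubelet can
# authenticate against registry.gitlab.com when pulling the image.
spec:
  template:
    spec:
      imagePullSecrets:
        - name: gitlab-registry   # hypothetical secret, e.g. created with
                                  # kubectl create secret docker-registry
      containers:
        - name: api
          image: registry.gitlab.com/proj/subproj/api:v1
```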

Kubernetes Unhealthy load balancer on GCE

I have a couple of services and their load balancers work fine. Now I keep facing an issue with a service that runs fine, but when a load balancer is applied I cannot get it to work, because one service seems to be unhealthy, and I cannot figure out why. How can I get that service healthy? Here is my k8s YAML. Deployment: kind: Deployment apiVersion: extensions/v1beta1 metadata: name: api-production spec: replicas: 1 template: metadata: name: api labels: app: api
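GCE load balancer health checks generally follow the pod's readinessProbe, so a probe on the port and path the app actually serves is the first thing to check; a hedged sketch (path and port here are assumptions, not taken from the question):

```yaml
# Illustrative readinessProbe; GCE health checks typically expect the probed
# path to return HTTP 200 on the serving port.
containers:
  - name: api
    image: example/api:latest        # placeholder image
    ports:
      - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /healthz               # assumed health endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
```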

Kubernetes Grafana HTTP Error Bad Gateway and Templating init failed errors

Used helm to install Prometheus and Grafana on minikube locally. $ helm install stable/prometheus $ helm install stable/grafana Prometheus server, alertmanager and grafana can run after setting up port-forwarding: $ export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}") $ kubectl --namespace default port-forward $POD_NAME 9090 $ export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o

Kubernetes Unable to get to service using the master url

I used kubeadm init to build the cluster (one of the references I used: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/ ); the cluster came up and everything looks good. NAME STATUS ROLES AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME abc-kubemaster01 Ready master 5d v1.10.2 <none> CentOS Linux 7 (Core) 3.10.0-862.2.3.el7.x86_64 docker://17.12.1-ce abc-kubemaster02 Re

Elasticsearch Getting error 'unknown field hostPath' with Kubernetes Elasticsearch using a local volume

I am trying to deploy Elasticsearch in Kubernetes with a local drive volume but I get the following error; can you please correct me? Using Ubuntu 16.04, Kubernetes v1.11.0, Docker version 17.03.2-ce. error: error validating "es-d.yaml": error validating data: ValidationError(StatefulSet.spec.template.spec.containers[1]): unknown field "hostPath" in io.k8s.api.core.v1.Container; if you choose to ignore these e
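The error itself indicates a hostPath field placed inside a container spec; hostPath belongs under spec.volumes, and the container mounts it via volumeMounts. A minimal sketch of the expected shape (names and paths are placeholders):

```yaml
# Sketch: hostPath goes under spec.volumes, not under containers.
spec:
  containers:
    - name: elasticsearch
      image: docker.elastic.co/elasticsearch/elasticsearch:6.3.0   # placeholder tag
      volumeMounts:
        - name: es-data
          mountPath: /usr/share/elasticsearch/data
  volumes:
    - name: es-data
      hostPath:
        path: /data/es                 # placeholder path on the node
        type: DirectoryOrCreate
```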

Kubernetes monitoring and self-healing

I am new to Kubernetes monitoring and self-healing. I wonder what kind of self-healing Kubernetes can provide, such as restarting a failed pod if necessary. Anything else? And what can Kubernetes not provide? As for Kubernetes monitoring, what kind of metrics do we need to monitor in order to operate Kubernetes ourselves, beyond what its self-healing covers? Any ideas welcomed. Thanks.

Kubernetes Pod image update and container restart keeping the same IP

I am using a Pod directly to manage our C* cluster in a K8s cluster, not using any high-level controller. When I want to upgrade C*, I want to do an image update. Is this a good pattern for upgrading via an image update? I saw that the high-level Deployment controller supports image updates too, but that causes the Pod to be deleted and recreated, which in turn causes the IP to change. I don't want the IP to change, and I found that if I directly update the Pod image, it causes a restart and also keeps the IP. T

Kubernetes oc cluster up timeout waiting for condition

I am trying to set up OpenShift Origin on my local VirtualBox CentOS 7.4. This is an all-in-one environment for study purposes only. I followed this document exactly: https://docs.okd.io/latest/getting_started/administrators.html Method 1: Running in a container. I installed Docker, and when I run the command it fails due to a timeout: [root@master openshift]# oc cluster up Getting a Docker client ... Checking if image openshift/origin-control-plane:v3.11 is available ... Checking type of volu

Kubernetes How to allow/deny http requests from other namespaces of the same cluster?

In a cluster with 2 namespaces (ns1 and ns2), I deploy the same app (deployment) and expose it with a service. I thought separate namespaces would prevent executing curl http://deployment.ns1 from a pod in ns2, but apparently it's possible. So my question is, how do I allow/deny such cross-namespace operations? For example: pods in ns1 should accept requests from any namespace; pods (or services?) in ns2 should deny all requests from other namespaces
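The standard tool for this is a NetworkPolicy (it only takes effect if the cluster's CNI plugin enforces them, e.g. Calico or Cilium); a sketch for ns2 that admits traffic only from pods in the same namespace:

```yaml
# Sketch: in ns2, allow ingress only from pods in ns2 itself;
# traffic from other namespaces is denied by this policy.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
  namespace: ns2
spec:
  podSelector: {}            # applies to all pods in ns2
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}    # any pod, but only within ns2
```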

How can I find the Kubernetes memory default for a cron job?

I have a Kubernetes cron job that gets an OOMKilled (Out of Memory) message when running. This specific cron job runs once a day. The node itself has 4 GB of RAM. I found something somewhere that said the default for a cron job is 100 MB? Where can I view or change the default for Kubernetes cron jobs?
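If there is such a default, it typically comes from a LimitRange in the namespace (kubectl describe limitrange would show it); either way, the limit can be set explicitly on the CronJob's container. A sketch with example values:

```yaml
# Sketch: explicit memory request/limit on a CronJob container, overriding
# any namespace LimitRange default. Numbers and names are examples.
apiVersion: batch/v1beta1      # batch/v1 on newer clusters
kind: CronJob
metadata:
  name: nightly-job
spec:
  schedule: "0 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: worker
              image: example/worker:latest
              resources:
                requests:
                  memory: "256Mi"
                limits:
                  memory: "1Gi"
```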

Runtime Processors Calculation for the JVM in Kubernetes Containers

I am running a JVM in a K8s pod, and I have allocated it 512m of CPU and 1Gi of memory. In the context of Runtime.getRuntime().availableProcessors(), what would be the value returned? I want to know this since some libraries like the Couchbase Java client, RxJava etc. rely on this value to determine the number of threads in various thread pools. How is the value 512m interpreted? Would it take the floor or the ceiling of this value? Or is there a different way to compute this
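For reference, the CPU figure comes from the pod's resources stanza, and container-aware JVMs (8u191+, 11+) generally derive availableProcessors from the cgroup CPU quota those values translate into, rounding a fractional quota up; so a 512m limit typically appears as 1 processor. A sketch of the relevant spec (image name is a placeholder):

```yaml
# Sketch: the JVM reads the cgroup limits these values become.
# With a cpu limit of 512m (0.512 CPU), a container-aware JVM
# typically reports availableProcessors() == 1 (fractions rounded up).
containers:
  - name: app
    image: example/jvm-app:latest
    resources:
      requests:
        cpu: "512m"
        memory: "1Gi"
      limits:
        cpu: "512m"
        memory: "1Gi"
```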

How to see which node/pod served a Kubernetes Ingress request?

I have a Deployment with three replicas, each started on a different node, behind an Ingress. For tests and troubleshooting, I want to see which pod/node served my request. How is this possible? The only way I know is to open the logs on all of the pods, make my request and search for the pod that has my request in its access log. But this is complicated and error-prone, especially on production apps with requests from other users. I'm looking for something like an HTTP response header like
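One common approach is to expose the pod and node names to the application via the Downward API and have the app echo them in a response header; a sketch (the env var names and the header the app would set, e.g. X-Served-By, are assumptions, not Kubernetes features):

```yaml
# Sketch: Downward API env vars the application could return in a header.
containers:
  - name: web
    image: example/web:latest        # placeholder image
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      - name: NODE_NAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName
```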

Kubernetes New user can view all pods without any rolebindings

kube-apiserver.service is running with --authorization-mode=Node,RBAC $ kubectl api-versions | grep rbac rbac.authorization.k8s.io/v1 rbac.authorization.k8s.io/v1beta1 I believe this is enough to enable RBAC. However, any new user I create can view all resources without any rolebindings. Steps to create a new user: $ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes nonadmin-csr.json | cfssljson -bare nonadmin $ kubectl config set-cluster nonadmin --certi

Kubernetes How can I restore the master node after it fails or its instance goes down?

I am running an API service with Kubernetes. It is set up as 3 AWS instances (one master node, two worker nodes). I am considering a scenario where the instance that hosts the master node goes down or crashes; whatever happens, how should I restore the master node? When I used docker-swarm, it automatically came back up, then attached to a worker (or a worker attached to it) and it worked fine! I tried kubeadm init again but it shows errors: error execution phase preflight: [preflight] Some fatal er

How to cp data from one container to another using kubernetes

Say we have a simple deployment.yml file: apiVersion: apps/v1 kind: Deployment metadata: namespace: ikg-api-demo name: ikg-api-demo spec: selector: matchLabels: app: ikg-api-demo replicas: 3 template: metadata: labels: app: ikg-api-demo spec: containers: - name: ikg-api-demo imagePullPolicy: Always image: example.com/main_api:private_key ports: - containerPort: 80 the problem is that this image/c
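If the goal is to make files from one image available to another container in the same pod, the usual pattern is an emptyDir shared volume populated by an initContainer; a sketch with placeholder image names and paths:

```yaml
# Sketch: an initContainer copies data from its image into a shared emptyDir
# that the main container then mounts. Images and paths are placeholders.
spec:
  volumes:
    - name: shared-data
      emptyDir: {}
  initContainers:
    - name: seed-data
      image: example.com/data-image:latest
      command: ["sh", "-c", "cp -r /data/. /shared/"]
      volumeMounts:
        - name: shared-data
          mountPath: /shared
  containers:
    - name: ikg-api-demo
      image: example.com/main_api:private_key
      volumeMounts:
        - name: shared-data
          mountPath: /imported-data
```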

Kubernetes Specify Dynamically Created EBS Volume Names

When dynamically creating Persistent Volumes in a K8s cluster running on EKS, using gp2 as the default storage class, is it possible to name the EBS volumes that are created? Currently, they get names like kubernetes-dynamic-pvc-d8896767-a1c9-11e9-bb21-0e3fcd7b2ecc but it would be nice for volume management to have the labels be more clear.

Kubernetes Multiple paths accessing one backend via traefik ingress

I want to use traefik ingress to achieve the following functions just like nginx: nginx config: location she/admin/art/ { proxy_pass http://172.18.214.174:801/admin/; } location he/admin/art/ { proxy_pass http://172.18.214.174:801/admin/; } location my/admin/art/ { proxy_pass http://172.18.214.174:801/admin/; } If I want to achieve this effect in the traefik ingress I need to use annotations: traefik.ingress.kubernetes.io/redirect-regex: ^http://www.h

Kubernetes securityContext.privileged: Forbidden: disallowed by cluster policy

I can't start pod which requires privileged security context. PodSecurityPolicy: apiVersion: policy/v1beta1 kind: PodSecurityPolicy metadata: name: pod-security-policy spec: privileged: true allowPrivilegeEscalation: true readOnlyRootFilesystem: false allowedCapabilities: - '*' allowedProcMountTypes: - '*' allowedUnsafeSysctls: - '*' volumes: - '*' hostPorts: - min: 0 max: 65535 hostIPC: true hostPID: true hostNetwork: true runAsUser: rule: 'RunAsAny'
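Defining the PodSecurityPolicy is not enough on its own; the workload's service account also needs RBAC permission to "use" it. A sketch of that binding (the namespace and service account names are assumptions):

```yaml
# Sketch: grant the workload's service account permission to use the PSP.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: use-pod-security-policy
rules:
  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["pod-security-policy"]
    verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: allow-privileged-psp
  namespace: default              # namespace of the workload (assumption)
subjects:
  - kind: ServiceAccount
    name: default                 # service account the pod runs as (assumption)
    namespace: default
roleRef:
  kind: ClusterRole
  name: use-pod-security-policy
  apiGroup: rbac.authorization.k8s.io
```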

Kubernetes host_path deleter only supports /tmp/.+ but received provided /mnt/disk/kafka

I am trying to delete a persistent volume in order to start from scratch with a used Kafka cluster in Kubernetes. I changed the reclaim mode from Retain to Delete, but I am not able to delete two of the three volumes: [yo@machine kafka_k8]$ kubectl describe pv kafka-zk-pv-0 Name: kafka-zk-pv-0 Labels: type=local StorageClass: Status: Failed Claim: kafka-ns/datadir-0-poc-cp-kafka-0 Reclaim Policy: Delete Access Modes: RWO Capacity: 500Gi Messag

ingress-controller and Google kubernetes

I have created an ingress resource in my Kubernetes cluster on google cloud. apiVersion: extensions/v1beta1 kind: Ingress metadata: name: gordion annotations: nginx.ingress.kubernetes.io/rewrite-target: / kubernetes.io/ingress.global-static-ip-name: gordion-ingress networking.gke.io/managed-certificates: gordion-certificate,gordion-certificate-backend spec: rules: - host: backend.gordion.io http: paths: - path: / backend: serviceName: backe

Kubernetes 'kubectl top pods' Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)

When I try to run kubectl top nodes I'm getting the output: Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io) The metrics server is able to scrape the metrics; in its logs I can see ScrapeMetrics: time: 49.754499ms, nodes: 4, pods: 82 ...Storing metrics... ...Cycle complete... But the endpoints for the metrics service are missing. How can I resolve this issue? kubectl get apiservices |egrep metrics v1beta

Kubernetes Reference a values key with a different value in helm/go templates

I want to know if it's possible to use a value as the object key for a different value, like this: ... spec: replicas: {{ .Values[.Release.Namespace].replicas }} ... My values.yaml looks like: production: replicas: 2 staging: replicas: 1 And I install like this: helm install --namespace production my-release . If not, is there any other way to achieve this?
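Go templates don't support the bracket syntax shown, but the built-in index function performs the same lookup; a sketch of how that line might look:

```yaml
# Sketch: use the `index` template function to look up a values key
# named after the release namespace.
spec:
  replicas: {{ index .Values .Release.Namespace "replicas" }}
  # or equivalently:
  # replicas: {{ (index .Values .Release.Namespace).replicas }}
```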

VSCode devcontainer connect to kubernetes cluster on vm

From a dotnet/core/sdk devcontainer (using VSCode Remote Containers), debug a .NET Core app running in a kubernetes cluster hosted on another vm of my host machine. Current Setup Docker Desktop for Windows running via Hyper-V default DockerNAT network adapter Ubuntu VM (multipass) running on same Hyper-V host microk8s cluster running on this ubuntu instance default "Default Switch" network adapter Errors When I try to ping the ubuntu vm from a docker container by hostname, the IP is

Kubernetes Istio Mesh Federation Locality Aware

We are trying to migrate our microservices architecture to K8s and Istio. We will have two different k8s clusters: one for the frontend applications and the other for the backend apps. Our initial idea is to configure each cluster as a separate Istio mesh. My question is: can we keep locality-aware routing between clusters when a frontend app makes a request against a backend app? I have read it is possible when you have one mesh distributed among K8s clusters, but I'm not sure if this feature kee

How does kubernetes provide HA for stateful applications with volumes attached?

I am unable to configure my stateful application to be resilient to kubernetes worker failure (the one where my application pod exists) $ kk get pod -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES example-openebs-97767f45f-xbwp6 1/1 Running 0 6m21s 192.168.207.233 new-kube-worker1 <none> <none> Once I take the worker down, kubernetes notices that

Kubernetes Rolling update strategy not giving zero downtime in live traffic

I'm using rolling update strategy for deployment using these two commands: kubectl patch deployment.apps/<deployment-name> -n <namespace> -p '{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}' kubectl apply -f ./kube.deploy.yml -n <namespace> kubectl apply -f ./kube_service.yml -n <namespace> YAML properties for rolling update: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: "applyupui-persist-service-deployment"
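Zero-downtime rollouts generally hinge on a readinessProbe, so new pods only receive traffic once they are actually ready, in addition to the surge/unavailable settings; a hedged sketch (container name, port and path are assumptions):

```yaml
# Sketch: strategy plus readinessProbe commonly needed for zero-downtime rollouts.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    spec:
      containers:
        - name: applyupui-persist-service
          image: example/service:latest    # placeholder image
          readinessProbe:
            httpGet:
              path: /health                # assumed health endpoint
              port: 8080
            periodSeconds: 5
```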

How to run Dgraph on bare-metal Kubernetes cluster

I am trying to set up Dgraph as an HA cluster, but it won't deploy if no volumes are present; directly applying the provided config on a bare-metal cluster doesn't work. $ kubectl get pod --namespace dgraph dgraph-alpha-0 0/1 Pending 0 112s dgraph-ratel-7459974489-ggnql 1/1 Running 0 112s dgraph-zero-0 0/1 Pending 0 112s $ kubectl describe pod/dgraph-alpha-0 --namespace dgraph Events: Type R
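On bare metal there is usually no default dynamic provisioner, so the StatefulSets' PVCs stay Pending; one option is to pre-create local PersistentVolumes that match the chart's claims. A rough sketch of one such PV (storage class name, capacity, path and node name are all assumptions):

```yaml
# Sketch: a manually created local PersistentVolume for one Dgraph replica.
# Assumes a matching no-provisioner StorageClass named "local-storage".
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dgraph-alpha-pv-0
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage      # must match the PVCs' storageClassName
  local:
    path: /mnt/disks/dgraph-0          # placeholder path on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-node-1        # placeholder node name
```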

Kubernetes How to use Istio Virtual Service in-between front and back services

I am totally new to Istio, and the pitch looks very exciting. However, I can't make it work, which probably means I don't use it properly. My goal is to implement session affinity between 2 services, which is why I ended up using Istio in the first place. However, I did a very basic test, and it does not seem to work: I have a Kubernetes demo app which has a front service, a stateful service, and a stateless service. From a browser, I access the front service, which dispatches the request either on the state
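For session affinity specifically, Istio's mechanism is a DestinationRule with a consistentHash load balancer rather than the VirtualService itself; a sketch keyed on a cookie (the host and cookie name are assumptions):

```yaml
# Sketch: consistent-hash ("sticky") load balancing for one service.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: stateful-service-affinity
spec:
  host: stateful-service.default.svc.cluster.local   # assumed service host
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpCookie:
          name: session-affinity     # cookie name is an example
          ttl: 3600s
```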

Kubernetes k8s - Keep pod up even if sidecar crashed

I have a pod with a sidecar. The sidecar does file synchronisation and is optional. However it seems that if the sidecar crashes, the whole pod becomes unavailable. I want the pod to continue serving requests even if its sidecar crashed. Is this doable?

Is it possible to disable kubernetes dashboard tls check

I am logging in to the Kubernetes dashboard on my local machine (http://kubernetes.dolphin.com:8443/#/login), and I defined a virtual domain name in /etc/hosts: 192.168.31.30 kubernetes.dolphin.com. Now when I log in to the Kubernetes dashboard using this domain, it gives me this message: Insecure access detected. Sign in will not be available. Access Dashboard securely over HTTPS or using localhost. Is it possible to disable the Kubernetes dashboard (kubernetesui/dashboard:v2.0.3) TLS security check in the Kubernetes dashboard y

Prometheus + Kubernetes - do all pods get values even if short lived?

Since Prometheus scrapes metrics at a regular interval (30 seconds or so), and some kubernetes pods only live a few seconds, can we depend on the metric kube_pod_created to actually show a value for each pod that existed in the system? My worry is that it only sees pods that existed during the scrape. But I'm not sure of the exporter implementation to be sure.

Kubernetes Kubernetes-deployed application cannot execute Swagger API

I have deployed an application on Kubernetes; the application has one POST REST API, and that API has Swagger. I have added a network policy for the application so it can be accessed from outside. I am able to access the application and open the Swagger UI, but while executing the API I get the error "TypeError: NetworkError when attempting to fetch resource". Looking at a few threads on Google, it seems the issue is with CORS. Has anybody encountered such an issue and fixed it? When I run the applic

RabbitMQ Cluster Kubernetes Operator on minikube

I'm trying to set up RabbitMQ on Minikube using the RabbitMQ Cluster Operator: When I try to attach a persistent volume, I get the following error: $ kubectl logs -f rabbitmq-rabbitmq-server-0 Configuring logger redirection 20:04:40.081 [warning] Failed to write PID file "/var/lib/rabbitmq/mnesia/rabbit@rabbitmq-rabbitmq-server-0.rabbitmq-rabbitmq-headless.default.pid": permission denied 20:04:40.264 [error] Failed to create Ra data directory at '/var/lib/rabbitmq/mnesia/rabbit@rabbit

Kubernetes Ingress-nginx - socket hang up, connResetException?

My system: Ubuntu using microk8s and kubectl. I'm taking an online course and have run into an issue I can't find a solution to. I can't access the following URL internally in my application: http://ingress-nginx-controller.ingress-nginx.svc.cluster.local I get the following error in my web browser "page": "/", "query": {}, "buildId": "development", "isFallback": false, "err": {"name": "Error","message"

What's the correct way to stop a pod in Kubernetes

I'm having problems with my Kubernetes cluster. I have a service with a deployment associated. I have several replicas of the pods in this deployment up and running, with 2 containers for each deployment. I'm using a Rolling Update strategy for redeploying, with parameters maxSurge: 1 and maxUnavailable: 0. When I change a deployment the pods are replaced correctly, adding one new pod and then deleting an old one until the whole deployment is changed. However, on the switch of the first pod from o
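The usual way to make that hand-off clean is to give the old pod a graceful shutdown window: a preStop hook that waits briefly (so endpoint removal propagates before the process exits) together with terminationGracePeriodSeconds; a sketch with example values:

```yaml
# Sketch: graceful shutdown settings in the pod template.
spec:
  terminationGracePeriodSeconds: 60
  containers:
    - name: app
      image: example/app:latest      # placeholder image
      lifecycle:
        preStop:
          exec:
            command: ["sh", "-c", "sleep 15"]   # let endpoint removal propagate
```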

Kubernetes k8s pod readiness probe failed: read tcp xxx -> yyy: read: connection reset by peer

I'm running Fargate on EKS and I have about 20~30 pods running. After about a few days (5 ~ 7 days; experienced two times), they begin to refuse Readiness probe HTTP requests. I captured the pod's description at that time. I want to point out the first event - connection reset by peer. I've come across this issue in Istio and the root cause can be the same. However, I don't use Istio so I'm stuck where to go. I'm going to attach partial data of my ingress, service, and deployment below. Events:

Kubernetes helm/kubernetes sending mail connection times out

I have some backend code that sends mail; however, when I deploy it in my k8s environment, I get a connection timeout. I'm thinking this is because the port (465) is closed, but I can't seem to find out how to open it. These are the port configurations I've done so far to try to make it work, but the result is still the same. deployment.yaml: containers: - name: {{ .Chart.Name }} image: "{{ .Values.image.repository }}:{{ .Chart.AppVersion }}" imagePullPolicy: {{ .V

Shared Folder with Azure File on kubernetes pod doesn't work

I have an issue with my deployment when I try to share a folder using a Kubernetes volume. The folder is shared using Azure File Storage. If I deploy my image without sharing the folder (/integrations), the app starts; as shown in the image below, the pod (viewed via Lens) is up and running. If I add the mapping of the folder to a volume, the pod gets stuck in an error state with this message. Here is my YAML deployment: apiVersion: apps/v1 kind: Deployment metadata: namespace: sandbox-pi
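For an Azure File share, the pod usually mounts it via an azureFile volume (or a PVC backed by one), with the storage-account credentials in a secret; a rough sketch with placeholder names:

```yaml
# Sketch: mounting an Azure File share into /integrations.
# Secret name, share name and image are placeholders.
spec:
  containers:
    - name: app
      image: example/app:latest
      volumeMounts:
        - name: integrations
          mountPath: /integrations
  volumes:
    - name: integrations
      azureFile:
        secretName: azure-files-secret   # contains azurestorageaccountname/key
        shareName: integrations
        readOnly: false
```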
