
CIOs take note: talent will walk without real training and leadership

Tech talent, especially with advanced and specialized skills, remains elusive. Findings from a recent global IT HR trends report by Gi Group show that, on average, 47% of enterprises struggle with sourcing and retaining talent. As a consequence, turnover remains high.

Another international study by Cegos highlights that 53% of 200 directors or managers of information systems in Italy say the difficulty of attracting and retaining IT talent is something they face daily. Cybersecurity is the most pressing IT problem, but a slight majority feels confident of tackling it. Conversely, only 8% think they'll be able to solve the IT talent problem. IT team skills development and talent retention are the next biggest issues facing CIOs in Italy, and only 24% and 9%, respectively, think they can successfully address them.

"Talents aren't rare," says Cecilia Colasanti, CIO of Istat, the National Institute of Statistics. "They're there but they're not valued. That's why, more often, they prefer to go abroad. For me, talent is the right person in the right place. Managers, including CIOs, must have the ability to recognize talents, make them understand they've been identified, and enhance them with the right opportunities."

The CIO as protagonist of talent management

Colasanti has very clear ideas on how to manage her talents to create a cohesive and motivated group. "The goal I set myself as CIO was to release increasingly high-quality products for statistical users, both internal and external," she says. "I want to be concrete and close the projects we've opened, to ensure the institution continues to improve with the contribution of IT, which is a driver of statistical production. I have the task of improving the IT function, the quality of the products released, the relevance of the management, and the well-being of people."

Istat's IT department currently has 195 people, representing about 10% of the institute's entire staff. Colasanti's first step after her appointment as CIO in October 2023 was to personally meet everyone assigned to her department for an interview.

"I've been working at Istat since 2001 and almost everyone knows each other," she says. "I've held various roles in the IT department, and in my latest role as CIO, I want to listen to everyone to gather every possible viewpoint. Because of how well we know each other, I feel my colleagues have high expectations of our work together. That's why I try to establish a frank dialogue and avoid ambiguity. But I make it clear that listening doesn't mean delegating responsibility. I accept some proposals, reject others, and try to justify my choices."

Another move was to reinstate the "two problems, two solutions" initiative launched at Istat many years ago. Colasanti asked staff, on a voluntary basis, to identify two problems and propose two solutions. She then processed the material and shared the results in face-to-face meetings, commenting on the proposals and evaluating those to be followed up.

"I've been very vocal about this initiative," she says. "But I also believe it's been an effective way to cement the relationship of trust with my colleagues."

Some of the inquiries related to career opportunities and technical issues, but the most frequent pain points that emerged were internal communication and staff shortages. Colasanti spoke with everyone, clarifying which points she could or couldn’t act on. Career paths and hiring in the public sector, for example, follow precise procedures where little could be influenced.

"I tried to address all the issues from a proactive perspective," she says. "Where I perceived a generic resistance to change rather than a specific problem, I tried to focus on intrinsic motivation and people's commitment. It's important to explain the strategies of the institution and the role of each person in achieving objectives. After all, people need and have the right to know the context in which they operate, and to be aware of how their work affects the bigger picture."

Engagement must be built day by day, so Colasanti regularly meets with staff including heads of department and service managers.

Small enterprise, big concerns

The case of Istat stands out for the size of its IT department, but in SMEs, IT functions can be just a handful of people, including the CIO, and much of the work is done by external consultants and suppliers. It's a structure CIOs have to work within, dividing themselves between coordinating various resources across different projects and the actual IT work. Outsourcing to the cloud is an additional support, but CIOs would generally like to have more in-house expertise rather than depend on partners to control supplier products.

"Attracting and retaining talent is a problem, so things are outsourced," says the CIO of a small healthcare company with an IT team of three. "You offload the responsibility and free up internal resources at the risk of losing know-how in the company. But at the moment, we have no other choice. We can't offer the salaries of a large private group, and IT talent changes jobs every two years, so keeping people motivated is difficult. We hire a candidate, go through the training, and see them grow only to see them leave. But our sector is highly specialized and the necessary skills are rare."

The sirens of the market are tempting for those with skills that command a premium, and the private sector is able to attract talent more easily than the public sector thanks to its hiring flexibility and career paths.

"The public sector offers the opportunity to research, explore and deepen issues that private companies often don't invest in because they don't see the profit," says Colasanti. "The public sector has the good of the community as its mission and can afford long-term investments."

Training builds resource retention

To meet demand, CIOs are prioritizing hiring new IT profiles and training their teams, according to the Cegos international barometer. Offering reskilling and upskilling is an effective way to overcome the pitfalls of talent acquisition and retention.

"The market is competitive, so retaining talent requires barriers to exit," says Emanuela Pignataro, head of business transformation and execution at Cegos Italia. "If an employer creates a stimulating and rewarding environment with sufficient benefits, people are less likely to seek other opportunities or get caught up in the competition. Many feel they're burdened with too many tasks they can't cope with on their own, and these are people with the most valuable skills, but who often work without much support. So if the company spends on training or onboarding new people who support these people, they create reassurance, which generates loyalty."

In fact, Colasanti is a staunch supporter of lifelong learning, and of the experience that brings balance and management skills. She doesn't have a large budget for IT training, yet solutions to certain requests are within reach.

"In these cases, I want serious commitment," she says. "The institution invests and the course must give a result. A higher budget would be useful, of course, especially for an ever-evolving subject like cybersecurity."

The need for leadership

CIOs also recognize the importance of following people closely, empowering them, and giving them a precise and relevant role that enhances motivation. It’s also essential to collaborate with the HR function to develop tools for welfare and well-being.

According to the Gi Group study, the factors that IT candidates in Italy consider a priority when choosing an employer are, in descending order, salary, a hybrid job offer, work-life balance, the possibility of covering roles that don’t involve high stress levels, and opportunities for career advancement and professional growth.

But there's another aspect that helps solve the age-old issue of talent management: CIOs need to give more recognition to the role of their own leadership. At the moment, Italian IT directors place it at the bottom of their key qualities. In the Cegos study, technical expertise, strategic vision, and the ability to innovate come first, while leadership is a distant second. But the CIO's leadership is foundational, even when people disagree with their choices.

"I believe in physical presence in the workplace," says Colasanti. "Istat has a long tradition of applying teleworking and implementing smart working, which everyone can access if they wish. Personally, I prefer to be in the office, but I respect the need to reconcile private life and work, and I have no objection to agile working. I'm on site every day, though. My colleagues know I'm here."

What is AI? (Grades 5-8)


Artist illustration of an unmanned passenger aircraft in flight during sunrise in the city.

This article is for students grades 5-8.

What is AI?

Artificial intelligence, or AI, is a type of technology that helps machines and computers have "thinking" abilities similar to humans. Devices using AI can learn words and concepts, recognize objects, see patterns, or make predictions. They can also be taught how to work autonomously. AI is often used to help people understand and solve problems more quickly than they could on their own.

AI includes:

  • Machine learning: This type of AI looks at large amounts of data and learns how to make fast and accurate predictions based on that data.
  • Deep learning: This type helps computers operate much like the human brain. It uses several layers of "thought" to recognize patterns and learn new information. Deep learning is a type of machine learning.
  • Generative AI: A human can use generative AI to create text, videos, images, and more. It is based on deep learning.
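The "machine learning" idea above can be tried out in a few lines of code. The sketch below is a toy illustration, not real NASA software: the made-up data pairs hours of practice with quiz scores, the program "learns" the straight-line pattern hiding in that data, then predicts a score it was never shown.

```python
# Example data: hours of practice -> quiz score (made-up numbers).
hours = [1, 2, 3, 4]
scores = [60, 70, 80, 90]

# "Learn" the pattern: fit a straight line (score = intercept + slope * hours)
# using the classic least-squares formulas.
n = len(hours)
mean_h = sum(hours) / n
mean_s = sum(scores) / n
slope = sum((h - mean_h) * (s - mean_s) for h, s in zip(hours, scores)) \
        / sum((h - mean_h) ** 2 for h in hours)
intercept = mean_s - slope * mean_h

def predict(h):
    """Use the learned pattern to guess the score for h hours of practice."""
    return intercept + slope * h

print(predict(5))  # the data rises 10 points per hour, so this prints 100.0
```

Real machine learning uses the same idea, just with far more data and far more complicated patterns than a straight line.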
NASA's Perseverance Mars rover alongside the rock nicknamed "Cheyava Falls," in this July 23, 2024, selfie made up of 62 individual images. "Cheyava Falls," which has features that may bear on the question of whether the Red Planet was long ago home to microscopic life, is to the left of the rover near the center of the image. The small hole visible in the rock is where Perseverance collected the "Sapphire Canyon" core sample.
NASA/JPL-Caltech/MSSS

How is NASA using AI?

NASA has found uses for AI in many of its missions and programs.

For missions to the Moon, AI can use satellite imagery to create detailed 3D maps of dark craters. This data could help scientists plan missions, spot hazards, and even identify where future crews might find water ice. On Mars, the Perseverance rover uses AI to drive itself autonomously. It takes pictures of the ground, sees obstacles, and chooses the safest path.

AI also helps NASA search for planets outside our solar system. For example, AI has helped citizen scientists find over 10,000 pairs of binary stars. These pairs orbit each other and block each other's light. This information could help scientists search for new planets and learn more about how stars form.

-------------------------------

Words to Know

Autonomous: acting or operating independently, without external control. An autonomous technology can perform duties without human intervention.

Citizen scientist: a member of the public, often a volunteer, who collects data that can be used by scientists. When members of the public participate in research in this way, it’s called citizen science.

-------------------------------

NASA also uses AI to support its work on Earth. The agency uses AI to aid disaster relief efforts during and after natural disasters like hurricanes or wildfires. For example, AI can count tarps on roofs in satellite images to measure damage after a storm. NASA is also supporting flight controllers and pilots by using AI to plan better flight routes, making air travel safer and more efficient.

AI is helping NASA explore space, protect people, and make amazing discoveries!

The blue tentacle-like arms containing gecko-like adhesive pads, attached to an Astrobee robotic free-flyer, reach out and grapple a "capture cube" inside the International Space Station's Kibo laboratory module. The experimental grippers, outfitted on the toaster-sized Astrobee, demonstrated autonomous detection and capture techniques that may be used to remove space debris and service satellites in low Earth orbit.
NASA

Advice From NASA AI Experts

"AI is a great field for people who like solving problems, building things, or asking questions about how the world works. People use AI to help doctors understand diseases, to teach robots how to explore space, and to help communities prepare for things like floods or wildfires. If you like using technology to help people and discover new things, AI could be a great career for you!" – Krista Kinnard, NASA's Deputy Chief AI Officer

In this illustration, astronauts work on the lunar surface as part of NASA’s Artemis program.
NASA

"Start exploring coding and STEM activities like robotics clubs. Just remember to always stay curious, keep practicing, and don't be afraid of making mistakes. This really helps you learn."

Martin Garcia

AI Adoption and Innovation Lead, NASA’s Johnson Space Center

Career Corner

NASA roles that may involve AI include:

  • Astronauts: Astronauts on the International Space Station can use an AI "digital assistant" to get medical recommendations. This is helpful when communication with Earth is interrupted. It could also be useful on future missions to distant destinations like Mars.
  • Engineers: Engineers can use AI to help them generate designs for things like new spacecraft.
  • Astronomers: AI helps astronomers analyze satellite and deep space telescope data to find stars and exoplanets.
  • Meteorologists: Weather experts can use machine learning to make climate projections.
  • Programmers: Programmers can use AI to update code used in older missions, bringing it up to modern standards.
  • IT professionals: AI can enable IT experts to understand outages across NASA, allowing them to get programs back up and running faster.
  • Program managers: Program managers can use AI to plan and model NASA missions.

Explore More

Build Your Computer Science Skills With NASA
Gaining Traction on Mars Activity
NASA Space Detective: Can You Spot a Star or a Galaxy
Video: Hack Into Computer Science With NASA
Artificial Intelligence at NASA

Data Centers’ Insatiable Demand for Electricity Will Change the Entire Energy Sector

10/20/25
ENERGY SECURITY

AI models are out on an energy-intensive training session with no end in sight. The training takes place on the servers in the world's data centers, which currently number just over 10,000. Large language models and the generative AI that creates images and videos, in particular, consume huge amounts of electricity.

They are so voracious that the International Energy Agency (IEA) has estimated that the power they needed for computing increased a billion-fold from 2022 to 2024.

read more

Electric Cars May Be the β€œGreen” Choice, but They're Driving a Scramble for Critical Minerals

10/27/25
CRITICAL MINERALS

Our cars are responsible for about 20 per cent of global carbon emissions. The move to electric vehicles (EVs) is central to the effort to decarbonize the world's transport.

Electrifying only half of passenger cars could cut 1.5 billion tons of COβ‚‚ annually.

read more

Framework Reveals a Smarter and Faster Way to Retire U.S. Coal Plants

By: Staff
10/25/25
ENERGY SECURITY

Even as coal power continues its steady decline in the United States, more than a hundred plants still have no retirement plansβ€”a gap large enough to derail national climate goals. A new study led by UC Santa Barbara researchers offers a way forward, showing how targeted, data-driven approaches could help accelerate theΒ transition.

read more

Worth checking ep.3

By: hoek

Ahh, today is that glorious day when I realize that my plan to write one or even two articles a month on my wonderful blog is failing. And then I remind myself that my worth checking series is not only there to share knowledge and interesting material but also to save my ass in just such situations. To

Worth checking ep.2

By: hoek

I hope you haven't forgotten about the Worth checking series. Here you can check out the latest one. I thought I'd post monthly in this format, but it turns out I'd have more entries like this than my own :) If I were writing maybe 10 articles a month, it would make more sense.

Worth checking ep.1

By: hoek

Welcome to the first article of a never-ending (or till-I-die) series called Worth checking.

Weird introduction

Being a security researcher or pentester, red teamer, hacker, or other specialist in a

Popeye - A Kubernetes Cluster Resource Sanitizer

By: Unknown

Popeye - A Kubernetes Cluster Sanitizer

Popeye is a utility that scans live Kubernetes clusters and reports potential issues with deployed resources and configurations. It sanitizes your cluster based on what's deployed and not what's sitting on disk. By scanning your cluster, it detects misconfigurations and helps you to ensure that best practices are in place, thus preventing future headaches. It aims at reducing the cognitive overload one faces when operating a Kubernetes cluster in the wild. Furthermore, if your cluster employs a metrics-server, it reports potential resource over/under allocations and attempts to warn you should your cluster run out of capacity.

Popeye is a read-only tool; it does not alter any of your Kubernetes resources in any way!


Installation

Popeye is available on Linux, OSX and Windows platforms.

  • Binaries for Linux, Windows and Mac are available as tarballs in the release page.

  • For OSX/Linux using Homebrew/LinuxBrew

    brew install derailed/popeye/popeye
  • Building from source Popeye was built using go 1.12+. In order to build Popeye from source you must:

    1. Clone the repo

    2. Add the following command in your go.mod file

      replace (
      github.com/derailed/popeye => MY_POPEYE_CLONED_GIT_REPO
      )
    3. Build and run the executable

      go run main.go

    Quick recipe for the impatient:

    # Clone outside of GOPATH
    git clone https://github.com/derailed/popeye
    cd popeye
    # Build and install
    go install
    # Run
    popeye

PreFlight Checks

  • Popeye uses 256-color terminal mode. On *nix systems make sure TERM is set accordingly.

    export TERM=xterm-256color

Sanitizers

Popeye scans your cluster for best practices and potential issues. Currently, Popeye only looks at nodes, namespaces, pods and services. More will come soon! We are hoping Kubernetes friends will pitch in to make Popeye even better.

The aim of the sanitizers is to pick up on misconfigurations, i.e. things like port mismatches, dead or unused resources, metrics utilization, probes, container images, RBAC rules, naked resources, etc...

Popeye is not another static analysis tool. It runs against live clusters, inspecting Kubernetes resources and sanitizing them as they are in the wild!

Here is a list of some of the available sanitizers:

Resource (aliases) and their sanitizers:

  • Node (no): conditions, ie not ready, out of mem/disk, network, pids, etc; pod tolerations referencing node taints; CPU/MEM utilization metrics, trips if over limits (default 80% CPU/MEM)
  • Namespace (ns): inactive; dead namespaces
  • Pod (po): pod status; container statuses; ServiceAccount presence; CPU/MEM on containers over a set CPU/MEM limit (default 80% CPU/MEM); container image with no tags; container image using latest tag; resources request/limits presence; liveness/readiness probes presence; named ports and their references
  • Service (svc): endpoints presence; matching pod labels; named ports and their references
  • ServiceAccount (sa): detects potentially unused SAs
  • Secret (sec): detects potentially unused secrets or associated keys
  • ConfigMap (cm): detects potentially unused configmaps or associated keys
  • Deployment (dp, deploy): unused; pod template validation; resource utilization
  • StatefulSet (sts): unused; pod template validation; resource utilization
  • DaemonSet (ds): unused; pod template validation; resource utilization
  • PersistentVolume (pv): unused; checks volume bound or volume error
  • PersistentVolumeClaim (pvc): unused; checks bound or volume mount error
  • HorizontalPodAutoscaler (hpa): unused; utilization; max burst checks
  • PodDisruptionBudget (pdb): unused; checks minAvailable configuration
  • ClusterRole (cr): unused
  • ClusterRoleBinding (crb): unused
  • Role (ro): unused
  • RoleBinding (rb): unused
  • Ingress (ing): valid
  • NetworkPolicy (np): valid
  • PodSecurityPolicy (psp): valid

You can also see the full list of codes

Save the report

To save the Popeye report to a file, pass the --save flag to the command. By default Popeye creates a temp directory and stores the report there; the path of the temp directory is printed on STDOUT. If you need to specify the output directory for the report, use the environment variable POPEYE_REPORT_DIR. By default, the name of the output file follows the format sanitizer_<cluster-name>_<time-UnixNano>.<output-extension> (e.g. "sanitizer-mycluster-1594019782530851873.html"). If you need to specify the output file name for the report, pass the --output-file flag with the filename you want as parameter.

Example to save report in working directory:

  $ POPEYE_REPORT_DIR=$(pwd) popeye --save

Example to save report in working directory in HTML format under the name "report.html" :

  $ POPEYE_REPORT_DIR=$(pwd) popeye --save --out html --output-file report.html

Save the report to S3

You can also save the generated report to an AWS S3 bucket (or another S3-compatible object storage) by providing the --s3-bucket flag. As its parameter, provide the name of the S3 bucket where you want to store the report. To save the report in a bucket subdirectory, provide the bucket parameter as bucket/path/to/report.

Under the hood, the AWS Go library is used, which handles the credential loading. For more information, check out the official documentation.

Example to save report to S3:

popeye --s3-bucket=NAME-OF-YOUR-S3-BUCKET/OPTIONAL/SUBDIRECTORY --out=json

If AWS S3 is not your bag, you can further define an S3-compatible storage (OVHcloud Object Storage, Minio, Google Cloud Storage, etc...) using s3-endpoint and s3-region as so:

popeye --s3-bucket=NAME-OF-YOUR-S3-BUCKET/OPTIONAL/SUBDIRECTORY --s3-region YOUR-REGION --s3-endpoint URL-OF-THE-ENDPOINT

Run public Docker image locally

You don't have to build and/or install the binary to run popeye: you can just run it directly from the official Docker repo on DockerHub. The default command when you run the Docker container is popeye, so you just need to pass whatever CLI args are normally passed to popeye. To access your clusters, map your local kube config directory into the container with -v:

  docker run --rm -it \
    -v $HOME/.kube:/root/.kube \
    derailed/popeye --context foo -n bar

Running the above docker command with --rm means that the container gets deleted when popeye exits. When you use --save, the report is written to /tmp inside the container, which is deleted along with the container when popeye exits, so you lose the output. To get around this, map your local /tmp to the container's /tmp. NOTE: You can override the default output directory location by setting the POPEYE_REPORT_DIR env variable.

  docker run --rm -it \
    -v $HOME/.kube:/root/.kube \
    -e POPEYE_REPORT_DIR=/tmp/popeye \
    -v /tmp:/tmp \
    derailed/popeye --context foo -n bar --save --output-file my_report.txt

  # Docker has exited, and the container has been deleted, but the file
  # is in your /tmp directory because you mapped it into the container
  $ cat /tmp/popeye/my_report.txt
  <snip>

The Command Line

You can use Popeye standalone or using a spinach yaml config to tune the sanitizer. Details about the Popeye configuration file are below.

  # Dump version info
  popeye version
  # Popeye a cluster using your current kubeconfig environment.
  popeye
  # Popeye uses a spinach config file of course! aka spinachyaml!
  popeye -f spinach.yml
  # Popeye a cluster using a kubeconfig context.
  popeye --context olive
  # Stuck?
  popeye help

Output Formats

Popeye can generate sanitizer reports in a variety of formats. You can use the -o cli option and pick your poison from there.

Format and description (credits in parentheses):

  • standard: the full monty output, iconized and colorized (the default)
  • jurassic: no icons or color, like it's 1979
  • yaml: as YAML
  • html: as HTML
  • json: as JSON
  • junit: for the Java melancholic
  • prometheus: dumps the report as Prometheus scrapable metrics (dardanel)
  • score: returns a single cluster sanitizer score value, 0-100 (kabute)

The SpinachYAML Configuration

A spinach.yml configuration file can be specified via the -f option to further configure the sanitizers. This file may specify the container utilization threshold and specific sanitizer configurations as well as resources that will be excluded from the sanitization.

NOTE: This file will change as Popeye matures!

Under the excludes key you can configure Popeye to skip certain resources, or certain checks by code. Here, resource types are indicated in a group/version/resource notation. Example: to exclude PodDisruptionBudgets, use the notation policy/v1/poddisruptionbudgets. Note that the resource name is written in the plural form and everything is spelled in lowercase. For resources without an API group, the group part is omitted (examples: v1/pods, v1/services, v1/configmaps).

A resource is identified by a resource kind and a fully qualified resource name, i.e. namespace/resource_name.

For example, the FQN of a pod named fred-1234 in the namespace blee will be blee/fred-1234. This provides for differentiating fred/p1 and blee/p1. For cluster wide resources, the FQN is equivalent to the name. Exclude rules can have either a straight string match or a regular expression. In the latter case the regular expression must be indicated using the rx: prefix.
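To make the notation concrete, here is a minimal, hypothetical excludes fragment (it would sit under the popeye: key of a spinach.yml; the names are invented for illustration) showing a straight string match next to an rx: regex match:

```yaml
excludes:
  v1/pods:
    # Straight string match: skips only the pod fred-1234 in namespace blee.
    - name: blee/fred-1234
  v1/namespaces:
    # Regex match (rx: prefix): skips every namespace whose name starts with istio.
    - name: rx:istio
```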

NOTE! Please be careful with your regex as more resources than expected may get excluded from the report with a loose regex rule. When your cluster resources change, this could lead to a sub-optimal sanitization. Once in a while it might be a good idea to run Popeye "configless" to make sure you will recognize any new issues that may have arisen in your clusters...

Here is an example spinach file as it stands in this release. There is a fuller EKS- and AKS-based spinach file in this repo under spinach. (BTW: for newcomers to the project, adding cluster-specific spinach file PRs might be a great way to contribute...)

# A Popeye sample configuration file
popeye:
  # Checks resources against reported metrics usage.
  # If over/under these thresholds a sanitization warning will be issued.
  # Your cluster must run a metrics-server for these to take place!
  allocations:
    cpu:
      underPercUtilization: 200 # Checks if cpu is under allocated by more than 200% at current load.
      overPercUtilization: 50   # Checks if cpu is over allocated by more than 50% at current load.
    memory:
      underPercUtilization: 200 # Checks if mem is under allocated by more than 200% at current load.
      overPercUtilization: 50   # Checks if mem is over allocated by more than 50% usage at current load.

  # Excludes certain resources from Popeye scans
  excludes:
    v1/pods:
      # In the monitoring namespace excludes all probes check on pod's containers.
      - name: rx:monitoring
        codes:
          - 102
      # Excludes all istio-proxy container scans for pods in the icx namespace.
      - name: rx:icx/.*
        containers:
          # Excludes istio init/sidecar container from scan!
          - istio-proxy
          - istio-init
    # ConfigMap sanitizer exclusions...
    v1/configmaps:
      # Excludes key must match the singular form of the resource.
      # For instance this rule will exclude all configmaps named fred.v2.3 and fred.v2.4
      - name: rx:fred.+\.v\d+
    # Namespace sanitizer exclusions...
    v1/namespaces:
      # Exclude all fred* namespaces if the namespaces are not found (404), other error codes will be reported!
      - name: rx:kube
        codes:
          - 404
      # Exclude all istio* namespaces from being scanned.
      - name: rx:istio
    # Completely exclude horizontal pod autoscalers.
    autoscaling/v1/horizontalpodautoscalers:
      - name: rx:.*

  # Configure node resources.
  node:
    # Limits set a cpu/mem threshold in % ie if cpu|mem > limit a lint warning is triggered.
    limits:
      # CPU checks if current CPU utilization on a node is greater than 90%.
      cpu: 90
      # Memory checks if current Memory utilization on a node is greater than 80%.
      memory: 80

  # Configure pod resources
  pod:
    # Restarts check the restarts count and triggers a lint warning if above threshold.
    restarts: 3
    # Check container resource utilization in percent.
    # Issues a lint warning if above these thresholds.
    limits:
      cpu: 80
      memory: 75

  # Configure a list of allowed registries to pull images from
  registries:
    - quay.io
    - docker.io

Popeye In Your Clusters!

Alternatively, Popeye is containerized and can be run directly in your Kubernetes clusters as a one-off or CronJob.

Here is a sample setup, please modify per your needs/wants. The manifests for this are in the k8s directory in this repo.

kubectl apply -f k8s/popeye/ns.yml && kubectl apply -f k8s/popeye
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: popeye
  namespace: popeye
spec:
  schedule: "0 */1 * * *" # Fire off Popeye once an hour
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: popeye
          restartPolicy: Never
          containers:
            - name: popeye
              image: derailed/popeye
              imagePullPolicy: IfNotPresent
              args:
                - -o
                - yaml
                - --force-exit-zero
                - "true"
              resources:
                limits:
                  cpu: 500m
                  memory: 100Mi

The --force-exit-zero should be set to true. Otherwise, the pods will end up in an error state. Note that popeye exits with a non-zero error code if the report has any errors.

Popeye got your RBAC!

In order for Popeye to do his work, the signed-in user must have enough RBAC oomph to get/list the resources mentioned above.

Sample Popeye RBAC Rules (please note that those are subject to change.)

---
# Popeye ServiceAccount.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: popeye
  namespace: popeye

---
# Popeye needs get/list access on the following Kubernetes resources.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: popeye
rules:
- apiGroups: [""]
  resources:
  - configmaps
  - deployments
  - endpoints
  - horizontalpodautoscalers
  - namespaces
  - nodes
  - persistentvolumes
  - persistentvolumeclaims
  - pods
  - secrets
  - serviceaccounts
  - services
  - statefulsets
  verbs: ["get", "list"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources:
  - clusterroles
  - clusterrolebindings
  - roles
  - rolebindings
  verbs: ["get", "list"]
- apiGroups: ["metrics.k8s.io"]
  resources:
  - pods
  - nodes
  verbs: ["get", "list"]

---
# Binds Popeye to this ClusterRole.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: popeye
subjects:
- kind: ServiceAccount
  name: popeye
  namespace: popeye
roleRef:
  kind: ClusterRole
  name: popeye
  apiGroup: rbac.authorization.k8s.io

Screenshots

Cluster D Score

Cluster A Score

Report Morphology

The sanitizer report outputs each resource group scanned and their potential issues. The report is color/emoji coded in term of Sanitizer severity levels:

Level Icon Jurassic Color Description
Ok
βœ…
OK Green Happy!
Info
ο”Š
I BlueGreen FYI
Warn

W Yellow Potential Issue
Error
ο’₯
E Red Action required

The heading section for each scanned Kubernetes resource provides a summary count for each of the categories above.

The Summary section provides a Popeye Score based on the sanitization pass on the given cluster.

Known Issues

This initial drop is brittle. Popeye will most likely blow up when…

  • You're running older versions of Kubernetes. Popeye works best with Kubernetes 1.13+.
  • You don't have enough RBAC oomph to manage your cluster (see RBAC section)

Disclaimer

This is work in progress! If there is enough interest in the Kubernetes community, we will enhance per your recommendations/contributions. Also if you dig this effort, please let us know that too!

ATTA Girls/Boys!

Popeye sits on top of many of open source projects and libraries. Our sincere appreciations to all the OSS contributors that work nights and weekends to make this project a reality!

Contact Info

  1. Email: fernand@imhotep.io
  2. Twitter: @kitesurfer


Popeye - A Kubernetes Cluster Resource Sanitizer

Popeye is a utility that scans live Kubernetes clusters and reports potential issues with deployed resources and configurations. It sanitizes your cluster based on what's deployed, not what's sitting on disk. By scanning your cluster, it detects misconfigurations and helps you ensure that best practices are in place, thus preventing future headaches. It aims at reducing the cognitive overload one faces when operating a Kubernetes cluster in the wild. Furthermore, if your cluster employs a metrics-server, it reports potential resource over/under allocations and attempts to warn you should your cluster run out of capacity.

Popeye is a read-only tool; it does not alter any of your Kubernetes resources in any way!


Installation

Popeye is available on Linux, OSX and Windows platforms.

  • Binaries for Linux, Windows and Mac are available as tarballs in the release page.

  • For OSX/Unix using Homebrew/LinuxBrew

    brew install derailed/popeye/popeye
  • Building from source Popeye was built using go 1.12+. In order to build Popeye from source you must:

    1. Clone the repo

    2. Add the following directive to your go.mod file

      replace (
      github.com/derailed/popeye => MY_POPEYE_CLONED_GIT_REPO
      )
    3. Build and run the executable

      go run main.go

    Quick recipe for the impatient:

    # Clone outside of GOPATH
    git clone https://github.com/derailed/popeye
    cd popeye
    # Build and install
    go install
    # Run
    popeye

PreFlight Checks

  • Popeye uses 256-color terminal mode. On *nix systems, make sure TERM is set accordingly.

    export TERM=xterm-256color

Sanitizers

Popeye scans your cluster for best practices and potential issues. The resources it currently covers are listed below, and more will come soon! We are hoping Kubernetes friends will pitch in to make Popeye even better.

The aim of the sanitizers is to pick up on misconfigurations, i.e. things like port mismatches, dead or unused resources, metrics utilization, probes, container images, RBAC rules, naked resources, etc...

Popeye is not another static analysis tool. It runs against and inspects Kubernetes resources on live clusters, and sanitizes resources as they are in the wild!

Here is a list of some of the available sanitizers:

Resource (aliases) and sanitizers:

Node (no)
- Conditions, ie not ready, out of mem/disk, network, pids, etc
- Pod tolerations referencing node taints
- CPU/MEM utilization metrics, trips if over limits (default 80% CPU/MEM)

Namespace (ns)
- Inactive
- Dead namespaces

Pod (po)
- Pod status
- Container statuses
- ServiceAccount presence
- CPU/MEM on containers over a set CPU/MEM limit (default 80% CPU/MEM)
- Container image with no tags
- Container image using latest tag
- Resources request/limits presence
- Probes liveness/readiness presence
- Named ports and their references

Service (svc)
- Endpoints presence
- Matching pod labels
- Named ports and their references

ServiceAccount (sa)
- Unused, detects potentially unused SAs

Secret (sec)
- Unused, detects potentially unused secrets or associated keys

ConfigMap (cm)
- Unused, detects potentially unused configmaps or associated keys

Deployment (dp, deploy)
- Unused, pod template validation, resource utilization

StatefulSet (sts)
- Unused, pod template validation, resource utilization

DaemonSet (ds)
- Unused, pod template validation, resource utilization

PersistentVolume (pv)
- Unused, checks volume bound or volume error

PersistentVolumeClaim (pvc)
- Unused, checks bound or volume mount error

HorizontalPodAutoscaler (hpa)
- Unused, utilization, max burst checks

PodDisruptionBudget (pdb)
- Unused, checks minAvailable configuration

ClusterRole (cr)
- Unused

ClusterRoleBinding (crb)
- Unused

Role (ro)
- Unused

RoleBinding (rb)
- Unused

Ingress (ing)
- Validity

NetworkPolicy (np)
- Validity

PodSecurityPolicy (psp)
- Validity

You can also see the full list of codes

Save the report

To save the Popeye report to a file, pass the --save flag to the command. By default, Popeye creates a temp directory, stores the report there, and prints the directory's path on STDOUT. If you need to specify the output directory for the report, use the POPEYE_REPORT_DIR environment variable. By default, the output file name follows the format sanitizer_<cluster-name>_<time-UnixNano>.<output-extension> (e.g. "sanitizer-mycluster-1594019782530851873.html"). If you need to specify the output file name, pass the --output-file flag with the desired filename.

Example to save report in working directory:

  $ POPEYE_REPORT_DIR=$(pwd) popeye --save

Example to save report in working directory in HTML format under the name "report.html" :

  $ POPEYE_REPORT_DIR=$(pwd) popeye --save --out html --output-file report.html
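As an illustration, the default naming scheme described above can be sketched as follows. This is a hypothetical helper, not part of Popeye (which is written in Go); it simply mirrors the documented sanitizer_<cluster-name>_<time-UnixNano>.<extension> pattern:

```python
import time

def default_report_name(cluster: str, ext: str) -> str:
    # Mirrors the documented default: sanitizer_<cluster-name>_<time-UnixNano>.<ext>
    unix_nano = int(time.time() * 1_000_000_000)
    return f"sanitizer_{cluster}_{unix_nano}.{ext}"

print(default_report_name("mycluster", "html"))
```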

Save the report to S3

You can also save the generated report to an AWS S3 bucket (or another S3-compatible object storage) by providing the --s3-bucket flag. As its parameter, provide the name of the S3 bucket where you want to store the report. To save the report in a bucket subdirectory, provide the bucket parameter as bucket/path/to/report.

Under the hood, the AWS Go library is used, which handles credential loading. For more information, check out the official documentation.

Example to save report to S3:

popeye --s3-bucket=NAME-OF-YOUR-S3-BUCKET/OPTIONAL/SUBDIRECTORY --out=json

If AWS S3 is not your bag, you can further define an S3-compatible storage (OVHcloud Object Storage, Minio, Google Cloud Storage, etc...) using s3-endpoint and s3-region like so:

popeye --s3-bucket=NAME-OF-YOUR-S3-BUCKET/OPTIONAL/SUBDIRECTORY --s3-region YOUR-REGION --s3-endpoint URL-OF-THE-ENDPOINT

Run public Docker image locally

You don't have to build and/or install the binary to run popeye: you can just run it directly from the official docker repo on DockerHub. The default command when you run the docker container is popeye, so you just need to pass whatever cli args are normally passed to popeye. To access your clusters, map your local kube config directory into the container with -v :

  docker run --rm -it \
    -v $HOME/.kube:/root/.kube \
    derailed/popeye --context foo -n bar

Running the above docker command with --rm means that the container gets deleted when popeye exits. When you use --save, the report is written to /tmp inside the container, and the container is then deleted when popeye exits, which means you lose the output. To get around this, map your local /tmp to the container's /tmp. NOTE: You can override the default output directory location by setting the POPEYE_REPORT_DIR env variable.

  docker run --rm -it \
    -v $HOME/.kube:/root/.kube \
    -e POPEYE_REPORT_DIR=/tmp/popeye \
    -v /tmp:/tmp \
    derailed/popeye --context foo -n bar --save --output-file my_report.txt

# Docker has exited, and the container has been deleted, but the file
# is in your /tmp directory because you mapped it into the container
$ cat /tmp/popeye/my_report.txt
<snip>

The Command Line

You can use Popeye standalone or with a spinach YAML config to tune the sanitizers. Details about the Popeye configuration file are below.

# Dump version info
popeye version
# Popeye a cluster using your current kubeconfig environment.
popeye
# Popeye uses a spinach config file of course! aka spinachyaml!
popeye -f spinach.yml
# Popeye a cluster using a kubeconfig context.
popeye --context olive
# Stuck?
popeye help

Output Formats

Popeye can generate sanitizer reports in a variety of formats. You can use the -o cli option and pick your poison from there.

Format      Description                                              Default  Credits
standard    The full monty output, iconized and colorized            yes
jurassic    No icons or color, like it's 1979
yaml        As YAML
html        As HTML
json        As JSON
junit       For the Java melancholic
prometheus  Dumps the report as Prometheus scrapable metrics                  dardanel
score       Returns a single cluster sanitizer score value (0-100)            kabute

The SpinachYAML Configuration

A spinach.yml configuration file can be specified via the -f option to further configure the sanitizers. This file may specify the container utilization threshold and specific sanitizer configurations as well as resources that will be excluded from the sanitization.

NOTE: This file will change as Popeye matures!

Under the excludes key you can configure Popeye to skip certain resources, or certain checks by code. Resource types are indicated in group/version/resource notation. For example, to exclude PodDisruptionBudgets, use the notation policy/v1/poddisruptionbudgets. Note that the resource name is written in the plural form and everything is spelled in lowercase. For resources without an API group, the group part is omitted (examples: v1/pods, v1/services, v1/configmaps).

A resource is identified by a resource kind and a fully qualified resource name, i.e. namespace/resource_name.

For example, the FQN of a pod named fred-1234 in the namespace blee will be blee/fred-1234. This provides for differentiating fred/p1 and blee/p1. For cluster wide resources, the FQN is equivalent to the name. Exclude rules can have either a straight string match or a regular expression. In the latter case the regular expression must be indicated using the rx: prefix.

NOTE! Please be careful with your regexes, as a loose rule may exclude more resources than expected from the report. When your cluster resources change, this could lead to a sub-optimal sanitization. Once in a while it might be a good idea to run Popeye "configless" to make sure you will recognize any new issues that may have arisen in your clusters…
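To make the matching rules concrete, here is a small Python sketch of how a straight rule and an rx: rule differ, and how a loose regex can over-match. This is an illustration only: Popeye itself is written in Go, and the exact regex anchoring assumed here (match from the start of the FQN) is an assumption, not Popeye's actual implementation:

```python
import re

def rule_matches(rule: str, fqn: str) -> bool:
    # Rules prefixed with rx: are treated as regular expressions;
    # anything else must match the resource FQN exactly.
    if rule.startswith("rx:"):
        return re.match(rule[3:], fqn) is not None
    return rule == fqn

# A straight rule hits only the exact FQN.
print(rule_matches("blee/fred-1234", "blee/fred-1234"))  # True
print(rule_matches("blee/fred-1234", "fred/fred-1234"))  # False

# A loose regex such as rx:kube matches more than you might expect.
print(rule_matches("rx:kube", "kube-system"))  # True
print(rule_matches("rx:kube", "kubecost"))     # True: probably not intended!
```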

Here is an example spinach file as it stands in this release. There are fuller eks- and aks-based spinach files in this repo under spinach. (BTW: for newcomers to the project, adding cluster-specific spinach file PRs might be a great way to contribute...)

# A Popeye sample configuration file
popeye:
  # Checks resources against reported metrics usage.
  # If over/under these thresholds a sanitization warning will be issued.
  # Your cluster must run a metrics-server for these to take place!
  allocations:
    cpu:
      underPercUtilization: 200 # Checks if cpu is under allocated by more than 200% at current load.
      overPercUtilization: 50   # Checks if cpu is over allocated by more than 50% at current load.
    memory:
      underPercUtilization: 200 # Checks if mem is under allocated by more than 200% at current load.
      overPercUtilization: 50   # Checks if mem is over allocated by more than 50% usage at current load.

  # Excludes certain resources from Popeye scans.
  excludes:
    v1/pods:
      # In the monitoring namespace, excludes all probes checks on pods' containers.
      - name: rx:monitoring
        codes:
          - 102
      # Excludes all istio-proxy container scans for pods in the icx namespace.
      - name: rx:icx/.*
        containers:
          # Excludes istio init/sidecar containers from scans!
          - istio-proxy
          - istio-init
    # ConfigMap sanitizer exclusions...
    v1/configmaps:
      # Excludes key must match the singular form of the resource.
      # For instance, this rule will exclude all configmaps named fred.v2.3 and fred.v2.4.
      - name: rx:fred.+\.v\d+
    # Namespace sanitizer exclusions...
    v1/namespaces:
      # Exclude all kube* namespaces if the namespaces are not found (404); other error codes will be reported!
      - name: rx:kube
        codes:
          - 404
      # Exclude all istio* namespaces from being scanned.
      - name: rx:istio
    # Completely exclude horizontal pod autoscalers.
    autoscaling/v1/horizontalpodautoscalers:
      - name: rx:.*

  # Configure node resources.
  node:
    # Limits set a cpu/mem threshold in %, ie if cpu|mem > limit a lint warning is triggered.
    limits:
      # CPU checks if current CPU utilization on a node is greater than 90%.
      cpu: 90
      # Memory checks if current memory utilization on a node is greater than 80%.
      memory: 80

  # Configure pod resources.
  pod:
    # Restarts checks the restart count and triggers a lint warning if above threshold.
    restarts: 3
    # Check container resource utilization in percent.
    # Issues a lint warning if above these thresholds.
    limits:
      cpu: 80
      memory: 75

  # Configure a list of allowed registries to pull images from.
  registries:
    - quay.io
    - docker.io

Popeye In Your Clusters!

Alternatively, Popeye is containerized and can be run directly in your Kubernetes clusters as a one-off or CronJob.

Here is a sample setup; please modify it per your needs/wants. The manifests for this are in the k8s directory in this repo.

kubectl apply -f k8s/popeye/ns.yml && kubectl apply -f k8s/popeye
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: popeye
  namespace: popeye
spec:
  schedule: "0 * * * *" # Fire off Popeye once an hour
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: popeye
          restartPolicy: Never
          containers:
            - name: popeye
              image: derailed/popeye
              imagePullPolicy: IfNotPresent
              args:
                - -o
                - yaml
                - --force-exit-zero=true
              resources:
                limits:
                  cpu: 500m
                  memory: 100Mi

The --force-exit-zero flag should be set to true; otherwise the pods will end up in an error state, since Popeye exits with a non-zero code whenever the report contains errors.
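The interaction between report errors, --force-exit-zero, and the CronJob pod status can be sketched as follows. This is a toy Python model of the documented behavior, not Popeye's actual (Go) code:

```python
def popeye_exit_code(report_has_errors: bool, force_exit_zero: bool) -> int:
    # Per the docs: Popeye exits non-zero when the report contains errors,
    # unless --force-exit-zero is set.
    if force_exit_zero:
        return 0
    return 1 if report_has_errors else 0

# Without the flag, an error-laden report fails the CronJob pod...
print(popeye_exit_code(True, False))  # 1
# ...with the flag, the pod completes cleanly even when errors are reported.
print(popeye_exit_code(True, True))   # 0
```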

Popeye got your RBAC!

In order for Popeye to do his work, the signed-in user must have enough RBAC oomph to get/list the resources mentioned above.

Sample Popeye RBAC rules (please note that these are subject to change):

---
# Popeye ServiceAccount.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: popeye
  namespace: popeye

---
# Popeye needs get/list access on the following Kubernetes resources.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: popeye
rules:
  - apiGroups: [""]
    resources:
      - configmaps
      - endpoints
      - namespaces
      - nodes
      - persistentvolumes
      - persistentvolumeclaims
      - pods
      - secrets
      - serviceaccounts
      - services
    verbs: ["get", "list"]
  - apiGroups: ["apps"]
    resources:
      - deployments
      - statefulsets
    verbs: ["get", "list"]
  - apiGroups: ["autoscaling"]
    resources:
      - horizontalpodautoscalers
    verbs: ["get", "list"]
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources:
      - clusterroles
      - clusterrolebindings
      - roles
      - rolebindings
    verbs: ["get", "list"]
  - apiGroups: ["metrics.k8s.io"]
    resources:
      - pods
      - nodes
    verbs: ["get", "list"]

---
# Binds Popeye to this ClusterRole.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: popeye
subjects:
  - kind: ServiceAccount
    name: popeye
    namespace: popeye
roleRef:
  kind: ClusterRole
  name: popeye
  apiGroup: rbac.authorization.k8s.io

Screenshots

Cluster D Score

Cluster A Score

Report Morphology

The sanitizer report outputs each resource group scanned and their potential issues. The report is color/emoji coded in terms of the sanitizer severity levels:

Level  Jurassic  Color      Description
Ok     OK        Green      Happy!
Info   I         BlueGreen  FYI
Warn   W         Yellow     Potential Issue
Error  E         Red        Action required

The heading section for each scanned Kubernetes resource provides a summary count for each of the categories above.

The Summary section provides a Popeye Score based on the sanitization pass on the given cluster.

Known Issues

This initial drop is brittle. Popeye will most likely blow up when…

  • You're running older versions of Kubernetes. Popeye works best with Kubernetes 1.13+.
  • You don't have enough RBAC oomph to manage your cluster (see RBAC section)

Disclaimer

This is work in progress! If there is enough interest in the Kubernetes community, we will enhance per your recommendations/contributions. Also if you dig this effort, please let us know that too!

ATTA Girls/Boys!

Popeye sits on top of many open source projects and libraries. Our sincere appreciation to all the OSS contributors who work nights and weekends to make this project a reality!

Contact Info

  1. Email: fernand@imhotep.io
  2. Twitter: @kitesurfer

