The Portuguese Air Force has formally signed a contract with ICEYE for the direct acquisition of a Synthetic Aperture Radar (SAR) satellite, marking the first time the service will fully own and control a space-based intelligence asset. The announcement, made jointly by ICEYE and the Portuguese Air Force, represents a major step in expanding Portugal's […]
If you need some motivation to keep from eating too much this Thanksgiving, here it is: Doctors in Romania pulled an 11 cm (4.3 inch) living, writhing round worm from a woman's left eyelid.
According to a report in the New England Journal of Medicine, the worm likely hatched from a hard lump in her right temple, which the woman recalled first spotting a month beforehand. She also noticed that the nodule had vanished just a day before the worm apparently made a squiggly run for her eye.
When she went to an ophthalmologist the next day, doctors immediately noted the "mobile lesion" on her eyelid, which was in the suspicious shape of a bunched-up worm just under her skin with a little redness and swelling.
This 3-pound tomahawk ribeye went straight on the pellet grill: no reverse sear, no cast iron finish. Just steady smoke and low heat all the way through. I figured it might miss that crust, but the right layer of seasoning and patience built up the perfect color. The result? Juicy, tender, medium-rare perfection with a buttery jus from the rest. Proof you can cook a tomahawk start to finish on a pellet grill and still pull off a true steakhouse-quality ribeye.
Smoked tomahawk ribeye cooked start-to-finish on the pellet grill. No sear, just low heat, rich smoke, and juicy medium-rare steakhouse flavor every time.
Ingredients
1 Tomahawk Ribeye Steak (about 3 lbs)
2 Tbsp Worcestershire sauce
2 Tbsp Willingham's W'ham Original Mild Seasoning
1-2 Tbsp Killer Hogs AP Seasoning (salt, pepper, garlic)
Killer Hogs Steak Rub, to coat
Butter, a few pats for resting
Trim the Steak
Lightly trim any big pockets of fat around the edges and clean up the bone for presentation.
Season Generously
Rub the steak all over with Worcestershire sauce as a binder.
Apply a medium coat of Wβham Original Mild Seasoning for color and base flavor.
Add a layer of Killer Hogs AP Seasoning to build that salt, pepper, garlic profile.
Finish with a layer of Killer Hogs Steak Rub for texture and crust.
Rest & Fire Up the Grill
Let the steak sit out while you fire up your pellet grill to 250°F. This gives the seasonings time to melt in and lets the steak come up to room temperature before cooking.
Smoke the Tomahawk
Place the steak on the pellet grill and insert a probe thermometer into the thickest part.
Set your first target internal temp to 120°F.
Once it hits 120, reset your probe to 128°F and monitor closely.
Rest with Butter
Place pats of butter and a light sprinkle of Steak Rub on a platter.
Remove the steak from the grill at 128°F and place it directly on the butter.
Tent loosely with foil and rest for 10 minutes.
Slice & Serve
Remove the steak from the bone, then slice across the grain. The melted butter and drippings combine to make a rich, flavorful sauceβspoon that over the slices before serving.
That's how you nail a tomahawk on the pellet grill: low heat, good smoke, and plenty of patience. Slice it up, drizzle that buttery rest over the top, and you've got steakhouse flavor right in your backyard.
This is how I grill a ribeye steak. Sometimes I change out the flavors and seasonings... and you can too... but this is my tried and true technique I use for getting a perfectly juicy ribeye, with all the flavor!
A good ribeye steak deserves the right treatment, and I'm showing you exactly how to grill it up right! We're seasoning it with a bold rub, cooking it over red-hot coals for a killer crust, and making sure it's perfectly juicy from edge to edge.
Whether you like it medium-rare or a little more done, this method locks in all the flavor and gives you steakhouse-quality results every time!
Popeye is a utility that scans live Kubernetes clusters and reports potential issues with deployed resources and configurations. It sanitizes your cluster based on what's deployed, not what's sitting on disk. By scanning your cluster, it detects misconfigurations and helps you ensure that best practices are in place, thus preventing future headaches. It aims at reducing the cognitive overload of operating a Kubernetes cluster in the wild. Furthermore, if your cluster employs a metrics-server, it reports potential resource over/under allocations and attempts to warn you should your cluster run out of capacity.
Popeye is a read-only tool; it does not alter any of your Kubernetes resources in any way!
Installation
Popeye is available on Linux, OSX and Windows platforms.
Binaries for Linux, Windows and macOS are available as tarballs on the releases page.
For OSX/Linux using Homebrew/LinuxBrew
brew install derailed/popeye/popeye
Building from source
Popeye was built using go 1.12+. In order to build Popeye from source you must:
# Clone outside of GOPATH
git clone https://github.com/derailed/popeye
cd popeye
# Build and install
go install
# Run popeye
popeye
PreFlight Checks
Popeye uses the 256-color terminal mode. On *Nix systems, make sure TERM is set accordingly.
export TERM=xterm-256color
Sanitizers
Popeye scans your cluster for best practices and potential issues. Currently, Popeye only looks at nodes, namespaces, pods and services. More will come soon! We are hoping Kubernetes friends will pitch in to make Popeye even better.
The aim of the sanitizers is to pick up on misconfigurations, i.e. things like port mismatches, dead or unused resources, metrics utilization, probes, container images, RBAC rules, naked resources, etc...
Popeye is not another static analysis tool. It runs against live clusters, inspecting and sanitizing Kubernetes resources as they are in the wild!
Here is a list of some of the available sanitizers:
Node (alias: no)
- Conditions, i.e. not ready, out of mem/disk, network, pids, etc.
- Pod tolerations referencing node taints
- CPU/MEM utilization metrics; trips if over limits (default 80% CPU/MEM)

Namespace (alias: ns)
- Inactive
- Dead namespaces

Pod (alias: po)
- Pod status
- Container statuses
- ServiceAccount presence
- CPU/MEM on containers over a set CPU/MEM limit (default 80% CPU/MEM)
- Container image with no tags
- Container image using latest tag
- Resources request/limits presence
- Probes liveness/readiness presence
- Named ports and their references

Service (alias: svc)
- Endpoints presence
- Matching pods labels
- Named ports and their references

ServiceAccount (alias: sa)
- Unused; detects potentially unused SAs

Secret (alias: sec)
- Unused; detects potentially unused secrets or associated keys

ConfigMap (alias: cm)
- Unused; detects potentially unused ConfigMaps or associated keys

Deployment (aliases: dp, deploy)
- Unused; pod template validation; resource utilization

StatefulSet (alias: sts)
- Unused; pod template validation; resource utilization

DaemonSet (alias: ds)
- Unused; pod template validation; resource utilization
To save the Popeye report to a file, pass the --save flag to the command. By default, Popeye creates a temp directory, stores the report there, and prints the path of that directory to STDOUT. If you need to specify the output directory for the report, use the POPEYE_REPORT_DIR environment variable. By default, the output file name follows the format sanitizer_<cluster-name>_<time-UnixNano>.<output-extension> (e.g. "sanitizer-mycluster-1594019782530851873.html"). If you need to specify the output file name, pass the --output-file flag with the desired filename.
Example to save the report in the working directory:
$ POPEYE_REPORT_DIR=$(pwd) popeye --save
Example to save the report in the working directory in HTML format under the name "report.html":
$ POPEYE_REPORT_DIR=$(pwd) popeye --save --out html --output-file report.html
Save the report to S3
You can also save the generated report to an AWS S3 bucket (or other S3-compatible object storage) by providing the --s3-bucket flag with the name of the S3 bucket where you want to store the report. To save the report in a bucket subdirectory, provide the bucket parameter as bucket/path/to/report.
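For instance, a minimal invocation might look like this (the bucket name and path below are placeholders, not real defaults):

```shell
# Save the report and upload it to an S3 bucket, under a subdirectory.
# "my-bucket/popeye/reports" is a placeholder -- substitute your own bucket/path.
popeye --save --s3-bucket my-bucket/popeye/reports
```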
Under the hood, the AWS Go library is used, and it handles the credential loading. For more information, check out the official documentation.
If AWS S3 is not your bag, you can instead target an S3-compatible storage (OVHcloud Object Storage, Minio, Google Cloud Storage, etc.) using s3-endpoint and s3-region, like so:
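Sketching that out, the endpoint, region, and bucket below are illustrative values only:

```shell
# Target an S3-compatible store (e.g. Minio) instead of AWS S3.
# Endpoint, region, and bucket are placeholders -- adjust to your provider.
popeye --save --s3-bucket my-bucket/popeye/reports \
       --s3-endpoint https://storage.example.com \
       --s3-region my-region
```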
You don't have to build or install the binary to run popeye: you can run it directly from the official docker repo on DockerHub. The default command when you run the docker container is popeye, so you just need to pass whatever CLI args are normally passed to popeye. To access your clusters, map your local kube config directory into the container with -v:
docker run --rm -it \
  -v $HOME/.kube:/root/.kube \
  derailed/popeye --context foo -n bar
Running the above docker command with --rm means that the container gets deleted when popeye exits. When you use --save, the report is written to /tmp inside the container, which is then deleted along with the container when popeye exits, so you lose the output. To get around this, map your local /tmp to the container's /tmp. NOTE: You can override the default output directory location by setting the POPEYE_REPORT_DIR env variable.
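For example, a run that keeps the saved report on the host might look like this (the context and namespace are the placeholder values from the example above):

```shell
# Map the host's /tmp into the container so the report written by --save
# survives the container's deletion (--rm).
docker run --rm -it \
  -v $HOME/.kube:/root/.kube \
  -v /tmp:/tmp \
  derailed/popeye --save --context foo -n bar
```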
# Docker has exited, and the container has been deleted, but the file
# is in your /tmp directory because you mapped it into the container
$ cat /tmp/popeye/my_report.txt
<snip>
The Command Line
You can use Popeye standalone or with a spinach YAML config to tune the sanitizer. Details about the Popeye configuration file are below.
# Dump version info
popeye version
# Popeye a cluster using your current kubeconfig environment.
popeye
# Popeye uses a spinach config file of course! aka spinachyaml!
popeye -f spinach.yml
# Popeye a cluster using a kubeconfig context.
popeye --context olive
# Stuck?
popeye help
Output Formats
Popeye can generate sanitizer reports in a variety of formats. Use the -o CLI option and pick your poison from there.
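For instance, assuming JSON is among the supported formats (HTML appears in the save examples above):

```shell
# Emit the sanitizer report as JSON instead of the default terminal output.
popeye -o json
```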
A spinach.yml configuration file can be specified via the -f option to further configure the sanitizers. This file may specify the container utilization threshold and specific sanitizer configurations as well as resources that will be excluded from the sanitization.
NOTE: This file will change as Popeye matures!
Under the excludes key you can configure Popeye to skip certain resources, or certain checks by code. Here, resource types are indicated in group/version/resource notation. For example, to exclude PodDisruptionBudgets, use the notation policy/v1/poddisruptionbudgets. Note that the resource name is written in the plural form and everything is spelled in lowercase. For resources without an API group, the group part is omitted (examples: v1/pods, v1/services, v1/configmaps).
A resource is identified by a resource kind and a fully qualified resource name, i.e. namespace/resource_name.
For example, the FQN of a pod named fred-1234 in the namespace blee will be blee/fred-1234. This provides for differentiating fred/p1 and blee/p1. For cluster wide resources, the FQN is equivalent to the name. Exclude rules can have either a straight string match or a regular expression. In the latter case the regular expression must be indicated using the rx: prefix.
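As a sketch, reusing the names from above, a straight string match and a regex match could look like this in a spinach file:

```yaml
excludes:
  v1/pods:
    # Straight string match on the FQN namespace/resource_name.
    - name: blee/fred-1234
    # Regular expression match; note the rx: prefix.
    - name: rx:blee/fred-.*
```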
NOTE! Please be careful with your regex, as more resources than expected may get excluded from the report with a loose regex rule. When your cluster resources change, this could lead to a sub-optimal sanitization. Once in a while it might be a good idea to run Popeye "configless" to make sure you will recognize any new issues that may have arisen in your clusters...
Here is an example spinach file as it stands in this release. There are fuller eks- and aks-based spinach files in this repo under spinach. (BTW: for newcomers to the project, this might be a great way to contribute, by adding cluster-specific spinach file PRs...)
# A Popeye sample configuration file
popeye:
  # Checks resources against reported metrics usage.
  # If over/under these thresholds a sanitization warning will be issued.
  # Your cluster must run a metrics-server for these to take place!
  allocations:
    cpu:
      underPercUtilization: 200 # Checks if cpu is under allocated by more than 200% at current load.
      overPercUtilization: 50   # Checks if cpu is over allocated by more than 50% at current load.
    memory:
      underPercUtilization: 200 # Checks if mem is under allocated by more than 200% at current load.
      overPercUtilization: 50   # Checks if mem is over allocated by more than 50% usage at current load.

  # Excludes excludes certain resources from Popeye scans
  excludes:
    v1/pods:
      # In the monitoring namespace excludes all probes check on pod's containers.
      - name: rx:monitoring
        codes:
          - 102
      # Excludes all istio-proxy container scans for pods in the icx namespace.
      - name: rx:icx/.*
        containers:
          # Excludes istio init/sidecar container from scan!
          - istio-proxy
          - istio-init
    # ConfigMap sanitizer exclusions...
    v1/configmaps:
      # Excludes key must match the singular form of the resource.
      # For instance this rule will exclude all configmaps named fred.v2.3 and fred.v2.4
      - name: rx:fred.+\.v\d+
    # Namespace sanitizer exclusions...
    v1/namespaces:
      # Exclude all kube* namespaces if the namespaces are not found (404); other error codes will be reported!
      - name: rx:kube
        codes:
          - 404
      # Exclude all istio* namespaces from being scanned.
      - name: rx:istio
    # Completely exclude horizontal pod autoscalers.
    autoscaling/v1/horizontalpodautoscalers:
      - name: rx:.*

  # Configure node resources.
  node:
    # Limits set a cpu/mem threshold in %, i.e. if cpu|mem > limit a lint warning is triggered.
    limits:
      # CPU checks if current CPU utilization on a node is greater than 90%.
      cpu: 90
      # Memory checks if current Memory utilization on a node is greater than 80%.
      memory: 80

  # Configure pod resources
  pod:
    # Restarts check the restarts count and triggers a lint warning if above threshold.
    restarts: 3
    # Check container resource utilization in percent.
    # Issues a lint warning if above these thresholds.
    limits:
      cpu: 80
      memory: 75

  # Configure a list of allowed registries to pull images from
  registries:
    - quay.io
    - docker.io
Popeye In Your Clusters!
Alternatively, Popeye is containerized and can be run directly in your Kubernetes clusters as a one-off or CronJob.
Here is a sample setup, please modify per your needs/wants. The manifests for this are in the k8s directory in this repo.
The --force-exit-zero flag should be set to true; otherwise, the pods will end up in an error state, since popeye exits with a non-zero code if the report has any errors.
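A minimal CronJob along these lines might look like the sketch below; the schedule, namespace, and service account name are assumptions, so prefer the manifests shipped in the k8s directory:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: popeye
  namespace: popeye          # assumed namespace
spec:
  schedule: "0 * * * *"      # assumed: run a scan hourly
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: popeye   # assumed SA; needs get/list RBAC (see RBAC section)
          restartPolicy: Never
          containers:
            - name: popeye
              image: derailed/popeye
              args:
                - --force-exit-zero   # keep findings from flipping the pod to Error
```

On clusters older than Kubernetes 1.21, the CronJob API group is batch/v1beta1 instead of batch/v1.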
Popeye got your RBAC!
In order for Popeye to do his work, the signed-in user must have enough RBAC oomph to get/list the resources mentioned above.
Sample Popeye RBAC rules (please note that these are subject to change):
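As an illustration only (the resource list below is derived from the sanitizers described above and may not match the rules shipped with the project):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: popeye
rules:
  # Core resources covered by the sanitizers.
  - apiGroups: [""]
    resources:
      - configmaps
      - endpoints
      - namespaces
      - nodes
      - pods
      - secrets
      - serviceaccounts
      - services
    verbs: ["get", "list"]
  # Workload controllers.
  - apiGroups: ["apps"]
    resources:
      - daemonsets
      - deployments
      - statefulsets
    verbs: ["get", "list"]
```

Bind the role to the service account running Popeye with a matching ClusterRoleBinding.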
The sanitizer report outputs each resource group scanned and their potential issues. The report is color/emoji coded in terms of sanitizer severity levels:

Level   Jurassic   Color       Description
Ok      OK         Green       Happy!
Info    I          BlueGreen   FYI
Warn    W          Yellow      Potential Issue
Error   E          Red         Action required
The heading section for each scanned Kubernetes resource provides a summary count for each of the categories above.
The Summary section provides a Popeye Score based on the sanitization pass on the given cluster.
Known Issues
This initial drop is brittle. Popeye will most likely blow up when...
You're running older versions of Kubernetes. Popeye works best with Kubernetes 1.13+.
You don't have enough RBAC oomph to manage your cluster (see RBAC section)
Disclaimer
This is work in progress! If there is enough interest in the Kubernetes community, we will enhance per your recommendations/contributions. Also if you dig this effort, please let us know that too!
ATTA Girls/Boys!
Popeye sits on top of many open source projects and libraries. Our sincere appreciation to all the OSS contributors who work nights and weekends to make this project a reality!
One of the most active and notorious data-stealing ransomware groups, Maze, says it is "officially closed."
The announcement came as a waffling statement, riddled with spelling mistakes and published on its website on the dark web, which for the past year has published vast troves of stolen internal documents and files from the companies it targeted, including Cognizant, cybersecurity insurance firm Chubb, pharmaceutical giant ExecuPharm, Tesla and SpaceX parts supplier Visser and defense contractor Kimchuk.
Where typical ransomware groups would infect a victim with file-encrypting malware and hold the files for a ransom, Maze gained notoriety for first exfiltrating a victim's data and threatening to publish the stolen files unless the ransom was paid.
It quickly became the preferred tactic of ransomware groups, which set up websites, often on the dark web, to leak the files they stole if the victim refused to pay up.
Maze initially used exploit kits and spam campaigns to infect its victims, but later began using known security vulnerabilities to specifically target big-name companies. Maze was known to use vulnerable virtual private network (VPN) and remote desktop (RDP) servers to launch targeted attacks against its victims' networks.
Some of the demanded ransoms reached into the millions of dollars. Maze reportedly demanded $6 million from one Georgia-based wire and cable manufacturer, and $15 million from one unnamed organization after the group encrypted its network. But after COVID-19 was declared a pandemic in March, Maze, as well as other ransomware groups, promised not to target hospitals and medical facilities.
But security experts arenβt celebrating just yet. After all, ransomware gangs are still criminal enterprises, many of which are driven by profit.
A statement by the Maze ransomware group, claiming it has shut down. Screenshot: TechCrunch
"Obviously, Maze's claims should be taken with a very, very small pinch of salt," said Brett Callow, a ransomware expert and threat analyst at security firm Emsisoft. "It's certainly possible that the group feels they have made enough money to be able to close shop and sail off into the sunset. However, it's also possible, and probably more likely, that they've decided to rebrand."
Callow said the group's apparent disbanding leaves open questions about the Maze group's connections and involvement with other groups. "As Maze was an affiliate operation, their partners in crime are unlikely to retire and will instead simply align themselves with another group," he said.
Maze denied that it was a "cartel" of ransomware groups in its statement, but experts disagree. Steve Ragan, a security researcher at Akamai, said Maze was known to post on its website data from other ransomware, like Ragnar Locker and the LockBit ransomware-for-hire.
"For them to pretend now that there was no team-up or cartel is just plain backwards. Clearly these groups were working together on many levels," said Ragan.
"The downside to this, and the other significant element, is that nothing will change. Ransomware is still going to be out there," said Ragan. "Criminals are still targeting open access, exposed RDP [remote desktop protocol] and VPN portals, and still sending malicious emails with malicious attachments in the hope of infecting unsuspecting victims on the internet," he said.
Jeremy Kennelly at FireEye's Mandiant threat intelligence unit said that while the Maze brand may be dead, its operators are likely not gone for good.
"We assess with high confidence that many of the individuals and groups that collaborated to enable the Maze ransomware service will likely continue to engage in similar operations, either working to support existing ransomware services or supporting novel operations in the future," said Kennelly.