Four-inch worm hatches in woman's forehead, wriggles to her eyelid
If you need some motivation to keep from eating too much this Thanksgiving, here it is: Doctors in Romania pulled an 11 cm (4.3-inch) living, writhing roundworm from a woman's left eyelid.
According to a report in the New England Journal of Medicine, the worm likely hatched from a hard lump in her right temple, which the woman recalled first spotting a month beforehand. She also noticed that the nodule had vanished just a day before the worm apparently made a squiggly run for her eye.
When she went to an ophthalmologist the next day, doctors immediately noted the "mobile lesion" on her eyelid, which was in the suspicious shape of a bunched-up worm just under her skin, with a little redness and swelling.


Smoked Tomahawk Ribeye on the Pellet Grill
This 3-pound tomahawk ribeye went straight on the pellet grill: no reverse sear, no cast iron finish. Just steady smoke and low heat all the way through. I figured it might miss that crust, but the right layer of seasoning and patience built up the perfect color. The result? Juicy, tender, medium-rare perfection with a buttery jus from the rest. Proof you can cook a tomahawk start to finish on a pellet grill and still pull off a true steakhouse-quality ribeye.
WHAT MALCOM USED IN THIS RECIPE:
- Disposable BBQ Boards
- 5" Flexible Curved Boning Knife
- Willingham's W'ham Seasoning
- Killer Hogs AP Seasoning
- Killer Hogs Steak Seasoning
- Thermoworks DOT
- BBQ Gloves
Smoked Tomahawk Ribeye On The Pellet Grill Recipe
- Author: Malcom Reed
Description
Smoked tomahawk ribeye cooked start-to-finish on the pellet grill. No sear, just low heat, rich smoke, and juicy medium-rare steakhouse flavor every time.
Ingredients
- 1 Tomahawk Ribeye Steak (about 3 lbs)
- 2 Tbsp Worcestershire sauce
- 2 Tbsp Willingham's W'ham Original Mild Seasoning
- 1–2 Tbsp Killer Hogs AP Seasoning (salt, pepper, garlic)
- 1–2 Tbsp Killer Hogs Steak Rub (for texture & color)
- ½ stick (4 Tbsp) unsalted butter, cut into pats
Instructions
- Trim the Steak
Lightly trim any big pockets of fat around the edges and clean up the bone for presentation.
- Season Generously
Rub the steak all over with Worcestershire sauce as a binder.
Apply a medium coat of W'ham Original Mild Seasoning for color and base flavor.
Add a layer of Killer Hogs AP Seasoning to build that salt, pepper, garlic profile.
Finish with a layer of Killer Hogs Steak Rub for texture and crust.
- Rest & Fire Up the Grill
Let the steak sit out while you fire up your pellet grill to 250°F. This gives the seasonings time to melt in and lets the steak come up to room temperature before cooking.
- Smoke the Tomahawk
Place the steak on the pellet grill and insert a probe thermometer into the thickest part.
Set your first target internal temp to 120°F.
Once it hits 120°F, reset your probe to 128°F and monitor closely.
- Rest with Butter
Place pats of butter and a light sprinkle of Steak Rub on a platter.
Remove the steak from the grill at 128°F and place it directly on the butter.
Tent loosely with foil and rest for 10 minutes.
- Slice & Serve
Remove the steak from the bone, then slice across the grain. The melted butter and drippings combine to make a rich, flavorful sauce; spoon that over the slices before serving.
That's how you nail a tomahawk on the pellet grill: low heat, good smoke, and plenty of patience. Slice it up, drizzle that buttery rest over the top, and you've got steakhouse flavor right in your backyard.
Malcom Reed
The post Smoked Tomahawk Ribeye on the Pellet Grill appeared first on HowToBBQRight.
How To Grill a Ribeye Steak
Grilled Ribeye Steak
This is how I grill a ribeye steak. Sometimes I change out the flavors and seasonings… and you can too… but this is my tried and true technique I use for getting a perfectly juicy ribeye, with all the flavor!
WHAT MALCOM USED IN THIS RECIPE
A good ribeye steak deserves the right treatment, and I'm showing you exactly how to grill it up right! We're seasoning it with a bold rub, cooking it over red-hot coals for a killer crust, and making sure it's perfectly juicy from edge to edge.
Whether you like it medium-rare or a little more done, this method locks in all the flavor and gives you steakhouse-quality results every time!
Malcom Reed
The post How To Grill a Ribeye Steak appeared first on HowToBBQRight.
Popeye - A Kubernetes Cluster Resource Sanitizer
Popeye is a utility that scans live Kubernetes clusters and reports potential issues with deployed resources and configurations. It sanitizes your cluster based on what's deployed, not what's sitting on disk. By scanning your cluster, it detects misconfigurations and helps you ensure that best practices are in place, preventing future headaches. It aims to reduce the cognitive overload of operating a Kubernetes cluster in the wild. Furthermore, if your cluster employs a metrics-server, it reports potential resource over/under allocations and attempts to warn you should your cluster run out of capacity.
Popeye is a read-only tool; it does not alter any of your Kubernetes resources in any way!
Installation
Popeye is available on Linux, OSX and Windows platforms.

- Binaries for Linux, Windows and Mac are available as tarballs on the release page.

- For OSX/Unix using Homebrew/LinuxBrew:

  brew install derailed/popeye/popeye

- Building from source: Popeye was built using Go 1.12+. In order to build Popeye from source you must:

  - Clone the repo
  - Add the following directive to your go.mod file:

    replace (
      github.com/derailed/popeye => MY_POPEYE_CLONED_GIT_REPO
    )

  - Build and run the executable:

    go run main.go

  Quick recipe for the impatient:

  # Clone outside of GOPATH
  git clone https://github.com/derailed/popeye
  cd popeye
  # Build and install
  go install
  # Run
  popeye

PreFlight Checks

- Popeye uses 256-color terminal mode. On *Nix systems make sure TERM is set accordingly.

  export TERM=xterm-256color
Sanitizers
Popeye scans your cluster for best practices and potential issues. Currently, Popeye only looks at nodes, namespaces, pods and services. More will come soon! We are hoping Kubernetes friends will pitch in to make Popeye even better.
The aim of the sanitizers is to pick up on misconfigurations, e.g. things like port mismatches, dead or unused resources, metrics utilization, probes, container images, RBAC rules, naked resources, etc...
Popeye is not another static analysis tool. It runs and inspects Kubernetes resources on live clusters and sanitizes them as they are in the wild!
Here is a list of some of the available sanitizers:
| Resource | Sanitizers | Aliases |
|---|---|---|
| Node | Conditions, ie not ready, out of mem/disk, network, pids, etc; Pod tolerations referencing node taints; CPU/MEM utilization metrics, trips if over limits (default 80% CPU/MEM) | no |
| Namespace | Inactive; Dead namespaces | ns |
| Pod | Pod status; Container statuses; ServiceAccount presence; CPU/MEM on containers over a set CPU/MEM limit (default 80% CPU/MEM); Container image with no tags; Container image using latest tag; Resources request/limits presence; Probes liveness/readiness presence; Named ports and their references | po |
| Service | Endpoints presence; Matching pods labels; Named ports and their references | svc |
| ServiceAccount | Unused, detects potentially unused SAs | sa |
| Secrets | Unused, detects potentially unused secrets or associated keys | sec |
| ConfigMap | Unused, detects potentially unused cm or associated keys | cm |
| Deployment | Unused, pod template validation, resource utilization | dp, deploy |
| StatefulSet | Unused, pod template validation, resource utilization | sts |
| DaemonSet | Unused, pod template validation, resource utilization | ds |
| PersistentVolume | Unused, check volume bound or volume error | pv |
| PersistentVolumeClaim | Unused, check bound or volume mount error | pvc |
| HorizontalPodAutoscaler | Unused, Utilization, Max burst checks | hpa |
| PodDisruptionBudget | Unused, Check minAvailable configuration | pdb |
| ClusterRole | Unused | cr |
| ClusterRoleBinding | Unused | crb |
| Role | Unused | ro |
| RoleBinding | Unused | rb |
| Ingress | Valid | ing |
| NetworkPolicy | Valid | np |
| PodSecurityPolicy | Valid | psp |
You can also see the full list of codes
Save the report
To save the Popeye report to a file, pass the --save flag to the command. By default it will create a temp directory and store the report there; the path of the temp directory is printed to STDOUT. If you need to specify the output directory for the report, use the POPEYE_REPORT_DIR environment variable. By default, the name of the output file follows the format sanitizer_<cluster-name>_<time-UnixNano>.<output-extension> (e.g. "sanitizer-mycluster-1594019782530851873.html"). If you need to specify the output file name for the report, pass the --output-file flag with the desired filename.
Example to save report in working directory:
$ POPEYE_REPORT_DIR=$(pwd) popeye --save

Example to save report in working directory in HTML format under the name "report.html":

$ POPEYE_REPORT_DIR=$(pwd) popeye --save --out html --output-file report.html

Save the report to S3
You can also save the generated report to an AWS S3 bucket (or other S3-compatible object storage) by providing the --s3-bucket flag with the name of the S3 bucket where you want to store the report. To save the report in a bucket subdirectory, provide the bucket parameter as bucket/path/to/report.
Under the hood, the AWS Go library is used to handle credential loading. For more information, check out the official documentation.
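For example, one way to steer that credential loading is through the standard AWS environment variables the SDK resolves on its own; the profile, region, and bucket names below are placeholders:

# Illustrative only: pick an AWS profile/region, then upload the JSON report
AWS_PROFILE=my-profile AWS_REGION=us-east-1 \
  popeye --s3-bucket=NAME-OF-YOUR-S3-BUCKET --out=json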
Example to save report to S3:
popeye --s3-bucket=NAME-OF-YOUR-S3-BUCKET/OPTIONAL/SUBDIRECTORY --out=json

If AWS S3 is not your bag, you can also target an S3-compatible storage (OVHcloud Object Storage, Minio, Google Cloud Storage, etc.) using --s3-endpoint and --s3-region like so:

popeye --s3-bucket=NAME-OF-YOUR-S3-BUCKET/OPTIONAL/SUBDIRECTORY --s3-region YOUR-REGION --s3-endpoint URL-OF-THE-ENDPOINT

Run public Docker image locally
You don't have to build and/or install the binary to run Popeye: you can run it directly from the official Docker repo on DockerHub. The default command when you run the Docker container is popeye, so you just need to pass whatever CLI args you would normally pass to popeye. To access your clusters, map your local kube config directory into the container with -v:
docker run --rm -it \
-v $HOME/.kube:/root/.kube \
derailed/popeye --context foo -n bar

Running the above Docker command with --rm means that the container gets deleted when Popeye exits. When you use --save, the report is written to /tmp inside the container, which is then deleted along with the container when Popeye exits, so you lose the output. To get around this, map your local /tmp to the container's /tmp. NOTE: You can override the default output directory location by setting the POPEYE_REPORT_DIR env variable.
docker run --rm -it \
-v $HOME/.kube:/root/.kube \
-e POPEYE_REPORT_DIR=/tmp/popeye \
-v /tmp:/tmp \
derailed/popeye --context foo -n bar --save --output-file my_report.txt
# Docker has exited, and the container has been deleted, but the file
# is in your /tmp directory because you mapped it into the container
$ cat /tmp/popeye/my_report.txt
<snip>

The Command Line
You can use Popeye standalone or with a spinach YAML config to tune the sanitizer. Details about the Popeye configuration file are below.
# Dump version info
popeye version
# Popeye a cluster using your current kubeconfig environment.
popeye
# Popeye uses a spinach config file of course! aka spinachyaml!
popeye -f spinach.yml
# Popeye a cluster using a kubeconfig context.
popeye --context olive
# Stuck?
popeye help

Output Formats
Popeye can generate sanitizer reports in a variety of formats. You can use the -o cli option and pick your poison from there.
| Format | Description | Default | Credits |
|---|---|---|---|
| standard | The full monty output iconized and colorized | yes | |
| jurassic | No icons or color like it's 1979 | ||
| yaml | As YAML | ||
| html | As HTML | ||
| json | As JSON | ||
| junit | For the Java melancholic | ||
| prometheus | Dumps the report as Prometheus scrapable metrics | | dardanel |
| score | Returns a single cluster sanitizer score value (0-100) | | kabute |
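As a quick illustration of the -o option (the report file name below is just an example):

# Machine-readable JSON report saved to the current directory
POPEYE_REPORT_DIR=$(pwd) popeye -o json --save --output-file report.json

# Report only the single 0-100 cluster sanitizer score
popeye -o score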
The SpinachYAML Configuration
A spinach.yml configuration file can be specified via the -f option to further configure the sanitizers. This file may specify the container utilization threshold and specific sanitizer configurations as well as resources that will be excluded from the sanitization.
NOTE: This file will change as Popeye matures!
Under the excludes key you can configure Popeye to skip certain resources, or certain checks by code. Here, resource types are indicated in a group/version/resource notation. Example: to exclude PodDisruptionBudgets, use the notation policy/v1/poddisruptionbudgets. Note that the resource name is written in the plural form and everything is spelled in lowercase. For resources without an API group, the group part is omitted (examples: v1/pods, v1/services, v1/configmaps).
A resource is identified by a resource kind and a fully qualified resource name, i.e. namespace/resource_name.
For example, the FQN of a pod named fred-1234 in the namespace blee will be blee/fred-1234. This provides for differentiating fred/p1 and blee/p1. For cluster wide resources, the FQN is equivalent to the name. Exclude rules can have either a straight string match or a regular expression. In the latter case the regular expression must be indicated using the rx: prefix.
NOTE! Please be careful with your regex as more resources than expected may get excluded from the report with a loose regex rule. When your cluster resources change, this could lead to a sub-optimal sanitization. Once in a while it might be a good idea to run Popeye βconfiglessβ to make sure you will recognize any new issues that may have arisen in your clustersβ¦
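To make the matching rules concrete, here is a minimal excludes sketch; the resource names are made up for illustration:

popeye:
  excludes:
    v1/pods:
      # Straight string match: skips only the pod blee/fred-1234
      - name: blee/fred-1234
    v1/services:
      # Regex match (rx: prefix): skips every service in namespaces starting with "kube"
      - name: rx:kube.*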
Here is an example spinach file as it stands in this release. There is a fuller eks and aks based spinach file in this repo under spinach. (BTW: for newcomers to the project, this might be a great way to contribute, by adding cluster-specific spinach file PRs...)
# A Popeye sample configuration file
popeye:
  # Checks resources against reported metrics usage.
  # If over/under these thresholds a sanitization warning will be issued.
  # Your cluster must run a metrics-server for these to take place!
  allocations:
    cpu:
      underPercUtilization: 200 # Checks if cpu is under allocated by more than 200% at current load.
      overPercUtilization: 50   # Checks if cpu is over allocated by more than 50% at current load.
    memory:
      underPercUtilization: 200 # Checks if mem is under allocated by more than 200% at current load.
      overPercUtilization: 50   # Checks if mem is over allocated by more than 50% usage at current load.

  # Excludes excludes certain resources from Popeye scans
  excludes:
    v1/pods:
      # In the monitoring namespace excludes all probes check on pod's containers.
      - name: rx:monitoring
        codes:
          - 102
      # Excludes all istio-proxy container scans for pods in the icx namespace.
      - name: rx:icx/.*
        containers:
          # Excludes istio init/sidecar container from scan!
          - istio-proxy
          - istio-init
    # ConfigMap sanitizer exclusions...
    v1/configmaps:
      # Excludes key must match the singular form of the resource.
      # For instance this rule will exclude all configmaps named fred.v2.3 and fred.v2.4
      - name: rx:fred.+\.v\d+
    # Namespace sanitizer exclusions...
    v1/namespaces:
      # Exclude all fred* namespaces if the namespaces are not found (404), other error codes will be reported!
      - name: rx:kube
        codes:
          - 404
      # Exclude all istio* namespaces from being scanned.
      - name: rx:istio
    # Completely exclude horizontal pod autoscalers.
    autoscaling/v1/horizontalpodautoscalers:
      - name: rx:.*

  # Configure node resources.
  node:
    # Limits set a cpu/mem threshold in % ie if cpu|mem > limit a lint warning is triggered.
    limits:
      # CPU checks if current CPU utilization on a node is greater than 90%.
      cpu: 90
      # Memory checks if current Memory utilization on a node is greater than 80%.
      memory: 80

  # Configure pod resources
  pod:
    # Restarts check the restarts count and triggers a lint warning if above threshold.
    restarts: 3
    # Check container resource utilization in percent.
    # Issues a lint warning if above these thresholds.
    limits:
      cpu: 80
      memory: 75

  # Configure a list of allowed registries to pull images from
  registries:
    - quay.io
    - docker.io

Popeye In Your Clusters!
Alternatively, Popeye is containerized and can be run directly in your Kubernetes clusters as a one-off or CronJob.
Here is a sample setup, please modify per your needs/wants. The manifests for this are in the k8s directory in this repo.
kubectl apply -f k8s/popeye/ns.yml && kubectl apply -f k8s/popeye

---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: popeye
  namespace: popeye
spec:
  schedule: "0 */1 * * *" # Fire off Popeye once an hour
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: popeye
          restartPolicy: Never
          containers:
            - name: popeye
              image: derailed/popeye
              imagePullPolicy: IfNotPresent
              args:
                - -o
                - yaml
                - --force-exit-zero
                - "true"
              resources:
                limits:
                  cpu: 500m
                  memory: 100Mi

The --force-exit-zero flag should be set to true. Otherwise, the pods will end up in an error state. Note that popeye exits with a non-zero error code if the report has any errors.
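If you want to smoke-test the CronJob without waiting for the next scheduled run, one option is to spin up a one-off Job from it; the Job name here is arbitrary:

# Create a one-off Job from the CronJob above and follow its logs
kubectl create job popeye-manual --from=cronjob/popeye -n popeye
kubectl logs -n popeye -l job-name=popeye-manual -f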
Popeye got your RBAC!
In order for Popeye to do his work, the signed-in user must have enough RBAC oomph to get/list the resources mentioned above.
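One quick way to verify this, assuming you run Popeye under the popeye ServiceAccount defined below, is kubectl auth can-i:

# Check that the popeye ServiceAccount can list pods across all namespaces
kubectl auth can-i list pods --as=system:serviceaccount:popeye:popeye --all-namespaces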
Sample Popeye RBAC Rules (please note that those are subject to change.)
---
# Popeye ServiceAccount.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: popeye
  namespace: popeye

---
# Popeye needs get/list access on the following Kubernetes resources.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: popeye
rules:
  - apiGroups: [""]
    resources:
      - configmaps
      - deployments
      - endpoints
      - horizontalpodautoscalers
      - namespaces
      - nodes
      - persistentvolumes
      - persistentvolumeclaims
      - pods
      - secrets
      - serviceaccounts
      - services
      - statefulsets
    verbs: ["get", "list"]
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources:
      - clusterroles
      - clusterrolebindings
      - roles
      - rolebindings
    verbs: ["get", "list"]
  - apiGroups: ["metrics.k8s.io"]
    resources:
      - pods
      - nodes
    verbs: ["get", "list"]

---
# Binds Popeye to this ClusterRole.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: popeye
subjects:
  - kind: ServiceAccount
    name: popeye
    namespace: popeye
roleRef:
  kind: ClusterRole
  name: popeye
  apiGroup: rbac.authorization.k8s.io

Screenshots
Cluster D Score
Cluster A Score
Report Morphology
The sanitizer report outputs each resource group scanned and their potential issues. The report is color/emoji coded in terms of sanitizer severity levels:
| Level | Icon | Jurassic | Color | Description |
|---|---|---|---|---|
| Ok | | OK | Green | Happy! |
| Info | | I | BlueGreen | FYI |
| Warn | | W | Yellow | Potential Issue |
| Error | | E | Red | Action required |
The heading section for each scanned Kubernetes resource provides a summary count for each of the categories above.
The Summary section provides a Popeye Score based on the sanitization pass on the given cluster.
Known Issues
This initial drop is brittle. Popeye will most likely blow up whenβ¦
- You're running older versions of Kubernetes. Popeye works best with Kubernetes 1.13+.
- You don't have enough RBAC oomph to manage your cluster (see RBAC section)
Disclaimer
This is work in progress! If there is enough interest in the Kubernetes community, we will enhance per your recommendations/contributions. Also if you dig this effort, please let us know that too!
ATTA Girls/Boys!
Popeye sits on top of many open source projects and libraries. Our sincere appreciation goes to all the OSS contributors who work nights and weekends to make this project a reality!
Contact Info
- Email: fernand@imhotep.io
- Twitter: @kitesurfer
Maze, a notorious ransomware group, says it's shutting down
One of the most active and notorious data-stealing ransomware groups, Maze, says it is "officially closed."
The announcement came as a waffling statement, riddled with spelling mistakes and published on its website on the dark web, which for the past year has published vast troves of stolen internal documents and files from the companies it targeted, including Cognizant, cybersecurity insurance firm Chubb, pharmaceutical giant ExecuPharm, Tesla and SpaceX parts supplier Visser and defense contractor Kimchuk.
Where typical ransomware groups would infect a victim with file-encrypting malware and hold the files for a ransom, Maze gained notoriety for first exfiltrating a victim's data and threatening to publish the stolen files unless the ransom was paid.
It quickly became the preferred tactic of ransomware groups, which set up websites, often on the dark web, to leak the files they stole if victims refused to pay up.
Maze initially used exploit kits and spam campaigns to infect its victims, but later began using known security vulnerabilities to specifically target big-name companies. Maze was known to use vulnerable virtual private network (VPN) and remote desktop (RDP) servers to launch targeted attacks against its victims' networks.
Some of the demanded ransoms reached into the millions of dollars. Maze reportedly demanded $6 million from one Georgia-based wire and cable manufacturer, and $15 million from one unnamed organization after the group encrypted its network. But after COVID-19 was declared a pandemic in March, Maze, as well as other ransomware groups, promised not to target hospitals and medical facilities.
But security experts aren't celebrating just yet. After all, ransomware gangs are still criminal enterprises, many of which are driven by profit.
A statement by the Maze ransomware group, claiming it has shut down. Screenshot: TechCrunch
"Obviously, Maze's claims should be taken with a very, very small pinch of salt," said Brett Callow, a ransomware expert and threat analyst at security firm Emsisoft. "It's certainly possible that the group feels they have made enough money to be able to close shop and sail off into the sunset. However, it's also possible, and probably more likely, that they've decided to rebrand."
Callow said the group's apparent disbanding leaves open questions about the Maze group's connections and involvement with other groups. "As Maze was an affiliate operation, their partners in crime are unlikely to retire and will instead simply align themselves with another group," he said.
Maze denied that it was a "cartel" of ransomware groups in its statement, but experts disagree. Steve Ragan, a security researcher at Akamai, said Maze was known to post on its website data from other ransomware operations, like Ragnar Locker and the LockBit ransomware-for-hire.
"For them to pretend now that there was no team-up or cartel is just plain backwards. Clearly these groups were working together on many levels," said Ragan.
"The downside to this, and the other significant element, is that nothing will change. Ransomware is still going to be out there," said Ragan. "Criminals are still targeting open access, exposed RDP [remote desktop protocol] and VPN portals, and still sending malicious emails with malicious attachments in the hope of infecting unsuspecting victims on the internet," he said.
Jeremy Kennelly at FireEye's Mandiant threat intelligence unit said that while the Maze brand may be dead, its operators are likely not gone for good.
"We assess with high confidence that many of the individuals and groups that collaborated to enable the Maze ransomware service will likely continue to engage in similar operations, either working to support existing ransomware services or supporting novel operations in the future," said Kennelly.
Maze, a notorious ransomware group, says it's shutting down by Zack Whittaker originally published on TechCrunch
