Do you have your tools stored in a non-climate-controlled environment, particularly if they are used in the field and exposed to the elements? If so, you know how quickly rust may form on your equipment.
A simple solution is to place pure camphor blocks in your toolbox or other tool compartment to prevent your tools from rusting. Camphor fumes will fill a drawer, cabinet, toolbox, or any closed compartment, then condense on the surfaces of the tools, coating them with a film just a few molecules thick. And because this film is an oil, it repels moisture and helps keep your tools from rusting.
Camphor is a crystalline oil derived from naturally occurring chemicals found in laurel trees or produced synthetically from turpentine. It slowly evaporates from its compressed block and then condenses on metal surfaces, insulating them from moisture. You need about 1 to 2 ounces of camphor per toolbox or cabinet, and you can expect a block to last 6 months to 1 year in a temperate climate. Be sure to keep the toolbox or cabinet closed during storage. Avoid naphthalene (which is what most mothballs are made from these days).
Camphor is flammable — the flash point for camphor vapor is 150 degrees F. So use caution with tools such as grinders or anything that would get very hot or throw sparks. It’s also toxic if taken internally, so keep pets, children, and well… you from eating it.
You can buy camphor blocks from a source like Amazon. Don’t unwrap them. Instead, cut a slit in each block’s individual packaging and place it in the compartment.
You know it’s time to replace your camphor block when it has significantly shrunk or its scent has disappeared; a block of 100% pure camphor gradually shrinks and eventually disappears entirely as it evaporates, and at that point it is no longer providing its protective benefits.
Today’s scenario consists of a db.m6i.2xlarge Amazon RDS instance running Microsoft SQL Server Web Edition. A newer “greenfield” web application has a large number of users hitting it during work hours.
An alert indicated that the server was nearing its CPU capacity. To start our investigation, we examine the instance’s Monitoring tab and see that the “CPUUtilization” metric indeed indicates an issue. The chart shows a server restart around the 17:35 mark to free up some resources; however, CPU utilization quickly rockets back up.
Next, we examine our Database Insights (which you will need to activate in advance). Here we see CPU waits exceeding our max vCPU threshold, indicating an issue.
We then scroll down to the “Top SQL” tab and see that our largest wait (taking up 5.44% of the database load at the time of this screenshot, a few hours after the fix) is our target SQL query.
One of the items we double-checked, and found to be an oversight, was that the max degree of parallelism (MAXDOP) parameter was set to 0 (unlimited). For OLTP (Online Transaction Processing) workloads, it’s generally best to set it to 1, effectively disabling parallelism. That ensures no single query can consume too many resources, leaving processor cores available for other user requests and improving overall system responsiveness and throughput.
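For reference, here is a minimal sketch of making that change from the AWS CLI rather than the console. It assumes the instance is already attached to a custom DB parameter group; the group name “my-sqlserver-params” is hypothetical.

#Set MAXDOP to 1 on the custom parameter group attached to the RDS SQL Server instance
aws rds modify-db-parameter-group \
  --db-parameter-group-name my-sqlserver-params \
  --parameters '[{"ParameterName":"max degree of parallelism","ParameterValue":"1","ApplyMethod":"immediate"}]'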
However, after setting this to 1 and rebooting the instance, we found the primary issue remained. MAXDOP issues tend to be reflected as “CXPACKET” waits instead of “CPU” waits, and you can see that all the “CXPACKET” waits, shown in light blue in the screenshot chart below, ceased at 18:30 when the parameter was changed from 0 to 1 and the server was emergency restarted.
During troubleshooting, developers optimized the query to join on a primary ID and then do a fuzzy search. Per the developer, the previous query was not utilizing the indexes properly.
At about 19:30 on the chart, the optimized query was pushed to production. The SQL Server was then emergency restarted to clear the hanging queries. You can see that the issue was then resolved with a practical database load and wait metric.
Spot-checking the monitoring over the next few hours confirmed that the fixes resolved the issue.
In conclusion, utilize AWS RDS’s database insights and CloudWatch monitoring tools to look for clues. Often, on an OLTP stack, database waits become an issue that manifests as slow-loading web pages. Use application performance monitors (APM) such as FusionReactor for ColdFusion and Lucee servers to help you narrow down troublesome issues.
As a side note, while restarting production databases during normal hours is never great, you have to weigh the benefits. In this case a minute or two of downtime won over endless lag. Credit goes to Amazon RDS for a very speedy server restart. We were back up within a minute or two at most after hitting restart!
Spoiler alert: the answer is yes, and by quite a bit, but with reservations. You have to look at the whole picture.
I took an example linked AWS account that has two Windows EC2 compute instances currently running. Both are web servers, one for production and one for staging. The following monthly cost comparisons use both on-demand pricing and savings plans with 1-year commitments and no down payment, as that is typical for me.
| AWS Instance | On Demand | Savings Plan | Azure Instance | On Demand | Savings Plan |
|---|---|---|---|---|---|
| t3a.2xlarge | $327 | $266 | B8ms | $247 | $171 |
| m7i-flex.2xlarge | $535 | $461 | D8d v5 | $330 | $226 |
As you can see, I could potentially save up to $330 per month on just these two instances by moving from AWS to Azure. $3,960/yr is a decent vacation or maybe a paycheck. So why not switch cloud providers? Well, there is the cost of learning a new infrastructure and dealing with all the nuances that come with it. But in this specific case, the deciding factor is managed database services.
The following compares single-AZ Microsoft SQL Server Web Edition on Amazon RDS with provisioned vCore Azure SQL Database. Note that Azure SQL Database is edition agnostic. There is also Azure SQL Managed Instance, but that’s even more expensive. We won’t be comparing AWS reserved instances to Azure reserved capacity because we may never buy a reserved RDS instance on AWS for this account. Once you buy one, you are very locked in: no instance type or size exchanges, and no selling it on a marketplace. But as you can see, reserved capacity on Azure is still more expensive than AWS on demand.
| AWS Instance | On Demand | Azure Instance | On Demand | Reserved Capacity |
|---|---|---|---|---|
| db.m6i.xlarge | $430 | D4s v5 | $737 | $581 |
| db.m6i.2xlarge | $890 | D8s v5 | $1,105 | $1,162 |
Find that odd? You actually get better value on Azure when comparing against Standard or Enterprise edition pricing. However, I deal primarily with Web and Express editions; the majority of applications I handle don’t require the functionality or redundancy built into Standard and Enterprise. If you do require Standard or Enterprise, I would strongly suggest looking at Azure to save quite a bit of money, or moving to Linux and an alternative database technology like Aurora on AWS.
When we factor in the managed database, moving from AWS to Azure no longer offers value, as the compute savings are lost. Yes, there are options to spin up the database on a VM and manage it ourselves, but then you are adding monthly labor costs and additional license costs for backups, not to mention losing the performance gains RDS or Azure SQL Database bring.
In conclusion, hosting Windows servers on Azure rather than AWS can save you money, but you need to factor in all the services, even beyond the database, including system administration labor and third-party utility integrations. Why can Azure offer potentially significant savings on Windows servers? Because Microsoft owns both the infrastructure and the licenses, and it does not extend the same license discounts to other cloud providers. For example, you used to be able to bring your own SQL Server license to AWS RDS, but that was discontinued some time ago.
Note: Turn off the “Azure Hybrid Benefit” slider when viewing Azure pricing. This option requires bringing your own license and does not facilitate an accurate pricing comparison.
If you are interested, Microsoft has an official AWS-to-Azure compute mapping guide; however, I used Microsoft Copilot to help me find the equivalent instance types and sizes. There are also web utilities that map them, which you can search for.
There are a number of methods for removing emails from the Amazon Simple Email Service (SES) account-level suppression list, but underneath they all use the API. This could be via a native API library, importing a file in the console UI, using the CloudShell, and more. One thing is certain, though: the console UI alone does not make this practical. You must use the CLI/API in one way or another.
For the purposes of this article, we will use the CloudShell. I’m writing this because the answers I got from Amazon Q and some Google searches all failed. I ended up with a good answer from the handy Stack Overflow post Remove Email from AWS SES Suppression List by user “Jawad”, even though it wasn’t marked as the accepted response.
Just a forewarning, there’s nothing fast about this. It iterates about one email per second, so expect to be monitoring this for a while.
The CloudShell can be accessed by pressing the “CloudShell” icon in the top header of the AWS console. It looks like a command prompt inside a window icon. It will default to the region you currently have selected.
#Purge Amazon SES Account Level Suppression List
# AWS CLI command to list all email addresses in the suppression list
suppression_list=$(aws sesv2 list-suppressed-destinations)
# Extracting email addresses from the suppression list
email_addresses=$(echo "$suppression_list" | jq -r '.SuppressedDestinationSummaries[].EmailAddress')
# Loop through each email address and remove it from the suppression list
for email in $email_addresses; do
echo "Removing $email from the suppression list..."
aws sesv2 delete-suppressed-destination --email-address "$email"
done
echo "This page of emails purged. Rerun this script to delete further potential pages of emails, if emails were deleted."
Once you run this script, if any emails were deleted, run it again to process further pages. Additionally, changes may take up to 48 hours to propagate, so an email may still show a status of being on the suppression list until then.
The command list-suppressed-destinations retrieves the first page of up to 1,000 objects (default), which includes the email. At the end of the returned suppression_list value, if a “NextToken” string is defined, there are more emails still in the list. You can add a --no-paginate parameter to the command to turn off pagination, but depending on the size of your list, it’s possible you may run into unexpected issues, like memory limitations, that I have not tested for. See command documentation.
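If you would rather not rerun the script by hand, here is a hedged variation of the same approach that keeps re-listing and deleting until the list comes back empty. It bounds itself to a handful of passes, since deletions that are slow to propagate could otherwise make it spin; later passes may log the NotFoundException error shown below for addresses that were already removed, which can be ignored.

#Purge every page of the SES account-level suppression list (bounded to five passes)
for pass in 1 2 3 4 5; do
  emails=$(aws sesv2 list-suppressed-destinations | jq -r '.SuppressedDestinationSummaries[].EmailAddress')
  # stop once the list comes back empty
  [ -z "$emails" ] && break
  for email in $emails; do
    echo "Pass $pass: removing $email from the suppression list..."
    aws sesv2 delete-suppressed-destination --email-address "$email"
  done
done
echo "Suppression list purge complete."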
You can adapt this script to a CLI outside of the CloudShell by adding the --region <your-region> parameter to the list-suppressed-destinations and delete-suppressed-destination commands.
The other methods I found seemed to fail mostly because they introduced or kept double quotes around the email address, leading to the following error from the delete-suppressed-destination command:
An error occurred (NotFoundException) when calling the DeleteSuppressedDestination operation: Email address (email address) does not exist on your suppression list.
Recently, my mother received a camera-equipped bird feeder from the family; it uses AI recognition to identify bird species and recharges itself with a solar panel. (Hint: if you ever buy one of these, get one with “free AI forever”; others charge a monthly or annual subscription you don’t want to be stuck with.) True, it’s a little gimmicky, but she loves her birds and it’s a thoughtful gift.
However, the downside to these types of devices is that they only support 2.4GHz Wi-Fi. The upside to this is extended range (think outdoors to indoors), and it doesn’t require a lot of bandwidth anyway. Most wireless CCTV-type IP Cameras still primarily use 2.4GHz Wi-Fi to achieve the desired range and penetrate materials more effectively, such as walls.
They attempted to connect the camera to the Wi-Fi, which you’d think would have been easy. It was not. When you have a multi-band Wi-Fi network (2.4GHz, 5GHz, 6GHz) with all bands using the identical SSID (Wi-Fi name), the device gets confused and just doesn’t connect. I’ve had the same thing happen with Ring doorbells in the past, along with other devices.
A phone call to the ISP may have helped them, assuming the ISP would not tell them to call the bird feeder company for support. And no one really wants to sit on hold and try to explain the issue over the phone anyway. So they called the “expert”, aka me.
Here are the steps I took:
Boot up the trusty laptop and connect to the Wi-Fi.
Get the default gateway using ipconfig from the command prompt.
Connect to the router/modem using the default gateway IP I just looked up and try the trusty default admin/password combos (of course, then setting a new password, so don’t even try 😉 ).
Look for a way to set up a separate 2.4GHz SSID for devices, which I was unable to find (this is available on some access points/routers).
Disable the 5GHz and 6GHz SSIDs.
Connect the camera to the Wi-Fi.
Re-enable the 5GHz and 6GHz SSIDs.
And now the camera and router remember they want to be on 2.4GHz, and all is well; my mom is enjoying her new smart bird feeder. What a world we live in!
When running a business with the goal of becoming successful, you may inevitably fill out a risk assessment questionnaire that asks, “Is all your data encrypted at rest?”. And when you start looking at your list of EBS volumes created over the years, you might be surprised to learn that the “encrypted” property of your volumes very well might equal “false”. NVMe instance store volumes are encrypted, but many EC2 solutions rely on EBS volumes, and those are not encrypted by default.
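Before going further, a quick inventory sketch using the AWS CLI can show what you are dealing with; the region below is just an example:

#List unencrypted EBS volumes in a region (example region shown)
aws ec2 describe-volumes \
  --region us-east-1 \
  --filters Name=encrypted,Values=false \
  --query 'Volumes[].{ID:VolumeId,Size:Size,State:State}' \
  --output table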
Encryption at rest for Amazon Elastic Block Store (EBS) volumes protects data when it’s not in use. It ensures that data stored on the physical storage media in an AWS data center is unreadable to anyone without the appropriate decryption key. Many argue this is a fundamental component of a strong security strategy, and they are not wrong.
AWS leverages its Key Management Service (KMS) to manage the encryption keys. You can use the default AWS-managed keys or create and manage your own customer-managed keys (CMKs) for more control over key policies, access permissions, and rotation schedules.
Does encryption at rest really matter while the data sits in an AWS data center, specifically? Unless you are dealing with classified information, probably not. AWS has an impressive level of security separating your data from bad actors, both virtually and physically. The chances of someone walking off with your data medium (hard drive) are slim to none, and when the medium goes bad, the destruction and audit process is equally impressive. But at the end of the day, it matters to someone’s boss, their investors, policies, perception, and potentially your career. Many industry regulations and security standards, such as HIPAA, PCI DSS, and GDPR, require that sensitive data be encrypted at rest.
Security is like an onion with many layers. While AWS has robust physical and logical security controls in place, encryption at rest adds another vital layer of protection. This “defense-in-depth” approach ensures that even if one security control fails, others are there to prevent a breach.
Encryption isn’t limited to just the EBS volume itself. Per AWS documentation, when you encrypt an EBS volume, any snapshots you create from it and any subsequent volumes created from those snapshots will also be automatically encrypted. This provides an end-to-end security envelope for your data’s entire lifecycle.
So why not do it? The list of reasons is small, but you need to be aware of them, especially when dealing with AMIs and cross-region copies:
Very minimal increase in latency (typically negligible)
If a key becomes unusable (disabled or deleted), you cannot access the encrypted volume data until the key is restored
Additional key management overhead may be necessary if you opt to manage your own keys
AMIs with encrypted snapshots cannot be shared publicly – only with specific AWS accounts
Cannot directly copy encrypted snapshots to another region (must re-encrypt with a key in the destination region; see the sketch after this list)
Minimal additional cost for AWS KMS key usage
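For the cross-region case, the re-encryption typically looks something like the sketch below: a copy-snapshot call run in the destination region with a KMS key that lives there. The snapshot ID, regions, and key alias are examples.

#Copy a snapshot to another region, re-encrypting with a destination-region key
aws ec2 copy-snapshot \
  --region us-west-2 \
  --source-region us-east-1 \
  --source-snapshot-id snap-0123456789abcdef0 \
  --encrypted \
  --kms-key-id alias/my-destination-key \
  --description "Encrypted cross-region copy"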
When creating an EC2 instance, the default view does not encrypt the volume. You will need to press the “Advanced” link in the “Configure storage” section first.
Then, expand your volume details and change the “Encrypted” property from “Not encrypted” to “Encrypted”. Optionally, select your KMS key, or leave it alone and it will use the default. While you are here, you may also wish to change “Delete on termination” from “Yes” to “No”. Keeping the volume after termination helps prevent accidental data loss in edge cases, but be aware that it can leave unexpected orphaned EBS volumes behind if you don’t clean them up when you delete EC2 instances.
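If you script your launches, the same settings can be applied from the AWS CLI. Here is a rough sketch of the console toggles above; the AMI ID, instance type, and device name are examples, and alias/aws/ebs is the default AWS-managed EBS key.

#Launch an instance with an encrypted root volume that is kept on termination
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"VolumeSize":30,"VolumeType":"gp3","Encrypted":true,"KmsKeyId":"alias/aws/ebs","DeleteOnTermination":false}}]'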
If you forget to turn this on, or you have an existing instance, you can still convert the volume to an encrypted one, though it is a bit of a process. You need to stop the instance, create a snapshot of the unencrypted volume, detach the volume from the instance (jot down the device name, such as /dev/xvda), create a new encrypted volume from that snapshot (encryption can be enabled when restoring it), and attach the new volume to the instance with the same device mapping.
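Here is a rough CLI sketch of that conversion; all of the IDs, the availability zone, and the device name are placeholders, and each step should finish before the next:

#Convert an existing unencrypted volume to an encrypted one (sketch)
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
#snapshot the unencrypted volume
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "Pre-encryption snapshot"
aws ec2 wait snapshot-completed --snapshot-ids snap-0123456789abcdef0
#detach the original volume (jot down the device name first)
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
#restore the snapshot as a new encrypted volume in the same availability zone
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1a --encrypted --volume-type gp3
aws ec2 wait volume-available --volume-ids vol-0fedcba9876543210
#attach the encrypted volume using the original device name
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 --instance-id i-0123456789abcdef0 --device /dev/xvda
aws ec2 start-instances --instance-ids i-0123456789abcdef0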
Installing Docker with the Compose v2 plugin on Amazon Linux does not follow the recommended path outlined on the Docker website. Specifically, Docker’s documentation suggests that you add either the CentOS or Fedora repo and then run sudo dnf install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin, stating that sudo dnf install docker is the obsolete way of doing it (“ce” stands for Community Edition and “ee” stands for Enterprise Edition). Unfortunately, that does not work here. Another thing to note is that Amazon’s repo carries a more stable (older) version of Docker rather than the most current one. While this is typically a good thing, you may be missing out on a security patch you need.
Here is a script that I use for reference:
#ensure all the installed packages on our system are up to date
sudo dnf update
#use the default Amazon repository to download and install the Docker
sudo dnf install docker
#start Docker
sudo systemctl start docker
#start automatically with system boot
sudo systemctl enable docker
#confirm the service is running
sudo systemctl status docker
#add our current user to the Docker group
sudo usermod -aG docker $USER
#apply the changes we have done to Docker Group
newgrp docker
#create plugin directory for all users
sudo mkdir -p /usr/libexec/docker/cli-plugins
#download current version of docker compose plugin
sudo curl -SL "https://github.com/docker/compose/releases/latest/download/docker-compose-linux-$(uname -m)" -o /usr/libexec/docker/cli-plugins/docker-compose
#make plugin executable
sudo chmod +x /usr/libexec/docker/cli-plugins/docker-compose
# restart docker service
sudo systemctl restart docker
#verify docker compose works
docker compose version
Note: Because the Docker Compose plugin was manually installed, it will not be updated by future dnf updates. You will need to repeat the download and permission process to update it.
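Updating it later is just a matter of repeating the download and permission steps, for example:

#re-download the latest docker compose plugin over the existing one
sudo curl -SL "https://github.com/docker/compose/releases/latest/download/docker-compose-linux-$(uname -m)" -o /usr/libexec/docker/cli-plugins/docker-compose
sudo chmod +x /usr/libexec/docker/cli-plugins/docker-compose
#verify the new version
docker compose version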
If you have a better way of doing this, I’d love to hear your feedback!
When I bought my 2019 RAM from Edwards Ram in Council Bluffs, Iowa, back in 2019, I received the typical car salesman knowledge: “It has 4-wheel drive, and it’s nice.” Ask the salesperson any questions, and you get the glazed-over look of “Can I see your checking account now?”
Being more technical, I did some Googling on my smartphone for an hour or two, but I didn’t find much. In particular, I wanted to find out whether I wanted the “4WD AUTO” (BW 48-11) feature with a limited-slip diff or the e-locker option. In the end, I went with the “4WD AUTO” (automatic four-wheel drive) option because I knew I mostly needed it on the streets in the winter.
It’s now 2025, and it was the best decision. Over the years I’ve gone from 4H/4L, to just 2-wheel drive, back to 4H/4L, and now to 4WD AUTO/4H/4L. I use 4WD AUTO the most and 4H/4L only when in the thick of it (rarely). When it’s snowy or icy out here in the Midwest, I can turn it on and get additional tire traction with no tire grab during steering. I recommend this feature to all drivers in this climate.
But I still didn’t understand how it worked. Was it gears or something else? Lately, I learned that the transfer case uses a clutch pack, similar to what’s in the transmission. It’s not as beefy as a transfer case with a gear pack and can heat up, but for how I use it most of the time, that’s not an issue.
One hidden tip you have to learn for when you are in the thick of it: turn off electronic stability control by pressing the traction control lever for 5 seconds. You will see an indicator message on the dash that it has been turned off. You have to be in 4H or 4L to do this. This removes unnecessary power throttling to the wheels.
If I had gone with the e-locker, I would not have the nice on-road traction during the winter months, though for true-grit off-road 4×4 needs it would be better. That’s just not what I use my truck for. Auto allows me to quickly and automatically transition from snowy or icy roads to dry roads without issue.
By default, Bing Wallpaper is installed with Windows 11. The image rotation feature is nice, and on the surface, it seems innocuous. I left it alone for years because… well, I liked the wallpaper images it provided. But the unwanted behaviors of something pushing me to search with Bing or move to Edge when opening Google Chrome put me over the edge.
After uninstalling Bing Wallpaper, these intrusive Bing and Edge promotions have ceased!
Interestingly enough, the wallpaper defaulted to “Windows Spotlight,” which rotates through high-quality wallpaper images, just like Bing Wallpaper, minus all the spyware and malware attributes. So it’s a win-win situation.
I found an interesting discussion between the community and Adobe today regarding early cfscript functionality for tags. They use CFCs (query.cfc, ldap.cfc, http.cfc, etc.) located in the ColdFusion installation directory.
I found this interesting, as I had never put together that the early cfscript functionality using “cf…()” was implemented as custom tags backed by these CFCs. See the screenshot below.
These do not correspond with “modern” cfscript such as “httpservice = new http();”. “cfhttp();” is a custom tag, however.
The discussion surrounded Adobe’s position and long-term plan for these CFCs. If you look at ColdFusion Deprecated Features, these specific functions have been deprecated since the 2018 release. Note that they are NOT marked as “Deprecated and unsupported”. However, in the Adobe bug tracker, any related bug (and there are a lot) has been marked as “Never fixing”. This realistically means “unsupported”, and therefore bugs, and potentially security risks, continue to be present even in 2023. Adobe has marked this for internal discussion on how to handle future releases.
My recommendation: if you use “cf…()” in your cfscript while running ColdFusion 2018+, stop using it in any new code and use modern cfscript instead. For existing code, swapping these out typically looks like replacing the function as you touch the code through its lifecycle. However, some more security-sensitive organizations that were unaware of this may wish to swap them out as a dedicated project. To be clear, I am aware of no active security issues that would dictate immediate replacement.
It is good to take a quick peek at ColdFusion Deprecated Features to become familiar with deprecated functions and attributes so we do not continue to build technical debt for ourselves or our customers. A more popular example may be HTMLEditFormat.
Screenshot of Adobe ColdFusion 2021 Installation Directory Containing Custom Tags for CFCs