Learning “AWS Backup” Restorations for On-Prem VMware VMs

CF Webtools has maintained VMware ESXi guest OS instances, managed by vCenter, for about 7 years. They are a mix of Linux and Windows Server OSs and are maintained at a secure and redundant co-location data center. While an expensive up-front investment, it has paid for itself over those years, and we have a plan to continue that solution for about another 5 years. A recent upgrade to the next major version proved that virtual machines take a fraction of the time to maintain compared to bare-metal instances. Granted, there’s some spin-up time when things run untouched for so long; you must remember, research, and troubleshoot procedures. Managed cloud takes almost all of that time out of the equation, making it my favorite, though I do miss hands-on hardware here and there.

Some of our on-prem VMs are critical, and some are not. The critical ones have always been backed up with different solutions, depending upon what they are and what the recovery needs look like. However, almost all have come with challenges. So I wanted to look for a VM snapshot-based cloud backup solution that I could trust and would be budget-friendly.

My first direction was to research Veeam. Their solution is very well known. However, as a small business without an existing account, it was a struggle to get the attention of Veeam and CDW. I was able to lean on one of our hardware vendors, xByte, who hooked us up with one of their Veeam partners. But the per-instance license model turned out to be fairly costly compared to our existing solutions. So I continued my search.

I then found that AWS Backup has an on-prem VMware solution. AWS Backup is relatively new to the backup game, but its implementations are continually growing. We currently use that service for all our AWS EC2 backups. That service was a godsend after numerous awful implementations of custom Lambda/CloudWatch scripts and an EBS Automation method. Finally, a solution for what should have been around since the start of EC2.

As of November 2021, AWS Backup offers backup for on-prem VMware VMs managed by vCenter. You must install their Storage Gateway virtual appliance as the “middleman” agent. I was hoping for an “agentless” solution; however, we only pay $0.05/GB-month for warm storage and $0.01/GB-month for cold storage. That’s a considerable saving, given we do not have to pay for a license per instance, and there are no incoming bandwidth fees! We will have to pay bandwidth for on-prem restores, but considering that is very rarely done, and bandwidth is relatively cheap, it’s a non-issue. We’d have to pay for storage anyway, so there’s no change.

Another significant advantage is we get a single backup solution for both on-prem and AWS Cloud. It’s one less piece of software we must be familiar with, document, troubleshoot, and keep updated. Outside of an office domain controller, we also anticipate a complete cutover to AWS in 5 years.


#aws-backup, #backup, #vm, #vmware, #vsphere

Upgrading PostgreSQL on Docker Compose

I took the dive into Docker containers on Linux. I’ve been meaning to do this for quite some time, but my specialty of ColdFusion and Microsoft SQL Server really doesn’t require much in the way of containers.

However, my current project is to convert our internal wiki from MediaWiki to Wiki.js. During this process, I’ve learned about Docker and Docker Compose. I’m currently running Docker 20.10.12 and Docker Compose V2, so I use “docker compose” rather than “docker-compose”. This is being run on Amazon Linux 2, on-premise. These steps should work for any RHEL-based Linux distro and beyond.

During this process, I somehow ended up running PostgreSQL 11 when the latest version is 14. I already had data in the database, so I thought it’d just be best to learn how to upgrade PostgreSQL, which is not as simple as changing the version number in the docker-compose.yml config file.

Thank you to José Postiga at Better Programming for getting me started with his “How to Upgrade Your PostgreSQL Version Using Docker” blog post. I was able to accomplish the upgrade with some modifications.

To upgrade PostgreSQL from 11 to 14 (other versions seem to require the same steps), these general steps must be accomplished (a command sketch follows the list):

  1. Create a temporary folder for the database backup and share it with your PostgreSQL container instance
  2. Back up the database
  3. Shut down the database container
  4. Remove the database data
  5. Change the PostgreSQL version
  6. Change password encryption
  7. Start the database container
  8. Restore the database
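Here’s a minimal sketch of those steps, assuming Docker Compose V2, a service named “db”, a “postgres” superuser, and a “./postgres-data” bind mount for the data directory (the service, user, and path names are assumptions, not from the original setup):

# 1. Create a temporary backup folder and share it with the container
#    (add "./pg-backup:/backup" under the db service's volumes in docker-compose.yml)
mkdir pg-backup
# 2. Back up all databases from the running PostgreSQL 11 container
docker compose exec -T db pg_dumpall -U postgres > ./pg-backup/dump.sql
# 3. Shut down the database container
docker compose down
# 4. Remove the old data directory so PostgreSQL 14 can initialize cleanly
rm -rf ./postgres-data
# 5. Change "image: postgres:11" to "image: postgres:14" in docker-compose.yml
# 6. If your clients expect md5 passwords, pin password_encryption accordingly
#    (PostgreSQL 14 defaults to scram-sha-256; 11 defaulted to md5)
# 7. Start the new database container
docker compose up -d db
# 8. Restore the dump into PostgreSQL 14
docker compose exec -T db psql -U postgres < ./pg-backup/dump.sql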

#docker, #docker-compose, #postgresql, #upgrade

Getting AWS Java SDK 2.0

In the past I’ve always used REST calls to the AWS API from ColdFusion. There has never been a complete CFC library that works, and the ones that exist are almost always dated. The reason is that AWS moves so fast it would take a full-time person or more to keep a library complete and up to date.

I am moving towards using the AWS Java SDK to call Java methods from ColdFusion. The SDK is kept up-to-date regularly by AWS and is quite complete and proven. The most common SDK in use today is version 1.x. However, late last year they came out with version 2.0.

According to AWS, “it is a major rewrite of the 1.11.x code base. Built with support for Java 8+, 2.x adds several frequently requested features, like nonblocking I/O, improved start-up performance and automatic iteration over paginated responses. In addition, many aspects of the SDK have been updated with a focus on consistency, immutability, and ease of use.”

But as a non-Java developer who uses Java libraries, this hasn’t come without difficulties. Because of the SDK’s sheer size, AWS requires you to compile the source into a JAR file. You can compile all of it, which took me 1 hour and 3 minutes and produced a 122 MiB file. However, they recommend compiling only the service components that you plan on using.

I initially installed Maven on Windows 10 to compile it. However, as of version 2.3.6 there is a bug that makes the tests, and thus the build, fail on Windows. An issue was opened to resolve this and, as of 1/22/2019, is pending merge into the master branch.

Therefore I compiled in Ubuntu on Windows.

Here are the commands I used to get the environment ready and build the whole SDK using Maven:

sudo su
apt-get update && apt-get upgrade
# Install Maven
apt install maven
# Install Java SDK 8
apt-get install software-properties-common
add-apt-repository ppa:webupd8team/java
apt-get update
apt-get install oracle-java8-installer
# Verify Maven works and it does not throw a JAVA_HOME notice
mvn -version
# Get the AWS SDK source
git clone https://github.com/aws/aws-sdk-java-v2.git
# Check out a tag containing the release you want to use for the build
cd aws-sdk-java-v2
git fetch && git fetch --tags
git checkout 2.x.x
# Build out the SDK
mvn clean install
# compiles to ./bundle/target/aws-sdk-java-bundle-2.x.x.jar

Now, as I mentioned before, it’s recommended to compile only the components (services) you are going to use to reduce the JAR footprint.

The guide for this can be found here: https://docs.aws.amazon.com/sdk-for-java/v2/developer-guide/setup-project-maven.html

However, I found that guide to be fairly unhelpful. Currently I haven’t been able to get it to build successfully (it creates an empty JAR file).

Basically, it’s supposed to use a “Bill of Materials” (BOM) from the Maven repository as your dependency dictionary. Then, I believe, it’s supposed to download the source files located in the Maven repository based upon your dependency definitions.

Here’s the kind of pom.xml file that is used to define all that:

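This is a minimal sketch of such a file, assuming the published software.amazon.awssdk:bom artifact and the s3 module as an example dependency; the project’s own group/artifact IDs and the version placeholder are illustrative:

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>aws-sdk-test</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>jar</packaging>
  <dependencyManagement>
    <dependencies>
      <!-- The BOM pins consistent versions for every SDK module -->
      <dependency>
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>bom</artifactId>
        <version>2.x.x</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>
  <dependencies>
    <!-- Only the service components you plan on using, e.g. S3 -->
    <dependency>
      <groupId>software.amazon.awssdk</groupId>
      <artifactId>s3</artifactId>
    </dependency>
  </dependencies>
</project>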

After hours of frustration, I decided to boot up an Amazon Linux 2 instance to see if maybe it was related to Ubuntu on Windows. Interestingly enough, I got a different outcome.

Looking at the contents of the target JAR, it seems promising. I’m not exactly sure what to expect just yet.

#jar, #java, #sdk

AWS Introduces Service Backups

Currently, to back up EBS volumes on AWS, CF Webtools and many other organizations rely upon either custom Lambda scripts that can be buggy or a hard-to-configure solution that’s deployed from a package AWS supplies.

Today AWS announced “AWS Backup”, a centrally managed and automated solution for backing up multiple AWS services.

[Product page diagram: how AWS Backup works]

This appears to be a backup solution that seeks out tags across different services. But the key here is that it looks like a well-thought-out solution that we don’t have to dick around with to make it work.
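For example, here’s a hedged sketch of tag-based assignment with the AWS CLI (the plan ID, account/role ARN, and tag key/value are placeholders):

# Assign every resource tagged Backup=daily to an existing backup plan
aws backup create-backup-selection \
  --backup-plan-id "abcd1234-ef56-7890" \
  --backup-selection '{
    "SelectionName": "tagged-resources",
    "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
    "ListOfTags": [{
      "ConditionType": "STRINGEQUALS",
      "ConditionKey": "Backup",
      "ConditionValue": "daily"
    }]
  }'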

Service-level backups are provided for Amazon EFS and Amazon DynamoDB. Service-level snapshots are provided for Amazon RDS, Amazon EBS, and AWS Storage Gateway.

You can also back up your on-premise data via the AWS Storage Gateway.

I look forward to implementing this service to provide our clients with a much more reliable backup solution.

New AWS Windows and DR Services


On January 7th, 2019, AWS released the AMI for Windows Server 2019. This comes a few months after the October 2nd, 2018 release of the new Microsoft OS.

Some highlights include smaller and more efficient Windows containers, support for Linux containers for application modernization, and the App Compatibility Feature on Demand. It comes under the standard AWS Windows pricing model.

On that note, Windows Server 2008 R2 (SP1) reaches end-of-life in about one year, while mainstream support for Windows Server 2012 R2 ended back in October of 2018. Don’t let technical debt put you into a predicament. CF Webtools’ operations team can assist you with upgrading your operating system to a more recent version, whether it be the well-tested Windows Server 2016 or the most modern Windows Server 2019.

CF Webtools uses CloudEndure to provide disaster recovery (DR) services to our on-premise and data center clients. The service is low impact and gives you the security your organization demands. We investigated multiple DR services and chose CloudEndure to be our primary solution.

Amazon has recently acquired CloudEndure. The company was backed by Dell, Infosys and others. They have been an AWS Advanced Technology Partner since 2016. More details have yet to surface.

If you are interested in Disaster Recovery Services for your organization, please contact us and we’d love to help you.

Estimating AWS EC2 EBS Snapshots

Estimating and understanding what AWS EC2 EBS Snapshots will cost you can be more difficult than you may think.

Here are some key points to keep in mind:

  • Snapshots are not compressed. Therefore your first snapshot will be equal to the GiB used in the source EBS volume.
  • Additional snapshots are incremental. Each incremental snapshot uses pointers, pointing to the prior snapshot’s blocks that have not changed. New blocks are recorded.
  • You can use the AWS Cost Explorer to view past usage. Today’s usage is not available. Filter down by “Usage Type Group” and set the value to “EC2: EBS – Snapshots”. Narrow down further by region and/or tag.
    • Usage (GB) is measured in “GB-Month”. So if there are 30 days in that month, multiply the metric by 30 to get that day’s actual usage.
  • As of 12/10/2018, the cost of a snapshot is $0.05/GB/mo

The hard part is estimating the amount of change per snapshot. The most lenient method would be to use a 100% change value. But that’s not practical.

Let’s say you estimate that 3% of your total volume size will be modified per snapshot. Then plan on an additional cost of $0.15/mo for every 100 GiB of used volume space on every snapshot produced (3 GiB of changed blocks × $0.05/GB/mo).
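As a back-of-the-envelope check, here’s that math in shell (the 100 GiB used space, 3% change rate, and 30 snapshots per month are assumptions for illustration):

# Assumed inputs: 100 GiB used, 3% change per snapshot, $0.05/GB-month
awk 'BEGIN {
  used = 100; change = 0.03; price = 0.05; snaps = 30
  first = used * price            # first snapshot stores the full volume
  incr  = used * change * price   # each incremental stores only changed blocks
  printf "first snapshot: $%.2f/mo\n", first
  printf "each incremental: $%.2f/mo (x%d = $%.2f/mo)\n", incr, snaps, incr * snaps
}'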

SQL Server Agent Jobs on AWS RDS Multi-AZ

When running Microsoft SQL Server on AWS RDS, you may run into a configuration issue that can trip you up during failover or instance upgrades.

This is taken from the Microsoft SQL Server Multi-AZ Deployment Notes and Recommendations section under the “Multi-AZ Deployments for Microsoft SQL Server with Database Mirroring” document:

If you have SQL Server Agent jobs, you need to recreate them in the secondary, as these jobs are stored in the msdb database, and this database can’t be replicated via Mirroring. Create the jobs first in the original primary, then fail over, and create the same jobs in the new primary.

This is one of the weaknesses in the Multi-AZ for RDS SQL Server service.

They use mirroring to keep two RDS instances loaded with identical user table data, but they can’t mirror MSDB because it’s a system database.

One of the reasons jobs are so confusing on Multi-AZ SQL Server is that if you start off as Single-AZ and move to Multi-AZ, all of your jobs are copied as part of the move. That’s because AWS takes a snapshot of all your data (including msdb) and recreates it on the mirrored instance. This is where it can get confusing: people look at a native Multi-AZ instance and at a “was Single-AZ, now is Multi-AZ” instance and see inconsistent behavior in the jobs. But it can all be understood if you apply two rules:

  1. Jobs created when you’re Single-AZ will be copied when you move to Multi-AZ, because AWS takes a snapshot of all databases (including MSDB), but
  2. Other than that, no changes to jobs will ever be copied to the mirror unless the changes are done manually on both servers.
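A hedged sketch of that create-failover-recreate procedure; the endpoint, credentials, instance identifier, and job name are placeholders:

# 1. Create the job on the current primary (a minimal, hypothetical job)
sqlcmd -S mydb.abc123.us-east-1.rds.amazonaws.com -U admin -P '***' \
  -Q "EXEC msdb.dbo.sp_add_job @job_name = N'NightlyCleanup'"
# 2. Force a failover so the standby becomes the new primary
aws rds reboot-db-instance --db-instance-identifier mydb --force-failover
# 3. After failover completes, run the same script again; the endpoint is
#    unchanged but now points at the new primary, so the job exists on both
sqlcmd -S mydb.abc123.us-east-1.rds.amazonaws.com -U admin -P '***' \
  -Q "EXEC msdb.dbo.sp_add_job @job_name = N'NightlyCleanup'"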


AWS Database Migration Service Endpoint Connection Issue

When setting up an AWS Database Migration Service (DMS) endpoint to an EC2 instance within your VPC, you may get an error stating the connection could not be established and there’s a login timeout:

Test Endpoint failed: Application-Status: 1020912, Application-Message: Failed to connect Network error has occurred, Application-Detailed-Message: RetCode: SQL_ERROR SqlState: HYT00 NativeError: 0 Message: [unixODBC][Microsoft][ODBC Driver 13 for SQL Server]Login timeout expired ODBC general error.

This may be due to a lack of ingress into your EC2 instance. Create a security group that allows the appropriate port into your EC2 instance, for example 1433 for SQL Server, limited to the private IP address of the DMS replication instance. Then attach that security group to the EC2 endpoint (database).

That’s the easy part. But how do you find the private IP? It’s not listed anywhere in the DMS console.

  1. Go to your DMS Replication Instance and copy the VPC ID and public IP address listed.
  2. Go to Network Interfaces inside your EC2 console.
  3. Look for the network interface with the copied public IPv4 address and VPC ID.
  4. Copy the primary private IPv4 address.
  5. Go to Security Groups.
  6. Select or create one that is associated with your database endpoint instance.
  7. Add the copied IP into the source field of an inbound rule (see the CLI sketch below).
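The same lookup and rule can be scripted with the AWS CLI; in this hedged sketch, the public IP, security group ID, resulting private IP, and port are placeholders:

# Find the replication instance's private IP by filtering ENIs on its public IP
aws ec2 describe-network-interfaces \
  --filters Name=association.public-ip,Values=203.0.113.10 \
  --query 'NetworkInterfaces[0].PrivateIpAddress' --output text
# Allow SQL Server (1433) from that private IP into the database's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 1433 --cidr 10.0.1.25/32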
