New AWS Windows and DR Services


On January 7th, 2019, AWS released the AMI for Windows Server 2019. This comes a few months after the October 2nd, 2018 release of the new Microsoft OS.

Some highlights include smaller and more efficient Windows containers, support for Linux containers for application modernization, and the App Compatibility Feature on Demand. It falls under the standard AWS Windows pricing model.
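
If you want to locate the new AMI programmatically, here's a minimal boto3 sketch. The name pattern below follows Amazon's usual Windows AMI naming convention, but verify it for your region and edition.

    import boto3

    # Find the newest Amazon-owned Windows Server 2019 base AMI.
    # The name filter follows AWS's usual Windows AMI naming convention;
    # verify the exact pattern for your region and edition.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    images = ec2.describe_images(
        Owners=["amazon"],
        Filters=[{"Name": "name", "Values": ["Windows_Server-2019-English-Full-Base-*"]}],
    )["Images"]

    latest = max(images, key=lambda image: image["CreationDate"])
    print(latest["ImageId"], latest["Name"])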

On that note, Windows Server 2008 R2 (SP1) reaches end-of-life in about one year, while mainstream support for Windows Server 2012 R2 ended back in October of 2018. Don’t let technical debt put you in a predicament. CF Webtools’ operations team can assist you with upgrading your operating system to a more recent version, whether it be the battle-tested Windows Server 2016 or the brand-new Windows Server 2019.

CF Webtools uses CloudEndure to provide disaster recovery (DR) services to our on-premises and data center clients. The service is low impact and gives you the security your organization demands. We investigated multiple DR services and chose CloudEndure as our primary solution.

Amazon has recently acquired CloudEndure. The company was backed by Dell, Infosys, and others, and has been an AWS Advanced Technology Partner since 2016. More details have yet to surface.

If you are interested in Disaster Recovery Services for your organization, please contact us; we’d love to help.

Estimating AWS EC2 EBS Snapshots

Estimating and understanding what AWS EC2 EBS Snapshots will cost you can be more difficult than you may think.

Here are some key points to keep in mind:

  • Snapshots are not compressed, so your first snapshot will be roughly equal in size to the GiB used on the source EBS volume.
  • Additional snapshots are incremental. Each incremental snapshot records only the blocks that changed since the prior snapshot; unchanged blocks are referenced by pointers back to the previous snapshot’s data.
  • You can use the AWS Cost Explorer to view past usage (today’s data is not available). Filter by “Usage Type Group” and set the value to “EC2: EBS – Snapshots”, then narrow down further by region and/or tag. A scripted version follows this list.
    • Usage (GB) is measured in “GB-Month”. So if there are 30 days in that month, multiply the daily metric by 30 to get that day’s actual usage.
  • As of 12/10/2018, the cost of a snapshot is $0.05/GB/mo
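
If you'd rather pull those numbers programmatically, here's a minimal boto3 Cost Explorer sketch, as referenced in the list above. The USAGE_TYPE_GROUP dimension and its value mirror the console filter; double-check both against your account's data.

    import boto3

    # Pull daily EBS snapshot usage and cost for November 2018.
    # USAGE_TYPE_GROUP mirrors the console's "Usage Type Group" filter.
    ce = boto3.client("ce", region_name="us-east-1")

    results = ce.get_cost_and_usage(
        TimePeriod={"Start": "2018-11-01", "End": "2018-12-01"},
        Granularity="DAILY",
        Metrics=["UsageQuantity", "UnblendedCost"],
        Filter={
            "Dimensions": {
                "Key": "USAGE_TYPE_GROUP",
                "Values": ["EC2: EBS - Snapshots"],
            }
        },
    )["ResultsByTime"]

    for day in results:
        gb_month = float(day["Total"]["UsageQuantity"]["Amount"])
        cost = float(day["Total"]["UnblendedCost"]["Amount"])
        # Multiply the GB-Month metric by the days in the month (30 here)
        # to approximate the GB actually stored that day.
        print(day["TimePeriod"]["Start"], f"{gb_month * 30:.1f} GB", f"${cost:.2f}")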

The hard part is estimating how much data changes between snapshots. The most conservative method would be to assume a 100% change rate, but that’s rarely realistic in practice.

Let’s say you estimate that 3% of your total volume size will be modified per snapshot. Plan on an additional cost of $0.15/mo for every 100 GiB of used volume space, for every snapshot produced.
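
To make that concrete, here's a small Python sketch of the arithmetic. The 3% change rate and one-snapshot-per-day cadence are assumptions to replace with your own measurements.

    PRICE_PER_GB_MONTH = 0.05  # snapshot price as of 12/10/2018

    def monthly_snapshot_cost(used_gib, change_rate=0.03, snapshots_per_month=30):
        # The first snapshot is a full copy; each retained incremental adds
        # roughly change_rate * used_gib of new blocks.
        full = used_gib * PRICE_PER_GB_MONTH
        incrementals = used_gib * change_rate * snapshots_per_month * PRICE_PER_GB_MONTH
        return full + incrementals

    # 100 GiB volume: each incremental adds 100 * 0.03 * $0.05 = $0.15/mo.
    print(f"${monthly_snapshot_cost(100):.2f}/mo")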

How Many Hours In A Month?

When estimating the monthly cost of a per-hour service over the course of a year, you need to know how many hours are in an average month. For some reason, a simple “average hours in a month” Google search yields unhelpful results.

730.5 AVERAGE HOURS IN A MONTH

There are 365.25 days in an average Julian calendar year (365 in a common year, 366 in a leap year), 24 hours in a day, and 12 months in a year.

365.25 days X 24 hours / 12 months = 730.5 hours

So now you can estimate monthly cost for hourly services, such as Cloud Services.

example: $0.10/hr * 730.5 = $73.05 average cost per month over a year
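
Or, as a trivial Python helper:

    HOURS_PER_MONTH = 365.25 * 24 / 12  # 730.5

    def average_monthly_cost(hourly_rate):
        """Average monthly cost of an hourly service, amortized over a year."""
        return hourly_rate * HOURS_PER_MONTH

    print(f"${average_monthly_cost(0.10):.2f}")  # $73.05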

SQL Server Agent Jobs on AWS RDS Multi-AZ

When running Microsoft SQL Server on AWS RDS, you may run into a configuration issue that can trip you up during failover or instance upgrades.

This is taken from the Microsoft SQL Server Multi-AZ Deployment Notes and Recommendations section of the “Multi-AZ Deployments for Microsoft SQL Server with Database Mirroring” documentation:

If you have SQL Server Agent jobs, you need to recreate them in the secondary, as these jobs are stored in the msdb database, and this database can’t be replicated via Mirroring. Create the jobs first in the original primary, then fail over, and create the same jobs in the new primary.

This is one of the weaknesses of the Multi-AZ RDS SQL Server service.

AWS uses mirroring to keep two RDS instances loaded with identical user table data, but it can’t mirror msdb because it’s a system database.

One of the reasons jobs are so confusing on Multi-AZ SQL Server is that if you start off as Single-AZ and move to Multi-AZ, all of your jobs are copied as part of the move. That’s because AWS takes a snapshot of all your databases (including msdb) and recreates them on the mirrored instance. This is where it gets confusing: people compare a born-Multi-AZ instance against a “was Single-AZ, now Multi-AZ” instance and see inconsistent behavior in the jobs. It can all be understood if you apply two rules:

  1. Jobs created while you’re Single-AZ will be copied when you move to Multi-AZ, because AWS takes a snapshot of all databases (including msdb), but
  2. Other than that, no changes to jobs will ever be copied to the mirror unless the changes are made manually on both servers (a scripted failover step is sketched below).
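
Rule 2 means you have to create or alter a job against both nodes yourself. One way to reach the standby is a forced failover reboot; here's a hedged boto3 sketch of that step (the instance identifier is a placeholder). Run your job-creation script against the endpoint before and after the failover.

    import boto3

    # Force a Multi-AZ failover so the standby becomes the new primary.
    # Create your SQL Server Agent jobs against the endpoint both before
    # and after this call so each node carries identical job definitions.
    rds = boto3.client("rds", region_name="us-east-1")

    rds.reboot_db_instance(
        DBInstanceIdentifier="my-sqlserver-instance",  # placeholder name
        ForceFailover=True,
    )

    # Wait until the instance is available before re-running the job script.
    rds.get_waiter("db_instance_available").wait(
        DBInstanceIdentifier="my-sqlserver-instance"
    )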


AWS Database Migration Service Endpoint Connection Issue

When setting up an AWS Database Migration Service (DMS) endpoint to an EC2 instance within your VPC, you may get an error stating that the connection could not be established and there’s a login timeout:

Test Endpoint failed: Application-Status: 1020912, Application-Message: Failed to connect Network error has occurred, Application-Detailed-Message: RetCode: SQL_ERROR SqlState: HYT00 NativeError: 0 Message: [unixODBC][Microsoft][ODBC Driver 13 for SQL Server]Login timeout expired ODBC general error.

This may be due to a lack of ingress into your EC2 instance. Create a security group that allows the appropriate port into your EC2 instance (for example, 1433 for SQL Server), limited to the private IP address of the DMS replication instance. Then attach that security group to the EC2 instance hosting the database endpoint.

That’s the easy part. But how do you find the private IP? It’s not listed anywhere in the DMS console. The console steps are below, followed by a scripted version.

  1. Go to your DMS Replication Instance and copy the VPC ID and public IP address listed.
  2. Go to Network Interfaces inside your EC2 console.
  3. Look for the network interface with the copied public IPv4 address and VPC ID.
  4. Copy the Primary Private IPv4 IP.
  5. Go to Security Groups.
  6. Select or create one that is associated with your database endpoint instance.
  7. Add the copied IP into the source field of an inbound rule.
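
Here's that scripted version as a hedged boto3 sketch mirroring the console steps above; the replication instance identifier, security group ID, and port are placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    dms = boto3.client("dms", region_name="us-east-1")

    # Step 1: get the replication instance's public IP (shown in the DMS
    # console). The API may also expose the private IP directly; this
    # deliberately mirrors the console workflow.
    reply = dms.describe_replication_instances(
        Filters=[{"Name": "replication-instance-id", "Values": ["my-dms-instance"]}]
    )
    public_ip = reply["ReplicationInstances"][0]["ReplicationInstancePublicIpAddress"]

    # Steps 2-4: find the network interface holding that public IP and
    # read its private IP.
    enis = ec2.describe_network_interfaces(
        Filters=[{"Name": "association.public-ip", "Values": [public_ip]}]
    )
    private_ip = enis["NetworkInterfaces"][0]["PrivateIpAddress"]

    # Steps 5-7: allow that private IP through the database's security
    # group (1433 here for SQL Server; the group ID is a placeholder).
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 1433,
            "ToPort": 1433,
            "IpRanges": [{"CidrIp": f"{private_ip}/32"}],
        }],
    )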

Elon Musk on The Joe Rogan Experience

Elon Musk is one of my favorite people to follow. I’d love to own a Tesla as well. From SpaceX to The Boring Company to Tesla, I find them all interesting.

Here’s some pretty interesting insight into Elon’s mind.

ColdFusion Docker Image Released


On April 25, 2018, Adobe released the long-awaited official Docker image for ColdFusion 2016! A ColdFusion 2018 image is in the works.

I am excited about this primarily for two reasons:

#1 Development: we can create a Docker image that can be passed around to developers, either for general use or tailored to a specific customer’s setup. This has the potential to speed up “ramp-up time” for developers beginning a new client workload.

#2 Fix AWS AMIs: The current AMI solution on AWS is bleak. You’re limited to operating systems that are either the wrong flavor of Linux for you or an outdated Windows Server version. You’re also stuck with the inability to upgrade the OS or ColdFusion. I’m hoping this image makes its way into the AWS container library that you can lease month-to-month, in both the Standard and Enterprise flavors.

Adobe chose the JFrog container repository over Docker Hub “due to licencing and distribution issues”. This seems to be a common theme with Adobe, but at least it’s out there. You can find these repos at https://bintray.com/eaps/coldfusion.

As of 4/26/2018, you will find the following images:

  • ColdFusion Server (2016)
  • ColdFusion Addons (2016)
  • ColdFusion API Manager (2016)
  • ColdFusion API Manager Addons (2016)

The “ColdFusion Server” image contains the barebones only. It runs the bundled “built-in” web server (normally port 8500).

You’d then normally want to connect Apache or IIS to ColdFusion using wsconfig. However, that’s not possible here in the usual sense, because the container has no real access to the outside world, including your web server services.

So in this case you’ll need to treat this as a distributed setup: copy some files, including wsconfig, onto your Apache file system (likely in another container), then run wsconfig there and connect it to the “remote” ColdFusion instance. There are some basic instructions at http://blogs.coldfusion.com/setting-up-coldfusion-in-distributed-envionment/ and Adobe says it will work on an official instruction set; I also plan on posting my own. I don’t think this will work with IIS without a hack, but that’s definitely something I’d want, since the majority of web servers we maintain are IIS.

When you run the ColdFusion container, you can pass in a limited set of environment variables. They include items such as the admin password, secure profile, external session info, add-ons, and a setup script.
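
For example, here's a minimal sketch using the Docker SDK for Python. The image tag and environment variable names are assumptions; confirm the exact names against the Bintray repo's documentation.

    import docker

    # Run the ColdFusion 2016 container with its built-in web server exposed.
    # The image name and environment variable names are assumptions; check
    # the Bintray repo's documentation for the exact values.
    client = docker.from_env()

    container = client.containers.run(
        "eaps-docker-coldfusion.bintray.io/cf/coldfusion2016",  # hypothetical tag
        detach=True,
        ports={"8500/tcp": 8500},          # built-in web server
        environment={
            "acceptEULA": "YES",           # assumed variable names
            "password": "SuperSecret1",    # ColdFusion admin password
            "setupScript": "setup.cfm",    # startup script (e.g., datasources)
        },
    )
    print(container.id)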

The setup script is a .cfm file that calls the admin API; there you can script items such as datasources. However, as far as I know, not all admin functions are available via the API. One of the major benefits of running containers is having a disposable environment that can easily be recreated, and to do that you must be able to script out all of your configuration. I would also like to see every setting exposed as an environment variable, since that’s what environment variables are intended for. Using a script is more or less a hack that needs additional maintenance.

Another method is mounting a volume for configuration files such as jvm.config and the neo-*.xml files. I still have to experiment to figure out how that would work.
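
Untested, but the mount might look something like this. The /opt/coldfusion install path and file names are assumptions based on ColdFusion's standard directory layout; inspect the image to confirm them.

    import docker

    client = docker.from_env()

    # Bind-mount pre-built config files over the container's copies.
    # Paths and file names are assumptions; verify against the image.
    container = client.containers.run(
        "eaps-docker-coldfusion.bintray.io/cf/coldfusion2016",  # hypothetical tag
        detach=True,
        ports={"8500/tcp": 8500},
        volumes={
            "/srv/cf/jvm.config": {
                "bind": "/opt/coldfusion/cfusion/bin/jvm.config", "mode": "ro"},
            "/srv/cf/neo-datasource.xml": {
                "bind": "/opt/coldfusion/cfusion/lib/neo-datasource.xml", "mode": "ro"},
        },
    )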

The third method is to mount a directory of CAR archives at “/data”; configurations in the archives are automatically imported during container setup. This is a rather static method, though, and not easy to manage. Also, according to Immanual Noel, as of 4/27/2018 Adobe has an issue importing DSNs and scheduled tasks in this fashion and is currently working on a fix.

The final method is to use Ortus’s CFConfig CLI. You can pass in a JSON string and let it build out the configuration. This might actually be one of the best ways to do it, though I’m hoping Adobe’s implementation catches up quickly. Ortus makes great open source products for ColdFusion, but they shouldn’t be required.

The “ColdFusion Addons” container extends the ColdFusion container and runs the SOLR and PDF services. The “.NET” service will not exist, as this is a Linux container. I would have preferred the services be separated out, per the one-service-per-container pattern.

According to Adobe, they are not going to create a Windows container at this point due to performance issues they saw. But the great thing about Docker on Windows is that it’s capable of running both Windows and Linux containers. It’s very rare that I need to run .NET; however, that does leave my two points above out in the cold if a customer uses it.

In conclusion, I look forward to testing this out and perhaps implementing this solution where it makes sense.

Other resources to read:

  • https://www.cutterscrossing.com/index.cfm/2018/4/18/Adventures-in-Docker-Land