Enabling Windows Nodes Inside an EKS Cluster on AWS

Description

We recently went about adding Windows nodes for some legacy .NET stacks at Ancestry. As part of this we followed the AWS documentation to enable Windows support. We also use Karpenter to handle the scheduling and decided to make a new provisioner just for Windows. After following the docs a new Windows node came up, but there was an error that prevented the pods from obtaining an IP address.

Warning FailedCreatePodSandBox 3m21s (x4555 over 19h) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "a760b4d93ed0937341cb5083547b0b8a197a280a66ad3d0cb096562ab2a237a1": plugin type="vpc-bridge" name="vpc" failed (add): failed to parse Kubernetes args: failed to get pod IP address windows-test-5bbc88b5f9-vzgfc: error executing k8s connector: error executing connector binary: exit status 1 with execution error: pod windows-test-5bbc88b5f9-vzgfc does not have label vpc.amazonaws.com/PrivateIPv4Address

Solution

My suspicion was that something was happening in the control plane that can't be seen from kubectl. After several rounds with AWS support, it turns out there is an admission webhook in the control plane that is triggered by setting the nodeSelector. The docs say it needs to be set, but I had assumed it was only there to steer scheduling, like node affinity, and not that anything else keyed off the field.

nodeSelector:
  kubernetes.io/os: windows
  kubernetes.io/arch: amd64

Once we added this to the pod spec section of the deployment YAML, the mutating webhook was triggered for the pods and added the vpc.amazonaws.com/PrivateIPv4Address annotation, and each pod was able to get an IP address successfully.
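For reference, here is roughly where the selector lands in a Deployment manifest. This is a minimal sketch, borrowing the windows-test name from the error above; the image is just an illustrative Windows container:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: windows-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: windows-test
  template:
    metadata:
      labels:
        app: windows-test
    spec:
      # This nodeSelector is what triggers the control plane webhook
      # that attaches the vpc.amazonaws.com/PrivateIPv4Address annotation.
      nodeSelector:
        kubernetes.io/os: windows
        kubernetes.io/arch: amd64
      containers:
        - name: windows-test
          image: mcr.microsoft.com/windows/servercore/iis # illustrative image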

Birth of a Liberal

Abstract

I just want to begin by expressing my gratitude for this country where I live, the United States of America. It really is a land of immense freedom and opportunity. I love the Constitution and feel that it really is an inspired document. I consider myself somewhere in the middle of the political spectrum. I do not like extremist views in either direction, conservative or liberal. The following are just my personal viewpoints and opinions on how liberals and conservatives are the same people, just separated by time. Liberal views on the political spectrum are simply the natural evolution of the places where people live as those places grow.

How to become liberal

The Wild West

In 1803 the United States made one of the largest land purchases in history, known as "The Louisiana Purchase." This purchase covered what are now the midwestern states such as Iowa, South Dakota, Kansas, Oklahoma, and of course Louisiana. In the decades after this purchase the rest of the West was added: California through Utah came from the Mexican Cession in 1848, the Oregon and Washington territories were acquired in 1846, and Texas joined the union in 1845. The United States had a plethora of land that needed people to fill it, and thus the Homestead Act was signed in 1862 by President Lincoln. This act allowed people to claim up to 160 acres of unclaimed land for free. The only requirements were that the owners had to live on the land, improve it, and pay the registration fees. The Homestead Act caused many people to move west with the dream of owning their own land for free.

Try to imagine living during this time and deciding to travel west to take advantage of the free land. You sell everything you have, which isn't much by today's standards. You buy a wagon that is pulled by either horses or oxen. You load the wagon with food, seeds for growing future food, tools, a gun with some ammunition, and maybe a few changes of clothes. Early in the spring you take your family and begin walking from one of the eastern states, heading west. The average distance the pioneers would travel in one day was 15 miles, which means covering the distance west takes about six months. There is an old game called The Oregon Trail that captures what life was like as a pioneer heading west.

After six months of traveling, you find a location that you can call home. There is a small town 15 miles away where you register your land. You have no house, there is no hospital, and there are no neighbors. Just your small family and the local wildlife for company. There is much work to be done, as winter is coming. You need to build a cabin or house to protect your family. You need to dig a well so you can draw fresh water. After winter comes and goes you need to get a harvest started, which means plowing the fields and preparing them to be farmed. There is a near endless amount of work to be done.

Living on 160 acres by yourself means you are the only rule and law around. With the culture at that time it was perfectly acceptable to shoot and kill any intruder or stranger that came onto your property. Remember, you are the only person living out there. You are the law. Whatever you say becomes the law. You have no neighbors to worry about. You, in essence, have the freedom to do anything you want.

The Hamlet or Township

Staying with this same scenario, let's advance forward a few years. You have had several children that are now married and want to live near you. Next to your house you divide off sections of land for each to build a house. Your children then have other relatives that visit and want to move close to them. More houses are built. Now you have a small township with ten to thirty homes. You are no longer free to do everything the same as you once did. The more people that come together and try to live together, the more freedoms need to be surrendered for everyone to live peacefully. You cannot just kill anyone that comes to your house. You cannot cause a disturbance of the peace in the middle of the night. Everyone in this small township must agree on certain rules to follow in order to have peace, and they pick one person as a sheriff to maintain this law. To pay for the sheriff and various other shared public services, such as roads, taxes now have to be levied. They aren't huge, and the people are only taxed when something needs to be done.

Even though you have given up a little bit of your ultimate freedom, life is good because you no longer must wear so many hats. One of the women is a medicine woman with a gift for healing, another resident is a baker, and each person finds unique tasks to specialize in so the whole community can share talent and resources.

Town, City, and Metropolis

As time progresses, more and more people call this place home. There is an official town name given. The more people that come to live together, the more municipal services need to be created, such as schools, parks, and public transportation. As any community grows in population, there are more people that have to decide how to live peacefully together. This is no small task. With every step of growth, more freedoms must be surrendered. Traffic lights, for example, keep people from speeding through intersections without stopping, but they give order so that everyone can share the road peacefully.

The difference in ideology between small communities and large cities can be seen today. Most of rural America has a fairly conservative base, while the large metropolitan cities have a more liberal view. Large cities such as New York City or San Francisco have different problems to solve than small towns like Andover, South Dakota. Once upon a time San Francisco was a small farming community much like that small town in South Dakota, but as the city grew, the problems it faced changed and grew with it.

The next time you become upset because someone's viewpoint is too liberal or too conservative, remember that this does not mean either you or the other person is right or wrong. The difference is the time and growth of where you both live or grew up. Eventually the Earth will be just like Coruscant in Star Wars, a planet that is one giant city. When this happens everyone will be relatively liberal, because we will all be facing similar problems. At the end of the day we are more alike than we give ourselves credit for. I believe that if a liberal person were transplanted into rural America, he or she would eventually become more conservative, because the set of problems faced has changed. The same is true for a conservative person who moves into a metropolitan megacity. Unfortunately, there are not many one-size-fits-all solutions for political issues. This is why I generally believe that states or local governments should be left to make the decisions that are best for their residents.


Java New Relic Agent Increases Memory Footprint

Overview

We have recently been upgrading our Java stacks to the latest New Relic agent (versions 6 and 7) to incorporate distributed tracing into our environments. In testing the various versions of New Relic agent 6 we found that our stacks all took a performance hit: memory usage went up and latency increased, putting us in danger of missing our SLAs.
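For context, distributed tracing is switched on through the agent's newrelic.yml. This is a minimal sketch, not our full config, and the app name is illustrative:

common: &default_settings
  app_name: my-java-stack # illustrative name
  distributed_tracing:
    enabled: true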

Does the New Relic agent increase memory usage?

In this stack we rolled back to version 5 of the New Relic Java agent. The graph of the MySQL database connection measurements shows the difference: on the left is version 6.1 and on the right is version 5.3. There is a clear drop in memory usage when we use the older version of the Java agent.

Version 7 Analysis

We are currently testing various versions of the New Relic Java agent 7 release. We are still seeing an increased memory footprint, and many of our stacks are experiencing Java garbage collection problems. This causes the stacks to periodically lock up and slow down response times, triggering alerts. The issues usually clean themselves up over a period of 30 minutes, but this creates far too many false alarms in our alerting process. Reaching out to New Relic support hasn't been very helpful either. Just be cautious with any upgrade of the New Relic agent past major version 5.

HSEARCH400080: Unable to detect the Elasticsearch version running on the cluster

Description

I recently decided to use the Elasticsearch and Hibernate bindings in Java. Hibernate is already a very good ORM and has come a long way in the last several years, but the integration with Elasticsearch and Lucene makes it an amazing combination. It cuts down on the maintenance of keeping an Elasticsearch index up to date and of maintaining multiple classes for search providers. In my test project I decided to run the services using docker-compose. I ended up getting the following error, which wasn't very descriptive:

default] Unable to build Hibernate SessionFactory; nested exception is org.hibernate.search.util.common.SearchException: HSEARCH000520: Hibernate Search encountered failures during bootstrap. Failures:

    default backend: 
        failures: 
          - HSEARCH400080: Unable to detect the Elasticsearch version running on the cluster: HSEARCH400007: Elasticsearch request failed: Connection refused
Request: GET  with parameters {}
Response: (no response)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1786) ~[spring-beans-5.3.6.jar:5.3.6]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:602) ~[spring-beans-5.3.6.jar:5.3.6]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:524) ~[spring-beans-5.3.6.jar:5.3.6]
    at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335) ~[spring-beans-5.3.6.jar:5.3.6]
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-5.3.6.jar:5.3.6]
    at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333) ~[spring-beans-5.3.6.jar:5.3.6]
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208) ~[spring-beans-5.3.6.jar:5.3.6]
    at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1154) ~[spring-context-5.3.6.jar:5.3.6]
    at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:908) ~[spring-context-5.3.6.jar:5.3.6]
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:583) ~[spring-context-5.3.6.jar:5.3.6]
    at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:144) ~[spring-boot-2.4.5.jar:2.4.5]
    at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:782) [spring-boot-2.4.5.jar:2.4.5]
    at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:774) [spring-boot-2.4.5.jar:2.4.5]
    at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:439) [spring-boot-2.4.5.jar:2.4.5]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:339) [spring-boot-2.4.5.jar:2.4.5]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1340) [spring-boot-2.4.5.jar:2.4.5]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1329) [spring-boot-2.4.5.jar:2.4.5]
    at com.recisphere.ServerKt.main(Server.kt:54) [main/:na]
Caused by: javax.persistence.PersistenceException: [PersistenceUnit: default] Unable to build Hibernate SessionFactory; nested exception is org.hibernate.search.util.common.SearchException: HSEARCH000520: Hibernate Search encountered failures during bootstrap. Failures:

    default backend: 
        failures: 
          - HSEARCH400080: Unable to detect the Elasticsearch version running on the cluster: HSEARCH400007: Elasticsearch request failed: Connection refused
Request: GET  with parameters {}
Response: (no response)
    at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.buildNativeEntityManagerFactory(AbstractEntityManagerFactoryBean.java:421) ~[spring-orm-5.3.6.jar:5.3.6]
    at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.afterPropertiesSet(AbstractEntityManagerFactoryBean.java:396) ~[spring-orm-5.3.6.jar:5.3.6]
    at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.afterPropertiesSet(LocalContainerEntityManagerFactoryBean.java:341) ~[spring-orm-5.3.6.jar:5.3.6]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1845) ~[spring-beans-5.3.6.jar:5.3.6]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1782) ~[spring-beans-5.3.6.jar:5.3.6]
    ... 17 common frames omitted
Caused by: org.hibernate.search.util.common.SearchException: HSEARCH000520: Hibernate Search encountered failures during bootstrap. Failures:

    default backend: 
        failures: 
          - HSEARCH400080: Unable to detect the Elasticsearch version running on the cluster: HSEARCH400007: Elasticsearch request failed: Connection refused
Request: GET  with parameters {}
Response: (no response)
    at org.hibernate.search.engine.reporting.spi.RootFailureCollector.checkNoFailure(RootFailureCollector.java:50) ~[hibernate-search-engine-6.0.5.Final.jar:6.0.5.Final]
    at org.hibernate.search.engine.common.impl.SearchIntegrationBuilderImpl.prepareBuild(SearchIntegrationBuilderImpl.java:243) ~[hibernate-search-engine-6.0.5.Final.jar:6.0.5.Final]
    at org.hibernate.search.mapper.orm.bootstrap.impl.HibernateOrmIntegrationBooterImpl.doBootFirstPhase(HibernateOrmIntegrationBooterImpl.java:259) ~[hibernate-search-mapper-orm-6.0.5.Final.jar:6.0.5.Final]
    at org.hibernate.search.mapper.orm.bootstrap.spi.HibernateOrmIntegrationBooterBehavior.bootFirstPhase(HibernateOrmIntegrationBooterBehavior.java:17) ~[hibernate-search-mapper-orm-6.0.5.Final.jar:6.0.5.Final]
    at org.hibernate.search.mapper.orm.bootstrap.impl.HibernateOrmIntegrationBooterImpl.lambda$bootNow$7(HibernateOrmIntegrationBooterImpl.java:218) ~[hibernate-search-mapper-orm-6.0.5.Final.jar:6.0.5.Final]
    at java.util.Optional.orElseGet(Optional.java:267) ~[na:1.8.0_201]
    at org.hibernate.search.mapper.orm.bootstrap.impl.HibernateOrmIntegrationBooterImpl.bootNow(HibernateOrmIntegrationBooterImpl.java:218) ~[hibernate-search-mapper-orm-6.0.5.Final.jar:6.0.5.Final]
    at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602) ~[na:1.8.0_201]
    at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577) ~[na:1.8.0_201]
    at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[na:1.8.0_201]
    at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962) ~[na:1.8.0_201]
    at org.hibernate.search.mapper.orm.bootstrap.impl.HibernateSearchSessionFactoryObserver.sessionFactoryCreated(HibernateSearchSessionFactoryObserver.java:41) ~[hibernate-search-mapper-orm-6.0.5.Final.jar:6.0.5.Final]
    at org.hibernate.internal.SessionFactoryObserverChain.sessionFactoryCreated(SessionFactoryObserverChain.java:35) ~[hibernate-core-5.4.32.Final.jar:5.4.32.Final]
    at org.hibernate.internal.SessionFactoryImpl.<init>(SessionFactoryImpl.java:385) ~[hibernate-core-5.4.32.Final.jar:5.4.32.Final]
    at org.hibernate.boot.internal.SessionFactoryBuilderImpl.build(SessionFactoryBuilderImpl.java:468) ~[hibernate-core-5.4.32.Final.jar:5.4.32.Final]
    at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build(EntityManagerFactoryBuilderImpl.java:1259) ~[hibernate-core-5.4.32.Final.jar:5.4.32.Final]
    at org.springframework.orm.jpa.vendor.SpringHibernateJpaPersistenceProvider.createContainerEntityManagerFactory(SpringHibernateJpaPersistenceProvider.java:58) ~[spring-orm-5.3.6.jar:5.3.6]
    at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:365) ~[spring-orm-5.3.6.jar:5.3.6]
    at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.buildNativeEntityManagerFactory(AbstractEntityManagerFactoryBean.java:409) ~[spring-orm-5.3.6.jar:5.3.6]
    ... 21 common frames omitted


Process finished with exit code 1

Solution

After doing some digging I realized the error meant Hibernate Search was having a problem connecting to the Elasticsearch container. I manually tested the container and everything seemed fine. The problem was that in the Spring config file I had set the host to localhost instead of localhost:9200 with the port. I updated the application.properties/application.yaml file to include the right port and voila, I was able to connect.

spring:
  datasource:
    url: jdbc:mysql://localhost:3306/recisphere?autoReconnect=true&useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true&useSSL=false
    username: test
    password: test
  jpa:
    generate-ddl: true
    properties:
      hibernate:
        dialect: org.hibernate.dialect.MySQL5InnoDBDialect
        ddl-auto: create
        search:
          backend:
            hosts: localhost:9200
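If you prefer the application.properties form, the same fix is a single line:

spring.jpa.properties.hibernate.search.backend.hosts=localhost:9200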


Move from Quay to AWS ECR Automation

Why the move

We recently decided to move all of our Docker repositories from a private registry called Quay to ECR. Quay uses robot tokens to authenticate to the registry for pushing images. We have found it more cost effective to just use ECR instead of paying for and maintaining the Quay registry.

Automate Jenkins

All of our builds have a Docker step in the pipeline that builds the Docker images after we run a comprehensive set of tests. Here is the bash script we use to automate the push to ECR. It creates the container repository automatically if it doesn't exist and then pushes the image to it. Because Elastic Container Registry uses IAM roles, we also use amazon-ecr-credential-helper to manage the fact that we have multiple AWS accounts. We use kustomize and grab the namespace from it to name the Docker repository. We also need to update the ECR policy because other accounts need access to the image, and the easiest way is to just allow the other accounts access.

namespace="$(kubectl kustomize k8/base/ | grep namespace | head -1 | awk 'NF>1{print $NF}')"
#aws --region us-east-1 ecr describe-repositories --repository-names ${namespace} || aws --region us-east-1 ecr create-repository --repository-name ${namespace} --image-tag-mutability IMMUTABLE --tags Key=StackId,Value=${namespace} --image-scanning-configuration scanOnPush=true
aws --region us-east-1 ecr describe-repositories --repository-names ${namespace} || aws --region us-east-1 ecr create-repository --repository-name ${namespace} --image-tag-mutability MUTABLE --tags Key=StackId,Value=${namespace} --image-scanning-configuration scanOnPush=true
REGISTRY_PATH="${REGISTRY_BASE}/${namespace}"


####
# SET PERMISSIONS
####

cat <<'EOF' > policy.json
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyWithinAncestry",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:BatchGetImage",
        "ecr:GetAuthorizationToken",
        "ecr:GetDownloadUrlForLayer"
      ]
    },
    {
      "Sid": "AllowCrossAccount",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "ecr:*"
    }
  ]
}
EOF
aws --region us-east-1 ecr set-repository-policy --repository-name ${namespace} --policy-text file://policy.json


echo "Building Docker Image"
echo ${REGISTRY_PATH}:${buildNumber}

#This builds the container and tags it locally
#aws --region us-east-1 ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 310865762107.dkr.ecr.us-east-1.amazonaws.com
docker build --pull -t ${REGISTRY_PATH}:${buildNumber} .

echo "Publishing now"

echo "Publishing since running on the build machine."

#If PUBLISH has been set we assume we are on the build machine and will also push the image to the registry.
docker push ${REGISTRY_PATH}:${buildNumber}
echo "Push latest"

# tag/push latest
#This is not needed as we now use aws credentials manager
#aws --region us-east-1 ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 878616621923.dkr.ecr.us-east-1.amazonaws.com
docker tag ${REGISTRY_PATH}:${buildNumber} ${REGISTRY_PATH}:latest
docker push ${REGISTRY_PATH}:latest
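For completeness, amazon-ecr-credential-helper is wired in through the Docker client config rather than the script itself. A minimal ~/.docker/config.json that routes all registry logins through the helper looks like this; you can also scope it to individual registries with a credHelpers map keyed by registry hostname:

{
  "credsStore": "ecr-login"
}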


How to Set Master Nodes in Elasticsearch

Problem

In order to horizontally scale an Elasticsearch cluster in the cloud, we need to make sure we don't remove the master nodes when we scale the cluster down during times of lower usage. We do this in Amazon Web Services by having two groups of compute instances. The first group is equal in size to the number of shards in the cluster. We set a unique name on these instances so they can be easily found in AWS. We treat these as masters and never scale them down. Preserving the master nodes helps prevent data loss. Unfortunately Elasticsearch doesn't give us a simple way to pin which instances act as masters, and it doesn't guarantee which nodes in the cluster are the master nodes.

Solution

We need to force a set of shards onto the instances of our choice. The following is a script that queries AWS based on a tag and then uses the reroute (reallocation) endpoint in Elasticsearch to swap shards.

fun forceUniqueness(ips: List<String>, numberShards: Int) {
    // If every instance already holds a unique shard there is nothing to do.
    if (validateUniqueShards(ips)) {
        return
    }

    val fullMap = getShardMap()   // ip -> shard number currently hosted
    val fullList = getShardList() // raw shard listing from the cluster
    val duplicateMap = mutableMapOf<String, Int>()
    val duplicatedShards = mutableSetOf<Int>()
    val missingShards = mutableSetOf<Int>()
    val listMoveTo = mutableListOf<String>()

    val seenShards = mutableSetOf<Int>()

    // Find which shards are duplicated across our tagged instances.
    for (ip in ips) {
        val shard = fullMap[ip] ?: continue
        if (seenShards.contains(shard)) {
            duplicateMap[ip] = shard
            duplicatedShards.add(shard)
            listMoveTo.add(ip)
        } else {
            seenShards.add(shard)
        }
    }

    // Find which shards are missing from the tagged instances.
    for (i in 0 until numberShards) {
        if (!seenShards.contains(i)) {
            missingShards.add(i)
        }
    }

    println("FoundShards")
    println(seenShards)
    println("MissingShards")
    println(missingShards)
    println("DuplicateMap")
    println(duplicateMap)

    // For each missing shard, find a replica elsewhere and swap it with a duplicate.
    val tripleList = mutableListOf<Triple<String, String, Int>>()
    for (shard in missingShards) {
        for (json in fullList) {
            // Skip ips excluded because of multiple on-demand data integrity lines.
            if (ignoreIps.contains(json.getString("ip"))) {
                continue
            }
            if (json.getInt("shard") == shard && json.getString("prirep") == "r") {
                val moveFrom = json.getString("node")
                val moveTo = getNodeName(listMoveTo.removeAt(0))
                tripleList.add(Triple(moveFrom, moveTo, shard))

                // Move the duplicated shard the other way so the two nodes swap.
                val tmpShard = getShardByNode(moveTo)
                tripleList.add(Triple(moveTo, moveFrom, tmpShard))
                break
            }
        }
    }
    moveShards(tripleList)
}
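The helper functions above (getShardMap, getShardList, getNodeName, getShardByNode) are omitted here. moveShards itself is a thin wrapper around Elasticsearch's _cluster/reroute endpoint. A minimal sketch of that call, assuming a single index name and a locally reachable cluster (both are illustrative assumptions, not our production values), might look like:

import java.net.HttpURLConnection
import java.net.URL

fun moveShards(moves: List<Triple<String, String, Int>>, indexName: String = "myindex") {
    // Build one reroute request with a move command per (fromNode, toNode, shard) triple.
    val commands = moves.joinToString(",") { (fromNode, toNode, shard) ->
        """{"move":{"index":"$indexName","shard":$shard,"from_node":"$fromNode","to_node":"$toNode"}}"""
    }
    val body = """{"commands":[$commands]}"""

    val conn = URL("http://localhost:9200/_cluster/reroute").openConnection() as HttpURLConnection
    conn.requestMethod = "POST"
    conn.setRequestProperty("Content-Type", "application/json")
    conn.doOutput = true
    conn.outputStream.use { it.write(body.toByteArray(Charsets.UTF_8)) }

    // A 200 response means the cluster accepted the relocation commands.
    println("Reroute response: ${conn.responseCode}")
    conn.inputStream.use { println(it.bufferedReader().readText()) }
}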


Automatically Terraform Elasticsearch Cluster in AWS

Terraform Elasticsearch in Amazon Web Services

The Elasticsearch cluster needs to know the seed IP addresses, which makes it a little trickier to do in Terraform. We actually need two separate EC2 declarations; the second batch grabs the first batch's IP addresses. The userdata automatically installs and sets everything up. This also allows us to dynamically change the size of the cluster if we need to scale up or down depending on the job. Any new EC2 instances that come up will automatically join the cluster. I pin the version of Elasticsearch to make sure new virtual machines don't automatically grab an incompatible version.

Files

ec2.tf

variable "ami_id_arm64" {
  description = "Amazon Linux 2 ARM64 AMI"
  default = "ami-07acebf185d439fa0"
}

#arm64 sizes
variable "instance_type_arm64" {
  type = "map"
  default = {
    L3 = "a1.large"
    L2 = "a1.4xlarge"
    L1 = "a1.4xlarge"
  }
}

variable "instance_type_arm64_metal" {
  type = "map"
  default = {
    L3 = "a1.large"
    L2 = "a1.metal"
    L1 = "a1.metal"
  }
}

variable "instance_count_reserved_arm64" {
  type = "map"
  default = {
    L3 = "2"
    L2 = "2"
    L1 = "20"
  }
}

variable "instance_count_reserved_2_arm64" {
  type = "map"
  default = {
    L3 = "0"
    L2 = "0"
    L1 = "160"
  }
}

variable "instance_count_reserved_failover_arm64" {
  type = "map"
  default = {
    L3 = "0"
    L2 = "0"
    L1 = "0"
  }
}

variable "instance_count_spot_arm64" {
  type = "map"
  default = {
    L3 = "0"
    L2 = "0"
    L1 = "0"
  }
}

data "template_cloudinit_config" "ondemand"  {
  gzip          = true
  base64_encode = true

  part {
    content      = "${file("userdata_arm64.yml")}"
  }

  part {
    merge_type    = "list(append)+dict(recurse_array)+str()"
    content_type = "text/cloud-config"
    content = <<EOF
#cloud-config
---
write_files:
- path: /etc/elasticsearch/elasticsearch.yml
  permissions: 0660
  content: |
    cluster.name: "BPI"
    network.host: 0.0.0.0
    xpack.ml.enabled: false
    xpack.monitoring.enabled: false
    bootstrap.memory_lock: true
    path.data: /var/lib/elasticsearch
    path.logs: /var/log/elasticsearch
    discovery.zen.minimum_master_nodes: 1
    http.cors.enabled: true
    http.cors.allow-origin: /https?://localhost(:[0-9]+)?/
    #cluster.routing.allocation.total_shards_per_node: 1
EOF
  }
}


data "template_cloudinit_config" "spot"  {
  gzip          = true
  base64_encode = true

  part {
    content      = "${file("userdata_arm64.yml")}"
  }

  part {
    merge_type    = "list(append)+dict(recurse_array)+str()"
    content_type = "text/cloud-config"
    content = <<EOF
#cloud-config
---
write_files:
  - path: /etc/elasticsearch/elasticsearch.yml
    permissions: 0660
    content: |
      cluster.name: "BPI"
      network.host: 0.0.0.0
      xpack.ml.enabled: false
      xpack.monitoring.enabled: false
      bootstrap.memory_lock: true
      path.data: /var/lib/elasticsearch
      path.logs: /var/log/elasticsearch
      discovery.zen.minimum_master_nodes: 1
      #cluster.routing.allocation.total_shards_per_node: 1
      http.cors.enabled: true
      http.cors.allow-origin: /https?://localhost(:[0-9]+)?/
      discovery.zen.ping.unicast.hosts: ["${element(aws_instance.bgt-bpi-arm64.*.private_ip, 0)}:9300", "${element(aws_instance.bgt-bpi-arm64.*.private_ip, 1)}:9300", "${element(aws_instance.bgt-bpi-arm64.*.private_ip, 2)}:9300", "${element(aws_instance.bgt-bpi-arm64.*.private_ip, 3)}:9300"]

EOF
  }
}

resource "aws_instance" "bgt-bpi-arm64" {
  ami = "${var.ami_id_arm64}"
  instance_type = "${var.instance_type_arm64["${data.terraform_remote_state.global.aws_environment}"]}"
  iam_instance_profile    = "${lower(var.stack_id)}"

  subnet_id               = "${data.terraform_remote_state.global.default_vpc_server_subnet_ids_list[1]}"
  user_data               = "${data.template_cloudinit_config.ondemand.rendered}"
  vpc_security_group_ids  = ["${data.terraform_remote_state.global.base_security_group_ids_default_vpc_list[0]}"]

  count                   = "${var.instance_count_reserved_arm64["${data.terraform_remote_state.global.aws_environment}"]}"
  placement_group         = "${aws_placement_group.bgt_bpi_arm64_pg.name}"
  ebs_optimized           = "${var.ebs_optimized["${data.terraform_remote_state.global.aws_environment}"]}"

  lifecycle {
    ignore_changes = ["user_data", "ami", "ebs_optimized"]
  }

  root_block_device {
    volume_type           = "gp2"
    volume_size           = "165"
    delete_on_termination = true
  }

  tags {
    Name                = "bgt-bpi-arm64-p1-z1"
    Team                = "Bigtree Services"
    Tool                = "Terraform"
    StackId             = "${var.stack_id}"
    Deploy              = "arm64"
  }
}

resource "aws_elb" "elb" {
  name               = "bgt-bpi-elb"
  subnets = ["${data.terraform_remote_state.global.default_vpc_server_subnet_ids_list}"]
  security_groups = ["${data.terraform_remote_state.global.base_security_group_ids_default_vpc_list[0]}"]
  internal = true

  listener {
    instance_port = 9200
    instance_protocol = "http"
    lb_port = 80
    lb_protocol = "http"
  }

  health_check {
    healthy_threshold = 2
    interval = 10
    target = "http:9200/"
    timeout = 5
    unhealthy_threshold = 3
  }

  tags {
    Name                = "bgt-bpi-elb"
    Team                = "Bigtree Services"
    Tool                = "Terraform"
    StackId             = "${var.stack_id}"
    Deploy              = "arm64"
  }
}

resource "aws_elb_attachment" "elb" {
  count = "${var.instance_count_reserved_arm64["${data.terraform_remote_state.global.aws_environment}"]}"
  elb = "${aws_elb.elb.id}"
  instance = "${element(aws_instance.bgt-bpi-arm64.*.id, count.index)}"
}




resource "aws_instance" "bgt-bpi-arm64-part2" {
  ami = "${var.ami_id_arm64}"
  instance_type = "${var.instance_type_arm64["${data.terraform_remote_state.global.aws_environment}"]}"
  iam_instance_profile    = "${lower(var.stack_id)}"

  subnet_id               = "${data.terraform_remote_state.global.default_vpc_server_subnet_ids_list[1]}"
  user_data               = "${data.template_cloudinit_config.spot.rendered}"
  vpc_security_group_ids  = ["${data.terraform_remote_state.global.base_security_group_ids_default_vpc_list[0]}"]

  count                   = "${var.instance_count_reserved_2_arm64["${data.terraform_remote_state.global.aws_environment}"]}"
  ebs_optimized           = "${var.ebs_optimized["${data.terraform_remote_state.global.aws_environment}"]}"

  lifecycle {
    ignore_changes = ["user_data", "ami", "ebs_optimized", "placement_group"]
  }

  root_block_device {
    volume_type           = "gp2"
    volume_size           = "165"
    delete_on_termination = true
  }

  tags {
    Name                = "bgt-bpi-arm64-a2-z1"
    Team                = "Bigtree Services"
    Tool                = "Terraform"
    StackId             = "${var.stack_id}"
    Deploy              = "arm64"
  }
}

resource "aws_elb_attachment" "elb-part2" {
  count = "${var.instance_count_reserved_2_arm64["${data.terraform_remote_state.global.aws_environment}"]}"
  elb = "${aws_elb.elb.id}"
  instance = "${element(aws_instance.bgt-bpi-arm64-part2.*.id, count.index)}"
}

In this example L3/L2/L1 are the same as having a dev/stage/prod environment. 

userdata_arm64.yml

#cloud-config
ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCTSRtWzW/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
yum_repos:
  elasticsearch.repo:
    name: Elasticsearch repository for 6.x packages
    baseurl: https://artifacts.elastic.co/packages/6.x/yum
    gpgcheck: 1
    gpgkey: https://artifacts.elastic.co/GPG-KEY-elasticsearch
    enabled: 1
    autorefresh: 1
    type: rpm-md

package_upgrade: true
packages:
- vim
- htop
- wget
- gcc

write_files:
- path: /etc/systemd/timesyncd.conf
  permissions: 0644
  owner: root
  content: |
    [Time]
    NTP=0.amazon.pool.ntp.org 1.amazon.pool.ntp.org 2.amazon.pool.ntp.org 3.amazon.pool.ntp.org

- path: /etc/sysctl.d/net.ipv4.neigh.default.conf
  content: |
    net.ipv4.neigh.default.gc_thresh1=4096
    net.ipv4.neigh.default.gc_thresh2=8192
    net.ipv4.neigh.default.gc_thresh3=16384

- path: /etc/sysctl.d/fs.inotify.max_user_instances.conf
  content: |
    fs.inotify.max_user_instances=4096

- path: /etc/sysctl.d/net.conf
  content: |
    net.core.somaxconn = 1000
    net.core.netdev_max_backlog = 5000
    net.core.rmem_default = 524280
    net.core.wmem_default = 524280
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    net.ipv4.udp_rmem_min = 10240
    net.nf_conntrack_max = 1048576

- path: /etc/sysctl.d/vm.conf
  content: |
    vm.max_map_count=262144

- path: /etc/security/limits.conf
  content: |
    * soft memlock unlimited
    * hard memlock unlimited
    *  -  nofile  65536

- path: /etc/systemd/system/elasticsearch.service.d/override.conf
  permissions: 0666
  content: |
    [Service]
    LimitMEMLOCK=infinity

- path: /etc/elasticsearch/jvm.options
  permissions: 0666
  content: |
    ## JVM configuration
    -Xms20g
    -Xmx20g
    -XX:+UseConcMarkSweepGC
    -XX:CMSInitiatingOccupancyFraction=75
    -XX:+UseCMSInitiatingOccupancyOnly
    -Des.networkaddress.cache.ttl=60
    -Des.networkaddress.cache.negative.ttl=10
    -XX:+AlwaysPreTouch
    -Xss1m
    -Djava.awt.headless=true
    -Dfile.encoding=UTF-8
    -Djna.nosys=true
    -XX:-OmitStackTraceInFastThrow
    -Dio.netty.noUnsafe=true
    -Dio.netty.noKeySetOptimization=true
    -Dio.netty.recycler.maxCapacityPerThread=0
    -Dlog4j.shutdownHookEnabled=false
    -Dlog4j2.disable.jmx=true
    -Djava.io.tmpdir=${ES_TMPDIR}
    -XX:+HeapDumpOnOutOfMemoryError
    -XX:HeapDumpPath=/var/lib/elasticsearch
    -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log
    8:-XX:+PrintGCDetails
    8:-XX:+PrintGCDateStamps
    8:-XX:+PrintTenuringDistribution
    8:-XX:+PrintGCApplicationStoppedTime
    8:-Xloggc:/var/log/elasticsearch/gc.log
    8:-XX:+UseGCLogFileRotation
    8:-XX:NumberOfGCLogFiles=32
    8:-XX:GCLogFileSize=64m
    9-:-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m
    9-:-Djava.locale.providers=COMPAT
    10-:-XX:UseAVX=2


runcmd:
- [ amazon-linux-extras, install, corretto8, -y ]
- [ yum, install, elasticsearch-6.4.3-1, -y ]
- [ systemctl, daemon-reload ]
- [ systemctl, enable, elasticsearch.service ]
- [ systemctl, start, elasticsearch.service ]


Free Cellphone For Life

Over the last few months I have been working on understanding what it means to invest. My ultimate goal is to be able to retire in about 10 years or so (right now I am 30). To break this goal into bite-sized chunks, I have decided to try to replace my monthly recurring bills with income from assets rather than from my paycheck. A good explanation of this is in one of my most loved books, Rich Dad Poor Dad, which explains the basic concept of how this cash flow works.

I want to first replace my cell phone bill. Currently I spend about $200/month on a cell plan plus various cell phones. This means I need passive income that generates at least $2,400/year. I don't think I will ever be without a cell phone, so why not invest in a cell phone company, like AT&T? They currently pay about a 7% dividend per year. This is how the equation works out.

Total Amount Needed to Invest × 0.07 = $2,400. This means I need to invest $2,400 / 0.07, or roughly $34,300 total.

The next question I have is how many years of phone bills it takes to equal that $34,300. At $200/month ($2,400/year) this comes to a little over 14 years. I just need to prepay my phone by about 14 years in order to have a free phone for the rest of my life.

FirstOrDefault C# Lambda to Java 8 Lambda

Description

While converting C# code to Java, there are several C# lambda helpers that were a little difficult to port. One of these was C#'s FirstOrDefault extension method. Java has an equivalent using streams: we can use findFirst, and if it doesn't find anything we can return null. This is in essence the same as the C# method: return the first match or null unless I specify otherwise. While the Java version is slightly more verbose, the functionality is the same.

// C#
if (birth == null) birth = person.Events.FirstOrDefault(e => e.Type.GetGeneralEventType() == EventType.Birth); // Use a christening, etc.

// Java
if (birth == null) birth = person.getEvents().stream().filter(event -> event.getType().getGeneralEventType() == EventType.Birth).findFirst().orElse(null); // Use a christening, etc.
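As a self-contained illustration of the stream version (the list and values here are made up for the demo):

import java.util.Arrays;
import java.util.List;

public class FirstOrDefaultDemo {
    public static void main(String[] args) {
        List<String> events = Arrays.asList("christening", "birth", "death");

        // findFirst() returns an Optional; orElse(null) mirrors C#'s FirstOrDefault.
        String birth = events.stream()
                .filter(e -> e.equals("birth"))
                .findFirst()
                .orElse(null);

        System.out.println(birth); // prints "birth"
    }
}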


C Sharp delegate function in Java

Porting from C# to Java – The Delegate

The delegate is kind of cool in C# land. It allows you to pass functions as parameters. Java has a handful of different ways you can do this. I personally like passing an object that implements Runnable and defining the run method in an anonymous inner class. The following is an example in C# code and how I ported it over to Java.

Example

C#

This first class calls Execute, passing it a series of delegate functions for another class to run.
[csharp]
new RetryPattern(Logger).Execute(delegate
{
    var routingKey = myEvent.GetRoutingKey();
    IBasicProperties properties = null;
    if (durable)
    {
        properties = bundle.Channel.CreateBasicProperties();
        properties.Persistent = true;
    }
    bundle.Channel.BasicPublish(ExchangeName, routingKey, Mandatory, properties,
        Encoding.UTF8.GetBytes(myEvent.GetBody()));
}, delegate
{
    ResetOnConnectionError();
    CreatePublisher();
});
[/csharp]

These functions are executed using an Action object.
[csharp]
public void Execute(Action action, Action throwHandler = null) {
    //Stuff
    action(); // Do work here
    //More stuff
}
[/csharp]

Java

The way I like to do this in Java is to pass an object that implements Runnable. The following two blocks do the same as the above delegate in C#.
This is the anonymous inner class.
[java]
new RetryPattern().execute(new Runnable() {
    @Override
    public void run() {
        String routingKey = myEvent.getRoutingKey();
        BasicProperties properties = null;
        if (durable) {
            properties = new AMQP.BasicProperties().builder()
                    .deliveryMode(2)
                    .build();
        }
        try {
            bundle.channel.basicPublish(exchangeName, routingKey, mandatory, properties, myEvent.getBody().getBytes("UTF-8"));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}, new Runnable() {
    @Override
    public void run() {
        resetOnConnectionError();
        createPublisher();
    }
});
[/java]

And this is how it is called elsewhere.
[java]
public void execute(Runnable action, Runnable throwHandler) {
    //Stuff
    action.run();
    //More Stuff
}
[/java]
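Since Runnable is a functional interface, Java 8 lambdas shrink the anonymous classes considerably. The same call from above could be written as:

[java]
new RetryPattern().execute(() -> {
    String routingKey = myEvent.getRoutingKey();
    BasicProperties properties = null;
    if (durable) {
        properties = new AMQP.BasicProperties().builder()
                .deliveryMode(2) // persistent
                .build();
    }
    try {
        bundle.channel.basicPublish(exchangeName, routingKey, mandatory, properties, myEvent.getBody().getBytes("UTF-8"));
    } catch (IOException e) {
        e.printStackTrace();
    }
}, () -> {
    resetOnConnectionError();
    createPublisher();
});
[/java]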