Java New Relic Agent Increases Memory Footprint

Overview

We have recently been upgrading our Java stacks to the latest New Relic agent (versions 6 and 7) to incorporate distributed tracing into our environments. In testing the various versions of New Relic agent 6, we found that all of our stacks took a performance hit: memory usage and latency both increased, putting us in danger of missing our SLAs.

Does the New Relic Agent Increase Memory Usage?

In this stack we rolled back to version 5 of the New Relic Java agent. The graph of the MySQL database connection measurements shows the difference: on the left is version 6.1 and on the right is version 5.3. There is a clear drop in memory usage with the older version of the agent.

Version 7 Analysis

We are currently testing various versions of the 7.x release of the New Relic Java agent. We are still seeing an increased memory footprint, and many of our stacks are experiencing Java garbage collection problems. This causes the stacks to periodically lock up and slow down response times, which triggers alerts. The issues usually clear themselves up within about 30 minutes, but this creates far too many false alarms in our alerting process. Reaching out to New Relic support hasn't been very helpful either. Be cautious with any New Relic agent upgrade past major version 5.

HSEARCH400080: Unable to detect the Elasticsearch version running on the cluster

Description

I recently decided to use the Elasticsearch bindings for Hibernate in Java. Hibernate is already a very good ORM and has come a long way in the last several years, but the integration with Elasticsearch and Lucene makes it an amazing combination. It cuts down on the maintenance of keeping an Elasticsearch index up to date and of maintaining multiple classes for search providers. In my test project I decided to run the services using docker-compose. I ended up getting the following error, which wasn't very descriptive:

default] Unable to build Hibernate SessionFactory; nested exception is org.hibernate.search.util.common.SearchException: HSEARCH000520: Hibernate Search encountered failures during bootstrap. Failures:

    default backend: 
        failures: 
          - HSEARCH400080: Unable to detect the Elasticsearch version running on the cluster: HSEARCH400007: Elasticsearch request failed: Connection refused
Request: GET  with parameters {}
Response: (no response)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1786) ~[spring-beans-5.3.6.jar:5.3.6]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:602) ~[spring-beans-5.3.6.jar:5.3.6]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:524) ~[spring-beans-5.3.6.jar:5.3.6]
    at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335) ~[spring-beans-5.3.6.jar:5.3.6]
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-5.3.6.jar:5.3.6]
    at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333) ~[spring-beans-5.3.6.jar:5.3.6]
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208) ~[spring-beans-5.3.6.jar:5.3.6]
    at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1154) ~[spring-context-5.3.6.jar:5.3.6]
    at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:908) ~[spring-context-5.3.6.jar:5.3.6]
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:583) ~[spring-context-5.3.6.jar:5.3.6]
    at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:144) ~[spring-boot-2.4.5.jar:2.4.5]
    at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:782) [spring-boot-2.4.5.jar:2.4.5]
    at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:774) [spring-boot-2.4.5.jar:2.4.5]
    at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:439) [spring-boot-2.4.5.jar:2.4.5]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:339) [spring-boot-2.4.5.jar:2.4.5]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1340) [spring-boot-2.4.5.jar:2.4.5]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1329) [spring-boot-2.4.5.jar:2.4.5]
    at com.recisphere.ServerKt.main(Server.kt:54) [main/:na]
Caused by: javax.persistence.PersistenceException: [PersistenceUnit: default] Unable to build Hibernate SessionFactory; nested exception is org.hibernate.search.util.common.SearchException: HSEARCH000520: Hibernate Search encountered failures during bootstrap. Failures:

    default backend: 
        failures: 
          - HSEARCH400080: Unable to detect the Elasticsearch version running on the cluster: HSEARCH400007: Elasticsearch request failed: Connection refused
Request: GET  with parameters {}
Response: (no response)
    at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.buildNativeEntityManagerFactory(AbstractEntityManagerFactoryBean.java:421) ~[spring-orm-5.3.6.jar:5.3.6]
    at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.afterPropertiesSet(AbstractEntityManagerFactoryBean.java:396) ~[spring-orm-5.3.6.jar:5.3.6]
    at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.afterPropertiesSet(LocalContainerEntityManagerFactoryBean.java:341) ~[spring-orm-5.3.6.jar:5.3.6]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1845) ~[spring-beans-5.3.6.jar:5.3.6]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1782) ~[spring-beans-5.3.6.jar:5.3.6]
    ... 17 common frames omitted
Caused by: org.hibernate.search.util.common.SearchException: HSEARCH000520: Hibernate Search encountered failures during bootstrap. Failures:

    default backend: 
        failures: 
          - HSEARCH400080: Unable to detect the Elasticsearch version running on the cluster: HSEARCH400007: Elasticsearch request failed: Connection refused
Request: GET  with parameters {}
Response: (no response)
    at org.hibernate.search.engine.reporting.spi.RootFailureCollector.checkNoFailure(RootFailureCollector.java:50) ~[hibernate-search-engine-6.0.5.Final.jar:6.0.5.Final]
    at org.hibernate.search.engine.common.impl.SearchIntegrationBuilderImpl.prepareBuild(SearchIntegrationBuilderImpl.java:243) ~[hibernate-search-engine-6.0.5.Final.jar:6.0.5.Final]
    at org.hibernate.search.mapper.orm.bootstrap.impl.HibernateOrmIntegrationBooterImpl.doBootFirstPhase(HibernateOrmIntegrationBooterImpl.java:259) ~[hibernate-search-mapper-orm-6.0.5.Final.jar:6.0.5.Final]
    at org.hibernate.search.mapper.orm.bootstrap.spi.HibernateOrmIntegrationBooterBehavior.bootFirstPhase(HibernateOrmIntegrationBooterBehavior.java:17) ~[hibernate-search-mapper-orm-6.0.5.Final.jar:6.0.5.Final]
    at org.hibernate.search.mapper.orm.bootstrap.impl.HibernateOrmIntegrationBooterImpl.lambda$bootNow$7(HibernateOrmIntegrationBooterImpl.java:218) ~[hibernate-search-mapper-orm-6.0.5.Final.jar:6.0.5.Final]
    at java.util.Optional.orElseGet(Optional.java:267) ~[na:1.8.0_201]
    at org.hibernate.search.mapper.orm.bootstrap.impl.HibernateOrmIntegrationBooterImpl.bootNow(HibernateOrmIntegrationBooterImpl.java:218) ~[hibernate-search-mapper-orm-6.0.5.Final.jar:6.0.5.Final]
    at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602) ~[na:1.8.0_201]
    at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577) ~[na:1.8.0_201]
    at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[na:1.8.0_201]
    at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962) ~[na:1.8.0_201]
    at org.hibernate.search.mapper.orm.bootstrap.impl.HibernateSearchSessionFactoryObserver.sessionFactoryCreated(HibernateSearchSessionFactoryObserver.java:41) ~[hibernate-search-mapper-orm-6.0.5.Final.jar:6.0.5.Final]
    at org.hibernate.internal.SessionFactoryObserverChain.sessionFactoryCreated(SessionFactoryObserverChain.java:35) ~[hibernate-core-5.4.32.Final.jar:5.4.32.Final]
    at org.hibernate.internal.SessionFactoryImpl.<init>(SessionFactoryImpl.java:385) ~[hibernate-core-5.4.32.Final.jar:5.4.32.Final]
    at org.hibernate.boot.internal.SessionFactoryBuilderImpl.build(SessionFactoryBuilderImpl.java:468) ~[hibernate-core-5.4.32.Final.jar:5.4.32.Final]
    at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build(EntityManagerFactoryBuilderImpl.java:1259) ~[hibernate-core-5.4.32.Final.jar:5.4.32.Final]
    at org.springframework.orm.jpa.vendor.SpringHibernateJpaPersistenceProvider.createContainerEntityManagerFactory(SpringHibernateJpaPersistenceProvider.java:58) ~[spring-orm-5.3.6.jar:5.3.6]
    at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:365) ~[spring-orm-5.3.6.jar:5.3.6]
    at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.buildNativeEntityManagerFactory(AbstractEntityManagerFactoryBean.java:409) ~[spring-orm-5.3.6.jar:5.3.6]
    ... 21 common frames omitted


Process finished with exit code 1

Solution

After some digging I realized that the error means Hibernate Search is having a problem connecting to the Elasticsearch container. I manually tested the container and everything seemed fine. The problem was in the Spring config file: I had set the host to localhost instead of localhost:9200 with the port. I updated the application.properties/application.yaml file to include the right port and voilà, I was able to connect.

spring:
  datasource:
    url: jdbc:mysql://localhost:3306/recisphere?autoReconnect=true&useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true&useSSL=false
    username: test
    password: test
  jpa:
    generate-ddl: true
    properties:
      hibernate:
        dialect: org.hibernate.dialect.MySQL5InnoDBDialect
        hbm2ddl.auto: create
        search:
          backend:
            hosts: localhost:9200
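
For reference, here is a minimal sketch of an entity indexed through these bindings. The Recipe class and its fields are hypothetical; the annotations shown are the standard Hibernate Search 6 mapping annotations:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import org.hibernate.search.mapper.pojo.mapping.definition.annotation.FullTextField;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.Indexed;

// Hypothetical entity: @Indexed mirrors it into the Elasticsearch backend,
// and @FullTextField marks a property for full-text search.
@Entity
@Indexed
public class Recipe {

    @Id
    @GeneratedValue
    private Long id;

    @FullTextField
    private String title;

    // getters and setters omitted
}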

 

Move from Quay to AWS ECR Automation

Why the move

We recently decided to move all of our Docker repositories from a private registry called Quay to AWS ECR. Quay uses robot tokens to authenticate against the registry in order to push images. We found it is more cost effective to use ECR than to pay for and maintain a Quay registry.

Automate Jenkins

All of our builds have a Docker step in the pipeline that builds the Docker images after we run a comprehensive set of tests. Below is the bash script we use to automate the push to ECR. It creates the container repository automatically if it doesn't exist and then pushes the changes to it. Because Elastic Container Registry uses IAM roles, we also use amazon-ecr-credential-helper to manage the fact that we have multiple AWS accounts. We use kustomize and grab the namespace from it to name the repository. We also need to update the ECR policy because other accounts need access to the images, and the easiest way is to simply allow those accounts access.

namespace="$(kubectl kustomize k8/base/ | grep namespace | head -1 | awk 'NF>1{print $NF}')"
#aws --region us-east-1 ecr describe-repositories --repository-names ${namespace} || aws --region us-east-1 ecr create-repository --repository-name ${namespace} --image-tag-mutability IMMUTABLE --tags Key=StackId,Value=${namespace} --image-scanning-configuration scanOnPush=true
aws --region us-east-1 ecr describe-repositories --repository-names ${namespace} || aws --region us-east-1 ecr create-repository --repository-name ${namespace} --image-tag-mutability MUTABLE --tags Key=StackId,Value=${namespace} --image-scanning-configuration scanOnPush=true
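# REGISTRY_BASE and buildNumber are assumed to be provided by the Jenkins build environment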
REGISTRY_PATH="${REGISTRY_BASE}/${namespace}"


####
# SET PERMISSIONS
####

cat <<'EOF' > policy.json
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyWithinAncestry",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:BatchGetImage",
        "ecr:GetAuthorizationToken",
        "ecr:GetDownloadUrlForLayer"
      ]
    },
    {
      "Sid": "AllowCrossAccount",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "ecr:*"
    }
  ]
}
EOF
aws --region us-east-1 ecr set-repository-policy --repository-name ${namespace} --policy-text file://policy.json


echo "Building Docker Image"
echo ${REGISTRY_PATH}:${buildNumber}

#This builds the container and tags it locally
#aws --region us-east-1 ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 310865762107.dkr.ecr.us-east-1.amazonaws.com
docker build --pull -t ${REGISTRY_PATH}:${buildNumber} .

echo "Publishing now"

echo "Publishing since running on the build machine."

#If PUBLISH has been set we assume we are on the build machine and will also push the image to the registry.
docker push ${REGISTRY_PATH}:${buildNumber}
echo "Push latest"

# tag/push latest
#This is not needed as we now use aws credentials manager
#aws --region us-east-1 ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 878616621923.dkr.ecr.us-east-1.amazonaws.com
docker tag ${REGISTRY_PATH}:${buildNumber} ${REGISTRY_PATH}:latest
docker push ${REGISTRY_PATH}:latest

 

How to Set Master Nodes in Elasticsearch

Problem

To horizontally scale an Elasticsearch cluster in the cloud, we need to make sure we don't remove the master nodes when we scale the cluster down during periods of lower usage. We do this in Amazon Web Services by having two groups of compute instances. The first group is equal in size to the number of shards in the cluster. We set a unique name on these instances so they can be easily found in AWS, treat them as masters, and never scale them down. Preserving the master nodes helps prevent data loss. Unfortunately, Elasticsearch doesn't let us designate master nodes directly, and it doesn't guarantee which nodes in the cluster are the masters.

Solution

We need to force a set of shards onto the instances of our choice. The following Kotlin function queries AWS based on a tag and then uses the reallocation endpoint in Elasticsearch to swap shards; a sketch of the underlying HTTP call follows the function.

fun forceUniqueness(ips: List<String>, numberShards: Int) {
        //check to see if we are unique
        //yes exit
        if (validateUniqueShards(ips)) {
            return
        } else {
            //no ->
            val fullMap = getShardMap()
            val fullList = getShardList()
            val duplicateMap = mutableMapOf<String, Int>()
            val duplicatedShards = mutableSetOf<Int>()
            val missingShards = mutableSetOf<Int>()
            val listMoveTo = mutableListOf<String>()

            val tmpSet = mutableSetOf<Int>()

            //which ones are duplicated
            for (ip in ips) {
                if (tmpSet.contains(fullMap.get(ip))) {
                    duplicateMap.put(ip, fullMap.getValue(ip))
                    duplicatedShards.add(fullMap.getValue(ip))
                    listMoveTo.add(ip)
                } else {
                    tmpSet.add(fullMap.get(ip)!!)
                }
            }

            //which ones are missing
            for (i in 0 until numberShards) {
                if (!tmpSet.contains(i)) {
                    missingShards.add(i)
//                    println(i)
                }
            }

            println("FoundShards")
            println(tmpSet)
            println("MissingShards")
            println(missingShards)
            println("DuplicateMap")
            println(duplicateMap)

            //find a replica of each missing shard and move it over
            val tripleList = mutableListOf<Triple<String, String, Int>>()
            for (shard in missingShards) {
                for (json in fullList) {
                    //skip ips in the ignore list because of multiple on demand data integrity lines
                    if (ignoreIps.contains(json.getString("ip"))) {
                        continue
                    }
                    if (json.getInt("shard") == shard && json.getString("prirep") == "r") {

                        val moveFrom = json.getString("node")
                        val moveTo = getNodeName(listMoveTo.removeAt(0))
                        tripleList.add(Triple(moveFrom, moveTo, shard))

                        //move the duplicated shard the other way to complete the swap
                        val tmpShard = getShardByNode(moveTo)
                        tripleList.add(Triple(moveTo, moveFrom, tmpShard))
                        break
                    }
                }
            }
            moveShards(tripleList)
        }
    }
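
For reference, the reallocation endpoint that moveShards() ultimately drives is Elasticsearch's _cluster/reroute API. Below is a rough self-contained Java sketch of a single move command; the index name "bpi", the plain-HTTP endpoint on port 9200, and the method name are assumptions for illustration, not details from our actual helper.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class RerouteSketch {

    // Issues one "move" command against the _cluster/reroute endpoint.
    // The index name and host are placeholders for illustration.
    static void moveShard(String host, String fromNode, String toNode, int shard) throws Exception {
        String body = "{\"commands\":[{\"move\":{"
                + "\"index\":\"bpi\",\"shard\":" + shard + ","
                + "\"from_node\":\"" + fromNode + "\",\"to_node\":\"" + toNode + "\"}}]}";
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://" + host + ":9200/_cluster/reroute").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("reroute returned HTTP " + conn.getResponseCode());
    }
}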

 

Automatically Terraform Elasticsearch Cluster in AWS

Terraform Elasticsearch in Amazon Web Services

The Elasticsearch cluster needs to know the seed IP addresses, which makes it a little trickier to do in Terraform. We actually need two separate EC2 declarations: the second batch grabs the first batch's IP addresses. The userdata automatically installs and sets up everything. This also lets us dynamically change the size of the cluster if we need to scale up or down depending on the job; any new EC2 instances that come up will automatically join the cluster. I pin the Elasticsearch version to make sure new virtual machines don't automatically grab an incompatible version.

Files

ec2.tf

variable "ami_id_arm64" {
  description = "Amazon Linux 2 ARM64 AMI"
  default = "ami-07acebf185d439fa0"
}

#arm64 sizes
variable "instance_type_arm64" {
  type = "map"
  default = {
    L3 = "a1.large"
    L2 = "a1.4xlarge"
    L1 = "a1.4xlarge"
  }
}

variable "instance_type_arm64_metal" {
  type = "map"
  default = {
    L3 = "a1.large"
    L2 = "a1.metal"
    L1 = "a1.metal"
  }
}

variable "instance_count_reserved_arm64" {
  type = "map"
  default = {
    L3 = "2"
    L2 = "2"
    L1 = "20"
  }
}

variable "instance_count_reserved_2_arm64" {
  type = "map"
  default = {
    L3 = "0"
    L2 = "0"
    L1 = "160"
  }
}

variable "instance_count_reserved_failover_arm64" {
  type = "map"
  default = {
    L3 = "0"
    L2 = "0"
    L1 = "0"
  }
}

variable "instance_count_spot_arm64" {
  type = "map"
  default = {
    L3 = "0"
    L2 = "0"
    L1 = "0"
  }
}

data "template_cloudinit_config" "ondemand"  {
  gzip          = true
  base64_encode = true

  part {
    content      = "${file("userdata_arm64.yml")}"
  }

  part {
    merge_type    = "list(append)+dict(recurse_array)+str()"
    content_type = "text/cloud-config"
    content = <<EOF
#cloud-config
---
write_files:
- path: /etc/elasticsearch/elasticsearch.yml
  permissions: 0660
  content: |
    cluster.name: "BPI"
    network.host: 0.0.0.0
    xpack.ml.enabled: false
    xpack.monitoring.enabled: false
    bootstrap.memory_lock: true
    path.data: /var/lib/elasticsearch
    path.logs: /var/log/elasticsearch
    discovery.zen.minimum_master_nodes: 1
    http.cors.enabled: true
    http.cors.allow-origin: /https?:\/\/localhost(:[0-9]+)?/
    #cluster.routing.allocation.total_shards_per_node: 1
EOF
  }
}


data "template_cloudinit_config" "spot"  {
  gzip          = true
  base64_encode = true

  part {
    content      = "${file("userdata_arm64.yml")}"
  }

  part {
    merge_type    = "list(append)+dict(recurse_array)+str()"
    content_type = "text/cloud-config"
    content = <<EOF
#cloud-config
---
write_files:
  - path: /etc/elasticsearch/elasticsearch.yml
    permissions: 0660
    content: |
      cluster.name: "BPI"
      network.host: 0.0.0.0
      xpack.ml.enabled: false
      xpack.monitoring.enabled: false
      bootstrap.memory_lock: true
      path.data: /var/lib/elasticsearch
      path.logs: /var/log/elasticsearch
      discovery.zen.minimum_master_nodes: 1
      #cluster.routing.allocation.total_shards_per_node: 1
      http.cors.enabled: true
      http.cors.allow-origin: /https?:\/\/localhost(:[0-9]+)?/
      discovery.zen.ping.unicast.hosts: ["${element(aws_instance.bgt-bpi-arm64.*.private_ip, 0)}:9300", "${element(aws_instance.bgt-bpi-arm64.*.private_ip, 1)}:9300", "${element(aws_instance.bgt-bpi-arm64.*.private_ip, 2)}:9300", "${element(aws_instance.bgt-bpi-arm64.*.private_ip, 3)}:9300"]

EOF
  }
}

resource "aws_instance" "bgt-bpi-arm64" {
  ami = "${var.ami_id_arm64}"
  instance_type = "${var.instance_type_arm64["${data.terraform_remote_state.global.aws_environment}"]}"
  iam_instance_profile    = "${lower(var.stack_id)}"

  subnet_id               = "${data.terraform_remote_state.global.default_vpc_server_subnet_ids_list[1]}"
  user_data               = "${data.template_cloudinit_config.ondemand.rendered}"
  vpc_security_group_ids  = ["${data.terraform_remote_state.global.base_security_group_ids_default_vpc_list[0]}"]

  count                   = "${var.instance_count_reserved_arm64["${data.terraform_remote_state.global.aws_environment}"]}"
  placement_group         = "${aws_placement_group.bgt_bpi_arm64_pg.name}"
  ebs_optimized           = "${var.ebs_optimized["${data.terraform_remote_state.global.aws_environment}"]}"

  lifecycle {
    ignore_changes = ["user_data", "ami", "ebs_optimized"]
  }

  root_block_device {
    volume_type           = "gp2"
    volume_size           = "165"
    delete_on_termination = true
  }

  tags {
    Name                = "bgt-bpi-arm64-p1-z1"
    Team                = "Bigtree Services"
    Tool                = "Terraform"
    StackId             = "${var.stack_id}"
    Deploy              = "arm64"
  }
}

resource "aws_elb" "elb" {
  name               = "bgt-bpi-elb"
  subnets = ["${data.terraform_remote_state.global.default_vpc_server_subnet_ids_list}"]
  security_groups = ["${data.terraform_remote_state.global.base_security_group_ids_default_vpc_list[0]}"]
  internal = true

  listener {
    instance_port = 9200
    instance_protocol = "http"
    lb_port = 80
    lb_protocol = "http"
  }

  health_check {
    healthy_threshold = 2
    interval = 10
    target = "http:9200/"
    timeout = 5
    unhealthy_threshold = 3
  }

  tags {
    Name                = "bgt-bpi-elb"
    Team                = "Bigtree Services"
    Tool                = "Terraform"
    StackId             = "${var.stack_id}"
    Deploy              = "arm64"
  }
}

resource "aws_elb_attachment" "elb" {
  count = "${var.instance_count_reserved_arm64["${data.terraform_remote_state.global.aws_environment}"]}"
  elb = "${aws_elb.elb.id}"
  instance = "${element(aws_instance.bgt-bpi-arm64.*.id, count.index)}"
}




resource "aws_instance" "bgt-bpi-arm64-part2" {
  ami = "${var.ami_id_arm64}"
  instance_type = "${var.instance_type_arm64["${data.terraform_remote_state.global.aws_environment}"]}"
  iam_instance_profile    = "${lower(var.stack_id)}"

  subnet_id               = "${data.terraform_remote_state.global.default_vpc_server_subnet_ids_list[1]}"
  user_data               = "${data.template_cloudinit_config.spot.rendered}"
  vpc_security_group_ids  = ["${data.terraform_remote_state.global.base_security_group_ids_default_vpc_list[0]}"]

  count                   = "${var.instance_count_reserved_2_arm64["${data.terraform_remote_state.global.aws_environment}"]}"
  ebs_optimized           = "${var.ebs_optimized["${data.terraform_remote_state.global.aws_environment}"]}"

  lifecycle {
    ignore_changes = ["user_data", "ami", "ebs_optimized", "placement_group"]
  }

  root_block_device {
    volume_type           = "gp2"
    volume_size           = "165"
    delete_on_termination = true
  }

  tags {
    Name                = "bgt-bpi-arm64-a2-z1"
    Team                = "Bigtree Services"
    Tool                = "Terraform"
    StackId             = "${var.stack_id}"
    Deploy              = "arm64"
  }
}

resource "aws_elb_attachment" "elb-part2" {
  count = "${var.instance_count_reserved_2_arm64["${data.terraform_remote_state.global.aws_environment}"]}"
  elb = "${aws_elb.elb.id}"
  instance = "${element(aws_instance.bgt-bpi-arm64-part2.*.id, count.index)}"
}

In this example, L3/L2/L1 correspond to dev/stage/prod environments.

userdata_arm64.yml

#cloud-config
ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCTSRtWzW/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
yum_repos:
  elasticsearch.repo:
    name: Elasticsearch repository for 6.x packages
    baseurl: https://artifacts.elastic.co/packages/6.x/yum
    gpgcheck: 1
    gpgkey: https://artifacts.elastic.co/GPG-KEY-elasticsearch
    enabled: 1
    autorefresh: 1
    type: rpm-md

package_upgrade: true
packages:
- vim
- htop
- wget
- gcc

write_files:
- path: /etc/systemd/timesyncd.conf
  permissions: 0644
  owner: root
  content: |
    [Time]
    NTP=0.amazon.pool.ntp.org 1.amazon.pool.ntp.org 2.amazon.pool.ntp.org 3.amazon.pool.ntp.org

- path: /etc/sysctl.d/net.ipv4.neigh.default.conf
  content: |
    net.ipv4.neigh.default.gc_thresh1=4096
    net.ipv4.neigh.default.gc_thresh2=8192
    net.ipv4.neigh.default.gc_thresh3=16384

- path: /etc/sysctl.d/fs.inotify.max_user_instances.conf
  content: |
    fs.inotify.max_user_instances=4096

- path: /etc/sysctl.d/net.conf
  content: |
    net.core.somaxconn = 1000
    net.core.netdev_max_backlog = 5000
    net.core.rmem_default = 524280
    net.core.wmem_default = 524280
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    net.ipv4.udp_rmem_min = 10240
    net.nf_conntrack_max = 1048576

- path: /etc/sysctl.d/vm.conf
  content: |
    vm.max_map_count=262144

- path: /etc/security/limits.conf
  content: |
    * soft memlock unlimited
    * hard memlock unlimited
    *  -  nofile  65536

- path: /etc/systemd/system/elasticsearch.service.d/override.conf
  permissions: 0666
  content: |
    [Service]
    LimitMEMLOCK=infinity

- path: /etc/elasticsearch/jvm.options
  permissions: 0666
  content: |
    ## JVM configuration
    -Xms20g
    -Xmx20g
    -XX:+UseConcMarkSweepGC
    -XX:CMSInitiatingOccupancyFraction=75
    -XX:+UseCMSInitiatingOccupancyOnly
    -Des.networkaddress.cache.ttl=60
    -Des.networkaddress.cache.negative.ttl=10
    -XX:+AlwaysPreTouch
    -Xss1m
    -Djava.awt.headless=true
    -Dfile.encoding=UTF-8
    -Djna.nosys=true
    -XX:-OmitStackTraceInFastThrow
    -Dio.netty.noUnsafe=true
    -Dio.netty.noKeySetOptimization=true
    -Dio.netty.recycler.maxCapacityPerThread=0
    -Dlog4j.shutdownHookEnabled=false
    -Dlog4j2.disable.jmx=true
    -Djava.io.tmpdir=${ES_TMPDIR}
    -XX:+HeapDumpOnOutOfMemoryError
    -XX:HeapDumpPath=/var/lib/elasticsearch
    -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log
    8:-XX:+PrintGCDetails
    8:-XX:+PrintGCDateStamps
    8:-XX:+PrintTenuringDistribution
    8:-XX:+PrintGCApplicationStoppedTime
    8:-Xloggc:/var/log/elasticsearch/gc.log
    8:-XX:+UseGCLogFileRotation
    8:-XX:NumberOfGCLogFiles=32
    8:-XX:GCLogFileSize=64m
    9-:-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m
    9-:-Djava.locale.providers=COMPAT
    10-:-XX:UseAVX=2


runcmd:
- [ amazon-linux-extras, install, corretto8, -y ]
- [ yum, install, elasticsearch-6.4.3-1, -y ]
- [ systemctl, daemon-reload ]
- [ systemctl, enable, elasticsearch.service ]
- [ systemctl, start, elasticsearch.service ]

 

Free Cellphone For Life

Over the last few months I have been working on understanding what it means to invest. My ultimate goal is to be able to retire in about 10 years (right now I am 30). To break this goal into bite-sized chunks, I have decided to try to have my monthly recurring bills paid for by assets rather than by my income. A good explanation of this is one of my most loved books, Rich Dad Poor Dad, which explains the basic concept of how this cash flow works.

First I want to replace my cell phone bill. Currently I spend about $200/month on a cell plan plus various cell phones, which means I need passive income of at least $2,400/year. I don't think I will ever be without a cell phone, so why not invest in a cell phone company like AT&T? It currently pays a dividend of about 7% per year. This is how the equation works out:

Total Amount Needed to Invest × 0.07 = $2,400/year, so the amount needed is $2,400 / 0.07 ≈ $34,300.

The next question is how many years of phone bills it takes to equal that $34,300. At $200/month it comes to a little over 14 years, so I just need to prepay my phone by about 14 years in order to have a free phone for the rest of my life.

FirstOrDefault C# Lambda to Java 8 Lambda

Description

While converting C# code to Java, there were several C# lambda constructs for which it was a little difficult to find a Java port. One of these was C#'s FirstOrDefault extension method. Java has an equivalent in streams: we can use findFirst and, if it finds nothing, return null via orElse. This is in essence the same as the C# method, which returns the first match or null unless I specify otherwise. While the Java version is slightly more verbose, the functionality is the same.

if (birth == null) birth = person.Events.FirstOrDefault(e => e.Type.GetGeneralEventType() == EventType.Birth); // C#: use a christening, etc.
if (birth == null) birth = person.getEvents().stream().filter(event -> event.getType().getGeneralEventType() == EventType.Birth).findFirst().orElse(null); // Java: use a christening, etc.
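
To make the pattern concrete, here is a minimal self-contained sketch; the event strings below are placeholders rather than the real Event model:

import java.util.Arrays;
import java.util.List;

public class FirstOrDefaultDemo {
    public static void main(String[] args) {
        List<String> events = Arrays.asList("christening", "birth", "marriage");

        // findFirst() yields an Optional; orElse(null) mimics C#'s FirstOrDefault
        String birth = events.stream()
                .filter(e -> e.equals("birth"))
                .findFirst()
                .orElse(null);

        System.out.println(birth); // prints "birth", or null when no match exists
    }
}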

 

C Sharp delegate function in Java

Porting from C# to Java – The Delegate

The delegate is kind of cool in C# land: it lets you pass functions as parameters. Java has a handful of different ways to do the same thing. I personally like passing an object that implements Runnable, using an anonymous inner class to define the run method. The following is an example in C# and how I ported it over to Java.

Example

C#

This first block calls Execute, passing a series of delegate functions for another class to run.
[csharp]
new RetryPattern(Logger).Execute(delegate
{
    var routingKey = myEvent.GetRoutingKey();
    IBasicProperties properties = null;
    if (durable)
    {
        properties = bundle.Channel.CreateBasicProperties();
        properties.Persistent = true;
    }
    bundle.Channel.BasicPublish(ExchangeName, routingKey, Mandatory, properties,
        Encoding.UTF8.GetBytes(myEvent.GetBody()));
}, delegate
{
    ResetOnConnectionError();
    CreatePublisher();
});
[/csharp]

These functions are executed using an Action object.
[csharp]
public void Execute(Action action, Action throwHandler = null) {
    //Stuff
    action(); // Do work here
    //More stuff
}
[/csharp]

Java

The way I like to do this in Java is to pass an object that implements Runnable. The following two blocks do the same as the C# delegates above.
This is the anonymous inner class.
[java]
new RetryPattern().execute(new Runnable() {
    @Override
    public void run() {
        String routingKey = myEvent.getRoutingKey();
        BasicProperties properties = null;
        if (durable) {
            properties = new AMQP.BasicProperties().builder()
                    .deliveryMode(2)
                    .build();
        }
        try {
            bundle.channel.basicPublish(exchangeName, routingKey, mandatory, properties, myEvent.getBody().getBytes("UTF-8"));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}, new Runnable() {
    @Override
    public void run() {
        resetOnConnectionError();
        createPublisher();
    }
});
[/java]

And this is how it is called elsewhere.
[java]
public void execute(Runnable action, Runnable throwHandler) {
    //Stuff
    action.run();
    //More Stuff
}
[/java]
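
Since Runnable is a functional interface, Java 8 lambdas can replace the anonymous inner classes entirely. Here is a minimal self-contained sketch; the retry semantics are illustrative, not the real RetryPattern:

[java]
public class DelegateDemo {
    // Mirrors the C# Execute(Action, Action) signature using Runnable
    static void execute(Runnable action, Runnable throwHandler) {
        try {
            action.run();
        } catch (RuntimeException e) {
            if (throwHandler != null) throwHandler.run(); // illustrative error path
        }
    }

    public static void main(String[] args) {
        // Lambdas stand in for the anonymous inner classes above
        execute(
                () -> System.out.println("publish message"),
                () -> System.out.println("reset connection and recreate publisher"));
    }
}
[/java]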

Perfect WordPress Development Environment

WordPress Development Problem

Anybody who has developed PHP applications knows it can be tricky to get the right environment set up for development work. I love WordPress, but it is no exception. Until now, that is: I found the fastest way to have a WordPress development environment that can be blown away and brought back up within seconds. The solution is Docker. I won't go into detail in this article about how awesome Docker is, but you should definitely check it out. It makes iterating on new WordPress themes and plugins simple, with no need for a separate machine with all the hosting configured.

Use Docker

We bring up two separate containers: one from the official WordPress Docker image and one from the official MySQL image, tied together with docker-compose. The key to making this work so well is mounting a directory from the local machine to a path inside the WordPress container. This lets us keep our plugin in a git repo and do active development against it. When you are satisfied with the plugin or theme, simply zip up the files you need using whatever automation you prefer and ship the code. You can start developing within 15 minutes, with no need to set up a server with Apache and MySQL just to do development work.

docker-compose.yml

web:
    image: wordpress
    links:
     - mysql
    environment:
     - WORDPRESS_DB_PASSWORD=password
    ports:
     - "0.0.0.0:8080:80"
    volumes:
     - /Users/zaphinath/Projects/wordpressPlugins:/var/www/html/wp-content/plugins
mysql:
    image: mysql:5.7
    environment:
     - MYSQL_ROOT_PASSWORD=password
     - MYSQL_DATABASE=wordpress

Just copy this file, change the directory under volumes, and run docker-compose up to start the two containers. You can then browse to the IP of the WordPress container and voilà, you are able to start developing. NOTE: on a Mac you will need docker-machine ip default to get the virtual machine's IP address, then go to the right port in a web browser to view your WordPress install. Being able to get up and running fast and iterate on a clean WordPress install makes this the perfect WordPress development environment.

Spring Rest Endpoint Test With MockMVC + ApplicationEvent

Problem Addressed

In a service oriented architecture (SOA), we have several microservices that get deployed regularly. The deployment strategy and needs of such services have already been solved by applications like Kubernetes. From a quality perspective, there needs to be a high-level test after any service is deployed or any configuration change is made to it. One legacy method is to have a group of manual testers validate that the service behaves as expected. A better alternative is to make an endpoint in every service that gets called whenever the service is deployed or receives a configuration update. This endpoint runs a series of tests that validate the service is in good condition and, when done, reports the status back to the caller. You can then consume the results any way you prefer: log them, trigger alerts, etc.

SmokeTestCheck Class

This is pretty much a copy of Dropwizard's HealthCheck class, except we need it to be serializable so that Spring can render the object as JSON. Jackson needs a public constructor to do this because it determines the class structure through Java reflection.

package com.domo.bedrock.health;


import com.fasterxml.jackson.annotation.JsonIgnore;

/**
 * Created by derekcarr on 4/5/16.
 */
public abstract class SmokeTestCheck {

    protected String name;

    public abstract SmokeTestCheck.Result check() throws Exception;

    public SmokeTestCheck(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }


    public static class Result {
        private static final Result HEALTHY = new Result(true, null, null);
        private static final int PRIME = 31;

        /**
         * Returns a healthy {@link Result} with no additional message.
         *
         * @return a healthy {@link Result} with no additional message
         */
        public static Result healthy() {
            return HEALTHY;
        }

        /**
         * Returns a healthy {@link Result} with an additional message.
         *
         * @param message an informative message
         * @return a healthy {@link Result} with an additional message
         */
        public static Result healthy(String message) {
            return new Result(true, message, null);
        }

        /**
         * Returns a healthy {@link Result} with a formatted message.
         * <p/>
         * Message formatting follows the same rules as {@link String#format(String, Object...)}.
         *
         * @param message a message format
         * @param args    the arguments apply to the message format
         * @return a healthy {@link Result} with an additional message
         * @see String#format(String, Object...)
         */
        public static Result healthy(String message, Object... args) {
            return healthy(String.format(message, args));
        }

        /**
         * Returns an unhealthy {@link Result} with the given message.
         *
         * @param message an informative message describing how the health check failed
         * @return an unhealthy {@link Result} with the given message
         */
        public static Result unhealthy(String message) {
            return new Result(false, message, null);
        }

        /**
         * Returns an unhealthy {@link Result} with a formatted message.
         * <p/>
         * Message formatting follows the same rules as {@link String#format(String, Object...)}.
         *
         * @param message a message format
         * @param args    the arguments apply to the message format
         * @return an unhealthy {@link Result} with an additional message
         * @see String#format(String, Object...)
         */
        public static Result unhealthy(String message, Object... args) {
            return unhealthy(String.format(message, args));
        }

        /**
         * Returns an unhealthy {@link Result} with the given error.
         *
         * @param error an exception thrown during the health check
         * @return an unhealthy {@link Result} with the given error
         */
        public static Result unhealthy(Throwable error) {
            return new Result(false, error.getMessage(), error);
        }

        private final boolean healthy;
        private final String message;
        private final Throwable error;

        public Result(boolean isHealthy, String message, Throwable error) {
            this.healthy = isHealthy;
            this.message = message;
            this.error = error;
        }

        /**
         * Returns {@code true} if the result indicates the component is healthy; {@code false}
         * otherwise.
         *
         * @return {@code true} if the result indicates the component is healthy
         */
        public boolean isHealthy() {
            return healthy;
        }

        /**
         * Returns any additional message for the result, or {@code null} if the result has no
         * message.
         *
         * @return any additional message for the result, or {@code null}
         */
        public String getMessage() {
            return message;
        }

        /**
         * Returns any exception for the result, or {@code null} if the result has no exception.
         *
         * @return any exception for the result, or {@code null}
         */
        @JsonIgnore
        public Throwable getError() {
            return error;
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) {
                return true;
            }
            if (o == null || getClass() != o.getClass()) {
                return false;
            }
            final Result result = (Result) o;
            return healthy == result.healthy &&
                    !(error != null ? !error.equals(result.error) : result.error != null) &&
                    !(message != null ? !message.equals(result.message) : result.message != null);
        }

        @Override
        public int hashCode() {
            int result = (healthy ? 1 : 0);
            result = PRIME * result + (message != null ? message.hashCode() : 0);
            result = PRIME * result + (error != null ? error.hashCode() : 0);
            return result;
        }

        @Override
        public String toString() {
            final StringBuilder builder = new StringBuilder("Result{isHealthy=");
            builder.append(healthy);
            if (message != null) {
                builder.append(", message=").append(message);
            }
            if (error != null) {
                builder.append(", error=").append(error);
            }
            builder.append('}');
            return builder.toString();
        }
    }
}

The Smoketest Event

package com.domo.bedrock.service.event;

import com.domo.bedrock.health.SmokeTestCheck;
import org.springframework.context.ApplicationEvent;

import java.util.HashMap;
import java.util.Map;


public class SmokeTestEvent extends ApplicationEvent {
    private Map<String, SmokeTestCheck.Result> map;

    /**
     * Create a new ApplicationEvent.
     *
     * @param source the component that published the event (never {@code null})
     */
    public SmokeTestEvent(Map<String, SmokeTestCheck.Result> source) {
        super(source);
        this.map = source;
    }

    public Map<String, SmokeTestCheck.Result> getHealthCheckMap() {
        return new HashMap<>(this.map);
    }

    public void addHealthCheckResult(String id, SmokeTestCheck.Result result){
      map.put(id, result);
    }
}

 

The Endpoint each microservice will have

@RequestMapping(value = "/smoketest", method = RequestMethod.GET, produces = MediaType.APPLICATION_JSON_VALUE)
@ResponseBody
public Map<String, SmokeTestCheck.Result> requestSmokeTestHealthCheck() {
    Map<String, SmokeTestCheck.Result> map = new HashMap<>();
    SmokeTestEvent smokeTestEvent = new SmokeTestEvent(map);
    applicationContext.publishEvent(smokeTestEvent);
    return smokeTestEvent.getHealthCheckMap();
}

This publishes an event that the service consumes by implementing an application listener. Event publication is synchronous, so once the listeners have finished, all of the results are available to return.

Default Application Listener

Simply extend this class and implement the method that adds SmokeTestCheck.Result information to the event, as in the sketch after the listing. The checks can run anything from simple ping/pong endpoint tests to a full behavioral test of the service.

package com.domo.bedrock.health;

import com.domo.bedrock.service.event.SmokeTestEvent;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

import java.util.List;

/**
 * Created by derekcarr on 4/4/16.
 */
@Component
public abstract class SmokeTestListener implements ApplicationListener<SmokeTestEvent> {

    protected static final Logger LOGGER = LoggerFactory.getLogger(SmokeTestListener.class);

    @Autowired
    protected List<SmokeTestCheck> tests;

    @Override
    public abstract void onApplicationEvent(SmokeTestEvent event);

}
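
For illustration, a minimal concrete listener might look like the following sketch. The class name is hypothetical; it simply runs every autowired SmokeTestCheck and records each result on the event:

package com.domo.bedrock.health;

import com.domo.bedrock.service.event.SmokeTestEvent;
import org.springframework.stereotype.Component;

@Component
public class DefaultSmokeTestListener extends SmokeTestListener {

    @Override
    public void onApplicationEvent(SmokeTestEvent event) {
        // Run each registered check and attach its result to the event
        for (SmokeTestCheck check : tests) {
            try {
                event.addHealthCheckResult(check.getName(), check.check());
            } catch (Exception e) {
                event.addHealthCheckResult(check.getName(), SmokeTestCheck.Result.unhealthy(e));
            }
        }
    }
}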

Unit Test Endpoint With MockMVC

This test has extra dependencies because of our deployment manager and pre-authentication strategies, but it lets us mock the event listener and make sure the endpoint returns what is expected. A better way to validate this would be JSON-based assertions, but we had conflicts between the various library versions.

package com.domo.maestro.service;

import com.codahale.metrics.MetricRegistry;
import com.domo.bedrock.health.SmokeTestCheck;
import com.domo.bedrock.maestro.metrics.MaestroCustomMetricsAppender;
import com.domo.bedrock.service.CoreAutoConfiguration;
import com.domo.bedrock.service.ToeProvider;
import com.domo.bedrock.service.event.SmokeTestEvent;
import com.domo.bedrock.web.WebAutoConfiguration;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.web.HttpMessageConvertersAutoConfiguration;
import org.springframework.cloud.autoconfigure.RefreshAutoConfiguration;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationListener;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.support.PropertySourcesPlaceholderConfigurer;
import org.springframework.core.env.MutablePropertySources;
import org.springframework.core.env.PropertySource;
import org.springframework.mock.env.MockPropertySource;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.testng.AbstractTestNGSpringContextTests;
import org.springframework.test.context.web.WebAppConfiguration;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;
import org.springframework.web.context.WebApplicationContext;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

import java.util.EmptyStackException;

import static org.hamcrest.Matchers.containsString;
import static org.mockito.Mockito.mock;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultHandlers.print;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.content;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

/**
 * Created by derekcarr on 3/31/16.
 */
@WebAppConfiguration
@ContextConfiguration(classes = {
        MaestroResourceTest.resourceTestConfig.class,
        CoreAutoConfiguration.class,
        HttpMessageConvertersAutoConfiguration.class,
        WebAutoConfiguration.class,
        RefreshAutoConfiguration.class
})
public class MaestroResourceTest extends AbstractTestNGSpringContextTests {

    @Configuration
    public static class resourceTestConfig {
        @Bean
        public MaestroResource maestroResource(ApplicationContext applicationContext1) {
            DefaultHandlerProvider defaultHandlerProvider = mock(DefaultHandlerProvider.class);
            MetricRegistry metricsRegistry = mock(MetricRegistry.class);
            ServiceContext serviceContext = mock(ServiceContext.class);
            MaestroCustomMetricsAppender maestroCustomMetricsAppender = mock(MaestroCustomMetricsAppender.class);
            return new MaestroResource(defaultHandlerProvider,metricsRegistry,serviceContext, maestroCustomMetricsAppender, applicationContext1);
        }

        @Bean
        public SmokeEventListener smokeEventListener() {
            return new SmokeEventListener();
        }

        @Bean
        public static PropertySourcesPlaceholderConfigurer placeHolderConfigurer() {
            PropertySourcesPlaceholderConfigurer pspc = new PropertySourcesPlaceholderConfigurer();
            MutablePropertySources propertySources = new MutablePropertySources();
            PropertySource ps = new MockPropertySource()
                    .withProperty("requireSecurityAdapters", "false")
                    .withProperty("authenticationKey", "hippopotomonstrosesquipedaliophobiahippopotomonstrosesquipedaliophobia")
                    .withProperty("bedrock.spring.hmac.enabled","false");
            propertySources.addFirst(ps);
            pspc.setPropertySources(propertySources);
            return pspc;
        }

        @Bean
        public ToeProvider toeProvider() {
            return mock(ToeProvider.class);
        }

    }


    public static class SmokeEventListener implements ApplicationListener<SmokeTestEvent> {
        @Override
        public void onApplicationEvent(SmokeTestEvent event) {
            event.addHealthCheckResult("Test all the awesome things!", SmokeTestCheck.Result.healthy());
            event.addHealthCheckResult("Something Terrible has happened", SmokeTestCheck.Result.unhealthy("There are no more cookies in the break room."));
            event.addHealthCheckResult("Throw your hands Up - they're playing Clint's Song", SmokeTestCheck.Result.unhealthy(new EmptyStackException()));
        }
    }

    @Autowired
    private WebApplicationContext wac;
    private MockMvc mockMvc;

    private String expected = ",\"Something Terrible has happened\":{\"healthy\":false,\"message\":\"There are no more cookies in the break room.\"},\"Throw your hands Up - they're playing Clint's Song\":{\"healthy\":false,\"message\":null}}";

    @BeforeClass
    public void setupSpec() {
        mockMvc = MockMvcBuilders
                .webAppContextSetup(this.wac)
                .build();
    }

    @Test
    /** test for {@link MaestroResource#requestSmokeTestHealthCheck()} */
    public void test_smoketest_endpoint() throws Exception {
        mockMvc.perform(get("/maestro/smoketest"))
                // HTTP 200 returned
                .andDo(print())
                .andExpect(status().isOk())
                .andExpect(content().string(containsString("Test all the awesome things!\":{\"healthy\":true,\"message\":null}")))
                .andExpect(content().string(containsString("Something Terrible has happened\":{\"healthy\":false,\"message\":\"There are no more cookies in the break room.\"}")))
                .andExpect(content().string(containsString("Throw your hands Up - they're playing Clint's Song\":{\"healthy\":false,\"message\":null}")));
    }

}