Modern Microservices Architectures

Recently I visited the microservices conference Microxchg. Here is a wrap-up of state-of-the-art concepts in modern microservices architectures. Following all the principles and techniques described here brings you right to Microservices Nirvana.

Microservices

Be warned: this blog post is more of a short and dense list. It may contain a whole lot of buzzwords. 🙂

People and Teams

  • Each microservice is developed and cared for by one team.
  • Each team has several microservices they build and run.
  • Teams build, test, deploy and monitor the services themselves.
  • The team has a big say, and sometimes the only say, in which technologies and frameworks it uses.
  • Teams working with a microservices architecture can go faster once they have mastered the technical challenges.
  • Microservices make it a lot easier to try out new technologies, which makes developers happy.
  • Microservices architectures work best when there is organisational alignment; see Conway's Law.

Architecture

  • Microservices are only allowed to share libraries from open source projects.
  • Each microservice needs to be documented. Its API should be made easily accessible.
  • Microservices are almost always run in many instances grouped in clusters.
  • Orchestration of microservices is supported by tools like Docker Compose.
  • Patterns are used to describe solutions to particular problems in the microservice architecture style (see www.microservices.io).
  • A scale-free architecture is one that is able to scale to any size without any need to adjust it.
  • Serverless architectures like AWS Lambda can be the next step after microservices.
  • Reactive, fault-tolerant microservices architectures using messaging toolkits such as Akka could be a next step towards architectures with thousands of services.
  • Each microservice complies with the twelve-factor app concept (see the configuration sketch after this list).
  • A slightly different concept to microservices is the Self Contained System architecture.
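
One of the twelve factors is to read configuration from the environment instead of baking it into the artifact. Here is a minimal sketch; the variable names and defaults are made up for illustration:

import java.util.Map;

public class EnvConfigExample {

  public static void main(String[] args) {
    Map<String, String> env = System.getenv();
    // Hypothetical settings, injected by the runtime environment (container, PaaS, CI stage).
    String dbUrl = env.containsKey("DATABASE_URL") ? env.get("DATABASE_URL") : "jdbc:postgresql://localhost/app";
    String port = env.containsKey("PORT") ? env.get("PORT") : "8080";
    System.out.println("Starting service on port " + port + " with database " + dbUrl);
  }
}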

Microservices and Monoliths

  • There is not a clear winner between starting with a monolith or starting with a microservices architecture.
  • Microservices may pose more technical challenges in the beginning.
  • Monoliths may be better to start with if the problem domain is not wholly understood.
  • Microservices introduce complexities and new problems. But each of those complexities can be dealt with individually, whereas some problems in monolithic apps are hard to fix at all.
  • Microservices reap the most benefits when you have several teams and you want them to go fast.

Mindshift from Monolith to Microservices

  • Code duplication is not a bad thing by itself. DRY needs to be balanced with other principles like low coupling.
  • Not all developers may like the job-enrichment character of having to do development, testing, deployment, monitoring, UX and DevOps.
  • It is OK to throw away services or rewrite them. Microservices enable this.
  • Maintaining monolithic applications may take more effort in the long run than the infrastructure overhead of microservices in the beginning.

Service Communication

  • Microservices interact most of the time using REST or messaging. They seldom use classic (SOAP) web services.
  • Microservices don’t share a database.
  • Microservices only talk to each other via published APIs.
  • APIs should be backward compatible in order to enable change.
  • If you follow Postel's Law (be conservative in what you send, be liberal in what you accept), you will be fine when changing your system; see the tolerant-reader sketch below.
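
The "tolerant reader" pattern is one concrete way to apply Postel's Law to service communication: a consumer binds only the fields it actually needs and ignores everything else, so producers can add fields without breaking it. A minimal sketch, assuming Jackson for JSON binding; CustomerDto and its fields are made up for illustration:

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.databind.ObjectMapper;

// Only the fields this consumer needs; unknown fields from the producer are ignored.
@JsonIgnoreProperties(ignoreUnknown = true)
class CustomerDto {
  public String id;
  public String name;
}

public class TolerantReaderExample {

  public static void main(String[] args) throws Exception {
    // A response that already contains a field this consumer does not know about yet.
    String json = "{\"id\":\"42\",\"name\":\"Alice\",\"loyaltyLevel\":\"gold\"}";
    CustomerDto customer = new ObjectMapper().readValue(json, CustomerDto.class);
    System.out.println(customer.id + " / " + customer.name);
  }
}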

DDD

  • Domain-Driven Design is a central concept for finding the right structure and boundaries of services.
  • An individual aspect or part of the domain usually results in a microservice holding and managing that aspect.
  • When constructing bounded contexts it is important to distinguish between entities and value objects. Aggregates are a great way to structure entities and their value objects; a small sketch follows below.
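
To illustrate those building blocks, here is a minimal sketch of a hypothetical Order aggregate: Money is a value object (immutable, no identity), OrderLine is an entity inside the aggregate, and Order is the aggregate root through which all changes go. The domain and names are made up for illustration:

import java.util.ArrayList;
import java.util.List;

// Value object: defined only by its values, immutable, no identity of its own.
final class Money {
  final long amountInCents;
  final String currency;

  Money(long amountInCents, String currency) {
    this.amountInCents = amountInCents;
    this.currency = currency;
  }
}

// Entity inside the aggregate: lives and dies with its Order and is managed through it.
class OrderLine {
  final String productId;
  final Money price;

  OrderLine(String productId, Money price) {
    this.productId = productId;
    this.price = price;
  }
}

// Aggregate root: owns its lines and is the only entry point for changing them.
class Order {
  private final String orderId;
  private final List<OrderLine> lines = new ArrayList<>();

  Order(String orderId) {
    this.orderId = orderId;
  }

  void addLine(String productId, Money price) {
    lines.add(new OrderLine(productId, price));
  }
}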

Operational aspects

  • DevOps and Microservices go hand in hand.
  • Each service is monitored and metrics are gathered.
  • The state of the whole system and its services is visualized and made easily available to the teams.
  • Each team is responsible for the operation of its services.
  • More and more microservices based applications run in the cloud such as AWS and Azure.
  • Systems and stages are described as “Infrastructure as Code”. This enables infrastructure to be immutable. Infrastructure and services should be managed more like cattle than like pets.
  • Dealing with failure is an integral part of the application infrastructure.
    Concepts like blue-green deployment, circuit breakers, Chaos Monkey and failover procedures are used to mitigate and even deliberately provoke failures; a minimal circuit breaker sketch follows this list.
  • Microservices communication is measured and monitored, e.g. with Zipkin.
  • The logging data can be analyzed using Monte Carlo simulation. A tool exists to visualize the distributions (see www.getguestimate.com).
  • Management of passwords and credentials is done using HashiCorp Vault (www.vaultproject.io).
  • Your logging format should be seen as an API between application and monitoring systems.
  • The ELK Stack is used often as a log management tool. There are also great commercial products for Log Management like Splunk or Sumo Logic.
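
As a reminder of what a circuit breaker does, here is a minimal, deliberately simplified sketch (single-threaded, no half-open state); real projects would rather use a library such as Hystrix:

import java.util.concurrent.Callable;

// Minimal circuit breaker: after too many consecutive failures it "opens" and fails fast,
// returning a fallback instead of calling the remote service, until a timeout has passed.
public class SimpleCircuitBreaker {

  private final int failureThreshold;
  private final long openTimeoutMillis;
  private int consecutiveFailures = 0;
  private long openedAt = 0;

  public SimpleCircuitBreaker(int failureThreshold, long openTimeoutMillis) {
    this.failureThreshold = failureThreshold;
    this.openTimeoutMillis = openTimeoutMillis;
  }

  public <T> T call(Callable<T> remoteCall, T fallback) {
    // Circuit is open: fail fast and protect the struggling service from further load.
    if (consecutiveFailures >= failureThreshold
        && System.currentTimeMillis() - openedAt < openTimeoutMillis) {
      return fallback;
    }
    try {
      T result = remoteCall.call();
      consecutiveFailures = 0; // a success closes the circuit again
      return result;
    } catch (Exception e) {
      consecutiveFailures++;
      if (consecutiveFailures >= failureThreshold) {
        openedAt = System.currentTimeMillis(); // trip the circuit
      }
      return fallback;
    }
  }
}

A caller wraps each remote invocation in call(...) and decides on a sensible fallback value for its use case.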

Deployment

  • Each service is built and deployed using a continuous deployment pipeline.
  • Ensure the integrity of the whole system with contract tests that run in the continuous deployment pipeline. The interoperability of the new version of the service with other consumers and producers is tested there; a tiny contract-test sketch follows this list.
  • Services are deployed as BLOBs that contain all the dependencies and configuration.
  • Docker is a great packaging format.
  • There are more and more tools and platforms that help manage the deployment workflow, like Spinnaker.
  • Service Discovery is an integral part of delivery. Tools like Consul support it.
  • Docker is now a safe technological bet. There is a huge and growing community of services and tools to build on.
  • Spinnaker is a tool that helps manage the deployment process. Those deployment processes should also be described in code.
  • DNS is a simple and powerful tool to use for service discovery; see the lookup sketch after this list.
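
Here is a tiny consumer-side contract-test sketch, assuming TestNG and Jackson; it pins down which fields of a hypothetical customer API this consumer relies on. In a real pipeline the JSON would come from a call against the candidate version of the producer rather than from an inline stub, and tools like Pact automate this:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.testng.Assert;
import org.testng.annotations.Test;

public class CustomerApiContractTest {

  @Test
  public void customerResponseContainsTheFieldsThisConsumerNeeds() throws Exception {
    // Stubbed response body; a real contract test would fetch this from the deployed candidate.
    String responseBody = "{\"id\":\"42\",\"name\":\"Alice\",\"loyaltyLevel\":\"gold\"}";
    JsonNode customer = new ObjectMapper().readTree(responseBody);
    Assert.assertTrue(customer.hasNonNull("id"), "contract field 'id' is missing");
    Assert.assertTrue(customer.hasNonNull("name"), "contract field 'name' is missing");
  }
}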
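
And a minimal sketch of DNS-based discovery: resolve all addresses registered for a service name and pick one of the returned instances. The service name used here is hypothetical:

import java.net.InetAddress;

public class DnsDiscoveryExample {

  public static void main(String[] args) throws Exception {
    // Resolve every address record registered for the (made-up) service name.
    InetAddress[] instances = InetAddress.getAllByName("orders.service.local");
    for (InetAddress instance : instances) {
      System.out.println("discovered instance: " + instance.getHostAddress());
    }
  }
}

Tools like Consul expose exactly this kind of DNS interface, so the service itself can stay free of any discovery-specific client library.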

Final Thoughts and Conclusions

Some people dislike the term microservice because it is misleading. But the concepts and developments in this space are a big push in the right direction when developing large web-based systems.
You are probably not in the infrastructure business. Don’t try to build your own AWS.
Logging and monitoring with ELK, Kibana, Grafana and so on is cool and fun – but don’t forget to create business value.
Heterogeneity in your programming languages and tools gives you options and flexibility. But there is a risk of overstretching and becoming unable to maintain your systems.
I think microservices concepts are still evolving and maturing. New concepts like serverless architectures are the next steps in the evolution.
In the future there will be no “either or” between monoliths and microservices. Both, and combinations of the two, are valid architectural concepts.
There is still a speed bump for most organisations that switch to microservices. There are a lot of technologies and concepts to learn. Senior developers are needed to manage this transition.

Java HashMap Performance

Recently I was wondering whether there are alternatives to the Java collections implementations and whether they perform better than the java.util package. The performance of HashMaps was of particular interest. At the company I work for we keep lots of business data entries in memory, and those entries are stored in HashMaps.

In several articles I read that other implementations are faster than java.util.HashMap, so I conducted a performance check.

Alternatives

The alternatives to java.util that I considered were:

    <dependency>
        <groupId>net.sf.trove4j</groupId>
        <artifactId>trove4j</artifactId>
        <version>3.0.3</version>
    </dependency>
    <dependency>
        <groupId>com.google.guava</groupId>
        <artifactId>guava</artifactId>
        <version>15.0</version>
    </dependency>
    <dependency>
        <groupId>commons-collections</groupId>
        <artifactId>commons-collections</artifactId>
        <version>3.2.1</version>
    </dependency>
    <dependency>
        <groupId>javolution</groupId>
        <artifactId>javolution</artifactId>
        <version>5.5.1</version>
    </dependency>

Result

java.util.HashMap is the fastest implementation to date! I checked whether different constructors have an impact on the performance of the individual HashMap. It took on average 45 ms to get all objects out of a HashMap with 1.000.000 items, and on average 80 ms to put 1.000.000 items into it. Here is the data:

Implementation                                           get() in ms   put() in ms
java.util.HashMap(1_000_000,1f)                                   60          1728
java.util.HashMap(1_000_000,10f)                                  43           104
java.util.HashMap(1_000_000,0.3f)                                 42            78
java.util.HashMap(1_000_000,0.1f)                                 42           112
java.util.HashMap()                                               40            87
java.util.HashMap(1_000_000)                                      41            76
gnu.trove.map.hash.THashMap()                                     77           250
gnu.trove.map.hash.THashMap(1_000_000)                            70            74
gnu.trove.map.hash.THashMap(1_000_000,1f)                        161           155
javolution.util.FastMap()                                        119           272
javolution.util.FastMap(1_000_000)                               100           116
org.apache.commons.collections.FastHashMap()                      65           125
org.apache.commons.collections.FastHashMap(1_000_000)             64            87
java.util.TreeMap()                                              269           305
java.util.HashMap()                                               48            89
org.apache.commons.collections.FastTreeMap()                     269           331

The performance of the java.util.HashMap(1_000_000,1f) test run seems to be affected by JVM startup and just-in-time optimization of the code. I think this value is not accurate; a reordering of the test runs supports this hypothesis.

Let's take a look at the results in a chart:

Conclusion

The java.util.HashMap implementation is still faster than the other implementations. At this point it is not worthwhile to use other libraries in most cases.

There seem to be major collection improvements in Java 7, as this article noted that under Java 6 the Trove library was faster than java.util.HashMap. Using constructors with a predetermined initial map size improves inserting into the map, but not by a large extent.
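
For reference, the constructor variants compared above differ only in initial capacity and load factor; a minimal sketch of what that means (not part of the benchmark):

import java.util.HashMap;
import java.util.Map;

public class ConstructorVariants {

  public static void main(String[] args) {
    // Default: 16 buckets, load factor 0.75 -> the table grows and rehashes many times
    // while 1,000,000 entries are inserted.
    Map<String, Object> defaults = new HashMap<String, Object>();

    // Pre-sized: room for roughly the expected number of entries, so put() triggers
    // far fewer rehashes.
    Map<String, Object> presized = new HashMap<String, Object>(1_000_000);

    // Custom load factor: lower values mean more buckets and fewer collisions,
    // at the cost of memory.
    Map<String, Object> sparse = new HashMap<String, Object>(1_000_000, 0.3f);

    System.out.println(defaults.size() + " " + presized.size() + " " + sparse.size());
  }
}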

Links

Source

Environment:

java version "1.7.0_40"
Java(TM) SE Runtime Environment (build 1.7.0_40-b43)
Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode)

The source code of the performance test:

package info.klewitz.hashmap;

import com.google.common.collect.Maps;
import gnu.trove.map.hash.THashMap;
import javolution.util.FastMap;
import org.apache.commons.collections.FastHashMap;
import org.apache.commons.collections.FastTreeMap;
import org.springframework.util.StopWatch;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

public class HashMapSpeedTest {

  public static final int SIZE = 1_000_000;
  private Set<String> objects;
  private StopWatch stopWatch;

  @BeforeClass
  public void setUp(){
    System.out.println("creating " + SIZE + " objects");
    System.out.println("Implementation;get();put()");
    objects = getObjects();
    stopWatch = new StopWatch();
  }

  @DataProvider
  public Object[][] mapProvider(){
    return new Object[][]{
        { new HashMap<String,Object>(SIZE,1f),"SIZE,1f" },
        { new HashMap<String,Object>(SIZE,10f),"SIZE,10f" },
        { new HashMap<String,Object>(SIZE,0.3f),"SIZE,0.3f" },
        { new HashMap<String,Object>(SIZE,0.1f),"SIZE,0.1f" },
        { new HashMap<String,Object>(),"" },
        { new HashMap<String,Object>(SIZE),"SIZE" },
        { new THashMap<String,Object>(),""},
        { new THashMap<String,Object>(SIZE), "SIZE"},
        { new THashMap<String,Object>(SIZE,1f), "SIZE,1f"},
        { new FastMap<String,Object>(), ""},
        { new FastMap<String,Object>(SIZE), "SIZE"},
        { new FastHashMap(),""},
        { new FastHashMap(SIZE),"SIZE"},
        { new TreeMap<String,Object>(),""},
        {Maps.newHashMap(),""},
        { new FastTreeMap(),""},
    };
  }

  // Puts SIZE entries into the map under test, then reads them all back, timing both phases.
  @Test(dataProvider = "mapProvider", singleThreaded = true)
  public void test(Map<String,Object> map,String typeExtension) {
    String type = map.getClass().getName() + "("+typeExtension+")";

    stopWatch.start(type+"put()");
    for(String o:objects){
      map.put(o,o);
    }
    stopWatch.stop();
    long putTime = stopWatch.getLastTaskTimeMillis();

    stopWatch.start(type+"get()");
    for(String o:objects){
      map.get(o);
    }
    stopWatch.stop();
    long getTime = stopWatch.getLastTaskTimeMillis();
    System.out.println(type + ";" + getTime + ";" + putTime);
    // Clear the map and hint the GC so the next run starts from a clean heap.
    map.clear();
    System.gc();
  }

  @AfterClass
  public void tearDown() throws Exception {
    System.out.println(stopWatch.prettyPrint());
  }

  // Creates SIZE distinct string keys that are reused across all test runs.
  private Set<String> getObjects() {
    Set<String> objects = new HashSet<>();
    for(int i=0;i< SIZE;i++){
      objects.add("" + i);
    }
    return objects;
  }
}

Software Architecture vs. Software Technologies

In my opinion, one of the biggest misunderstandings in the field of software development is the distinction between the technology and the architecture of a piece of software. It sounds like a minuscule distinction, and to experts in the field it is rather obvious. But many people in the domain of software engineering have never really thought about it, and that neglect clouds their vision and understanding of both.

Have you ever asked a colleague or friend who works as a developer to describe the design of the software he is working on? Have you ever been puzzled trying to understand the workings of the software as he talks in words like “XML, Java, Groovy, XSLT, Load Runner, jQuery, JSF, Android, NoSQL, AST, DOM, Scala and maybe even Visual Basic”?
Well, I have. And the reason is that the person almost always talked a lot about technologies (well, that gets us all excited, doesn't it 😉) while thinking he had perfectly described the architecture of the software to you.

Let me describe the values, architecture and technologies of a shelf I built a few years ago.

It is a real-world example of the point I am making. This is me standing in front of it. And now let me describe it, short and clear:

The reasoning/values behind the architecture:

  • grace and an elegant size
  • order without dullness
  • a thick frame closes the shelf to its surroundings
  • joints between parts are invisible
  • easily assembled and disassembled
  • natural wood makes it long lasting and high value
  • friendly and natural look

Let's talk about the architecture:

  • four thick wooden boards build the frame
  • four horizontal boards are connected by 16 vertical boards of equal size
  • the inner, thinner boards are joined to the outer frame by corner screws
  • the inner, thinner boards are joined to the other inner boards by flat dowels
  • the parts are not glued together, so the shelf is versatile and easily changeable

The technologies I used:

  • planed spruce wood finished with birch veneer
  • flat dowels (biscuit joints, also known as lamellos)
  • steel corner connector screws (the kind known from IKEA furniture)
  • steel braces to prevent tilting

I wish we could all describe the technologies and architecture of software, as well as the reasoning behind them, as easily and precisely as I just described the shelf I built.
Unfortunately software is a lot more complicated. We struggle every time we try to describe software design, code and the technologies employed.

Technologies influence the design up to a certain point. They are the building materials of software: Java, Groovy, XML, web services, … you name it.
Design is something else. Design reveals how well developers understood the technologies employed and the problem domain, and how much experience and knowledge they have in both.

So what is the takeaway from all of this?
Agile developers need to be precise when talking about software. We need to make clear distinctions between values, architecture and technology. Most non-software engineering disciplines are professional about this. We should be too.

Hello World! Hallo Welt!

Welcome to my new Blog!

I am surprised how well the installation went so far. It's been some time since I had a blog or a CMS. Now I am trying to find out how good the design and security of WordPress have become. I played around with Nuke, PostNuke, Joomla and Mambo around the year 2004.

The modularity, security and interface quality of WordPress really impress me compared to the CMS systems of some years ago.

Let's see how well it goes!