#!/bin/bash
# pad base64URL encoded to base64
paddit() {
  input=$1
  l=`echo -n $input | wc -c`
  while [ `expr $l % 4` -ne 0 ]
  do
    input="${input}="
    l=`echo -n $input | wc -c`
  done
  # emit the padded string
  echo $input
}
// Result is a superpowered enum that can be Success or Failure
// and the basis for a railway junction
sealed class Result<T>
data class Success<T>(val value: T): Result<T>()
data class Failure<T>(val errorMessage: String): Result<T>()

// Composition: apply a function f to Success results,
// pass Failure results straight through
infix fun <T, U> Result<T>.then(f: (T) -> Result<U>): Result<U> =
    when (this) {
        is Success -> f(this.value)
        is Failure -> Failure(this.errorMessage)
    }
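To see the railway in action, here is a short usage sketch; `parse` and `reciprocal` are illustrative helpers, not part of the original snippet. The chain stays on the success track while every step returns Success, and switches to the failure track at the first Failure, which then flows through untouched.

// Illustrative steps: both return Result, so they can be chained with `then`
fun parse(s: String): Result<Int> =
    s.toIntOrNull()?.let { Success(it) } ?: Failure("not a number: $s")

fun reciprocal(n: Int): Result<Double> =
    if (n == 0) Failure("division by zero") else Success(1.0 / n)

fun main() {
    val ok = parse("4") then { reciprocal(it) }
    val bad = parse("oops") then { reciprocal(it) }
    println(ok)   // Success(value=0.25)
    println(bad)  // Failure(errorMessage=not a number: oops)
}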
{
  "Statement": [
    {
      "Action": [
        "apigateway:*",
        "cloudformation:CancelUpdateStack",
        "cloudformation:ContinueUpdateRollback",
        "cloudformation:CreateChangeSet",
        "cloudformation:CreateStack",
        "cloudformation:CreateUploadBucket",
Vertical decomposition: creating cohesive services
One of the biggest misconceptions about services is that a service is an independently deployable unit, i.e., that service equals process. With this view, we are defining services according to how components are physically deployed. In our example, since the backend admin clearly runs in its own process/container, we consider it to be a service.
But this definition of a service is wrong. Instead, you need to define your services in terms of business capabilities. How the system is deployed doesn't have to correlate with how it is divided into logical services. For example, a single service might run in several components/processes, and a single component might contain parts of multiple services. Once you start thinking of services in terms of business capabilities rather than deployment units, a whole world of options opens up.
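As a rough illustration of that idea (the names below are hypothetical, not from the original example), here is a minimal Kotlin sketch in which two logical services each span two deployable units, and each unit contains parts of both services:

// One logical service: the Ordering business capability.
// Its operations end up in two different deployable units.
interface OrderPlacement { fun placeOrder(customerId: String, productId: String) }
interface RefundApproval { fun approveRefund(orderId: String) }

// Another logical service: the Catalog business capability, also split across units.
interface ProductSearch { fun findProduct(productId: String): String? }
interface ProductCuration { fun addProduct(productId: String, name: String) }

// Deployable unit #1: the customer-facing app contains parts of BOTH services
class CustomerFacingApp(
    private val orders: OrderPlacement,   // part of the Ordering service
    private val search: ProductSearch     // part of the Catalog service
)

// Deployable unit #2: the backend admin contains the OTHER parts of the SAME services
class BackendAdmin(
    private val refunds: RefundApproval,  // Ordering service again, different process
    private val curation: ProductCuration // Catalog service again, different process
)

The deployment boundary (which class ends up in which process) can then change without touching the service boundary (which business capability owns which operations).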
What about the Admin UI?
Kafka 0.11.0.0 (Confluent 3.3.0) added support for manipulating the offsets of a consumer group via the kafka-consumer-groups CLI command.
- List the topics to which the group is subscribed:

kafka-consumer-groups --bootstrap-server <kafkahost:port> --group <group_id> --describe

Note the values under "CURRENT-OFFSET" and "LOG-END-OFFSET". "CURRENT-OFFSET" is the offset the consumer group is currently at in each of the partitions.
- Reset the consumer offset for a topic (preview)
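A typical preview invocation looks like the sketch below; the topic name and the --to-earliest strategy are placeholders to adapt (other strategies such as --to-latest or --to-offset exist), and without --execute the tool only prints the proposed offsets:

kafka-consumer-groups --bootstrap-server <kafkahost:port> --group <group_id> --topic <topic_name> --reset-offsets --to-earliest --dry-run

Once the preview looks right, running the same command with --execute instead of --dry-run applies the new offsets.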
In your command line, run the following commands:

brew doctor
brew update
{
  "Comment": "How long AWS Lambda keeps idle functions around?",
  "StartAt": "FindIdleTimeout",
  "States": {
    "FindIdleTimeout": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:{account_id}:function:when-will-i-coldstart-dev-find-idle-timeout",
      "Next": "RepeatOrNot"
    },
    "RepeatOrNot": {
The following are examples of the four types of rate limiters discussed in the accompanying blog post. In the examples below I've used pseudocode-like Ruby, so if you're unfamiliar with Ruby you should be able to translate this approach to other languages easily. Complete examples in Ruby are also provided later in this gist.
In most cases you'll want all these examples to be classes, but I've used simple functions here to keep the code samples brief.
This uses a basic token bucket algorithm and relies on the fact that Redis scripts execute atomically. No other operations can run between fetching the count and writing the new count.
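The Ruby samples themselves are not reproduced in this excerpt. As a rough illustration of the same idea, here is a sketch in Kotlin (rather than Ruby, purely for consistency with the other examples in this document) using a Jedis-style Redis client and an embedded Lua script; the key names, parameters, and helper function are assumptions, not the gist's code. The whole read-refill-decrement cycle runs inside the script, so Redis executes it atomically and no other operation can interleave between reading and writing the count.

import redis.clients.jedis.Jedis

// Token-bucket check as a Redis Lua script: refill, test, and decrement happen
// in one atomic step. (Sketch only; names and parameters are illustrative.)
private val TOKEN_BUCKET = """
    local key      = KEYS[1]
    local capacity = tonumber(ARGV[1])
    local rate     = tonumber(ARGV[2])   -- tokens added per second
    local now      = tonumber(ARGV[3])
    local cost     = tonumber(ARGV[4])

    local state  = redis.call('HMGET', key, 'tokens', 'ts')
    local tokens = tonumber(state[1]) or capacity
    local ts     = tonumber(state[2]) or now

    tokens = math.min(capacity, tokens + (now - ts) * rate)
    local allowed = tokens >= cost
    if allowed then tokens = tokens - cost end

    redis.call('HMSET', key, 'tokens', tokens, 'ts', now)
    redis.call('EXPIRE', key, 3600)
    return allowed and 1 or 0
""".trimIndent()

// Returns true if the caller may proceed, false if it is rate limited.
fun allowRequest(redis: Jedis, key: String, capacity: Int, refillPerSecond: Double): Boolean {
    val now = System.currentTimeMillis() / 1000.0
    val result = redis.eval(
        TOKEN_BUCKET,
        listOf(key),
        listOf(capacity.toString(), refillPerSecond.toString(), now.toString(), "1")
    )
    return result == 1L
}

A caller would invoke something like allowRequest(jedis, "rate:" + userId, 100, 10.0) per request; the fixed cost of one token per call could also be made a parameter.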