MySQL + emoji in the DB

To store emoji (and any other 4-byte Unicode characters) we need the utf8mb4 character set everywhere: on the server and on the table columns. And YES, mysqld itself has to be configured for it.

$ vim $(brew --prefix)/etc/my.cnf

[mysqld]
# ignore the character set announced by the client; always use the server's
character-set-client-handshake = FALSE
character-set-server = utf8mb4
collation-server = utf8mb4_unicode_ci

[client]
default-character-set = utf8mb4

[mysql]
default-character-set = utf8mb4
$ /usr/local/opt/mysql/support-files/mysql.server restart

Then check the character set / collation on the server and on the columns:

SHOW VARIABLES WHERE Variable_name LIKE 'character\_set\_%' OR Variable_name LIKE 'collation%';

SELECT TABLE_NAME,COLUMN_NAME,CHARACTER_SET_NAME,COLLATION_NAME FROM information_schema.`COLUMNS`
WHERE table_schema = "sqedit"
AND table_name = "post";

and convert the tables if needed:

ALTER TABLE attach CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
ALTER TABLE post CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
...
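
To confirm the conversion actually took, round-trip a 4-byte character as a smoke test (the emoji_test table below is purely illustrative, a minimal sketch, not part of the real schema):

$ mysql sqedit -e "CREATE TABLE emoji_test (id INT PRIMARY KEY, txt VARCHAR(16)) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci"
$ mysql sqedit -e "INSERT INTO emoji_test VALUES (1, '😀')"
$ mysql sqedit -e "SELECT txt, HEX(txt) FROM emoji_test"   # expect F09F9880, the 4-byte UTF-8 encoding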

Tomcat and redirecting to it

HTTPD + cPanel

# vim /etc/apache2/conf/httpd.conf
<VirtualHost 23.235.214.210:443>
    ...
    ProxyPass "/sqedit-api" "http://127.0.0.1:8081/sqedit-api" max=1 retry=0
</VirtualHost>
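
ProxyPass requires mod_proxy and mod_proxy_http to be loaded. A rough check that the vhost really forwards to Tomcat (the /health path is an assumption, substitute any real endpoint of sqedit-api):

$ curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8081/sqedit-api/health    # direct hit on Tomcat
$ curl -sk -o /dev/null -w '%{http_code}\n' https://23.235.214.210/sqedit-api/health  # through Apache; codes should match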

Linux ACL

# setfacl -R -m u:appsqedit:rx /var/log   # grant appsqedit read/execute on the existing tree
# setfacl -dm u:appsqedit:rx /var/log     # default ACL: newly created entries inherit the same rights

# getfacl /var/log                        # verify
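
A minimal sanity check, assuming the appsqedit account exists and /var/log/messages is present on this box:

# sudo -u appsqedit ls /var/log              # traversal is now allowed by the ACL
# sudo -u appsqedit head /var/log/messages   # reading works too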

gcloud. Google Compute Engine

Logging in to a Compute Engine instance

$ cd ~/.ssh
# reuse the existing key pair under the names gcloud looks for
$ ln -sv id_rsa google_compute_engine
$ ln -sv id_rsa.pub google_compute_engine.pub
$ ln -sv known_hosts google_compute_known_hosts

$ gcloud auth login
$ gcloud config set project ride6th
$ gcloud compute ssh app-riders

# and afterwards plain ssh works as well
$ ssh 35.203.128.137 -l panser
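
An alternative to symlinking the keys: gcloud can write host aliases into ~/.ssh/config itself. The alias format is INSTANCE.ZONE.PROJECT; the zone below is only a guess for this project:

$ gcloud compute config-ssh
$ ssh app-riders.us-central1-a.ride6th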

Docker Errors

No space left on device

DOCKER CLEANUP:

docker volume rm $(docker volume ls -qf dangling=true)   # dangling volumes
docker rm $(docker ps -q -f 'status=exited')             # exited containers
docker system prune -a                                   # everything unused: images, containers, networks, build cache
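
Before and after the cleanup it is worth measuring what actually eats the space:

docker system df        # per-category usage: images, containers, volumes, build cache
df -h /var/lib/docker   # the Docker root dir itself (Linux hosts)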

jhome

$ type jhome
jhome is a function
jhome ()
{
    export JAVA_HOME=`/usr/libexec/java_home $@`;
    echo "JAVA_HOME:" $JAVA_HOME;
    echo "java -version:";
    java -version
}
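
Usage: arguments are passed straight through to /usr/libexec/java_home, so -v picks a JDK version (the path and build shown below are illustrative):

$ jhome -v 1.8
JAVA_HOME: /Library/Java/JavaVirtualMachines/jdk1.8.0_181.jdk/Contents/Home
java -version:
java version "1.8.0_181"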

AWS. Instance Metadata and User Data

To get various pieces of information about a running EC2 instance:

$ curl http://169.254.169.254/latest/meta-data/
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
instance-action
instance-id
instance-type
local-hostname
local-ipv4
mac
metrics/
network/
placement/
profile
public-hostname
public-ipv4
public-keys/
reservation-id
security-groups

$ curl http://169.254.169.254/latest/meta-data/public-ipv4
52.29.81.62

$ curl http://169.254.169.254/latest/meta-data/hostname
ip-172-31-17-178.eu-central-1.compute.internal

$ curl http://169.254.169.254/latest/meta-data/public-hostname
ec2-52-29-81-62.eu-central-1.compute.amazonaws.com

$ curl http://169.254.169.254/latest/meta-data/public-keys/
0=rep_engine_demo_aws

...
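
On instances where IMDSv2 is enforced, the same endpoints answer only with a session token:

# get a token valid for 6 hours, then pass it with every metadata request
$ TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
$ curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/instance-id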

Kafka. Notes

Native

$ cd /opt
$ sudo wget http://apache.ip-connect.vn.ua/kafka/0.9.0.1/kafka_2.11-0.9.0.1.tgz
$ sudo tar xvf kafka_2.11-0.9.0.1.tgz
$ sudo chown -R panser /opt/kafka_2.11-0.9.0.1*

$ vim config/server.properties
zookeeper.connect=localhost:2181/kafka

$ ./bin/zookeeper-server-start.sh config/zookeeper.properties
$ ./bin/kafka-server-start.sh config/server.properties


# List of topics (note the /kafka chroot set in server.properties above)
$ ./bin/kafka-topics.sh --list --zookeeper localhost:2181/kafka
# view a specific topic
$ ./bin/kafka-topics.sh --describe --zookeeper localhost:2181/kafka --topic topic-name

# Start a producer to send messages
$ ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic topic-name

# Start a consumer to receive messages
$ ./bin/kafka-console-consumer.sh --zookeeper localhost:2181/kafka --topic topic-name --from-beginning
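
If broker-side topic auto-creation is off, the topic must exist before producing; in this 0.9-era distribution that goes through ZooKeeper:

# create a single-partition topic with no replication
$ ./bin/kafka-topics.sh --create --zookeeper localhost:2181/kafka --replication-factor 1 --partitions 1 --topic topic-name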

commands

# list topics
kafka-topics --list --zookeeper HOST:2181

# messages in a topic
kafka-console-consumer --bootstrap-server HOST:9092 --topic springCloudBus --from-beginning

# list consumer (processing) groups
kafka-consumer-groups --bootstrap-server HOST:9092 --list

# processing state of a topic for a specific consumer (processing) group
kafka-consumer-groups --bootstrap-server HOST:9092 --group springCloudBus-LOCAL --describe
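
The same CLI family can also show the partition layout of a single topic:

# partitions, leaders, replicas and ISR for one topic
kafka-topics --describe --zookeeper HOST:2181 --topic springCloudBus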

Graphical Clients

A graphical client for Kafka: Kafka Tool

Kafkacat

# brew install kafkacat --with-yajl
# brew install jq

Administration: list existing brokers / topics / partitions per topic

$ kafkacat -L -b 127.0.0.1:9092

To test a consumer (receiving messages):

$ kafkacat -C -b 127.0.0.1:9092 -t kvstore_byte | jq

To test a producer (sending messages to Kafka):

kafkacat -P -b 127.0.0.1:9092 -t kvstore_byte
^C # to send the message typed above

Testing everything together:

# write to Kafka
kafkacat -b IP:9092 -P -t test
> lalalalalala
# read from Kafka
kafkacat -b IP:9092 -C -t test
> lalalalalala

Transferring messages between topics

$ kafkacat -C -b kafka -t awesome-topic -e | kafkacat -P -b kafka -t awesome-topic2
## OR
$ kafkacat -C -b kafka -t awesome-topic -e > awesome-messages.txt
$ cat awesome-messages.txt | kafkacat -P -b kafka -t awesome-topic2

Transferring messages between clusters

kafkacat -C -b kafka2 -t awesome-topic -e | kafkacat -P -b kafka -t awesome-topic

Get the messages you want

seq 1 100 | kafkacat -P -b kafka -t superduper-topic

read from offset 5 to the end of each partition (each of my 10 partitions holds 10 of the 100 messages, so that is 5 messages per partition, 50 in total)

kafkacat -C -b kafka -t superduper-topic -o 5 -e

read the last 5 messages of each partition

kafkacat -C -b kafka -t superduper-topic -o -5 -e

To focus on a single partition:

kafkacat -C -b kafka -t superduper-topic -o -5 -e -p 5
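
When jumping around offsets and partitions like this, kafkacat's -f format string shows where each message came from (%p partition, %o offset, %k key, %s payload):

kafkacat -C -b kafka -t superduper-topic -e -f 'p=%p o=%o k=%k: %s\n'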

landoop/kafka

version: '3'
services:
  kafka:
    container_name: app-kafka
    image: landoop/fast-data-dev
    ports:
      - 2181:2181             # Zookeeper
      - 3030:3030             # Landoop UI
      - 8081-8083:8081-8083   # REST Proxy, Schema Registry, Kafka Connect
      - 9581-9585:9581-9585   # JMX Ports
      - 9092:9092             # Kafka Broker
    environment:
      ADV_HOST: app.domain.com
      RUNTESTS: 0
      FORWARDLOGS: 0
      SAMPLEDATA: 0
    networks:
      - fast-data

networks:
  fast-data:

# Start-up
# docker-compose up

# Clean-up
# docker-compose down --volumes


# docker run --rm -p 2181:2181 -p 3030:3030 -p 8081-8083:8081-8083 \
#   -p 9581-9585:9581-9585 -p 9092:9092 -e ADV_HOST=app.domain.com \
#   landoop/fast-data-dev

# docker run --rm -it --net=host landoop/fast-data-dev bash

Errors && Analysis

reset offset

# create new consumer-group for reset
kafka-console-consumer --bootstrap-server HOST:9092 --topic user-phone-book-PROD --consumer-property group.id=user-phone-book-PROD-processingGroup77

# reset
kafka-consumer-groups --bootstrap-server HOST:9092 --topic user-phone-book-PROD --group user-phone-book-PROD-processingGroup --reset-offsets --to-datetime 2019-01-17T00:00:00.000
kafka-consumer-groups --bootstrap-server HOST:9092 --topic user-phone-book-PROD --group user-phone-book-PROD-processingGroup --reset-offsets --to-datetime 2019-01-17T00:00:00.000 --execute

kafka-consumer-groups --bootstrap-server HOST:9092 --topic user-phone-book-PROD --group user-phone-book-PROD-processingGroup --reset-offsets --shift-by -16

kafka-consumer-groups --bootstrap-server HOST:9092 --topic user-phone-book-PROD --group user-phone-book-PROD-processingGroup --reset-offsets --shift-by 210
kafka-consumer-groups --bootstrap-server HOST:9092 --topic user-phone-book-PROD --group user-phone-book-PROD-processingGroup --reset-offsets --shift-by 210 --execute
...
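
Besides --to-datetime and --shift-by there are also --to-earliest, --to-latest and --to-offset; without --execute the tool only prints a dry-run plan:

# rewind the group to the very beginning of the topic
kafka-consumer-groups --bootstrap-server HOST:9092 --topic user-phone-book-PROD --group user-phone-book-PROD-processingGroup --reset-offsets --to-earliest --execute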

LAG (LOG-END-OFFSET)

# check the topic-processing backlog of a specific consumer group (LAG = LOG-END-OFFSET - CURRENT-OFFSET)
kafka-consumer-groups --bootstrap-server kafka:9092 --group user-phone-book-PROD-processingGroup --describe

Broker: Leader not available

$ kafkacat -L -b 127.0.0.1:9092
Metadata for all topics (from broker -1: 127.0.0.1:9092/bootstrap):
1 brokers:
broker 1006 at localhost:9092
4 topics:
topic "kvstore_byte" with 1 partitions:
partition 0, leader -1, replicas: 1001, isrs: , Broker: Leader not available
topic "__consumer_offsets" with 50 partitions:
partition 23, leader -1, replicas: 1001, isrs: , Broker: Leader not available
partition 41, leader -1, replicas: 1001, isrs: , Broker: Leader not available
partition 32, leader -1, replicas: 1001, isrs: , Broker: Leader not available
partition 8, leader -1, replicas: 1001, isrs: , Broker: Leader not available
partition 17, leader -1, replicas: 1001, isrs: , Broker: Leader not available
partition 44, leader -1, replicas: 1001, isrs: , Broker: Leader not available
partition 35, leader -1, replicas: 1001, isrs: , Broker: Leader not available
partition 26, leader -1, replicas: 1001, isrs: , Broker: Leader not available
partition 11, leader -1, replicas: 1001, isrs: , Broker: Leader not available
partition 29, leader -1, replicas: 1001, isrs: , Broker: Leader not available
partition 38, leader -1, replicas: 1001, isrs: , Broker: Leader not available
partition 47, leader -1, replicas: 1001, isrs: , Broker: Leader not available

A possible cause:

???
broker 1006
replicas: 1001

To my mind these two IDs should match. What helped me:

docker-compose down
docker-compose up

LEADER_NOT_AVAILABLE

2019-02-14 23:49:29.017  WARN [bootstrap,,,] 57142 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient   : Error while fetching metadata with correlation id 441 : {user-phone-book-LOCAL3=LEADER_NOT_AVAILABLE}
2019-02-14 23:49:29.157 WARN [bootstrap,,,] 57142 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : Error while fetching metadata with correlation id 442 : {user-phone-book-LOCAL3=LEADER_NOT_AVAILABLE}

While checking, I found that the topic had been created automatically, but the group had not:

$ kafka-topics --list --zookeeper kafka:2181
user-phone-book-LOCAL3
...

$ kafka-consumer-groups --bootstrap-server kafka:9092 --list
...

Amazon S3. Check access

To check access to an S3 object:

$ cat ~/.aws/config | grep vidpost -A4
[profile vidpost]
region = us-east-1
aws_access_key_id = XXX
aws_secret_access_key = XXX

$ aws s3api get-object --bucket vidpostsystem-config --key vidpost/vidpost.properties vidpost.properties --profile repengine-test-s3

An error occurred (AccessDenied) when calling the GetObject operation: Access Denied

(The call above went out under the repengine-test-s3 profile, which has no rights on the vidpostsystem-config bucket; below, the same profile against its own bucket succeeds.)
$ cat ~/.aws/config | grep repen -A4
[profile repengine-test-s3]
aws_access_key_id = XXX
aws_secret_access_key = XXX
$ aws s3api get-object --bucket repengine-avator-764737e2-03df-4aca-9bc3-059bc62e5f31-awstest --key 008ac382-b946-4a26-80c2-d9119bc24a9e-200-200.jpeg 008ac382-b946-4a26-80c2-d9119bc24a9e-200-200.jpeg --profile repengine-test-s3
{
    "AcceptRanges": "bytes",
    "ContentType": "image/jpeg",
    "LastModified": "Wed, 23 Nov 2016 10:57:53 GMT",
    "ContentLength": 8024,
    "ETag": "\"21296fd39f108637bf40ad4298b3b52e\"",
    "Metadata": {}
}
}