Use the existing image to spin up a new OGG + Kafka container environment, in preparation for the newly added business workload.

Creating the image

Using the existing image

Starting the container

Configuring docker-compose.yml and changing the port numbers
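
A minimal docker-compose.yml sketch for the new environment, assuming the existing image is tagged ogg-kafka:latest; the service name, container name, image tag and host volume path are placeholders. Only the port mappings are fixed: they have to match the MGR parameters further down (PORT 9839, DYNAMICPORTLIST 9840-9939).

version: "3"
services:
  ogg:
    image: ogg-kafka:latest                  # existing image; the tag is an assumption
    container_name: ogg_kafka_new            # hypothetical name for the new environment
    ports:
      - "9839:9839"                          # MGR port, matches PORT in mgr.prm
      - "9840-9939:9840-9939"                # matches DYNAMICPORTLIST in mgr.prm
    volumes:
      - /data/ogg_new:/usr/local/work/ogg    # hypothetical host path mapped to OGG_HOME
    restart: always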

Related configuration

Entering the OGG_HOME directory
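
Inside the container OGG_HOME is /usr/local/work/ogg, as the GGSCI output below shows. One way in from the host, assuming the hypothetical container name from the compose sketch above:

docker exec -it ogg_kafka_new bash    # attach to the new container
cd /usr/local/work/ogg                # OGG_HOME inside the container
./ggsci                               # start the GoldenGate command-line interface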

Creating the basic OGG directory structure

Since OGG is being configured here for the first time, the basic directory structure has to be created first:

GGSCI (7add7fa1b405) 2> edit param mgr
ERROR: Directory /usr/local/work/ogg/dirprm does not exist yet (use CREATE SUBDIRS).

GGSCI (7add7fa1b405) 3> create subdirs

Creating subdirectories under current directory /usr/local/work/ogg

Parameter file /usr/local/work/ogg/dirprm: created.
Report file /usr/local/work/ogg/dirrpt: created.
Checkpoint file /usr/local/work/ogg/dirchk: created.
Process status files /usr/local/work/ogg/dirpcs: created.
SQL script files /usr/local/work/ogg/dirsql: created.
Database definitions files /usr/local/work/ogg/dirdef: created.
Extract data files /usr/local/work/ogg/dirdat: created.
Temporary files /usr/local/work/ogg/dirtmp: created.
Credential store files /usr/local/work/ogg/dircrd: created.
Masterkey wallet files /usr/local/work/ogg/dirwlt: created.
Dump files /usr/local/work/ogg/dirdmp: created.

Configuring the MGR process

Note that the port number and the dynamic port list must match the ports bound to the container.

GGSCI (7add7fa1b405) 4> edit param mgr

GGSCI (7add7fa1b405) 5> view param mgr

PORT 9839
DYNAMICPORTLIST 9840-9939
AUTORESTART EXTRACT *,RETRIES 5,WAITMINUTES 3
PURGEOLDEXTRACTS ./dirdat/*,usecheckpoints, minkeepdays 3
LAGREPORTHOURS 1
LAGINFOMINUTES 30
LAGCRITICALMINUTES 45
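
For reference, the same manager parameters with annotations (the values are unchanged; the comments are mine). One thing worth noting: AUTORESTART EXTRACT * as written only covers Extract processes, so the Replicat below would additionally need AUTORESTART REPLICAT * (or ER *) to be restarted automatically.

PORT 9839                                                   -- manager listener port, must match the container port mapping
DYNAMICPORTLIST 9840-9939                                   -- ports handed out to dynamically started server processes
AUTORESTART EXTRACT *,RETRIES 5,WAITMINUTES 3               -- restart failed Extracts up to 5 times, 3 minutes apart
PURGEOLDEXTRACTS ./dirdat/*,usecheckpoints, minkeepdays 3   -- purge trail files already consumed, keep at least 3 days
LAGREPORTHOURS 1                                            -- report lag to the event log every hour
LAGINFOMINUTES 30                                           -- informational message when lag exceeds 30 minutes
LAGCRITICALMINUTES 45                                       -- critical message when lag exceeds 45 minutes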

Adding the checkpoint table

I am not yet sure what this step is actually for; the checkpoint table named here is at least referenced again below when the REPLICAT group is added.

GGSCI (7add7fa1b405) 6> edit params GLOBALS

GGSCI (7add7fa1b405) 7> view param GLOBALS

GGSCHEMA ogg
CHECKPOINTTABLE ogg.checkpoint_table

Configuring the REP process

GGSCI (7add7fa1b405) 8> add  REPLICAT rep_kf,exttrail ./dirdat/kf,CHECKPOINTTABLE ogg.checkpoint_table
REPLICAT added.

GGSCI (7add7fa1b405) 9> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED
REPLICAT    STOPPED     REP_KF      00:00:00      00:00:03


GGSCI (7add7fa1b405) 10> edit params rep_kf

GGSCI (7add7fa1b405) 11> view param rep_kf

REPLICAT rep_kf
sourcedefs /usr/local/work/ogg/ogg.t_file_info_all
TARGETDB LIBFILE libggjava.so SET property=dirprm/kafka.props
REPORTCOUNT EVERY 1 MINUTES, RATE
GROUPTRANSOPS 10000
MAP stl_zb.t_file_info*, TARGET stl_zb.t_file_info*;
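
What the replicat parameters are doing, annotated (values unchanged, comments are mine; that the sourcedefs file came from defgen on the source side is an assumption based on standard OGG practice):

REPLICAT rep_kf                                                -- group name, matches the ADD REPLICAT above
sourcedefs /usr/local/work/ogg/ogg.t_file_info_all             -- source table definitions, normally generated by defgen on the source side
TARGETDB LIBFILE libggjava.so SET property=dirprm/kafka.props  -- load the Java adapter and point it at the Kafka handler properties
REPORTCOUNT EVERY 1 MINUTES, RATE                              -- write throughput figures to the report every minute
GROUPTRANSOPS 10000                                            -- group up to 10000 source operations into one target transaction
MAP stl_zb.t_file_info*, TARGET stl_zb.t_file_info*;           -- replicate every matching table, keeping the same names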

Configuring the Kafka-related parameters

[root@7add7fa1b405 ogg]# cd dirprm/
[root@7add7fa1b405 dirprm]# ls
globals.prm mgr.prm rep_kf.prm
[root@7add7fa1b405 dirprm]# vi kafka.props
[root@7add7fa1b405 dirprm]# cat kafka.props
gg.handlerlist=kafkahandler
gg.handler.kafkahandler.type=kafka
gg.handler.kafkahandler.KafkaProducerConfigFile=custom_kafka_producer.properties
gg.handler.kafkahandler.topicMappingTemplate=wjdata010
gg.handler.kafkahandler.format=json
gg.handler.kafkahandler.mode=op
gg.classpath=dirprm/:/usr/local/work/kafka_2.11-2.0.0/libs/*:/usr/local/work/ogg/:/usr/local/work/ogg/lib/*
[root@7add7fa1b405 dirprm]# vi custom_kafka_producer.properties
[root@7add7fa1b405 dirprm]# cat custom_kafka_producer.properties
bootstrap.servers=broker1:9092,broker2:9092,broker3:9092
acks=1
compression.type=gzip
reconnect.backoff.ms=1000
value.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
key.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
batch.size=102400
linger.ms=10000
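
What the handler properties mean, annotated (values unchanged from the file above, comments are mine; the comments sit on their own lines because the properties format does not allow trailing comments):

# name of the handler instance to load
gg.handlerlist=kafkahandler
# use the Kafka handler
gg.handler.kafkahandler.type=kafka
# producer settings; the file is found via gg.classpath (dirprm/ is on it)
gg.handler.kafkahandler.KafkaProducerConfigFile=custom_kafka_producer.properties
# fixed target topic for all records
gg.handler.kafkahandler.topicMappingTemplate=wjdata010
# JSON formatter for the change records
gg.handler.kafkahandler.format=json
# one Kafka message per operation (tx would batch per transaction)
gg.handler.kafkahandler.mode=op
# dirprm/ for the properties files, plus the Kafka client jars and the adapter jars
gg.classpath=dirprm/:/usr/local/work/kafka_2.11-2.0.0/libs/*:/usr/local/work/ogg/:/usr/local/work/ogg/lib/*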

Starting the MGR and REP processes

GGSCI (7add7fa1b405) 8> start mgr     
Manager started.

GGSCI (7add7fa1b405) 9> start REP_KF

Sending START request to MANAGER ...
REPLICAT REP_KF starting


GGSCI (7add7fa1b405) 10> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
REPLICAT    RUNNING     REP_KF      00:00:00      00:00:01
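
Once both processes report RUNNING, the usual follow-up checks in GGSCI are the checkpoint detail, the operation statistics and the process report (standard GGSCI commands; the statistics stay at zero until the source side actually ships data):

GGSCI> info replicat rep_kf, detail
GGSCI> stats replicat rep_kf
GGSCI> view report rep_kf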

Verification

Verification is still pending.
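
One way to verify end to end, once the source extract starts writing to the trail: consume the target topic with the console consumer shipped in the Kafka installation referenced by gg.classpath (the broker address below is the placeholder from custom_kafka_producer.properties; substitute the real broker list):

cd /usr/local/work/kafka_2.11-2.0.0
bin/kafka-console-consumer.sh --bootstrap-server broker1:9092 --topic wjdata010 --from-beginning

With format=json and mode=op in kafka.props, every replicated operation should arrive as a single JSON message.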

References

The pre-existing documentation