Replacing ELK: ClickHouse + Kafka + FileBeat Is the Ultimate Combination (Part 3)


Modify the /etc/clickhouse-server/config.xml configuration file and change the log level to information (the default is trace):
<logger>
    <level>information</level>
</logger>
The server log files are written to the following locations:
Normal log:
/var/log/clickhouse-server/clickhouse-server.log
Error log:
/var/log/clickhouse-server/clickhouse-server.err.log
Check the installed ClickHouse version:
clickhouse-server --version
Connect with the client (you will be prompted for the password):
clickhouse-client --password
Stop, restart, or start the server:
sudo clickhouse stop
sudo clickhouse restart
sudo clickhouse start
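The version can also be checked from inside an open clickhouse-client session with a plain SQL call:
-- Returns the server version string, e.g. 22.5.2.53
SELECT version();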
Some of the problems encountered while deploying ClickHouse are listed below:
1) Creating the Kafka engine table in ClickHouse
CREATE TABLE default.kafka_clickhouse_inner_log ON CLUSTER clickhouse_cluster (
    log_uuid String,
    date_partition UInt32,
    event_name String,
    activity_name String,
    activity_type String,
    activity_id UInt16
) ENGINE = Kafka SETTINGS
    kafka_broker_list = 'kafka1:9092,kafka2:9092,kafka3:9092',
    kafka_topic_list = 'data_clickhouse',
    kafka_group_name = 'clickhouse_xxx',
    kafka_format = 'JSONEachRow',
    kafka_row_delimiter = '\n',
    kafka_num_consumers = 1;
Problem 1: the ClickHouse client cannot query the Kafka engine table
Direct select is not allowed. To enable use setting stream_like_engine_allow_direct_select. (QUERY_NOT_ALLOWED) (version 22.5.2.53 (official build))
Solution:
Start clickhouse-client with the --stream_like_engine_allow_direct_select 1 option:
clickhouse-client --stream_like_engine_allow_direct_select 1 --password xxxxx
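Once connected with that flag, a direct read from the Kafka engine table works. A minimal debugging query is shown below; note that selecting from a Kafka engine table consumes messages on behalf of the configured consumer group, which is why direct selects are disabled by default:
-- Debug read from the Kafka engine table;
-- this advances the offsets of the 'clickhouse_xxx' consumer group.
SELECT *
FROM default.kafka_clickhouse_inner_log
LIMIT 10;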
2) Creating the local table on each ClickHouse node
Problem 2: the macros required by the local table are not configured
Code: 62. DB::Exception: There was an error on [10.74.244.57:9000]: Code: 62. DB::Exception: No macro 'shard' in config while processing substitutions in '/clickhouse/tables/default/bi_inner_log_local/{shard}' at '50' or macro is not supported here. (SYNTAX_ERROR) (version 22.5.2.53 (official build)). (SYNTAX_ERROR) (version 22.5.2.53 (official build))
Create the local table (using the replicated, deduplicating ReplacingMergeTree engine):
CREATE TABLE default.bi_inner_log_local ON CLUSTER clickhouse_cluster (
    log_uuid String,
    date_partition UInt32,
    event_name String,
    activity_name String,
    credits_bring Int16,
    activity_type String,
    activity_id UInt16
) ENGINE = ReplicatedReplacingMergeTree('/clickhouse/tables/default/bi_inner_log_local/{shard}', '{replica}')
PARTITION BY date_partition
ORDER BY (event_name, date_partition, log_uuid)
SETTINGS index_granularity = 8192;
Solution: configure the macros section on each ClickHouse node, giving every node its own shard value (and its own replica name); the shard names must not be identical across nodes. For example, on one node:
<macros>
    <shard>01</shard>
    <replica>example01-01-1</replica>
</macros>
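After updating the configuration on every node, the effective values can be verified from each node's client by querying the built-in system.macros table:
-- Run on every node; each should report its own shard and replica values
SELECT * FROM system.macros;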
Problem 3: the replica node data already exists in ZooKeeper
Code: 253. DB::Exception: There was an error on: Code: 253. DB::Exception: Replica /clickhouse/tables/default/bi_inner_log_local/01/replicas/example01-01-1 already exists. (REPLICA_IS_ALREADY_EXIST) (version 22.5.2.53 (official build)). (REPLICA_IS_ALREADY_EXIST) (version 22.5.2.53 (official build))
Solution: open the ZooKeeper client, delete the related znodes, and then recreate the ReplicatedReplacingMergeTree table. This ensures that every ClickHouse node will consume data from its Kafka partitions.
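As an alternative to deleting the znodes by hand, the stale replica metadata can also be dropped through SQL (assuming the old local table has already been dropped on that node), using the shard path and replica name from the error above:
-- Remove the leftover replica metadata for shard 01 from ZooKeeper
SYSTEM DROP REPLICA 'example01-01-1' FROM ZK PATH '/clickhouse/tables/default/bi_inner_log_local/01';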
3) Creating the distributed table in ClickHouse
Create the distributed table (data is sharded by log_uuid, so rows with the same log_uuid are routed to the same shard, which enables deduplication during later merges):
CREATE TABLE default.bi_inner_log_all ON CLUSTER clickhouse_cluster AS default.bi_inner_log_local
ENGINE = Distributed(clickhouse_cluster, default, bi_inner_log_local, xxHash32(log_uuid));
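What is not shown in this part is the materialized view that glues the pipeline together by continuously reading from the Kafka engine table and writing into the distributed table. The sketch below is only illustrative: the view name kafka_to_bi_inner_log_mv is made up, and the column list follows the Kafka table above (columns it does not carry, such as credits_bring, would be filled with their defaults):
CREATE MATERIALIZED VIEW default.kafka_to_bi_inner_log_mv ON CLUSTER clickhouse_cluster
TO default.bi_inner_log_all AS
SELECT
    log_uuid,
    date_partition,
    event_name,
    activity_name,
    activity_type,
    activity_id
FROM default.kafka_clickhouse_inner_log;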