Problem with reconnecting under the same clientid

Hello! I'd like to ask about the following issue:
In EMQX, when a client logs in with a clientid that is already in use, the new connection kicks out the old one. However, in a plugin that subscribes the client to some topics via emqx_mgmt:subscribe(ClientId, ...) in the on_client_connected callback, it sometimes happens that after the old connection is kicked out, the topics subscribed by the new connection are cleaned up as well.
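For context, the subscribe-on-connect logic described above presumably looks something like the following. This is only a minimal sketch: the hook signature follows the EMQX plugin template, while the module name, topic, and the [{Topic, SubOpts}] shape of emqx_mgmt:subscribe/2's second argument are illustrative assumptions to be checked against your EMQX version.

-module(my_sub_plugin).            %% hypothetical module name
-export([load/0, on_client_connected/3]).

%% Register the 'client.connected' hook (plugin-template style).
load() ->
    emqx:hook('client.connected', {?MODULE, on_client_connected, []}).

%% Runs after CONNACK; subscribes the client to a per-user topic.
on_client_connected(#{clientid := ClientId, username := Username}, _ConnInfo, _Env) ->
    Topic = <<"user/", Username/binary, "/inbox">>,   %% illustrative topic
    %% The [{Topic, SubOpts}] shape is an assumption; check
    %% emqx_mgmt:subscribe/2 in your EMQX version.
    emqx_mgmt:subscribe(ClientId, [{Topic, #{qos => 1}}]),
    ok.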

Posts on this forum mention that the old and new connections may coexist for a short while. My guess is that, because the subscription is made by clientid, kicking out the old connection also cleans up the topics just subscribed for the new one.

My current workaround is: when subscribing, loop and check whether the client already has any subscribed topics. If it has none, the old connection has been cleaned up; if it still has subscriptions, the old client has not been kicked out yet, so wait a moment and check again.
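Roughly, that polling workaround could be sketched like this. Assumptions: emqx_broker:subscriptions/1 accepts a clientid in your EMQX version, and the retry count and delay are arbitrary example values.

%% Poll until the clientid has no leftover subscriptions (i.e. the old
%% channel has been cleaned up), then subscribe the new connection.
subscribe_when_old_gone(_ClientId, _TopicTables, 0) ->
    {error, old_session_still_present};
subscribe_when_old_gone(ClientId, TopicTables, Retries) ->
    case emqx_broker:subscriptions(ClientId) of
        [] ->
            %% Nothing left over: the old connection is gone, safe to subscribe.
            emqx_mgmt:subscribe(ClientId, TopicTables);
        _StillThere ->
            %% The old channel's subscriptions are still present; back off and retry.
            timer:sleep(200),
            subscribe_when_old_gone(ClientId, TopicTables, Retries - 1)
    end.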

Is there a more elegant solution to this: a way to detect directly that the old client has been kicked out, or a way to subscribe for the client so that kicking out the old connection does not affect the new connection's subscriptions?

Try connecting with clean_start=false and setting the session expiry time to a short value.
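For illustration, on the client side that suggestion might look like this with the emqtt Erlang client (MQTT v5; the clientid is the one from this thread, and the 30-second expiry is just an example value):

%% Connect with a persistent session and a short expiry, so a reconnect
%% resumes the old session (and its subscriptions) instead of starting empty.
{ok, Conn} = emqtt:start_link([{clientid, <<"MT_188888888886201f">>},
                               {clean_start, false},
                               {proto_ver, v5},
                               %% keep the session only briefly after disconnect
                               {properties, #{'Session-Expiry-Interval' => 30}}]),
{ok, _ConnAck} = emqtt:connect(Conn).

For MQTT 3.1.1 clients (clean_session=false) there is no such property in the protocol; as far as I know the session lifetime then comes from the broker's session_expiry_interval setting instead.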

Does clean_start=false mean that the old and new connections share the same session, so there is no need to re-subscribe?
Apart from that, is there a function I can call directly to check whether the old client has been kicked out? (I need the on_subscribe event for business-logic processing.)
Thanks for the reply :pray:

Not at the moment.


Hello, a question about plugin development on EMQX 5.4. I occasionally run into the following problem:
Account 18888888888 logs in with client_id=MT_188888888886201f. Because of network issues the client reconnects with the same client_id, and sometimes the old connection is not cleaned up.
As the log shows, the new connection's ip:port is 114.246.239.127:24736 and the old connection's is 124.127.69.37:41564.
What could cause the old connection with the same client_id not to be cleaned up?

{"time":1722447098038272,"level":"debug","msg":"-------------------do_subscribe before","mfa":"u_im_worker:do_subscribe/5(567)","username":"18888888888","port":24736,"pid":"<0.5090.0>","host":"{114,246,239,127}","cnt":1,"clientid":"MT_188888888886201f"}
{"time":1722447098038646,"level":"debug","msg":"-------------------do_subscribe not match,retry","mfa":"u_im_worker:do_subscribe/5(600)","pid":"<0.5090.0>","clients":["{{<<"MT_188888888886201f">>,<0.21264.0>},#{clientinfo => #{clientid => <<"MT_188888888886201f">>,enable_authn => true,is_bridge => false,is_superuser => false,listener => 'tcp:default',mountpoint => undefined,peerhost => {124,127,69,37},peerport => 41564,protocol => mqtt,sockport => 1883,username => <<"18888888888">>,zone => default},conn_state => connected,conninfo => #{clean_start => true,clientid => <<"MT_188888888886201f">>,conn_mod => emqx_connection,conn_props => #{},connected_at => 1722446930623,expiry_interval => 0,keepalive => 300,peername => {{124,127,69,37},41564},proto_name => <<"MQTT">>,proto_ver => 4,receive_maximum => 32,sockname => {{172,18,0,6},1883},socktype => tcp,username => <<"18888888888">>},node => 'emqx@172.18.0.6',session => #{await_rel_timeout => 300000,created_at => 1722446930623,id => <<0,6,30,142,109,113,187,133,95,172,0,0,83,16,0,0>>,is_persistent => false,retry_interval => 30000,subscriptions => #{},upgrade_qos => false},sockinfo => #{peername => {{124,127,69,37},41564},sockname => {{172,18,0,6},1883},sockstate => running,socktype => tcp},will_msg => undefined},[{recv_oct,4995},{recv_cnt,9},{send_oct,1127},{send_cnt,10},{send_pend,0},{subscriptions_cnt,3},{subscriptions_max,infinity},{inflight_cnt,0},{inflight_max,32},{mqueue_len,0},{mqueue_max,1000},{mqueue_dropped,0},{next_pkt_id,2},{awaiting_rel_cnt,0},{awaiting_rel_max,100},{recv_pkt,10},{recv_msg,4},{'recv_msg.qos0',0},{'recv_msg.qos1',4},{'recv_msg.qos2',0},{'recv_msg.dropped',0},{'recv_msg.dropped.await_pubrel_timeout',0},{send_pkt,10},{send_msg,1},{'send_msg.qos0',0},{'send_msg.qos1',1},{'send_msg.qos2',0},{'send_msg.dropped',0},{'send_msg.dropped.expired',0},{'send_msg.dropped.queue_full',0},{'send_msg.dropped.too_large',0},{mailbox_len,0},{heap_size,987},{total_heap_size,5172},{reductions,92374},{memory,42704}]}"]}
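In case it helps with diagnosing this, a hedged sketch of how one might check whether a stale channel is still registered for the clientid, assuming emqx_cm:lookup_channels/1 is available in your EMQX version (verify before relying on it):

%% Returns {stale, Pids} when more than one live channel is registered for
%% the same clientid, i.e. the old connection was never taken over/discarded.
check_stale_channels(ClientId) ->
    case emqx_cm:lookup_channels(ClientId) of
        []        -> no_channel;
        [_Single] -> ok;
        Pids      -> {stale, Pids}
    end.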