JDBC source connector holding multiple active sessions in Kafka Connect for Redshift?


We are using the Kafka JDBC source connector to connect to a Redshift database.

We created three connectors to pull data from three different tables into three different topics.

We observed that each connector holds multiple active sessions.

Even after we delete the connectors, the active sessions remain open and never close.
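For reference, the open sessions can be inspected through Redshift's STV_SESSIONS system view; below is a minimal sketch over JDBC, assuming the Redshift driver is on the classpath and using placeholder credentials.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class SessionCheck {
    public static void main(String[] args) throws SQLException {
        // Placeholder endpoint and credentials; substitute your own cluster details
        String url = "jdbc:redshift://ec2-32-93-33-143.us-east-2.compute.amazonaws.com:5444/testdb";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement st = conn.createStatement();
             // STV_SESSIONS lists one row per active session, including its process id
             ResultSet rs = st.executeQuery(
                     "SELECT user_name, process, starttime FROM stv_sessions ORDER BY starttime")) {
            while (rs.next()) {
                System.out.printf("%s pid=%d since %s%n",
                        rs.getString("user_name"), rs.getInt("process"),
                        rs.getTimestamp("starttime"));
            }
        }
    }
}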

Has anyone faced the same kind of issue?

Is there any configuration available to restrict this behaviour?

Does anyone know how to overcome this issue?

Here is the connector configuration I used:

{
  "name": "test-conn",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:redshift://ec2-32-93-33-143.us-east-2.compute.amazonaws.com:5444/testdb",
    "connection.user": "",
    "connection.password": "",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "key.converter.schemas.enable": "false",
    "errors.retry.delay.max.ms": "60000",
    "errors.tolerance": "none",
    "errors.log.enable": "true",
    "errors.log.include.messages": "true",
    "validate.non.null": "false",
    "poll.interval.ms": "300000",
    "batch.max.rows": "10000",
    "table.whitelist": "input_table",
    "schema.pattern": "input_schema",
    "mode": "timestamp+incrementing",
    "incrementing.column.name": "id",
    "timestamp.column.name": "time",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "false"
  }
}
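As a stopgap for the sessions that never close, Redshift's PG_TERMINATE_BACKEND(pid) can terminate a session by the process id reported in STV_SESSIONS; a minimal sketch, again with placeholder connection details:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class SessionTerminate {
    public static void main(String[] args) throws SQLException {
        // pid of the stale session to close, taken from the "process" column of STV_SESSIONS
        int stalePid = Integer.parseInt(args[0]);
        String url = "jdbc:redshift://ec2-32-93-33-143.us-east-2.compute.amazonaws.com:5444/testdb";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = conn.prepareStatement("SELECT PG_TERMINATE_BACKEND(?)")) {
            ps.setInt(1, stalePid);
            ps.execute();
        }
    }
}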


#1

How many tasks does each connector have?
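For reference, the task count can be read from the Connect worker's REST API via GET /connectors/{name}/tasks; a minimal sketch, assuming the worker listens on localhost:8083 and using the connector name from the question:

import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TaskCount {
    public static void main(String[] args) throws IOException, InterruptedException {
        // GET /connectors/{name}/tasks returns the tasks currently running for a connector
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8083/connectors/test-conn/tasks")).build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}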

#2

Hi @OneCricketeer, I updated the question with the configuration I use. I didn't explicitly set tasks.max (which defaults to 1).