Solved: ClickHouse DB::Exception: Too many parts (600). Merges are processing significantly slower than inserts

ClickHouse client version 18.6.0.
Connected to ClickHouse server version 18.6.0 revision 54401.

Hello all,
I have been working on this issue for almost a week, and the problem still keeps coming back.

The following changes have been made in config.xml:


The input data size: 177GB
The total number of input files: 2050
Average size of each file: 110MB
Command used to import the data:
cat *.clickhouse | clickhouse-client --query="INSERT INTO default.MyTable(....) FORMAT RowBinary"
Question 1> Based on the above command, how many INSERT statements are generated?
Is this a bulk insert?

After the change, the system still reports the same errors as below.
Question 2> What else should I try? Increase the size again?

Thank you very much

2018.09.19 10:25:02.114885 [ 49 ] executeQuery: Code: 252, e.displayText() = DB::Exception: Too many parts (600). Merges are processing significantly slower than inserts., e.what() = DB::Exception (from (in query: INSERT INTO default.MyTable(timestamp, ...) FORMAT RowBinary), Stack trace:

  1. clickhouse-server(StackTrace::StackTrace()+0x16) [0x48abed6]
  2. clickhouse-server(DB::Exception::Exception(std::string const&, int)+0x1f) [0x2682dcf]
  3. clickhouse-server(DB::MergeTreeData::delayInsertOrThrowIfNeeded(Poco::Event*) const+0x3a7) [0x431ccc7]
  4. clickhouse-server(DB::MergeTreeBlockOutputStream::write(DB::Block const&)+0x2c) [0x4314cfc]
  5. clickhouse-server(DB::PushingToViewsBlockOutputStream::write(DB::Block const&)+0x31) [0x44b38a1]
  6. clickhouse-server(DB::AddingDefaultBlockOutputStream::write(DB::Block const&)+0x9a1) [0x44442c1]
  7. clickhouse-server(DB::SquashingBlockOutputStream::write(DB::Block const&)+0x363) [0x44b91a3]
  8. clickhouse-server(DB::CountingBlockOutputStream::write(DB::Block const&)+0x1d) [0x4468a3d]
  9. clickhouse-server(DB::TCPHandler::receiveData()+0x8d) [0x268b63d]
  10. clickhouse-server(DB::TCPHandler::receivePacket()+0x85) [0x268c435]
  11. clickhouse-server(DB::TCPHandler::readData(DB::Settings const&)+0x19b) [0x268c8eb]
  12. clickhouse-server(DB::TCPHandler::processInsertQuery(DB::Settings const&)+0x1d6) [0x268cc86]
  13. clickhouse-server(DB::TCPHandler::runImpl()+0x43c) [0x268d24c]
  14. clickhouse-server(DB::TCPHandler::run()+0x1c) [0x268e0ac]
  15. clickhouse-server(Poco::Net::TCPServerConnection::start()+0xf) [0x49eefaf]
  16. clickhouse-server(Poco::Net::TCPServerDispatcher::run()+0x110) [0x49ef610]
  17. clickhouse-server(Poco::PooledThread::run()+0x77) [0x4a4df27]
  18. clickhouse-server(Poco::ThreadImpl::runnableEntry(void*)+0x38) [0x4a4b488]
  19. clickhouse-server() [0x4d3fe1f]
  20. /lib64/ [0x7f912655ae25]
  21. /lib64/ [0x7f9125d8234d]

2018.09.19 10:25:02.162124 [ 49 ] ServerErrorHandler: Code: 99, e.displayText() = DB::Exception: Unknown packet 44 from client, e.what() = DB::Exception, Stack trace:

  1. clickhouse-server(StackTrace::StackTrace()+0x16) [0x48abed6]
  2. clickhouse-server(DB::Exception::Exception(std::string const&, int)+0x1f) [0x2682dcf]
  3. clickhouse-server(DB::TCPHandler::receivePacket()+0x2e0) [0x268c690]
  4. clickhouse-server(DB::TCPHandler::runImpl()+0x2b0) [0x268d0c0]
  5. clickhouse-server(DB::TCPHandler::run()+0x1c) [0x268e0ac]
  6. clickhouse-server(Poco::Net::TCPServerConnection::start()+0xf) [0x49eefaf]
  7. clickhouse-server(Poco::Net::TCPServerDispatcher::run()+0x110) [0x49ef610]
  8. clickhouse-server(Poco::PooledThread::run()+0x77) [0x4a4df27]
  9. clickhouse-server(Poco::ThreadImpl::runnableEntry(void*)+0x38) [0x4a4b488]
  10. clickhouse-server() [0x4d3fe1f]
  11. /lib64/ [0x7f912655ae25]
  12. /lib64/ [0x7f9125d8234d]

22 Answers

βœ”οΈAccepted Answer

  1. Each insert creates a folder in /var/lib/clickhouse/.../table_name/. Inside that folder there are 2 files per column: one with the data (compressed) and one with the index. Data is physically sorted by the primary key inside those files. Those folders are called 'parts'.
  2. ClickHouse merges those smaller parts into bigger parts in the background. It chooses parts to merge according to some rules. After merging two (or more) parts, one bigger part is created and the old parts are queued for removal. The settings you list allow fine-tuning the rules of merging parts. The goal of the merging process is to leave one big part per partition (or a few big parts per partition which are not worth merging because they are too big). Please check also that comment.
  3. If you create new parts too fast (for example, by doing lots of small inserts) and ClickHouse is not able to merge them at the proper speed (so new parts arrive faster than ClickHouse can merge them), then you get the exception 'Merges are processing significantly slower than inserts'. You can try to increase the limit, but you can then run into filesystem problems caused by too large a number of files / directories (like the inodes limit).
  4. If you insert into a lot of partitions at once, the problem is multiplied by the number of partitions affected by the insert.
  5. You can try to adjust the behaviour of ClickHouse with one of the listed settings, or with max_insert_block_size / max_block_size / insert_format_max_block_size / max_client_network_bandwidth. But the better solution is simply to insert data at the expected tempo. The expected tempo is: one insert per 1-2 seconds, each insert containing 10K-500K rows of data.
  6. So the proper solution for "Merges are processing significantly slower than inserts" is to adjust the number of inserts per second and the number of rows in each insert. Use batch inserts to combine small inserts into one bigger insert if data arrives row-by-row. Throttle huge inserts if you have too much data to insert at once. Don't change ClickHouse internals unless you really understand what they mean.
  7. If your data arrives faster than 500K rows per second, most probably you need more servers in the cluster to serve that traffic, not an adjustment of settings.
  8. The speed of background merges usually depends on storage speed, the compression settings used, and the MergeTree options, i.e. the merge algorithm (plain merge / aggregating / summing / collapsing etc.) and the sorting key used.
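The batching advice in points 5 and 6 can be sketched client-side. This is a minimal, hypothetical sketch (not a ClickHouse API): it buffers incoming rows and emits one bulk INSERT every `max_rows` rows or `max_seconds` seconds, so the server sees a few large inserts instead of many small ones. The `send` callable is an assumption; in practice you would wire it to your ClickHouse driver or to a `clickhouse-client` invocation.

```python
import time

class InsertBatcher:
    """Buffer rows and flush them as one bulk INSERT per batch."""

    def __init__(self, send, max_rows=100_000, max_seconds=2.0):
        self.send = send              # callable performing one bulk INSERT
        self.max_rows = max_rows      # flush after this many rows...
        self.max_seconds = max_seconds  # ...or after this much time
        self.rows = []
        self.last_flush = time.monotonic()

    def add(self, row):
        self.rows.append(row)
        if (len(self.rows) >= self.max_rows
                or time.monotonic() - self.last_flush >= self.max_seconds):
            self.flush()

    def flush(self):
        if self.rows:
            self.send(self.rows)      # one INSERT for the whole batch
            self.rows = []
        self.last_flush = time.monotonic()
```

With the defaults above, a steady row-by-row producer ends up doing roughly one insert of up to 100K rows every couple of seconds, which matches the tempo recommended in the answer.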

Other Answers:

The main requirement for inserting into ClickHouse: you should never send too many INSERT statements per second. Ideally, one insert per second / per few seconds.

So you can insert 100K rows per second, but only with one big bulk INSERT statement. When you send hundreds or thousands of INSERT statements per second to a *MergeTree table, you will always get some errors, and that cannot be changed by adjusting settings.

If you can't combine lots of inserts into one big bulk INSERT statement on your side, then you should create a Buffer table in front of the *MergeTree table.
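For reference, a Buffer table in front of the MergeTree table could look roughly like this. The table and column setup follows this thread (default.MyTable); the buffer name and all numeric thresholds are illustrative only, not a recommendation:

```sql
-- Illustrative sketch: rows inserted into MyTable_buffer are held in RAM
-- across 16 buffer layers and flushed to default.MyTable when, per layer,
-- any max threshold is reached (100 s, 1M rows, or 100 MB), and no earlier
-- than every min threshold (10 s, 10K rows, 10 MB).
CREATE TABLE default.MyTable_buffer AS default.MyTable
ENGINE = Buffer(default, MyTable, 16,
                10, 100,            -- min_time, max_time (seconds)
                10000, 1000000,     -- min_rows, max_rows
                10000000, 100000000 -- min_bytes, max_bytes
);
```

Clients then INSERT into default.MyTable_buffer, and ClickHouse itself turns the small inserts into larger flushes to default.MyTable. Note that a Buffer table keeps unflushed data in RAM only, so it can be lost on a hard server crash.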

Hello filimonov,

I have some trouble understanding the meaning of the following config.

/** If the table contains at least that many active parts, artificially slow down inserts into the table.
  * If there are more than this number of active parts, throw the 'Too many parts ...' exception. */

Question 1> Why does ClickHouse have to throw when the parts_to_throw_insert limit has been hit?

Question 2> What is the real meaning of 'parts' here? Is it true that the parts are the individual directories located in /var/lib/clickhouse/data/default/MyTable/...?
What if I increase parts_to_throw_insert to 100000? Does this cause any potential issues?
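For what it's worth, the count that parts_to_throw_insert is checked against can be watched directly in the system.parts table (the database and table names below are the ones from this thread):

```sql
-- Count active parts (those not yet merged away) per partition;
-- the parts_to_throw_insert limit applies to a single partition.
SELECT partition, count() AS active_parts
FROM system.parts
WHERE database = 'default' AND table = 'MyTable' AND active
GROUP BY partition
ORDER BY active_parts DESC;
```

Watching this while the import runs shows whether the background merges are keeping up or the active-part count is climbing toward the limit.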

Question 3> By decreasing the value in <max_bytes_to_merge_at_max_space_in_pool>107374182400</max_bytes_to_merge_at_max_space_in_pool>, will this cause a performance issue? My current workstation has 512 GB of memory. Since the machine has a large memory capacity, I really want to improve the insert/merge performance if that is possible.

Question 4> I have increased the value in <max_delay_to_insert>60</max_delay_to_insert>.
Will this cause a performance issue?

Thank you
