Error: ResourceRequest timed out


@clutariomark

What are you doing?

Testing the connection to a postgres amazon rds instance

const sequelize = new Sequelize('dbname', 'username', 'password', {
  host: 'example.rds.amazonaws.com',
  dialect: 'postgres'
});


sequelize
  .authenticate()
  .then(() => {
    console.log('Connection has been established successfully.');
  })
  .catch(err => {
    console.error('Unable to connect to the database:', err);
  });

What do you expect to happen?

The app should connect and log to the console: "Connection has been established successfully."

What is actually happening?

Unable to connect to the database: { TimeoutError: ResourceRequest timed out
    at ResourceRequest._fireTimeout (myapp\node_modules\generic-pool\lib\ResourceRequest.js:58:17)
    at Timeout.bound (myapp\node_modules\generic-pool\lib\ResourceRequest.js:8:15)
    at ontimeout (timers.js:380:14)
    at tryOnTimeout (timers.js:244:5)
    at Timer.listOnTimeout (timers.js:214:5) name: 'TimeoutError' }

Dialect: postgres
Database version: 9.6
Sequelize version: 4.2.1


@felixfbecker

I cannot reproduce the error. Obviously there was a timeout connecting to the DB server; it doesn't look like a bug in Sequelize to me.


@clutariomark

I tried the same endpoint and credentials using pg-promise and connected successfully. Could you tell me what DB server you tried it on? And of course, the app connects to my local DB.

@felixfbecker

Own applications running on AWS that use Sequelize.
I am sorry, but even if you can connect with pg-promise, I have no way to find out why it wouldn’t work with Sequelize without a repro.

@clutariomark

@afituri

I have the same exact issue!


@afituri


@pavelkrcil

Same issue here. The connection is established, but whenever any connection is lost the pool shrinks, and once all threads have died the application goes down. We had no trouble with this on Sequelize 3.x, so I don't think it's a DB connection problem. I also tried downgrading from 4.2.1 (latest) to 4.1.0, with the same result.


@afituri

I fixed the issue by increasing the acquire option of the pool configuration in Sequelize:

pool: {
    max: 5,
    min: 0,
    idle: 20000,
    acquire: 20000
}

I guess Amazon RDS runs a lot of checks when asking for resources.

Also I am running the Free Tier of Amazon RDS.


@swordfish444

I’m having the exact same issue! The above pool configuration did NOT work. In our production environment, in a 24h period this error is logged ~5,000 times. It’s been consistent for the last 27 days. I too am on the latest v4 of sequelize and using AWS RDS. It would be great to have this looked into more closely as it’s a serious risk in the latest release.


@swordfish444

@pavelkrcil

@philipdbrown

Yeah, I’m having this same issue on a DigitalOcean server as well. I’ve tried the configuration from above with no luck.

@tiboprea

I am encountering the same problem on a DigitalOcean server. Do you have any updates on this issue?

I have also tried swordfish444's solution, but it didn't work.

@philipdbrown

@tiboprea

Cheers @philipdbrown for the solution. I actually solved it in a crude way by using setTimeout.
It's not the most efficient method, but it should do for now until they fix it.

@ningacoding

thanks @afituri, your solution fixed the timeout error (for now).

Just curious: in which cases can Sequelize exceed those options and hit the timeout error again?

What are the "safe" and "optimal" options we need to set to stay clear of the timeout error?

I got the timeout error (with the default pool options) just by querying 700 items one by one: not all at the same time, but in the same process/thread.

What happens if I upload a massive Excel file with huge data, querying 10k or more items?

@sushantdhiman

Along with using long timeout and fix in #7924, this should be fixed

@hronro

@sushantdhiman
After upgrading Sequelize to v4.4.7, I still have the same problem.

@sushantdhiman

@foisonocean hmm, any way to reproduce this?

@hronro

@sushantdhiman I'm using MySQL, and when I insert 8000 items I get this error.
By the way, I also get the same problem when connecting to my local MySQL database.

@sushantdhiman

@foisonocean you can see our CI works properly. If anyone can submit a proper failing test case, I will try to resolve this. I can't help if I can't reproduce this issue :)

@hronro

@sushantdhiman

@nnsay

I'm having this same issue and I want to submit a PR to reproduce it, but I cannot reproduce it every time! My test setup:

Sequelize version: 4.4.3
PostgreSQL version: 9.4.10
Pool config:

pool: {
  max: 5,
  min: 1,
  idle: 10000,
  acquire: 10000,
  evict: 60000,
  handleDisconnects: true
}

@rlaace423

My reason for this issue was an unhandled transaction (a bunch of transactions with no commit or rollback).


@lcandiago

I have a .txt file with 21497 rows. I read this file with fs.readFile and, for each row, I sync with the Product table and insert new records or update existing ones.


A lot of records are inserted or updated in the Postgres database, but for the majority I get the ResourceRequest timed out error.

If I run the same file again, new records are inserted, but the error continues on other records.

Maybe it's a lot of processing at the same time and some records time out. But how can I solve this? I tried increasing the max pool value, but it doesn't work.

@iamakimmer

I suggest a bulk insert here, so instead of 21497 inserts it is 1 insert. I think the method is bulkCreate or bulkInsert.


@lcandiago

Suggest a bulk insert here, so instead of 21497 inserts it is 1 insert. I think method is bulkCreate or bulkInsert

I found the updateOnDuplicate option in the bulkCreate documentation, but it's only supported by MySQL, so I don't know how to get the upsert effect with bulkCreate. Do you have any idea?
Thanks!

@iamakimmer

Are you using postgres? There is this:

PostgreSQL since version 9.5 has UPSERT syntax with an ON CONFLICT clause, with syntax similar to MySQL.


@McFlat

Having the same issue. Using Postgres 9.6.

(node:61927) UnhandledPromiseRejectionWarning: TimeoutError: ResourceRequest timed out
    at ResourceRequest._fireTimeout (/Users/McFlat/indexer/node_modules/generic-pool/lib/ResourceRequest.js:62:17)
    at Timeout.bound (/Users/McFlat/indexer/node_modules/generic-pool/lib/ResourceRequest.js:8:15)
    at ontimeout (timers.js:466:11)
    at tryOnTimeout (timers.js:304:5)
    at Timer.listOnTimeout (timers.js:267:5)

Turns out Sequelize isn’t production ready yet after all. Doesn’t surprise me! Man will never be like the father in heaven, deny it all you want, but that’s the reality. youtube search «Shaking My Head Productions». Can’t depend on any of this tech crap, it’s all garbage from the devil himself, that’s the real nature of the problem if you think about it deep enough.


@FreakenK

@lcandiago If I read your code correctly, you are syncing the Product model for every row, and you are actually running all the queries in parallel. This is where your timeout comes from. Just put a console.log before const product = … and you'll see what I mean: your console.log will probably execute 21497 times before a handful of queries complete.

Sync your Product model beforehand and change your forEach to a plain old for loop such as this:

for(let i = 0; i < rows.length; ++i) {
   const product = {
      ...
   }
   try {
      await Product.upsert(product);
      console.log(...);
   }
   catch(...) {
   }
}

You will then run all queries sequentially and this should do the trick. You could try to run them in batches of 5, 10, 20, etc. to get better performance.
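The "batches of 5, 10, 20" idea can be sketched without any Sequelize dependency. Here runBatched is a hypothetical helper (my own, not from any library) that runs an array of async task functions in fixed-size chunks:

```javascript
// Run async task functions in sequential chunks of `size`.
// Each chunk runs in parallel; the next chunk starts only
// after the previous one has fully settled.
async function runBatched(tasks, size) {
  const results = [];
  for (let i = 0; i < tasks.length; i += size) {
    const chunk = tasks.slice(i, i + size).map(fn => fn());
    results.push(...await Promise.all(chunk));
  }
  return results;
}

// Example with dummy tasks standing in for Product.upsert calls.
const tasks = Array.from({ length: 12 }, (_, n) => async () => {
  await new Promise(r => setTimeout(r, 10)); // simulate query latency
  return n;
});

runBatched(tasks, 5).then(out => console.log(out.length)); // prints 12
```

With batches of 5 you never hold more than 5 pool connections at once, so the acquire timeout is bounded by the duration of one batch rather than the whole file.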

@weexpectedTHIS

@acodercat

@idangozlan

I'm experiencing the same issue; it started recently. MariaDB, Sequelize 4.42.0, mysql2 1.6.5.

@aschambers

I just wanted to post this to potentially help someone else out, because I was really struggling with this myself and could not figure out why I couldn't connect to the database on my server (Ubuntu 18.04) but could in dev. It turns out the inbound rules on Amazon were set to a specific IP; after editing the inbound rules to allow any IP address, I wasn't getting connection timeouts anymore. I thought I had set this up at the beginning because it's a test project, but apparently not.

@vkaracic

I had the same issue, but locally. Turns out I forgot to commit a transaction before returning a model instance.

@mkaufmaner

For those of you still using v4 of Sequelize, this problem may have been fixed by v4.44.1.

See PR: #11140
See Issue: #11139

@joshuat

Still experiencing the bug on v4.44.1.

It might be worth just migrating to v5 and to see if it’s been resolved there.

@polaroi8d

Still experiencing the bug on v4.44.1.

It might be worth just migrating to v5 and to see if it’s been resolved there.

@Joshua-Turner did you successfully migrate to v5, and is everything working well? Do you have any experience with v5?

@lancedikson

We have upgraded to v5 and it seems to have helped, for some reason. But it was such a shot in the dark.

@btroo

We upgraded to v5 as well. With no further Sequelize (i.e. pooling) configuration beyond that, we still get SequelizeConnectionAcquireTimeoutError: Operation timeout. Anecdotally, the timeouts seem to occur less often, but we haven't measured/monitored closely in our deployment.

Will update if we find a configuration or bugfix that eradicates our issues completely!

@btroo

Update:

Found this issue yesterday #10976

It turns out we had a case of a concurrent nested transaction stalling a parent transaction, which caused a connection to hang on the parent transaction. When this case was reached n times (where n = our max pool size), the pool filled with hanging connections and we'd get bursts of degraded service. We have a health checker that kills degraded instances, so this didn't cause complete downtime; instantiating new instances put the problem off until we reached that all-connections-in-pool-hanging state again.

We solved this by refactoring to split the concurrent nested transaction out from the parent (they weren't actually dependent on each other). Our initial code would have the nested transaction fail silently, leaving the parent waiting for it to finish. We also did a pass through the codebase to ensure that anywhere we made nested transaction calls, we passed transactions properly.

We were able to replicate the issue in staging doing something similar to the code in the issue above, so I’m fairly confident this was our issue; after shipping our fix below we aren’t seeing the same results.

Disclaimer: may or may not understand the full intricacies of connections/transactions/pooling, so take with a grain of salt!

@mickhansen

There have always been some issues with having N concurrent transactions where N is greater than your pool.max. I forget the specific case, but you could end up with code waiting for a connection in order to finish up the transactions, while the transactions held all the connections, etc.

@mickhansen

For reference, my company has been running Sequelize against RDS for 4+ years with no issues; whenever there were connection issues, it was because there actually were connection issues.

@derakhshanfar

As @btroo and @mickhansen said, this issue happens when the count of concurrent transactions is greater than your pool.max. I'll try to describe it in a simple way. Let's see what happens:

What is the connection pool?

When your application needs to retrieve data from the database, it creates a database connection. Creating this connection involves some overhead in time and machine resources for both your application and the database. Many database libraries and ORMs try to reuse connections when possible so that they do not incur the overhead of establishing a DB connection over and over again. The pool is the collection of these saved, reusable connections.
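To make the timeout mechanics concrete, here is a toy pool of my own (a sketch, not Sequelize's actual generic-pool implementation): it hands out at most `max` slots, and an acquire that waits longer than `acquireMs` rejects, which is the same shape of failure as "ResourceRequest timed out":

```javascript
// Toy connection pool: at most `max` slots; acquire() rejects
// after `acquireMs` if no slot frees up in time.
function createPool({ max, acquireMs }) {
  let inUse = 0;
  const waiters = [];
  return {
    acquire() {
      if (inUse < max) {
        inUse++;
        return Promise.resolve();
      }
      // Pool exhausted: wait in line, but give up after acquireMs.
      return new Promise((resolve, reject) => {
        const timer = setTimeout(() => {
          const i = waiters.indexOf(entry);
          if (i !== -1) waiters.splice(i, 1);
          reject(new Error('ResourceRequest timed out'));
        }, acquireMs);
        const entry = { resolve, timer };
        waiters.push(entry);
      });
    },
    release() {
      const next = waiters.shift();
      if (next) {
        clearTimeout(next.timer);
        next.resolve(); // hand the slot straight to a waiter
      } else {
        inUse--;
      }
    }
  };
}
```

With max: 1, a second acquire while the first slot is still held rejects after acquireMs, which is exactly what happens at scale when every pooled connection stays busy longer than the acquire timeout.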

What are concurrent transactions?

Simply put, if you run a CRUD command inside a nested transaction without passing the outer transaction along, you are using concurrent transactions. Let's assume our Sequelize config is like this:

  pool: {
    max: 7,
    min: 0,
    acquire: 30000,
    idle: 10000,
  },

we are trying to run this command:

let nested = function() {
  // here we are making a new transaction:
  return db.sequelize.transaction(t1 => {
    // note: the command below does NOT use the t1 transaction,
    // which means our DB (in my case Postgres) runs it in a separate
    // transaction, and this is where the problem lies!
    // every call to nested() creates one more concurrent transaction
    return db.user.findOne();
  });
};
const arr = [];
for (let i = 0; i < 7; i++) {
  arr.push(nested());
}
Promise.all(arr)
  .then(() => {
    console.log('done');
  })
  .catch(err => {
    console.log(err);
  });

The workaround is to run db.user.findOne() inside the t1 transaction, avoiding the concurrent transaction:
db.user.findOne({ transaction: t1 })
You can also use continuation-local-storage to pass the transaction automatically; see:
https://sequelize.org/master/manual/transactions.html#concurrent-partial-transactions
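The hang itself can be reproduced with plain promises and no database. Each task below grabs a pool slot and then, while still holding it, tries to acquire a second one (the "concurrent nested transaction"). With as many tasks as slots, every inner acquire starves. TinyPool here is my own stand-in for Sequelize's connection pool, not its real API, and it deliberately does not requeue timed-out waiters:

```javascript
// Minimal pool stand-in: `max` slots; once the pool is full,
// further acquires simply time out after `acquireMs`.
class TinyPool {
  constructor(max, acquireMs) {
    this.max = max;
    this.acquireMs = acquireMs;
    this.inUse = 0;
  }
  acquire() {
    if (this.inUse < this.max) {
      this.inUse++;
      return Promise.resolve();
    }
    return new Promise((_, reject) =>
      setTimeout(() => reject(new Error('timed out')), this.acquireMs));
  }
  release() { this.inUse--; }
}

async function nestedTask(pool) {
  await pool.acquire();        // outer "transaction" takes a slot
  try {
    await pool.acquire();      // inner query wants ANOTHER slot
    pool.release();
  } finally {
    pool.release();            // outer slot freed only once inner settles
  }
}

// 3 tasks, 3 slots: all outer acquires succeed, all inner ones starve.
const pool = new TinyPool(3, 100);
Promise.allSettled([1, 2, 3].map(() => nestedTask(pool)))
  .then(rs => console.log(rs.map(r => r.status))); // each status: 'rejected'
```

This is the all-connections-in-pool-hanging state described above: every slot is held by a transaction that is itself waiting for a slot.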

@ashutoshpw

In the pool settings, just keep the minimum count at about 2 to 4.
This solved the issue for me.

  pool: {
    max: 7,
    min: 2,
    acquire: 30000,
    idle: 10000,
  },

@hrabizadeh

We've bumped into the same error. It's weird, but we don't have any transactions in that piece of code. It looks like a bunch of asynchronous queries to the DB; two of them use sequelize.query() to fetch data and the remaining 8 go through models. I don't know exactly why, but calling one of the .query() calls after the others resolved somehow fixed the issue. So we first run those 9 queries simultaneously, and after they resolve we run the last one. Only this worked for us.

Thanks, Your solution solved my issue.

@papb mentioned this issue (Oct 6, 2022).

@intellix

I don’t understand how solving acquire timeout errors is fixed by reducing the timeout from the default of 60 seconds to 20?

#7884 (comment)

edit: because the default is actually 10 sec… not sure where I saw a default of 60

@val-pidburtnyi

I don’t understand how solving acquire timeout errors is fixed by reducing the timeout from the default of 60 seconds to 20?

#7884 (comment)

edit: because the default is actually 10 sec… not sure where I saw a default of 60

I guess the key here is that 2 connections are never released ("min: 2")…

@johnfazzietempus

Looks like the acquire timeout is 60 seconds in sequelize v5 and v6.

ewfian added two commits to leyserkids/sequelize that referenced this issue (Apr 11, 2022).

@yusufbayrk

I was also getting "Unhandled rejection SequelizeConnectionAcquireTimeoutError: Operation timeout". I tried everything, and the only solution that worked for me was removing the setInterval commands:

setInterval(
  () => sequelize.query("REFRESH MATERIALIZED VIEW tables"),
  2601e3
);

I removed these commands and the problem was solved. I hope this gets fixed properly soon.

My Express app running on node 6.11 with Sequelize 4.5.0 will sometimes throw TimeoutError: ResourceRequest timed out, on operations that should not be particularly expensive. We’re talking 5 rows of writes, each executed individually.

The database is an Amazon RDS MySQL instance, that hasn’t shown any problems connecting to our second API that is written in Ruby and is using ActiveRecord as an ORM.

I’m not sure how to begin diagnosing the problem, any ideas on what I should do next?

asked Aug 21, 2017 at 17:32


I faced the same problem with Sequelize using queries that take a long time.
Based on the GitHub issue (https://github.com/sequelize/sequelize/issues/8133#issuecomment-359993057), the fix for me was to increase the acquire time.
When I instantiate a new Sequelize, I do the following:

const sequelize = new Sequelize(
  config.get("dbConfig.dbName"),
  config.get("dbConfig.user"),
  config.get("dbConfig.password"),
  {
    dialect: "mysql",
    operatorsAliases: false,
    host: config.get("dbConfig.host"),
    pool: {
      max: 100,
      min: 0,
      idle: 200000,
      // @note https://github.com/sequelize/sequelize/issues/8133#issuecomment-359993057
      acquire: 1000000,
    }
  }
);

answered Feb 15, 2019 at 11:30


This solution works for me:

  pool: {
    max: 100,
    min: 0,
    // @note https://github.com/sequelize/sequelize/issues/8133#issuecomment-359993057
    acquire: 100*1000,
  }

answered Jan 22, 2019 at 10:13



I think I am the person best qualified to answer this question.

This issue once made my life hell, no matter how much I tweaked the configuration.

There are two solutions to this problem. The first is to set the pool config to something like this:

pool: {
      max: 50,
      min: 0,
      acquire: 1200000,
      idle: 1000000,
    }

Again, this will solve your problem for now, but when your load increases you will start to get the error again.

Coming to the second solution: look into your table schema and the queries you are running against it. If you are getting these errors, it means your queries are not optimized and are taking much longer than normal.
The best bet is to add indexes on the relevant columns, and you will never face this issue again.

answered Sep 29, 2020 at 8:08


Debugging ResourceRequest Timed out error in Sequelize

While working on a project that involves building a small analytics platform using PostgreSQL, I ran into the ResourceRequest timed out error. I'm writing this post to summarize my debugging and the solution I've implemented for the time being.

  • Context
  • Debugging
    • Hypothesis
    • Approaches to resolve the issue
    • Comparing the approaches
  • Conclusion

Context

I ran into the error while sending thousands of concurrent database requests via Sequelize. The code ran without error as long as the number of concurrent requests was below 1000. When I pushed the number to 10,000, I got the following error:

TimeoutError: ResourceRequest timed out

Debugging

I copied and pasted the error into Google and ended up at the following issue thread on the Sequelize repository. Among the different causes discussed there, one was firing too many database requests concurrently.

Hypothesis

The connection pool set up using Sequelize had the following configuration:

  pool: {
    max: 5,
    min: 0,
    idle: 10000,
    acquire: 20000
  }

Resulting in

  1. A connection pool with 5 reusable connections
  2. A connection in the pool will be qualified as idle if it is unused for 10 seconds or more
  3. The pool when invoked for a connection will wait a maximum of 20 seconds before throwing a Timeout error

Based on the pool configuration and the comments in the issue thread, I assumed that since I was firing thousands of requests concurrently, and each connection in the pool would only be released once the database query had completed, the requests fired later were hitting the ‘acquire’ timeout of 20 seconds and throwing TimeoutError: ResourceRequest timed out


Based on the above hypothesis the timeout error will be a function of

  1. Time it takes for each database query to complete
  2. Number of concurrent requests fired or number of requests waiting for a database connection from the pool
  3. Maximum time each request would wait for a database connection before throwing a timeout error

Approaches to resolve the issue

In this scenario, the database queries being made were similar, so I assumed that each request would take a similar duration to complete and release its connection back to the pool.

This left two approaches to resolve the issue

  1. Increase the timeout
  2. Limit the number of concurrent requests being fired

Comparing the approaches

Increase the ‘acquire’ timeout

In this approach, the ‘acquire’ time will be a function of the number of concurrent requests fired. It will have to be adjusted such that the ‘acquire’ time is greater than the time it takes for ‘x-1’ requests to complete across a pool of 5 connections.

t > (x-1)/5 * T

where

  • 'x' is the number of concurrent requests made
  • 't' is the 'acquire' timeout
  • 'T' is the time taken for each database query to complete
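The bound above can be expressed as a tiny helper (a sketch; the function and argument names are mine, and the pool size of 5 is the default from the configuration above):

```javascript
// Minimum `acquire` timeout (ms) so the last of `x` concurrent
// requests still gets a connection, given `poolMax` connections
// and `queryMs` per query: t > (x - 1) / poolMax * T.
function minAcquireMs(x, poolMax, queryMs) {
  return Math.ceil((x - 1) / poolMax * queryMs);
}

// e.g. 10,000 requests, a pool of 5, 50 ms per query:
console.log(minAcquireMs(10000, 5, 50)); // 99990
```

So even at a modest 50 ms per query, 10,000 concurrent requests through a 5-connection pool need an acquire timeout of roughly 100 seconds, far above the 20-second setting in the configuration above.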

The disadvantage of this approach is that it depends on the number of concurrent requests made; if the program were to exceed that number, it would run into the same error.

Limit the number of concurrent requests made

This approach involves batching database requests so that no more than 'n' requests are fired concurrently. As requests complete, more are added to the batch.

This approach allows setting the 'acquire' time according to the batch size 'n'.

I implemented this using the library p-limit which limits the number of concurrent promises.
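For reference, this is the shape of the p-limit approach; pLimit below is a hand-rolled stand-in with the same call style (`limit(fn)`), not the library's actual source:

```javascript
// Allow at most `concurrency` promises in flight; queue the rest.
function pLimit(concurrency) {
  let active = 0;
  const queue = [];
  const next = () => {
    active--;
    if (queue.length) queue.shift()();
  };
  return fn => new Promise((resolve, reject) => {
    const run = () => {
      active++;
      fn().then(resolve, reject).then(next, next);
    };
    active < concurrency ? run() : queue.push(run);
  });
}

// Usage: wrap each query so no more than 100 run at once.
const limit = pLimit(100);
const jobs = Array.from({ length: 10000 }, (_, i) =>
  limit(async () => i)); // the async fn stands in for a Sequelize query
Promise.all(jobs).then(out => console.log(out.length)); // prints 10000
```

With the in-flight count capped below the pool size, no request ever waits on the pool's acquire timeout; the queueing happens in application code instead.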

Conclusion

While the problem has been resolved by limiting the number of concurrent requests, this rests on the assumption that the database requests take a near-identical duration to complete. If that variability were to increase, the 'ResourceRequest timed out' error might resurface.

I am getting this error in my node program with sequelize.

Unhandled rejection TimeoutError: ResourceRequest timed out

Upon searching on the internet, I realized it's to do with the pool settings. I have the following pool setting in Sequelize.

  pool: {
    max: 5,
    min: 0,
    idle: 10000,
    acquire: 40000
  }

Any help is appreciated!

asked Oct 10, 2017 at 18:24



Recently, I encountered this problem while optimizing cube.js. After checking the official documentation, I found that the project explains the problem as well.

Main cause

The Redis concurrent-connection configuration (though it is not that simple: in testing, when the connection pool was unavailable or there were not enough connections, the entire service could become unavailable).

The official solution

Configure the connection pool according to your actual user query load.

  • Reference configuration:
CUBEJS_REDIS_POOL_MAX = 2000
CUBEJS_REDIS_POOL_MIN = 50
REDIS_URL = redis://127.0.0.1:6379 // reference connection; works with the ioredis and redis clients
  • Tuning notes: size the pool so normal load does not hit the maximum number of connections. Under stress testing, in some extreme cases, once the pool filled up completely the whole service became unavailable (even after connections were released). cube.js builds its Redis pool on generic-pool; the eviction settings below control how many idle resources are checked and released per run, so tuning them lets connections be released promptly and helps avoid the service becoming unavailable. See packages/cubejs-query-orchestrator/src/orchestrator/RedisPool.ts:

const opts = {
    min,
    max,
    acquireTimeoutMillis: 5000,
    idleTimeoutMillis: 3000,
    numTestsPerEvictionRun: 300, // can be tuned via configuration
    evictionRunIntervalMillis: 5000
};

Description

In the short term, the solution is to use a larger connection pool; later, run more stress tests and see whether this configuration can be added to the official code.
