Question:
I have PostgreSQL 9.1.19 deployed on Windows Server 2008 R2.
For some reason it has started throwing exceptions with no information other than "An I/O error occurred while sending to the backend".
It only happens with certain tables.
If I use a clean database the error goes away, which leads me to think I am running out of memory or some other allocated resource that is causing the error.
I am connecting via JDBC.
Any ideas would be much appreciated.
Thanks.
Scott.
Best answer:
The problem was that I was running an IN query with too many parameters… caused by an ORM-generated query.
But it is a very unhelpful message that could have been more specific.
Postgres gives a more helpful message if, for example, you select too many columns or use too many characters. Maybe this is a JDBC driver issue too; I am not sure.
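When an ORM expands thousands of parameters into a single IN (...), one common workaround is to run the query in fixed-size chunks. A minimal sketch of that idea (the table and column names are made up for illustration, not taken from the original question):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class InChunks {
    /** Split a large id list into fixed-size chunks. */
    static <T> List<List<T>> chunks(List<T> items, int size) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < items.size(); i += size) {
            out.add(items.subList(i, Math.min(i + size, items.size())));
        }
        return out;
    }

    /** Build "... IN (?,?,...)" with n placeholders for a PreparedStatement. */
    static String inQuery(String table, String column, int n) {
        StringBuilder sb = new StringBuilder(
                "SELECT * FROM " + table + " WHERE " + column + " IN (");
        for (int i = 0; i < n; i++) sb.append(i == 0 ? "?" : ",?");
        return sb.append(')').toString();
    }

    public static void main(String[] args) {
        // Run the query once per chunk instead of with one huge IN list.
        for (List<Integer> chunk : chunks(Arrays.asList(1, 2, 3, 4, 5), 2)) {
            System.out.println(inQuery("mytable", "id", chunk.size()));
        }
    }
}
```

With pgjdbc it is also possible to bind the whole list as one array parameter (Connection.createArrayOf plus a WHERE id = ANY(?) predicate), which keeps the parameter count at one regardless of list size.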
Answer #1:
The error means the connection to the database was lost.
Check that the connection details are correct. If they are, check whether you can reach the database at all, or whether a firewall is blocking access.
I enabled query logging and managed to find the offending INSERT:
insert into "myschema"."mytable" ("custcode", "custcar", "custdob", "closed") values ('a33113f2-930c-47de-95a6-b9e07650468a', 'hellow world', '2020-02-02 01:00:00+00:00', 'f')
That is a table partitioned on the "custdob" column, with these partitions:
\d+ mytable
Table "myschema.mytable"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
------------+--------------------------+-----------+----------+----------------------------------------+----------+--------------+-------------
id | bigint | | not null | nextval('mytable_id_seq'::regclass) | plain | |
custcode | uuid | | not null | | plain | |
custcar | character varying | | not null | | extended | |
custdob | timestamp with time zone | | not null | | plain | |
closed | boolean | | not null | false | plain | |
Partition key: RANGE (custdob)
Partitions: mytable_201902_partition FOR VALUES FROM ('2019-02-01 00:00:00+00') TO ('2019-03-01 00:00:00+00'),
mytable_201903_partition FOR VALUES FROM ('2019-03-01 00:00:00+00') TO ('2019-04-01 00:00:00+00'),
mytable_201908_partition FOR VALUES FROM ('2019-08-02 00:00:00+00') TO ('2019-09-01 00:00:00+00'),
mytable_202003_partition FOR VALUES FROM ('2020-03-01 00:00:00+00') TO ('2020-04-01 00:00:00+00'),
mytable_202004_partition FOR VALUES FROM ('2020-04-01 00:00:00+00') TO ('2020-05-01 00:00:00+00'),
mytable_000000_partition DEFAULT
Notice the INSERT targets February's partition, but that partition is missing on my CI server, so the row should fall through to the DEFAULT partition. The issue is that the DEFAULT partition has this constraint:
"mytable_partition_check" CHECK (custdob < '2019-08-02 00:00:00+00'::timestamp with time zone)
So Postgres seems to be hitting a bug: it cannot insert a record for February while that constraint is in place. If I drop the constraint and re-issue the offending INSERT, it works.
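The workaround described above can be expressed as two DDL statements. The sketch below only builds and prints them (schema, table, and constraint names are taken from the \d+ output above; verify they match your setup before running them against a real database):

```java
public class PartitionDdl {
    /** DDL to create a monthly range partition of the parent table. */
    static String monthlyPartition(String parent, String name, String from, String to) {
        return "CREATE TABLE " + name + " PARTITION OF " + parent
                + " FOR VALUES FROM ('" + from + "') TO ('" + to + "')";
    }

    public static void main(String[] args) {
        // Option 1: drop the stale CHECK constraint on the DEFAULT partition.
        System.out.println("ALTER TABLE myschema.mytable_000000_partition "
                + "DROP CONSTRAINT mytable_partition_check");
        // Option 2 (cleaner): create the missing February 2020 partition so the
        // row routes to its proper partition instead of the DEFAULT one.
        System.out.println(monthlyPartition(
                "myschema.mytable", "myschema.mytable_202002_partition",
                "2020-02-01 00:00:00+00", "2020-03-01 00:00:00+00"));
    }
}
```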
System information:
- Windows 10 Enterprise 1909
- DBeaver version 7.3.5
Describe the problem you’re observing:
Not sure exactly what the issue is, but if a query takes longer than 2 minutes I get this error:
Error Log:
org.jkiss.dbeaver.model.sql.DBSQLException: SQL Error [08006]: An I/O error occurred while sending to the backend.
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.executeStatement(JDBCStatementImpl.java:133)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeStatement(SQLQueryJob.java:509)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.lambda$0(SQLQueryJob.java:440)
at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:168)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeSingleQuery(SQLQueryJob.java:427)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.extractData(SQLQueryJob.java:812)
at org.jkiss.dbeaver.ui.editors.sql.SQLEditor$QueryResultsContainer.readData(SQLEditor.java:3226)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.lambda$0(ResultSetJobDataRead.java:118)
at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:168)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.run(ResultSetJobDataRead.java:116)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetViewer$ResultSetDataPumpJob.run(ResultSetViewer.java:4516)
at org.jkiss.dbeaver.model.runtime.AbstractJob.run(AbstractJob.java:105)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)
Caused by: org.postgresql.util.PSQLException: An I/O error occurred while sending to the backend.
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:335)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:307)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:293)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:270)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:266)
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.execute(JDBCStatementImpl.java:327)
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.executeStatement(JDBCStatementImpl.java:130)
… 12 more
Caused by: java.io.EOFException
at org.postgresql.core.PGStream.receiveChar(PGStream.java:308)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1952)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:308)
… 20 more
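If something between DBeaver and the server (a firewall or NAT) silently kills TCP sessions after about two minutes, the pgjdbc driver's documented tcpKeepAlive and socketTimeout parameters may help; in DBeaver these can usually be entered on the connection's driver-properties tab. A sketch of the settings, assuming the stock PostgreSQL JDBC driver:

```java
import java.util.Properties;

public class KeepAliveProps {
    /** pgjdbc driver properties that help when an idle connection is silently
     *  dropped by a NAT or firewall mid-query. Both are documented pgjdbc
     *  connection parameters. */
    static Properties props() {
        Properties p = new Properties();
        p.setProperty("tcpKeepAlive", "true"); // send TCP keepalive probes
        p.setProperty("socketTimeout", "0");   // seconds; 0 disables the client read timeout
        return p;
    }

    public static void main(String[] args) {
        props().forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```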
I know this is a duplicate question, but I couldn't find a solution.
I have hosted my application on the Amazon EC2 cloud, and I am using PostgreSQL.
While running my application in the Amazon cloud I get the exception org.postgresql.util.PSQLException: An I/O error occured while sending to the backend.
The detailed stack-trace is :
org.postgresql.util.PSQLException: An I/O error occured while sending to the backend.
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:281)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:555)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:331)
at com.spy2k3.core.business.processor.ProcessorImpl.executeUpdate(ProcessorImpl.java:237)
at com.spy2k3.core.business.object.BusinessObject.executeUpdate(BusinessObject.java:54)
at com.spy2k3.core.business.object.LoginObject.deleteSession(LoginObject.java:127)
at com.spy2k3.core.business.processor.LoginProcessor.userValidation(LoginProcessor.java:79)
at com.spy2k3.core.business.processor.LoginProcessor.execute(LoginProcessor.java:30)
at com.spy2k3.core.business.processor.ProcessorImpl.process(ProcessorImpl.java:73)
at com.spy2k3.core.handler.request.RequestHandler.doService(RequestHandler.java:90)
at com.spy2k3.core.handler.AbstractHandler.doPost(AbstractHandler.java:25)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:709)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:237)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:157)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:214)
at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
at org.apache.catalina.core.StandardContextValve.invokeInternal(StandardContextValve.java:198)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:152)
at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:137)
at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:118)
at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:102)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.core.StandardValveContext.invokeNext(StandardValveContext.java:104)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:520)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:929)
at org.apache.coyote.tomcat5.CoyoteAdapter.service(CoyoteAdapter.java:160)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:799)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.processConnection(Http11Protocol.java:705)
at org.apache.tomcat.util.net.TcpWorkerThread.runIt(PoolTcpEndpoint.java:577)
at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:683)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.net.SocketException: Socket closed
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)
at org.postgresql.core.VisibleBufferedInputStream.readMore(VisibleBufferedInputStream.java:143)
at org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:112)
at org.postgresql.core.VisibleBufferedInputStream.read(VisibleBufferedInputStream.java:71)
at org.postgresql.core.PGStream.ReceiveChar(PGStream.java:269)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1700)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
... 37 more
Tests:
1. I connected to my remote PostgreSQL server via pgAdmin from my local system, and I could connect and execute queries.
2. I connected to my remote server via PuTTY, and could successfully execute queries.
EXAMPLE:
[root@ip-xx-xxx-xx-xxx bin]# psql -U myuser -d mydatabase
psql (9.2.4)
Type "help" for help.
mydatabase=# SELECT USERID FROM MY_MAST_LOGINSESSION WHERE SESSIONID='5DFD5D1E09D523695D6057SOMETHING';
 userid
--------
(0 rows)
3. When I connected to my remote database via JDBC from my application, it connected successfully, but queries take too long to execute there.
Can you suggest any way to track down this time delay?
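To narrow down where the delay occurs, it can help to time each JDBC call individually rather than the whole request. A small hypothetical helper (not from the original post) that wraps a single call, e.g. one ps.executeUpdate():

```java
public class TimedExec {
    /** Time a unit of work in milliseconds, to find which statement is slow. */
    static long timeMillis(Runnable work) {
        long t0 = System.nanoTime();
        work.run();
        return (System.nanoTime() - t0) / 1_000_000;
    }

    public static void main(String[] args) {
        // Stand-in for a JDBC call such as () -> ps.executeUpdate().
        long ms = timeMillis(() -> {
            try { Thread.sleep(25); } catch (InterruptedException ignored) { }
        });
        System.out.println("took " + ms + " ms");
    }
}
```

Logging these timings around each DELETE/UPDATE separates client-side waiting from actual server execution time (which the server's own log_min_duration_statement setting can confirm from the other side).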
UPDATE:
Digging deeper into the problem, I found the delay happens only for specific queries such as DELETE and UPDATE. Queries such as INSERT and SELECT execute fine.
What the DELETE and UPDATE queries have in common here is that they return nothing.
So the actual problem seems to be that the querying client (say, psql) is waiting for the database server's response, but for these queries the server returns nothing, so the client keeps waiting and throws an exception after the timeout.
But I was unable to find what to change to solve this problem.
Yes, you are right.
It is a problem related to multiple threads working on the same PreparedStatement.
I have modified the code to run with one single thread and it runs very smoothly.
Previously there was more than one thread working on the same prepared statements (they were created at server startup). Even though the code already included some synchronization, it seems I was losing control of the threads.
I will modify the code and make sure every thread has its own PreparedStatements.
I thought that sharing the same prepared statements was more efficient.
Thanks
Kris Jurka wrote:
This appears to be a thread safety related problem. I believe your code has one thread setting the parameter values and another thread executing the prepared statement at the same time. The executor does two passes through the parameter list, once to calculate a total message length and another time to send the values. If the contents change between the length calculation and the message sending we’ll have the wrong length and the whole client-server communication gets messed up. The attached test case demonstrates this failure mode.
I’m unsure how hard we should try to fix this, there are a couple of approaches:
1) Do nothing. It’s really a client problem and they shouldn’t be setting and executing at the same time.
2) Just copy the parameters at execution time so we get a consistent view of them. This may not be exactly what the user wants though if the order things actually execute is: execute, set, copy instead of execute, copy, set.
3) Go through all the PreparedStatement functions making most of them synchronized so that you cannot set while an execute is running.
Kris Jurka
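The fix the thread converges on, one statement per thread, can be sketched generically with a ThreadLocal. T stands in for java.sql.PreparedStatement here; in real code each thread should also use its own Connection (or a pool), since connections are no safer to share than statements:

```java
import java.util.function.Supplier;

/** One handle per thread. In the scenario above, T would be
 *  java.sql.PreparedStatement, created from that thread's own connection. */
public class PerThread<T> {
    private final ThreadLocal<T> local;

    public PerThread(Supplier<T> factory) {
        this.local = ThreadLocal.withInitial(factory);
    }

    public T get() { return local.get(); }

    public static void main(String[] args) throws InterruptedException {
        PerThread<Object> stmts = new PerThread<>(Object::new);
        Object mine = stmts.get();
        Object[] theirs = new Object[1];
        Thread t = new Thread(() -> theirs[0] = stmts.get());
        t.start();
        t.join();
        // Each thread sees its own instance, so no thread can overwrite
        // another's bound parameters between pgjdbc's two passes.
        System.out.println(mine != theirs[0]);
    }
}
```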
Sergi Vera wrote:
Hi!
I’ve been a little busy these days and was unable to work on this, but I’ve made the tcpdump session that you requested, and here are the results.
Kris Jurka wrote:
Sergi Vera wrote:
Thanks Kris for the help
Adding loglevel=2 didn’t add any more info to the logs, and it will not be easy to make a self-contained program, but I have attached the result of
The loglevel=2 logging will go to the driver’s System.out not into the server error log.
tcpdump -vvv -i lo -w pgsqlerror2.dat
This only captures the start of each packet so it doesn’t have the whole thing. Could you recapture with:
tcpdump -n -w pgsqlerror3.dat -s 1514 -i any tcp port 5432
This ups the capture size (-s 1514) and also filters out the unrelated UDP traffic you’ve got going on.
Browsing through the first failing pgsql data chunk, one can see that:
http://img139.imageshack.us/my.php?image=pantallazolm8.png
The last data has column length -1, which seems strange, even if I don’t know anything about this particular protocol.
-1 length indicates a NULL value, so that’s expected.
--
Sergio Vera
www.emovilia.com
I am using the Amazon RDS service to host a PostgreSQL instance that serves as the database for my Java application. After the application starts, it executes queries as expected until I stop interacting with it for a few minutes and then try to execute a query again. In that scenario, I get the following exception:
WARNING: Validating connection.
org.postgresql.util.PSQLException: An I/O error occurred while sending to the backend.
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:327)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:428)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:354)
at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:169)
at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:136)
at org.postgresql.jdbc.PgConnection.isValid(PgConnection.java:1311)
at org.apache.commons.dbcp2.DelegatingConnection.isValid(DelegatingConnection.java:897)
at org.apache.commons.dbcp2.PoolableConnection.validate(PoolableConnection.java:270)
at org.apache.commons.dbcp2.PoolableConnectionFactory.validateConnection(PoolableConnectionFactory.java:630)
at org.apache.commons.dbcp2.PoolableConnectionFactory.validateObject(PoolableConnectionFactory.java:648)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:472)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:349)
at org.apache.commons.dbcp2.PoolingDataSource.getConnection(PoolingDataSource.java:134)
at org.apache.commons.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:753)
Caused by: java.net.SocketException: Operation timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at org.postgresql.core.VisibleBufferedInputStream.readMore(VisibleBufferedInputStream.java:140)
at org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:109)
at org.postgresql.core.VisibleBufferedInputStream.read(VisibleBufferedInputStream.java:67)
at org.postgresql.core.PGStream.receiveChar(PGStream.java:288)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1962)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:300)
On Amazon RDS PostgreSQL I see the following errors:
2020-04-09 19:01:11 UTC::[]:LOG: could not receive data from client: Connection timed out
2020-04-09 19:04:27 UTC::@:[]:LOG: checkpoint starting: time
2020-04-09 19:04:28 UTC::@:[]:LOG: checkpoint complete: wrote 1 buffers (0.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.143 s, sync=0.001 s, total=0.154 s; sync files=1, longest=0.001 s, average=0.001 s; distance=16377 kB, estimate=16396 kB
2020-04-09 19:08:15 UTC::LOG: could not receive data from client: Connection timed out
Any idea of how to solve that issue?
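A frequent cause on RDS is an intermediate device dropping connections that sit idle for a few minutes. Besides pool-level validation (dbcp2's testOnBorrow is already firing in the trace above) and pgjdbc's tcpKeepAlive option, an application-level heartbeat that periodically runs a cheap query can keep the session alive. A hypothetical sketch, where the Runnable would issue something like SELECT 1 on a pooled connection:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class Heartbeat {
    /** Periodically run a cheap ping so the link never sits idle long enough
     *  for a NAT or firewall to silently drop it. */
    static ScheduledExecutorService start(Runnable ping, long period, TimeUnit unit) {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(ping, period, period, unit);
        return ses;
    }

    public static void main(String[] args) throws InterruptedException {
        // Demo with a print; in the real application the ping would execute
        // "SELECT 1" through the data source.
        ScheduledExecutorService ses =
                start(() -> System.out.println("ping"), 50, TimeUnit.MILLISECONDS);
        Thread.sleep(200);
        ses.shutdownNow();
    }
}
```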
I am testing some code which processes registration to a website. The java code is as follows (excerpt):
if (request.getParameter("method").equals("checkEmail")) {
    String email = request.getParameter("email");
    ResultSet rs = null;
    PreparedStatement ps = db.prepareStatement(query);
    ps.setString(1, email);
    rs = ps.executeQuery();
    if (rs.next()) {
        // email already present in DB
    } else {
        // proceed with registration.....
Most of the time the process executes without any problem, but I am getting an intermittent failure where the connection to the database closes. Every time it fails, it fails at the same point: when running the prepared statement above (which checks whether the submitted email is already in the database).
The Postgres version is 8.1.23.
Any help or suggestions appreciated. The stack trace is as follows (EDIT: sometimes the stack trace says caused by Stream Closed, and sometimes Socket Closed, as below):
13:53:00,973 ERROR Registration:334 - org.postgresql.util.PSQLException: An I/O error occured while sending to the backend.
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:283)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:479)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:367)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:271)
at Registration.doPost(Registration.java:113)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:567)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:190)
at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:291)
at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:769)
at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:698)
at org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.java:891)
at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:690)
at java.lang.Thread.run(Thread.java:595)
Caused by: java.net.SocketException: Socket closed
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)
at org.postgresql.core.VisibleBufferedInputStream.readMore(VisibleBufferedInputStream.java:135)
at org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:104)
at org.postgresql.core.VisibleBufferedInputStream.read(VisibleBufferedInputStream.java:73)
at org.postgresql.core.PGStream.ReceiveChar(PGStream.java:259)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1620)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:257)
... 22 more
Vingrad programmers forum
Forum -> Programming -> Databases -> PostgreSQL
Topic: I/O error while sending to the backend, what is this error?
skif18:
I/O error while sending to the backend. What is this error?