Search Results

Search found 624 results on 25 pages for 'postgres'.

Page 7/25

  • Sending one record from cursor to another function Postgres

    - by PylonsN00b
    FYI: I am completely new to using cursors... So I have one function that opens a cursor:

        CREATE FUNCTION get_all_product_promos(refcursor, cursor_object_id integer) RETURNS refcursor AS '
        BEGIN
            OPEN $1 FOR
                SELECT * FROM promos prom1
                JOIN promo_objects ON (prom1.promo_id = promo_objects.promotion_id)
                WHERE prom1.active = true
                  AND now() BETWEEN prom1.start_date AND prom1.end_date
                  AND promo_objects.object_id = cursor_object_id
                UNION
                SELECT prom2.promo_id FROM promos prom2
                JOIN promo_buy_objects ON (prom2.promo_id = promo_buy_objects.promo_id)
                LEFT JOIN promo_get_objects ON prom2.promo_id = promo_get_objects.promo_id
                WHERE (prom2.buy_quantity IS NOT NULL OR prom2.buy_quantity > 0)
                  AND prom2.active = true
                  AND now() BETWEEN prom2.start_date AND prom2.end_date
                  AND promo_buy_objects.object_id = cursor_object_id;
            RETURN $1;
        END;
        ' LANGUAGE plpgsql;

    Then in another function I call it and need to process it:

        ...
        --Get the promotions from the cursor
        SELECT get_all_product_promos('promo_cursor', this_object_id)
        updated := FALSE;
        IF FOUND THEN
            --Then loop through your results
            LOOP
                FETCH promo_cursor INTO this_promotion
                --Perform comparison logic - this is necessary as this logic is used in other contexts from other functions
                SELECT * INTO best_promo_results
                  FROM get_best_product_promos(this_promotion, this_object_id, get_free_promotion,
                                               get_free_promotion_value, current_promotion_value, current_promotion);
        ...

    So the idea here is to select from the cursor, loop using FETCH (NEXT is assumed, correct?), put the fetched record into this_promotion, and then send the record in this_promotion to another function. I can't figure out what type to declare this_promotion as in get_best_product_promos. Here is what I have:

        CREATE OR REPLACE FUNCTION get_best_product_promos(this_promotion record, this_object_id integer, get_free_promotion integer, get_free_promotion_value numeric(10,2), current_promotion_value numeric(10,2), current_promotion integer) RETURNS...

    It tells me: ERROR: plpgsql functions cannot take type record. OK, first I tried:

        CREATE OR REPLACE FUNCTION get_best_product_promos(this_promotion get_all_product_promos, this_object_id integer, get_free_promotion integer, get_free_promotion_value numeric(10,2), current_promotion_value numeric(10,2), current_promotion integer) RETURNS...

    because I saw some syntax in the Postgres docs showing a function created with an input parameter whose type was a table name. That works, but it has to be a table name, not a function :( I know I am so close; I was told to use cursors to pass records around, so I studied up. Please help.
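    One way past the "cannot take type record" error is to give the parameter a table's composite type. This is only a minimal sketch, under the assumption that the fields actually needed from the fetched row are the promos columns (every table automatically has a composite type of the same name that plpgsql functions can accept); the numeric return value is just a placeholder:

        CREATE OR REPLACE FUNCTION get_best_product_promos(this_promotion promos,
                                                           this_object_id integer)
        RETURNS numeric AS $$
        BEGIN
            -- fields of the composite parameter are referenced with dot notation
            RETURN this_promotion.promo_id;
        END;
        $$ LANGUAGE plpgsql;

    In the calling function the loop variable would then be declared as promos%ROWTYPE (or simply promos) and filled with FETCH promo_cursor INTO this_promotion, with EXIT WHEN NOT FOUND ending the loop.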

    Read the article

  • "date_part('epoch', now() at time zone 'UTC')" not the same time as "now() at time zone 'UTC'" in po

    - by sirlark
    I'm writing a web-based front end to a database (PHP/PostgreSQL) in which I need to store various dates/times. The times are always meant to be entered on the client side in local time, and displayed in local time too. For storage purposes, I store all dates/times as integers (UNIX timestamps) normalised to UTC. One particular field has a restriction that the timestamp filled in is not allowed to be in the future, so I tried this with a database constraint:

        CONSTRAINT not_future CHECK (timestamp-300 <= date_part('epoch', now() at time zone 'UTC'))

    The -300 is to give 5 minutes' leeway in case of slightly desynchronised times between browser and server. The problem is that this constraint always fails when submitting the current time. I've done some testing in the PostgreSQL client and found the following:

        SELECT now() -- returns correct local time
        SELECT date_part('epoch', now()) -- returns a unix timestamp at UTC (tested by feeding the value into the date function in PHP, correcting for its compensation to my time zone)
        SELECT date_part('epoch', now() at time zone 'UTC') -- returns a unix timestamp two time zone offsets to the west, e.g. I am at GMT+2 and I get a GMT-2 timestamp

    I've figured out that dropping the "at time zone 'UTC'" will obviously solve my problem, but my question is: if 'epoch' is meant to return a unix timestamp, which AFAIK is always meant to be in UTC, why would the 'epoch' of a time already in UTC be corrected? Is this a bug, or am I missing something about the defined/normal behaviour here?
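    One way to see the three values side by side (and how the session time zone affects them) is a single labelled query; nothing is assumed here beyond a session time zone other than UTC:

        SHOW timezone;
        SELECT now()                                        AS local_now,
               date_part('epoch', now())                    AS epoch_of_timestamptz,
               date_part('epoch', now() AT TIME ZONE 'UTC') AS epoch_of_naive_utc_timestamp;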

    Read the article

  • failing to establish connection between Postgres db and gwt

    - by sprasad12
    Hi, I am using Postgres and gwt 2.0 for one of my applications. I am facing problem connecting to the database. When I try to connect it gives "ClassNotFoundException". Here is what I get when I try to connect to database: java.lang.ClassNotFoundException: org.postgresql.Driver at java.net.URLClassLoader$1.run(URLClassLoader.java:200) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:188) at java.lang.ClassLoader.loadClass(ClassLoader.java:307) at com.google.appengine.tools.development.IsolatedAppClassLoader.loadClass(IsolatedAppClassLoader.java:151) at java.lang.ClassLoader.loadClass(ClassLoader.java:252) at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:169) at com.e.r.d.server.db.SQLManager.<init>(SQLManager.java:18) at com.e.r.d.server.db.EntityRelationManager.<init>(EntityRelationManager.java:10) at com.e.r.d.server.EntityRelationServiceImpl.<clinit>(EntityRelationServiceImpl.java:21) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at java.lang.Class.newInstance0(Class.java:355) at java.lang.Class.newInstance(Class.java:308) at org.mortbay.jetty.servlet.Holder.newInstance(Holder.java:153) at org.mortbay.jetty.servlet.ServletHolder.getServlet(ServletHolder.java:339) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:463) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1093) at com.google.appengine.api.blobstore.dev.ServeBlobFilter.doFilter(ServeBlobFilter.java:51) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084) at com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java:43) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084) at com.google.appengine.tools.development.StaticFileFilter.doFilter(StaticFileFilter.java:121) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:360) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405) at com.google.apphosting.utils.jetty.DevAppEngineWebAppContext.handle(DevAppEngineWebAppContext.java:70) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139) at com.google.appengine.tools.development.JettyContainerService$ApiProxyHandler.handle(JettyContainerService.java:352) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139) at org.mortbay.jetty.Server.handle(Server.java:313) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:506) at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:844) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:644) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:211) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:381) at 
org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:396) at org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:442) Mar 15, 2010 1:17:41 AM com.google.apphosting.utils.jetty.JettyLogger warn WARNING: Nested in javax.servlet.ServletException: init: java.lang.ExceptionInInitializerError at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at java.lang.Class.newInstance0(Class.java:355) at java.lang.Class.newInstance(Class.java:308) at org.mortbay.jetty.servlet.Holder.newInstance(Holder.java:153) at org.mortbay.jetty.servlet.ServletHolder.getServlet(ServletHolder.java:339) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:463) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1093) at com.google.appengine.api.blobstore.dev.ServeBlobFilter.doFilter(ServeBlobFilter.java:51) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084) at com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java:43) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084) at com.google.appengine.tools.development.StaticFileFilter.doFilter(StaticFileFilter.java:121) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:360) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405) at com.google.apphosting.utils.jetty.DevAppEngineWebAppContext.handle(DevAppEngineWebAppContext.java:70) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139) at com.google.appengine.tools.development.JettyContainerService$ApiProxyHandler.handle(JettyContainerService.java:352) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139) at org.mortbay.jetty.Server.handle(Server.java:313) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:506) at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:844) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:644) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:211) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:381) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:396) at org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:442) Caused by: java.security.AccessControlException: access denied (java.lang.RuntimePermission exitVM.1) at java.security.AccessControlContext.checkPermission(AccessControlContext.java:323) at java.security.AccessController.checkPermission(AccessController.java:546) at java.lang.SecurityManager.checkPermission(SecurityManager.java:532) at com.google.appengine.tools.development.DevAppServerFactory$CustomSecurityManager.checkPermission(DevAppServerFactory.java:166) at java.lang.SecurityManager.checkExit(SecurityManager.java:744) at java.lang.Runtime.exit(Runtime.java:88) at 
java.lang.System.exit(System.java:906) at com.e.r.d.server.db.SQLManager.<init>(SQLManager.java:22) at com.e.r.d.server.db.EntityRelationManager.<init>(EntityRelationManager.java:10) at com.e.r.d.server.EntityRelationServiceImpl.<clinit>(EntityRelationServiceImpl.java:21) ... 33 more Mar 15, 2010 1:17:41 AM com.google.apphosting.utils.jetty.JettyLogger warn WARNING: Nested in java.lang.ExceptionInInitializerError: java.security.AccessControlException: access denied (java.lang.RuntimePermission exitVM.1) at java.security.AccessControlContext.checkPermission(AccessControlContext.java:323) at java.security.AccessController.checkPermission(AccessController.java:546) at java.lang.SecurityManager.checkPermission(SecurityManager.java:532) at com.google.appengine.tools.development.DevAppServerFactory$CustomSecurityManager.checkPermission(DevAppServerFactory.java:166) at java.lang.SecurityManager.checkExit(SecurityManager.java:744) at java.lang.Runtime.exit(Runtime.java:88) at java.lang.System.exit(System.java:906) at com.e.r.d.server.db.SQLManager.<init>(SQLManager.java:22) at com.e.r.d.server.db.EntityRelationManager.<init>(EntityRelationManager.java:10) at com.e.r.d.server.EntityRelationServiceImpl.<clinit>(EntityRelationServiceImpl.java:21) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at java.lang.Class.newInstance0(Class.java:355) at java.lang.Class.newInstance(Class.java:308) at org.mortbay.jetty.servlet.Holder.newInstance(Holder.java:153) at org.mortbay.jetty.servlet.ServletHolder.getServlet(ServletHolder.java:339) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:463) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1093) at com.google.appengine.api.blobstore.dev.ServeBlobFilter.doFilter(ServeBlobFilter.java:51) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084) at com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java:43) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084) at com.google.appengine.tools.development.StaticFileFilter.doFilter(StaticFileFilter.java:121) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:360) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405) at com.google.apphosting.utils.jetty.DevAppEngineWebAppContext.handle(DevAppEngineWebAppContext.java:70) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139) at com.google.appengine.tools.development.JettyContainerService$ApiProxyHandler.handle(JettyContainerService.java:352) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139) at org.mortbay.jetty.Server.handle(Server.java:313) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:506) at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:844) at 
org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:644) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:211) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:381) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:396) at org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:442) Mar 15, 2010 1:17:41 AM com.google.apphosting.utils.jetty.JettyLogger warn WARNING: Nested in javax.servlet.ServletException: init: java.lang.NoClassDefFoundError: Could not initialize class com.e.r.d.server.EntityRelationServiceImpl at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at java.lang.Class.newInstance0(Class.java:355) at java.lang.Class.newInstance(Class.java:308) at org.mortbay.jetty.servlet.Holder.newInstance(Holder.java:153) at org.mortbay.jetty.servlet.ServletHolder.getServlet(ServletHolder.java:339) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:463) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1093) at com.google.appengine.api.blobstore.dev.ServeBlobFilter.doFilter(ServeBlobFilter.java:51) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084) at com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java:43) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084) at com.google.appengine.tools.development.StaticFileFilter.doFilter(StaticFileFilter.java:121) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:360) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405) at com.google.apphosting.utils.jetty.DevAppEngineWebAppContext.handle(DevAppEngineWebAppContext.java:70) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139) at com.google.appengine.tools.development.JettyContainerService$ApiProxyHandler.handle(JettyContainerService.java:352) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139) at org.mortbay.jetty.Server.handle(Server.java:313) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:506) at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:844) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:644) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:211) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:381) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:396) at org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:442) Mar 15, 2010 1:17:41 AM com.google.apphosting.utils.jetty.JettyLogger warn WARNING: /erd1/erpath java.lang.ExceptionInInitializerError at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at java.lang.Class.newInstance0(Class.java:355) at java.lang.Class.newInstance(Class.java:308) at org.mortbay.jetty.servlet.Holder.newInstance(Holder.java:153) at org.mortbay.jetty.servlet.ServletHolder.getServlet(ServletHolder.java:339) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:463) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1093) at com.google.appengine.api.blobstore.dev.ServeBlobFilter.doFilter(ServeBlobFilter.java:51) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084) at com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java:43) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084) at com.google.appengine.tools.development.StaticFileFilter.doFilter(StaticFileFilter.java:121) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:360) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405) at com.google.apphosting.utils.jetty.DevAppEngineWebAppContext.handle(DevAppEngineWebAppContext.java:70) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139) at com.google.appengine.tools.development.JettyContainerService$ApiProxyHandler.handle(JettyContainerService.java:352) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139) at org.mortbay.jetty.Server.handle(Server.java:313) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:506) at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:844) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:644) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:211) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:381) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:396) at org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:442) Caused by: java.security.AccessControlException: access denied (java.lang.RuntimePermission exitVM.1) at java.security.AccessControlContext.checkPermission(AccessControlContext.java:323) at java.security.AccessController.checkPermission(AccessController.java:546) at java.lang.SecurityManager.checkPermission(SecurityManager.java:532) at com.google.appengine.tools.development.DevAppServerFactory$CustomSecurityManager.checkPermission(DevAppServerFactory.java:166) at java.lang.SecurityManager.checkExit(SecurityManager.java:744) at java.lang.Runtime.exit(Runtime.java:88) at java.lang.System.exit(System.java:906) at com.e.r.d.server.db.SQLManager.<init>(SQLManager.java:22) at com.e.r.d.server.db.EntityRelationManager.<init>(EntityRelationManager.java:10) at com.e.r.d.server.EntityRelationServiceImpl.<clinit>(EntityRelationServiceImpl.java:21) ... 
33 more Mar 15, 2010 1:17:41 AM com.google.apphosting.utils.jetty.JettyLogger warn WARNING: Nested in java.lang.ExceptionInInitializerError: java.security.AccessControlException: access denied (java.lang.RuntimePermission exitVM.1) at java.security.AccessControlContext.checkPermission(AccessControlContext.java:323) at java.security.AccessController.checkPermission(AccessController.java:546) at java.lang.SecurityManager.checkPermission(SecurityManager.java:532) at com.google.appengine.tools.development.DevAppServerFactory$CustomSecurityManager.checkPermission(DevAppServerFactory.java:166) at java.lang.SecurityManager.checkExit(SecurityManager.java:744) at java.lang.Runtime.exit(Runtime.java:88) at java.lang.System.exit(System.java:906) at com.e.r.d.server.db.SQLManager.<init>(SQLManager.java:22) at com.e.r.d.server.db.EntityRelationManager.<init>(EntityRelationManager.java:10) at com.e.r.d.server.EntityRelationServiceImpl.<clinit>(EntityRelationServiceImpl.java:21) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at java.lang.Class.newInstance0(Class.java:355) at java.lang.Class.newInstance(Class.java:308) at org.mortbay.jetty.servlet.Holder.newInstance(Holder.java:153) at org.mortbay.jetty.servlet.ServletHolder.getServlet(ServletHolder.java:339) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:463) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1093) at com.google.appengine.api.blobstore.dev.ServeBlobFilter.doFilter(ServeBlobFilter.java:51) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084) at com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java:43) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084) at com.google.appengine.tools.development.StaticFileFilter.doFilter(StaticFileFilter.java:121) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:360) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405) at com.google.apphosting.utils.jetty.DevAppEngineWebAppContext.handle(DevAppEngineWebAppContext.java:70) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139) at com.google.appengine.tools.development.JettyContainerService$ApiProxyHandler.handle(JettyContainerService.java:352) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139) at org.mortbay.jetty.Server.handle(Server.java:313) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:506) at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:844) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:644) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:211) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:381) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:396) at 
org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:442) Mar 15, 2010 1:17:41 AM com.google.apphosting.utils.jetty.JettyLogger warn WARNING: /erd1/erpath java.lang.NoClassDefFoundError: Could not initialize class com.e.r.d.server.EntityRelationServiceImpl at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at java.lang.Class.newInstance0(Class.java:355) at java.lang.Class.newInstance(Class.java:308) at org.mortbay.jetty.servlet.Holder.newInstance(Holder.java:153) at org.mortbay.jetty.servlet.ServletHolder.getServlet(ServletHolder.java:339) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:463) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1093) at com.google.appengine.api.blobstore.dev.ServeBlobFilter.doFilter(ServeBlobFilter.java:51) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084) at com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java:43) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084) at com.google.appengine.tools.development.StaticFileFilter.doFilter(StaticFileFilter.java:121) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:360) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405) at com.google.apphosting.utils.jetty.DevAppEngineWebAppContext.handle(DevAppEngineWebAppContext.java:70) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139) at com.google.appengine.tools.development.JettyContainerService$ApiProxyHandler.handle(JettyContainerService.java:352) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139) at org.mortbay.jetty.Server.handle(Server.java:313) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:506) at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:844) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:644) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:211) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:381) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:396) at org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:442) I have included the Postgres jar file to the build path of the project. Question: Is there anywhere else i have to include the postgre? While creating GWT project i did not uncheck app engine. is it a problem? can someone tell me what i am doing wrong? Any input will be of great help. Thank you.

    Read the article

  • Problem connecting to postgres with Kohana 3 database module on OS X Snow Leopard

    - by Bart Gottschalk
    Environment: Mac OS X 10.6 Snow Leopard PHP 5.3 Kohana 3.0.4 When I try to configure and use a connection to a postgresql database on localhost I get the following error: ErrorException [ Warning ]: mysql_connect(): [2002] No such file or directory (trying to connect via unix:///var/mysql/mysql.sock) Here is the configuration of the database in /modules/database/config/database.php (note the third instance named 'pgsqltest') return array ( 'default' => array ( 'type' => 'mysql', 'connection' => array( /** * The following options are available for MySQL: * * string hostname * string username * string password * boolean persistent * string database * * Ports and sockets may be appended to the hostname. */ 'hostname' => 'localhost', 'username' => FALSE, 'password' => FALSE, 'persistent' => FALSE, 'database' => 'kohana', ), 'table_prefix' => '', 'charset' => 'utf8', 'caching' => FALSE, 'profiling' => TRUE, ), 'alternate' => array( 'type' => 'pdo', 'connection' => array( /** * The following options are available for PDO: * * string dsn * string username * string password * boolean persistent * string identifier */ 'dsn' => 'mysql:host=localhost;dbname=kohana', 'username' => 'root', 'password' => 'r00tdb', 'persistent' => FALSE, ), 'table_prefix' => '', 'charset' => 'utf8', 'caching' => FALSE, 'profiling' => TRUE, ), 'pgsqltest' => array( 'type' => 'pdo', 'connection' => array( /** * The following options are available for PDO: * * string dsn * string username * string password * boolean persistent * string identifier */ 'dsn' => 'mysql:host=localhost;dbname=pgsqltest', 'username' => 'postgres', 'password' => 'dev1234', 'persistent' => FALSE, ), 'table_prefix' => '', 'charset' => 'utf8', 'caching' => FALSE, 'profiling' => TRUE, ), ); And here is the code to create the database instance, create a query and execute the query: $pgsqltest_db = Database::instance('pgsqltest'); $query = DB::query(Database::SELECT, 'SELECT * FROM test')->execute(); I'm continuing to research a solution for this error but thought I'd ask to see if someone else has already found a solution. Any ideas are welcome. One other note is that I know my build of PHP can access this postgresql db since I'm able to manage the db using phpPgAdmin. But I have yet to determine what phpPgAdmin is doing differently to connect to the db than what Kohana 3 is attempting. Bart

    Read the article

  • Postgres user drop

    - by Grasper
    I am trying to drop a user: drop user testUser; I want to force this to work in a simple manner (Not a million calls)... How can I do this easily? I get this output: ERROR: role "testUser" cannot be dropped because some objects depend on it DETAIL: access to table main.tap_db_version access to table main.user_instance access to table main.target_type access to table main.status_code access to table main.state_space_profile access to table main.service_subscription access to table main.service_instance access to table main.sa_ordnance_weapon_type access to table main.operation access to table main.mission_class access to table main.map_symbol access to table main.ada_weapon_type access to table main.active_process access to table main.acft_type_00_only access to table main.abp_create_params access to table main.exercise access to table main.decl access to table main.data_set access to table main.cancellation_notice access to table main.ato_family_tree access to table main.apportionment_cat_cd access to table main.abp access to table main.alert_settings access to table main.alert_log access to table main.airspace_usage_category access to schema main access to view testUser.top_priority access to view testUser.target_ssm_msn_count access to view testUser.target_air_msn_count access to view testUser.sortie_sum access to view testUser.ref_info access to view testUser.preview_rmk_count access to view testUser.preview_pgm_las_count access to view testUser.preview_pgm_desi_count access to view testUser.preview_objective_count access to view testUser.preview_gfriend_count access to view testUser.preview_escort_msn_req access to view testUser.preview_chaff_data access to view testUser.preview_airmove_seg access to view testUser.preview_aircraft_total access to view testUser.offload_total access to view testUser.objective_count access to view testUser.fuel_planned access to view testUser.ew_data access to view testUser.dual access to view testUser.current_base_inventory access to view testUser.cell_total access to view testUser.asgn_sortie_sum access to view testUser.appor_sorties_planned access to view testUser.airmove_seg access to view testUser.aircraft_total access to view testUser.abp access to table testUser.req_msn_task access to table testUser.req_task_source_req access to table testUser.req_ssm_msn access to table testUser.req_ssm_source access to table testUser.req_msn access to table testUser.req_msn_warnings access to table testUser.req_air_msn access to table testUser.req_src_header access to table testUser.req_msn_ids access to table testUser.req_msn_comment access to table testUser.req_c2_msn access to table testUser.req_c2_source access to table testUser.req_ada_msn access to table testUser.req_ada_vertex access to table testUser.weather_forecast access to table testUser.weather_coords access to table testUser.weather_area access to table testUser.weapon_option access to table testUser.wag_activity access to table testUser.unit_remark access to table testUser.unit_location_turn access to table testUser.unit_iff access to table testUser.unit_coordination access to table testUser.unit_code access to table testUser.trace_point access to table testUser.tasking_agency access to table testUser.task_unit access to table testUser.target_type access to table testUser.tap_db_version access to table testUser.status_code access to table testUser.state_space_threat access to table testUser.state_space_profile access to table testUser.state_space access to table testUser.ssm_mission access to 
table testUser.spins_section_id access to table testUser.spins_codes access to table testUser.spins access to table testUser.unit_location access to table testUser.ship_target_request access to table testUser.service_subscription access to table testUser.service_instance access to table testUser.sa_ordnance_weapon_type access to table testUser.runway access to table testUser.restricted_codes access to table testUser.response_entity access to table testUser.residual_mission access to table testUser.request_objective access to table testUser.request and 194 other objects (see server log for list)
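    For what it's worth, a short sequence that usually avoids dropping the dependencies one by one (assuming PostgreSQL 8.2 or later, run in each database where the role owns anything, and with the postgres role as new owner only as an example) is:

        -- keep the objects by handing them to another role first...
        REASSIGN OWNED BY "testUser" TO postgres;
        -- ...or discard everything the role still owns, together with its privileges
        DROP OWNED BY "testUser";
        -- then the role itself can go
        DROP USER "testUser";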

    Read the article

  • postgres - remove whitespace from field?

    - by n00b0101
    I imported a bunch of data using pgloader, and am now seeing that there's a bunch of whitespace (both spaces and tabs) inside the fields. Is there a way to quickly update the fields to remove it from the beginning, the end, and the middle? I know there's TRIM, but that won't work for me on its own... As an added problem, I only want to replace double spaces with a single space, but there might be 5 or 6 spaces in a row, and I'd prefer not to have to rerun a replace query until they're all OK. I was looking at regexp_replace, but I'm not sure how to make certain that it removes whitespace from the middle of a string as well...
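    A single UPDATE per column can handle both requirements at once; the table and column names below are only placeholders, since the question doesn't give any:

        -- collapse any run of spaces/tabs to one space, then trim the ends
        UPDATE imported_data
        SET    some_field = btrim(regexp_replace(some_field, E'[ \t]+', ' ', 'g'));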

    Read the article

  • Multiple full joins in Postgres are slow

    - by blast83
    I have a program to use the IMDB database and am having very slow performance on my query. It appears that it doesn't use my where condition until after it materializes everything. I looked around for hints to use but nothing seems to work. Here is my query: SELECT * FROM name as n1 FULL JOIN aka_name ON n1.id = aka_name.person_id FULL JOIN cast_info as t2 ON n1.id = t2.person_id FULL JOIN person_info as t3 ON n1.id = t3.person_id FULL JOIN char_name as t4 ON t2.person_role_id = t4.id FULL JOIN role_type as t5 ON t2.role_id = t5.id FULL JOIN title as t6 ON t2.movie_id = t6.id FULL JOIN aka_title as t7 ON t6.id = t7.movie_id FULL JOIN complete_cast as t8 ON t6.id = t8.movie_id FULL JOIN kind_type as t9 ON t6.kind_id = t9.id FULL JOIN movie_companies as t10 ON t6.id = t10.movie_id FULL JOIN movie_info as t11 ON t6.id = t11.movie_id FULL JOIN movie_info_idx as t19 ON t6.id = t19.movie_id FULL JOIN movie_keyword as t12 ON t6.id = t12.movie_id FULL JOIN movie_link as t13 ON t6.id = t13.linked_movie_id FULL JOIN link_type as t14 ON t13.link_type_id = t14.id FULL JOIN keyword as t15 ON t12.keyword_id = t15.id FULL JOIN company_name as t16 ON t10.company_id = t16.id FULL JOIN company_type as t17 ON t10.company_type_id = t17.id FULL JOIN comp_cast_type as t18 ON t8.status_id = t18.id WHERE n1.id = 2003 Very table is related to each other on the join via foreign-key constraints and have indexes for all the mentioned columns. The query plan details: "Hash Left Join (cost=5838187.01..13756845.07 rows=15579622 width=835) (actual time=146879.213..146891.861 rows=20 loops=1)" " Hash Cond: (t8.status_id = t18.id)" " -> Hash Left Join (cost=5838185.92..13542624.18 rows=15579622 width=822) (actual time=146879.199..146891.833 rows=20 loops=1)" " Hash Cond: (t10.company_type_id = t17.id)" " -> Hash Left Join (cost=5838184.83..13328403.29 rows=15579622 width=797) (actual time=146879.165..146891.781 rows=20 loops=1)" " Hash Cond: (t10.company_id = t16.id)" " -> Hash Left Join (cost=5828372.95..10061752.03 rows=15579622 width=755) (actual time=146426.483..146429.756 rows=20 loops=1)" " Hash Cond: (t12.keyword_id = t15.id)" " -> Hash Left Join (cost=5825164.23..6914088.45 rows=15579622 width=731) (actual time=146372.411..146372.529 rows=20 loops=1)" " Hash Cond: (t13.link_type_id = t14.id)" " -> Merge Left Join (cost=5825162.82..6699867.24 rows=15579622 width=715) (actual time=146372.366..146372.472 rows=20 loops=1)" " Merge Cond: (t6.id = t13.linked_movie_id)" " -> Merge Left Join (cost=5684009.29..6378956.77 rows=15579622 width=699) (actual time=144019.620..144019.711 rows=20 loops=1)" " Merge Cond: (t6.id = t12.movie_id)" " -> Merge Left Join (cost=5182403.90..5622400.75 rows=8502523 width=687) (actual time=136849.731..136849.809 rows=20 loops=1)" " Merge Cond: (t6.id = t19.movie_id)" " -> Merge Left Join (cost=4974472.00..5315778.48 rows=8502523 width=637) (actual time=134972.032..134972.099 rows=20 loops=1)" " Merge Cond: (t6.id = t11.movie_id)" " -> Merge Left Join (cost=1830064.81..2033131.89 rows=1341632 width=561) (actual time=63784.035..63784.062 rows=2 loops=1)" " Merge Cond: (t6.id = t10.movie_id)" " -> Nested Loop Left Join (cost=1417360.29..1594294.02 rows=1044480 width=521) (actual time=59279.246..59279.264 rows=1 loops=1)" " Join Filter: (t6.kind_id = t9.id)" " -> Merge Left Join (cost=1417359.22..1429787.34 rows=1044480 width=507) (actual time=59279.222..59279.224 rows=1 loops=1)" " Merge Cond: (t6.id = t8.movie_id)" " -> Merge Left Join (cost=1405731.84..1414378.65 rows=1044480 width=491) 
(actual time=59121.773..59121.775 rows=1 loops=1)" " Merge Cond: (t6.id = t7.movie_id)" " -> Sort (cost=1346206.04..1348817.24 rows=1044480 width=416) (actual time=58095.230..58095.231 rows=1 loops=1)" " Sort Key: t6.id" " Sort Method: quicksort Memory: 17kB" " -> Hash Left Join (cost=172406.29..456387.53 rows=1044480 width=416) (actual time=57969.371..58095.208 rows=1 loops=1)" " Hash Cond: (t2.movie_id = t6.id)" " -> Hash Left Join (cost=104700.38..256885.82 rows=1044480 width=358) (actual time=49981.493..50006.303 rows=1 loops=1)" " Hash Cond: (t2.role_id = t5.id)" " -> Hash Left Join (cost=104699.11..242522.95 rows=1044480 width=343) (actual time=49981.441..50006.250 rows=1 loops=1)" " Hash Cond: (t2.person_role_id = t4.id)" " -> Hash Left Join (cost=464.96..12283.95 rows=1044480 width=269) (actual time=0.071..0.087 rows=1 loops=1)" " Hash Cond: (n1.id = t3.person_id)" " -> Nested Loop Left Join (cost=0.00..49.39 rows=7680 width=160) (actual time=0.051..0.066 rows=1 loops=1)" " -> Nested Loop Left Join (cost=0.00..17.04 rows=3 width=119) (actual time=0.038..0.041 rows=1 loops=1)" " -> Index Scan using name_pkey on name n1 (cost=0.00..8.68 rows=1 width=39) (actual time=0.022..0.024 rows=1 loops=1)" " Index Cond: (id = 2003)" " -> Index Scan using aka_name_idx_person on aka_name (cost=0.00..8.34 rows=1 width=80) (actual time=0.010..0.010 rows=0 loops=1)" " Index Cond: ((aka_name.person_id = 2003) AND (n1.id = aka_name.person_id))" " -> Index Scan using cast_info_idx_pid on cast_info t2 (cost=0.00..10.77 rows=1 width=41) (actual time=0.011..0.020 rows=1 loops=1)" " Index Cond: ((t2.person_id = 2003) AND (n1.id = t2.person_id))" " -> Hash (cost=463.26..463.26 rows=136 width=109) (actual time=0.010..0.010 rows=0 loops=1)" " -> Index Scan using person_info_idx_pid on person_info t3 (cost=0.00..463.26 rows=136 width=109) (actual time=0.009..0.009 rows=0 loops=1)" " Index Cond: (person_id = 2003)" " -> Hash (cost=42697.62..42697.62 rows=2442362 width=74) (actual time=49305.872..49305.872 rows=2442362 loops=1)" " -> Seq Scan on char_name t4 (cost=0.00..42697.62 rows=2442362 width=74) (actual time=14.066..22775.087 rows=2442362 loops=1)" " -> Hash (cost=1.12..1.12 rows=12 width=15) (actual time=0.024..0.024 rows=12 loops=1)" " -> Seq Scan on role_type t5 (cost=0.00..1.12 rows=12 width=15) (actual time=0.012..0.014 rows=12 loops=1)" " -> Hash (cost=31134.07..31134.07 rows=1573507 width=58) (actual time=7841.225..7841.225 rows=1573507 loops=1)" " -> Seq Scan on title t6 (cost=0.00..31134.07 rows=1573507 width=58) (actual time=21.507..2799.443 rows=1573507 loops=1)" " -> Materialize (cost=59525.80..63203.88 rows=294246 width=75) (actual time=812.376..984.958 rows=192075 loops=1)" " -> Sort (cost=59525.80..60261.42 rows=294246 width=75) (actual time=812.363..922.452 rows=192075 loops=1)" " Sort Key: t7.movie_id" " Sort Method: external merge Disk: 24880kB" " -> Seq Scan on aka_title t7 (cost=0.00..6646.46 rows=294246 width=75) (actual time=24.652..164.822 rows=294246 loops=1)" " -> Materialize (cost=11627.38..12884.43 rows=100564 width=16) (actual time=123.819..149.086 rows=41907 loops=1)" " -> Sort (cost=11627.38..11878.79 rows=100564 width=16) (actual time=123.807..138.530 rows=41907 loops=1)" " Sort Key: t8.movie_id" " Sort Method: external merge Disk: 3136kB" " -> Seq Scan on complete_cast t8 (cost=0.00..1549.64 rows=100564 width=16) (actual time=0.013..10.744 rows=100564 loops=1)" " -> Materialize (cost=1.08..1.15 rows=7 width=14) (actual time=0.016..0.029 rows=7 loops=1)" " -> Seq Scan on 
kind_type t9 (cost=0.00..1.07 rows=7 width=14) (actual time=0.011..0.013 rows=7 loops=1)" " -> Materialize (cost=412704.52..437969.09 rows=2021166 width=40) (actual time=3420.356..4278.545 rows=1028995 loops=1)" " -> Sort (cost=412704.52..417757.43 rows=2021166 width=40) (actual time=3420.349..3953.483 rows=1028995 loops=1)" " Sort Key: t10.movie_id" " Sort Method: external merge Disk: 90960kB" " -> Seq Scan on movie_companies t10 (cost=0.00..35214.66 rows=2021166 width=40) (actual time=13.271..566.893 rows=2021166 loops=1)" " -> Materialize (cost=3144407.19..3269057.42 rows=9972019 width=76) (actual time=65485.672..70083.219 rows=5039009 loops=1)" " -> Sort (cost=3144407.19..3169337.23 rows=9972019 width=76) (actual time=65485.667..68385.550 rows=5038999 loops=1)" " Sort Key: t11.movie_id" " Sort Method: external merge Disk: 735512kB" " -> Seq Scan on movie_info t11 (cost=0.00..212815.19 rows=9972019 width=76) (actual time=15.750..15715.608 rows=9972019 loops=1)" " -> Materialize (cost=207925.01..219867.92 rows=955433 width=50) (actual time=1483.989..1785.636 rows=429401 loops=1)" " -> Sort (cost=207925.01..210313.59 rows=955433 width=50) (actual time=1483.983..1654.165 rows=429401 loops=1)" " Sort Key: t19.movie_id" " Sort Method: external merge Disk: 31720kB" " -> Seq Scan on movie_info_idx t19 (cost=0.00..15047.33 rows=955433 width=50) (actual time=7.284..221.597 rows=955433 loops=1)" " -> Materialize (cost=501605.39..537645.64 rows=2883220 width=12) (actual time=5823.040..6868.242 rows=1597396 loops=1)" " -> Sort (cost=501605.39..508813.44 rows=2883220 width=12) (actual time=5823.026..6477.517 rows=1597396 loops=1)" " Sort Key: t12.movie_id" " Sort Method: external merge Disk: 78888kB" " -> Seq Scan on movie_keyword t12 (cost=0.00..44417.20 rows=2883220 width=12) (actual time=11.672..839.498 rows=2883220 loops=1)" " -> Materialize (cost=141143.93..152995.81 rows=948150 width=16) (actual time=1916.356..2253.004 rows=478358 loops=1)" " -> Sort (cost=141143.93..143514.31 rows=948150 width=16) (actual time=1916.344..2125.698 rows=478358 loops=1)" " Sort Key: t13.linked_movie_id" " Sort Method: external merge Disk: 29632kB" " -> Seq Scan on movie_link t13 (cost=0.00..14607.50 rows=948150 width=16) (actual time=27.610..297.962 rows=948150 loops=1)" " -> Hash (cost=1.18..1.18 rows=18 width=16) (actual time=0.020..0.020 rows=18 loops=1)" " -> Seq Scan on link_type t14 (cost=0.00..1.18 rows=18 width=16) (actual time=0.010..0.012 rows=18 loops=1)" " -> Hash (cost=1537.10..1537.10 rows=91010 width=24) (actual time=54.055..54.055 rows=91010 loops=1)" " -> Seq Scan on keyword t15 (cost=0.00..1537.10 rows=91010 width=24) (actual time=0.006..14.703 rows=91010 loops=1)" " -> Hash (cost=4585.61..4585.61 rows=245461 width=42) (actual time=445.269..445.269 rows=245461 loops=1)" " -> Seq Scan on company_name t16 (cost=0.00..4585.61 rows=245461 width=42) (actual time=12.037..309.961 rows=245461 loops=1)" " -> Hash (cost=1.04..1.04 rows=4 width=25) (actual time=0.013..0.013 rows=4 loops=1)" " -> Seq Scan on company_type t17 (cost=0.00..1.04 rows=4 width=25) (actual time=0.009..0.010 rows=4 loops=1)" " -> Hash (cost=1.04..1.04 rows=4 width=13) (actual time=0.006..0.006 rows=4 loops=1)" " -> Seq Scan on comp_cast_type t18 (cost=0.00..1.04 rows=4 width=13) (actual time=0.002..0.003 rows=4 loops=1)" "Total runtime: 147055.016 ms" Is there anyway to force the name.id = 2003 before it tries to join all the tables together? 
As you can see, the end result is only 4 tuples, so complex as the query is, it seems the join should be fast if the planner applied the name filter first and then used the available indexes.
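    Since the WHERE n1.id = 2003 filter throws away every row that is NULL-extended on the name side anyway, the FULL JOINs add nothing here that a LEFT JOIN chain wouldn't, and writing them as LEFT JOINs lets the planner apply the filter (and the person_id indexes) before joining. A trimmed sketch with just a few of the tables from the query; the remaining joins would follow the same pattern:

        SELECT *
        FROM name n1
        LEFT JOIN aka_name        ON n1.id = aka_name.person_id
        LEFT JOIN cast_info   t2  ON n1.id = t2.person_id
        LEFT JOIN person_info t3  ON n1.id = t3.person_id
        LEFT JOIN title       t6  ON t2.movie_id = t6.id
        WHERE n1.id = 2003;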

    Read the article

  • django: can't adapt error when importing data from postgres database

    - by Oleg Tarasenko
    Hi, I'm having strange error with installing fixture from dumped data. I am using psycopg2, and django1.1.1 silver:probsbox oleg$ python manage.py loaddata /Users/oleg/probs.json Installing json fixture '/Users/oleg/probs' from '/Users/oleg/probs'. Problem installing fixture '/Users/oleg/probs.json': Traceback (most recent call last): File "/opt/local/lib/python2.5/site-packages/django/core/management/commands/loaddata.py", line 153, in handle obj.save() File "/opt/local/lib/python2.5/site-packages/django/core/serializers/base.py", line 163, in save models.Model.save_base(self.object, raw=True) File "/opt/local/lib/python2.5/site-packages/django/db/models/base.py", line 495, in save_base result = manager._insert(values, return_id=update_pk) File "/opt/local/lib/python2.5/site-packages/django/db/models/manager.py", line 177, in _insert return insert_query(self.model, values, **kwargs) File "/opt/local/lib/python2.5/site-packages/django/db/models/query.py", line 1087, in insert_query return query.execute_sql(return_id) File "/opt/local/lib/python2.5/site-packages/django/db/models/sql/subqueries.py", line 320, in execute_sql cursor = super(InsertQuery, self).execute_sql(None) File "/opt/local/lib/python2.5/site-packages/django/db/models/sql/query.py", line 2369, in execute_sql cursor.execute(sql, params) File "/opt/local/lib/python2.5/site-packages/django/db/backends/util.py", line 19, in execute return self.cursor.execute(sql, params) ProgrammingError: can't adapt First I've checked similar issues on internet. This one seemed to be very related: http://code.djangoproject.com/ticket/5996, as my data has many non ASCII symbols But actually I've checked my django installation and it's ok there Could you advice what is wrong

    Read the article

  • Postgres query optimization

    - by hdx
    Hey guys, I'm trying to optimize this query to solve a duplicate user issue:

        SELECT userid, 'ismaster' AS name, 'false' AS propvalue
        FROM user
        WHERE userid NOT IN (SELECT userid FROM userprop WHERE name = 'ismaster');

    The problem is that the subselect after the NOT IN returns 120,000 records, and the query is taking forever. Any suggestions?
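    One rewrite worth trying is an anti-join: NOT EXISTS usually plans better than NOT IN over a large subquery and is also unaffected by NULL userid values in userprop. Table and column names are taken from the question; "user" is quoted because user is a reserved word:

        SELECT u.userid, 'ismaster' AS name, 'false' AS propvalue
        FROM "user" u
        WHERE NOT EXISTS (
            SELECT 1
            FROM userprop p
            WHERE p.userid = u.userid
              AND p.name   = 'ismaster'
        );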

    Read the article

  • Postgres update rule returning number of rows affected

    - by Lithium
    I have a view table which is a union of two separate tables (say table _A and table _B). I need to be able to update a row in the view table, and it seems the way to do this is through a 'view rule'. All entries in the view table have separate ids, so an id that exists in table _A won't exist in table _B. I created the following rule:

        CREATE OR REPLACE RULE view_update AS
        ON UPDATE TO viewtable DO INSTEAD (
            UPDATE _A SET foo = false WHERE old.id = _A.id;
            UPDATE _B SET foo = false WHERE old.id = _B.id;
        );

    If I do an update on table _B it returns the correct number of rows affected (1). However, if I update table _A it returns (0) rows affected even though the data was changed. If I swap the order of the updates then the same thing happens, but in reverse. How can I solve this problem so that it returns the correct number of rows affected? Thanks.

    Read the article

  • Postgres: Using function variable names in pgsql function

    - by Peter
    Hi I have written a pgsql function along the lines of what's shown below. How can I get rid of the $1, $2, etc. and replace them with the real argument names to make the function code more readable? Regards Peter CREATE OR REPLACE FUNCTION InsertUser ( UserID UUID, FirstName CHAR(10), Surname VARCHAR(75), Email VARCHAR(75) ) RETURNS void AS $$ INSERT INTO "User" (userid,firstname,surname,email) VALUES ($1,$2,$3,$4) $$ LANGUAGE SQL;
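    In a LANGUAGE SQL body of that vintage the positional $1, $2 references are required (recent PostgreSQL releases also accept parameter names there), but a plpgsql body can use the names directly. A sketch, with the parameters prefixed so they don't clash with the column names:

        CREATE OR REPLACE FUNCTION InsertUser (
            p_userid    UUID,
            p_firstname CHAR(10),
            p_surname   VARCHAR(75),
            p_email     VARCHAR(75)
        ) RETURNS void AS $$
        BEGIN
            INSERT INTO "User" (userid, firstname, surname, email)
            VALUES (p_userid, p_firstname, p_surname, p_email);
        END;
        $$ LANGUAGE plpgsql;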

    Read the article

  • How to call Postgres function returning SETOF record?

    - by Peter
    I have written the following function: -- Gets stats for all markets CREATE OR REPLACE FUNCTION GetMarketStats ( ) RETURNS SETOF record AS $$ BEGIN SELECT 'R approved offer' AS Metric, SUM(CASE WHEN M.MarketName = 'A+' AND M.Term = 24 THEN LO.Amount ELSE 0 end) AS MarketAPlus24, SUM(CASE WHEN M.MarketName = 'A+' AND M.Term = 36 THEN LO.Amount ELSE 0 end) AS MarketAPlus36, SUM(CASE WHEN M.MarketName = 'A' AND M.Term = 24 THEN LO.Amount ELSE 0 end) AS MarketA24, SUM(CASE WHEN M.MarketName = 'A' AND M.Term = 36 THEN LO.Amount ELSE 0 end) AS MarketA36, SUM(CASE WHEN M.MarketName = 'B' AND M.Term = 24 THEN LO.Amount ELSE 0 end) AS MarketB24, SUM(CASE WHEN M.MarketName = 'B' AND M.Term = 36 THEN LO.Amount ELSE 0 end) AS MarketB36 FROM "Market" M INNER JOIN "Listing" L ON L.MarketID = M.MarketID INNER JOIN "ListingOffer" LO ON L.ListingID = LO.ListingID; END $$ LANGUAGE plpgsql; And when trying to call it like this... select * from GetMarketStats() AS (Metric VARCHAR(50),MarketAPlus24 INT,MarketAPlus36 INT,MarketA24 INT,MarketA36 INT,MarketB24 INT,MarketB36 INT); I get an error: ERROR: query has no destination for result data HINT: If you want to discard the results of a SELECT, use PERFORM instead. CONTEXT: PL/pgSQL function "getmarketstats" line 2 at SQL statement I don't understand this output. I've tried using perform too, but I thought one only had to use that if the function doesn't return anything.
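    For comparison, here is a sketch of the same shape using RETURN QUERY with a declared result row (RETURN QUERY needs 8.3+, RETURNS TABLE needs 8.4+, and the numeric column types are an assumption about LO.Amount): it gives the SELECT a destination and spares the caller the column-definition list. Only two of the metric columns are shown:

        CREATE OR REPLACE FUNCTION GetMarketStats()
        RETURNS TABLE (Metric text, MarketAPlus24 numeric, MarketAPlus36 numeric) AS $$
        BEGIN
            RETURN QUERY
            SELECT 'R approved offer'::text,
                   SUM(CASE WHEN M.MarketName = 'A+' AND M.Term = 24 THEN LO.Amount ELSE 0 END),
                   SUM(CASE WHEN M.MarketName = 'A+' AND M.Term = 36 THEN LO.Amount ELSE 0 END)
            FROM "Market" M
            JOIN "Listing" L       ON L.MarketID = M.MarketID
            JOIN "ListingOffer" LO ON L.ListingID = LO.ListingID;
        END;
        $$ LANGUAGE plpgsql;

        -- called without a column definition list:
        -- SELECT * FROM GetMarketStats();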

    Read the article

  • postgres - regexp_replace in distinct clause?

    - by n00b0101
    Ok... changing the question here... I'm getting an error when I try this:

        SELECT COUNT ( DISTINCT mid, regexp_replace(na_fname, '\\s*', '', 'g'), regexp_replace(na_lname, '\\s*', '', 'g')) FROM masterfile;

    Is it possible to use regexp in a distinct clause like this? The error is this:

        WARNING: nonstandard use of \\ in a string literal
        LINE 1: ...CT COUNT ( DISTINCT mid, regexp_replace(na_fname, '\\s*', ''...
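    The WARNING itself is only about the backslashes in a plain string literal; an E'' escape string (or a single backslash with standard_conforming_strings = on) keeps the same pattern without the notice. Grouping the three expressions in a row constructor keeps them as a single DISTINCT argument, which COUNT accepts:

        SELECT COUNT(DISTINCT (mid,
                               regexp_replace(na_fname, E'\\s*', '', 'g'),
                               regexp_replace(na_lname, E'\\s*', '', 'g')))
        FROM masterfile;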

    Read the article

  • rake test not copying development postgres db with sequences

    - by Robert Crida
    I am trying to develop a Rails application on PostgreSQL using a sequence to increment a field instead of the default Ruby approach based on validates_uniqueness_of. This has proved challenging for a number of reasons: 1. This is a migration of an existing table, not a new table or column. 2. Using the parameter :default => "nextval('seq')" didn't work because it tries to set the value in parentheses. 3. I eventually got the migration working in two steps: change_column :work_commencement_orders, :wco_number_suffix, :integer, :null => false#, :options => "set default nextval('wco_number_suffix_seq')" execute %{ ALTER TABLE work_commencement_orders ALTER COLUMN wco_number_suffix SET DEFAULT nextval('wco_number_suffix_seq'); } This appears to have done the correct thing in the development database, where the schema looks like: wco_number_suffix | integer | not null default nextval('wco_number_suffix_seq'::regclass) However, the tests are failing with PGError: ERROR: null value in column "wco_number_suffix" violates not-null constraint : INSERT INTO "work_commencement_orders" ("expense_account_id", "created_at", "process_id", "vo2_issued_on", "wco_template", "updated_at", "notes", "process_type", "vo_number", "vo_issued_on", "vo2_number", "wco_type_id", "created_by", "contractor_id", "old_wco_type", "master_wco_number", "deadline", "updated_by", "detail", "elective_id", "authorization_batch_id", "delivery_lat", "delivery_long", "operational", "state", "issued_on", "delivery_detail") VALUES(226, '2010-05-31 07:02:16.764215', 728, NULL, E'Default', '2010-05-31 07:02:16.764215', NULL, E'Procurement::Process', NULL, NULL, NULL, 226, NULL, 276, NULL, E'MWCO-213', '2010-06-14 07:02:16.756952', NULL, E'Name 4597', 220, NULL, NULL, NULL, 'f', E'pending', NULL, E'728 Test Road; Test Town; 1234; Test Land') RETURNING "id" The explanation can be found by inspecting the schema of the test database: wco_number_suffix | integer | not null So what happened to the default? I tried adding test: template: smmt_ops_development to the database.yml file, which has the effect of issuing create database smmt_ops_test template = "smmt_ops_development" encoding = 'utf8'. I have verified that if I issue this statement myself it does in fact copy the default nextval, so clearly Rails is doing something after that to suppress it again. Any suggestions as to how to fix this? Thanks, Robert

    Read the article

  • Truncate all tables in postgres database

    - by Aaron
    I regularly need to delete all the data from my PostgreSQL database before a rebuild. How would I do this directly in SQL? At the moment I've managed to come up with a SQL statement that returns all the commands I need to execute: SELECT 'TRUNCATE TABLE ' || tablename || ';' FROM pg_tables WHERE tableowner='MYUSER'; but I can't see a way to execute them programmatically once I have them... Thanks in advance
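    A sketch of one way to run the generated commands in a single call, assuming PostgreSQL 9.0 or later for the anonymous DO block (on older versions the same loop can live in a throwaway plpgsql function); CASCADE is included so foreign-key ordering does not matter, with the caveat that it also truncates referencing tables:

        -- hypothetical helper: loop over the catalog and EXECUTE each TRUNCATE
        DO $$
        DECLARE
            r record;
        BEGIN
            FOR r IN SELECT schemaname, tablename
                     FROM pg_tables
                     WHERE tableowner = 'MYUSER'
            LOOP
                EXECUTE 'TRUNCATE TABLE '
                        || quote_ident(r.schemaname) || '.' || quote_ident(r.tablename)
                        || ' CASCADE';
            END LOOP;
        END;
        $$;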

    Read the article

  • Porting join from Oracle to Postgres

    - by Grasper
    INSERT INTO MISSION_OBJECTIVE( MSN_INT_ID, MO_INT_ID, MO_MSN_CLASS_NM, MO_MSN_CLASS_CD, MO_MSN_TYPE, MO_PRIORITY, MO_COMMENT, MO_START_DT, MO_END_DT, ASP_AIRSPACE_NM, MO_OBJ_LOCATION, MO_ALO_LEG_ID, MO_ALO_ARRIVE_LOC) SELECT '1025', '1', 'AIRDROP', 'ADP', 'LAPES', NULL, COALESCE( NULL, ' '), TO_TIMESTAMP( '1002260900', 'YYMMDDHH24MI'), TO_TIMESTAMP( '1002260915', 'YYMMDDHH24MI'), 'TRANSIT ALPHA', 'TRANSIT ALPHA', '1', 'TRANSIT ALPHA' FROM AIRSPACE ASP, apsmain .MISSION_CLASS MC WHERE ASP.ASP_AIRSPACE_NM(+)= 'TRANSIT ALPHA' AND MC.MCS_MISSION_CLASS_NAME= 'AIRDROP' AND 'TRANSIT ALPHA' IS NOT NULL Is that exactly the same as: INSERT INTO MISSION_OBJECTIVE( MSN_INT_ID, MO_INT_ID, MO_MSN_CLASS_NM, MO_MSN_CLASS_CD, MO_MSN_TYPE, MO_PRIORITY, MO_COMMENT, MO_START_DT, MO_END_DT, ASP_AIRSPACE_NM, MO_OBJ_LOCATION, MO_ALO_LEG_ID, MO_ALO_ARRIVE_LOC) SELECT '1025', '1', 'AIRDROP', 'ADP', 'LAPES', NULL, COALESCE( NULL, ' '), TO_TIMESTAMP( '1002260900', 'YYMMDDHH24MI'), TO_TIMESTAMP( '1002260915', 'YYMMDDHH24MI'), 'TRANSIT ALPHA', 'TRANSIT ALPHA', '1', 'TRANSIT ALPHA' FROM AIRSPACE ASP, apsmain .MISSION_CLASS MC WHERE ASP.ASP_AIRSPACE_NM = 'TRANSIT ALPHA' AND MC.MCS_MISSION_CLASS_NAME= 'AIRDROP' AND 'TRANSIT ALPHA' IS NOT NULL I just deleted the (+). The part that is confusing me is that ASP.ASP_AIRSPACE_NM is being right joined to a constant.
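    Not exactly: the (+) marks AIRSPACE as the optional side of an outer join, so the original still inserts its row even when no AIRSPACE row is named 'TRANSIT ALPHA', while the version without (+) inserts nothing in that case. A sketch of the join shape only (the long constant SELECT list is unchanged and trimmed to a single placeholder column here):

        -- hypothetical ANSI-join translation of the FROM/WHERE clause
        SELECT 'TRANSIT ALPHA' AS mo_obj_location   -- plus the other constants, unchanged
        FROM apsmain.MISSION_CLASS MC
        LEFT JOIN AIRSPACE ASP
               ON ASP.ASP_AIRSPACE_NM = 'TRANSIT ALPHA'
        WHERE MC.MCS_MISSION_CLASS_NAME = 'AIRDROP'
          AND 'TRANSIT ALPHA' IS NOT NULL;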

    Read the article

  • Problem using Embarcadero ER/Studio with Postgres (With Serial PK)

    - by Paul
    Hello ... I created a table using ER/Studio 8.0.3. The table has a serial PK (SERIAL/INTEGER in ER/Studio), but the physical model ER/Studio generates converts the SERIAL to INTEGER, so the generated table in the database has an integer PK without auto-increment functionality... Any ideas? Table generated: CREATE TABLE test ( id integer NOT NULL ) It should be: CREATE TABLE test ( id serial NOT NULL )
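    If the tool keeps emitting integer, a sketch of a manual workaround is to add the sequence and default by hand, since serial is essentially shorthand for exactly that (the sequence name below follows the usual table_column_seq convention and is an assumption):

        -- hypothetical post-generation fix-up
        CREATE TABLE test (
            id integer NOT NULL
        );
        CREATE SEQUENCE test_id_seq OWNED BY test.id;
        ALTER TABLE test ALTER COLUMN id SET DEFAULT nextval('test_id_seq');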

    Read the article

  • decodeURIComponent for postgres

    - by maletin
    I use decodeURIComponent and encodeURIComponent in JavaScript. Before I store this data in a UTF-8 PostgreSQL database, I should decode it: $my_data = pg_escape_string(utf8_encode($_POST['my_data'])); I'm looking for a PostgreSQL function that converts JavaScript-encoded data.
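    Core PostgreSQL has no built-in percent-decoding function, so the usual options are to decode on the application side (for example with PHP's rawurldecode before the insert) or to hand-roll a helper. A sketch of the latter, assuming the input is well-formed encodeURIComponent output and the database encoding is UTF-8:

        -- hypothetical helper: rebuild the bytes from %XX escapes, then decode as UTF-8
        CREATE OR REPLACE FUNCTION url_decode(input text) RETURNS text AS $$
        DECLARE
            bin  bytea := '';
            part text;
        BEGIN
            -- walk the string one %XX escape or one literal character at a time
            FOR part IN SELECT (regexp_matches(input, '%..|.', 'g'))[1] LOOP
                IF length(part) = 3 THEN
                    bin := bin || decode(substr(part, 2, 2), 'hex');
                ELSE
                    bin := bin || convert_to(part, 'UTF8');
                END IF;
            END LOOP;
            RETURN convert_from(bin, 'UTF8');
        END;
        $$ LANGUAGE plpgsql IMMUTABLE STRICT;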

    Read the article

  • postgres counting one record twice if it meets certain criteria

    - by Dashiell0415
    I thought that the query below would naturally do what I explain, but apparently not... My table looks like this:

        id | name  | g | partner | g2
         1 | John  | M | Sam     | M
         2 | Devon | M | Mike    | M
         3 | Kurt  | M | Susan   | F
         4 | Stacy | F | Bob     | M
         5 | Rosa  | F | Rita    | F

    I'm trying to get the id where either the g or g2 value equals 'M', but a record where both g and g2 are 'M' should return two lines, not one. My query is: $q = pg_query("SELECT id FROM mytable WHERE ( g = 'M' OR g2 = 'M' )"); So, for the above sample data, I'm trying to return: 1 1 2 2 3 4 But it always returns: 1 2 3 4
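    A sketch of one way to get the double counting, reusing the table and column names above: query each column separately and combine the results with UNION ALL, which keeps duplicates (plain UNION or a single OR collapses them to one row per id):

        -- hypothetical rewrite: a row with g = 'M' AND g2 = 'M' appears once per branch
        SELECT id FROM mytable WHERE g  = 'M'
        UNION ALL
        SELECT id FROM mytable WHERE g2 = 'M'
        ORDER BY id;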

    Read the article

  • Postgres: clear entire database before re-creating / re-populating from bash script

    - by Hoff
    Hi folks, I'm writing a shell script (which will become a cron job) that will: 1: dump my production database 2: import the dump into my development database Between steps 1 and 2, I need to clear the development database (drop all tables?). How is this best accomplished from a shell script? So far, it looks like this: #!/bin/bash time=`date '+%Y'-'%m'-'%d'` # 1. export(dump) the current production database pg_dump -U production_db_name > /backup/dir/backup-${time}.sql # missing step: drop all tables from development database so it can be re-populated # 2. load the backup into the development database psql -U development_db_name < backup/dir/backup-${time}.sql Many thanks in advance! Martin
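    One common approach for the missing step, run from the script via something like psql -d development_db_name -f clear.sql before the restore, is to drop and recreate the public schema; this sketch assumes every object lives in public and the role has the needed privileges:

        -- hypothetical clear.sql: removes every table, view, function, etc. in public
        DROP SCHEMA public CASCADE;
        CREATE SCHEMA public;
        GRANT ALL ON SCHEMA public TO public;

    An alternative with much the same effect is dropdb followed by createdb on the development database, at the cost of having to restore privileges and any non-public objects separately.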

    Read the article

  • Optimum size of transaction in Postgres?

    - by Joe
    I'm running a process that does a lot of updates (over 100,000) to a table. I have the choice between putting all the updates in a single transaction or committing a transaction every 1,000 or so. Ignore for the moment the case where a transaction fails and is aborted. I'm interested in the best transaction size for memory and speed efficiency.

    Read the article

< Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >