Search Results

Search found 624 results on 25 pages for 'postgres'.

Page 22/25

  • Amavisd-new (2.6.4-3) failing to do "lookup_sql_dsn" when a large number of emails need to be scanned

    - by sandip
    Amavis fails to do the SQL lookup when a large number of emails are sent to it, throwing an error after scanning 40 to 50 emails:

        (!!)TROUBLE in process_request: sql exec: err=7, 57P01, DBD::Pg::st bind_param failed: FATAL: terminating connection due to administrator command\nSSL connection has been closed unexpectedly at (eval 103) line 164, <GEN50> line 5. at (eval 104) line 280, <GEN50> line 5.

    As soon as this error appears in the logs, Amavis stops and port 10024 is closed. Thinking it was an error in the SSL connection to the database (PostgreSQL 8.4), I disabled SSL in Postgres, but it was of no use. I have tried configuring Amavis on another server, but I got the same error again. This is happening on a production server, so I am not able to scan emails according to user settings. Does anybody have any idea what the source of this error may be? Please help. Thanks in advance.
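
    A quick way to see what is happening to those connections is to watch them from the Postgres side while mail is flowing. This is only a diagnostic sketch, and the database name 'amavis' is an assumption; adjust it to whatever lookup_sql_dsn points at:

        -- PostgreSQL 8.4: list amavisd's sessions; if they vanish mid-scan,
        -- something (a restart, a pooler, a kill script) is terminating them
        SELECT procpid, usename, backend_start, current_query
        FROM pg_stat_activity
        WHERE datname = 'amavis';  -- assumed database name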

  • What's the best cloud backup solution for a small-scale server environment?

    - by nbv4
    I have a server that runs a postgres database that contains about 200MB of data. Currently I have a cron job set up on my home computer which:

    - SSHes into my server
    - runs a remote script which makes a backup of the database
    - SCPs that dump over to my local hard drive for storage (each dump is 20MiB)
    - does this every six hours (one month of backups is roughly 2GiB)

    The problem with this setup is that if my local machine goes down for whatever reason, no backups will be made. Also, I can't run the cron job from the server, because I can't have it SCP'd to my local machine from my server (firewalls and all that crap). My local machine is running Ubuntu 10.04, and my server is Ubuntu 9.10 server edition. I looked into Ubuntu One, but currently it's GUI-only. I also looked into Dropbox, but it's a pain in the ass to get set up on Linux without GUI support. Amazon S3 looks good, but it's not free (though dirt cheap). Is there any other alternative I should look into? I'd prefer something where I can just have my script dump the database into a directory and have the backup service 'watch' that folder and sync accordingly. Maybe I can also have my local machine sync to the cloud backup, so I have even more redundancy, plus easy access to my backups for use in testing. Edit: My server is a VPS, so whatever solution I end up using has to be 100% software-based.

  • UIDs for service users in Mac OS X

    - by LaC
    Some third-party servers should be run under a special user for security reasons (eg, PostgreSQL is typically run by "postgres"). Of course, these service users should not show up in the Mac OS X login window. I know how to create hidden users using dscl or dsimport, but I'm wondering what the best policy is for assigning UIDs (and matching GIDs). Apple's documentation states that UIDs from 0 to 100 are reserved (pg. 69), but OS X comes with several special users and groups outside that range. I used to use IDs from 401 onwards for services, but I noticed that OS X 10.6 has started using that range for groups created by the Sharing pane in System Preferences. What is the recommended ID range to use for third-party services, then? Perhaps I should just use IDs in the 500 range, since all that is needed to hide a user in Snow Leopard is setting his password to "*"? Also, most of Apple's services have names starting with an underscore, with an alias sans underscore; eg, _sandbox and sandbox. Is there any special significance to this? Should I do the same for my services?

  • How do I automatically connect my client to an ODBC data source on another machine with a dynamic IP?

    - by Kdansky
    At the customer's site, we've got a postgres DB on a server and a few clients. We connect them through ODBC drivers, and all machines run Windows (usually XP). Now we have a few annoying issues:

    - The client "forgets" some flags in the ODBC drivers, such as ByteA as LO. Every time anything changes, we have to reset that, and type in the password, and sometimes even the IP of the server.
    - On x64 machines running Windows 7, configuring this is a pain, as the system settings dialogue will only show 64-bit connections by default.
    - And most importantly: if the server changes IP because the customer restarts or replaces a switch, all connections are lost. Annoyingly, this cannot be fixed by just correcting the IP; rather, we have to check every single place (even pg_hba.conf) because all the settings magically disappear.

    Our customers are often very small companies, where "server" means "that one PC in the other room", not "Oracle mainframe in the dungeon", so we don't want to rely on them not restarting switches. Is there a better way than relying on these really unstable settings? Are these settings somewhere in a file which I could edit manually, to make fixing them easier?

  • How do I connect remotely to SQL Server from Windows client?

    - by humble_coder
    Hi all, I'm having a bit of an issue connecting to SQL Server remotely from Windows. I've verified that all of my settings are correct via SQL Server Management Studio Express and SQL Server Configuration Manager. I can connect remotely using ODBC drivers from other OSes (e.g. OS X, Linux, etc). However, when I connect with the same credentials from a remote Windows machine using "SQL Server" as the driver, I am told that the system cannot connect. I've tried creating an ODBC Data Source and I get the same error:

        Connection failed:
        SQLState: '01000'
        SQL Server Error: 14
        [Microsoft][ODBC SQL Server Driver][TCP/IP Sockets]ConnectionOpen(InvalidInstance()).
        Connection failed:
        SQLState: '08001'
        SQL Server Error: 14
        [Microsoft][ODBC SQL Server Driver][TCP/IP Sockets]Invalid Connection

    From the non-Windows machines I can use the IP address of the SQL Server just fine. However, on the remote Windows machine, neither the IP address nor the named instance works. FYI, I can create an ODBC Data Source using the named instance on the machine actually running SQL Server (but this is, of course, nothing special -- just proof that it isn't completely hosed). One interesting note: if I use SQL Studio 2005 from a Windows client machine, I can use the IP address to connect remotely. Still, the whole reason I bring this up is that I need to use a software package I've written to connect to SQL Server remotely from Windows machines as well. Previously the solution was only needed to transfer data from SQL Server into a Postgres or MySQL database on non-Windows machines (due to DBA preference). However, now they also want to move the data from the legacy software to MySQL even on Windows. Any assistance would be most appreciated. Feel free to provide a full example connection string. Best
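
    For reference, a typical ODBC connection string for the stock "SQL Server" driver looks like the sketch below; the host, port, database, and credentials are placeholders, not values from the original post:

        Driver={SQL Server};Server=192.168.1.5,1433;Database=mydb;Uid=someuser;Pwd=somepass;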

  • Subsequent runs of rsync locally don't reduce data transferred

    - by sharakan
    I have an EC2 instance with data I want to sync to a mounted, but remote, volume as a backup. rsync seems like the way to go with this, so as a test I took my test file (a Postgres pg_dump file) and used rsync -v to copy it to the mounted volume:

        [ec2-user work]$ rsync -v dump.sql.1 ../backup/dump.sql
        dump.sql.1
        sent 821704315 bytes  received 31 bytes  3416650.09 bytes/sec
        total size is 821603948  speedup is 1.00

    Then I ran it again, expecting to see minimal sent/received numbers because it would just be checksums. Instead...

        [ec2-user work]$ rsync -v dump.sql.1 ../backup/dump.sql
        dump.sql.1
        sent 821704315 bytes  received 31 bytes  3402502.47 bytes/sec
        total size is 821603948  speedup is 1.00

    I'm new to rsync, so perhaps I'm missing something, but isn't the idea that the source and destination files are checked for differences, and then a patch is generated and applied to the destination? Why is this not reducing the amount of data 'sent' to just the size of the checksums? Some background, if it's relevant: the mounted volume is using s3fs, mounted with s3fs <bucketname> backup.
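
    Worth noting when reading the numbers above: when both paths look local (and an s3fs mount point does), rsync defaults to --whole-file and skips the delta-transfer algorithm entirely. A hedged sketch of the same test with delta transfer forced back on (the flag is standard rsync; whether it actually helps over s3fs is worth testing):

        [ec2-user work]$ rsync -v --no-whole-file dump.sql.1 ../backup/dump.sql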

  • 2.6.9 kernel on a virtual server (non-upgradable) - any expected problems?

    - by chris_l
    Hi, I'm considering renting a virtual server (for me personally). The product I'm currently looking at offers IMO fair pricing, very good hardware, etc. The only problem is that I won't be able to upgrade to a kernel newer than 2.6.9 (running Debian Etch). Also, I can't install my own kernel modules. (The server runs with Virtuozzo, so as far as I understand it, it just does some chroot instead of real virtualization (?)) I want to run GlassFish, Postgres, Subversion, Trac and maybe some other things on it. It will also have to employ a firewall, and provide OpenSSL for https. Ideally, it would also be able to do AIO (asynchronous IO), which could speed up some server I/O. Should I expect problems with that old kernel version in conjunction with the software I want to install (I'd like to use current versions of the software)? One thing I already found out is that you can't do everything with iptables, since some kernel modules are missing/things are not built into the kernel. GlassFish v3 appears to run fine at first glance. I was able to test the server for a few hours. Installing my whole setup wasn't feasible in that time, but what I can say is that it's amazingly fast for an entry-level vserver, especially hard disk and network performance (averaging ca. 400MBit/s). So if the kernel won't be a problem, I'd really like to take it. Thanks, Chris. PS: Exact kernel version: 2.6.9-023stab051.3-smp

  • Configuring iptables rules for HAProxy and others

    - by MLister
    I have the following relevant settings for HAProxy:

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            maxconn 500
            contimeout 5s
            clitimeout 15s
            srvtimeout 15s

        frontend public
            bind *:80
            option http-server-close
            option http-pretend-keepalive
            option forwardfor
            # ACLs ...

    I have three backends (including a Nginx server) configured in HAProxy, all listening on different ports of 127.0.0.1. And my iptables config is this:

        *filter

        # Allows all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
        -A INPUT -i lo -j ACCEPT
        -A INPUT ! -i lo -d 127.0.0.0/8 -j REJECT

        # Accepts all established inbound connections
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Allows all outbound traffic
        # You can modify this to only allow certain traffic
        -A OUTPUT -j ACCEPT

        # Allows HTTP and HTTPS connections from anywhere (the normal ports for websites)
        -A INPUT -p tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp --dport 443 -j ACCEPT

        # Allows SSH connections
        # THE --dport NUMBER IS THE SAME ONE YOU SET UP IN THE SSHD_CONFIG FILE
        -A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT

        # Allow ping
        -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT

        # log iptables denied calls
        -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7

        # Reject all other inbound - default deny unless explicitly allowed policy
        -A INPUT -j REJECT
        -A FORWARD -j REJECT

        COMMIT

    My questions are: Would the above iptables config work with the settings/options in my HAProxy config? I am also running a postgres server and a redis server on the same machine; what settings do I need to adjust for these two to make them work with iptables?
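
    On the last question, a hedged sketch assuming postgres and redis listen on their default ports (5432 and 6379): if they bind only to 127.0.0.1, the loopback ACCEPT rule above already covers them and nothing needs to change; rules like the following would only be needed if they must be reachable from other hosts:

        -A INPUT -p tcp --dport 5432 -j ACCEPT   # postgres, only if remote access is required
        -A INPUT -p tcp --dport 6379 -j ACCEPT   # redis, only if remote access is required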

  • During a Spring unit test, data is written to the db but the test doesn't see it

    - by richever
    I wrote a test case that extends AbstractTransactionalJUnit4SpringContextTests. The single test case I've written creates an instance of class User and attempts to write it to the database using Hibernate. The test code then uses SimpleJdbcTemplate to execute a simple select count(*) from the user table to determine whether the user was persisted to the database or not. The test always fails, though. I was suspicious because, in the Spring controller I wrote, saving an instance of User to the db succeeds. So I added the Rollback annotation to the unit test and, sure enough, the data is written to the database (I can even see it in the appropriate table), so the transaction isn't rolled back when the test case is finished. Here's my test case:

        @ContextConfiguration(locations = {
                "classpath:context-daos.xml",
                "classpath:context-dataSource.xml",
                "classpath:context-hibernate.xml"})
        public class UserDaoTest extends AbstractTransactionalJUnit4SpringContextTests {

            @Autowired
            private UserDao userDao;

            @Test
            @Rollback(false)
            public void teseCreateUser() {
                try {
                    UserModel user = randomUser();
                    String username = user.getUserName();
                    long id = userDao.create(user);
                    String query = "select count(*) from public.usr where usr_name = '%s'";
                    long count = simpleJdbcTemplate.queryForLong(String.format(query, username));
                    Assert.assertEquals("User with username should be in the db", 1, count);
                } catch (Exception e) {
                    e.printStackTrace();
                    Assert.assertNull("testCreateUser: " + e.getMessage());
                }
            }
        }

    I think I was remiss in not adding the configuration files.

    context-hibernate.xml

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://www.springframework.org/schema/beans
                   http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

          <bean id="namingStrategy" class="org.springframework.beans.factory.config.FieldRetrievingFactoryBean">
            <property name="staticField">
              <value>org.hibernate.cfg.ImprovedNamingStrategy.INSTANCE</value>
            </property>
          </bean>

          <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"
                destroy-method="destroy" scope="singleton">
            <property name="namingStrategy">
              <ref bean="namingStrategy"/>
            </property>
            <property name="dataSource" ref="dataSource"/>
            <property name="mappingResources">
              <list>
                <value>com/company/model/usr.hbm.xml</value>
              </list>
            </property>
            <property name="hibernateProperties">
              <props>
                <prop key="hibernate.dialect">org.hibernate.dialect.PostgreSQLDialect</prop>
                <prop key="hibernate.show_sql">true</prop>
                <prop key="hibernate.use_sql_comments">true</prop>
                <prop key="hibernate.query.substitutions">yes 'Y', no 'N'</prop>
                <prop key="hibernate.cache.provider_class">org.hibernate.cache.EhCacheProvider</prop>
                <prop key="hibernate.cache.use_query_cache">true</prop>
                <prop key="hibernate.cache.use_minimal_puts">false</prop>
                <prop key="hibernate.cache.use_second_level_cache">true</prop>
                <prop key="hibernate.current_session_context_class">thread</prop>
              </props>
            </property>
          </bean>

          <bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
            <property name="sessionFactory" ref="sessionFactory"/>
            <property name="nestedTransactionAllowed" value="false"/>
          </bean>

          <bean id="transactionInterceptor" class="org.springframework.transaction.interceptor.TransactionInterceptor">
            <property name="transactionManager">
              <ref local="transactionManager"/>
            </property>
            <property name="transactionAttributes">
              <props>
                <prop key="create">PROPAGATION_REQUIRED</prop>
                <prop key="delete">PROPAGATION_REQUIRED</prop>
                <prop key="update">PROPAGATION_REQUIRED</prop>
                <prop key="*">PROPAGATION_SUPPORTS,readOnly</prop>
              </props>
            </property>
          </bean>
        </beans>

    context-dataSource.xml

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://www.springframework.org/schema/beans
                   http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

          <bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
            <property name="driverClass" value="org.postgresql.Driver"/>
            <property name="jdbcUrl" value="jdbc\:postgresql\://localhost:5432/company_dev"/>
            <property name="user" value="postgres"/>
            <property name="password" value="postgres"/>
          </bean>
        </beans>

    context-daos.xml

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://www.springframework.org/schema/beans
                   http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

          <bean id="extendedFinderNamingStrategy" class="com.company.dao.finder.impl.ExtendedFinderNamingStrategy"/>

          <bean id="finderIntroductionAdvisor" class="com.company.dao.finder.impl.FinderIntroductionAdvisor"/>

          <bean id="abstractDaoTarget" class="com.company.dao.impl.GenericDaoHibernateImpl"
                abstract="true" depends-on="sessionFactory">
            <property name="sessionFactory">
              <ref bean="sessionFactory"/>
            </property>
            <property name="namingStrategy">
              <ref bean="extendedFinderNamingStrategy"/>
            </property>
          </bean>

          <bean id="abstractDao" class="org.springframework.aop.framework.ProxyFactoryBean" abstract="true">
            <property name="interceptorNames">
              <list>
                <value>transactionInterceptor</value>
                <value>finderIntroductionAdvisor</value>
              </list>
            </property>
          </bean>

          <bean id="userDao" parent="abstractDao">
            <property name="proxyInterfaces">
              <value>com.company.dao.UserDao</value>
            </property>
            <property name="target">
              <bean parent="abstractDaoTarget">
                <constructor-arg>
                  <value>com.company.model.UserModel</value>
                </constructor-arg>
              </bean>
            </property>
          </bean>
        </beans>

    Some of this I've inherited from someone else. I wouldn't have used the proxying that is going on here, because I'm not sure it's needed, but this is what I'm working with. Any help much appreciated.

  • What causes this org.hibernate.MappingException?

    - by stacker
    I'm trying to configure an EJB3 sample application; its entities were mapped to Postgres, and now I want the app to run on JBoss 4.3 and Informix using JPA. If the DDL creation

        <property name="hibernate.hbm2ddl.auto" value="create"/>

    is active, this error appears:

        WARN [ServiceController] Problem starting service persistence.units:ear=weblog.ear,jar=weblog.jar,unitName=weblog
        javax.persistence.PersistenceException: [PersistenceUnit: weblog] Unable to build EntityManagerFactory
            at org.hibernate.ejb.Ejb3Configuration.buildEntityManagerFactory(Ejb3Configuration.java:677)
            at org.hibernate.ejb.HibernatePersistence.createContainerEntityManagerFactory(HibernatePersistence.java:132)
            at org.jboss.ejb3.entity.PersistenceUnitDeployment.start(PersistenceUnitDeployment.java:246)

    followed by

        Caused by: org.hibernate.MappingException: No Dialect mapping for JDBC type: 2005
            at org.hibernate.dialect.TypeNames.get(TypeNames.java:56)
            at org.hibernate.dialect.TypeNames.get(TypeNames.java:81)
            at org.hibernate.dialect.Dialect.getTypeName(Dialect.java:291)
            at org.hibernate.mapping.Column.getSqlType(Column.java:182)
            at org.hibernate.mapping.Table.sqlCreateString(Table.java:394)
            at org.hibernate.cfg.Configuration.generateSchemaCreationScript(Configuration.java:854)
            at org.hibernate.tool.hbm2ddl.SchemaExport.<init>(SchemaExport.java:74)
            at org.hibernate.impl.SessionFactoryImpl.<init>(SessionFactoryImpl.java:311)
            at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1300)
            at org.hibernate.cfg.AnnotationConfiguration.buildSessionFactory(AnnotationConfiguration.java:874)
            at org.hibernate.ejb.Ejb3Configuration.buildEntityManagerFactory(Ejb3Configuration.java:669)

    What does JDBC type 2005 mean? Any idea how I can track down the entity/column that causes the problem? Thanks

  • Hibernate + PostgreSQL : relation does not exist - SQL Error: 0, SQLState: 42P01

    - by tommy599
    Hello, I am having some problems trying to get PostgreSQL and Hibernate to work together, more specifically, the issue mentioned in the title. I've been searching the net for a few hours now, but none of the solutions I found worked for me. I am using Eclipse Java EE IDE for Web Developers (build id: 20090920-1017) with HibernateTools, Hibernate 3, and PostgreSQL 8.4.3 on Ubuntu 9.10. Here are the relevant files:

    Message.class

        package hello;

        public class Message {
            private Long id;
            private String text;

            public Message() {
            }

            public Long getId() {
                return id;
            }

            public void setId(Long id) {
                this.id = id;
            }

            public String getText() {
                return text;
            }

            public void setText(String text) {
                this.text = text;
            }
        }

    Message.hbm.xml

        <?xml version="1.0"?>
        <!DOCTYPE hibernate-mapping PUBLIC
            "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
            "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
        <hibernate-mapping package="hello">
          <class name="Message" table="public.messages">
            <id name="id" column="id">
              <generator class="assigned"/>
            </id>
            <property name="text" column="messagetext"/>
          </class>
        </hibernate-mapping>

    hibernate.cfg.xml

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE hibernate-configuration PUBLIC
            "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
            "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
        <hibernate-configuration>
          <session-factory>
            <property name="hibernate.connection.driver_class">org.postgresql.Driver</property>
            <property name="hibernate.connection.password">bar</property>
            <property name="hibernate.connection.url">jdbc:postgresql:postgres/tommy</property>
            <property name="hibernate.connection.username">foo</property>
            <property name="hibernate.dialect">org.hibernate.dialect.PostgreSQLDialect</property>
            <property name="show_sql">true</property>
            <property name="log4j.logger.org.hibernate.type">DEBUG</property>
            <mapping resource="hello/Message.hbm.xml"/>
          </session-factory>
        </hibernate-configuration>

    Main

        package hello;

        import org.hibernate.Session;
        import org.hibernate.SessionFactory;
        import org.hibernate.Transaction;
        import org.hibernate.cfg.Configuration;

        public class App {
            public static void main(String[] args) {
                SessionFactory sessionFactory = new Configuration().configure().buildSessionFactory();
                Message message = new Message();
                message.setText("Hello Cruel World");
                message.setId(2L);
                Session session = null;
                Transaction transaction = null;
                try {
                    session = sessionFactory.openSession();
                    transaction = session.beginTransaction();
                    session.save(message);
                } catch (Exception e) {
                    System.out.println("Exception attemtping to Add message: " + e.getMessage());
                } finally {
                    if (session != null && session.isOpen()) {
                        if (transaction != null)
                            transaction.commit();
                        session.flush();
                        session.close();
                    }
                }
            }
        }

    Table structure:

        foo=# \d messages
              Table "public.messages"
           Column    |  Type   | Modifiers
        -------------+---------+-----------
         id          | integer |
         messagetext | text    |

    Eclipse console output when I run it:

        Apr 28, 2010 11:13:53 PM org.hibernate.cfg.Environment <clinit> INFO: Hibernate 3.5.1-Final
        Apr 28, 2010 11:13:53 PM org.hibernate.cfg.Environment <clinit> INFO: hibernate.properties not found
        Apr 28, 2010 11:13:53 PM org.hibernate.cfg.Environment buildBytecodeProvider INFO: Bytecode provider name : javassist
        Apr 28, 2010 11:13:53 PM org.hibernate.cfg.Environment <clinit> INFO: using JDK 1.4 java.sql.Timestamp handling
        Apr 28, 2010 11:13:53 PM org.hibernate.cfg.Configuration configure INFO: configuring from resource: /hibernate.cfg.xml
        Apr 28, 2010 11:13:53 PM org.hibernate.cfg.Configuration getConfigurationInputStream INFO: Configuration resource: /hibernate.cfg.xml
        Apr 28, 2010 11:13:53 PM org.hibernate.cfg.Configuration addResource INFO: Reading mappings from resource : hello/Message.hbm.xml
        Apr 28, 2010 11:13:54 PM org.hibernate.cfg.HbmBinder bindRootPersistentClassCommonValues INFO: Mapping class: hello.Message -> public.messages
        Apr 28, 2010 11:13:54 PM org.hibernate.cfg.Configuration doConfigure INFO: Configured SessionFactory: null
        Apr 28, 2010 11:13:54 PM org.hibernate.connection.DriverManagerConnectionProvider configure INFO: Using Hibernate built-in connection pool (not for production use!)
        Apr 28, 2010 11:13:54 PM org.hibernate.connection.DriverManagerConnectionProvider configure INFO: Hibernate connection pool size: 20
        Apr 28, 2010 11:13:54 PM org.hibernate.connection.DriverManagerConnectionProvider configure INFO: autocommit mode: false
        Apr 28, 2010 11:13:54 PM org.hibernate.connection.DriverManagerConnectionProvider configure INFO: using driver: org.postgresql.Driver at URL: jdbc:postgresql:postgres/tommy
        Apr 28, 2010 11:13:54 PM org.hibernate.connection.DriverManagerConnectionProvider configure INFO: connection properties: {user=foo, password=****}
        Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: RDBMS: PostgreSQL, version: 8.4.3
        Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: JDBC driver: PostgreSQL Native Driver, version: PostgreSQL 8.4 JDBC4 (build 701)
        Apr 28, 2010 11:13:54 PM org.hibernate.dialect.Dialect <init> INFO: Using dialect: org.hibernate.dialect.PostgreSQLDialect
        Apr 28, 2010 11:13:54 PM org.hibernate.engine.jdbc.JdbcSupportLoader useContextualLobCreation INFO: Disabling contextual LOB creation as createClob() method threw error : java.lang.reflect.InvocationTargetException
        Apr 28, 2010 11:13:54 PM org.hibernate.transaction.TransactionFactoryFactory buildTransactionFactory INFO: Using default transaction strategy (direct JDBC transactions)
        Apr 28, 2010 11:13:54 PM org.hibernate.transaction.TransactionManagerLookupFactory getTransactionManagerLookup INFO: No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended)
        Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Automatic flush during beforeCompletion(): disabled
        Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Automatic session close at end of transaction: disabled
        Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: JDBC batch size: 15
        Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: JDBC batch updates for versioned data: disabled
        Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Scrollable result sets: enabled
        Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: JDBC3 getGeneratedKeys(): enabled
        Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Connection release mode: auto
        Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Default batch fetch size: 1
        Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Generate SQL with comments: disabled
        Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Order SQL updates by primary key: disabled
        Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Order SQL inserts for batching: disabled
        Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory createQueryTranslatorFactory INFO: Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory
        Apr 28, 2010 11:13:54 PM org.hibernate.hql.ast.ASTQueryTranslatorFactory <init> INFO: Using ASTQueryTranslatorFactory
        Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Query language substitutions: {}
        Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: JPA-QL strict compliance: disabled
        Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Second-level cache: enabled
        Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Query cache: disabled
        Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory createRegionFactory INFO: Cache region factory : org.hibernate.cache.impl.NoCachingRegionFactory
        Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Optimize cache for minimal puts: disabled
        Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Structured second-level cache entries: disabled
        Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Echoing all SQL to stdout
        Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Statistics: disabled
        Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Deleted entity synthetic identifier rollback: disabled
        Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Default entity-mode: pojo
        Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Named query checking : enabled
        Apr 28, 2010 11:13:54 PM org.hibernate.cfg.SettingsFactory buildSettings INFO: Check Nullability in Core (should be disabled when Bean Validation is on): enabled
        Apr 28, 2010 11:13:54 PM org.hibernate.impl.SessionFactoryImpl <init> INFO: building session factory
        Apr 28, 2010 11:13:55 PM org.hibernate.impl.SessionFactoryObjectFactory addInstance INFO: Not binding factory to JNDI, no JNDI name configured
        Hibernate: insert into public.messages (messagetext, id) values (?, ?)
        Apr 28, 2010 11:13:55 PM org.hibernate.util.JDBCExceptionReporter logExceptions WARNING: SQL Error: 0, SQLState: 42P01
        Apr 28, 2010 11:13:55 PM org.hibernate.util.JDBCExceptionReporter logExceptions SEVERE: Batch entry 0 insert into public.messages (messagetext, id) values ('Hello Cruel World', '2') was aborted. Call getNextException to see the cause.
        Apr 28, 2010 11:13:55 PM org.hibernate.util.JDBCExceptionReporter logExceptions WARNING: SQL Error: 0, SQLState: 42P01
        Apr 28, 2010 11:13:55 PM org.hibernate.util.JDBCExceptionReporter logExceptions SEVERE: ERROR: relation "public.messages" does not exist Position: 13
        Apr 28, 2010 11:13:55 PM org.hibernate.event.def.AbstractFlushingEventListener performExecutions SEVERE: Could not synchronize database state with session
        org.hibernate.exception.SQLGrammarException: Could not execute JDBC batch update
            at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:92)
            at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66)
            at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:275)
            at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:263)
            at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:179)
            at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
            at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51)
            at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1206)
            at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:375)
            at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:137)
            at hello.App.main(App.java:31)
        Caused by: java.sql.BatchUpdateException: Batch entry 0 insert into public.messages (messagetext, id) values ('Hello Cruel World', '2') was aborted. Call getNextException to see the cause.
            at org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2569)
            at org.postgresql.core.v3.QueryExecutorImpl$1.handleError(QueryExecutorImpl.java:459)
            at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1796)
            at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:407)
            at org.postgresql.jdbc2.AbstractJdbc2Statement.executeBatch(AbstractJdbc2Statement.java:2708)
            at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:70)
            at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:268)
            ... 8 more
        Exception in thread "main" org.hibernate.exception.SQLGrammarException: Could not execute JDBC batch update
            at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:92)
            at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66)
            at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:275)
            at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:263)
            at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:179)
            at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
            at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51)
            at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1206)
            at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:375)
            at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:137)
            at hello.App.main(App.java:31)
        Caused by: java.sql.BatchUpdateException: Batch entry 0 insert into public.messages (messagetext, id) values ('Hello Cruel World', '2') was aborted. Call getNextException to see the cause.
            at org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2569)
            at org.postgresql.core.v3.QueryExecutorImpl$1.handleError(QueryExecutorImpl.java:459)
            at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1796)
            at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:407)
            at org.postgresql.jdbc2.AbstractJdbc2Statement.executeBatch(AbstractJdbc2Statement.java:2708)
            at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:70)
            at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:268)
            ... 8 more

    PostgreSQL log file:

        2010-04-28 23:13:55 EEST LOG: execute S_1: BEGIN
        2010-04-28 23:13:55 EEST ERROR: relation "public.messages" does not exist at character 13
        2010-04-28 23:13:55 EEST STATEMENT: insert into public.messages (messagetext, id) values ($1, $2)
        2010-04-28 23:13:55 EEST LOG: unexpected EOF on client connection

    If I copy/paste the query into the psql command line, put the values in and a ; after it, it works. Everything is lowercase, so I don't think it's that issue. If I switch to MySQL, the same code in the same project works (I only change the driver, URL, and authentication). In the Eclipse Datasource Explorer, I can ping the DB and it succeeds. The weird thing is that I can't see the tables from there either: it expands the public schema, but it doesn't expand the tables. Could it be some permission issue? Thanks!
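
    One detail worth checking with a quick diagnostic (just a sketch; the names come from the post above): the psql session is connected to database "foo" as a user that can see public.messages, while the JDBC URL is jdbc:postgresql:postgres/tommy, so Hibernate may simply be connecting to a different database. Running these over the JDBC connection (or via psql against the same URL) shows where the session really is and whether the table is visible there:

        -- which database/user is this session actually using?
        SELECT current_database(), current_user;

        -- is the table visible in this database, and in which schema?
        SELECT table_schema, table_name
        FROM information_schema.tables
        WHERE table_name = 'messages';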

  • SQLite vs Firebird

    - by rwallace
    The scenario I'm looking at is: "This program uses Postgres. Oh, you want to just use it single-user for the moment, and put off having to deal with installing a database server? Okay, in the meantime you can use it with the embedded single-user database." The question is then which embedded database is best. As I understand it, the two main contenders are SQLite and Firebird; so which is better? Criteria:

    - Full SQL support, or as close as reasonably possible.
    - Full text search.
    - Easy to call from C#.
    - Locks, or allows you to lock, the database file, to make sure nobody tries to run it multiuser and ends up six months down the road with intermittent data corruption in all their backups.
    - Last but far from least, reliability.

    As I understand it, the disadvantages of SQLite are:

    - No right outer join. Workaround: use left outer join instead.
    - Not much integrity checking. Workaround: be really careful in the application code.
    - No decimal numbers. Workaround: lots of aspirin.

    None of the above are showstoppers. Are there any others I'm missing? (I know it doesn't support some administrative and code-within-database SQL features, which aren't relevant for this kind of use case.) I don't know much about Firebird. What are its disadvantages?

  • Ruby on Rails: How to sanitize a string for SQL when not using find?

    - by williamjones
    I'm trying to sanitize a string that involves user input, without having to resort to manually crafting my own possibly buggy regex if possible; however, if that is the only way, I would also appreciate it if anyone could point me to a regex that is unlikely to be missing anything. There are a number of methods in Rails that let you run native SQL commands; how do people escape user input for those? The question I'm asking is a broad one, but in my particular case, I'm working with a column in my Postgres database that Rails does not natively understand as far as I know: the tsvector, which holds plain-text search information. Rails is able to write and read from it as if it were a string; however, unlike a string, it doesn't seem to be automatically escaped when I do things like vector= inside the model. For example, when I do model.name='::', where name is a string, it works fine. When I do model.vector='::' it errors out:

        ActiveRecord::StatementInvalid: PGError: ERROR: syntax error in tsvector: "::"
        "vectors" = E'::' WHERE "id" = 1

    This seems to be a problem caused by a lack of escaping of the colons, and I can manually set vector='\:\:' fine. I also had the bright idea that maybe I could just call something like:

        ActiveRecord::Base.connection.execute "UPDATE medias SET vectors = ? WHERE id = 1", "::"

    However, this syntax doesn't work, because the raw SQL commands don't have access to find's method of escaping and inputting strings via the ? mark. This strikes me as the same problem as calling connection.execute with any type of user input, as it all boils down to sanitizing the strings, but I can't seem to find any way to manually call Rails' SQL string sanitization methods. Can anyone provide any advice?

  • Django queryset not returning the same values as the generated sql

    - by HRCerqueira
    Hello guys, I have the following queryset:

        subscribers = User.objects.values('email', 'username').filter(
            Q(subscription_settings__new_question='i') |
            Q(subscription_settings__new_question_watched_tags='i',
              marked_tags__id__in=question.tags.values('id'),
              tag_selections__reason='good')
        ).exclude(id=question.author.id)

    The problem is that when I evaluate the query, I get only the values that are filtered by the first Q object (even if I reverse the order of the objects). So let's say that I was expecting users A, B, C and D, where A and B are filtered by the first Q object and C and D by the second. But the queryset only returns A and B. I used the django debug toolbar to see the query that was actually being executed (and then I used a direct print statement like print subscriber.query.as_sql() just to be sure), and then evaluated the query directly using psql (I'm using postgres, by the way), and I get the results I expect. Here's the query, btw:

        SELECT "auth_user"."email", "auth_user"."username"
        FROM "auth_user"
        LEFT OUTER JOIN "forum_markedtag" ON ("auth_user"."id" = "forum_markedtag"."user_id")
        INNER JOIN "forum_defaultsubscriptionsetting" ON ("auth_user"."id" = "forum_defaultsubscriptionsetting"."user_id")
        WHERE ((("forum_markedtag"."reason" = E'good'
                 AND "forum_defaultsubscriptionsetting"."new_question_watched_tags" = E'i'
                 AND "forum_markedtag"."tag_id" IN (SELECT U0."id"
                                                    FROM "tag" U0
                                                    INNER JOIN "question_tags" U1 ON (U0."id" = U1."tag_id")
                                                    WHERE U1."question_id" = 64))
                OR "forum_defaultsubscriptionsetting"."new_question" = E'i')
               AND NOT ("auth_user"."id" = 10))

    Thanks in advance.

    EDIT: I tried to break the queryset into two, one that uses the first Q object as the filter and another one with the second Q object, and although the second queryset produces the correct SQL, which returns the correct values when evaluated directly, it still returns nothing when evaluated as a Django queryset. Here's the alternative code:

        subscribers = User.objects.values('email', 'username').filter(
            subscription_settings__new_question='i').exclude(id=question.author.id)

        subscribers2 = User.objects.values('email', 'username').filter(
            subscription_settings__new_question_watched_tags='i',
            marked_tags__id__in=question.tags.values('id'),
            tag_selections__reason='good').exclude(id=question.author.id)

  • How can I convert a projection that's not part of spatial_ref_sys?

    - by Summer
    Hi, I'm importing shapefiles into a Postgres+PostGIS database. Here's my usual procedure:

    - Find an SRID in the spatial_ref_sys table whose srtext appears to match the shapefile's .prj file
    - Upload the data into a new table using the shp2pgsql utility, specifying the SRID with the -s flag
    - Add the new table to my main geometry table, converting on the way to an SRID of 4269 (the Census standard projection) using ST_Transform

    Unfortunately, the spatial_ref_sys table doesn't include Mississippi's standard projection. The contents of their .prj file are as follows:

        PROJCS["mstm",GEOGCS["GCS_North_American_1983",DATUM["D_North_American_1983",SPHEROID["GRS_1980",6378137.0,298.257222101]],PRIMEM["Greenwich",0.0],UNIT["Degree",0.0174532925199433]],PROJECTION["Transverse_Mercator"],PARAMETER["False_Easting",500000.0],PARAMETER["False_Northing",1300000.0],PARAMETER["Central_Meridian",-89.75],PARAMETER["Scale_Factor",0.9998335],PARAMETER["Latitude_Of_Origin",32.5],UNIT["Meter",1.0]]

    I eventually found the ogr2ogr utility, and especially with the "peace and joy" promises, I decided to give it a try. I tried this command:

        ogr2ogr -update -f "PostgreSQL" PG:"Connection details" "File name.shp" -t_srs EPSG:4269 -nln Table_Name

    I am now getting the error "Terminating translation prematurely after failed translation of layer", which seems to indicate that ogr2ogr is not going to be the savior I imagined for getting arbitrary .prj files neatly into the 4269 projection. Any ideas about what to do?
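
    For reference, a hedged sketch of the usual workaround: insert the missing projection into spatial_ref_sys under a free SRID, then proceed as normal with shp2pgsql -s and ST_Transform. The SRID 990001 is arbitrary, and the proj4text below is hand-translated from the .prj parameters (Transverse Mercator, NAD83/GRS80, meters), so it should be double-checked before use:

        INSERT INTO spatial_ref_sys (srid, auth_name, auth_srid, srtext, proj4text)
        VALUES (
            990001,             -- any unused SRID
            'custom', 990001,
            'PROJCS["mstm",GEOGCS["GCS_North_American_1983",...]]',  -- the full WKT from the .prj file
            '+proj=tmerc +lat_0=32.5 +lon_0=-89.75 +k=0.9998335 +x_0=500000 +y_0=1300000 +datum=NAD83 +units=m +no_defs'
        );

        -- afterwards, ST_Transform(geom, 4269) works on geometries loaded with -s 990001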

  • Python - CSV: Large file with rows of different lengths

    - by dassouki
    In short, I have a 20,000,000-line CSV file that has rows of different lengths. This is due to archaic data loggers and proprietary formats. We get the end result as a CSV file in the following format. My goal is to insert this file into a postgres database. How can I do the following:

    - Keep the first 8 columns and my last 2 columns, to have a consistent CSV file
    - Add a column to the file

        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0, img_id.jpg, -50
        1, 2, 3, 4, 5, 0,0,0,0,0,0,0,0,0,0,0 img_id.jpg, -50

  • Multiple database connections in Rails

    - by Sanal
    I'm using active_delegate for multiple connections in Rails. Here I'm using MySQL as the master_database for some models, and PostgreSQL for some other models. The problem is that when I try to access the MySQL models, I get the error below! The stack trace shows that it is still using the PostgreSQL adapter to access my MySQL models!

        RuntimeError: ERROR C42P01 Mrelation "categories" does not exist P15 F.\src\backend\parser\parse_relation.c L886 RparserOpenTable: SELECT * FROM "categories"

        STACKTRACE
        ===========
        d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract_adapter.rb:212:in `log'
        d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/postgresql_adapter.rb:507:in `execute'
        d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/postgresql_adapter.rb:985:in `select_raw'
        d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/postgresql_adapter.rb:972:in `select'
        d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/database_statements.rb:7:in `select_all_without_query_cache'
        d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/query_cache.rb:60:in `select_all'
        d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/query_cache.rb:81:in `cache_sql'
        d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/query_cache.rb:60:in `select_all'
        d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:661:in `find_by_sql'
        d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:1553:in `find_every'
        d:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:615:in `find'
        D:/ROR/Aptana/dedomenon/app/models/category.rb:50:in `get_all_with_exclusive_scope'
        D:/ROR/Aptana/dedomenon/app/models/category.rb:50:in `get_all_with_exclusive_scope'
        D:/ROR/Aptana/dedomenon/app/controllers/categories_controller.rb:48:in `index'

    Here is my database.yml file:

        postgre: &postgre
          adapter: postgresql
          database: codex
          host: localhost
          username: postgres
          password: root
          port: 5432

        mysql: &mysql
          adapter: mysql
          database: project
          host: localhost
          username: root
          password: root
          port: 3306

        development:
          <<: *postgre

        test:
          <<: *postgre

        production:
          <<: *postgre

        master_database:
          <<: *mysql

    And my master_database model is like this:

        class Category < ActiveRecord::Base
          delegates_connection_to :master_database, :on => [:create, :save, :destroy]
        end

    Does anyone have any solution?

  • How should I define a composite foreign key for domain constraints in the presence of surrogate keys

    - by Samuel Danielson
    I am writing a new app with Rails, so I have an id column on every table. What is the best practice for enforcing domain constraints using foreign keys? I'll outline my thoughts and frustration. Here's what I would imagine as "The Rails Way". It's what I started with.

        Companies:
          id: integer, serial
          company_code: char, unique, not null

        Invoices:
          id: integer, serial
          company_id: integer, not null

        Products:
          id: integer, serial
          sku: char, unique, not null
          company_id: integer, not null

        LineItems:
          id: integer, serial
          invoice_id: integer, not null, references Invoices (id)
          product_id: integer, not null, references Products (id)

    The problem with this is that a product from one company might appear on an invoice for a different company. I added a (company_id: integer, not null) to LineItems, sort of like I'd do if using only natural keys and serials, then added a composite foreign key:

        LineItems (product_id, company_id) references Products (id, company_id)
        LineItems (invoice_id, company_id) references Invoices (id, company_id)

    This properly constrains LineItems to a single company, but it seems over-engineered and wrong. company_id in LineItems is extraneous because the surrogate foreign keys are already unique in the foreign table. Postgres requires that I add a unique index for the referenced attributes, so I am creating a unique index on (id, company_id) in Products and Invoices, even though id is already unique by itself. The following table with natural keys and a serial invoice number would not have these issues:

        LineItems:
          company_code: char, not null
          sku: char, not null
          invoice_id: integer, not null

    I can ignore the surrogate keys in the LineItems table, but this also seems wrong. Why make the database join on char when it has an integer already there to use? Also, doing it exactly like the above would require me to add company_code, a natural foreign key, to Products and Invoices. The compromise...

        LineItems:
          company_id: integer, not null
          sku: integer, not null
          invoice_id: integer, not null

    ...does not require natural foreign keys in other tables, but it is still joining on char when there is an integer available. Is there a clean way to enforce domain constraints with foreign keys like God intended, but in the presence of surrogates, without turning the schema and indexes into a complicated mess?
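
    For concreteness, a minimal SQL sketch of the composite-key arrangement described above (shapes only; column types are abbreviated and constraint layout is one reasonable choice, not the only one):

        CREATE TABLE companies (
            id serial PRIMARY KEY
        );

        CREATE TABLE invoices (
            id         serial PRIMARY KEY,
            company_id integer NOT NULL REFERENCES companies (id),
            UNIQUE (id, company_id)   -- lets a composite FK target the pair
        );

        CREATE TABLE products (
            id         serial PRIMARY KEY,
            company_id integer NOT NULL REFERENCES companies (id),
            UNIQUE (id, company_id)
        );

        CREATE TABLE line_items (
            id         serial PRIMARY KEY,
            company_id integer NOT NULL,
            invoice_id integer NOT NULL,
            product_id integer NOT NULL,
            FOREIGN KEY (invoice_id, company_id) REFERENCES invoices (id, company_id),
            FOREIGN KEY (product_id, company_id) REFERENCES products (id, company_id)
        );

    Because both composite foreign keys share the single company_id column, an invoice and a product on the same line item can never belong to different companies, which is exactly the domain constraint in question.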

  • Converting Complicated Oracle Join Syntax

    - by Grasper
    I have asked for help before on porting joins of this nature, but nothing this complex. I am porting a bunch of old SQL from Oracle to Postgres, which includes a lot of (+)-style left joins. I need this in a format that pg will understand. I am having trouble deciphering this join hierarchy:

        SELECT *
        FROM PLANNED_MISSION PM_CTRL,
             CONTROL_AGENCY CA,
             MISSION_CONTROL MC,
             MISSION_OBJECTIVE MOR,
             REQUEST_OBJECTIVE RO,
             MISSION_REQUEST_PAIRING MRP,
             FRIENDLY_UNIT FU,
             PACKAGE_MISSION PKM,
             MISSION_AIRCRAFT MA,
             MISSION_OBJECTIVE MO,
             PLANNED_MISSION PM
        WHERE PM.MSN_TASKED_UNIT_TYPE != 'EAM'
          AND PM.MSN_INT_ID = MO.MSN_INT_ID
          AND PM.MSN_INT_ID = PKM.MSN_INT_ID (+)
          AND PM.MSN_INT_ID = MA.MSN_INT_ID (+)
          AND COALESCE(MA.MA_RESOURCE_INT_ID,0) =
                (SELECT COALESCE(MIN(MA1.MA_RESOURCE_INT_ID),0)
                 FROM MISSION_AIRCRAFT MA1
                 WHERE MA.MSN_INT_ID = MA1.MSN_INT_ID)
          AND MA.FU_UNIT_ID = FU.FU_UNIT_ID (+)
          AND MA.CC_COUNTRY_CD = FU.CC_COUNTRY_CD (+)
          AND MO.MSN_INT_ID = MC.MSN_INT_ID (+)
          AND MO.MO_INT_ID = MC.MO_INT_ID (+)
          AND MC.CAG_CALLSIGN = CA.CAG_CALLSIGN (+)
          AND MC.CTRL_MSN_INT_ID = PM_CTRL.MSN_INT_ID (+)
          AND MO.MSN_INT_ID = MRP.MSN_INT_ID (+)
          AND MO.MO_INT_ID = MRP.MO_INT_ID (+)
          AND MRP.REQ_INT_ID = RO.REQ_INT_ID (+)
          AND RO.MSN_INT_ID = MOR.MSN_INT_ID (+)
          AND RO.MO_INT_ID = MOR.MO_INT_ID (+)
          AND MO.MSN_INT_ID = :msn_int_id
          AND MO.MO_INT_ID = :obj_int_id
          AND COALESCE(PM.MSN_MISSION_NUM, ' ') LIKE '%'
          AND COALESCE(PKM.PKG_NM, ' ') LIKE '%'
          AND COALESCE(MA.FU_UNIT_ID, ' ') LIKE '%'
          AND COALESCE(MA.CC_COUNTRY_CD, ' ') LIKE '%'
          AND COALESCE(FU.FU_COMPONENT, ' ') LIKE '%'
          AND COALESCE(MA.ACT_AC_TYPE, ' ') LIKE '%'
          AND MO.MO_MSN_CLASS_CD LIKE '%'
          AND COALESCE(MO.MO_MSN_TYPE, ' ') LIKE '%'
          AND COALESCE(MO.MO_OBJ_LOCATION, COALESCE(MOR.MO_OBJ_LOCATION, ' ')) LIKE '%'
          AND COALESCE(CA.CAG_TYPE_OF_CONTROL, ' ') LIKE '%'
          AND COALESCE(MC.CAG_CALLSIGN, ' ') LIKE '%'
          AND COALESCE(MC.ASP_AIRSPACE_NM, ' ') LIKE '%'
          AND COALESCE(MC.CTRL_MSN_INT_ID, 0) LIKE '%'
          AND COALESCE(MC.CTRL_MO_INT_ID, 0) LIKE '%'
          AND COALESCE(PM_CTRL.MSN_MISSION_NUM, ' ') LIKE '%'

    Any help is appreciated.
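
    A sketch of the mechanical rule for the port (not the full rewrite): a predicate of the form A.x = B.x (+) means "left outer join B to A", so each (+)-marked table moves out of the comma-separated FROM list into a LEFT JOIN whose ON clause collects every (+) predicate against that table. For the first few tables above, that reads roughly:

        SELECT *
        FROM PLANNED_MISSION PM
             JOIN MISSION_OBJECTIVE MO ON PM.MSN_INT_ID = MO.MSN_INT_ID
             LEFT JOIN PACKAGE_MISSION PKM ON PM.MSN_INT_ID = PKM.MSN_INT_ID
             LEFT JOIN MISSION_AIRCRAFT MA ON PM.MSN_INT_ID = MA.MSN_INT_ID
             LEFT JOIN FRIENDLY_UNIT FU ON MA.FU_UNIT_ID = FU.FU_UNIT_ID
                                       AND MA.CC_COUNTRY_CD = FU.CC_COUNTRY_CD
             LEFT JOIN MISSION_CONTROL MC ON MO.MSN_INT_ID = MC.MSN_INT_ID
                                         AND MO.MO_INT_ID = MC.MO_INT_ID
        WHERE PM.MSN_TASKED_UNIT_TYPE != 'EAM'
          ...

    The remaining (+) tables (CA, PM_CTRL, MRP, RO, MOR) chain on the same way, each LEFT JOINed to the table its predicates reference; the ordering only matters in that a table must be joined after the tables its ON clause mentions.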

  • Porting Oracle Procedure to PostgreSQL

    - by Grasper
    I am porting an Oracle function to Postgres PL/pgSQL. I have been using this guide: http://www.postgresql.org/docs/8.1/static/plpgsql.html

        CREATE OR REPLACE PROCEDURE DATA_UPDATE (mission NUMBER, task NUMBER)
        AS
        BEGIN
          IF mission IS NOT NULL THEN
            UPDATE MISSION_OBJECTIVE MO
            SET (MO.MO_TKR_TOTAL_OFF_SCHEDULED, MO.MO_TKR_TOTAL_RECEIVERS) =
                (SELECT NVL(SUM(RR.TRQ_FUEL_OFFLOAD),0),
                        NVL(SUM(RR.TRQ_NUMBER_RECEIVERS),0)
                 FROM REFUELING_REQUEST RR,
                      MISSION_REQUEST_PAIRING MRP
                 WHERE MO.MSN_INT_ID = MRP.MSN_INT_ID
                   AND MO.MO_INT_ID = MRP.MO_INT_ID
                   AND MRP.REQ_INT_ID = RR.REQ_INT_ID)
            WHERE MO.MSN_INT_ID = mission
              AND MO.MO_INT_ID = task;
          END IF;
          COMMIT;
        END;

    I've got it this far:

        CREATE OR REPLACE FUNCTION DATA_UPDATE (NUMERIC, NUMERIC)
        RETURNS integer AS '
        DECLARE
          mission ALIAS FOR $1;
          task ALIAS FOR $2;
        BEGIN
          IF mission IS NOT NULL THEN
            UPDATE MISSION_OBJECTIVE MO
            SET (MO.MO_TKR_TOTAL_OFF_SCHEDULED, MO.MO_TKR_TOTAL_RECEIVERS) =
                (SELECT COALESCE(SUM(RR.TRQ_FUEL_OFFLOAD),0),
                        COALESCE(SUM(RR.TRQ_NUMBER_RECEIVERS),0)
                 FROM REFUELING_REQUEST RR,
                      MISSION_REQUEST_PAIRING MRP
                 WHERE MO.MSN_INT_ID = MRP.MSN_INT_ID
                   AND MO.MO_INT_ID = MRP.MO_INT_ID
                   AND MRP.REQ_INT_ID = RR.REQ_INT_ID)
            WHERE MO.MSN_INT_ID = mission
              AND MO.MO_INT_ID = task;
          END IF;
          COMMIT;
        END;
        ' LANGUAGE plpgsql;

    This is the error I get:

        ERROR: syntax error at or near "SELECT"
        LINE 1: ...OTAL_OFF_SCHEDULED, MO.MO_TKR_TOTAL_RECEIVERS) = (SELECT COA...

    I do not know why this isn't working... any ideas?
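
    For what it's worth, the syntax error is consistent with the multi-column form UPDATE ... SET (a, b) = (subquery) being an Oracle feature that PostgreSQL of that era does not accept (PostgreSQL only gained it in 9.5), and COMMIT cannot appear inside a PL/pgSQL function either, since the function runs within the caller's transaction. A hedged rewrite along those lines, giving each column its own correlated subquery (table and column names taken from the post; untested against the real schema):

        CREATE OR REPLACE FUNCTION data_update(mission numeric, task numeric)
        RETURNS integer AS $$
        BEGIN
          IF mission IS NOT NULL THEN
            UPDATE mission_objective
            SET mo_tkr_total_off_scheduled =
                  (SELECT COALESCE(SUM(rr.trq_fuel_offload), 0)
                   FROM refueling_request rr, mission_request_pairing mrp
                   WHERE mission_objective.msn_int_id = mrp.msn_int_id
                     AND mission_objective.mo_int_id = mrp.mo_int_id
                     AND mrp.req_int_id = rr.req_int_id),
                mo_tkr_total_receivers =
                  (SELECT COALESCE(SUM(rr.trq_number_receivers), 0)
                   FROM refueling_request rr, mission_request_pairing mrp
                   WHERE mission_objective.msn_int_id = mrp.msn_int_id
                     AND mission_objective.mo_int_id = mrp.mo_int_id
                     AND mrp.req_int_id = rr.req_int_id)
            WHERE mission_objective.msn_int_id = mission
              AND mission_objective.mo_int_id = task;
          END IF;
          RETURN 1;  -- declared RETURNS integer, so return something
        END;
        $$ LANGUAGE plpgsql;

    The caller's transaction then takes care of the commit.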

  • JVM/CLR Source-compatible Language Options

    - by Nathan Voxland
    I have an open-source Java database migration tool (http://www.liquibase.org) which I am considering porting to .NET. The majority of the tool (at least from a complexity standpoint) is logic like "if you are adding a primary key and the database is Oracle, use this SQL. If the database is MySQL, use this SQL. If the primary key is named and the database is Postgres, use this SQL." I could fork the Java codebase and convert it (manually and/or automatically), but as updates and bug fixes to the above logic come in, I do not want to have to apply them to both versions. What I would like to do is move all that logic into a form that can be compiled and used natively by both the Java and .NET versions. The code I am looking to convert does not contain any advanced library usage (JDBC, System.out, etc.) that would vary significantly from Java to .NET, so I don't think that will be an issue (at worst, it can be designed around). So what I am looking for is:

    - A language in which I can code the common parts of my app and compile them into classes usable by the "standard" languages on the target platform
    - One that does not add any runtime requirements to the system
    - Nothing so strange that it scares away potential contributors

    I know Python and Ruby both have implementations on the JVM and CLR. How well do they fit my requirements? Has anyone been successful (or unsuccessful) using this technique for cross-platform applications? Are there any gotchas I need to worry about?

  • postgresql error - ERROR: input is out of range

    - by CaffeineIV
    The function below keeps returning this error message. I thought that maybe the double precision field type was what was causing this, and I tried to use CAST, but either that's not it, or I didn't do it right... Help? Here's the error:

        ERROR: input is out of range
        CONTEXT: PL/pgSQL function "calculate_distance" line 7 at RETURN

        ********** Error **********

        ERROR: input is out of range
        SQL state: 22003
        Context: PL/pgSQL function "calculate_distance" line 7 at RETURN

    And here's the function:

        CREATE OR REPLACE FUNCTION calculate_distance(character varying, double precision, double precision, double precision, double precision)
          RETURNS double precision AS
        $BODY$
        DECLARE
          earth_radius double precision;
        BEGIN
          earth_radius := 3959.0;
          RETURN earth_radius * acos(sin($2 / 57.2958) * sin($4 / 57.2958)
                                   + cos($2 / 57.2958) * cos($4 / 57.2958)
                                   * cos(($5 / 57.2958) - ($3 / 57.2958)));
        END;
        $BODY$
          LANGUAGE 'plpgsql' VOLATILE
          COST 100;
        ALTER FUNCTION calculate_distance(character varying, double precision, double precision, double precision, double precision) OWNER TO postgres;

    I tried (unsuccessfully) changing that RETURN line to:

        RETURN CAST( (earth_radius * acos(sin($2 / 57.2958) * sin($4 / 57.2958)
                                        + cos($2 / 57.2958) * cos($4 / 57.2958)
                                        * cos(($5 / 57.2958) - ($3 / 57.2958))) ) AS text);
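
    One common cause of exactly this error, offered as a hedged sketch rather than a confirmed diagnosis: floating-point rounding can push the acos() argument fractionally outside [-1, 1] (this happens easily when the two points are identical or nearly antipodal), and acos() then reports "input is out of range". Clamping the argument first avoids it:

        RETURN earth_radius * acos(
            LEAST(1.0, GREATEST(-1.0,
                sin($2 / 57.2958) * sin($4 / 57.2958)
              + cos($2 / 57.2958) * cos($4 / 57.2958) * cos(($5 / 57.2958) - ($3 / 57.2958))
            ))
        );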

  • To have an Integer pointing to 3 ordered lists in Java

    - by Masi
    Which data structure would you use in the place of X to have efficient merges, sorts and additions as described below?

    #1 Possible solution: one HashMap to X-datastructure. Having a HashMap pointing from fileID to some data structure linking word, wordCount and wordID may be a good solution. However, I have not found a way to implement it. I am not allowed to use Postgres or any similar tool to keep my data normalized. I want to have efficient merges, sorts and additions according to fileID, wordID or wordCount for the type below. I have the type Words, which has the field fileID that points to a list of words and related pieces of information:

        The Type class Words
        ===================================
        fileID: int
        [list of words]      : ArrayList
        [list of wordCounts] : ArrayList
        [list of wordIDs]    : ArrayList

    Example of the data in fileID:

                            fileID  word  wordCount  wordID
        instance1 of words  1       He    123        1111
                            1       llo   321        2
        instance2 of words  2       Van   213        666
                            2       cou   777        932

    Example of needed merge: fileID wordID fileID wordID 1 2 1 3 wordID=2 2 2 ========> 1 2 2 3 2 2

    I cannot see any use of set operations such as intersections here, because order is needed. Having about three HashMaps makes sorting difficult:

    - from word to wordID in a given fileID
    - from wordID to fileID
    - from wordID to wordCount in a given fileID

  • How to connect to SQLServer 2k5 using Ruby 1.8.7 over W2k3 with active record 2.3.5

    - by Luke
    Hi all, sorry for the blast. I'm trying to connect to SQL Server 2005 using Ruby 1.8.7 on Windows 2003, with ActiveRecord 2.3.5. But when I run 'rake migrate' it throws the following:

        rake migrate --trace
        Hoe.new {...} deprecated. Switch to Hoe.spec.
        Invoke migrate (first_time)
        Invoke environment (first_time)
        Execute environment
        Execute migrate
        rake aborted!
        no such file to load -- odbc
        (...)
        C:/Program Files/test/Rakefile:146
        (...)

    So, my Rakefile in line 146 says:

        ActiveRecord::Migrator.migrate('db/migrate', ENV["VERSION"] ? ENV["VERSION"].to_i : nil)

    The database.yml has been configured in so many ways without success. I've tried setting the mode to ODBC, configuring a system DSN, and using ActiveRecord's SQL Server support directly, but with no success at all. The same Rakefile works fine with Postgres and Oracle with the proper gems installed, of course. But I can't get this to work. Any help will be appreciated. Thanks in advance!

  • JPA - Setting entity class property from calculated column?

    - by growse
    I'm just getting to grips with JPA in a simple Java web app running on GlassFish 3 (the persistence provider is EclipseLink). So far, I'm really liking it (bugs in the NetBeans/GlassFish interaction aside), but there's a thing I want to be able to do that I'm not sure how to do. I've got an entity class (Article) that's mapped to a database table (article). I'm trying to run a query on the database that returns a calculated column, but I can't figure out how to set up a property of the Article class so that the property gets filled by the column value when I call the query. If I do a regular "select id,title,body from article" query, I get a list of Article objects with the id, title and body properties filled. This works fine. However, if I do the below:

        Query q = em.createNativeQuery(
            "select id,title,shorttitle,datestamp,body,true as published, " +
            "ts_headline(body,q,'ShortWord=0') as headline, type " +
            "from articles,to_tsquery('english',?) as q " +
            "where idxfti @@ q order by ts_rank(idxfti,q) desc",
            Article.class);

    (this is a full-text search using tsearch2 on Postgres; it's a db-specific function, so I'm using a NativeQuery.) You can see I'm fetching a calculated column, called headline. How do I add a headline property to my Article class so that it gets populated by this query? So far, I've tried making it @Transient, but that just ends up with it being null all the time.
