Search Results

Search found 34274 results on 1371 pages for 'mysql table'.

Page 458/1371

  • How to find missing alpha values in sets of data within same table in SQL

    - by Jeff
    I have a table with many rows, where one column holds the WO Number and another holds the Resource ID. I need to find all the WO Numbers that do not have a Resource ID of "RW". Here is an example of the typical data; from it, I need to be able to tell that work order 5678 has no "RW" Resource ID.

    WO Number - Resource ID
    1234 - IN
    1234 - WE
    1234 - AS
    1234 - RW
    5678 - PR
    5678 - WE
    5678 - IN
    5678 - AS
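
    One way to answer this is to start from all work orders and exclude every one that does have an "RW" row. A minimal sketch in SQL, where work_orders, wo_number and resource_id are placeholder names standing in for the real table and columns:

        SELECT DISTINCT wo_number
        FROM work_orders                 -- hypothetical table name
        WHERE wo_number NOT IN (
            SELECT wo_number             -- every WO that does have an RW row
            FROM work_orders
            WHERE resource_id = 'RW'
        );

    Against the sample data above, only 5678 would be returned.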

    Read the article

  • SQL updating a column in a table

    - by tecnodude
    Hi, I have the following table in an Access database:

    id  VisitNo  Weight
    1   1        100
    1   2        95
    1   3        96
    1   4        94
    1   5        93

    Now rows 2 and 4 are deleted, so I have:

    id  VisitNo  Weight
    1   1        100
    1   3        96
    1   5        93

    However, what I need is:

    id  VisitNo  Weight
    1   1        100
    1   2        96
    1   3        93

    What is the SQL query I need to accomplish this? Thanks.
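
    One way to renumber VisitNo densely is to set each row's value to the count of surviving rows with the same id and a VisitNo no greater than its own. A sketch in Access SQL, assuming the table is named Visits (a placeholder name); the Access-specific DCount function is used here because Access tends to reject correlated subqueries inside an UPDATE:

        UPDATE Visits
        SET VisitNo = DCount("*", "Visits",
            "id = " & [id] & " AND VisitNo <= " & [VisitNo]);

    For id 1, the surviving values 1, 3, 5 count as 1, 2, 3 respectively, which is exactly the dense numbering asked for.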

    Read the article

  • Getting maximum value from table using LINQ

    - by Tena
    I have a table in my database, and I want to get the maximum value of a column named NumOfView. I used this code:

        var advert = (from ad in storedb.Ads
                      where ad.AdScope == "1"
                      select ad.NumOfView).Max();

    It works, but when there are two or more rows sharing the same maximum value it doesn't work, and this message appears: "Sequence contains more than one element". What should I do now? Your answers will be very helpful. Thanks
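
    For reference, the aggregate that the LINQ query expresses corresponds to something like the following SQL (table and column names taken from the code above). MAX collapses to a single value even when several rows tie for the maximum, so the tie itself should not break Max(); the quoted message is the one Single() throws, which suggests the failure may come from surrounding code rather than this query:

        SELECT MAX(NumOfView)
        FROM Ads
        WHERE AdScope = '1';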

    Read the article

  • Large Data Table with first column fixed

    - by bhavya_w
    I have the structure shown in this fiddle: http://jsfiddle.net/5LN7U/

        <section class="container">
          <section class="field">
            <ul>
              <li> Question 1 </li>
              <li> question 2 </li>
              <li> question 3 </li>
              <li> question 4 </li>
              <li> question 5 </li>
              <li> question 6 </li>
              <li> question 7 </li>
            </ul>
          </section>
          <section class="datawrap">
            <section class="datawrapinner">
              <ul>
                <li><b>Answer 1 :</b></li>
                <li><b>Answer 2 :</b></li>
                <li><b>Answer 3 :</b></li>
                <li><b>Answer 4 :</b></li>
                <li><b>Answer 5 :</b></li>
                <li><b>Answer 6 :</b></li>
                <li><b>Answer 7 :</b></li>
              </ul>
            </section>
          </section>
        </section>

    Basically it's a table structure made using divs. The first column contains a long list of questions, and the second column contains answers (possibly multiple per question), which can be quite big; there has to be horizontal scrolling in the second column. The problem I am facing: when I scroll down the page, the second column's horizontal scrollbar scrolls out of view with it. I want that horizontal scrollbar to stay fixed in place, no matter how far I scroll vertically. Much like Google Spreadsheets, where the first column stays fixed, the rest of the columns scroll horizontally, and the whole data set scrolls vertically. I cannot use position: fixed on the second column. P.S.: please, no lectures on using divs to make a table structure; I have my own reasons, and it's kind of urgent. Thanks in advance.

    Read the article

  • why do I need virtual table?

    - by lego69
    I was looking for some information about virtual tables, but I can't find anything easy to understand. Can somebody give me a good example (not from Wikipedia, please) with explanations, or a link? Thanks in advance.

    Read the article

  • SQL Server : copy data from one table to another

    - by Gladdy
    I want to update the names in Table2 with the names from Table1 for matching IDs. I have around 100 rows in each table. Here are my sample tables:

    Table1 (ID, Name)
    Table2 (ID, Name)

    Sample data:

    Table1
    ID | Name
    ---------
    1  | abc
    2  | bcd

    Table2
    ID | Name
    ---------
    1  | xyz
    2  | OOS

    Expected result:

    Table2
    ID | Name
    ---------
    1  | abc
    2  | bcd

    How can I do this?
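
    An UPDATE with a join does this in SQL Server. A minimal sketch using the table names from the question:

        UPDATE t2
        SET t2.Name = t1.Name
        FROM Table2 AS t2
        INNER JOIN Table1 AS t1
            ON t1.ID = t2.ID;
        -- Only rows whose ID also exists in Table1 are touched;
        -- any other rows in Table2 keep their current Name.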

    Read the article

  • Optimistic non-locking copy of InnoDB .frm files

    - by jothir
    MySQL Enterprise Backup (MEB) does hot backup of InnoDB data and log files. Up to MEB 3.6.1, the user backed up only the InnoDB tables in a three-step process:

    STEP 1. Take the backup using the --only-innodb option.
    STEP 2. Temporarily make the tables read only by executing "FLUSH TABLES WITH READ LOCK".
    STEP 3. Manually copy the .frm files of the InnoDB tables to the destination directory where the backup is stored.

    MEB 3.7.0 has an enhancement to InnoDB file copying: the .frm files get copied along with the hot backup done for the InnoDB files. I would like to make the blog a little interactive by explaining the feature as answers:

    1. What are these .frm files?
    The files containing the metadata, such as the table definition, of a MySQL table. For backups, the full set of .frm files is always required along with the backup data, to be able to restore tables that are altered or dropped after the backup.

    2. Can the .frm files not be copied by MEB itself?
    --only-innodb-with-frm is the new option introduced in MEB 3.7.1 to copy the .frm files without locking the tables during the backup operation itself. This is to reduce the pain of manually copying the .frm files. The option is intended for backups where you can ensure that no ALTER TABLE, CREATE TABLE, DROP TABLE, or other DDL statements modify the .frm files of InnoDB tables during the backup operation.

    3. How is data consistency ensured?
    MEB validates the .frm files after copying, comparing against the server directory to see whether the timestamp of any .frm file is greater than the saved system time (check .frm time). A changed timestamp on a .frm file shows that a table was altered during the backup. The total number of .frm files in the server directory is also verified against the copied contents: fewer copied .frm files than in the server directory means a table or tables were dropped during the backup; more means a new table or tables were created during the backup operation.

    4. How does MEB handle data inconsistency?
    MEB copies the .frm files through several iterations, does the validation, and throws a WARNING if any inconsistency is found in the .frm files at the end of the backup operation. The user is thereby warned that DDL operations occurred during the backup, and has to copy the .frm files manually or run the backup again.

    5. What is the option, and how is it used?
    The new option is --only-innodb-with-frm, which does an optimistic copy of the .frm files without locking. It can be used when the user wants to back up only InnoDB tables along with their .frm files. The option takes one of two values: all | related. --only-innodb-with-frm=all copies the .frm files of all InnoDB tables. --only-innodb-with-frm=related works in conjunction with the --include option, to allow a partial backup of only the .frm files corresponding to the tables specified in --include.

    Let me show the usage with example output:

        ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrmAll --only-innodb-with-frm=all backup

        MySQL Enterprise Backup version 3.7.1 [2012/06/05]
        Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved.

        INFO: Starting with following command line ...
        ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrmAll
               --only-innodb-with-frm=all backup

        INFO: Got some server configuration information from running server.

        IMPORTANT: Please check that mysqlbackup run completes successfully.
                   At the end of a successful 'backup' run mysqlbackup
                   prints "mysqlbackup completed OK!".

        --------------------------------------------------------------------
                             Server Repository Options:
        --------------------------------------------------------------------
          datadir                   = /mysql/trydb/
          innodb_data_home_dir      =
          innodb_data_file_path     = ibdata1:10M:autoextend
          innodb_log_group_home_dir = /mysql/trydb/
          innodb_log_files_in_group = 2
          innodb_log_file_size      = 5242880

        --------------------------------------------------------------------
                             Backup Config Options:
        --------------------------------------------------------------------
          datadir                   = /logs/backupWithFrmAll/datadir
          innodb_data_home_dir      = /logs/backupWithFrmAll/datadir
          innodb_data_file_path     = ibdata1:10M:autoextend
          innodb_log_group_home_dir = /logs/backupWithFrmAll/datadir
          innodb_log_files_in_group = 2
          innodb_log_file_size      = 5242880

        mysqlbackup: INFO: Unique generated backup id for this is 13451979804504860
        mysqlbackup: INFO: Uses posix_fadvise() for performance optimization.
        mysqlbackup: INFO: System tablespace file format is Antelope.
        mysqlbackup: INFO: Found checkpoint at lsn 1656792.
        mysqlbackup: INFO: Starting log scan from lsn 1656320.
        120817 15:36:22 mysqlbackup: INFO: Copying log...
        120817 15:36:22 mysqlbackup: INFO: Log copied, lsn 1656792.
                 We wait 1 second before starting copying the data files...
        120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/ibdata1 (Antelope file format).
        120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table2.ibd (Antelope file format).
        120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table3.ibd (Antelope file format).
        120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table1.ibd (Antelope file format).
        mysqlbackup: INFO: Opening backup source directory '/mysql/trydb/'
        120817 15:36:23 mysqlbackup: INFO: Starting to backup .frm files in the subdirectories of /mysql/trydb/
        mysqlbackup: INFO: Copying innodb data and logs during final stage ...
        mysqlbackup: INFO: A copied database page was modified at 1656792.
                 (This is the highest lsn found on page)
                 Scanned log up to lsn 1656792.
                 Was able to parse the log up to lsn 1656792.
                 Maximum page number for a log record 0
        mysqlbackup: INFO: Copying non-innodb files took 2.000 seconds
        120817 15:36:25 mysqlbackup: INFO: Full backup completed!
        mysqlbackup: INFO: Backup created in directory '/logs/backupWithFrmAll'

        -------------------------------------------------------------
          Parameters Summary
        -------------------------------------------------------------
          Start LSN : 1656320
          End LSN   : 1656792
        -------------------------------------------------------------

        mysqlbackup completed OK!

        bash$ ls /logs/backupWithFrmAll/datadir/innodb1/
        table1.frm  table1.ibd  table2.frm  table2.ibd  table3.frm  table3.ibd

    Here the backup directory contains the .frm files of all the InnoDB tables.

        ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrm --include="innodb1.table3.*" --only-innodb-with-frm=related backup

        MySQL Enterprise Backup version 3.7.1 [2012/06/05]
        Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved.

        INFO: Starting with following command line ...
        ./mysqlbackup -uroot --backup-dir=/logs/backup371frm
               --include=innodb1.table3.* --only-innodb-with-frm=related backup

        INFO: Got some server configuration information from running server.

        IMPORTANT: Please check that mysqlbackup run completes successfully.
                   At the end of a successful 'backup' run mysqlbackup
                   prints "mysqlbackup completed OK!".

        --------------------------------------------------------------------
                             Server Repository Options:
        --------------------------------------------------------------------
          datadir                   = /mysql/trydb/
          innodb_data_home_dir      =
          innodb_data_file_path     = ibdata1:10M:autoextend
          innodb_log_group_home_dir = /mysql/trydb
          innodb_log_files_in_group = 2
          innodb_log_file_size      = 5242880

        --------------------------------------------------------------------
                             Backup Config Options:
        --------------------------------------------------------------------
          datadir                   = /logs/backupWithFrm/datadir
          innodb_data_home_dir      = /logs/backupWithFrm/datadir
          innodb_data_file_path     = ibdata1:10M:autoextend
          innodb_log_group_home_dir = /logs/backupWithFrm/datadir
          innodb_log_files_in_group = 2
          innodb_log_file_size      = 5242880

        mysqlbackup: INFO: Unique generated backup id for this is 13451973458118162
        mysqlbackup: INFO: Uses posix_fadvise() for performance optimization.
        mysqlbackup: INFO: The --include option specified: innodb1.table3.*
        mysqlbackup: INFO: System tablespace file format is Antelope.
        mysqlbackup: INFO: Found checkpoint at lsn 1656792.
        mysqlbackup: INFO: Starting log scan from lsn 1656320.
        120817 15:25:47 mysqlbackup: INFO: Copying log...
        120817 15:25:47 mysqlbackup: INFO: Log copied, lsn 1656792.
                 We wait 1 second before starting copying the data files...
        120817 15:25:48 mysqlbackup: INFO: Copying /mysql/trydb/ibdata1 (Antelope file format).
        120817 15:25:49 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table3.ibd (Antelope file format).
        mysqlbackup: INFO: Opening backup source directory '/mysql/trydb'
        120817 15:25:49 mysqlbackup: INFO: Starting to backup .frm files in the subdirectories of /mysql/trydb
        mysqlbackup: INFO: Copying innodb data and logs during final stage ...
        mysqlbackup: INFO: A copied database page was modified at 1656792.
                 (This is the highest lsn found on page)
                 Scanned log up to lsn 1656792.
                 Was able to parse the log up to lsn 1656792.
                 Maximum page number for a log record 0
        mysqlbackup: INFO: Copying non-innodb files took 2.000 seconds
        120817 15:25:51 mysqlbackup: INFO: Full backup completed!
        mysqlbackup: INFO: Backup created in directory '/logs/backupWithFrm'

        -------------------------------------------------------------
          Parameters Summary
        -------------------------------------------------------------
          Start LSN : 1656320
          End LSN   : 1656792
        -------------------------------------------------------------

        mysqlbackup completed OK!

        bash$ ls /logs/backupWithFrm/datadir/innodb1/
        table3.frm  table3.ibd

    Thus the backup directory contains only the .frm file matching the InnoDB table name specified in the --include option.

    In a nutshell, we present our great new option --only-innodb-with-frm, which gives a true hot InnoDB-only backup including the .frm files, with an additional check for whether any DDL happened during the backup. If DDL has happened, the DBA can decide whether to repeat the backup or to live with the potential inconsistency. This is the ideal solution for users who keep all their "real" data in InnoDB and seldom change their schemas.

    You may also like: http://dev.mysql.com/doc/mysql-enterprise-backup/3.7/en/backup-partial-options.html
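
    For contrast, the locking that the old three-step procedure relied on can be sketched at the SQL level. These are standard MySQL statements; the manual .frm copy happens in between, and it is exactly this lock that --only-innodb-with-frm avoids holding:

        -- Step 2 of the old procedure: block DDL while the .frm files are copied.
        FLUSH TABLES WITH READ LOCK;
        -- (Step 3 happens outside SQL: copy the .frm files out of the datadir
        --  while this session holds the global read lock.)
        UNLOCK TABLES;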

    Read the article

  • Wicket, Spring and Hibernate - Testing with Unitils - Error: Table not found in statement [select re

    - by John
    Hi there. I've been following a tutorial and a sample application, namely 5 Days of Wicket - Writing the tests: http://www.mysticcoders.com/blog/2009/03/10/5-days-of-wicket-writing-the-tests/

    I've set up my own little project with a simple shoutbox that saves messages to a database. I then wanted to set up a couple of tests that would make sure that if a message is stored in the database, the retrieved object contains the exact same data. Upon running mvn test, all my tests fail. I've noticed that even though my unitils.properties says to use the 'hsqldb' dialect, this message is still output in the console window when starting the tests: INFO - Dialect - Using dialect: org.hibernate.dialect.PostgreSQLDialect. I've added the entire dump from the console at the bottom of this post (which goes on for miles and miles :-)).

    The exception is:

        Caused by: java.sql.SQLException: Table not found in statement [select relname from pg_class]
            at org.hsqldb.jdbc.Util.sqlException(Unknown Source)
            at org.hsqldb.jdbc.jdbcStatement.fetchResult(Unknown Source)
            at org.hsqldb.jdbc.jdbcStatement.executeQuery(Unknown Source)
            at org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:188)
            at org.hibernate.tool.hbm2ddl.DatabaseMetadata.initSequences(DatabaseMetadata.java:151)
            at org.hibernate.tool.hbm2ddl.DatabaseMetadata.<init>(DatabaseMetadata.java:69)
            at org.hibernate.tool.hbm2ddl.DatabaseMetadata.<init>(DatabaseMetadata.java:62)
            at org.springframework.orm.hibernate3.LocalSessionFactoryBean$3.doInHibernate(LocalSessionFactoryBean.java:958)
            at org.springframework.orm.hibernate3.HibernateTemplate.doExecute(HibernateTemplate.java:419)
            ... 49 more

    I've set up my unitils.properties file like so:

        database.driverClassName=org.hsqldb.jdbcDriver
        database.url=jdbc:hsqldb:mem:PUBLIC
        database.userName=sa
        database.password=
        database.dialect=hsqldb
        database.schemaNames=PUBLIC

    My abstract IntegrationTest class:

        @SpringApplicationContext({"/com/upbeat/shoutbox/spring/applicationContext.xml", "applicationContext-test.xml"})
        public abstract class AbstractIntegrationTest extends UnitilsJUnit4 {
            private ApplicationContext applicationContext;
        }

    applicationContext-test.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:tx="http://www.springframework.org/schema/tx"
               xsi:schemaLocation="
                   http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
                   http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.5.xsd">

            <bean id="dataSource" class="org.unitils.database.UnitilsDataSourceFactoryBean"/>

        </beans>

    and finally, one of the test classes:

        package com.upbeat.shoutbox.web;

        import org.apache.wicket.spring.injection.annot.test.AnnotApplicationContextMock;
        import org.apache.wicket.util.tester.WicketTester;
        import org.junit.Before;
        import org.junit.Test;
        import org.unitils.spring.annotation.SpringBeanByType;

        import com.upbeat.shoutbox.HomePage;
        import com.upbeat.shoutbox.integrations.AbstractIntegrationTest;
        import com.upbeat.shoutbox.persistence.ShoutItemDao;
        import com.upbeat.shoutbox.services.ShoutService;

        public class TestHomePage extends AbstractIntegrationTest {

            @SpringBeanByType
            private ShoutService svc;

            @SpringBeanByType
            private ShoutItemDao dao;

            protected WicketTester tester;

            @Before
            public void setUp() {
                AnnotApplicationContextMock appctx = new AnnotApplicationContextMock();
                appctx.putBean("shoutItemDao", dao);
                appctx.putBean("shoutService", svc);
                tester = new WicketTester();
            }

            @Test
            public void testRenderMyPage() {
                // start and render the test page
                tester.startPage(HomePage.class);
                // assert rendered page class
                tester.assertRenderedPage(HomePage.class);
                // assert rendered label component
                tester.assertLabel("message", "If you see this message wicket is properly configured and running");
            }
        }

    Dump from console when running mvn test:

        [INFO] Scanning for projects...
        [INFO] ------------------------------------------------------------------------
        [INFO] Building shoutbox
        [INFO]    task-segment: [test]
        [INFO] ------------------------------------------------------------------------
        [INFO] [resources:resources {execution: default-resources}]
        [WARNING] File encoding has not been set, using platform encoding Cp1252, i.e. build is platform dependent!
        [WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent!
        [INFO] Copying 3 resources
        [INFO] Copying 4 resources
        [INFO] [compiler:compile {execution: default-compile}]
        [INFO] Nothing to compile - all classes are up to date
        [INFO] [resources:testResources {execution: default-testResources}]
        [WARNING] File encoding has not been set, using platform encoding Cp1252, i.e. build is platform dependent!
        [WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent!
        [INFO] Copying 2 resources
        [INFO] [compiler:testCompile {execution: default-testCompile}]
        [INFO] Nothing to compile - all classes are up to date
        [INFO] [surefire:test {execution: default-test}]
        [INFO] Surefire report directory: F:\Projects\shoutbox\target\surefire-reports
        INFO - ConfigurationLoader - Loaded main configuration file unitils-default.properties from classpath.
        INFO - ConfigurationLoader - Loaded custom configuration file unitils.properties from classpath.
        INFO - ConfigurationLoader - No local configuration file unitils-local.properties found.
        -------------------------------------------------------
         T E S T S
        -------------------------------------------------------
        Running com.upbeat.shoutbox.web.TestViewShoutsPage
        Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.02 sec
        INFO - Version - Hibernate Annotations 3.4.0.GA
        INFO - Environment - Hibernate 3.3.0.SP1
        INFO - Environment - hibernate.properties not found
        INFO - Environment - Bytecode provider name : javassist
        INFO - Environment - using JDK 1.4 java.sql.Timestamp handling
        INFO - Version - Hibernate Commons Annotations 3.1.0.GA
        INFO - AnnotationBinder - Binding entity from annotated class: com.upbeat.shoutbox.models.ShoutItem
        INFO - QueryBinder - Binding Named query: item.getById => from ShoutItem item where item.id = :id
        INFO - QueryBinder - Binding Named query: item.find => from ShoutItem item order by item.timestamp desc
        INFO - QueryBinder - Binding Named query: item.count => select count(item) from ShoutItem item
        INFO - EntityBinder - Bind entity com.upbeat.shoutbox.models.ShoutItem on table SHOUT_ITEMS
        INFO - AnnotationConfiguration - Hibernate Validator not found: ignoring
        INFO - notationSessionFactoryBean - Building new Hibernate SessionFactory
        INFO - earchEventListenerRegister - Unable to find org.hibernate.search.event.FullTextIndexEventListener on the classpath. Hibernate Search is not enabled.
        INFO - ConnectionProviderFactory - Initializing connection provider: org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider
        INFO - SettingsFactory - RDBMS: HSQL Database Engine, version: 1.8.0
        INFO - SettingsFactory - JDBC driver: HSQL Database Engine Driver, version: 1.8.0
        INFO - Dialect - Using dialect: org.hibernate.dialect.PostgreSQLDialect
        INFO - TransactionFactoryFactory - Transaction strategy: org.springframework.orm.hibernate3.SpringTransactionFactory
        INFO - actionManagerLookupFactory - No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended)
        INFO - SettingsFactory - Automatic flush during beforeCompletion(): disabled
        INFO - SettingsFactory - Automatic session close at end of transaction: disabled
        INFO - SettingsFactory - JDBC batch size: 1000
        INFO - SettingsFactory - JDBC batch updates for versioned data: disabled
        INFO - SettingsFactory - Scrollable result sets: enabled
        INFO - SettingsFactory - JDBC3 getGeneratedKeys(): disabled
        INFO - SettingsFactory - Connection release mode: auto
        INFO - SettingsFactory - Default batch fetch size: 1
        INFO - SettingsFactory - Generate SQL with comments: disabled
        INFO - SettingsFactory - Order SQL updates by primary key: disabled
        INFO - SettingsFactory - Order SQL inserts for batching: disabled
        INFO - SettingsFactory - Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory
        INFO - ASTQueryTranslatorFactory - Using ASTQueryTranslatorFactory
        INFO - SettingsFactory - Query language substitutions: {}
        INFO - SettingsFactory - JPA-QL strict compliance: disabled
        INFO - SettingsFactory - Second-level cache: enabled
        INFO - SettingsFactory - Query cache: enabled
        INFO - SettingsFactory - Cache region factory : org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge
        INFO - FactoryCacheProviderBridge - Cache provider: org.hibernate.cache.HashtableCacheProvider
        INFO - SettingsFactory - Optimize cache for minimal puts: disabled
        INFO - SettingsFactory - Structured second-level cache entries: disabled
        INFO - SettingsFactory - Query cache factory: org.hibernate.cache.StandardQueryCacheFactory
        INFO - SettingsFactory - Echoing all SQL to stdout
        INFO - SettingsFactory - Statistics: disabled
        INFO - SettingsFactory - Deleted entity synthetic identifier rollback: disabled
        INFO - SettingsFactory - Default entity-mode: pojo
        INFO - SettingsFactory - Named query checking : enabled
        INFO - SessionFactoryImpl - building session factory
        INFO - essionFactoryObjectFactory - Not binding factory to JNDI, no JNDI name configured
        INFO - UpdateTimestampsCache - starting update timestamps cache at region: org.hibernate.cache.UpdateTimestampsCache
        INFO - StandardQueryCache - starting query cache at region: org.hibernate.cache.StandardQueryCache
        INFO - notationSessionFactoryBean - Updating database schema for Hibernate SessionFactory
        INFO - Dialect - Using dialect: org.hibernate.dialect.PostgreSQLDialect
        INFO - XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [org/springframework/jdbc/support/sql-error-codes.xml]
        INFO - SQLErrorCodesFactory - SQLErrorCodes loaded: [DB2, Derby, H2, HSQL, Informix, MS-SQL, MySQL, Oracle, PostgreSQL, Sybase]
        INFO - DefaultListableBeanFactory - Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@3e0ebb: defining beans [propertyConfigurer,dataSource,sessionFactory,shoutService,shoutItemDao,wicketApplication,org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0,org.springframework.transaction.interceptor.TransactionInterceptor#0,org.springframework.transaction.config.internalTransactionAdvisor,transactionManager]; root of factory hierarchy
        INFO - sPathXmlApplicationContext - Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@a8e586: display name [org.springframework.context.support.ClassPathXmlApplicationContext@a8e586]; startup date [Tue May 04 18:19:58 CEST 2010]; root of context hierarchy
        INFO - XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [com/upbeat/shoutbox/spring/applicationContext.xml]
        INFO - XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [applicationContext-test.xml]
        INFO - DefaultListableBeanFactory - Overriding bean definition for bean 'dataSource': replacing [Generic bean: class [org.apache.commons.dbcp.BasicDataSource]; scope=singleton; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=close; defined in class path resource [com/upbeat/shoutbox/spring/applicationContext.xml]] with [Generic bean: class [org.unitils.database.UnitilsDataSourceFactoryBean]; scope=singleton; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null; defined in class path resource [applicationContext-test.xml]]
        INFO - sPathXmlApplicationContext - Bean factory for application context [org.springframework.context.support.ClassPathXmlApplicationContext@a8e586]: org.springframework.beans.factory.support.DefaultListableBeanFactory@5dfaf1
        INFO - pertyPlaceholderConfigurer - Loading properties file from class path resource [application.properties]
        INFO - DefaultListableBeanFactory - Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@5dfaf1: defining beans [propertyConfigurer,dataSource,sessionFactory,shoutService,shoutItemDao,wicketApplication,org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0,org.springframework.transaction.interceptor.TransactionInterceptor#0,org.springframework.transaction.config.internalTransactionAdvisor,transactionManager]; root of factory hierarchy
        INFO - AnnotationBinder - Binding entity from annotated class: com.upbeat.shoutbox.models.ShoutItem
        INFO - QueryBinder - Binding Named query: item.getById => from ShoutItem item where item.id = :id
        INFO - QueryBinder - Binding Named query: item.find => from ShoutItem item order by item.timestamp desc
        INFO - QueryBinder - Binding Named query: item.count => select count(item) from ShoutItem item
        INFO - EntityBinder - Bind entity com.upbeat.shoutbox.models.ShoutItem on table SHOUT_ITEMS
        INFO - AnnotationConfiguration - Hibernate Validator not found: ignoring
        INFO - notationSessionFactoryBean - Building new Hibernate SessionFactory
        INFO - earchEventListenerRegister - Unable to find org.hibernate.search.event.FullTextIndexEventListener on the classpath. Hibernate Search is not enabled.
        INFO - ConnectionProviderFactory - Initializing connection provider: org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider
        INFO - SettingsFactory - RDBMS: HSQL Database Engine, version: 1.8.0
        INFO - SettingsFactory - JDBC driver: HSQL Database Engine Driver, version: 1.8.0
        INFO - Dialect - Using dialect: org.hibernate.dialect.PostgreSQLDialect
        INFO - TransactionFactoryFactory - Transaction strategy: org.springframework.orm.hibernate3.SpringTransactionFactory
        INFO - actionManagerLookupFactory - No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended)
        INFO - SettingsFactory - Automatic flush during beforeCompletion(): disabled
        INFO - SettingsFactory - Automatic session close at end of transaction: disabled
        INFO - SettingsFactory - JDBC batch size: 1000
        INFO - SettingsFactory - JDBC batch updates for versioned data: disabled
        INFO - SettingsFactory - Scrollable result sets: enabled
        INFO - SettingsFactory - JDBC3 getGeneratedKeys(): disabled
        INFO - SettingsFactory - Connection release mode: auto
        INFO - SettingsFactory - Default batch fetch size: 1
        INFO - SettingsFactory - Generate SQL with comments: disabled
        INFO - SettingsFactory - Order SQL updates by primary key: disabled
        INFO - SettingsFactory - Order SQL inserts for batching: disabled
        INFO - SettingsFactory - Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory
        INFO - ASTQueryTranslatorFactory - Using ASTQueryTranslatorFactory
        INFO - SettingsFactory - Query language substitutions: {}
        INFO - SettingsFactory - JPA-QL strict compliance: disabled
        INFO - SettingsFactory - Second-level cache: enabled
        INFO - SettingsFactory - Query cache: enabled
        INFO - SettingsFactory - Cache region factory : org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge
        INFO - FactoryCacheProviderBridge - Cache provider: org.hibernate.cache.HashtableCacheProvider
        INFO - SettingsFactory - Optimize cache for minimal puts: disabled
        INFO - SettingsFactory - Structured second-level cache entries: disabled
        INFO - SettingsFactory - Query cache factory: org.hibernate.cache.StandardQueryCacheFactory
        INFO - SettingsFactory - Echoing all SQL to stdout
        INFO - SettingsFactory - Statistics: disabled
        INFO - SettingsFactory - Deleted entity synthetic identifier rollback: disabled
        INFO - SettingsFactory - Default entity-mode: pojo
        INFO - SettingsFactory - Named query checking : enabled
        INFO - SessionFactoryImpl - building session factory
        INFO - essionFactoryObjectFactory - Not binding factory to JNDI, no JNDI name configured
        INFO - UpdateTimestampsCache - starting update timestamps cache at region: org.hibernate.cache.UpdateTimestampsCache
        INFO - StandardQueryCache - starting query cache at region: org.hibernate.cache.StandardQueryCache
        INFO - notationSessionFactoryBean - Updating database schema for Hibernate SessionFactory
        INFO - Dialect - Using dialect: org.hibernate.dialect.PostgreSQLDialect
        INFO - DefaultListableBeanFactory - Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@5dfaf1: defining beans [propertyConfigurer,dataSource,sessionFactory,shoutService,shoutItemDao,wicketApplication,org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0,org.springframework.transaction.interceptor.TransactionInterceptor#0,org.springframework.transaction.config.internalTransactionAdvisor,transactionManager]; root of factory hierarchy
        Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.34 sec <<< FAILURE!
        Running com.upbeat.shoutbox.integrations.ShoutItemIntegrationTest
        Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 sec <<< FAILURE!
        Running com.upbeat.shoutbox.mocks.ShoutServiceTest
        Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.01 sec <<< FAILURE!

        Results :

        Tests in error:
          initializationError(com.upbeat.shoutbox.web.TestViewShoutsPage)
          testRenderMyPage(com.upbeat.shoutbox.web.TestHomePage)
          initializationError(com.upbeat.shoutbox.integrations.ShoutItemIntegrationTest)
          initializationError(com.upbeat.shoutbox.mocks.ShoutServiceTest)

        Tests run: 4, Failures: 0, Errors: 4, Skipped: 0

        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD FAILURE
        [INFO] ------------------------------------------------------------------------
        [INFO] There are test failures.
        Please refer to F:\Projects\shoutbox\target\surefire-reports for the individual test results.
        [INFO] ------------------------------------------------------------------------
        [INFO] For more information, run Maven with the -e switch
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 3 seconds
        [INFO] Finished at: Tue May 04 18:19:58 CEST 2010
        [INFO] Final Memory: 13M/31M
        [INFO] ------------------------------------------------------------------------

    Any help is greatly appreciated.

    Read the article

  • Best way to cache resized images using PHP and MySQL

    - by Chris Hawes
    What would be the best-practice way to handle the caching of images using PHP? The filename is currently stored in a MySQL database, and the image is renamed to a GUID on upload, along with the original filename and alt tag. When the image is put into the HTML pages, it is done so using a URL such as '/images/get/200x200/{guid}.jpg', which is rewritten to a PHP script. This allows my designers to specify (roughly - the source image may be smaller) the file size. The PHP script then creates a hash of the size (200x200 in the URL) and the GUID filename, and if the file has been generated before (a file with the name of the hash exists in the TMP directory), it sends the file from the application TMP directory. If the hashed filename does not exist, it is created, written to disk and served up in the same manner. Is this as efficient as it could be? (It also supports watermarking the images, and the watermarking settings are stored in the hash as well, but that's out of scope for this.)

    Read the article

  • PHP: How to use mysql fulltext search and handle fulltext search result

    - by garcon1986
    Hello, I have tried to use MySQL fulltext search on my intranet. I want to use it to search multiple tables and get independent, per-table results on the results page. This is what I did for searching:

        $query = "SELECT *
                  FROM testtable t1, testtable2 t2, testtable3 t3
                  WHERE match(t1.firstName, t1.lastName, t1.details) against(' ".$value."')
                     or match(t2.others, t2.information, t2.details) against(' ".$value."')
                     or match(t3.other, t3.info, t3.details) against(' ".$value."')";
        $result = mysql_query($query) or die('query error'.mysql_error());
        while ($row = mysql_fetch_assoc($result)) {
            echo $row['firstName'];
            echo $row['lastName'];
            echo $row['details'].'<br />';
        }

    Do you have any ideas about optimizing the query and formatting the output of the search results?
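
    One way to keep the results separable per table is to run one MATCH per table and combine them with UNION ALL plus a source column, instead of cross-joining all three tables. A sketch using the table and column names from the question, assuming each column list has a FULLTEXT index, with 'searchterm' as a placeholder for the (properly escaped) user input:

        SELECT 'testtable' AS source, firstName AS col1, lastName AS col2, details AS col3
        FROM testtable
        WHERE MATCH(firstName, lastName, details) AGAINST ('searchterm')
        UNION ALL
        SELECT 'testtable2', others, information, details
        FROM testtable2
        WHERE MATCH(others, information, details) AGAINST ('searchterm')
        UNION ALL
        SELECT 'testtable3', other, info, details
        FROM testtable3
        WHERE MATCH(other, info, details) AGAINST ('searchterm');

    The source column then lets the PHP side print the hits grouped by table.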

    Read the article

  • I need some help with either my SQL or my PHP, I do not know which...

    - by sico87
    Hello, I am creating a CMS, and part of its functionality is that the images within the content are manageable. I am currently trying to display a table that shows the content title and then the associated images. Ideally I would like a layout similar to this:

    Content Title
        Image 1
        Image 2
        Image 3
    Content Title 2
        Image 1
        Image 2
    Content Title 3
        Image 1

    The SQL that returns the data is actually formed using CodeIgniter's Active Record class:

        function getAllContentImages()
        {
            $this->db->select('*');
            $this->db->from('contentImagesTable');
            $this->db->join('contentTable', 'contentTable.contentId = contentImagesTable.contentId');
            $this->db->join('categoryTable', 'categoryTable.categoryId = contentTable.categoryId');
            $query = $this->db->get();
            return $query->result_array();
        }

    The array that is returned looks like this (I have cut the size down for readability):

        Array
        (
            [0] => Array
                (
                    [contentImageId] => 25
                    [contentImageName] => green.png
                    [contentImageType] => .png
                    [contentImagePath] => /var/www/bangmarketing.bang/media/uploads/contentImages/2/green.png
                    [isHeadlineImage] => 1
                    [contentImageDateUploaded] => 1265222654
                    [contentId] => 2
                    [dashboardUserId] => 0
                    [contentTitle] => sadsadsadassss
                    [contentAbstract] => <p>Pllllleeeeeeeaaaaasssssseeeeee Work</p>
                    [contentBody] => <p>Please work :-( please</p>
                    [contentOnline] => 0
                    [contentAllowComments] => 0
                    [contentDateCreated] => 1265124038
                    [categoryId] => 1
                    [categoryTitle] => blogsss
                    [categoryAbstract] => <p>asdsdsadasdsadfdsgdgdsgdsgssssssssssss</p>
                    [categorySlug] => blog
                    [categoryIsSpecial] => 0
                    [categoryOnline] => 1
                    [categoryDateCreated] => 1266588327
                )
            [1] => Array
                (
                    [contentImageId] => 28
                    [contentImageName] => yellow.png
                    [contentImageType] => .png
                    [contentImagePath] => /var/www/bangmarketing.bang/media/uploads/contentImages/7/yellow.png
                    [isHeadlineImage] => 1
                    [contentImageDateUploaded] => 1265388055
                    [contentId] => 7
                    [dashboardUserId] => 0
                    [contentTitle] => Another Blog
                    [contentAbstract] => <p>This is another blog and it is shit becuase this does not work</p>
                    [contentBody] => <p>ioasfihfududfhdufhuishdfiudshfiudhsfiuhdsiufhusdhfuids</p>
                    [contentOnline] => 1
                    [contentAllowComments] => 0
                    [contentDateCreated] => 1265388034
                    [categoryId] => 1
                    [categoryTitle] => blogsss
                    [categoryAbstract] => <p>asdsdsadasdsadfdsgdgdsgdsgssssssssssss</p>
                    [categorySlug] => blog
                    [categoryIsSpecial] => 0
                    [categoryOnline] => 1
                    [categoryDateCreated] => 1266588327
                )
            [2] => Array
                (
                    [contentImageId] => 33
                    [contentImageName] => portaski.jpg
                    [contentImageType] => .jpg
                    [contentImagePath] => /var/www/bangmarketing.bang/media/uploads/contentImages/11/portaski.jpg
                    [isHeadlineImage] => 1
                    [contentImageDateUploaded] => 1265714175
                    [contentId] => 11
                    [dashboardUserId] => 0
                    [contentTitle] => Portaski - new product and brand launch by Bang
                    [contentAbstract] => <p>Bang's experience in new product development has helped launch PortaSki &ndash; the pocket-sized device which is set to revolutionise skiing.</p>
                    [contentBody] => <p>After developing Portaski's brand identity and positioning, Bang re-designed the product and its packaging ahead of launch in late 2008.</p> <p>A media and PR strategy was devised and implemented using Bang's close relationship with two of the UK's most influential organisations in the Advertising and Media Buying industries. On-line advertising was supported with editorial reviews in the UK's leading broadsheets and tabloids, which combined with pin-point HTML direct mail to drive consumers to the new e-commerce site.</p> <p>Impressive month-on-month growth has been achieved since launch, and the direct marketing activity resulted in an unprecedented 2.71% of targets going on-line to purchase a PortaSki.</p> <p>For further information visit <a href="http://www.portaski.com" target="_blank">www.portaski.com</a></p>
                    [contentOnline] => 1
                    [contentAllowComments] => 0
                    [contentDateCreated] => 1265718184
                    [categoryId] => 1
                    [categoryTitle] => blogsss
                    [categoryAbstract] => <p>asdsdsadasdsadfdsgdgdsgdsgssssssssssss</p>
                    [categorySlug] => blog
                    [categoryIsSpecial] => 0
                    [categoryOnline] => 1
                    [categoryDateCreated] => 1266588327
                )
            [3] => Array
                (
                    [contentImageId] => 26
                    [contentImageName] => housingplus.jpg
                    [contentImageType] => .jpg
                    [contentImagePath] => /var/www/bangmarketing.bang/media/uploads/contentImages/5/housingplus.jpg
                    [isHeadlineImage] => 1
                    [contentImageDateUploaded] => 1265284989
                    [contentId] => 5
                    [dashboardUserId] => 0
                    [contentTitle] => Bang launches Housing Plus
                    [contentAbstract] => <p>Bang has launched Housing Plus, the new brand for the Central Borders Housing Group, along with new sub-brands Property Care and SSHA.</p>
                    [contentBody] => <p>The Midlands based Group, with turnover in excess of &pound;21M, appointed Bang in 2008 following an open pitch of over 40 agencies. Bang's work began with an extensive marketing research strategy that challenged the Group's former positioning and brand structure.</p> <p>The research unveiled that the housing sector demanded a values-led Group. This led Bang to develop the brave &lsquo;Together for the Right Reasons' positioning for Housing Plus.</p> <p>Chris Garratt, Marketing Director at Bang explained "The housing sector has witnessed wholesale change in recent years. Much to tenant's dismay, many associations and Groups appear to be losing touch with their roots, we wanted to develop a Group for associations who place principles at the heart of their corporate strategy".</p> <p>The repositioned sub-brands also play an important role in the Group's revised brand by highlighting Housing Plus' willingness to embrace and nurture individual identities. Chris Garratt continued "By adopting a &lsquo;house of brands' hierarchy from the outset, Housing Plus has sent out a strong message to prospective strategic partners".</p> <p>Bang handled all aspects of work for the redevelopment of the three brands, including research, brand creation, naming, positioning, internal branding and communications, advertising, the brand launches, building the brands' on-line presence and the creation of a powerful brand film &ndash; which is already attracting significant interest from across the sector.</p>
                    [contentOnline] => 1
                    [contentAllowComments] => 0
                    [contentDateCreated] => 1265285940
                    [categoryId] => 8
                    [categoryTitle] => News
                    [categoryAbstract] => <p>The world at Bang Marketing moves fast, keep up to date w
                    [categorySlug] => news
                    [categoryIsSpecial] => 0
                    [categoryOnline] => 1
                    [categoryDateCreated] => 1265283717
                )
        )

    I need a way to get all the content images associated with the same content title into one group, and then display them under that content title. Can anyone help?
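
    Since the rows come back flat, one common approach is to order them by content title in the query, so the PHP side can group them in a single pass and print a new heading whenever the title changes. A sketch of the underlying SQL, using the same tables as the CodeIgniter call above with only an ORDER BY added:

        SELECT *
        FROM contentImagesTable
        JOIN contentTable ON contentTable.contentId = contentImagesTable.contentId
        JOIN categoryTable ON categoryTable.categoryId = contentTable.categoryId
        ORDER BY contentTable.contentTitle, contentImagesTable.contentImageId;
        -- With the rows in this order, a loop only needs to compare each row's
        -- contentTitle with the previous one to know when a new group starts.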

    Read the article

  • Work-around for PHP5's PDO rowCount MySQL issue

    - by Steven Surowiec
    I've recently started work on a new project using PHP 5 and want to use its PDO classes for it. The problem is that the MySQL PDO driver doesn't support rowCount(), so there's no way to run a query and then get the number of affected rows, or rows returned, which is a pretty big issue as far as I'm concerned. I was wondering if anyone else has dealt with this before, and what you've done to work around it. Having to do a fetch() or fetchAll() to check whether any rows were affected or returned seems like a hack to me; I'd rather just do $stmt->numRows() or something similar.
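
    At the SQL level, a common workaround is to ask MySQL for the count in a separate aggregate query instead of relying on the driver. A sketch with placeholder table and filter names:

        -- Count the matching rows explicitly rather than via rowCount().
        SELECT COUNT(*) AS row_count
        FROM comments               -- hypothetical table
        WHERE approved = 1;         -- hypothetical filter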

    Read the article

  • Convert data retrieved from MySQL database into JSON object using Python/Django

    - by rohanbk
    I have a MySQL database called People which contains the following schema: <id, name, foodchoice1, foodchoice2>. The database contains a list of people and the two choices of food they wish to have at a party (for example). I want to create some kind of Python web service that will output a JSON object. An example of the output:

        {
            "guestlist": [
                {"id": 1, "name": "Bob", "choice1": "chicken", "choice2": "pasta"},
                {"id": 2, "name": "Alice", "choice1": "pasta", "choice2": "chicken"}
            ],
            "partyname": "My awesome party",
            "day": "1",
            "month": "June",
            "year": "2010"
        }

    Basically every guest is stored in a 'guestlist' dictionary along with their choices of food, and at the end of the JSON object is some additional information that only needs to be mentioned once. The question I have is about the method I should use to grab the data from my database and create the JSON object. Do I need to use the standard Model/View structure of Django, or can I get away with something much simpler, since what I need to do is really simple?

    Read the article

  • free public databases with non-trivial table structures?

    - by Caffeine Coma
    I'm looking for some sample database data that I can use for testing and demonstrating a DB tool I am working on. I need a DB that (preferably) has many tables and many foreign-key relationships between the tables. Ideally the data would be in SQL dump format, or at least in something that maintains the foreign-key references and could be easily imported into an RDBMS (MySQL or H2). The dataset itself doesn't have to be huge (in fact, it's best if it's not). I thought about using the Stack Overflow Data Dump, but it's only about 5 tables.

    Read the article

  • What are good hosting companies for PHP 5.3 Mysql / CouchDb / MongoDB Dev ( Lithium / CakePHP Framew

    - by Abba Bryant
    I am looking for a quality, reliable host for some Lithium development. I don't mind a shared platform as long as I have some SSH access. I require PHP 5.3.x, MySQL 5.x, and the usual ImageMagick etc. Non-relational DB support up front would be nice, but if they let me set one up myself I would be okay with doing that. I don't need a lot in the way of control-panel tools; good ones are appreciated, but bad ones I would prefer not to deal with at all. I don't anticipate needing much in the way of email, but mail support would be nice to have. Cost isn't a big issue: I don't want to pay an arm and a leg, but I don't mind paying for what I need. Good support and decent uptime would be nice, but I don't need an SLO or anything.

    Read the article

  • Check if table exists in C#

    - by apoorv020
    I want to read data from a table whose name is supplied by a user, so before actually starting to read data, I want to check whether the table exists. I have seen several pieces of code on the net which claim to do this; however, they all seem to work only for SQL Server, or for MySQL, or some other implementation. Is there not a generic way to do this? (I am already separately checking that I can connect to the supplied database, so I'm fairly certain a connection can be opened.)
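
    The closest thing to a generic check is the INFORMATION_SCHEMA views, which SQL Server, MySQL, PostgreSQL and others expose (though not every engine does). A sketch, with 'SomeTable' standing in for the user-supplied name:

        SELECT COUNT(*)
        FROM INFORMATION_SCHEMA.TABLES
        WHERE TABLE_NAME = 'SomeTable';
        -- 0 means no such table is visible to the current connection.

    Since the name comes from a user, it should be passed as a command parameter rather than concatenated into the SQL string.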

    Read the article

  • Python / Django : emulating a multidimensional layer on a MySQL database

    - by Sébastien Piquemal
    Hi, I'm working on a Django project where I need to provide a lot of different visualizations of the same data (for example, the average of a value for each month, for each year, for a location, etc.). I used an OLAP database once in college, and I thought it would fit my needs, but it appears to be much too heavy for what I need. The volume of data is actually not very big, so I don't need any optimization, just a way to present different visualizations of the same data without having to write the same code 1000 times. So, to recap, I need a Python library that:

    - emulates a multidimensional database (OLAP style would be nice, because I think it is quite convenient: star structure and everything);
    - is non-intrusive, because I can't modify anything on the existing MySQL database;
    - is easy to use, because otherwise there's no point in replacing one overhead with another.
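
    Even without an OLAP layer, each slice described above is expressible as a plain GROUP BY against the existing MySQL database, which may be enough given the small data volume. A sketch with hypothetical table and column names:

        -- Average of a measure per location, year and month (all names are placeholders).
        SELECT location,
               YEAR(recorded_at)  AS yr,
               MONTH(recorded_at) AS mon,
               AVG(value)         AS avg_value
        FROM measurements
        GROUP BY location, YEAR(recorded_at), MONTH(recorded_at);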

    Read the article

  • MySQL results into array (PHP)

    - by cthulhu
    How can I convert MySQL results (from mysql_fetch_array) into a form like this?

        $some = array(
            "comments" => array(
                array("text" => "hi", "id" => "1"),
                array("text" => "hi", "id" => "2"),
                array("text" => "hi", "id" => "3"),
                array("text" => "hi", "id" => "4")
            )
        );

    while the comments table looks like:

        id | text
        1  | blabla bla
        2  | bla bla

    I've tried to fetch the values with foreach/while and insert them into two arrays, but with no success...

    Read the article
