Search Results

Search found 34274 results on 1371 pages for 'mysql table'.

Page 345/1371 | < Previous Page | 341 342 343 344 345 346 347 348 349 350 351 352  | Next Page >

  • MySQL 5.5: imminent release? Oracle expected to announce the new version of the open-source DBMS on Wednesday

    MySQL 5.5: imminent release? Oracle expected to announce the new version of the open-source DBMS on Wednesday. Update of 13/12/10: this Wednesday, Oracle is holding a webinar to present "a major update to MySQL". Tomas Ulin, Vice President of MySQL Development, and Rob Young, Senior Product Manager, will unveil the latest advances in the open-source DBMS that the database giant picked up with its acquisition of Sun. Oracle announced a release candidate of MySQL 5.5 at Oracle OpenWorld in September (see above). This time, the project leads may announce its general availability.

    Read the article

  • Moving MODx Files to Other MODx Website

    - by Austin
    I have one website with MODx installed at www.website.com/modx/ (keep in mind there are other works-in-progress on this storage server). My issue is that I'm moving all of this, templates, template variables, chunks, snippets, etc., to another server that already has MODx installed on it. My first instinct was to go to phpMyAdmin, export the SQL file, and import it on the new website's server. However, an error occurred when I attempted to do this: it found many duplicates in fields associated with a primary key (because it is the same website, just a redesign). Do I really have to drop the old site's tables and then upload the new SQL file? Please advise.
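    One way to sidestep the duplicate-key errors during the import is to load the exported dump into a separate staging schema first, then copy rows across while telling MySQL what to do on a primary-key clash. A sketch, where the schema names (modx_old, modx_live) and the table (modx_site_templates) are illustrative; each real MODx table would get the same treatment:

        -- Sketch: copy rows from a staging copy of the old database, skipping
        -- any row whose primary key already exists in the live table.
        INSERT IGNORE INTO modx_live.modx_site_templates
        SELECT * FROM modx_old.modx_site_templates;

        -- Or overwrite clashing rows instead of skipping them:
        REPLACE INTO modx_live.modx_site_templates
        SELECT * FROM modx_old.modx_site_templates;

    Which of the two is right depends on whether the redesign's version of a clashing row should win over the old one.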

    Read the article

  • java UnitilsException: Executed scripts table "PUBLIC"."DBMAINTAIN_SCRIPTS" doesn't exist yet or is invalid [closed]

    - by Philippe
    I want to use Unitils to test my Java program, but I get the following error message: "UnitilsException: Executed scripts table "PUBLIC"."DBMAINTAIN_SCRIPTS" doesn't exist yet or is invalid". My tables are defined in a DDL file, but they are not created. Could you tell me whether the dataSetStructureGenerator.xsd.dirName=src/test/resources/dataset-schema property is still active in Unitils 3.3? My EMPLOYEES table is not created, and DBMAINTAIN_SCRIPTS is not created either. Where is my mistake?

    My DDL file:

        SET REFERENTIAL_INTEGRITY FALSE;
        SET DATABASE COLLATION "French";
        SET SCHEMA PUBLIC;
        CREATE TABLE DBMAINTAIN_SCRIPTS (FILE_NAME VARCHAR2(150), FILE_LAST_MODIFIED_AT INTEGER, CHECKSUM VARCHAR2(50), EXECUTED_AT VARCHAR2(20), SUCCEEDED INTEGER);
        CREATE TABLE EMPLOYEES (ID IDENTITY NOT NULL, NAME VARCHAR(20), TITLE VARCHAR(20), SALARY DOUBLE, NI INTEGER NOT NULL);

    My Unitils properties file:

        # comments documenting these unitils configuration properties removed for
        # brevity. look for commenting in unitils-default.properties in the root of the
        # unitils jar if needed.
        unitils.modules=database,dbunit,easymock,inject
        unitils.module.hibernate.enabled=false
        unitils.module.spring.enabled=false
        # these placeholders are set in avaje.properties
        # manages the DBUnit configuration
        database.driverClassName=org.hsqldb.jdbcDriver
        database.url=jdbc:hsqldb:mem:unitils-example
        database.schemaNames=PUBLIC
        database.userName=SA
        database.password=
        database.dialect=hsqldb
        # unitils will construct the test database using the ddl file found in this
        # directory
        dbMaintainer.fileScriptSource.scripts.location=src/main/resources
        updateDataBaseSchema.enabled=true
        sequenceUpdater.sequencevalue.lowestacceptable=100
        dataSetStructureGenerator.xsd.dirName=src/test/resources/dataset-schema
        # dbMaintainer.autoCreateExecutedScriptsTable property to true

    Read the article

  • Power Pivot - Average time per item

    - by Username
    I'm trying to calculate, on average, how long it takes to make each item. Here is the data table:

        Date        Item   Quantity  Operator
        01/01/2014  Item1  3         John
        01/01/2014  Item2  5         John
        02/01/2014  Item1  7         Bob
        02/01/2014  Item2  4         John
        03/01/2014  Item1  2         Bob
        07/01/2014  Item2  3         John

    On 01/01/2014 John made 3 of Item1 and 5 of Item2. If we only had the first two rows, we could guess that it takes 0.375 days to make Item1 and 0.625 days to make Item2. I want to be able to calculate this on average using all the data, taking into account that the operators are obviously working on different items. Thank you
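    For reference, the allocation described above (each operator's day split across the items made that day, in proportion to quantity) can be written in SQL. This is only a sketch: the table name work_log is an assumption, as is the reading that each operator works exactly one full day per date.

        -- Sketch: split each operator-day across its items in proportion to
        -- quantity, then average the per-unit time for each item.
        -- work_log and the one-day-per-operator-per-date rule are assumptions.
        SELECT w.Item,
               SUM(w.Quantity / d.day_total) / SUM(w.Quantity) AS avg_days_per_unit
        FROM work_log w
        JOIN (
            SELECT `Date`, Operator, SUM(Quantity) AS day_total
            FROM work_log
            GROUP BY `Date`, Operator
        ) d ON d.`Date` = w.`Date` AND d.Operator = w.Operator
        GROUP BY w.Item;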

    Read the article

  • SQL query duplicating results [on hold]

    - by Ben
    I have written a query that retrieves the top 5 customers per account manager from my table. Here is the query:

        SELECT account_manager_id, mgap_ska_id, total
        FROM (SELECT account_manager_id, mgap_ska_id,
                     mgap_growth + mgap_recovery AS total,
                     @grp_rank := IF(@current_accmanid = account_manager_id, @grp_rank + 1, 1) AS grp_rank,
                     @current_accmanid := account_manager_id
              FROM mgap_orders
              ORDER BY total DESC) ranked
        WHERE grp_rank <= 5

    and here is the result of the query:

        account_manager_id  mgap_ska_id  total
        159840              5062352      61569.21
        159840              5062352      61569.21
        159840              5062352      61569.21
        159840              5062352      61569.21
        159840              5062352      61569.21
        160023              5024546      52244.29
        160023              5024546      52244.29
        160023              5024546      52244.29
        160023              5024546      52244.29
        160023              5024546      52244.29
        159669              5323292      50126.38
        159669              5323292      50126.38
        159669              5323292      50126.38
        159669              5323292      50126.38
        159669              5323292      50126.38

    As you can see, the query is partially working as needed, except I'm getting duplicates for mgap_ska_id, whereas it should be five distinct mgap_ska_id numbers. Here is a sample of my data:

        mgap_ska_id  account_manager_id  mgap_growth  mgap_recovery
        5057810      64154               0            1160.78
        5178114      24456               0            5773.42
        5292421      160338              0            5146.04
        5414091      24408               0            104.14
        5057810      64154               0            1160.78

    Can anyone see where I've gone wrong in my query, and how/where I might correct the error so I get the top 5 individual customers (mgap_ska_id) instead of a single duplicated top customer?
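    A plausible fix, sketched below: the sample data contains repeated source rows (the 5057810 pair), so collapse them with GROUP BY first; seed the user variables explicitly; and make account_manager_id the leading sort key so the running counter resets per manager. This assumes the duplicates really do come from repeated rows rather than from something else in the data.

        -- Sketch: aggregate so each (manager, customer) pair appears once,
        -- then rank within each manager. The user-variable trick depends on
        -- the ORDER BY inside the derived table, so the manager column must
        -- be the leading sort key.
        SELECT account_manager_id, mgap_ska_id, total
        FROM (
            SELECT s.account_manager_id, s.mgap_ska_id, s.total,
                   @grp_rank := IF(@current_accmanid = s.account_manager_id, @grp_rank + 1, 1) AS grp_rank,
                   @current_accmanid := s.account_manager_id
            FROM (
                SELECT account_manager_id, mgap_ska_id,
                       SUM(mgap_growth + mgap_recovery) AS total
                FROM mgap_orders
                GROUP BY account_manager_id, mgap_ska_id
            ) s
            CROSS JOIN (SELECT @grp_rank := 0, @current_accmanid := NULL) vars
            ORDER BY s.account_manager_id, s.total DESC
        ) ranked
        WHERE grp_rank <= 5;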

    Read the article

  • SAN shows as unallocated in Windows Server

    - by Gareth Ferneyhough
    Hello. We have a SAN drive that shows as unallocated in Windows Server 2008. I believe it is a RAID 10 with 4+ disks, and the disks are in good health. I think a server that we rebuilt tried to connect to the drive and re-initialized it, or rewrote the partition table (excuse my poor terminology). We ran TestDisk on the drive and it shows no partitions, so now we are running a quick search (which is not so quick). Can anyone suggest anything else? Thanks, Gareth

    Read the article

  • MySQL port 3306 blocked in csf yet can still telnet to port 3306 from external host

    - by Neek
    We have a CentOS 6 VPS that was recently migrated to a new machine within the same web hosting company. It's running WHM/cPanel and has csf/lfd installed. csf is set up with a mostly vanilla config. I'm no iptables expert, but csf has not let me down before: if a port isn't in the TCP_IN list, it should be blocked on the firewall by iptables. My problem is that I can telnet to port 3306 from an external host, yet I think iptables ought to be blocking 3306 because of csf's rules. We are now failing a security check because of this open port. (This output is obfuscated to protect the innocent: www.ourhost.com is the host with the firewall problem.)

        [root@nickfenwick log]# telnet www.ourhost.com 3306
        Trying 158.255.45.107...
        Connected to www.ourhost.com.
        Escape character is '^]'.
        HHost 'nickfenwick.com' is not allowed to connect to this MySQL serverConnection closed by foreign host.

    So the connection is established, and MySQL refuses the connection due to its configuration. I need the network connection to be refused at the firewall level, before it reaches MySQL. Using WHM's csf web UI I can see that 'Firewall Configuration' includes a fairly sensible TCP_IN line:

        TCP_IN: 20,21,22,25,53,80,110,143,222,443,465,587,993,995,2077,2078,2082,2083,2086,2087,2095,2096,8080

    (Let's ignore that I could trim that list a little for now; my concern is that 3306 is not in it.) When csf is restarted it logs the usual slew of output as it sets up iptables rules, for example what looks like it blocking all traffic and then allowing specific ports like SSH on 22:

        [cut]
        DROP    all opt -- in * out *  0.0.0.0/0 -> 0.0.0.0/0
        [cut]
        ACCEPT  tcp opt -- in !lo out *  0.0.0.0/0 -> 0.0.0.0/0  state NEW tcp dpt:22
        [cut]

    I can see that iptables is running, and service iptables status returns a long list of firewall rules. Here is my Chain INPUT section from service iptables status; hopefully that's enough to show how the firewall is configured.
        Table: filter
        Chain INPUT (policy DROP)
        num  target      prot opt  source          destination
        1    acctboth    all  --   0.0.0.0/0       0.0.0.0/0
        2    ACCEPT      tcp  --   217.112.88.10   0.0.0.0/0   tcp dpt:53
        3    ACCEPT      udp  --   217.112.88.10   0.0.0.0/0   udp dpt:53
        4    ACCEPT      tcp  --   217.112.88.10   0.0.0.0/0   tcp spt:53
        5    ACCEPT      udp  --   217.112.88.10   0.0.0.0/0   udp spt:53
        6    ACCEPT      tcp  --   8.8.4.4         0.0.0.0/0   tcp dpt:53
        7    ACCEPT      udp  --   8.8.4.4         0.0.0.0/0   udp dpt:53
        8    ACCEPT      tcp  --   8.8.4.4         0.0.0.0/0   tcp spt:53
        9    ACCEPT      udp  --   8.8.4.4         0.0.0.0/0   udp spt:53
        10   ACCEPT      tcp  --   8.8.8.8         0.0.0.0/0   tcp dpt:53
        11   ACCEPT      udp  --   8.8.8.8         0.0.0.0/0   udp dpt:53
        12   ACCEPT      tcp  --   8.8.8.8         0.0.0.0/0   tcp spt:53
        13   ACCEPT      udp  --   8.8.8.8         0.0.0.0/0   udp spt:53
        14   LOCALINPUT  all  --   0.0.0.0/0       0.0.0.0/0
        15   ACCEPT      all  --   0.0.0.0/0       0.0.0.0/0
        16   INVALID     tcp  --   0.0.0.0/0       0.0.0.0/0
        17   ACCEPT      all  --   0.0.0.0/0       0.0.0.0/0   state RELATED,ESTABLISHED
        18   ACCEPT      tcp  --   0.0.0.0/0       0.0.0.0/0   state NEW tcp dpt:20
        19   ACCEPT      tcp  --   0.0.0.0/0       0.0.0.0/0   state NEW tcp dpt:21
        20   ACCEPT      tcp  --   0.0.0.0/0       0.0.0.0/0   state NEW tcp dpt:22
        21   ACCEPT      tcp  --   0.0.0.0/0       0.0.0.0/0   state NEW tcp dpt:25
        22   ACCEPT      tcp  --   0.0.0.0/0       0.0.0.0/0   state NEW tcp dpt:53
        23   ACCEPT      tcp  --   0.0.0.0/0       0.0.0.0/0   state NEW tcp dpt:80
        24   ACCEPT      tcp  --   0.0.0.0/0       0.0.0.0/0   state NEW tcp dpt:110
        25   ACCEPT      tcp  --   0.0.0.0/0       0.0.0.0/0   state NEW tcp dpt:143
        26   ACCEPT      tcp  --   0.0.0.0/0       0.0.0.0/0   state NEW tcp dpt:222
        27   ACCEPT      tcp  --   0.0.0.0/0       0.0.0.0/0   state NEW tcp dpt:443
        28   ACCEPT      tcp  --   0.0.0.0/0       0.0.0.0/0   state NEW tcp dpt:465
        29   ACCEPT      tcp  --   0.0.0.0/0       0.0.0.0/0   state NEW tcp dpt:587
        30   ACCEPT      tcp  --   0.0.0.0/0       0.0.0.0/0   state NEW tcp dpt:993
        31   ACCEPT      tcp  --   0.0.0.0/0       0.0.0.0/0   state NEW tcp dpt:995
        32   ACCEPT      tcp  --   0.0.0.0/0       0.0.0.0/0   state NEW tcp dpt:2077
        33   ACCEPT      tcp  --   0.0.0.0/0       0.0.0.0/0   state NEW tcp dpt:2078
        34   ACCEPT      tcp  --   0.0.0.0/0       0.0.0.0/0   state NEW tcp dpt:2082
        35   ACCEPT      tcp  --   0.0.0.0/0       0.0.0.0/0   state NEW tcp dpt:2083
        36   ACCEPT      tcp  --   0.0.0.0/0       0.0.0.0/0   state NEW tcp dpt:2086
        37   ACCEPT      tcp  --   0.0.0.0/0       0.0.0.0/0   state NEW tcp dpt:2087
        38   ACCEPT      tcp  --   0.0.0.0/0       0.0.0.0/0   state NEW tcp dpt:2095
        39   ACCEPT      tcp  --   0.0.0.0/0       0.0.0.0/0   state NEW tcp dpt:2096
        40   ACCEPT      tcp  --   0.0.0.0/0       0.0.0.0/0   state NEW tcp dpt:8080
        41   ACCEPT      udp  --   0.0.0.0/0       0.0.0.0/0   state NEW udp dpt:20
        42   ACCEPT      udp  --   0.0.0.0/0       0.0.0.0/0   state NEW udp dpt:21
        43   ACCEPT      udp  --   0.0.0.0/0       0.0.0.0/0   state NEW udp dpt:53
        44   ACCEPT      udp  --   0.0.0.0/0       0.0.0.0/0   state NEW udp dpt:222
        45   ACCEPT      udp  --   0.0.0.0/0       0.0.0.0/0   state NEW udp dpt:8080
        46   ACCEPT      icmp --   0.0.0.0/0       0.0.0.0/0   icmp type 8
        47   ACCEPT      icmp --   0.0.0.0/0       0.0.0.0/0   icmp type 0
        48   ACCEPT      icmp --   0.0.0.0/0       0.0.0.0/0   icmp type 11
        49   ACCEPT      icmp --   0.0.0.0/0       0.0.0.0/0   icmp type 3
        50   LOGDROPIN   all  --   0.0.0.0/0       0.0.0.0/0

    What's the next thing to check?

    Read the article

  • MySQL doubles down on InnoDB, which will now be used for the system tables so they benefit from ACID properties

    MySQL doubles down on InnoDB, which will now be used for the system tables so they benefit from ACID properties. Since the acquisition of MySQL by Oracle, the well-known DBMS has been undergoing a slow transition of its storage engine, moving from the original MyISAM engine, which is based on the ISAM (Indexed Sequential Access Method) approach, to InnoDB. That transition was partially completed when InnoDB was made the default engine for MySQL 5.5 and later. Partially,...

    Read the article

  • Excel Pivot Tables -- Divide Numerical Column Data into Ranges

    - by ktm5124
    Hi, I have an Excel spreadsheet with a column called "Time Elapsed" that stores the number of days it took to complete a task. I would like to make a pivot table out of this spreadsheet where I divide the "Time Elapsed" column into ranges, e.g., how many tasks took 0 to 4 days to complete, how many took 5 to 9 days, how many took 10 to 14 days, and how many took 15+ days. Do I have to create new columns in my spreadsheet dedicated to each interval (0 to 4, 5 to 9, etc.), or can I use some feature of pivot tables to separate my one "Time Elapsed" column into intervals? Thanks in advance.

    Read the article

  • MySQL Enterprise Backup 3.8.2 has been released!

    - by Hema Sridharan
    MySQL Enterprise Backup v3.8.2, a maintenance release of the online MySQL backup tool, is now available for download from the My Oracle Support (MOS) website as our latest GA release. It will also be available via the Oracle Software Delivery Cloud in approximately 1-2 weeks. A brief summary of the changes in MySQL Enterprise Backup version 3.8.2 is given below.

    A. Functionality Added or Changed:
    - MySQL Enterprise Backup has a new --on-disk-full command line option. mysqlbackup could hang when the disk became full, rather than detecting the low-space condition. mysqlbackup now monitors disk space when running backup commands, and users can specify the action to take at a disk-full condition with the --on-disk-full option. For more details, refer to this page.
    - MySQL Enterprise Backup has a new progress report feature, which periodically outputs short progress indicators on its operations to user-selected destinations (for example, stdout, stderr, a file, or other choices). For more details on progress report options, refer here.

    B. Bugs Fixed:
    - When --innodb-file-per-table=ON, if a table was renamed while backup-to-image was in progress, apply-log would fail when run on the backup. (Bug #16903973)
    - MySQL Server failed to start after a backup was restored if there had been online DDL transactions on partitioned tables at the time of the backup. (Bug #16924499)
    - apply-log failed if ALTER TABLE ... REORGANIZE PARTITION was applied to partitioned InnoDB tables during backup. (Bug #16721824, Bug #16903951)
    - apply-incremental-backup might fail with an assertion error if the InnoDB tables being backed up were created in Barracuda format and with KEY_BLOCK_SIZE values different from innodb_page_size. This fix ensures that different KEY_BLOCK_SIZE values are handled properly during incremental backup and apply-incremental-backup operations.
    - If a table was renamed following a full backup, a subsequent incremental backup could copy the .frm file with the new name, but not the associated .ibd file with the new name. After a restore, the InnoDB data dictionary could be in an inconsistent state. This issue primarily occurred if the table was not changed between the full backup and the subsequent incremental backup. (Bug #16262690)
    - After a full backup, if a table was renamed and modified, apply-incremental-backup would crash when run on the backup directory. (Bug #16262609)
    - The value of the binary log position in backup_variables.txt could be different from the output displayed during the backup-and-apply-log operation. (This issue did not occur if the backup and apply-log steps were done separately.) (Bug #16195529)
    - When using the --only-innodb-with-frm option, MySQL Enterprise Backup tried to create temporary files at unintended locations in the file system, which could cause a failure when, for example, the user had no write privilege for those locations. This fix makes sure the paths for the temporary files are correct. (Bug #14787324)
    - A backup process might hang when it ran into an LSN mismatch between a data file and the redo log. This fix makes sure the process does not hang, and it displays an error message showing the name of the problematic data file. (Bug #14791645)

    Please post your questions / comments about Backup in the forums. Thanks, MEB Team

    Read the article

  • How to sort time column by value instead of alphabetically

    - by Turch
    I'm creating a pivot table by connecting to an SSAS tabular model (Data - From Other Sources - From Analysis Services). The model has a "time" column that I want to sort by. The default (database) sort order is earliest to latest, but when I click the triangle next to 'Row Labels' and select "Sort A to Z", I get alphabetically sorted times. How can I get the times to sort chronologically? Changing the number format from "General" to "Time" does nothing, and the times aren't stored as text either; the data type of the column in the SSAS model is Auto (Date).

    Read the article

  • MySQL 5.6: the preview improves availability, performance, and administration for Web, Cloud, and embedded applications

    MySQL 5.6: the preview improves availability, performance, and administration for Web, Cloud, and embedded applications. Update of 12/04/2012, by Hinault Romaric. Oracle has just published a new development milestone release (DMR) of MySQL 5.6. High availability is strengthened in this release with new replication features based on self-healing mechanisms, such as global transaction identifiers (GTIDs) for tracking replication progress across a replication topology, and new MySQL replication utilities for...

    Read the article

  • Solr DataImportHandler configuration

    - by talo
    I want to get data from a MySQL database with the help of the DataImportHandler so I can build indexes. I've configured my Solr instance so that it works on Tomcat (the example admin page), but when I change the solrconfig.xml file I get an error. I'm working with Solr 3.6. My configuration is as follows. In solrconfig.xml I added:

        <dataDir>${solr.data.dir:/usr/share/tomcat7/solr2}</dataDir>

    to specify my working directory, and then:

        <requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
          <lst name="defaults">
            <str name="config">/usr/share/tomcat7/solr2/conf/data-config.xml</str>
          </lst>
        </requestHandler>

    to specify the new request handler. These two are my lib directives for the DIH. Do I need to change them?

        <lib dir="../../dist/" regex="apache-solr-dataimporthandler-\d.*\.jar" />
        <lib dir="../../contrib/dataimporthandler/lib/" regex=".*\.jar" />

    I also created a data-config.xml file and added the following:

        <?xml version="1.0" encoding="UTF-8"?>
        <dataConfig>
          <dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost:3306/ethicsweb_experts" user="root" password=""/>
          <document>
            <entity name="experts" datasource="mysql" pk="mainid" query="SELECT experts.mainid as mainid FROM experts WHERE validRec = 'y'">
              <field column="mainid" name="mainid"/>
              <field column="validRec" name="validRec"/>
            </entity>
          </document>
        </dataConfig>

    I copied the following jars to the tomcat/lib folder (the DIH jar files and the MySQL JDBC connector jar file):

        apache-solr-dataimporthandler-3.6.0.jar
        apache-solr-dataimporthandler-extras-3.6.0.jar
        mysql-connector-java-5.1.20-bin.jar

    Also, in the schema.xml file I added the following fields:

        <field name="mainid" type="int" indexed="true" stored="true" />
        <field name="validRec" type="string" indexed="true" stored="true" />
        <field name="recSource" type="string" indexed="true" stored="true" />
        <uniqueKey>mainid</uniqueKey>

    But now when I try to access http://localhost:8080/solr2/ I get the following output: HTTP Status 500 - Severe errors in solr configuration. Check your log files for more detailed information on what may be wrong.
    If you want solr to continue after configuration errors, change abortOnConfigurationError to false in solr.xml.

        -------------------------------------------------------------
        org.apache.solr.common.SolrException: No cores were created, please check the logs for errors
            at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:172)
            at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:96)
            at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:277)
            ... (remaining Tomcat startup frames omitted)
        -------------------------------------------------------------
        java.lang.NoClassDefFoundError: org/apache/solr/util/plugin/SolrCoreAware
            at java.lang.ClassLoader.defineClass1(Native Method)
            ... (class loader frames omitted)
            at org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:378)
            at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:419)
            at org.apache.solr.core.SolrCore.createRequestHandler(SolrCore.java:455)
            at org.apache.solr.core.RequestHandlers.initHandlersFromConfig(RequestHandlers.java:159)
            at org.apache.solr.core.SolrCore.<init>(SolrCore.java:563)
            at org.apache.solr.core.CoreContainer.create(CoreContainer.java:483)
            at org.apache.solr.core.CoreContainer.load(CoreContainer.java:335)
            at org.apache.solr.core.CoreContainer.load(CoreContainer.java:219)
            at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:161)
            ... (remaining Tomcat startup frames omitted)
        Caused by: java.lang.ClassNotFoundException: org.apache.solr.util.plugin.SolrCoreAware
            ... 47 more

    My log entries show the same NoClassDefFoundError: org/apache/solr/util/plugin/SolrCoreAware stack trace, followed by:

        Jun 15, 2012 4:07:50 PM org.apache.solr.servlet.SolrDispatchFilter init
        SEVERE: Could not start Solr. Check solr/home property and the logs
        org.apache.solr.common.SolrException: No cores were created, please check the logs for errors
            (same CoreContainer stack trace as above)

    So now I wonder where I screwed up the configuration for the DataImportHandler. Do I need to specify more jar files? Did I put the jar files in the correct directory? Any help would be greatly appreciated.

    Read the article

  • error when running mysql2psql

    - by Mateo Acebedo
    I am trying to migrate a MySQL database to a PostgreSQL database. After installing, I run the command, and instead of generating the mysql2psql.yml file, this is what gets printed in my Git Bash window:

        $ mysql2psql
        c:/Ruby200/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:45:in `require': cannot load such file -- 2.0/mysql_api (LoadError)
            from c:/Ruby200/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:45:in `require'
            from c:/Ruby200/lib/ruby/gems/2.0.0/gems/mysql-2.8.1-x86-mingw32/lib/mysql.rb:7:in `rescue in '
            from c:/Ruby200/lib/ruby/gems/2.0.0/gems/mysql-2.8.1-x86-mingw32/lib/mysql.rb:2:in `'
            from c:/Ruby200/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:45:in `require'
            from c:/Ruby200/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:45:in `require'
            from c:/Ruby200/lib/ruby/gems/2.0.0/gems/mysql2psql-0.1.0/lib/mysql2psql/mysql_reader.rb:1:in `'
            from c:/Ruby200/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:45:in `require'
            from c:/Ruby200/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:45:in `require'
            from c:/Ruby200/lib/ruby/gems/2.0.0/gems/mysql2psql-0.1.0/lib/mysql2psql.rb:5:in `'
            from c:/Ruby200/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:45:in `require'
            from c:/Ruby200/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:45:in `require'
            from c:/Ruby200/lib/ruby/gems/2.0.0/gems/mysql2psql-0.1.0/bin/mysql2psql:5:in `'
            from c:/Ruby200/bin/mysql2psql:23:in `load'
            from c:/Ruby200/bin/mysql2psql:23:in `'

    Any thoughts?

    Read the article

  • Stored procedure strange error when called through PHP

    - by ravi
    I have been coding a registration page (login system) in PHP and MySQL for a website. I'm using two stored procedures for this. The first checks whether the email address already exists in the database; the second inserts the user-supplied data into the MySQL database. The user has EXECUTE permission on both procedures. When I execute them individually from the PHP script they work fine, but when I use them together in the script, the second stored procedure (the insert) does not run.

    Stored procedure 1:

        DELIMITER $$
        CREATE PROCEDURE reg_check_email(email VARCHAR(80))
        BEGIN
            SET @email = email;
            SET @sql = 'SELECT email FROM user_account WHERE user_account.email=?';
            PREPARE stmt FROM @sql;
            EXECUTE stmt USING @email;
        END$$
        DELIMITER ;

    Stored procedure 2:

        DELIMITER $$
        CREATE PROCEDURE reg_insert_into_db(fname VARCHAR(40), lname VARCHAR(40), email VARCHAR(80), pass VARBINARY(32), licenseno VARCHAR(80), mobileno VARCHAR(10))
        BEGIN
            SET @fname = fname, @lname = lname, @email = email, @pass = pass, @licenseno = licenseno, @mobileno = mobileno;
            SET @sql = 'INSERT INTO user_account(email,pass,last_name,license_no,phone_no) VALUES(?,?,?,?,?)';
            PREPARE stmt FROM @sql;
            EXECUTE stmt USING @email, @pass, @lname, @licenseno, @mobileno;
        END$$
        DELIMITER ;

    When I test these from a sample PHP script, the insert is not working, but the first stored procedure (reg_check_email) is. If I comment out the first one (reg_check_email), the second stored procedure (reg_insert_into_db) works fine.

        <?php
        require("/wamp/mysql.inc.php");
        $r = mysqli_query($dbc, "CALL reg_check_email('[email protected]')");
        $rows = mysqli_num_rows($r);
        if ($rows == 0) {
            $r = mysqli_query($dbc, "CALL reg_insert_into_db('a','b','[email protected]','c','d','e')");
        }
        ?>

    I'm unable to figure out the mistake. Thanks in advance, ravi.
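    Worth checking with mysqli_error($dbc): a CALL executed through mysqli returns an extra result set beyond the rows you fetch, and the connection refuses further queries ("Commands out of sync") until that is drained with mysqli_next_result(), which would explain why each procedure works alone but not in sequence. Separately, the dynamic SQL adds nothing here; a minimal sketch of the same procedures without PREPARE, reusing the table and column names from the question:

        -- Sketch: the prepared-statement indirection is unnecessary for these
        -- fixed queries; plain statements keep the procedures simpler.
        DELIMITER $$
        CREATE PROCEDURE reg_check_email(IN p_email VARCHAR(80))
        BEGIN
            SELECT email FROM user_account WHERE email = p_email;
        END$$

        CREATE PROCEDURE reg_insert_into_db(IN p_fname VARCHAR(40), IN p_lname VARCHAR(40),
                                            IN p_email VARCHAR(80), IN p_pass VARBINARY(32),
                                            IN p_licenseno VARCHAR(80), IN p_mobileno VARCHAR(10))
        BEGIN
            INSERT INTO user_account (email, pass, last_name, license_no, phone_no)
            VALUES (p_email, p_pass, p_lname, p_licenseno, p_mobileno);
        END$$
        DELIMITER ;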

    Read the article

  • PHP PDO SQL Server Select statement not replacing question marks

    - by Metropolis
    A while ago I wrote a database class which uses PDO to connect to SQL Server databases and also to MySQL databases. It has always replaced the question marks fine when used on the MySQL databases, but for the SQL Server database I had to create a workaround which replaces the question marks manually. Here is the code for that:

        if ($this->getPDODriver() == 'odbc' && !empty($values_a) && substr_count($query_s, "?") > 0) {
            $query_s = preg_replace(array_fill(0, substr_count($query_s, "?"), '/\?/'), $values_a, $query_s, 1);
            $values_a = NULL;
        }

    Now, I understand that this completely defeats the purpose of the question marks and PDO, but it has been working fine for me. What I would like to do now, though, is find out why the question marks are not getting replaced in the first place, and remove this workaround. If I have a select statement like the following:

        SELECT * FROM database WHERE value = ?

    that is what the query looks like when I go to prepare it, but when I display the query results, it is a blank array. Just remember, this class works fine with MySQL, and it works fine with the workaround above, so I know it has something to do with the question marks.

    Read the article

  • What is optimal hardware configuration for heavy load LAMP application

    - by Piotr Kochanski
    I need to run a Linux-Apache-PHP-MySQL application (the Moodle e-learning platform) for a large number of concurrent users; I am aiming at 5000. By concurrent I mean that 5000 people should be able to work with the application at the same time, where "work" means not only database reads but writes as well. The application is not very typical, since it does a lot of inserts/updates on the database, so caching techniques do not help much. We are using the InnoDB storage engine. In addition, the application is not written with performance in mind; for instance, one Apache thread usually occupies about 30-50 MB of RAM. I would be grateful for information on what hardware is needed to build a scalable configuration that can handle this kind of load. We are currently using two HP DL380 servers with two 4-core processors each, which can handle a much lower load (typically 300-500 concurrent users). Is it reasonable to invest in this kind of box and build a cluster out of them, or is it better to go with more high-end hardware? I am particularly curious about:
    - how many servers are needed and how powerful they must be (number of processors/cores, size of RAM);
    - what network equipment should be used (what kind of switches and network cards);
    - any other hardware, like particular disk storage solutions, that is needed.
    Another question is how to put everything together, that is, what the most optimal architecture is. Clustering with MySQL is rather hard (people are complaining about MySQL Cluster, even here on Stack Overflow).

    Read the article

  • PDO Database Connections Problem

    - by Metropolis
    Hey everyone, Over a year ago I created my own database classes which use PDO and handle all preparing, executing, and closing of connections. These classes have been working great up until now. There are two different database servers I am grabbing from: MySQL and MS SQL Express. I am retrieving an employee id from the MySQL server and using it to get that employee's information from the MS SQL server. There are about 11k records coming from the MySQL server, and my program only makes it through about 1200 before crashing with an error like the following:

        Connection failed (odbc:Driver=FreeTDS;Servername=MSSQLExpress;Database=SMDINC) Class (PDOException) SQLSTATE[08001] SQLDriverConnect: 0 [unixODBC][FreeTDS][SQL Server]Unable to connect to data source

    It seems like the program is not able to connect to the data source, but it runs the exact same query about 30 times before this with no problem. Also, I have thoroughly checked all of the data coming into the query and it all looks fine. I believe the issue may be that too many connections are being created; I have tried to close all connections in many different places, but nothing seems to fix the problem. Any debugging help or suggestions would be appreciated! Craig Metrolis

    Read the article

  • How to use function to connect to database and how to work with queries?

    - by Abhilash Shukla
    I am using functions to work with the database. The way I have defined the functions is as follows:

        /**
         * Database definitions
         */
        define('db_type', 'MYSQL');
        define('db_host', 'localhost');
        define('db_port', '3306');
        define('db_name', 'database');
        define('db_user', 'root');
        define('db_pass', 'password');
        define('db_table_prefix', '');

        /**
         * Database connect
         */
        function db_connect($host = db_host, $port = db_port, $username = db_user, $password = db_pass, $database = db_name)
        {
            if (!$db = @mysql_connect($host.':'.$port, $username, $password)) {
                return FALSE;
            }
            if ((strlen($database) > 0) AND (!@mysql_select_db($database, $db))) {
                return FALSE;
            }
            // set the correct charset encoding
            mysql_query('SET NAMES \'utf8\'');
            mysql_query('SET CHARACTER_SET \'utf8\'');
            return $db;
        }

        /**
         * Database close
         */
        function db_close($identifier)
        {
            return mysql_close($identifier);
        }

        /**
         * Database query
         */
        function db_query($query, $identifier)
        {
            return mysql_query($query, $identifier);
        }

    Now I want to know whether this is a good way to do it. Also, while connecting to the database I am using $host = db_host as a default argument; is that OK? Secondly, how can I use these functions? All of this code is in my FUNCTIONS.php, the database definitions and also the database connect; will that do what I need? Using these functions, how will I be able to connect to the database, and using the query function, how will I be able to execute a query? VERY IMPORTANT: How can I change mysql to mysqli? Can it be done by just adding an 'i' to mysql, like changing @mysql_connect to @mysqli_connect?

    Read the article

  • Doctrine Default Primary Key Problem (Again)

    - by 01010011
    Hi, Should I change all of my uniquely-named MySQL database primary keys to 'id' to avoid getting errors related to Doctrine's default primary key set in the plugin 'doctrine_pi.php'? To elaborate, I am getting the following recurring error, this time after trying to log in on my login page:

        SQLSTATE[42S22]: Column not found: 1054 Unknown column 'u.book_id' in 'field list' in...

    I suspect the problem resides in a MySQL table used for my login, which has a primary key called id. Marc B originally solved an identical problem for me in this post: http://stackoverflow.com/questions/2702229/doctrine-codeigniter-mysql-crud-errors when I had the same problem with a different table within the same database. Following his suggestion, I changed the default primary key located at system/application/plugins/doctrine_pi.php from 'id' to 'book_id':

        <?php
        // system/application/plugins/doctrine_pi.php
        ...
        // set the default primary key to be named 'id', integer, 4 bytes
        Doctrine_Manager::getInstance()->setAttribute(
            Doctrine::ATTR_DEFAULT_IDENTIFIER_OPTIONS,
            array('name' => 'book_id', 'type' => 'integer', 'length' => 4));

    and that solved my previous problem. However, my login page then stopped working. So what is the safe thing to do? Change all of my primary keys to 'id' (and will that solve the problem without causing some other problem I am not aware of)? Or should I add some lines of code in doctrine_pi.php?

    Read the article

  • Long running transactions with Spring and Hibernate?

    - by jimbokun
    The underlying problem I want to solve is running a task that generates several temporary tables in MySQL, which need to stay around long enough to fetch results from Java after they are created. Because of the size of the data involved, the task must be completed in batches. Each batch is a call to a stored procedure called through JDBC. The entire process can take half an hour or more for a large data set. To ensure access to the temporary tables, I run the entire task, start to finish, in a single Spring transaction with a TransactionCallbackWithoutResult. Otherwise, I could get a different connection that does not have access to the temporary tables (this would happen occasionally before I wrapped everything in a transaction). This worked fine in my development environment. However, in production I got the following exception:

        java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction

    This happened when a different task tried to access some of the same tables during the execution of my long-running transaction. What confuses me is that the long-running transaction only inserts or updates temporary tables; all access to non-temporary tables is selects only. From what documentation I can find, the default Spring transaction isolation level should not cause MySQL to block in this case. So, my first question: is this the right approach? Can I ensure that I repeatedly get the same connection through a Hibernate template without a long-running transaction? If the long-running transaction approach is the correct one, what should I check in terms of isolation levels? Is my understanding correct that the default isolation level in Spring/MySQL transactions should not lock tables that are only accessed through selects? What can I do to debug which tables are causing the conflict, and prevent those tables from being locked by the transaction?
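    On the debugging question: if the server is MySQL 5.1 with the InnoDB plugin or newer, the INFORMATION_SCHEMA lock-wait tables show exactly which transaction is blocking which while the timeout is happening (on older servers, SHOW ENGINE INNODB STATUS is the fallback). A sketch of the usual diagnostic query:

        -- Sketch: list blocked transactions and the transactions blocking
        -- them. Requires the InnoDB plugin's INFORMATION_SCHEMA tables.
        SELECT r.trx_id                AS waiting_trx,
               r.trx_mysql_thread_id   AS waiting_thread,
               r.trx_query             AS waiting_query,
               b.trx_id                AS blocking_trx,
               b.trx_mysql_thread_id   AS blocking_thread,
               b.trx_query             AS blocking_query
        FROM information_schema.innodb_lock_waits w
        JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id
        JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id;

    Running this from a separate session while the long job and the other task are both active shows the statements on each side of the lock.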

    Read the article

  • How can I get SQL Server transactions to use record-level locks?

    - by Joe White
    We have an application that was originally written as a desktop app, lo these many years ago. It starts a transaction whenever you open an edit screen, and commits if you click OK, or rolls back if you click Cancel. This worked okay for a desktop app, but now we're trying to move to ADO.NET and SQL Server, and the long-running transactions are problematic. I found that we'll have a problem when multiple users are all trying to edit (different subsets of) the same table at the same time. In our old database, each user's transaction would acquire record-level locks on every record they modified during their transaction; since different users were editing different records, everyone gets their own locks and everything works. But in SQL Server, as soon as one user edits a record inside a transaction, SQL Server appears to get a lock on the entire table. When a second user tries to edit a different record in the same table, the second user's app simply locks up, because the SqlConnection blocks until the first user either commits or rolls back. I'm aware that long-running transactions are bad, and I know that the best solution would be to change these screens so that they no longer keep transactions open for a long time. But since that would mean some invasive and risky changes, I also want to research whether there's a way to get this code up and running as-is, just so I know what my options are. How can I get two different users' transactions in SQL Server to lock individual records instead of the entire table? Here's a quick-and-dirty console app that illustrates the issue. I've created a database called "test1", with one table called "Values" that just has ID (int) and Value (nvarchar) columns. If you run the app, it asks for an ID to modify, starts a transaction, modifies that record, and then leaves the transaction open until you press ENTER. I want to be able to start the program and tell it to update ID 1; let it get its transaction and modify the record; start a second copy of the program and tell it to update ID 2; and have it able to update (and commit) while the first app's transaction is still open. Currently it freezes at step 4 until I go back to the first copy of the app and close it or press ENTER so it commits. The call to command.ExecuteNonQuery blocks until the first connection is closed.

        public static void Main()
        {
            Console.Write("ID to update: ");
            var id = int.Parse(Console.ReadLine());
            Console.WriteLine("Starting transaction");
            using (var scope = new TransactionScope())
            using (var connection = new SqlConnection(@"Data Source=localhost\sqlexpress;Initial Catalog=test1;Integrated Security=True"))
            {
                connection.Open();
                var command = connection.CreateCommand();
                command.CommandText = "UPDATE [Values] SET Value = 'Value' WHERE ID = " + id;
                Console.WriteLine("Updating record");
                command.ExecuteNonQuery();
                Console.Write("Press ENTER to end transaction: ");
                Console.ReadLine();
                scope.Complete();
            }
        }

    Read the article

  • Best practices, PHP, tracking millions of impressions per day.

    - by John
    What do I have to do to make 20k MySQL inserts per second possible (during peak hours; around 1k/sec during slower times)? I've been doing some research and I've seen the "INSERT DELAYED" suggestion, writing to a flat file with fopen(file, 'a') and then running a cron job to dump the "needed" data into MySQL, etc. I've also heard you need multiple servers and "load balancers", which I've never used, to make something like this work. I've also been looking at these "cloud server" thing-a-ma-jigs and their automatic scalability, but I'm not sure what's actually scalable. The application is just a tracker script, so if I have 100 websites that get 3 million page loads a day, there will be around 300 million inserts a day. The data will be run through a script every 15-30 minutes which will normalize it and insert it into another MySQL table. How do the big dogs do it? How do the little dogs do it? I can't afford a huge server anymore, so if there are multiple intuitive ways of going at it that you smart people can think of, please let me know :)
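    On the raw insert rate itself, the biggest single win at the SQL level is batching: one multi-row INSERT, or a LOAD DATA INFILE over the flat file mentioned above, amortizes parse and commit overhead across many rows. A sketch, with a hypothetical hits table and made-up values:

        -- Sketch: batch many impressions into one statement; the table name
        -- and columns (hits, site_id, url, hit_time) are illustrative.
        INSERT INTO hits (site_id, url, hit_time) VALUES
            (101, '/index.html',   '2010-05-01 12:00:01'),
            (101, '/about.html',   '2010-05-01 12:00:01'),
            (205, '/products.php', '2010-05-01 12:00:02');

        -- Or bulk-load the flat file the cron job wrote:
        LOAD DATA INFILE '/var/log/tracker/hits.csv'
        INTO TABLE hits
        FIELDS TERMINATED BY ','
        (site_id, url, hit_time);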

    Read the article

  • Is DB logging more secure than file logging for my PHP web app?

    - by iama
    I would like to log error, informational, and warning messages from within my web application. I was initially thinking of logging all of these to a text file. However, my PHP web app would need write access to the log file, and the folder housing it may also need write access if log rotation is desired, which my web app currently does not have. The alternative is to log the messages to the MySQL database, since my web app is already using MySQL for all its data storage needs. This got me thinking that the MySQL option is much better than the file option, since I already have a configuration file with the database access information protected using file system permissions. If I go with the log file option, I need to tinker with the file and folder access permissions, and that will only make my application less secure and defeat the whole purpose of logging. Is this correct? I am using XAMPP for development and am a newbie to LAMP. Please let me know your recommendations for logging. Thanks.
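    If the database route wins out, the schema involved is small; a minimal sketch of a log table (all names and sizes are illustrative):

        -- Sketch: a minimal application log table.
        CREATE TABLE app_log (
            id        INT UNSIGNED NOT NULL AUTO_INCREMENT,
            logged_at DATETIME     NOT NULL,
            level     ENUM('error', 'warning', 'info') NOT NULL,
            message   TEXT         NOT NULL,
            PRIMARY KEY (id),
            KEY idx_logged_at (logged_at)
        );

    Old rows can then be pruned with a periodic DELETE on logged_at, which takes the place of file rotation.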

    Read the article

  • Get the last checked checkboxes...

    - by Sara
    Hi everyone, I'm not sure how to accomplish this issue, which has been confusing me for a few days. I have a form that updates a user record in MySQL when a checkbox is checked. This is how my form does it:

        if (isset($_POST['Update'])) {
            $paymentr = $_POST['paymentr']; // put checkboxes array into variable
            $paymentr2 = implode(', ', $paymentr); // implode array for mysql
            $query = "UPDATE transactions SET paymentreceived=NULL";
            $result = mysql_query($query);
            $query = "UPDATE transactions SET paymentdate='0000-00-00'";
            $result = mysql_query($query);
            $query = "UPDATE transactions SET paymentreceived='Yes' WHERE id IN ($paymentr2)";
            $result = mysql_query($query);
            $query = "UPDATE transactions SET paymentdate=NOW() WHERE id IN ($paymentr2)";
            $result = mysql_query($query);
            // should collect the last updated records and put them into a variable for emailing
            foreach ($paymentr as $v) {
                $query = "SELECT id, refid, affid FROM transactions WHERE id = '$v'";
                $result = mysql_query($query) or die("Query Failed: ".mysql_errno()." - ".mysql_error()."<BR>\n$query<BR>\n");
                $trans = mysql_fetch_array($result, MYSQL_ASSOC);
                $transactions .= '<br>User ID:'.$trans['id'].' -- '.$trans['refid'].' -- '.$trans['affid'].'<br>';
            }
        }

    Unfortunately, it then updates ALL the user records with the latest date, which is not what I want it to do. The alternative I thought of was, via JavaScript, giving the checkbox a value that would be dynamically updated when the user selected it; then only those checkboxes would be put into the array. Is this possible? Is there a better solution? I'm not even sure I could wrap my brain around how to do that with JavaScript. Does the answer perhaps lie in how my MySQL code is written? Thanks - I sincerely appreciate it!
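    For what it's worth, the four UPDATEs above can be collapsed into a single pass that sets the checked rows and clears the rest at once, which also makes the reset behaviour explicit. A sketch over the same transactions table, where the literal id list stands in for the integer-validated contents of $paymentr2 (interpolating raw POST data into SQL is also an injection risk, so the ids should be validated first):

        -- Sketch: one pass over the table; ids 1, 2, 3 stand in for the
        -- checked checkbox values after validation as integers.
        UPDATE transactions
        SET paymentreceived = IF(id IN (1, 2, 3), 'Yes', NULL),
            paymentdate     = IF(id IN (1, 2, 3), NOW(), '0000-00-00');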

    Read the article
