Search Results

Search found 18606 results on 745 pages for 'mysql comments'.

Page 81/745

  • Setting up MySQL Cluster 7.0 in Linux

    Linux Admin Zone: "You might know that beginning with MySQL 5.1.24, support for the NDBCLUSTER storage engine was removed from the standard MySQL server binaries built by MySQL. Therefore, here I'm using MySQL Cluster edition instead of MySQL Community edition."

    Read the article

  • Best way to import a 500MB CSV file into a MySQL database?

    - by mars
    I have a 500MB CSV file that needs to be imported into my MySQL database. I've made a PHP page where I can upload the CSV file; it analyses the fields and does the actual importing, but it can only handle small files of at most 5MB. That means splitting the data into about 100 files, and uploading them is pretty slow. Is there another way? I have to repeat this process every month, because the data in the file changes every month; it's about 12,000,000 lines :D
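
    One option worth sketching is to skip the HTTP upload and the PHP parsing entirely and let MySQL bulk-load the file with LOAD DATA INFILE, which handles a file of this size in a single statement. A minimal sketch, assuming a hypothetical table monthly_data whose columns match the CSV layout and a comma-separated file with a header row (adjust the delimiters, column list and path to the real data):

        -- The data is replaced each month, so clear the table first.
        TRUNCATE TABLE monthly_data;

        -- Bulk-load the CSV in one pass; LOCAL reads the file from the client machine
        -- rather than the server's filesystem.
        LOAD DATA LOCAL INFILE '/path/to/monthly.csv'
        INTO TABLE monthly_data
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY '\n'
        IGNORE 1 LINES
        (col_a, col_b, col_c);

    The same statements can usually be issued from PHP with mysql_query() (LOCAL requires the client library to allow local infile), so the monthly refresh can stay scripted without pushing the file through an upload form.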

    Read the article

  • Jetty 7 + MySQL Config [java.lang.ClassNotFoundException: org.mortbay.jetty.webapp.WebAppContext]

    - by Scott Chang
    I've been trying to get a c3p0 db connection pool configured for Jetty, but I keep getting a ClassNotFoundException: 2010-03-14 19:32:12.028:WARN::Failed startup of context WebAppContext@fccada@fccada/phpMyAdmin,file:/usr/local/jetty/webapps/phpMyAdmin/,file:/usr/local/jetty/webapps/phpMyAdmin/ java.lang.ClassNotFoundException: org.mortbay.jetty.webapp.WebAppContext at java.net.URLClassLoader$1.run(URLClassLoader.java:200) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:188) at java.lang.ClassLoader.loadClass(ClassLoader.java:307) at java.lang.ClassLoader.loadClass(ClassLoader.java:252) at org.eclipse.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:313) at org.eclipse.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:266) at org.eclipse.jetty.util.Loader.loadClass(Loader.java:90) at org.eclipse.jetty.xml.XmlConfiguration.nodeClass(XmlConfiguration.java:224) at org.eclipse.jetty.xml.XmlConfiguration.configure(XmlConfiguration.java:187) at org.eclipse.jetty.webapp.JettyWebXmlConfiguration.configure(JettyWebXmlConfiguration.java:77) at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:975) at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:586) at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:349) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.server.handler.HandlerCollection.doStart(HandlerCollection.java:165) at org.eclipse.jetty.server.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:162) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.server.handler.HandlerCollection.doStart(HandlerCollection.java:165) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.server.handler.HandlerWrapper.doStart(HandlerWrapper.java:92) at org.eclipse.jetty.server.Server.doStart(Server.java:228) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.xml.XmlConfiguration$1.run(XmlConfiguration.java:990) at java.security.AccessController.doPrivileged(Native Method) at org.eclipse.jetty.xml.XmlConfiguration.main(XmlConfiguration.java:955) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.eclipse.jetty.start.Main.invokeMain(Main.java:394) at org.eclipse.jetty.start.Main.start(Main.java:546) at org.eclipse.jetty.start.Main.parseCommandLine(Main.java:208) at org.eclipse.jetty.start.Main.main(Main.java:75) I'm new to Jetty and I want to ultimately get phpMyAdmin and WordPress to run on it through Quercus and a JDBC connection. Here are my web.xml and jetty-web.xml files in my WEB-INF directory. 
jetty-web.xml: <?xml version="1.0"?> <!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN" "http://jetty.mortbay.org/configure.dtd"> <Configure class="org.mortbay.jetty.webapp.WebAppContext"> <New id="mysql" class="org.mortbay.jetty.plus.naming.Resource"> <Arg>jdbc/mysql</Arg> <Arg> <New class="com.mchange.v2.c3p0.ComboPooledDataSource"> <Set name="Url">jdbc:mysql://localhost:3306/mysql</Set> <Set name="User">user</Set> <Set name="Password">pw</Set> </New> </Arg> </New> </Configure> web.xml: <?xml version="1.0"?> <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.2//EN" "http://java.sun.com/j2ee/dtds/web-app_2_2.dtd"> <web-app> <description>Caucho Technology's PHP Implementation</description> <resource-ref> <description>My DataSource Reference</description> <res-ref-name>jdbc/mysql</res-ref-name> <res-type>javax.sql.DataSource</res-type> <res-auth>Container</res-auth> </resource-ref> <servlet> <servlet-name>Quercus Servlet</servlet-name> <servlet-class>com.caucho.quercus.servlet.QuercusServlet</servlet-class> <!-- Specifies the encoding Quercus should use to read in PHP scripts. --> <init-param> <param-name>script-encoding</param-name> <param-value>UTF-8</param-value> </init-param> <!-- Tells Quercus to use the following JDBC database and to ignore the arguments of mysql_connect(). --> <init-param> <param-name>database</param-name> <param-value>jdbc/mysql</param-value> </init-param> <init-param> <param-name>ini-file</param-name> <param-value>WEB-INF/php.ini</param-value> </init-param> </servlet> <servlet-mapping> <servlet-name>Quercus Servlet</servlet-name> <url-pattern>*.php</url-pattern> </servlet-mapping> <welcome-file-list> <welcome-file>index.php</welcome-file> </welcome-file-list> </web-app> I'm guessing that I'm missing a few jars or something. Currently I have placed the following jars in my WEB-INF/lib directory: c3p0-0.9.1.2.jar commons-dbcp-1.4.jar commons-pool-1.5.4.jar mysql-connector-java-5.1.12-bin.jar I have also tried to put these jars in JETTY-HOME/lib/ext, but to no avail... Someone please tell me what is wrong with my configuration. I'm sick of digging through Jetty's crappy documentation.

    Read the article

  • Do you have any tips for comments to keep them in step with the code? [closed]

    - by Rob Wells
    Possible Duplicate: How do you like your comments? G'day, I've read both of Steve McConnell's excellent Code Complete books "Code Complete" and "Code Complete 2" and was wondering if people have any other suggestions for commenting code. My commenting mantra could be summed up by the basic idea of expressing "what the code below cannot say". While enjoying this interesting blog post by Jeff about commenting I was still left wondering "When coding, when do you feel a comment is required?" Edit: Oops. Seems to be a duplicate of this question http://stackoverflow.com/questions/121945/how-do-you-like-your-comments so sorry for the noise. Thanks to my, seemingly, SO shadow for pointing it out - wouldn't have thought I was that interesting. Now off to read the original post and see if it is relevant. Edit: I meant to emphasise the best approach to ensure that your comments will stay in step with the code. Maybe expressing an intent rather than the mechanism, for instance.

    Read the article

  • How to ignore comments when reading an XML file into an XmlDocument?

    - by tunnuz
    Possible duplicate: How to remove all comment tags from XmlDocument Hello, I am trying to read an XML document with C#, and I am doing it this way: XmlDocument myData = new XmlDocument(); myData.Load("datafile.xml"); anyway, I sometimes get comments when reading XmlNode.ChildNodes. For the benefit of anyone experiencing the same requirement, here's how I did it in the end: /** Validate a file, return an XmlDocument, exclude comments */ private XmlDocument LoadAndValidate( String fileName ) { // Create XML reader settings XmlReaderSettings settings = new XmlReaderSettings(); settings.IgnoreComments = true; // Exclude comments settings.ProhibitDtd = false; settings.ValidationType = ValidationType.DTD; // Validation // Create reader based on settings XmlReader reader = XmlReader.Create(fileName, settings); try { // Will throw exception if document is invalid XmlDocument document = new XmlDocument(); document.Load(reader); return document; } catch (XmlSchemaException) { return null; } } Thank you Tommaso

    Read the article

  • How to extend the comments framework (django) by removing unnecessary fields?

    - by Ignacio
    Hi, I've been reading the django docs about the comments framework and how to customize it (http://docs.djangoproject.com/en/1.1/ref/contrib/comments/custom/). That page shows how to add new fields to a form, but what I want to do is remove unnecessary fields, like URL and email (amongst other minor mods). On that same doc page it says the way to go is to extend my custom comments class from BaseCommentAbstractModel, but that's pretty much it; I've come this far and now I'm at a loss. I couldn't find anything on this specific aspect.

    Read the article

  • What are the virtues of using XML comments in .NET?

    - by Michal Czardybon
    I can't understand the virtues of using XML comments. I know they can be converted into nice documentation external to the code, but the same can be achieved with the much more concise Doxygen syntax. In my opinion XML comments are wrong, because: they obfuscate the comments and the code in general (they are more difficult for humans to read); less code can be viewed on a single screen, because <summary> and </summary> take additional lines; and they suggest that all method parameters have to be commented, whereas 90% of them are obvious and SHOULD be left uncommented. The only problem I have with this is that my point of view seems to be in the minority. Why?

    Read the article

  • Nested mysql select statements

    - by Jimmy Kamau
    I have a query as below: $sult = mysql_query("select * from stories where `categ` = 'businessnews' and `stryid`='".mysql_query("SELECT * FROM comments WHERE `comto`='".mysql_query("select * from stories where `categ` ='businessnews'")." ORDER BY COUNT(comto) DESC")."' LIMIT 3") or die(mysql_error()); while($ow=mysql_fetch_array($sult)){ The code above should return the top 3 'stories' with the most comments {count(comto)}. The comments are stored in a different table from the stories. The code above does not return any values and doesn't show any errors. Could someone please help?
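
    As an aside, a mysql_query() call interpolated into a string won't behave as hoped: for a SELECT it returns a result resource, so the outer SQL ends up comparing against text like 'Resource id #3' rather than a list of ids. One way to get the top three stories is to let MySQL do the counting with a single join; a sketch based on the column names in the question, assuming comments.comto holds the story's stryid:

        SELECT s.*, COUNT(c.comto) AS comment_count
        FROM stories AS s
        LEFT JOIN comments AS c ON c.comto = s.stryid   -- assumption: comto references stories.stryid
        WHERE s.categ = 'businessnews'
        GROUP BY s.stryid
        ORDER BY comment_count DESC
        LIMIT 3;

    That single statement can then be passed to mysql_query() and iterated with mysql_fetch_array() exactly as in the original loop.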

    Read the article

  • Help with an SQL query on a single (comments) table (screenshot included)

    - by citrus
    Please see the screenshot. Goal: I'd like to have comments nested one level deep. The comments would be arranged so that the rating of the parent is in descending order; the rating of the child comments is irrelevant. The left-hand side of the screenshot shows the output that I'd like; the RHS shows the table data. All of the comments are held in one table. I'm a beginner with SQL queries, and the best I can do is: SELECT * FROM [Comments] WHERE ([ArticleId] = @ArticleId) ORDER BY [ThreadId] DESC, [DateMade] This somewhat does the job, but it obviously neglects the rating, so the above statement would show output where Bob's comment and all of its child comments come before Amy's and her child comments. How can I run this query correctly?
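
    Without the screenshot it is hard to be definitive, but the usual trick is to sort every comment by its parent's rating, so that children inherit their parent's position. A sketch, assuming ThreadId holds the Id of the thread's top-level comment (including on that comment itself); the Id and Rating column names are assumptions:

        SELECT c.*
        FROM Comments AS c
        JOIN Comments AS parent
          ON parent.Id = c.ThreadId                              -- assumed: ThreadId points at the top-level comment
        WHERE c.ArticleId = @ArticleId
        ORDER BY parent.Rating DESC,                             -- threads ordered by the parent's rating
                 c.ThreadId,                                     -- keep each thread's rows together
                 CASE WHEN c.Id = c.ThreadId THEN 0 ELSE 1 END,  -- parent row before its children
                 c.DateMade;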

    Read the article

  • django, mod_wsgi, MySQL High CPU - Problems

    - by Red Rover
    I am having a problem with an OSQA site. It is Django/Apache/mod_wsgi configured site. Every hour, the CPU spikes to 164% (Average) for task HTTPD. After 10 minutes, it frees back up. I have reviewed the logs, cron tables, made many config changes, but cannot track this problem down. Can someone please look at the information below and let me know if it is a configuration problem, or if anyone else has experienced this issue. Running TOP shows HTTPD using 165% of CPU VMware performance monitor also displays spikes. This happens every hour for 10 minutes. I have the following information from server status Server Version: Apache/2.2.15 (Unix) DAV/2 mod_wsgi/3.2 Python/2.6.6 Server Built: Feb 7 2012 09:50:15 Current Time: Sunday, 10-Jun-2012 21:44:29 EDT Restart Time: Sunday, 10-Jun-2012 19:44:51 EDT Parent Server Generation: 0 Server uptime: 1 hour 59 minutes 37 seconds Total accesses: 1088 - Total Traffic: 11.5 MB CPU Usage: u80.26 s243.8 cu0 cs0 - 4.52% CPU load .152 requests/sec - 1682 B/second - 10.8 kB/request 4 requests currently being processed, 11 idle workers ....._..........__......W....................................... ...................................C._..._....._L__._L_._....... ...................... Scoreboard Key: "_" Waiting for Connection, "S" Starting up, "R" Reading Request, "W" Sending Reply, "K" Keepalive (read), "D" DNS Lookup, "C" Closing connection, "L" Logging, "G" Gracefully finishing, "I" Idle cleanup of worker, "." Open slot with no current process Srv PID Acc M CPU SS Req Conn Child Slot Client VHost Request 0-0 - 0/0/34 . 0.42 327 17 0.0 0.00 0.67 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 1-0 - 0/0/22 . 0.31 339 32 0.0 0.00 0.26 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 2-0 - 0/0/22 . 0.65 358 10 0.0 0.00 0.31 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 3-0 - 0/0/31 . 1.03 378 31 0.0 0.00 0.60 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 4-0 - 0/0/20 . 0.45 356 9 0.0 0.00 0.31 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 5-0 18852 0/16/34 _ 0.98 27 18120 0.0 0.37 0.62 69.180.250.36 osqa.informs.org GET /questions/289/what-is-the-difference-between-operations-re 6-0 - 0/0/32 . 0.94 309 29 0.0 0.00 0.64 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 7-0 - 0/0/31 . 1.15 382 32 0.0 0.00 0.75 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 8-0 - 0/0/21 . 0.28 403 19 0.0 0.00 0.20 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 9-0 - 0/0/32 . 1.37 288 16 0.0 0.00 0.60 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 10-0 - 0/0/33 . 
1.72 383 16 0.0 0.00 0.40 127.0.0.1 osqa.informs.org OPTIONS * HTTP/1.0 I am running Django 1.3 This is a mod_wsgi configuration and copied is the wsgi.conf file: <IfModule !python_module> <IfModule !wsgi_module> LoadModule wsgi_module modules/mod_wsgi.so <IfModule wsgi_module> <Directory /var/www/osqa> Order allow,deny Allow from all #Deny from all </Directory> WSGISocketPrefix /var/run/wsgi WSGIPythonEggs /var/tmp WSGIDaemonProcess OSQA maximum-requests=10000 WSGIProcessGroup OSQA Alias /admin_media/ /usr/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/contrib/admin/media/ Alias /m/ /var/www/osqa/forum/skins/ Alias /upfiles/ /var/www/osqa/forum/upfiles/ <Directory /var/www/osqa/forum/skins> Order allow,deny Allow from all </Directory> WSGIScriptAlias / /var/www/osqa/osqa.wsgi </IfModule> </IfModule> </IfModule> This is the httpd.conf file Timeout 120 KeepAlive Off MaxKeepAliveRequests 100 MaxKeepAliveRequests 400 KeepAliveTimeout 3 <IfModule prefork.c> Startservers 15 MinSpareServers 10 MaxSpareServers 20 ServerLimit 50 MaxClients 50 MaxRequestsPerChild 0 </IfModule> <IfModule worker.c> StartServers 4 MaxClients 150 MinSpareThreads 25 MaxSpareThreads 75 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> We are using MySQL The server is an ESX4i, configured for the VM to use 4 CPUs and 8 GB Ram. Hyper threading is enabled, 2 physical CPU's, with 4 Logical. the CPU are Intel Xeon 2.8 GHz. Total memory is 12GB

    Read the article

  • Nhibernate 2.1 and mysql 5 - InvalidCastException on Setup

    - by Nash
    Hello there, I am trying to use NHibernate with Spring.Net und mySQL 5. However, when setting up the connection and creating the SessionFactoryObject, I get this InvalidCastException: NHibernate seems to cast MySql.Data.MySqlClient.MySqlConnection to System.Data.Common.DbConnection which causes the exception. System.InvalidCastException wurde nicht behandelt. Message="Das Objekt des Typs \"MySql.Data.MySqlClient.MySqlConnection\" kann nicht in Typ \"System.Data.Common.DbConnection\" umgewandelt werden." Source="NHibernate" StackTrace: bei NHibernate.Tool.hbm2ddl.SuppliedConnectionProviderConnectionHelper.Prepare() in c:\CSharp\NH\nhibernate\src\NHibernate\Tool\hbm2ddl\SuppliedConnectionProviderConnectionHelper.cs:Zeile 25. bei NHibernate.Tool.hbm2ddl.SchemaMetadataUpdater.GetReservedWords(Dialect dialect, IConnectionHelper connectionHelper) in c:\CSharp\NH\nhibernate\src\NHibernate\Tool\hbm2ddl\SchemaMetadataUpdater.cs:Zeile 43. bei NHibernate.Tool.hbm2ddl.SchemaMetadataUpdater.Update(ISessionFactory sessionFactory) in c:\CSharp\NH\nhibernate\src\NHibernate\Tool\hbm2ddl\SchemaMetadataUpdater.cs:Zeile 17. bei NHibernate.Impl.SessionFactoryImpl..ctor(Configuration cfg, IMapping mapping, Settings settings, EventListeners listeners) in c:\CSharp\NH\nhibernate\src\NHibernate\Impl\SessionFactoryImpl.cs:Zeile 169. bei NHibernate.Cfg.Configuration.BuildSessionFactory() in c:\CSharp\NH\nhibernate\src\NHibernate\Cfg\Configuration.cs:Zeile 1090. bei OrmTest.Program.Main(String[] args) in C:\Users\Max\Documents\Visual Studio 2008\Projects\OrmTest\OrmTest\Program.cs:Zeile 24. bei System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args) bei System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) bei Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() bei System.Threading.ThreadHelper.ThreadStart_Context(Object state) bei System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) bei System.Threading.ThreadHelper.ThreadStart() InnerException: I am using the programmatically setup approach in order to get a quick NHibernate Setup. Here is the setup Code: Configuration config = new Configuration(); Dictionary props = new Dictionary(); props.Add("dialect", "NHibernate.Dialect.MySQL5Dialect"); props.Add("connection.provider", "NHibernate.Connection.DriverConnectionProvider"); props.Add("connection.driver_class", "NHibernate.Driver.MySqlDataDriver"); props.Add("connection.connection_string", "Server=localhost;Database=orm_test;User ID=root;Password=password"); props.Add("proxyfactory.factory_class", "NHibernate.ByteCode.Spring.ProxyFactoryFactory, NHibernate.ByteCode.Spring"); config.AddProperties(props); config.AddFile("Person.hbm.xml"); ISessionFactory factory = config.BuildSessionFactory(); ISession session = factory.OpenSession(); Is something missing? I downloaded the current mysql Connector from the mysql Website.

    Read the article

  • Increasing MySQL table comment length - is this a bug?

    - by Victor Kimura
    Hi, I read the MySQL table comment length questions on Stack Overflow (questions/391323/table-comment-length-in-mysql and questions/2473934/how-to-increase-mysql-table-comments-length). The first link suggests that it can be done and the second suggests it cannot. I don't know why there is this limitation, as the comments are very useful. Imagine if there was a 60-character limit on the comments in your programs. I wrote about this on my site (http://mysql.tutorialref.com/mysql-table-comment-length-limit.html) and have some snapshots of the phpMyAdmin and dbForge MySQL IDEs. Is there a way to change this in phpMyAdmin, or perhaps even on the CLI? There is a MySQL bug report on this particular problem (follow the link from the first Stack Overflow question); it seems to state that the length problem is fixed. I have MySQL 5.1.42. Thank you, Victor
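
    For what it's worth, the 60-character cap on table comments is enforced by the server itself in 5.0/5.1, so neither phpMyAdmin nor the CLI can lift it there; if I remember correctly it was only raised (to 2048 characters for tables, 1024 for columns) in MySQL 5.5.3. On a new enough server the comment is set in the usual way (my_table is a placeholder name):

        ALTER TABLE my_table
          COMMENT = 'A comment well past the old 60-character limit, describing what the table stores and why.';
        SHOW CREATE TABLE my_table;   -- confirms the full comment was kept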

    Read the article

  • Mysql performance problem & Failed DIMM

    - by murdoch
    Hi I have a dedicated mysql database server which has been having some performance problems recently, under normal load the server will be running fine, then suddenly out of the blue the performance will fall off a cliff. The server isn't using the swap file and there is 12GB of RAM in the server, more than enough for its needs. After contacting my hosting comapnies support they have discovered that there is a failed 2GB DIMM in the server and have scheduled to replace it tomorow morning. My question is could a failed DIMM result in the performance problems I am seeing or is this just coincidence? My worry is that they will replace the ram tomorrow but the problems will persist and I will still be lost of explanations so I am just trying to think ahead. The reason I ask is that there is plenty of RAM in the server, more than required and simply missing 2GB should be a problem, so if this failed DIMM is causing these performance problems then the OS must be trying to access the failed DIMM and slowing down as a result. Does that sound like a credible explanation? This is what DELLs omreport program says about the RAM, notice one dimm is "Critical" Memory Information Health : Critical Memory Operating Mode Fail Over State : Inactive Memory Operating Mode Configuration : Optimizer Attributes of Memory Array(s) Attributes : Location Memory Array 1 : System Board or Motherboard Attributes : Use Memory Array 1 : System Memory Attributes : Installed Capacity Memory Array 1 : 12288 MB Attributes : Maximum Capacity Memory Array 1 : 196608 MB Attributes : Slots Available Memory Array 1 : 18 Attributes : Slots Used Memory Array 1 : 6 Attributes : ECC Type Memory Array 1 : Multibit ECC Total of Memory Array(s) Attributes : Total Installed Capacity Value : 12288 MB Attributes : Total Installed Capacity Available to the OS Value : 12004 MB Attributes : Total Maximum Capacity Value : 196608 MB Details of Memory Array 1 Index : 0 Status : Ok Connector Name : DIMM_A1 Type : DDR3-Registered Size : 2048 MB Index : 1 Status : Ok Connector Name : DIMM_A2 Type : DDR3-Registered Size : 2048 MB Index : 2 Status : Ok Connector Name : DIMM_A3 Type : DDR3-Registered Size : 2048 MB Index : 3 Status : Critical Connector Name : DIMM_B1 Type : DDR3-Registered Size : 2048 MB Index : 4 Status : Ok Connector Name : DIMM_B2 Type : DDR3-Registered Size : 2048 MB Index : 5 Status : Ok Connector Name : DIMM_B3 Type : DDR3-Registered Size : 2048 MB the command free -m shows this, the server seems to be using more than 10GB of ram which would suggest it is trying to use the DIMM total used free shared buffers cached Mem: 12004 10766 1238 0 384 4809 -/+ buffers/cache: 5572 6432 Swap: 2047 0 2047 iostat output while problem is occuring avg-cpu: %user %nice %system %iowait %steal %idle 52.82 0.00 11.01 0.00 0.00 36.17 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 47.00 0.00 576.00 0 576 sda1 0.00 0.00 0.00 0 0 sda2 1.00 0.00 32.00 0 32 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 46.00 0.00 544.00 0 544 avg-cpu: %user %nice %system %iowait %steal %idle 53.12 0.00 7.81 0.00 0.00 39.06 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 49.00 0.00 592.00 0 592 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 49.00 0.00 592.00 0 592 avg-cpu: %user %nice %system %iowait %steal %idle 56.09 0.00 7.43 0.37 0.00 36.10 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 232.00 0.00 64520.00 0 64520 sda1 0.00 0.00 0.00 0 0 sda2 159.00 0.00 63728.00 0 63728 sda3 
0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 73.00 0.00 792.00 0 792 avg-cpu: %user %nice %system %iowait %steal %idle 52.18 0.00 9.24 0.06 0.00 38.51 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 49.00 0.00 600.00 0 600 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 49.00 0.00 600.00 0 600 avg-cpu: %user %nice %system %iowait %steal %idle 54.82 0.00 8.64 0.00 0.00 36.55 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 100.00 0.00 2168.00 0 2168 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 100.00 0.00 2168.00 0 2168 avg-cpu: %user %nice %system %iowait %steal %idle 54.78 0.00 6.75 0.00 0.00 38.48 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 84.00 0.00 896.00 0 896 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 84.00 0.00 896.00 0 896 avg-cpu: %user %nice %system %iowait %steal %idle 54.34 0.00 7.31 0.00 0.00 38.35 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 81.00 0.00 840.00 0 840 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 81.00 0.00 840.00 0 840 avg-cpu: %user %nice %system %iowait %steal %idle 55.18 0.00 5.81 0.44 0.00 38.58 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 317.00 0.00 105632.00 0 105632 sda1 0.00 0.00 0.00 0 0 sda2 224.00 0.00 104672.00 0 104672 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 93.00 0.00 960.00 0 960 avg-cpu: %user %nice %system %iowait %steal %idle 55.38 0.00 7.63 0.00 0.00 36.98 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 74.00 0.00 800.00 0 800 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 74.00 0.00 800.00 0 800 avg-cpu: %user %nice %system %iowait %steal %idle 56.43 0.00 7.80 0.00 0.00 35.77 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 72.00 0.00 784.00 0 784 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 72.00 0.00 784.00 0 784 avg-cpu: %user %nice %system %iowait %steal %idle 54.87 0.00 6.49 0.00 0.00 38.64 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 80.20 0.00 855.45 0 864 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 80.20 0.00 855.45 0 864 avg-cpu: %user %nice %system %iowait %steal %idle 57.22 0.00 5.69 0.00 0.00 37.09 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 33.00 0.00 432.00 0 432 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 33.00 0.00 432.00 0 432 avg-cpu: %user %nice %system %iowait %steal %idle 56.03 0.00 7.93 0.00 0.00 36.04 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 41.00 0.00 560.00 0 560 sda1 0.00 0.00 0.00 0 0 sda2 2.00 0.00 88.00 0 88 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 39.00 0.00 472.00 0 472 avg-cpu: %user %nice %system %iowait %steal %idle 55.78 0.00 5.13 0.00 0.00 39.09 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 29.00 0.00 392.00 0 392 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 29.00 0.00 392.00 0 392 avg-cpu: %user %nice %system %iowait %steal %idle 53.68 0.00 8.30 0.06 0.00 37.95 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 78.00 0.00 4280.00 0 4280 sda1 0.00 0.00 0.00 0 0 sda2 0.00 0.00 0.00 0 0 sda3 0.00 0.00 0.00 0 0 sda4 0.00 0.00 0.00 0 0 sda5 78.00 0.00 4280.00 0 4280

    Read the article

  • IP address numbers in MySQL subquery

    - by Iain Collins
    I have a problem with a subquery involving IPV4 addresses stored in MySQL (MySQL 5.0). The IP addresses are stored in two tables, both in network number format - e.g. the format output by MySQL's INET_ATON(). The first table ('events') contains lots of rows with IP addresses associated with them, the second table ('network_providers') contains a list of provider information for given netblocks. events table (~4,000,000 rows): event_id (int) event_name (varchar) ip_address (unsigned 4 byte int) network_providers table (~60,000 rows): ip_start (unsigned 4 byte int) ip_end (unsigned 4 byte int) provider_name (varchar) Simplified for the purposes of the problem I'm having, the goal is to create an export along the lines of: event_id,event_name,ip_address,provider_name If do a query along the lines of either of the following, I get the result I expect: SELECT provider_name FROM network_providers WHERE INET_ATON('192.168.0.1') >= network_providers.ip_start ORDER BY network_providers.ip_start DESC LIMIT 1 SELECT provider_name FROM network_providers WHERE 3232235521 >= network_providers.ip_start ORDER BY network_providers.ip_start DESC LIMIT 1 That is to say, it returns the correct provider_name for whatever IP I look up (of course I'm not really using 192.168.0.1 in my queries). However, when performing this same query as a subquery, in the following manner, it doesn't yield the result I would expect: SELECT event.id, event.event_name, (SELECT provider_name FROM network_providers WHERE event.ip_address >= network_providers.ip_start ORDER BY network_providers.ip_start DESC LIMIT 1) as provider FROM events Instead the a different (incorrect) value for network_provider is returned - over 90% (but curiously not all) values returned in the provider column contain the wrong provider information for that IP. Using event.ip_address in a subquery just to echo out the value confirms it contains the value I'd expect and that the subquery can parse it. Replacing event.ip_address with an actual network number also works, just using it dynamically in the subquery in this manner that doesn't work for me. I suspect the problem is there is something fundamental and important about subqueries in MySQL that I don't get. I've worked with IP addresses like this in MySQL quite a bit before, but haven't previously done lookups for them using a subquery. The question: I'd really appreciate an example of how I could get the output I want, and if someone here knows, some enlightenment as to why what I'm doing doesn't work so I can avoid making this mistake again. Notes: The actual real-world usage I'm trying to do is considerably more complicated (involving joining two or three tables). This is a simplified version, to avoid overly complicating the question. Additionally, I know I'm not using a between on ip_start & ip_end - that's intentional (the DB's can be out of date, and such cases the owner in the DB is almost always in the next specified range and 'best guess' is fine in this context) however I'm grateful for any suggestions for improvement that relate to the question. Efficiency is always nice, but in this case absolutely not essential - any help appreciated.
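
    This is not a diagnosis of why the correlated subquery misbehaves, but one alternative formulation that may sidestep it is to resolve each event's provider through the greatest matching ip_start in the join condition rather than in the SELECT list; a sketch using the table and column names from the question, assuming ip_start values are unique:

        SELECT e.event_id, e.event_name, e.ip_address, np.provider_name
        FROM events AS e
        LEFT JOIN network_providers AS np
          ON np.ip_start = (SELECT MAX(np2.ip_start)              -- highest range start not above the event's IP
                            FROM network_providers AS np2
                            WHERE np2.ip_start <= e.ip_address);

    Using MAX() instead of ORDER BY ... LIMIT 1 keeps the per-row lookup a plain aggregate, and the LEFT JOIN keeps events with no matching range in the result.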

    Read the article

  • Does the order of columns in a query matter?

    - by James Simpson
    When selecting columns from a MySQL table, is performance affected by the order that you select the columns, as compared to their order in the table (not considering indexes that may cover the columns)? For example, you have a table with columns uid, name, bday, and you have the following query: SELECT uid, name, bday FROM table Does MySQL see the following query any differently, and thus cause any sort of performance hit? SELECT uid, bday, name FROM table
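
    A quick way to settle it on your own data is to compare the execution plans of the two orderings; a sketch, using a placeholder table name t:

        EXPLAIN SELECT uid, name, bday FROM t;
        EXPLAIN SELECT uid, bday, name FROM t;
        -- Identical plan rows indicate the optimizer treats the two statements the same;
        -- the column order only changes the order of the fields in the result set.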

    Read the article

  • Values of generated column not appearing in table

    - by msh210
    I'm using mysql version 5.1.41-3ubuntu12.10 (Ubuntu). mysql> show create table tt\G *************************** 1. row *************************** Table: tt Create Table: CREATE TABLE `tt` ( `pz` int(8) DEFAULT NULL, `os` varchar(8) DEFAULT NULL, `uz` int(11) NOT NULL, `p` bigint(21) NOT NULL DEFAULT '0', `c` decimal(23,0) DEFAULT NULL, KEY `pz` (`pz`), KEY `uz` (`uz`), KEY `os` (`os`), KEY `pz_2` (`pz`,`uz`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 1 row in set (0.00 sec) mysql> select pz,uz,pz*uz, -> if(pz*uz,1,.5), -> left(pz,2) pl,left(lpad(uz,5,0),2) ul, -> p from tt limit 10; +-------+----+-------+----------------+--------+----+--------+ | pz | uz | pz*uz | if(pz*uz,1,.5) | pl | ul | p | +-------+----+-------+----------------+--------+----+--------+ | NULL | 0 | NULL | 0.5 | NULL | 00 | 4080 | | NULL | 0 | NULL | 0.5 | NULL | 00 | 323754 | | 89101 | 0 | 0 | 0.5 | 89 | 00 | 6880 | | 0 | 0 | 0 | 0.5 | 0 | 00 | 11591 | | 89110 | 0 | 0 | 0.5 | 89 | 00 | 72 | | 78247 | 0 | 0 | 0.5 | 78 | 00 | 27 | | 90062 | 0 | 0 | 0.5 | 90 | 00 | 5 | | 63107 | 0 | 0 | 0.5 | 63 | 00 | 4 | | NULL | 0 | NULL | 0.5 | NULL | 00 | 54561 | | 94102 | 0 | 0 | 0.5 | 94 | 00 | 12499 | +-------+----+-------+----------------+--------+----+--------+ So far so good. As you see, 0.5 appears as a value of if(pz*uz,1,.5). The problem is: mysql> select os, -> if(pz*uz,left(pz,2)<=>left(lpad(uz,5,0),2),.5) uptwo, -> if(pz*uz,left(pz,3)<=>left(lpad(uz,5,0),3),.5) upthree, -> sum(p) p,sum(c) c -> from tt t -> group by os,uptwo,upthree order by null; +----+-------+---------+---------+-------+ | os | uptwo | upthree | p | c | +----+-------+---------+---------+-------+ | u | 1 | 1 | 52852 | 318 | | i | 1 | 1 | 7046563 | 21716 | | m | 1 | 1 | 1252166 | 7337 | | i | 0 | 0 | 1830284 | 4033 | | m | 0 | 0 | 294612 | 1714 | | i | 1 | 0 | 911486 | 3560 | | m | 1 | 0 | 145182 | 1136 | | u | 0 | 0 | 12144 | 23 | | u | 1 | 0 | 1571 | 8 | +----+-------+---------+---------+-------+ Although I group by uptwo, 0.5 doesn't appear in that column. What happened to the 0.5 values? Edit: As noted in the comments to Todd Gibson's answer, I also tried it with if(pz*uz,cast(left(pz,2)<=>left(lpad(uz,5,0),2) as decimal),.5) instead of if(pz*uz,left(pz,2)<=>left(lpad(uz,5,0),2),.5), but it, too, didn't work.

    Read the article

  • Database model for keeping track of likes/shares/comments on blog posts over time

    - by gage
    My goal is to keep track of the popular posts on different blog sites based on social network activity at any given time. The goal is not to simply get the most popular now, but instead find posts that are popular compared to other posts on the same blog. For example, I follow a tech blog, a sports blog, and a gossip blog. The tech blog gets waaay more readership than the other two blogs, so in raw numbers every post on the tech blog will always out number views on the other two. So lets say the average tech blog post gets 500 facebook likes and the other two get an average of 50 likes per post. Then when there is a sports blog post that has 200 fb likes and a gossip blog post with 300 while the tech blog posts today have 500 likes I want to highlight the sports and gossip blog posts (more likes than average vs tech blog with more # of likes but just average for the blog) The approach I am thinking of taking is to make an entry in a database for each blog post. Every x minutes (say every 15 minutes) I will check how many likes/shares/comments an entry has received on all the social networks (facebook, twitter, google+, linkeIn). So over time there will be a history of likes for each blog post, i.e post 1234 after 15 min: 10 fb likes, 4 tweets, 6 g+ after 30 min: 15 fb likes, 15 tweets, 10 g+ ... ... after 48 hours: 200 fb likes, 25 tweets, 15 g+ By keeping a history like this for each blog post I can know the average number of likes/shares/tweets at any give time interval. So for example the average number of fb likes for all blog posts 48hrs after posting is 50, and a particular post has 200 I can mark that as a popular post and feature/highlight it. A consideration in the design is to be able to easily query the values (likes/shares) for a specific time-frame, i.e. fb likes after 30min or tweets after 24 hrs in-order to compute averages with which to compare against (or should averages be stored in it's own table?) If this approach is flawed or could use improvement please let me know, but it is not my main question. My main question is what should a database scheme for storing this info look like? Assuming that the above approach is taken I am trying to figure out what a database schema for storing the likes over time would look like. I am brand new to databases, in doing some basic reading I see that it is advisable to make a 3NF database. I have come up with the following possible schema. Schema 1 DB Popular Posts Table: Post post_id ( primary key(pk) ) url title Table: Social Activity activity_id (pk) url (fk) type (i.e. facebook,twitter,g+) value timestamp This was my initial instinct (base on my very limited db knowledge). As far as I under stand this schema would be 3NF? I searched for designs of similar database model, and found this question on stackoverflow, http://stackoverflow.com/questions/11216080/data-structure-for-storing-height-and-weight-etc-over-time-for-multiple-users . The scenario in that question is similar (recording weight/height of users overtime). Taking the accepted answer for that question and applying it to my model results in something like: Schema 2 (same as above, but break down the social activity into 2 tables) DB Popular Posts Table: Post post_id (pk) url title Table: Social Measurement measurement_id (pk) post_id (fk) timestamp Table: Social stat stat_id (pk) measurement_id (fk) type (i.e. facebook,twitter,g+) value The advantage I see in schema 2 is that I will likely want to access all the values for a given time, i.e. 
when making a measurement at 30min after a post is published I will simultaneous check number of fb likes, fb shares, fb comments, tweets, g+, linkedIn. So with this schema it may be easier get get all stats for a measurement_id corresponding to a certain time, i.e. all social stats for post 1234 at time x. Another thought I had is since it doesn't make sense to compare number of fb likes with number of tweets or g+ shares, maybe it makes sense to separate each social measurement into it's own table? Schema 3 DB Popular Posts Table: Post post_id (pk) url title Table: fb_likes fb_like_id (pk) post_id (fk) timestamp value Table: fb_shares fb_shares_id (pk) post_id (fk) timestamp value Table: tweets tweets__id (pk) post_id (fk) timestamp value Table: google_plus google_plus_id (pk) post_id (fk) timestamp value As you can see I am generally lost/unsure of what approach to take. I'm sure this typical type of database problem (storing measurements overtime, i.e temperature statistic) that must have a common solution. Is there a design pattern/model for this, does it have a name? I tried searching for "database periodic data collection" or "database measurements over time" but didn't find anything specific. What would be an appropriate model to solve the needs of this problem?
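
    To make the options concrete, here is a rough DDL sketch of Schema 2 from the question; the table and column names follow the question where given, while the measured_at column name, index and engine choices are assumptions:

        CREATE TABLE post (
          post_id        INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          url            VARCHAR(255) NOT NULL,
          title          VARCHAR(255) NOT NULL
        ) ENGINE=InnoDB;

        CREATE TABLE social_measurement (
          measurement_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          post_id        INT UNSIGNED NOT NULL,
          measured_at    DATETIME NOT NULL,
          KEY idx_post_time (post_id, measured_at),
          CONSTRAINT fk_measurement_post FOREIGN KEY (post_id) REFERENCES post (post_id)
        ) ENGINE=InnoDB;

        CREATE TABLE social_stat (
          stat_id        INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          measurement_id INT UNSIGNED NOT NULL,
          type           VARCHAR(20) NOT NULL,   -- e.g. 'fb_like', 'fb_share', 'tweet', 'gplus'
          value          INT UNSIGNED NOT NULL,
          KEY idx_measurement_type (measurement_id, type),
          CONSTRAINT fk_stat_measurement FOREIGN KEY (measurement_id) REFERENCES social_measurement (measurement_id)
        ) ENGINE=InnoDB;

    With this shape, "all stats for post 1234 at a given measurement" is a single join from social_measurement to social_stat, and per-network averages at a fixed offset (say 48 hours after posting) fall out of a GROUP BY over type.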

    Read the article

  • MySQL 5.5 (Percona) assertion failure log.. what would cause this?

    - by Tom Geee
    256GB, 64 Core , AMD running Ubuntu 12.04 with Percona MySQL 5.5.28. Below is the assertion failure. We just had a second assertion failure (different "in file", position, etc) while running a large set of inserts. After the first failure, MySQL restarted after a reboot only - after continuously looping on the same error after trying to recover. I decided to do a mysqlcheck with -o for optimize. Since these are all Innodb tables (very large tables, 60+GB) this would do an alter table on all tables. In the middle of this , the below assertion failure happened again: 121115 22:30:31 InnoDB: Assertion failure in thread 140086589445888 in file btr0pcur.c line 452 InnoDB: Failing assertion: btr_page_get_prev(next_page, mtr) == buf_block_get_page_no(btr_pcur_get_block(cursor)) InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html InnoDB: about forcing recovery. 03:30:31 UTC - mysqld got signal 6 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. Please help us make Percona Server better by reporting any bugs at http://bugs.percona.com/ key_buffer_size=536870912 read_buffer_size=131072 max_used_connections=404 max_threads=500 thread_count=90 connection_count=90 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1618416 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. Thread pointer: 0x14edeb710 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 
stack_bottom = 7f687366ce80 thread_stack 0x30000 /usr/sbin/mysqld(my_print_stacktrace+0x2e)[0x7b52ee] /usr/sbin/mysqld(handle_fatal_signal+0x484)[0x68f024] /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f9cbb23fcb0] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35)[0x7f9cbaea6425] /lib/x86_64-linux-gnu/libc.so.6(abort+0x17b)[0x7f9cbaea9b8b] /usr/sbin/mysqld[0x858463] /usr/sbin/mysqld[0x804513] /usr/sbin/mysqld[0x808432] /usr/sbin/mysqld[0x7db8bf] /usr/sbin/mysqld(_Z13rr_sequentialP11READ_RECORD+0x1d)[0x755aed] /usr/sbin/mysqld(_Z17mysql_alter_tableP3THDPcS1_P24st_ha_create_informationP10TABLE_LISTP10Alter_infojP8st_orderb+0x216b)[0x60399b] /usr/sbin/mysqld(_Z20mysql_recreate_tableP3THDP10TABLE_LIST+0x166)[0x604bd6] /usr/sbin/mysqld[0x647da1] /usr/sbin/mysqld(_ZN24Optimize_table_statement7executeEP3THD+0xde)[0x64891e] /usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0x1168)[0x59b558] /usr/sbin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x30c)[0x5a132c] /usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1620)[0x5a2a00] /usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0x14f)[0x63ce6f] /usr/sbin/mysqld(handle_one_connection+0x51)[0x63cf31] /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f9cbb237e9a] /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f9cbaf63cbd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort. Query (7f6300004b60): is an invalid pointer Connection ID (thread ID): 876 Status: NOT_KILLED You may download the Percona Server operations manual by visiting http://www.percona.com/software/percona-server/. You may find information in the manual which will help you identify the cause of the crash. 121115 22:31:07 [Note] Plugin 'FEDERATED' is disabled. 121115 22:31:07 InnoDB: The InnoDB memory heap is disabled 121115 22:31:07 InnoDB: Mutexes and rw_locks use GCC atomic builtins .. Then it recovered , without a reboot this time. from the log, what would cause this? I am currently running a dump to see if the problem resurfaces. edit: data partition is all in / since this is a hosted, defaulted file system unfortunately: Filesystem Size Used Avail Use% Mounted on /dev/vda3 742G 445G 260G 64% / udev 121G 4.0K 121G 1% /dev tmpfs 49G 248K 49G 1% /run none 5.0M 0 5.0M 0% /run/lock none 121G 0 121G 0% /run/shm /dev/vda1 99M 54M 40M 58% /boot my.cnf: [client] port = 3306 socket = /var/run/mysqld/mysqld.sock [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] skip-name-resolve innodb_file_per_table default_storage_engine=InnoDB user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /data/mysql tmpdir = /tmp skip-external-locking key_buffer = 512M max_allowed_packet = 128M thread_stack = 192K thread_cache_size = 64 myisam-recover = BACKUP max_connections = 500 table_cache = 812 table_definition_cache = 812 #query_cache_limit = 4M #query_cache_size = 512M join_buffer_size = 512K innodb_additional_mem_pool_size = 20M innodb_buffer_pool_size = 196G #innodb_file_io_threads = 4 #innodb_thread_concurrency = 12 innodb_flush_log_at_trx_commit = 1 innodb_log_buffer_size = 8M innodb_log_file_size = 1024M innodb_log_files_in_group = 2 innodb_max_dirty_pages_pct = 90 innodb_lock_wait_timeout = 120 log_error = /var/log/mysql/error.log long_query_time = 5 slow_query_log = 1 slow_query_log_file = /var/log/mysql/slowlog.log [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] [isamchk] key_buffer = 16M

    Read the article

  • Reporter seeking comments on computer science education [closed]

    - by user63982
    I'm a reporter doing a story for a tech website on computer science education, the need for software engineers, and the proficiency of new engineer hires. I would love to chat or exchange emails with anyone on this site who has an opinion on cs education and whether it did or did not prepare them for a job, and the pluses and minuses of the theoretical vs. the practical. I saw 1051's post and its comments and would love to connect with the poster and any of the commenters. Or anyone else with an opinion. I look forward to hearing from you. Thank you.

    Read the article

  • MySQL Connection Timeout Issue - Grails Application on Tomcat using Hibernate and ORM

    - by gav
    Hi Guys I have a small grails application running on Tomcat in Ubuntu on a VPS. I use MySql as my datastore and everything works fine unless I leave the application for more than half a day (8 hours?). I did some searching and apparently this is the default wait_timeout in mysql.cnf so after 8 hours the connection will die but Tomcat won't know so when the next user tries to view the site they will see the connection failure error. Refreshing the page will fix this but I want to get rid of the error altogether. For my version of MySql (5.0.75) I have only my.cnf and it doesn't contain such a parameter, In any case changing this parameter doesn't solve the problem. This Blog Post seems to be reporting a similar error but I still don't fully understand what I need to configure to get this fixed and also I am hoping that there is a simpler solution than another third party library. The machine I'm running on has 256MB ram and I'm trying to keep the number of programs/services running to a minimum. Is there something I can configure in Grails / Tomcat / MySql to get this to go away? Thanks in advance, Gav From my Catalina.out; 2010-04-29 21:26:25,946 [http-8080-2] ERROR util.JDBCExceptionReporter - The last packet successfully received from the server was 102,906,722 milliseconds$ 2010-04-29 21:26:25,994 [http-8080-2] ERROR errors.GrailsExceptionResolver - Broken pipe java.net.SocketException: Broken pipe at java.net.SocketOutputStream.socketWrite0(Native Method) ... 2010-04-29 21:26:26,016 [http-8080-2] ERROR util.JDBCExceptionReporter - Already closed. 2010-04-29 21:26:26,016 [http-8080-2] ERROR util.JDBCExceptionReporter - Already closed. 2010-04-29 21:26:26,017 [http-8080-2] ERROR servlet.GrailsDispatcherServlet - HandlerInterceptor.afterCompletion threw exception org.hibernate.exception.GenericJDBCException: Cannot release connection at java.lang.Thread.run(Thread.java:619) Caused by: java.sql.SQLException: Already closed. at org.apache.commons.dbcp.PoolableConnection.close(PoolableConnection.java:84) at org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.close(PoolingDataSource.java:181) ... 1 more

    Read the article

  • Python MySQL wrong architecture error

    - by phoebebright
    I've been at this for some time and read many sites on the subject. suspect I have junk lying about causing this problem. But where? This is the error when I import MySQLdb in python: >>> import MySQLdb /Library/Python/2.6/site-packages/MySQL_python-1.2.3c1-py2.6-macosx-10.6-universal.egg/_mysql.py:3: UserWarning: Module _mysql was already imported from /Library/Python/2.6/site-packages/MySQL_python-1.2.3c1-py2.6-macosx-10.6-universal.egg/_mysql.pyc, but /Users/phoebebr/Downloads/MySQL-python-1.2.3c1 is being added to sys.path Traceback (most recent call last): File "<stdin>", line 1, in <module> File "MySQLdb/__init__.py", line 19, in <module> import _mysql File "build/bdist.macosx-10.6-universal/egg/_mysql.py", line 7, in <module> File "build/bdist.macosx-10.6-universal/egg/_mysql.py", line 6, in __bootstrap__ ImportError: dlopen(/Users/phoebebr/.python-eggs/MySQL_python-1.2.3c1-py2.6-macosx-10.6-universal.egg-tmp/_mysql.so, 2): no suitable image found. Did find: /Users/phoebebr/.python-eggs/MySQL_python-1.2.3c1-py2.6-macosx-10.6-universal.egg-tmp/_mysql.so: mach-o, but wrong architecture I'm trying for 64 bit so checked here: file $(which python) /usr/bin/python: Mach-O universal binary with 3 architectures /usr/bin/python (for architecture x86_64): Mach-O 64-bit executable x86_64 /usr/bin/python (for architecture i386): Mach-O executable i386 /usr/bin/python (for architecture ppc7400): Mach-O executable ppc file $(which mysql) /usr/local/mysql/bin/mysql: Mach-O 64-bit executable x86_64 Have set my default version of python to 2.6 python Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29) [GCC 4.2.1 (Apple Inc. build 5646)] on darwin Tried deleting build directory and python setup.py clean Renamed Python/2.5/site-packages so it could not try and pick that up. Run out of ideas. Any suggestions?

    Read the article

  • Memory allocation error from MySql ODBC 5.1 driver in C# application on insert statement

    - by Chinjoo
    I have a .NET Wndows application in C#. It's a simple Windows application that is using the MySql 5.1 database community edition. I've downloaded the MySql ODBC driver and have created a dsn to my database on my local machine. On my application, I can perform get type queries without problems, but when I execute a given insert statement (not that I've tried doing any others), I get the following error: {"ERROR [HY001] [MySQL][ODBC 5.1 Driver][mysqld-5.0.27-community-nt]Memory allocation error"} I'm running on a Windows XP machine. My machine has 1 GB of memory. Anyone have any ideas? See code below OdbcConnection MyConn = DBConnection.getDBConnection(); int result = -1; try { MyConn.Open(); OdbcCommand myCmd = new OdbcCommand(); myCmd.Connection = MyConn; myCmd.CommandType = CommandType.Text; OdbcParameter userName = new OdbcParameter("@UserName", u.UserName); OdbcParameter password = new OdbcParameter("@Password", u.Password); OdbcParameter firstName = new OdbcParameter("@FirstName", u.FirstName); OdbcParameter LastName = new OdbcParameter("@LastName", u.LastName); OdbcParameter sex = new OdbcParameter("@sex", u.Sex); myCmd.Parameters.Add(userName); myCmd.Parameters.Add(password); myCmd.Parameters.Add(firstName); myCmd.Parameters.Add(LastName); myCmd.Parameters.Add(sex); myCmd.CommandText = mySqlQueries.insertChatUser; result = myCmd.ExecuteNonQuery(); } catch (Exception e) { //{"ERROR [HY001] [MySQL][ODBC 5.1 Driver][mysqld-5.0.27-community-nt]Memory // allocation error"} EXCEPTION ALWAYS THROWN HERE } finally { try { if (MyConn != null) MyConn.Close(); } finally { } }

    Read the article

  • oRecordset in ASP.NET mySQL problem

    - by StealthRT
    I have this mySQL code that connects to my server. It connects just fine: Dim MyConString As String = "DRIVER={MySQL ODBC 3.51 Driver};" & _ "SERVER=xxx.com;" & _ "DATABASE=xxx;" & _ "UID=xxx;" & _ "PASSWORD=xxx;" & _ "OPTION=3;" Dim conn As OdbcConnection = New OdbcConnection(MyConString) conn.Open() Dim MyCommand As New OdbcCommand MyCommand.Connection = conn MyCommand.CommandText = "select * from userinfo WHERE emailAddress = '" & theUN & "'"" MyCommand.ExecuteNonQuery() conn.Close() However, i have an old Classic ASP page that uses "oRecordset" to get the data from the mySQL server: Set oConnection = Server.CreateObject("ADODB.Connection") Set oRecordset = Server.CreateObject("ADODB.Recordset") oConnection.Open "DRIVER={MySQL ODBC 3.51 Driver}; SERVER=xxx.com; PORT=3306; DATABASE=xxx; USER=xxx; PASSWORD=xxx; OPTION=3;" sqltemp = "select * from userinfo WHERE emailAddress = '" & theUN & "'" oRecordset.Open sqltemp, oConnection,3,3 And i can use oRecordset as follows: if oRecordset.EOF then.... or strValue = oRecordset("Table_Name").value or oRecordset("Table_Name").value = "New Value" oRecordset.update etc... However, for the life of me, i can not find any .net code that is simular to that of my Classic ASP page!!!!! Any help would be great! :o) David

    Read the article
