Search Results

Search found 20659 results on 827 pages for 'var'.


  • Why the huge difference between etch and lenny MySQL

    - by rmarimon
    I've been working on a program for the last year. The development environment runs its MySQL database on Debian etch (mysql Ver 14.12 Distrib 5.0.32, for pc-linux-gnu (i486) using readline 5.2). The production environment runs Debian lenny (mysql Ver 14.12 Distrib 5.0.51a, for debian-linux-gnu (i486) using readline 5.2). I was timing some database access, and what takes 150 seconds in the development environment takes 300 in production. I checked the /etc/mysql/my.cnf files on both systems and the only differences are:

        # development
        bind-address = 10.168.1.82
        log_bin = /var/log/mysql/mysql-bin.log

        # production
        bind-address = 127.0.0.1
        myisam-recover = BACKUP
        #log_bin = /var/log/mysql/mysql-bin.log

    I dump the database from production, load it into development, and on that server everything takes half the time! What should I check?
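
    A first diagnostic pass, sketched with standard MySQL 5.0 tooling (the variable names below are common suspects, not values taken from the post): compare the effective server settings rather than just my.cnf, since compiled-in defaults can differ between builds.

        mysql -e "SHOW VARIABLES" > /tmp/vars.txt       # run on each host
        diff dev-vars.txt prod-vars.txt                 # after copying both to one machine

        # usual suspects for a clean 2x slowdown
        mysql -e "SHOW VARIABLES LIKE 'key_buffer_size'"
        mysql -e "SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit'"
        mysql -e "SHOW VARIABLES LIKE 'query_cache_size'"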

    Read the article

  • Installing MySQL 5.5 manually on Ubuntu 10.04 server, errors about "/tmp/mysql.sock"

    - by black sensei
    I've set up an Ubuntu server and wanted to install MySQL 5.5. I've been following these MySQL documentation steps. I have libaio-dev installed. Everything went fine until I ran bin/mysqld_safe --user=mysql &, which runs into an issue and never returns to the shell. The output of mysqld_safe is logged to /usr/local/mysql/data/host_name.err. When I checked that file, it was complaining about /tmp/mysql.sock. Unfortunately I can only describe parts of the error, because I mistakenly deleted the files from that install attempt before writing this. Should I change the socket to /var/run/mysqld/mysqld.socket after copying the .cnf file to /etc? I've also checked the /var/run/mysqld directory and there is no mysqld.socket. How do I proceed? Thanks for reading this and helping out.
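
    One way to proceed, assuming the generic-binaries install from the MySQL manual: the compiled-in default socket is /tmp/mysql.sock, so either let it live there or point both server and client at the Debian-style path in the config. A minimal sketch (paths illustrative):

        # /etc/my.cnf
        [mysqld]
        socket = /var/run/mysqld/mysqld.sock

        [client]
        socket = /var/run/mysqld/mysqld.sock

        # the directory is not created by a tarball install
        sudo mkdir -p /var/run/mysqld
        sudo chown mysql:mysql /var/run/mysqld

    The host_name.err complaint should change or disappear once mysqld can actually create the socket it was told to use.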

    Read the article

  • Ubuntu reboot suddenly

    - by Gladiator
    It's the second day I have this issue, and Ubuntu still reboots suddenly. Nothing significant in syslog:

        salim@SalimPC:~$ tail -f /var/log/syslog
        Nov 7 12:34:53 SalimPC dbus[873]: [system] Successfully activated service 'com.ubuntu.SystemService'
        SalimPC dbus[873]: [system] Activating service name='org.freedesktop.PackageKit' (using servicehelper)
        SalimPC AptDaemon: INFO: Initializing daemon
        SalimPC AptDaemon.PackageKit: INFO: Initializing PackageKit compat layer
        SalimPC dbus[873]: [system] Successfully activated service 'org.freedesktop.PackageKit'
        SalimPC AptDaemon.PackageKit: INFO: Initializing PackageKit transaction
        SalimPC AptDaemon.Worker: INFO: Simulating trans:/org/debian/apt/transaction/6933b4b977d944fa8714898c01bfeae4
        SalimPC AptDaemon.Worker: INFO: Processing transaction org/debian/apt/transaction/6933b4b977d944fa8714898c01bfeae4
        SalimPC AptDaemon.PackageKit: INFO: Get updates()
        Nov 7 12:34:58 SalimPC AptDaemon.Worker: INFO: Finished transaction /org/debian/apt/transaction/6933b4b977d944fa8714898c01bfeae4

    ---------------------------------Previous post------------------

    Hi, my Ubuntu has rebooted suddenly (2 times in one hour so far). After login, a crash was indicated in /usr/sbin/ntop. Below are the syslog and a screenshot of the crash.

        salim@SalimPC:~$ tail /var/log/syslog
        Nov 6 18:25:38 SalimPC ntop[1630]: **WARNING** packet truncated (9642->8232)
        Nov 6 18:25:38 SalimPC ntop[1630]: **WARNING** packet truncated (8274->8232)
        Nov 6 18:25:38 SalimPC ntop[1630]: **WARNING** packet truncated (11010->8232)
        Nov 6 18:25:38 SalimPC ntop[1630]: **WARNING** packet truncated (17850->8232)
        Nov 6 18:25:38 SalimPC ntop[1630]: **WARNING** packet truncated (8274->8232)
        Nov 6 18:25:39 ntop[1630]: last message repeated 2 times
        Nov 6 18:25:39 SalimPC ntop[1630]: **WARNING** packet truncated (16482->8232)
        Nov 6 18:25:40 SalimPC ntop[1630]: **WARNING** packet truncated (11010->8232)
        Nov 6 18:25:43 SalimPC ntop[3075]: THREADMGMT[t3063068672]: ntop RUNSTATE: PREINIT(1)
        Nov 6 18:25:43
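
    The pasted syslog only shows routine AptDaemon activity, so a reasonable next step (standard tools, nothing machine-specific assumed) is to check whether the reboots were clean and whether the kernel logged anything right before them:

        last -x | head -20                              # reboot/shutdown records
        grep -iE "panic|oops|mce|machine check" /var/log/kern.log
        sudo apt-get install mcelog                     # logs hardware machine-check errors (x86 assumed)

    Sudden reboots with nothing in syslog often point at hardware (power, RAM, overheating) rather than at the crashing ntop process.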

    Read the article

  • NTPD issue - syncs then slowly loses ground

    - by ethrbunny
    RHEL 5 workstation. Has been running smoothly for years. I did a 'pup' recently and followed with a nice, cleansing reboot. Afterwards the system had some startup issues: namely MySQL refused to start. It just went "...." for 5-10 minutes before I did another boot and skipped that step (using 'interactive'). This was the only service that didn't want to start normally. So now that the system is booted I've found that it doesn't want to stay in sync with the NTP master, and after 48 hours it refuses any SSH other than root. NTPD: this service starts normally and gets a lock on 4 servers. Almost immediately it starts to lose ground and now (after 3 days) is almost 40 hours behind. If I stop/start the service it gets the lock, resets the system clock and starts losing ground again. The 'hwclock' is set properly and maintains its time. Login: when I (re)start the ntp server I am able to log in normally. I assume this problem is due to losing sync with LDAP. This appears to be verified by LDAP errors in /var/log/messages. Suggestions on where to look? ADDENDA: Tried deleting the 'drift' file. After a bit it gets recreated with 0.000. From /var/log/messages:

        Jan 17 06:54:01 aeolus ntpdate[5084]: step time server 129.95.96.10 offset 30.139216 sec
        Jan 17 06:54:01 aeolus ntpd[5086]: ntpd [email protected] Tue Oct 25 12:54:17 UTC 2011 (1)
        Jan 17 06:54:01 aeolus ntpd[5087]: precision = 1.000 usec
        Jan 17 06:54:01 aeolus ntpd[5087]: Listening on interface wildcard, 0.0.0.0#123 Disabled
        Jan 17 06:54:01 aeolus ntpd[5087]: Listening on interface wildcard, ::#123 Disabled
        Jan 17 06:54:01 aeolus ntpd[5087]: Listening on interface lo, ::1#123 Enabled
        Jan 17 06:54:01 aeolus ntpd[5087]: Listening on interface eth0, fe80::213:72ff:fe20:4080#123 Enabled
        Jan 17 06:54:01 aeolus ntpd[5087]: Listening on interface lo, 127.0.0.1#123 Enabled
        Jan 17 06:54:01 aeolus ntpd[5087]: Listening on interface eth0, 10.127.24.81#123 Enabled
        Jan 17 06:54:01 aeolus ntpd[5087]: kernel time sync status 0040
        Jan 17 06:54:02 aeolus ntpd[5087]: frequency initialized 0.000 PPM from /var/lib/ntp/drift
        Jan 17 06:54:02 aeolus ntpd[5087]: system event 'event_restart' (0x01) status 'sync_alarm, sync_unspec, 1 event, event_unspec' (0xc010)

    You can see the 30 second offset. This was after about one minute of operation.
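
    Losing ~40 hours in 3 days is far beyond what ntpd can slew away, which usually indicates the kernel is losing ticks rather than an NTP configuration problem. A sketch of how to confirm, using stock RHEL 5 tools:

        ntpq -pn                    # watch 'offset' grow between polls
        ntpdate -q 129.95.96.10     # query only: current offset without stepping
        ntptime                     # kernel time status word
        grep -i "losing.*ticks\|time went backwards" /var/log/messages

    If this workstation is a virtual machine, lost ticks are the classic symptom and clocksource/divider kernel parameters on the guest are the usual fix; that part is an assumption, since the post doesn't say whether it is virtualized.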

    Read the article

  • How to resolve - dpkg error: old pre-removal script returned error exit status 102

    - by Siva Prasad Varma
    I am unable to install or remove a package on my Ubuntu 10.04 due to the following error:

        $ sudo apt-get autoremove
        Password:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be REMOVED: busybox
        0 upgraded, 0 newly installed, 1 to remove and 9 not upgraded.
        1 not fully installed or removed.
        Need to get 0B/212kB of archives.
        After this operation, 627kB disk space will be freed.
        Do you want to continue [Y/n]? y
        Selecting previously deselected package nscd.
        (Reading database ... 235651 files and directories currently installed.)
        Preparing to replace nscd 2.11.1-0ubuntu7.8 (using .../nscd_2.11.1-0ubuntu7.8_amd64.deb) ...
        invoke-rc.d: not a symlink: /etc/rc2.d/S76nscd
        dpkg: warning: old pre-removal script returned error exit status 102
        dpkg - trying script from the new package instead ...
        invoke-rc.d: not a symlink: /etc/rc2.d/S76nscd
        dpkg: error processing /var/cache/apt/archives/nscd_2.11.1-0ubuntu7.8_amd64.deb (--unpack):
        subprocess new pre-removal script returned error exit status 102
        update-rc.d: warning: /etc/rc2.d/S76nscd is not a symbolic link
        invoke-rc.d: not a symlink: /etc/rc2.d/S76nscd
        dpkg: error while cleaning up:
        subprocess installed post-installation script returned error exit status 102
        Errors were encountered while processing:
        /var/cache/apt/archives/nscd_2.11.1-0ubuntu7.8_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    What should I do to resolve this error? I have tried sudo dpkg --remove --force-remove-reinstreq nscd but it did not work.
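
    The repeated complaint is that /etc/rc2.d/S76nscd is a regular file where invoke-rc.d expects a symlink into /etc/init.d. A minimal sketch of the usual repair, assuming the init script itself is intact at /etc/init.d/nscd:

        sudo rm /etc/rc2.d/S76nscd
        sudo ln -s ../init.d/nscd /etc/rc2.d/S76nscd
        sudo dpkg --configure -a
        sudo apt-get -f install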

    Read the article

  • Team Foundation Server 2012 Build Global List Problems

    - by Bob Hardister
    My experience with the upgrade and use of TFS 2012 has been very positive. I did come across a couple of issues recently that tripped things up for a while. ISSUE 1 The first issue is that 2012 prior to Update 1 published an invalid build list item value to the collection global list. In 2010, the build global list item value syntax is an underscore between the build definition and the build number. In the 2012 RTM this underscore was replaced with a backslash, which is invalid. Specifically, an upload of the global list fails when the backslash is followed at some point by a period. The error when using the API is: <detail ExceptionMessage="TF26204: The account you entered is not recognized. Contact your Team Foundation Server administrator to add your account." BaseExceptionName="Microsoft.TeamFoundation.WorkItemTracking.Server.ValidationException"><details id="600019" xmlns="http://schemas.microsoft.com/TeamFoundation/2005/06/WorkItemTracking/faultdetail/03" /></detail> When uploading the global list via the process editor, a similar error is shown (screenshot not included in this excerpt). This issue is corrected in Update 1, as the backslash is changed to a forward slash. ISSUE 2 The second issue is that when upgrading from 2010 to 2012, the builds in 2010 are not published to the 2012 global list. After the upgrade the 2012 global list doesn’t have any builds, and only builds run in 2012 are published to the global list. This was reported to the MSDN forums and Connect. To correct this I wrote a utility to pull all the builds and recreate the builds global list for each project in each collection. This is a console application with a program.cs, a globallists.cs and an app.config (not published here). The utility connects to TFS 2012 and loops through the collections, or a target collection as specified in the app.config. It then loops through the projects, the build definitions, and the builds. It creates a global list for each project if that project has at least one build, then imports the new list to TFS. Here’s the code for the Program and GlobalLists classes. Program.CS using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.TeamFoundation.Framework.Client; using Microsoft.TeamFoundation.Framework.Common; using Microsoft.TeamFoundation.Client; using Microsoft.TeamFoundation.Server; using System.IO; using System.Xml; using Microsoft.TeamFoundation.WorkItemTracking.Client; using System.Diagnostics; using Utilities; using System.Configuration; namespace TFSProjectUpdater_CLC { class Program { static void Main(string[] args) { DateTime temp_d = System.DateTime.Now; string logName = temp_d.ToShortDateString(); logName = logName.Replace("/", "_"); logName = logName + "_" + temp_d.TimeOfDay; logName = logName.Replace(":", "."); logName = "TFSGlobalListBuildsUpdater_" + logName + ".log"; Trace.Listeners.Add(new TextWriterTraceListener(Path.Combine(ConfigurationManager.AppSettings["logLocation"], logName))); Trace.AutoFlush = true; Trace.WriteLine("Start:" + DateTime.Now.ToString()); Console.WriteLine("Start:" + DateTime.Now.ToString()); string tfsServer = ConfigurationManager.AppSettings["TargetTFS"].ToString(); GlobalLists gl = new GlobalLists(); //replace this with the URL to your TFS instance. 
Uri tfsUri = new Uri("https://" + tfsServer + "/tfs"); //bool foundLite = false; TfsConfigurationServer config = new TfsConfigurationServer(tfsUri, new UICredentialsProvider()); config.EnsureAuthenticated(); ITeamProjectCollectionService collectionService = config.GetService<ITeamProjectCollectionService>(); IList<TeamProjectCollection> collections = collectionService.GetCollections().OrderBy(collection => collection.Name.ToString()).ToList(); //target Collection string targetCollection = ConfigurationManager.AppSettings["targetCollection"]; foreach (TeamProjectCollection coll in collections) { if (targetCollection.Equals(string.Empty)) { if (!coll.Name.Equals("TFS Archive") && !coll.Name.Equals("DefaultCol") && !coll.Name.Equals("Team Project Template Gallery")) { doWork(coll, tfsServer); } } else { if (coll.Name.Equals(targetCollection)) { doWork(coll, tfsServer); } } } Trace.WriteLine("Finished:" + DateTime.Now.ToString()); Console.WriteLine("Finished:" + DateTime.Now.ToString()); if (System.Diagnostics.Debugger.IsAttached) { Console.WriteLine("\nHit any key to exit..."); Console.ReadKey(); } Trace.Close(); } static void doWork(TeamProjectCollection coll, string tfsServer) { GlobalLists gl = new GlobalLists(); //target Collection string targetProject = ConfigurationManager.AppSettings["targetProject"]; Trace.WriteLine("Collection: " + coll.Name); Uri u = new Uri("https://" + tfsServer + "/tfs/" + coll.Name.ToString()); TfsTeamProjectCollection c = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(u); ICommonStructureService icss = c.GetService<ICommonStructureService>(); try { Trace.WriteLine("\tChecking Collection Global Lists."); gl.RebuildBuildGlobalLists(c); } catch (Exception ex) { Console.WriteLine("Exception! :" + coll.Name); } } } } GlobalLists.CS using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.TeamFoundation.Client; using Microsoft.TeamFoundation.Framework.Client; using Microsoft.TeamFoundation.Framework.Common; using Microsoft.TeamFoundation.Server; using Microsoft.TeamFoundation.WorkItemTracking.Client; using Microsoft.TeamFoundation.Build.Client; using System.Configuration; using System.Xml; using System.Xml.Linq; using System.Diagnostics; namespace Utilities { public class GlobalLists { string GL_NewList = @"<gl:GLOBALLISTS xmlns:gl=""http://schemas.microsoft.com/VisualStudio/2005/workitemtracking/globallists""> <GLOBALLIST> </GLOBALLIST> </gl:GLOBALLISTS>"; public void RebuildBuildGlobalLists(TfsTeamProjectCollection _tfs) { WorkItemStore wis = new WorkItemStore(_tfs); //export the current globals lists file for the collection to save as a backup XmlDocument globalListsFile = wis.ExportGlobalLists(); globalListsFile.Save(@"c:\temp\" + _tfs.Name.Replace("\\", "_") + "_backupGlobalList.xml"); LogExportCurrentCollectionGlobalListsAsBackup(_tfs); //Build a new global build list from each build definition within each team project IBuildServer buildServer = _tfs.GetService<IBuildServer>(); foreach (Project p in wis.Projects) { XmlDocument newProjectGlobalList = new XmlDocument(); newProjectGlobalList.LoadXml(GL_NewList); LogInstanciateNewProjectBuildGlobalList(_tfs, p); BuildNewProjectBuildGlobalList(_tfs, wis, newProjectGlobalList, buildServer, p); LogEndOfProject(_tfs, p); } } // Private Methods private static void BuildNewProjectBuildGlobalList(TfsTeamProjectCollection _tfs, WorkItemStore wis, XmlDocument newProjectGlobalList, IBuildServer buildServer, Project p) { //locate the template node XmlNamespaceManager 
nsmgr = new XmlNamespaceManager(newProjectGlobalList.NameTable); nsmgr.AddNamespace("gl", "http://schemas.microsoft.com/VisualStudio/2005/workitemtracking/globallists"); XmlNode node = newProjectGlobalList.SelectSingleNode("//gl:GLOBALLISTS/GLOBALLIST", nsmgr); LogLocatedGlobalListNode(_tfs, p); //add the name attribute for the project build global list XmlElement buildListNode = (XmlElement)node; buildListNode.SetAttribute("name", "Builds - " + p.Name); LogAddedBuildNodeName(_tfs, p); //add new builds to the team project build global list bool buildsExist = false; if (AddNewBuilds(_tfs, newProjectGlobalList, buildServer, p, node, buildsExist)) { //import the new build global list for each project that has builds newProjectGlobalList.Save(@"c:\temp\" + _tfs.Name.Replace("\\", "_") + "_" + p.Name + "_" + "newGlobalList.xml"); //write out temp copy of the global list file to be imported LogImportReady(_tfs, p); wis.ImportGlobalLists(newProjectGlobalList.InnerXml); LogImportComplete(_tfs, p); } } private static bool AddNewBuilds(TfsTeamProjectCollection _tfs, XmlDocument newProjectGlobalList, IBuildServer buildServer, Project p, XmlNode node, bool buildsExist) { var buildDefinitions = buildServer.QueryBuildDefinitions(p.Name); foreach (var buildDefinition in buildDefinitions) { var builds = buildDefinition.QueryBuilds(); foreach (var build in builds) { //insert the builds into the current build list node in the correct 2012 format buildsExist = true; XmlElement listItem = newProjectGlobalList.CreateElement("LISTITEM"); listItem.SetAttribute("value", buildDefinition.Name + "/" + build.BuildNumber.ToString().Replace(buildDefinition.Name + "_", "")); node.AppendChild(listItem); } } if (buildsExist) LogBuildListCreated(_tfs, p); else LogNoBuildsInProject(_tfs, p); return buildsExist; } // Logging Methods private static void LogExportCurrentCollectionGlobalListsAsBackup(TfsTeamProjectCollection _tfs) { Trace.WriteLine("\tExported Global List for " + _tfs.Name + " collection."); Console.WriteLine("\tExported Global List for " + _tfs.Name + " collection."); } private void LogInstanciateNewProjectBuildGlobalList(TfsTeamProjectCollection _tfs, Project p) { Trace.WriteLine("\t\tInstanciated the new build global list for project " + p.Name + " in the " + _tfs.Name + " collection."); Console.WriteLine("\t\tInstanciated the new build global list for project \n\t\t\t" + p.Name + " in the \n\t\t\t" + _tfs.Name + " collection."); } private static void LogLocatedGlobalListNode(TfsTeamProjectCollection _tfs, Project p) { Trace.WriteLine("\t\tLocated the build global list node for project " + p.Name + " in the " + _tfs.Name + " collection."); Console.WriteLine("\t\tLocated the build global list node for project \n\t\t\t" + p.Name + " in the \n\t\t\t" + _tfs.Name + " collection."); } private static void LogAddedBuildNodeName(TfsTeamProjectCollection _tfs, Project p) { Trace.WriteLine("\t\tAdded the name attribute to the build global list for project " + p.Name + " in the " + _tfs.Name + " collection."); Console.WriteLine("\t\tAdded the name attribute to the build global list for project \n\t\t\t" + p.Name + " in the \n\t\t\t" + _tfs.Name + " collection."); } private static void LogBuildListCreated(TfsTeamProjectCollection _tfs, Project p) { Trace.WriteLine("\t\tAdded all builds into the " + "Builds - " + p.Name + " list in the " + _tfs.Name + " collection."); Console.WriteLine("\t\tAdded all builds into the " + "Builds - \n\t\t\t" + p.Name + " list in the \n\t\t\t" + _tfs.Name + " collection."); } private static 
void LogNoBuildsInProject(TfsTeamProjectCollection _tfs, Project p) { Trace.WriteLine("\t\tNo builds found for project " + p.Name + " in the " + _tfs.Name + " collection."); Console.WriteLine("\t\tNo builds found for project " + p.Name + " \n\t\t\tin the " + _tfs.Name + " collection."); } private void LogEndOfProject(TfsTeamProjectCollection _tfs, Project p) { Trace.WriteLine("\t\tEND OF PROJECT " + p.Name); Trace.WriteLine(" "); Console.WriteLine("\t\tEND OF PROJECT " + p.Name); Console.WriteLine(); } private static void LogImportReady(TfsTeamProjectCollection _tfs, Project p) { Trace.WriteLine("\t\tReady to import the build global list for project " + p.Name + " to the " + _tfs.Name + " collection."); Console.WriteLine("\t\tReady to import the build global list for project \n\t\t\t" + p.Name + " to the \n\t\t\t" + _tfs.Name + " collection."); } private static void LogImportComplete(TfsTeamProjectCollection _tfs, Project p) { Trace.WriteLine("\t\tImport of the build global list for project " + p.Name + " to the " + _tfs.Name + " collection completed."); Console.WriteLine("\t\tImport of the build global list for project \n\t\t\t" + p.Name + " to the \n\t\t\t" + _tfs.Name + " collection completed."); } } }

    Read the article

  • Will Google treat this JavaScript code as a bad practice?

    - by Mathew Foscarini
    I have a website that provides a custom UX experience implemented via JavaScript. When JavaScript is disabled in the browser, the website falls back to CSS for the layout. To make this possible I've added a noJS class to the <body> and quickly remove it via JavaScript: <body class="noJS layout-wide"> <script type="text/javascript">var b=document.getElementById("body");b.className=b.className.replace("noJS","");</script> This causes a problem when the page loads and JavaScript is enabled: the body immediately has its noJS class removed, and this makes the layout appear messed up until the JavaScript code for layout is executed (at the bottom of the page). To solve this I hide each article via JavaScript by adding a CSS class fix, which is display:none, as each article is loaded. <article id="q-3217">....</article> <script type="text/javascript">var b=document.getElementById("q-3217");b.className=b.className+" fix";</script> After the page is ready I show all the articles in the correct layout. I've read many times in Google's documentation not to hide content, so I'm worried that Google will penalize my website for doing this.

    Read the article

  • JavaScript fixed timestep gameloop with requestAnimationFrame

    - by coffeecup
    Hello, I just started to read through several articles, including http://gafferongames.com/game-physics/fix-your-timestep/ ...://gamedev.stackexchange.com/questions/1589/fixed-time-step-vs-variable-time-step/ ...//dewitters.koonsolo.com/gameloop.html ...://nokarma.org/2011/02/02/javascript-game-development-the-game-loop/index.html My understanding of this is that I need the currentTime and the timeStep size and integrate all states to the next state; the time which is left over is then passed into the render function to do interpolation. I tried to implement Glenn Fiedler's "the final touch"; what's troubling me is that each frameTime is about 15 (ms) and the update loop runs at about 1500 fps, which seems a little bit off. Here's my code: this.t = 0 this.dt = 0.01 this.currTime = new Date().getTime() this.accumulator = 0.0 this.animate() animate: function(){ var newTime = new Date().getTime() , frameTime = newTime - this.currTime , alpha if ( frameTime > 0.25 ) frameTime = 0.25 this.currTime = newTime this.accumulator += frameTime while (this.accumulator >= this.dt ) { this.prev_state = this.curr_state this.update(this.t,this.dt) this.t += this.dt this.accumulator -= this.dt } alpha = this.accumulator / this.dt this.render( this.t, this.dt, alpha) requestAnimationFrame( this.animate ) } Also, I would like to know: are there differences between Glenn Fiedler's implementation and the last solution presented here? gameloop1 gameloop2 [sorry, couldn't post more than 2 links] Edit: I looked into it again and adjusted the values: this.currTime = new Date().getTime() this.accumulator = 0 this.p_t = 0 this.p_step = 1000/100 this.animate() animate: function(){ var newTime = new Date().getTime() , frameTime = newTime - this.currTime , alpha if(frameTime > 25) frameTime = 25 this.currTime = newTime this.accumulator += frameTime while(this.accumulator >= this.p_step){ // prevstate = currState this.update() this.p_t+=this.p_step this.accumulator -= this.p_step } alpha = this.accumulator / this.p_step this.render(alpha) requestAnimationFrame( this.animate ) Now I can set the physics update rate; render runs at 60 fps and physics updates at 100 fps. Maybe someone could confirm this, because it's the first time I'm playing around with game development :-)

    Read the article

  • mod_fcgid process doesn't respawn

    - by aaronsw
    I have a Python script running on my server as a FastCGI using Apache2 and mod_fcgid. I let it spawn up to five processes. But I soon get messages like these in the Apache logs:

        [Wed Sep 02 23:16:34 2009] [warn] (103)Software caused connection abort: mod_fcgid: ap_pass_brigade failed in handle_request function
        [Wed Sep 02 23:16:35 2009] [warn] (103)Software caused connection abort: mod_fcgid: ap_pass_brigade failed in handle_request function

    and then Apache doesn't seem to recognize that all its processes are dead (I have a max of 5 backends) and refuses to spawn new ones:

        [Wed Sep 02 23:26:16 2009] [notice] mod_fcgid: /var/www/hacks.og.theinfo.org/picker.fcgi total process count 5 >= 5, skip the spawn request
        [Wed Sep 02 23:26:17 2009] [notice] mod_fcgid: /var/www/hacks.og.theinfo.org/picker.fcgi total process count 5 >= 5, skip the spawn request

    at which point it refuses to respond to requests from the outside world. This doesn't seem to happen with my other FastCGIs, which all use the same Apache config:

        <IfModule mod_fcgid.c>
            AddHandler fcgid-script .fcgi
            IPCConnectTimeout 20
            MaxProcessCount 5
            DefaultMaxClassProcessCount 2
            DefaultMinClassProcessCount 1
        </IfModule>

    Any idea what causes it?
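
    One hedged avenue: the ap_pass_brigade warnings are typically clients disconnecting mid-response, but the stuck "total process count 5 >= 5" suggests dead backends are never being reaped. The directives below belong to the same old-style naming generation as MaxProcessCount above and make idle or hung processes get killed and respawned; the values are illustrative, not tested:

        <IfModule mod_fcgid.c>
            AddHandler fcgid-script .fcgi
            IPCConnectTimeout 20
            MaxProcessCount 5
            DefaultMaxClassProcessCount 2
            DefaultMinClassProcessCount 1
            IdleTimeout 300        # kill backends idle this long
            ProcessLifeTime 3600   # recycle backends periodically
            BusyTimeout 300        # kill backends stuck on one request
        </IfModule>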

    Read the article

  • Hosting custom domains with IP address flexibility

    - by F21
    I am building a small service where users will be assigned a subdomain such as: myusername.myservice.com anotheruser.myservice.com I know that I can set up a wildcard vhost and, using some configuration regex, serve the files like so: myusername.myservice.com ===> /var/www/myusername anotherusername.myservice.com ===> /var/www/anotherusername The problem is that I would like to allow users to alias their own domain names to their service. I understand that for the webserver, once the user adds the domain via my web interface, I can easily create a vhost for the domain in nginx and then refresh the webserver. The problem is that I would prefer not to let the users add an A record pointing at my webserver's IP address, as I would prefer to keep things flexible (for when we upgrade our infrastructure to something more complex to scale). What is the best way to achieve this?
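
    A common pattern, sketched under the assumption that nginx fronts everything: have customers point a CNAME at a stable hostname you control instead of an A record, and let a catch-all server block route by Host header. The name proxy.myservice.com and the host-to-user mapping below are illustrative:

        # customer's DNS zone, not yours:
        www.customerdomain.com.  CNAME  proxy.myservice.com.

        # nginx catch-all for aliased domains
        server {
            listen 80 default_server;
            server_name _;
            # a real setup would map $host to the owning username,
            # e.g. via a generated map file or an app-level lookup
            root /var/www/$host;
        }

    When the infrastructure later changes, only the A record for proxy.myservice.com moves; the customers' CNAMEs keep working.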

    Read the article

  • Plesk error: pmm-ras error (Error code = -6) during restore; increase the /tmp folder?

    - by eric
    I had to re-install Plesk on a CentOS 6 system after a crash. The full backup file is 11 GB, but at the beginning of the backup restore I get the error: Error: pmm-ras error (Error code = -6). Argh! My disk organization is like this:

        Filesystem             Size  Used Avail Use% Mounted on
        /dev/xvda1             3.7G  801M  2.9G  22% /
        /dev/mapper/vg00-usr    14G  1.5G   12G  12% /usr
        /dev/mapper/vg00-var   155G   14G  134G  10% /var
        /dev/mapper/vg00-home  3.9G  136M  3.6G   4% /home
        none                  1000M  7.5M  993M   1% /tmp

    I suppose I have to increase my /tmp folder to accept the backup size, but I don't know how to. I'm on a 1&1 cloud server. Thanks for your help. You can imagine the urgency of this situation...
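
    Since /tmp here is a 1000M tmpfs (the "none" filesystem), two sketched options: remount it larger for the duration of the restore, or point the restore's temporary files at the roomy /var volume. Sizes are illustrative, and whether Plesk's pmm-ras honors TMPDIR is an assumption worth verifying:

        # enlarge the tmpfs for this session (must fit in RAM+swap)
        mount -o remount,size=16G /tmp

        # or use /var (134G free) as the temp area instead
        mkdir /var/tmp-restore
        TMPDIR=/var/tmp-restore <restore command>   # <restore command> is a placeholder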

    Read the article

  • Innodb : cannot allocate the memory for the buffer pool

    - by mingyeow
    My InnoDB keeps crashing. The error message is below. Does anyone know why this keeps happening?

        InnoDB: by InnoDB 49201616 bytes. Operating system errno: 12
        InnoDB: Check if you should increase the swap file or
        InnoDB: ulimits of your operating system.
        InnoDB: On FreeBSD check you have compiled the OS with
        InnoDB: a big enough maximum process size.
        InnoDB: Note that in most 32-bit computers the process
        InnoDB: memory space is limited to 2 GB or 4 GB.
        InnoDB: We keep retrying the allocation for 60 seconds...
        0 processes alive and '/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf ping' resulted in
        /usr/bin/mysqladmin: connect to server at 'localhost' failed
        error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)'
        Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!
        InnoDB: Fatal error: cannot allocate the memory for the buffer pool
        [ERROR] Default storage engine (InnoDB) is not available
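
    Errno 12 is ENOMEM: the configured buffer pool no longer fits in available memory. A minimal sketch of the usual fix on the Debian/Ubuntu layout the log implies; the value is illustrative and must fit your RAM:

        # /etc/mysql/my.cnf
        [mysqld]
        innodb_buffer_pool_size = 256M

        sudo /etc/init.d/mysql restart
        free -m    # check what else is eating memory, and whether swap exists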

    Read the article

  • BIND - zone not loaded due to errors

    - by Johan Barelds
    After upgrading from Ubuntu 8.04 to 10.04 my DNS isn't working properly anymore. I keep getting this error when I run named-checkzone example.com /var/cache/bind/example.com.zone.db:

        zone example.com/IN: NS 'mx002a.example.com' has no address records (A or AAAA)
        zone example.com/IN: not loaded due to errors.

    In /var/cached/bind/example.com.db:

        $TTL 3D
        @       IN      SOA     mx002a.example.com. chantra.example.com. (
                        200608081       ; serial, todays date + todays serial #
                        8H              ; refresh, seconds
                        2H              ; retry, seconds
                        4W              ; expire, seconds
                        1D )            ; minimum, seconds
        ;
        ;
        mx002a.example.com      IN A    192.168.85.19
        example.com.            IN NS   mx002a.example.com.
        mx001   60              IN A    192.168.85.17
        mx001   60              IN A    192.168.85.18
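
    The likely culprit is a missing trailing dot: inside a zone file, "mx002a.example.com" without a final dot is relative to the origin, so that A record actually defines mx002a.example.com.example.com, leaving the NS target mx002a.example.com with no address. A sketch of the corrected record (either form works):

        mx002a.example.com.     IN A    192.168.85.19   ; absolute name: trailing dot
        ; or, relative to the origin:
        mx002a                  IN A    192.168.85.19

    Re-running named-checkzone should then load the zone cleanly.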

    Read the article

  • Restart mysql keeping the data

    - by sitonico
    I'm quite new to MySQL, so let me know if I'm missing something. I took some holidays, and when I got back to work and tried to log in to phpMyAdmin I got ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2). I never had this problem before, so I browsed around for a solution. I tried some things, and I'm afraid I touched too much. I couldn't solve the problem, and then I realized that I had some updates pending, and I thought they might help with MySQL. Then I also remembered that when I first ran those updates, they stopped because I ran out of disk space, so I restarted them. Then, when the system was configuring mysql, it didn't advance; I waited a long time and then just stopped it and restarted the computer. After that, I tried to uninstall MySQL with sudo apt-get remove mysql-server-5.1 and install it again, but it didn't work. Now I have 2 questions: What do you think is happening? Should I remove MySQL completely? What should I do? I'm afraid of losing my databases; is there any way to recover the data? Thank you very much in advance.

    -----------EDIT------- These are the messages:

        alfonso@alfonso-laptop:/$ tail -F /var/log/syslog | grep
        Feb 15 15:08:01 alfonso-laptop init: mysql post-start process (15192) terminated with status
        Feb 15 15:08:01 alfonso-laptop init: mysql main process (15263) terminated with status
        Feb 15 15:08:01 alfonso-laptop init: mysql main process ended,
        Feb 15 15:08:31 alfonso-laptop init: mysql post-start process (15264) terminated with status
        Feb 15 15:08:31 alfonso-laptop init: mysql main process (15358) terminated with status
        Feb 15 15:08:31 alfonso-laptop init: mysql main process ended,
        Feb 15 15:09:01 alfonso-laptop init: mysql post-start process (15359) terminated with status
        Feb 15 15:09:01 alfonso-laptop init: mysql main process (15447) terminated with status
        Feb 15 15:09:01 alfonso-laptop init: mysql main process ended,
        Feb 15 15:09:32 alfonso-laptop init: mysql post-start process (15448) terminated with status 1

    This is the content of error.log-old:

        110128 13:17:20 [Note] /usr/sbin/mysqld: Normal shutdown
        110128 13:17:20 [Note] Event Scheduler: Purging the queue. 0 events
        110128 13:17:20 InnoDB: Starting shutdown...
        110128 13:17:22 InnoDB: Shutdown completed; log sequence number 0 590872
        110128 13:17:22 [Note] /usr/sbin/mysqld: Shutdown complete
        110214 2:08:18 [Note] Plugin 'FEDERATED' is disabled.
        110214 2:08:19 InnoDB: Started; log sequence number 0 590872
        110214 2:08:19 [Note] Event Scheduler: Loaded 0 events
        110214 2:08:19 [Note] /usr/sbin/mysqld: ready for connections.
        Version: '5.1.41-3ubuntu12.8' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu)

    Some links to similar problems:

        https://bugs.launchpad.net/ubuntu/+source/mysql-dfsg-5.1/+bug/573318
        http://www.linuxquestions.org/questions/linux-newbie-8/lamp-install-on-lucid-mysqld-sock-missing-mysql-terminating-status%3D1-853152/

    It seems it's a permissions problem... But I don't know which permissions I should change... SOLVED -- mysql error 2002 "cannot connect to socket"
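
    Before any further remove/reinstall cycles, it's worth knowing that the databases themselves live under /var/lib/mysql and survive a package reinstall unless you purge. A cautious sketch, assuming the standard Ubuntu layout from the logs:

        # 1. snapshot the data directory first
        sudo cp -a /var/lib/mysql /root/mysql-backup

        # 2. disk space caused the original failure, so check it, then
        #    let dpkg finish the half-configured packages
        df -h
        sudo dpkg --configure -a
        sudo apt-get -f install

        # 3. only if a clean reinstall is really needed
        sudo apt-get purge mysql-server-5.1 && sudo apt-get install mysql-server-5.1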

    Read the article

  • Why does httpd handle requests for wrong hostnames in SSL mode?

    - by Manuel
    I have an SSL-enabled virtual host for my sites at example.com:10443:

        Listen 10443
        <VirtualHost _default_:10443>
            ServerName example.com:10443
            ServerAdmin [email protected]
            ErrorLog "/var/log/httpd/error_log"
            TransferLog "/var/log/httpd/access_log"
            SSLEngine on
            SSLProtocol all -SSLv2
            SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5
            SSLCertificateFile "/etc/ssl/private/example.com.crt"
            SSLCertificateKeyFile "/etc/ssl/private/example.com.key"
            SSLCertificateChainFile "/etc/ssl/private/sub.class1.server.ca.pem"
            SSLCACertificateFile "/etc/ssl/private/StartCom.pem"
        </VirtualHost>

    Browsing to https://example.com:10443/ works as expected. However, also browsing to https://subdomain.example.com:10443/ (with DNS set) shows me the same pages (after SSL certificate warning). I would have expected the directive ServerName example.com:10443 to reject all connection attempts to other server names. How can I tell the virtual host not to serve requests for URLs other than the top-level one?
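
    Apache never rejects a request just because the Host header doesn't match ServerName; the request is served by the best-matching vhost, and _default_ matches everything on the port. A hedged sketch of one way to get rejection behavior, assuming name-based vhosts (and, for TLS, SNI support in Apache 2.2.12+); the first vhost listed becomes the default:

        NameVirtualHost *:10443

        # catch-all: anything that is not example.com lands here and is refused
        <VirtualHost *:10443>
            ServerName catchall.invalid
            SSLEngine on
            # ...same certificate directives as below...
            <Location />
                Order allow,deny
                Deny from all
            </Location>
        </VirtualHost>

        <VirtualHost *:10443>
            ServerName example.com:10443
            # ...existing site configuration...
        </VirtualHost>

    The certificate warning for subdomain.example.com remains unavoidable unless the certificate covers that name.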

    Read the article

  • Unable to Install VirtualBox Due to Missing Kernel Module

    - by SoftTimur
    I am trying to install VirtualBox on my Ubuntu. I first tried sudo apt-get install virtualbox-ose in a terminal, but after the configuration step it fails with an error: No suitable module for running kernel found. When proceeding with starting VirtualBox, I get this error:

        WARNING: The character device /dev/vboxdrv does not exist.
        Please install the virtualbox-ose-dkms package and the appropriate
        headers, most likely linux-headers-generic.
        You will not be able to start VMs until this problem is fixed.

    So I tried the package from http://www.virtualbox.org/, but starting VirtualBox fails with:

        WARNING: The vboxdrv kernel module is not loaded. Either there is no module
        available for the current kernel (2.6.38-8-generic-pae) or it failed to load.
        Please recompile the kernel module and install it by
        sudo /etc/init.d/vboxdrv setup
        You will not be able to start VMs until this problem is fixed.

    So I ran sudo /etc/init.d/vboxdrv setup, but it fails too:

        * Stopping VirtualBox kernel modules [ OK ]
        * Uninstalling old VirtualBox DKMS kernel modules [ OK ]
        * Trying to register the VirtualBox kernel modules using DKMS
        Error! Your kernel headers for kernel 2.6.38-8-generic-pae cannot be found at
        /lib/modules/2.6.38-8-generic-pae/build or /lib/modules/2.6.38-8-generic-pae/source.
        * Failed, trying without DKMS
        * Recompiling VirtualBox kernel modules
        * Look at /var/log/vbox-install.log to find out what went wrong

    The contents of /var/log/vbox-install.log. As I am stuck, I also tried to install kernel-devel with yum, still fruitless:

        root@ubuntu# yum install kernel-devel
        Setting up Install Process
        No package kernel-devel available.
        Nothing to do

    Now I've no idea how to correct this. Any ideas?
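
    yum has no configured repositories on Ubuntu, which is why kernel-devel comes back empty; the missing piece here is the Ubuntu headers package for the running kernel. A sketch of the usual sequence:

        sudo apt-get install dkms linux-headers-$(uname -r)

        # for the package from virtualbox.org:
        sudo /etc/init.d/vboxdrv setup

        # or, for the Ubuntu package route instead:
        sudo apt-get install virtualbox-ose virtualbox-ose-dkms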

    Read the article

  • Multithreading in Windows Phone 7 emulator: A bug

    - by Laurent Bugnion
    Multithreading is supported in Windows Phone 7 Silverlight applications, however the emulator has a bug (which I discovered and was confirmed to me by the dev lead of the emulator team): If you attempt to start a background thread in the MainPage constructor, the thread never starts. The reason is a problem with the emulator UI thread which doesn’t leave any time to the background thread to start. Thankfully there is a workaround (see code below). Also, the bug should be corrected in a future release, so it’s not a big deal, even though it is really confusing when you try to understand why the *%&^$£% thread is not &$%&%$£ starting (that was me in the plane the other day ;) This code does not work: public partial class MainPage : PhoneApplicationPage { public MainPage() { InitializeComponent(); SupportedOrientations = SupportedPageOrientation.Portrait | SupportedPageOrientation.Landscape; var counter = 0; ThreadPool.QueueUserWorkItem(o => { while (true) { Dispatcher.BeginInvoke(() => { textBlockListTitle.Text = (counter++).ToString(); }); } }); } } This code does work: public MainPage() { InitializeComponent(); SupportedOrientations = SupportedPageOrientation.Portrait | SupportedPageOrientation.Landscape; var counter = 0; ThreadPool.QueueUserWorkItem(o => { while (true) { Dispatcher.BeginInvoke(() => { textBlockListTitle.Text = (counter++).ToString(); }); // NOTICE THIS LINE!!! Thread.Sleep(0); } }); } Note that even if the thread is started in a later event (for example Click of a Button), the behavior without the Thread.Sleep(0) is not good in the emulator. As of now, i would recommend always sleeping when starting a new thread. Happy coding: Laurent   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • Plesk file permissions - Apache/PHP conflicting with user accounts.

    - by hfidgen
    Hiya, I'm building a Drupal site which performs various automatic disk operations using the apache user (id=40). The problem is that the site was set up on a subdomain belonging to user ID 10001 (i.e. my main FTP account), so the filesystem belongs to that user ID. So I keep getting errors like this: warning: move_uploaded_file() [function.move-uploaded-file]: SAFE MODE Restriction in effect. The script whose uid is 10001 is not allowed to access /var/www/vhosts/domain.com/httpdocs/sites/default/files/images/user owned by uid 48 in /var/www/vhosts/domain.com/httpdocs/includes/file.inc on line 579. I've tried changing the apache group in httpd.conf to apache:psacln, psacln being the default group for all web users, but that hasn't helped. The situation now is:

        ..../files/images/     = 777 and chown = ftplogin:psacln
        ..../files/images/user = 775 and chown = apache:psacln
        ..../files/tmp         = 777 and chown = ftplogin:psacln

    So apparently uid 40 and 10001 both have permissions to write to any of the 3 directories involved, but still can't. Am I missing something here? Can anyone help? Thanks!
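
    Safe mode compares the uid of the running script (10001) against the uid owning the target directory (apache), so any directory Apache creates becomes off-limits to the site, regardless of mode bits. Two hedged options, depending on how much Plesk lets you override; the vhost.conf path is the usual Plesk layout and worth verifying:

        # option 1: turn safe mode off for this vhost (it is deprecated in PHP anyway)
        # /var/www/vhosts/domain.com/conf/vhost.conf
        <Directory /var/www/vhosts/domain.com/httpdocs>
            php_admin_flag safe_mode off
        </Directory>

        # option 2: give the upload tree back to the FTP user so the uids match
        chown -R ftplogin:psacln /var/www/vhosts/domain.com/httpdocs/sites/default/files

    After editing vhost.conf, Plesk needs its web server configuration rebuilt (e.g. via its websrvmng utility) for the change to take effect.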

    Read the article

  • Fixing Chrome’s AJAX Request Caching Bug

    - by Steve Wilkes
    I recently had to make a set of web pages restore their state when the user arrived on them after clicking the browser’s back button. The pages in question had various content loaded in response to user actions, which meant I had to manually get them back into a valid state after the page loaded. I got hold of the page’s data in a JavaScript ViewModel using a JQuery ajax call, then iterated over the properties, filling in the fields as I went. I built in the ability to describe dependencies between inputs to make sure fields were filled in in the correct order and at the correct time, and that all worked nicely. To make sure the browser didn’t cache the AJAX call results I used the JQuery’s cache: false option, and ASP.NET MVC’s OutputCache attribute for good measure. That all worked perfectly… except in Chrome. Chrome insisted on retrieving the data from its cache. cache: false adds a random query string parameter to make the browser think it’s a unique request – it made no difference. I made the AJAX call a POST – it made no difference. Eventually what I had to do was add a random token to the URL (not the query string) and use MVC routing to deliver the request to the correct action. The project had a single Controller for all AJAX requests, so this route: routes.MapRoute( name: "NonCachedAjaxActions", url: "AjaxCalls/{cacheDisablingToken}/{action}", defaults: new { controller = "AjaxCalls" }, constraints: new { cacheDisablingToken = "[0-9]+" }); …and this amendment to the ajax call: function loadPageData(url) { // Insert a timestamp before the URL's action segment: var indexOfFinalUrlSeparator = url.lastIndexOf("/"); var uniqueUrl = url.substring(0, indexOfFinalUrlSeparator) + new Date().getTime() + "/" + url.substring(indexOfFinalUrlSeparator); // Call the now-unique action URL: $.ajax(uniqueUrl, { cache: false, success: completePageDataLoad }); } …did the trick.

    Read the article

  • RHEL5: Can't create sparse file bigger than 256GB in tmpfs

    - by John Kugelman
    /var/log/lastlog gets written to when you log in. The size of this file is based off of the largest UID in the system. The larger the maximum UID, the larger this file is. Thankfully it's a sparse file so the size on disk is much smaller than the size ls reports (ls -s reports the size on disk). On our system we're authenticating against an Active Directory server, and the UIDs users are assigned end up being really, really large. Like, say, UID 900,000,000 for the first AD user, 900,000,001 for the second, etc. That's strange but should be okay. It results in /var/log/lastlog being huuuuuge, though--once an AD user logs in lastlog shows up as 280GB. Its real size is still small, thankfully. This works fine when /var/log/lastlog is stored on the hard drive on an ext3 filesystem. It breaks, however, if lastlog is stored in a tmpfs filesystem. Then it appears that the max file size for any file on the tmpfs is 256GB, so the sessreg program errors out trying to write to lastlog. Where is this 256GB limit coming from, and how can I increase it? As a simple test for creating large sparse files I've been doing: dd if=/dev/zero of=sparse-file bs=1 count=1 seek=300GB I've tried Googling for "tmpfs max file size", "256GB filesystem limit", "linux max file size", things like that. I haven't been able to find much. The only mention of 256GB I can find is that ext3 filesystems with 2KB blocks are limited to 256GB files. But our hard drives are formatted with 4K blocks so that doesn't seem to be it--not to mention this is happening in a tmpfs mounted ON TOP of the hard drive so the ext3 partition shouldn't be a factor. This is all happening on a 64-bit Red Hat Enterprise Linux 5.4 system. Interestingly, on my personal development machine, which is a 32-bit Fedora Core 6 box, I can create 300GB+ files in tmpfs filesystems no problem. On the RHEL5.4 systems it is no go.
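
    A quick probe with coreutils narrows down where the ceiling sits on any given mount; the files are sparse, so this costs almost nothing (the mount point is a placeholder):

        cd /path/to/tmpfs
        dd if=/dev/zero of=probe bs=1 count=1 seek=255G && echo "255G ok"
        dd if=/dev/zero of=probe bs=1 count=1 seek=257G && echo "257G ok"
        rm -f probe

    If 255G succeeds and 257G fails on the 64-bit box while both succeed on the 32-bit Fedora machine, that isolates the limit to this tmpfs/kernel combination rather than to dd; checking ulimit -f first rules out a per-process file-size limit as well.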

    Read the article

  • Cyrus: How Do I Configure saslauthd For Authentication?

    - by Nick
    I'm trying to get Cyrus IMAP (v 2.2 on Ubuntu 9.04) set up and working, but I'm having a bit of trouble getting the login working correctly. I've created a mailbox for my test user "nrahl": cm user/nrahl and then created a password: $ saslpasswd2 nrahl I'm attempting to connect to the mailbox using Thunderbird. I'm using the machine's LAN IP address as the host, and "nrahl" as the username. It connects to the server and prompts me for the password. When I enter it, I get "Login to server failed." in Thunderbird, and /var/log/mail.log shows:

        Apr 15 19:20:01 IMAP cyrus/imap[1930]: accepted connection
        Apr 15 19:20:09 IMAP cyrus/imap[1930]: badlogin: [192.168.5.21] plaintext nrahl SASL(-13): authentication failure: checkpass failed

    Part of /etc/imapd.conf with comments removed:

        sieveusehomedir: false
        sievedir: /var/spool/sieve
        #mailnotifier: zephyr
        #sievenotifier: zephyr
        #dracinterval: 0
        #drachost: localhost
        hashimapspool: true
        allowplaintext: yes
        sasl_mech_list: PLAIN
        #allowapop: no
        #sasl_maximum_layer: 256
        #loginrealms: example.com
        #virtdomains: userid
        #defaultdomain:
        sasl_pwcheck_method: saslauthd
        #sasl_auxprop_plugin: sasldb
        sasl_auto_transition: no

    UPDATE: When setting sasl_pwcheck_method: alwaystrue in /etc/imapd.conf, login works correctly. So I'm assuming the issue is saslauthd related.
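
    The mismatch the alwaystrue test exposes: saslpasswd2 writes passwords into the sasldb file, but sasl_pwcheck_method: saslauthd hands authentication to the saslauthd daemon, which by default checks PAM/system passwords, where no "nrahl" password exists. A sketch of the auxprop route, reusing the option names already present (commented out) in the file; the sasldb path and ownership are the Debian/Ubuntu defaults and worth verifying:

        # /etc/imapd.conf
        sasl_pwcheck_method: auxprop
        sasl_auxprop_plugin: sasldb

        # make sure the cyrus user can read the database
        chown cyrus:sasl /etc/sasldb2
        chmod 640 /etc/sasldb2

    The alternative is to keep saslauthd but start it with a mechanism that checks sasldb (MECHANISMS="sasldb" in /etc/default/saslauthd).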

    Read the article

  • Apache config file. Redirect permanent gives 403 error

    - by Homunculus Reticulli
    I am changing my domain from foo.com to foobar.org. I used a Redirect permanent in my apache config file and then restarted apache. When I try to access the old domain foo.com, I get a 403 error. This is what my apache config file looks like:

        <VirtualHost *:80>
            ServerName foo.com
            #ServerAlias www.foo.com
            #ServerAdmin [email protected]

            Redirect permanent / http://www.foobar.org/

            DocumentRoot /path/to/project/foo/web
            DirectoryIndex index.php

            # CustomLog with format nickname
            LogFormat "%h %l %u %t \"%r\" %>s %b" common
            CustomLog "|/usr/bin/cronolog /var/log/apache2/%Y%m.foo.access.log" common
            LogLevel notice
            ErrorLog "|/usr/bin/cronolog /var/log/apache2/%Y%m.foo.errors.log"

            <Directory />
                Order Deny,Allow
                Deny from all
            </Directory>

            <Files ~ "^\.ht">
                Order allow,deny
                Deny from all
            </Files>

            <Directory /path/to/project/foo/web>
                Options -Indexes -Includes
                AllowOverride All
                Allow from All
                RewriteEngine On
                # We check if the .html version is here (cacheing)
                RewriteRule ^$ index.html [QSA]
                RewriteRule ^([^.])$ $1.html [QSA]
                RewriteCond %{REQUEST_FILENAME} !-f
                # No, so we redirect to our front end controller
                RewriteRule ^(.*)$ index.php [QSA,L]
            </Directory>

            <Directory /path/to/project/foo/web/uploads>
                Options -ExecCGI -FollowSymLinks -Indexes -Includes
                AllowOverride None
                php_flag engine off
            </Directory>

            Alias /sf /lib/vendor/symfony/symfony-1.3.8/data/web/sf
            <Directory /lib/vendor/symfony/symfony-1.3.8/data/web/sf>
            # Alias /sf /lib/vendor/symfony/symfony-1.4.19/data/web/sf
            # <Directory /lib/vendor/symfony/symfony-1.4.19/data/web/sf>
                Options -Indexes -Includes
                AllowOverride All
                Allow from All
            </Directory>
        </VirtualHost>

    Can anyone spot what I may be doing wrong? The site foobar.org does exist, so I don't know why this error occurs. Help?
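
    One hedged way to rule out the rewrite and Directory machinery interfering with the redirect: serve the old name from a bare vhost whose only job is redirecting, and hang the full application config off the new name (assumes name-based vhosts, i.e. NameVirtualHost *:80 on Apache 2.2):

        <VirtualHost *:80>
            ServerName foo.com
            ServerAlias www.foo.com
            Redirect permanent / http://www.foobar.org/
        </VirtualHost>

        <VirtualHost *:80>
            ServerName www.foobar.org
            DocumentRoot /path/to/project/foo/web
            # ...rest of the existing configuration...
        </VirtualHost>

    With the redirect isolated, a 403 on foo.com would have to come from somewhere other than this file.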

    Read the article

  • Ubuntu User WWW/FTP

    - by SnIpY
    I have a user named 'user' which I use to log in to the FTP of my website. However, this presents me with a problem. If I want to allow my user access to the FTP, I have to type the following: chown -R user:ftpusers /var/www/ By doing this, my website is no longer available when surfing to it. To make it available again, I have to type the following command: chown -R www-data:www-data /var/www/ The user 'user' is in both the ftpusers and www-data groups. How can I fix this so I wouldn't have to keep switching owners? I'm using apache2 and vsftpd on Ubuntu.
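
    Rather than flipping ownership back and forth, the usual pattern is group sharing: files stay owned by the FTP user, the group stays www-data with read access, and setgid keeps new uploads in that group. A sketch (the vsftpd umask note is an assumption about the default config):

        sudo chown -R user:www-data /var/www
        sudo chmod -R g+rX /var/www                        # web server can read everything
        sudo find /var/www -type d -exec chmod g+s {} \;   # new files inherit the group

        # if uploads still come out unreadable, relax vsftpd's umask:
        # local_umask=022 in /etc/vsftpd.conf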

    Read the article

  • Turn off email notification from abrt (Automatic Bug Reporting Tool)

    - by Banjer
    I'm configuring CentOS 6.2 and have seen a few "[abrt] full crash report" emails. I understand that abrt is useful for creating crash dumps and what not, so I don't want to disable the service, I just would like to stop getting the crash report emails. I probably have to add something to the config file in /etc/abrt/abrt.conf. I can't seem to find anything in my searches. Any idea? Thanks. Edit: Here is my abrt.conf, which is rather simple.

        [root@myhost~]# cat /etc/abrt/abrt.conf
        # Enable this if you want abrtd to auto-unpack crashdump tarballs which appear
        # in this directory (for example, uploaded via ftp, scp etc).
        # Note: you must ensure that whatever directory you specify here exists
        # and is writable for abrtd. abrtd will not create it automatically.
        #
        #WatchCrashdumpArchiveDir = /var/spool/abrt-upload

        # Max size for crash storage [MiB] or 0 for unlimited
        #
        MaxCrashReportsSize = 1000

        # Specify where you want to store coredumps and all files which are needed for
        # reporting. (default:/var/spool/abrt)
        #
        #DumpLocation = /var/spool/abrt

    And a listing of /etc/abrt:

        [root@myhost~]# ls -la /etc/abrt
        total 32
        drwxr-xr-x.  3 root root  4096 Apr 13 06:14 .
        drwxr-xr-x. 97 root root 12288 Apr 13 03:50 ..
        -rw-r--r--.  1 root root   527 Dec 13 22:50 abrt-action-save-package-data.conf
        -rw-r--r--.  1 root root   572 Dec 13 22:50 abrt.conf
        -rw-r--r--.  1 root root   175 Dec 13 22:50 gpg_keys
        drwxr-xr-x.  2 root root  4096 Apr 13 06:13 plugins

        [root@myhost~]# ls -la /etc/abrt/plugins/
        total 12
        drwxr-xr-x. 2 root root 4096 Apr 13 06:13 .
        drwxr-xr-x. 3 root root 4096 Apr 13 06:14 ..
        -rw-r--r--. 1 root root  278 Dec 13 22:50 CCpp.conf

    Actually all of those conf files above are only a few lines and do not mention anything about mail, email, or notifications.
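
    The mail step isn't configured in /etc/abrt/abrt.conf at all; on the abrt 2.x shipped with EL 6.2 it is an event handler from the mailx reporter plugin. A hedged sketch; verify the exact path and package name on your box:

        # disable just the notification: comment out the EVENT block in
        # /etc/libreport/events.d/mailx_event.conf
        #EVENT=report_mailx analyzer=CCpp
        #        reporter-mailx

        # or remove the reporter entirely
        yum remove libreport-plugin-mailx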

    Read the article
