Search Results

Search found 24094 results on 964 pages for 'console log'.

Page 103/964 | < Previous Page | 99 100 101 102 103 104 105 106 107 108 109 110  | Next Page >

  • Why can't I log into Lubuntu after the 13.04 update, even if I know the password?

    - by gudrun
    After the 13.04 update the login screen started to appear, even though I chose automatic login in the Lubuntu install options a week ago. Even though my username and password are correct it won't log in; it just comes back to the login screen. When I press CTRL-ALT-F1 at the login screen I am able to log in perfectly, but I have no GUI and I'm rather lost there. What is going on? Is it just a bug? Can I downgrade? I tried several solutions from different forums but none of them worked.

    Read the article

  • Cannot see user desktop (Applications, Places, System...) when I log in

    - by Jesi
    I am very new to Ubuntu. I recently got a new laptop running Windows 7. I am using Virtual Box and just installed the Ubuntu 12.10 ISO as a new Virtual Machine within Virtual Box. Everything seemed to install just fine and I even added the Guest Additions under Devices. The problem is that I cannot see the menus and my login information. The virtual machine says it is running; however, I do not have the Applications, Places, System, etc. tray to select from. Is there something I am supposed to do after logging in to get this? I entered my password and everything seemed fine, I just don't have those drop-down menus available...

    Read the article

  • Why doesn't the monitor output anything in Linux console mode?

    - by flypen
    I installed Linux without graphics support. Previously I used a monitor with 720p support, and it displayed normally. Now I have changed to a monitor with 1080p support. I can see the BIOS and GRUB info on the monitor, and kernel messages in the early stages. However, the monitor then immediately says that there is no input, and I can't see anything after that. It seems to happen after something initializes. Is it related to vesafb?

        vesafb: mode is 1280x1024x32, linelength=5120, pages=0
        vesafb: scrolling: redraw
        vesafb: Truecolor: size=8:8:8:8, shift=24:16:8:0
        mtrr: type mismatch for 7f800000,800000 old: write-back new: write-combining
        mtrr: type mismatch for 7f800000,400000 old: write-back new: write-combining
        mtrr: type mismatch for 7f800000,200000 old: write-back new: write-combining
        mtrr: type mismatch for 7f800000,100000 old: write-back new: write-combining
        mtrr: type mismatch for 7f800000,80000 old: write-back new: write-combining
        mtrr: type mismatch for 7f800000,40000 old: write-back new: write-combining
        mtrr: type mismatch for 7f800000,20000 old: write-back new: write-combining
        mtrr: type mismatch for 7f800000,10000 old: write-back new: write-combining
        mtrr: type mismatch for 7f800000,8000 old: write-back new: write-combining
        mtrr: type mismatch for 7f800000,4000 old: write-back new: write-combining
        mtrr: type mismatch for 7f800000,2000 old: write-back new: write-combining
        mtrr: type mismatch for 7f800000,1000 old: write-back new: write-combining
        vesafb: framebuffer at 0x7f800000, mapped to 0xffffc90011380000, using 5120k, total 5120k
        Console: switching to colour frame buffer device 160x64
        fb0: VESA VGA frame buffer device

    Read the article

  • Centos 5.xx Nagios sSMTP mail cannot be sent from nagios server, but works great from console

    - by adam
    I spent the last 3 hours researching how to get Nagios to work with email notifications. I need to send emails from work, where the only accessible SMTP server is the company's one. I managed to get it working from the console using mail [email protected], which works perfectly for this purpose. I set up ssmtp.conf as:

        [email protected]
        mailhub=smtp.company.com:587
        [email protected]
        AuthPass=mypassword
        FromLineOverride=YES
        useSTARTTLS=YES
        rewriteDomain=company.pl
        hostname=nagios
        UseTLS=YES

    I also edited the file /etc/ssmtp/revaliases as:

        root:[email protected]:smtp.company.com:587
        nagios:[email protected]:smtp.company.com:587
        nagiosadmin:[email protected]:smtp.company.com:587

    I also edited the file permissions for /etc/ssmtp/* as:

        -rwxrwxrwx 1 root nagios  371 lis 22 15:27 /etc/ssmtp/revaliases
        -rwxrwxrwx 1 root nagios 1569 lis 22 17:36 /etc/ssmtp/ssmtp.conf

    and I believe I assigned the proper groups:

        cat /etc/group | grep nagios
        mail:x:12:mail,postfix,nagios
        mailnull:x:47:nagios
        nagios:x:2106:nagios
        nagcmd:x:2107:nagios

    When I send mail manually, I receive it on my private box, but when I send mail from Nagios the mail log says:

        Nov 22 17:47:03 certa-vm2 sSMTP[9099]: MAIL FROM:<[email protected]>
        Nov 22 17:47:03 certa-vm2 sSMTP[9099]: 550 You are not allowed to send mail from this address

    It says [email protected], and I'm not allowed to send mail claiming to be [email protected]; it's supposed to be [email protected]. What am I doing wrong? I've run out of tricks... kind regards, Adam xxxx

    Read the article

  • Datastage 8.7 installs fine on Windows 7 without any errors, but it cannot launch the localhost:9080 web console

    - by user265273
    When I launch the web console, I get a "page cannot be displayed" error. What I have tried so far: I have re-installed DataStage about 7 times, and each time I get the same errors. I added entries to the etc/hosts file for localhost and my host name. I have turned off the firewall. My hardware/software setup: my host system is Windows 8.1; my VMware Workstation is version 7; the guest OS is Windows 7 Enterprise x64. I have installed 10g and given the dba role to public. I have installed VS 5 and MS Visual C++ 2010 Express. I have installed MSXML. The IE version is 10. The firewall is off and my internet works fine. It passed all the DataStage requirement tests and the install completed successfully. When I launch my VMware guest instance, I do get an SQL5000c error upon boot, which I have tried to ignore in some installs; in others I used the db2systray -clean command to get rid of it. But that has not helped solve the web console connection failure on my host. I have spent over 2 weeks exclusively on this issue and badly need some help.

    Read the article

  • How can I tell System Restore in the Windows 7 recovery console to use my recovered backup drive's restore point data?

    - by Rich Shealer
    My Windows 7 desktop PC failed to boot. It would get to a grayish screen with a mouse pointer and would only respond to the power button. After much examination I found that the problem was not a failed drive, as running CHKDSK from the Recovery Console on my main drives passed without any errors. I had been installing various Java versions in the days before the failure, so I decided to use a restore point to roll backwards. I have an external SATA drive controller with two 2 TB drives mirrored using the Windows mirroring function, and my system has been backing up to this drive regularly. The problem is I accidentally broke the mirror when testing to see if this drive system might have been causing my boot issue. Connecting it to another machine showed two dynamic drives that were invalid. In the end I reformatted one as an NTFS basic disc and used recovery software on the other to copy all of the files to the reformatted drive. I had to copy the restore points into the new drive's System Volume Information folder by granting rights to that user. I moved the drive back to the original machine and rebooted. I can see my new drive, and it even uses the same drive letter as it did in normal mode. Running System Restore, it lists a new automatic restore point created while sitting at the Recovery Console along with all of my backups. Selecting the backup I want (or any other), I get a dialog: "The backup drive could not be found. System Restore is looking for restore points on your backup. Make sure the backup drive is on and connected to this computer and then click OK." What do I need to do to allow System Restore to see the restore points?

    Read the article

  • How do you read a segfault kernel log message?

    - by Sullenx
    This may be a very simple question. I am attempting to debug an application which generates the following segfault error in kern.log:

        /var/log/kern.log.0:Jan 8 13:25:56 myhost kernel: myapp[15514]: segfault at 794ef0 ip 080513b sp 794ef0 error 6 in myapp[8048000+24000]

    Here are my questions:
    1) Is there any documentation on what the different error numbers on a segfault mean? In this instance it is error 6, but I've seen error 4 and 5.
    2) What is the meaning of the information in "at bf794ef0 ip 0805130b sp bf794ef0" and "myapp[8048000+24000]"? So far I was able to compile with symbols, and when I do an "x 0x8048000+24000" it returns a symbol; is that the correct way of doing it? My assumptions thus far are the following:
        sp = stack pointer?
        ip = instruction pointer
        at = ????
        myapp[8048000+24000] = address of symbol?

    Read the article

  • XmlSerializer throws exception when serializing dynamically loaded type

    - by Dr. Sbaitso
    Hi, I'm trying to use the System.Xml.Serialization.XmlSerializer to serialize a dynamically loaded (and compiled) class. If I build the class in question into the main assembly, everything works as expected. But if I compile and load the class from a dynamically loaded assembly, the XmlSerializer throws an exception. What am I doing wrong? I've created the following .NET 3.5 C# application to reproduce the issue:

        using System;
        using System.Collections.Generic;
        using System.Xml.Serialization;
        using System.Text;
        using System.Reflection;
        using System.CodeDom.Compiler;
        using Microsoft.CSharp;

        public class StaticallyBuiltClass
        {
            public class Item
            {
                public string Name { get; set; }
                public int Value { get; set; }
            }

            private List<Item> values = new List<Item>();
            public List<Item> Values { get { return values; } set { values = value; } }
        }

        static class Program
        {
            static void Main()
            {
                RunStaticTest();
                RunDynamicTest();
            }

            static void RunStaticTest()
            {
                Console.WriteLine("-------------------------------------");
                Console.WriteLine(" Serializing StaticallyBuiltClass...");
                Console.WriteLine("-------------------------------------");
                var stat = new StaticallyBuiltClass();
                Serialize(stat.GetType(), stat);
                Console.WriteLine();
            }

            static void RunDynamicTest()
            {
                Console.WriteLine("-------------------------------------");
                Console.WriteLine(" Serializing DynamicallyBuiltClass...");
                Console.WriteLine("-------------------------------------");
                CSharpCodeProvider csProvider = new CSharpCodeProvider(new Dictionary<string, string> { { "CompilerVersion", "v3.5" } });
                CompilerParameters csParams = new System.CodeDom.Compiler.CompilerParameters();
                csParams.GenerateInMemory = true;
                csParams.GenerateExecutable = false;
                csParams.ReferencedAssemblies.Add("System.dll");
                csParams.CompilerOptions = "/target:library";

                StringBuilder classDef = new StringBuilder();
                classDef.AppendLine("using System;");
                classDef.AppendLine("using System.Collections.Generic;");
                classDef.AppendLine("");
                classDef.AppendLine("public class DynamicallyBuiltClass");
                classDef.AppendLine("{");
                classDef.AppendLine(" public class Item");
                classDef.AppendLine(" {");
                classDef.AppendLine(" public string Name { get; set; }");
                classDef.AppendLine(" public int Value { get; set; }");
                classDef.AppendLine(" }");
                classDef.AppendLine(" private List<Item> values = new List<Item>();");
                classDef.AppendLine(" public List<Item> Values { get { return values; } set { values = value; } }");
                classDef.AppendLine("}");

                CompilerResults res = csProvider.CompileAssemblyFromSource(csParams, new string[] { classDef.ToString() });
                foreach (var line in res.Output)
                {
                    Console.WriteLine(line);
                }

                Assembly asm = res.CompiledAssembly;
                if (asm != null)
                {
                    Type t = asm.GetType("DynamicallyBuiltClass");
                    object o = t.InvokeMember("", BindingFlags.CreateInstance, null, null, null);
                    Serialize(t, o);
                }
                Console.WriteLine();
            }

            static void Serialize(Type type, object o)
            {
                var serializer = new XmlSerializer(type);
                try
                {
                    serializer.Serialize(Console.Out, o);
                }
                catch (Exception ex)
                {
                    Console.WriteLine("Exception caught while serializing " + type.ToString());
                    Exception e = ex;
                    while (e != null)
                    {
                        Console.WriteLine(e.Message);
                        e = e.InnerException;
                        Console.Write("Inner: ");
                    }
                    Console.WriteLine("null");
                    Console.WriteLine();
                    Console.WriteLine("Stack trace:");
                    Console.WriteLine(ex.StackTrace);
                }
            }
        }

    which generates the following output:

        -------------------------------------
         Serializing StaticallyBuiltClass...
        -------------------------------------
        <?xml version="1.0" encoding="IBM437"?>
        <StaticallyBuiltClass xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <Values />
        </StaticallyBuiltClass>
        -------------------------------------
         Serializing DynamicallyBuiltClass...
        -------------------------------------
        Exception caught while serializing DynamicallyBuiltClass
        There was an error generating the XML document.
        Inner: The type initializer for 'Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationWriterDynamicallyBuiltClass' threw an exception.
        Inner: Object reference not set to an instance of an object.
        Inner: null

        Stack trace:
        at System.Xml.Serialization.XmlSerializer.Serialize(XmlWriter xmlWriter, Object o, XmlSerializerNamespaces namespaces, String encodingStyle, String id)
        at System.Xml.Serialization.XmlSerializer.Serialize(TextWriter textWriter, Object o, XmlSerializerNamespaces namespaces)
        at System.Xml.Serialization.XmlSerializer.Serialize(TextWriter textWriter, Object o)
        at Program.Serialize(Type type, Object o) in c:\dev\SerTest\SerTest\Program.cs:line 100

    Edit: Removed some extraneous referenced assemblies.
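
    A possible workaround (untested here): compile the dynamic assembly to a file on disk instead of purely in memory, so that the temporary serialization assembly XmlSerializer generates has a real assembly file to reference. The output path below is made up, and the change would need a using System.IO; directive for Path:

        // Untested sketch: in RunDynamicTest(), write the compiled assembly to a temp file
        // instead of keeping it in memory only.
        csParams.GenerateInMemory = false;   // was true
        csParams.OutputAssembly = Path.Combine(Path.GetTempPath(), "DynamicallyBuiltClass.dll");
        // The rest of RunDynamicTest() stays the same; res.CompiledAssembly is then loaded
        // from that file, so it has a non-empty Location for the serializer to reference.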

    Read the article

  • How do I configure the binary log file for auditing in MySQL?

    - by Parth
    How do I use the binary log file for auditing in MySQL? I want to track changes in a DB using the binary log so that I can replicate those changes to another DB. Please do not just give me hyperlinks to the MySQL website; please point me towards a solution. EDIT: I have looked at auditing options and created a script using triggers for that, but due to the Joomla DB structure it didn't work for me, so I have to move on to the binary log approach. Now I am stuck getting started, since I don't understand how to make the server a master/slave. Can anybody guide me on how to actually initiate it via PHP?

    Read the article

  • How do I show the SVN revision number in git log?

    - by Zain
    I'm customizing my git log to be all in 1 line. Specifically, I added the following alias:

        lg = log --graph --pretty=format:'%Cred%h%Creset - %C(yellow)%an%Creset - %s %Cgreen(%cr)%Creset' --abbrev-commit --date=relative

    So, when I run git lg, I see the following:

        * 41a49ad - zain - commit 1 message here (3 hours ago)
        * 6087812 - zain - commit 2 message here (5 hours ago)
        * 74842dd - zain - commit 3 message here (6 hours ago)

    However, I want to add the SVN revision number in there too, so it looks something like:

        * 41a49ad - r1593 - zain - commit 1 message here (3 hours ago)

    The normal git log shows you the SVN revision number, so I'm sure this must be possible. How do I do this?

    Read the article

  • Is there a sample set of web log data available for testing analysis against?

    - by Peter
    Sorry if this isn't strictly speaking a programming question, but I figure my best chance of success would be to ask here. I'm developing some web log file analysis algorithms, but to date I only have access to a fairly small amount of web log data to process. One algorithm I want to use makes some assumptions about 'the shape' of typical web log data, and so I'd like to test it against a larger 'exemplar' - perhaps the logs of a busy site with a good distribution of traffic from different sources etc. Is there a set of such data available somewhere? Thanks for any help.

    Read the article

  • How do I manually log a user in with a MembershipProvider?

    - by Allen
    I'm experimenting with writing my own custom MembershipProvider in ASP.NET and I want to roll my own login page. We do some fairly special stuff at login time, so we can't use the default login control, and I need a way to manually log a user in. So far I haven't found anything on how to write your own login control, so I'm here wondering how I can manually log a user in via a MembershipProvider. I've tried Membership.ValidateUser("user", "pass"); and while that does call ValidateUser() on my custom MembershipProvider, and it does return true, it doesn't actually log me in. By the way, I'm fairly new to the whole MembershipProvider stuff, so if I'm not even on the right wavelength, feel free to let me know.
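
    A minimal sketch of the usual approach, assuming the site uses Forms Authentication (the question doesn't say) and with made-up control and page names: the provider only validates the credentials, and issuing the auth cookie is what actually logs the user in.

        using System;
        using System.Web.Security;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        // Hypothetical hand-rolled login page; in a real project the controls would be
        // declared in the .aspx markup rather than here.
        public partial class Login : Page
        {
            protected TextBox UserNameTextBox;
            protected TextBox PasswordTextBox;

            protected void LoginButton_Click(object sender, EventArgs e)
            {
                string user = UserNameTextBox.Text;
                string pass = PasswordTextBox.Text;

                // ValidateUser only checks credentials against the MembershipProvider...
                if (Membership.ValidateUser(user, pass))
                {
                    // ...the forms-authentication ticket is what "logs the user in".
                    FormsAuthentication.SetAuthCookie(user, false);
                    Response.Redirect(FormsAuthentication.GetRedirectUrl(user, false));
                }
            }
        }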

    Read the article

  • Is it possible to programmatically switch error log providers with ELMAH?

    - by Ralph Lavelle
    Is it possible to switch from using the XML provider to SQL Server using ELMAH? I need to investigate this option because of the fallibility of our SQL Server where our ELMAH errors are stored. I want to be able to fail gracefully and continue logging to XML if the server fails. I can see that programmatic connection string switching is possible, and I see that ELMAH Issue 149 announces the programmatic configuration of default error log, but I can't actually see any code examples anywhere, so I'm not too sure if this is possible. I'm guessing it is, in which case does anyone know for sure? My question is similar to this one, except that I want to try and log errors to SQL Server, and if that fails switch to XML, not log to all stores all the time.
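
    One direction to explore, as a rough sketch only: rather than switching the configured provider, drive two ErrorLog instances by hand and fall back to the XML store when SQL Server throws. The connection string and folder below are placeholders, and the SqlErrorLog/XmlFileErrorLog constructor signatures are assumptions to check against your ELMAH version; this also bypasses ELMAH's HTTP-module pipeline.

        using System;
        using Elmah;

        // Sketch: manual failover between two ELMAH error logs.
        public static class FailoverErrorLogger
        {
            // Assumed constructors: SqlErrorLog(connectionString), XmlFileErrorLog(logPath).
            private static readonly ErrorLog SqlLog =
                new SqlErrorLog("Data Source=DBSERVER;Initial Catalog=Elmah;Integrated Security=SSPI");
            private static readonly ErrorLog XmlLog =
                new XmlFileErrorLog(@"C:\Logs\Elmah");

            public static void Log(Exception ex)
            {
                var error = new Error(ex);
                try
                {
                    SqlLog.Log(error);   // preferred store
                }
                catch (Exception)
                {
                    XmlLog.Log(error);   // SQL Server unavailable: degrade to the XML store
                }
            }
        }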

    Read the article

  • Where are the best locations to write an error log in Windows?

    - by Keith Sirmons
    Where would you write an error log file, say ErrorLog.txt, in Windows? Keep in mind the path would need to be open to basic users for file write permissions. I know the event log is a possible location for writing errors, but does it work for "user"-level permissions? EDIT: I am targeting Windows 2003, but I was posing the question in such a way as to get a general guideline for where to write error logs. As for the event log, I have had issues before in an ASP.NET application where I wanted to log to the Windows event log, but security issues caused me heartache. (I do not recall the exact issues, but I remember having them.)
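
    One conventional choice is a per-application folder under the all-users application data directory (ProgramData on Vista and later, "All Users\Application Data" on Windows 2003), which can be resolved without hard-coding a path. A minimal sketch; the company and application names are made up, and you may still need to grant write access to the subfolder at install time:

        using System;
        using System.IO;

        class ErrorLogPathDemo
        {
            static void Main()
            {
                // Resolves to e.g. C:\ProgramData, or
                // C:\Documents and Settings\All Users\Application Data on Windows 2003.
                string baseDir = Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData);
                string logDir = Path.Combine(Path.Combine(baseDir, "MyCompany"), "MyApp");
                Directory.CreateDirectory(logDir);

                string logFile = Path.Combine(logDir, "ErrorLog.txt");
                File.AppendAllText(logFile, DateTime.Now + "  sample error" + Environment.NewLine);
                Console.WriteLine("Wrote to " + logFile);
            }
        }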

    Read the article

  • How do you log php errors with CakePHP when debug is 0?

    - by Justin
    I would like to log PHP errors on a CakePHP site that has debug = 0. However, even if I turn on the error log, like this:

        error_reporting = E_ALL & ~E_NOTICE & ~E_DEPRECATED
        log_errors = On

    it doesn't log errors. The problem is that even for a parse error that should cause the CakePHP environment to not load completely (I think), it still blocks the error from being logged. If I set debug to 3, it logs to the file without issue. I am using CakePHP 1.2. I know this is apparently made easier in 1.3, but I'm not ready to upgrade.

    Read the article

  • What is the best way to log errors in Zend Framework in my project at this stage?

    - by Pasta
    We built an app in Zend Framework and have not done much to set up error reporting and logging. Is there any way we could get some level of error reporting without too much change to the code? Is there an ErrorHandler plugin available? The basic requirement is to log errors that happen within the controllers, missing controllers, malformed URLs, etc. I also want to be able to log errors within my controllers. Will using the error controller here help me identify and log errors within my controllers? How best to do this with minimal changes?

    Read the article

  • SQL SERVER – How to Recover SQL Database Data Deleted by Accident

    - by Pinal Dave
    In Repair a SQL Server database using a transaction log explorer, I showed how to use ApexSQL Log, a SQL Server transaction log viewer, to recover a SQL Server database after a disaster. In this blog, I'll show you how to use another SQL Server disaster recovery tool from ApexSQL in a situation when data is accidentally deleted. You can download ApexSQL Recover here, install, and play along.

    With a good SQL Server disaster recovery strategy, data recovery is not a problem. You have a reliable full database backup with valid data, a full database backup and subsequent differential database backups, or a full database backup and a chain of transaction log backups. But not all situations are ideal. Here we'll address some sub-optimal scenarios where you can still successfully recover data.

    If you have only a full database backup: This is the least optimal SQL Server disaster recovery strategy, as it doesn't ensure minimal data loss. For example, data was deleted on Wednesday. Your last full database backup was created on Sunday, three days before the records were deleted. By using the full database backup created on Sunday, you will be able to recover SQL database records that existed in the table on Sunday. If there were any records inserted into the table on Monday or Tuesday, they will be lost forever. The same goes for records modified in this period. This method will not bring back modified records, only the old records that existed on Sunday. If you restore this full database backup, all your changes (intentional and accidental) will be lost and the database will be reverted to the state it had on Sunday. What you have to do is compare the records that were in the table on Sunday to the records on Wednesday, create a synchronization script, and execute it against the Wednesday database.

    If you have a full database backup followed by differential database backups: Let's say the situation is the same as in the example above, only you create a differential database backup every night. Use the full database backup created on Sunday, and the last differential database backup (created on Tuesday). In this scenario, you will lose only the data inserted and updated after the differential backup created on Tuesday.

    If you have a full database backup and a chain of transaction log backups: This is the SQL Server disaster recovery strategy that provides minimal data loss. With a full chain of transaction logs, you can recover the SQL database to an exact point in time. To provide optimal results, you have to know exactly when the records were deleted, because restoring to a later point will not bring back the records. This method requires restoring the full database backup first. If you have any differential database backup created after the last full database backup, restore the most recent one. Then, restore transaction log backups, one by one, in the order they were created, starting with the first created after the restored differential database backup. Now, the table will be in the state before the records were deleted. You have to identify the deleted records, script them and run the script against the original database. Although this method is reliable, it is time-consuming and requires a lot of space on disk.

    How to easily recover deleted records? The following solution enables you to recover SQL database records even if you have no full or differential database backups and no transaction log backups.

    To understand how ApexSQL Recover works, I'll explain what happens when table data is deleted. Table data is stored in data pages. When you delete table records, they are not immediately deleted from the data pages, but marked to be overwritten by new records. Such records are not shown as existing anymore, but ApexSQL Recover can read them and create an undo script for them. How long will deleted records stay in the MDF file? It depends on many factors; as time passes it becomes more likely that the records will be overwritten. The more transactions occur after the deletion, the more chances the records will be overwritten and permanently lost. Therefore, it's recommended to create a copy of the database MDF and LDF files immediately (if you cannot take your database offline until the issue is solved) and run ApexSQL Recover on them. Note that a full database backup will not help here, as the records marked for overwriting are not included in the backup.

    First, I'll delete some records from the Person.EmailAddress table in the AdventureWorks database. I can delete these records in SQL Server Management Studio, or execute a script such as:

        DELETE FROM Person.EmailAddress
        WHERE BusinessEntityID BETWEEN 70 AND 80

    Then, I'll start ApexSQL Recover and select From DELETE operation in the Recovery tab.

    In the Select the database to recover step, first select the SQL Server instance. If it's not shown in the drop-down list, click the Server icon to the right of the Server drop-down list and browse for the SQL Server instance, or type the instance name manually. Specify the authentication type and select the database in the Database drop-down list.

    In the next step, you're prompted to add additional data sources. As this can be a tricky step, especially for new users, ApexSQL Recover offers help via the Help me decide option.

    The Help me decide option guides you through a series of questions about the database transaction log and advises what files to add. If you know that you have no transaction log backups or detached transaction logs, or the online transaction log file has been truncated after the data was deleted, select No additional transaction logs are available. If you know that you have transaction log backups that contain the delete transactions you want to recover, click Add transaction logs. The online transaction log is listed and selected automatically.

    Click Add to add transaction log backups. It would be best if you have a full transaction log chain, as explained above. The next step for this option is to specify the time range.

    Selecting a small time range around the time of deletion will create the recovery script just for the accidentally deleted records. A wide time range might script the records deleted on purpose, and you don't want that. If needed, you can check the generated script and manually remove such records.

    After that, for all data source options, the next step is to select the tables. Be careful here: if you deleted some data from other tables on purpose, and don't want to recover it, don't select all tables, as ApexSQL Recover will create the INSERT script for them too.

    The next step offers two options: to create a recovery script that will insert the deleted records back into the Person.EmailAddress table, or to create a new database, create the Person.EmailAddress table in it, and insert the deleted records. I'll select the first one.

    The recovery process is completed and 11 records are found and scripted, as expected.

    To see the script, click View script. ApexSQL Recover has its own script editor, where you can review, modify, and execute the recovery script. The INSERT INTO statements look like:

        INSERT INTO Person.EmailAddress(
            BusinessEntityID, EmailAddressID, EmailAddress, rowguid, ModifiedDate)
        VALUES(
            70, 70,
            N'[email protected]' COLLATE SQL_Latin1_General_CP1_CI_AS,
            'd62c5b4e-c91f-403f-b630-7b7e0fda70ce',
            '20030109 00:00:00.000' );

    To execute the script, click Execute in the menu.

    If you want to check whether the records are really back, execute:

        SELECT * FROM Person.EmailAddress
        WHERE BusinessEntityID BETWEEN 70 AND 80

    As shown, ApexSQL Recover recovers SQL database data after accidental deletes even without the database backup that contains the deleted data and the relevant transaction log backups. ApexSQL Recover reads the deleted data from the database data file, so this method can be used even for databases in the Simple recovery model. Besides recovering SQL database records from a DELETE statement, ApexSQL Recover can help when the records are lost due to a DROP TABLE or TRUNCATE statement, as well as repair a corrupted MDF file that cannot be attached to a SQL Server instance. You can find more information about how to recover SQL database lost data and repair a SQL Server database on the ApexSQL Solution center. There are solutions for various situations when data needs to be recovered.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Trying to back up system state on Server 2003 SP2, getting "Faulting application vssvc.exe - system state backup failed" in application log

    - by IT_Fixr
    Trying to back up the system state on Windows Server 2003 (SP2), and getting "Faulting application vssvc.exe - system state backup failed" in the application log. Volume shadow copy creation: Attempt 1.

        "MSDEWriter" has reported an error 0x800423f2. This is part of System State. The volume shadow copy operation can be retried.
        "Event Log Writer" has reported an error 0x800423f2. This is part of System State. The volume shadow copy operation can be retried.
        "Registry Writer" has reported an error 0x800423f2. This is part of System State. The volume shadow copy operation can be retried.
        "COM+ REGDB Writer" has reported an error 0x800423f2. This is part of System State. The volume shadow copy operation can be retried.
        "Removable Storage Manager" has reported an error 0x0. This is part of System State. The backup cannot continue.

        Error returned while creating the volume shadow copy: 800423f2
        Aborting Backup.

    Read the article

  • High-load MySQL on Debian server stops every day. Why?

    - by Oleg Abrazhaev
    I have a Debian server with 32 GB of memory, and apache2, memcached and nginx run on this server. Memory load is always at maximum; only 500 MB are free. Most of the memory is used by MySQL. Apache is configured for only 70 clients; the other services have small memory usage. When MySQL uses all the memory it stops, and nothing works until MySQL is restarted. MySQL is configured to use a maximum of 24 GB of memory. I have heavyweight InnoDB databases (400000 rows, 30 GB), and a multithreaded daemon on the server makes many inserts into these tables; that's why InnoDB. Here is my MySQL config:

        [mysqld]
        #
        # * Basic Settings
        #
        default-time-zone = "+04:00"
        user = mysql
        pid-file = /var/run/mysqld/mysqld.pid
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        language = /usr/share/mysql/english
        skip-external-locking
        default-time-zone='Europe/Moscow'
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        #
        # * Fine Tuning
        #
        #low_priority_updates = 1
        concurrent_insert = ALWAYS
        wait_timeout = 600
        interactive_timeout = 600
        #normal
        key_buffer_size = 2024M
        #key_buffer_size = 1512M
        #70% hot cache
        key_cache_division_limit= 70
        #16-32
        max_allowed_packet = 32M
        #1-16M
        thread_stack = 8M
        #40-50
        thread_cache_size = 50
        #orderby groupby sort
        sort_buffer_size = 64M
        #same
        myisam_sort_buffer_size = 400M
        #temp table creates when group_by
        tmp_table_size = 3000M
        #tables in memory
        max_heap_table_size = 3000M
        #on disk
        open_files_limit = 10000
        table_cache = 10000
        join_buffer_size = 5M
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        #myisam_use_mmap = 1
        max_connections = 200
        thread_concurrency = 8
        #
        # * Query Cache Configuration
        #
        #more ignored
        query_cache_limit = 50M
        query_cache_size = 210M
        #on query cache
        query_cache_type = 1
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        #log = /var/log/mysql/mysql.log
        #
        # Error logging goes to syslog. This is a Debian improvement :)
        #
        # Here you can see queries with especially long duration
        log_slow_queries = /var/log/mysql/mysql-slow.log
        long_query_time = 1
        log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        # other settings you may need to change.
        #server-id = 1
        #log_bin = /var/log/mysql/mysql-bin.log
        server-id = 1
        log-bin = /var/lib/mysql/mysql-bin
        #replicate-do-db = gate
        log-bin-index = /var/lib/mysql/mysql-bin.index
        log-error = /var/lib/mysql/mysql-bin.err
        relay-log = /var/lib/mysql/relay-bin
        relay-log-info-file = /var/lib/mysql/relay-bin.info
        relay-log-index = /var/lib/mysql/relay-bin.index
        binlog_do_db = 24avia
        expire_logs_days = 10
        max_binlog_size = 100M
        read_buffer_size = 4024288
        innodb_buffer_pool_size = 5000M
        innodb_flush_log_at_trx_commit = 2
        innodb_thread_concurrency = 8
        table_definition_cache = 2000
        group_concat_max_len = 16M
        #binlog_do_db = gate
        #binlog_ignore_db = include_database_name
        #
        # * BerkeleyDB
        #
        # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12.
        #skip-bdb
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        # You might want to disable InnoDB to shrink the mysqld process by circa 100MB.
        #skip-innodb
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 500M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer = 32M
        key_buffer_size = 512M
        #
        # * NDB Cluster
        #
        # See /usr/share/doc/mysql-server-*/README.Debian for more information.
        #
        # The following configuration is read by the NDB Data Nodes (ndbd processes)
        # not from the NDB Management Nodes (ndb_mgmd processes).
        #
        # [MYSQL_CLUSTER]
        # ndb-connectstring=127.0.0.1
        #
        # * IMPORTANT: Additional settings that can override those from this file!
        # The files must end with '.cnf', otherwise they'll be ignored.
        #
        !includedir /etc/mysql/conf.d/

    Please help me make it stable. Memory used:

        /etc/mysql # free
                     total       used       free     shared    buffers     cached
        Mem:      32930800   32766424     164376          0     139208   23829196
        -/+ buffers/cache:    8798020   24132780
        Swap:     33553328      44660   33508668

    Maybe my problem is not memory, but MySQL stops every day. As you can see, cached memory is free, 24 GB. Thanks to Michael Hampton for the correction. Load average on the server is 3.5. Maybe an HDD or another problem? Maybe my config is not optimal for 30 GB of InnoDB? I already tried mysqltuner and tuning-primer.sh, but they marked everything green. MySQLTuner output:

        mysqltuner
        >>  MySQLTuner 1.0.1 - Major Hayden <[email protected]>
        >>  Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >>  Run with '--help' for additional options and output filtering

        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.5.24-9-log
        [OK] Operating on 64-bit architecture

        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 112G (Tables: 1528)
        [--] Data in InnoDB tables: 39G (Tables: 340)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [!!] Total fragmented tables: 344

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 8h 18m 33s (14M q [478.333 qps], 259K conn, TX: 9B, RX: 5B)
        [--] Reads / Writes: 84% / 16%
        [--] Total buffers: 10.5G global + 81.1M per thread (200 max threads)
        [OK] Maximum possible memory usage: 26.3G (83% of installed RAM)
        [OK] Slow queries: 1% (259K/14M)
        [!!] Highest connection usage: 100% (201/200)
        [OK] Key buffer size / total MyISAM indexes: 1.5G/5.6G
        [OK] Key buffer hit rate: 100.0% (6B cached / 1M reads)
        [OK] Query cache efficiency: 74.3% (8M cached / 11M selects)
        [OK] Query cache prunes per day: 0
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 247K sorts)
        [!!] Joins performed without indexes: 106025
        [!!] Temporary tables created on disk: 49% (351K on disk / 715K total)
        [OK] Thread cache hit rate: 99% (249 created / 259K connections)
        [!!] Table cache hit rate: 15% (2K open / 13K opened)
        [OK] Open file limit used: 15% (3K/20K)
        [OK] Table locks acquired immediately: 99% (4M immediate / 4M locks)
        [!!] InnoDB data size / buffer pool: 39.4G/5.9G

        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Reduce or eliminate persistent connections to reduce connection usage
            Adjust your join queries to always utilize indexes
            Temporary table size is already large - reduce result set size
            Reduce your SELECT DISTINCT queries without LIMIT clauses
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            max_connections (> 200)
            wait_timeout (< 600)
            interactive_timeout (< 600)
            join_buffer_size (> 5.0M, or always use indexes with joins)
            table_cache (> 10000)
            innodb_buffer_pool_size (>= 39G)

    MySQL primer output:

        -- MYSQL PERFORMANCE TUNING PRIMER --
             - By: Matthew Montgomery -

        MySQL Version 5.5.24-9-log x86_64

        Uptime = 0 days 8 hrs 20 min 50 sec
        Avg. qps = 478
        Total Questions = 14369568
        Threads Connected = 16

        Warning: Server has not been running for at least 48hrs.
        It may not be safe to use these recommendations

        To find out more information on how each of these
        runtime variables effects performance visit:
        http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html
        Visit http://www.mysql.com/products/enterprise/advisors.html
        for info about MySQL's Enterprise Monitoring and Advisory Service

        SLOW QUERIES
        The slow query log is enabled.
        Current long_query_time = 1.000000 sec.
        You have 260626 out of 14369701 that take longer than 1.000000 sec. to complete
        Your long_query_time seems to be fine

        BINARY UPDATE LOG
        The binary update log is enabled
        Binlog sync is not enabled, you could loose binlog records during a server crash

        WORKER THREADS
        Current thread_cache_size = 50
        Current threads_cached = 45
        Current threads_per_sec = 0
        Historic threads_per_sec = 0
        Your thread_cache_size is fine

        MAX CONNECTIONS
        Current max_connections = 200
        Current threads_connected = 11
        Historic max_used_connections = 201
        The number of used connections is 100% of the configured maximum.
        You should raise max_connections

        INNODB STATUS
        Current InnoDB index space = 214 M
        Current InnoDB data space = 39.40 G
        Current InnoDB buffer pool free = 0 %
        Current innodb_buffer_pool_size = 5.85 G
        Depending on how much space your innodb indexes take up it may be safe
        to increase this value to up to 2 / 3 of total system memory

        MEMORY USAGE
        Max Memory Ever Allocated : 23.46 G
        Configured Max Per-thread Buffers : 15.84 G
        Configured Max Global Buffers : 7.54 G
        Configured Max Memory Limit : 23.39 G
        Physical Memory : 31.40 G
        Max memory limit seem to be within acceptable norms

        KEY BUFFER
        Current MyISAM index space = 5.61 G
        Current key_buffer_size = 1.47 G
        Key cache miss rate is 1 : 5578
        Key buffer free ratio = 77 %
        Your key_buffer_size seems to be fine

        QUERY CACHE
        Query cache is enabled
        Current query_cache_size = 200 M
        Current query_cache_used = 101 M
        Current query_cache_limit = 50 M
        Current Query cache Memory fill ratio = 50.59 %
        Current query_cache_min_res_unit = 4 K
        MySQL won't cache query results that are larger than query_cache_limit in size

        SORT OPERATIONS
        Current sort_buffer_size = 64 M
        Current read_rnd_buffer_size = 256 K
        Sort buffer seems to be fine

        JOINS
        Current join_buffer_size = 5.00 M
        You have had 106606 queries where a join could not use an index properly
        You have had 8 joins without keys that check for key usage after each row
        join_buffer_size >= 4 M
        This is not advised
        You should enable "log-queries-not-using-indexes"
        Then look for non indexed joins in the slow query log.

        OPEN FILES LIMIT
        Current open_files_limit = 20210 files
        The open_files_limit should typically be set to at least 2x-3x
        that of table_cache if you have heavy MyISAM usage.
        Your open_files_limit value seems to be fine

        TABLE CACHE
        Current table_open_cache = 10000 tables
        Current table_definition_cache = 2000 tables
        You have a total of 1910 tables
        You have 2151 open tables.
        The table_cache value seems to be fine

        TEMP TABLES
        Current max_heap_table_size = 2.92 G
        Current tmp_table_size = 2.92 G
        Of 366426 temp tables, 49% were created on disk
        Perhaps you should increase your tmp_table_size and/or max_heap_table_size
        to reduce the number of disk-based temporary tables
        Note! BLOB and TEXT columns are not allow in memory tables.
        If you are using these columns raising these values might not impact your
        ratio of on disk temp tables.

        TABLE SCANS
        Current read_buffer_size = 3 M
        Current table scan ratio = 2846 : 1
        read_buffer_size seems to be fine

        TABLE LOCKING
        Current Lock Wait ratio = 1 : 185
        You may benefit from selective use of InnoDB.
        If you have long running SELECT's against MyISAM tables and perform
        frequent updates consider setting 'low_priority_updates=1'

    Read the article

  • How can I find the USB wireless adapter into the dmesg log file?

    - by AndreaNobili
    I am pretty new to Linux (Raspbian on a Raspberry Pi, but I don't think that makes a difference) and I have to install a USB wireless network adapter (the product is the TP-Link TL-WN725N, this one: http://www.tp-link.it/products/details/?model=TL-WN725N ). Now, I think that this is not automatically recognized by my system, because if I execute the ifconfig command I obtain the following output:

        pi@raspberrypi ~ $ ifconfig
        eth0      Link encap:Ethernet  HWaddr b8:27:eb:2a:9f:b0
                  inet addr:192.168.1.8  Bcast:192.168.1.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:475 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:424 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:34195 (33.3 KiB)  TX bytes:89578 (87.4 KiB)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  UP LOOPBACK RUNNING  MTU:65536  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    So it sees only my Ethernet network interface and not the wireless one. So I was thinking of looking in dmesg, but I don't know what to look for or how to filter it out of the dmesg output. For example, with the following command I can see the lines of the dmesg log file related to my Ethernet port:

        pi@raspberrypi ~ $ cat /var/log/dmesg | grep -i eth
        [    3.177620] smsc95xx 1-1.1:1.0 eth0: register 'smsc95xx' at usb-bcm2708_usb-1.1, smsc95xx USB 2.0 Ethernet, b8:27:eb:2a:9f:b0
        [   18.030389] smsc95xx 1-1.1:1.0 eth0: hardware isn't capable of remote wakeup
        [   19.642167] smsc95xx 1-1.1:1.0 eth0: link up, 100Mbps, full-duplex, lpa 0x45E1

    But what can I search for to find the USB wireless adapter? Thanks

    Read the article

  • I want my logs sent to my mail with logrotate

    - by lericson
    Not strictly a question about programming as such, more of a log handling question. Anyway: my company has multiple clients, and each of these clients has a set of logs that I'd very much like to have sent to me by e-mail. Another prerequisite is that they're highlighted with simple HTML. All that is very well; I've managed to make a highlighter for the given log types. So what I do is use logrotate's prerotate stuff to send the logs as an e-mail message. Example:

        /var/log/a.log /var/log/b.log {
            daily
            missingok
            copytruncate
            prerotate
                /usr/bin/python /home/foo/hilight_logs /var/log/{a,b}.log | /usr/sbin/sendmail -FLog\ mailer [email protected] [email protected]
            endscript
        }

    The problem with this approach is basically that logrotate sucks: it'll run the command for every log file specified in the specifier, and to my knowledge there's no way to know which of the log files is being handled. (Which wouldn't really help anyway.) Short of repeating the exact same logrotate stanza up to 10 times on different machines, the only thing I can do is just get bogged down with log spam every night. And I grew tired of it today, so I ask.

    Read the article

  • How do I get rid of com.apple.launchd.peruser errors in my log?

    - by Chris R
    I'm getting repeated errors in my console log that look (basically) like this:

        10-09-29 10:06:08 AM com.apple.launchd[1] (com.apple.launchd.peruser.501[51581]) getpwuid("501") failed
        10-09-29 10:06:08 AM com.apple.launchd[1] (com.apple.launchd.peruser.501[51581]) Exited with exit code: 1

    This machine was set up using the migration assistant, from a machine where my UID was 501, but here it's 505. I have the same username and group set, of course, but... So, where is this peruser launchd tool configured, so that I can disable the daemons that are causing this error message?

    Read the article

  • Can't log in with a valid password using Authlogic and Ruby on Rails?

    - by kbighorse
    We support a bit of an unusual scheme. We don't require a password on User creation, and use password_resets to add a password to the user later, on demand. The problem is, once a password is created, the console indicates the password is valid:

        user.valid_password? 'test'
        => true

    but in my UserSessions controller, @user_session.save returns false using the same password. What am I not seeing? Kimball

    UPDATE: Providing more details, here is the output when saving the new password:

        Processing PasswordResetsController#update (for 127.0.0.1 at 2011-01-31 14:01:12) [PUT]
          Parameters: {"commit"=>"Update password", "action"=>"update", "_method"=>"put", "authenticity_token"=>"PQD4+eIREKBfHR3/fleWuQSEtZd7RIvl7khSYo5eXe0=", "id"=>"v3iWW5eD9P9frbEQDvxp", "controller"=>"password_resets", "user"=>{"password"=>"johnwayne"}}

    The applicable SQL is:

        UPDATE users SET updated_at = '2011-01-31 22:01:12', crypted_password = 'blah', perishable_token = 'blah', password_salt = 'blah', persistence_token = 'blah' WHERE id = 580

    I don't see an error per se; @user_session.save just returns false, as if the password didn't match. I skip validating passwords in the User model:

        class User < ActiveRecord::Base
          acts_as_authentic do |c|
            c.validate_password_field = false
          end

    Here's the simplified controller code:

        def create
          logger.info("SAVED SESSION? #{@user_session.save}")
        end

    which outputs:

        Processing UserSessionsController#create (for 127.0.0.1 at 2011-01-31 14:16:59) [POST]
          Parameters: {"commit"=>"Login", "user_session"=>{"remember_me"=>"0", "password"=>"johnwayne", "email"=>"[email protected]"}, "action"=>"create", "authenticity_token"=>"PQD4+eIREKBfHR3/fleWuQSEtZd7RIvl7khSYo5eXe0=", "controller"=>"user_sessions"}
          User Columns (2.2ms)   SHOW FIELDS FROM users
          User Load (3.7ms)   SELECT * FROM users WHERE (users.email = '[email protected]') ORDER BY email ASC LIMIT 1
        SAVED SESSION? false
          CACHE (0.0ms)   SELECT * FROM users WHERE (users.email = '[email protected]') ORDER BY email ASC LIMIT 1
        Redirected to http://localhost:3000/login

    Lastly, the console indicates that the new password is valid:

        $ u.valid_password? 'johnwayne'
        => true

    I would love to do it all in the console; is there a way to load the UserSession controller and call its methods directly? Kimball

    Read the article

< Previous Page | 99 100 101 102 103 104 105 106 107 108 109 110  | Next Page >