Search Results

Search found 4028 results on 162 pages for 'mysqld safe'.

Page 72/162 | < Previous Page | 68 69 70 71 72 73 74 75 76 77 78 79  | Next Page >

  • After removing Ubuntu Builder there is still a folder in the home directory. Why? How do I remove it?

    - by user132989
    I removed Ubuntu Builder using Synaptic with the option Mark for Complete Removal. I thought that would delete the folder (made by Ubuntu Builder) in the home directory, but it didn't. AFAIK Mark for Complete Removal is the same as purging, and that should delete all files created by the program. Am I wrong? Is it safe to delete it with sudo rm -r /home/ubuntu-builder, given that it is in the home directory? And one last question: why did it create its folder in /home instead of in ~/?
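    A quick way to check, sketched below on the assumption that the directory really was created by Ubuntu Builder at runtime and is not owned by any installed package: purging removes a package's own files and conffiles, but not files the program created while running, so such a directory survives and is safe to delete by hand.

        # Check whether any installed package still claims the leftover directory
        dpkg -S /home/ubuntu-builder || echo "not owned by any package"
        # If nothing claims it, removing it by hand is reasonable
        sudo rm -r /home/ubuntu-builder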

    Read the article

  • my.ini optimization on Windows 2008 R2 VPS

    - by MKphpDev
    I have a VMware VPS running Windows Server 2008 R2 Enterprise that has performance issues with MySQL. Every few minutes, MySQL stalls for a few seconds and then responds to queries again. I'm sure my.ini needs to be optimized, but unfortunately I have no idea how to configure my.ini. What's running on the server: 2 small WordPress blogs, 1 vBulletin forum (approx. 1.2 GB database, and increasing), and a small database for some plug-ins (no more than 4000 records). Server info: Processor: Intel Xeon X5550 @ 2.67GHz, RAM: 6 GB (memory usage never exceeded 2 GB), MySQL 5.5, PHP 5.3.10, IIS 7. Current my.ini:

        [mysqld]
        default-storage-engine=INNODB
        sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
        max_connections=250
        myisam_max_sort_file_size=20G
        innodb_additional_mem_pool_size=256M
        innodb_flush_log_at_trx_commit=1
        innodb_log_buffer_size=8M
        innodb_buffer_pool_size=512MB
        innodb_log_file_size=128M
        innodb_thread_concurrency=10
        key_buffer_size = 512M
        myisam_sort_buffer_size = 8M
        join_buffer_size = 256K
        read_buffer_size = 256K
        sort_buffer_size = 256K
        table_cache = 4000
        thread_cache_size = 200
        wait_timeout = 30
        connect_timeout = 10
        tmp_table_size = 32M
        max_allowed_packet = 1M
        max_connect_errors = 10000
        query_cache_size = 16M
        query_cache_limit = 2M
        query_cache_type = 1
        query_cache_min_res_unit = 1024
        query_prealloc_size = 16384
        query_alloc_block_size = 16384
        skip-external-locking
        read_rnd_buffer_size=1M
        max_heap_table_size=16M
        thread_concurrency=8

        [mysqld_safe]
        open_files_limit = 8192

        [mysqldump]
        quick
        max_allowed_packet = 16M

        [myisamchk]
        key_buffer_size = 128M
        sort_buffer_size = 128M
        read_buffer = 2M
        write_buffer = 2M

    Any help with that, please?
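    A hedged starting point for a 6 GB box whose biggest database is InnoDB: the buffer pool is usually the first thing to grow, and the periodic stalls often trace back to log flushing. The values below are assumptions based only on the sizes quoted above, not measurements, so treat them as a sketch to iterate on.

        [mysqld]
        # Give InnoDB most of the RAM it can safely use alongside IIS/PHP
        innodb_buffer_pool_size        = 2G
        # Changing the log file size needs a clean shutdown and removal of the old ib_logfile* first
        innodb_log_file_size           = 256M
        # Flush the log once per second instead of per commit; often removes periodic stalls
        innodb_flush_log_at_trx_commit = 2
        # The MyISAM key cache can shrink since the large database is InnoDB
        key_buffer_size                = 64M
        tmp_table_size                 = 64M
        max_heap_table_size            = 64M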

    Read the article

  • Ubuntu 12.04 installer does not recognize Windows 7

    - by trainofk
    I recently purchased an ASUS N56VZ-ES71 laptop which came with Windows 7 Home Premium installed on it. I wish to dual boot Windows 7 and Ubuntu 12.04 on it. I shrank the hard drive partitions to leave about 150 GB unallocated for Ubuntu 12.04. When I boot the Ubuntu live CD and attempt to install, the installer does not recognize any other operating systems. From reading a few questions, I have found that this is due to the GPT partition table that Windows uses. I ran boot-repair as per other threads' suggestions. This was my output: http://paste.ubuntu.com/1176988/ I suppose my question is: how do I proceed to get the installer to recognize Windows, so that I don't have to erase the current partition table and can get a safe install? Thanks in advance.
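    A short way to confirm what the disk actually uses before touching anything, assuming the target disk is /dev/sda (adjust the device name to match):

        # From the live session: report the partition table type for each disk
        sudo parted -l                # shows "Partition Table: gpt" or "msdos"
        sudo gdisk -l /dev/sda        # also reports whether a protective or hybrid MBR is present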

    Read the article

  • What is this PHP process? It is crippling my server

    - by user1019588
    This process has been using 65% of my site's CPU and has lasted for about 10 minutes now (aren't processes only supposed to run for a couple of seconds?). It is obviously something with MySQL. This makes sense because I have a lot of queries going, but something still seems a bit odd... This could have something to do with the bad PDO connection that I mentioned in a previous question. Perhaps I am opening too many connections or something like that? Here are the stats on it: Owner: mysql Priority: 0 CPU %: 61.1 Memory %: 0.4 Command: /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/lib/mysql/cvps54834319.myhost.com.err --pid-file=/var/lib/mysql/cvps54834319.myhost.com.pid Thanks for any help on this. I have over 10 GHz on my server, so this is very concerning to me.
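    A first check that usually narrows this down, sketched from the MySQL side rather than the PHP side:

        -- Show every connection and the statement it is running right now
        SHOW FULL PROCESSLIST;
        -- Anything with a large value in the Time column is a candidate;
        -- run EXPLAIN on that query to look for missing indexes or full table scans.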

    Read the article

  • Settings messed up after crash

    - by ChocoDeveloper
    After an abrupt shutdown, many settings were messed up: #1 Firefox was a mess. Bookmarks were gone, and I couldn't even add new ones. I had to reset Firefox from safe mode, reinstall all my addons and configure everything again. This was a pain but is now solved. #2 The background on the login screen shows the one I chose with Ubuntu Tweak for a second, and then it switches back to the default one. I tried changing it again with Ubuntu Tweak, but it still happens. #3 All my shortcuts in the sidebar were replaced by the default ones. I re-added them manually, also a pain. So how can I solve #2? And in case this happens again, is there a way to fix everything easily and quickly?

    Read the article

  • Gnome 3.10 doesn't show login, Ubuntu 13.10

    - by TheWebs
    I use GNOME as my default desktop. I recently upgraded Ubuntu to 13.10 and then upgraded GNOME to 3.10. Upon restarting, it brings me to the GNOME login screen (GDM) and all I see is the top banner part, so things like the date, volume, and the options to restart, suspend or shut down. Hitting Ctrl+Alt+Delete tells me GNOME Display Manager will log out - it does, but still nothing. I am on the login screen, but there is no user to select, like there normally is or was. I can boot into safe mode and drop down to a shell, but I am not sure what commands I would enter to get the login screen back. I am using the GNOME PPAs, including the "unstable" PPA, so I expected issues, but not this - ideas?
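    From the recovery shell, a couple of hedged things to try; package and service names are the stock Ubuntu 13.10 ones, and with the unstable PPA in play none of this is guaranteed to fix the PPA-specific breakage:

        # Rebuild the display-manager configuration and reinstall the pieces GDM needs
        sudo dpkg-reconfigure gdm
        sudo apt-get install --reinstall gdm gnome-shell
        sudo service gdm restart
        # As a fallback, install LightDM and select it as the default display manager
        sudo apt-get install lightdm
        sudo dpkg-reconfigure lightdm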

    Read the article

  • resizing partitions

    - by venetin
    I have the following configuration: sda1 1 GB, maybe FAT32 (Windows recovery partition); sda2 40 GB NTFS (Windows drive C), with boot flag; sda3 around 100 GB NTFS (storage partition); sda4 extended partition containing sda5 10 GB ext4 and sda6 1 GB Linux swap. I want to make these changes: sda2 30 GB (decrease size by 10 GB); sda3 around 100 GB (move and maybe decrease size by 4-5 GB); sda4 around 20-22 GB (move and increase size by 10-15 GB); sda5 around 20 GB (move and increase size by 10-12 GB); sda6 2 GB (move and increase size by 1 GB). Is it safe to do these operations? Will I lose GRUB? I will do the changes with GParted on a Puppy Linux live USB. Thanks
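    If GRUB does stop booting after the partitions are moved, it can be put back from an Ubuntu live session; a minimal sketch, assuming sda5 remains the Linux root partition:

        sudo mount /dev/sda5 /mnt
        sudo grub-install --boot-directory=/mnt/boot /dev/sda
        # then boot into the installed system and run: sudo update-grub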

    Read the article

  • Touchpad stops working

    - by Diegov
    I'm on an HP 430 notebook using Oneiric, and sometimes my touchpad just stops working :(... I haven't installed anything, just been safely playing with the terminal following linuxcommand.org. BTW my touchpad has a "hole" that in Windows works like this: tap it twice to block the touchpad, tap twice again to unblock it. And as far as I know, Ubuntu hasn't recognized this function, so I don't think the "hole" is the problem... Also, I'm pretty sure that I haven't touched it, not even a little... PD: Spanish speaker, sorry if "hole" is not the appropriate term haha.. Thanks!
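    One thing worth checking when it stops: whether the input device has simply been disabled. The device name below is an assumption; use whatever xinput list prints for the touchpad:

        xinput list                                   # find the touchpad entry
        xinput enable "SynPS/2 Synaptics TouchPad"    # re-enable it if it shows as disabled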

    Read the article

  • Is this a reliable method of parsing glGetShaderInfoLog()?

    - by m4ttbush
    I want to get a list of errors and their line numbers so I can display the error information differently from how it's formatted in the error string and also to show the line in the output. It looks easy enough to just parse the result of glGetShaderInfoLog(), look for ERROR:, then read the next number up to :, and then the next, and finally the error description up to the next newline. However, the OpenGL docs say: Application developers should not expect different OpenGL implementations to produce identical information logs. This makes me worry that my code may behave incorrectly on different systems. I don't need them to be identical, I just need them to follow the same format. So is there a better way to get a list of errors with the line number separate, is it safe to assume that they'll always follow the "ERROR: 0:123:" format, or is there simply no reliable way to do this?
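    A tolerant way to pull the pieces out while keeping the raw line as a fallback; this sketch assumes the common "ERROR: <source-string>:<line>:" layout, which the spec does not guarantee, and the class names are illustrative only:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class ShaderLogParser {
            public static final class ShaderError {
                public final int sourceString;
                public final int line;
                public final String message;
                ShaderError(int sourceString, int line, String message) {
                    this.sourceString = sourceString;
                    this.line = line;
                    this.message = message;
                }
            }

            private static final Pattern ERROR_LINE = Pattern.compile("ERROR:\\s*(\\d+):(\\d+):\\s*(.*)");

            /** Returns null when the line does not follow the assumed format. */
            public static ShaderError parse(String logLine) {
                Matcher m = ERROR_LINE.matcher(logLine);
                if (!m.find()) {
                    return null;            // unknown format: display the raw log line instead
                }
                return new ShaderError(Integer.parseInt(m.group(1)),
                                       Integer.parseInt(m.group(2)),
                                       m.group(3).trim());
            }
        }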

    Read the article

  • mongodb eating 48G in 1min

    - by ledy
    In MongoDB I work with this collection: Size 55.93g, Data Size 39.82g, Storage Size 41.08g, Extents 53, Indexes 4, Index Size 9.64g. Within a few seconds of mongod being up with this single collection, all 48 GB of RAM on the dedicated server are gone. That's a problem because there is also a mysqld + nginx/fcgi on this machine, which together should be allowed to use at least 24 GB, i.e. the remaining 24 GB would be enough for mongod. However, it does not share in a fair way. Everybody says that the memory for mongod is managed by the OS, which releases unnecessary space for other processes if they demand RAM. On my machine it is not releasing RAM. What's wrong? Output of free:

                     total       used       free     shared    buffers     cached
        Mem:      49559136   49403908     155228          0      57284   47247564
        -/+ buffers/cache:    2099060   47460076
        Swap:      8008392        164    8008228
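    It is worth separating what mongod actually holds resident from what merely sits in the OS page cache for its memory-mapped files; a quick check from the mongo shell, as a sketch:

        // "resident" is RAM mongod actually holds; "mapped" counts the
        // memory-mapped data files, most of which live in the kernel's page cache
        // and show up under "cached" in free rather than as process memory.
        db.serverStatus().mem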

    Read the article

  • Stuck in logon loop

    - by MJeffryes
    Here's the deal. I set up a computer with Ubuntu 10.04 for my grandmother. Everything worked fine. I connected it to the internet at her house today. After rebooting the computer, I found that it would kick you back to the logon screen if you attempted to log on to her account. It worked fine logging on to my admin account, and also in GNOME's safe mode. I thought it had resolved itself, but it turns out it hadn't, and now I don't have physical access to the computer, plus the remote connection I'd hoped to use only works intermittently. I need some suggestions for troubleshooting for when I'm at her house at some point next week. Ask for any more details, but I'm afraid I won't be able to provide many more until I've checked it out in person, since she is basically unable to use a computer beyond web browsing. Thanks in advance!
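    One common cause of a 10.04 login loop is a root-owned ~/.Xauthority or ~/.ICEauthority in the affected account, or a full disk; a sketch of the checks for next week, with ACCOUNT below being a placeholder for her user name:

        ls -l /home/ACCOUNT/.Xauthority /home/ACCOUNT/.ICEauthority   # should be owned by the user
        sudo chown ACCOUNT:ACCOUNT /home/ACCOUNT/.Xauthority /home/ACCOUNT/.ICEauthority
        df -h /home                                                   # a full home partition causes the same loop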

    Read the article

  • Monit mail alert failed

    - by user119720
    I have configured our Monit to monitor some of the applications on our Linux box (httpd, mysqld, etc.). We can receive alerts when using Gmail's SMTP to send email through it, but it fails when we use our Exchange SMTP. Here is the Gmail configuration in monitrc:

        set mailserver smtp.gmail.com port 587   # primary mailserver
            username "[email protected]" password "mypasswd"
            using tlsv1
            with timeout 30 seconds

    and it fails when I change it to this configuration:

        set mailserver outlook.automanage.net port 587   # primary mailserver
            username "[email protected]" password "mypasswd"
            using tlsv1
            with timeout 30 seconds

    I can telnet to my Exchange server, so the Exchange server is alive and can be connected to. Did I miss anything here? Or do I need to configure something on our Exchange server?
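    Before changing monitrc further, it helps to confirm that the Exchange server really offers STARTTLS and SMTP AUTH on that port; a quick probe, as a sketch:

        openssl s_client -starttls smtp -connect outlook.automanage.net:587 -crlf
        # After the handshake, typing "EHLO test" should list AUTH mechanisms; if STARTTLS
        # or AUTH is missing there, Monit's "using tlsv1" with a username cannot work.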

    Read the article

  • Turn "log slow queries" ON.

    - by CodedK
    Hello, I'm trying to log MySQL slow queries, but I can't turn it on. I will explain all my steps: Open and edit my.cnf and add the following lines: long_query_time = 5, slow_query_log_file = /myfolder/slowq.log, log_slow_queries = 1 (I have MySQL 5.0.7). Give the mysql user permission to write to the file: chown -R mysql:mysql /var/lib/mysql. Create the file: touch /myfolder/slowq.log. Chmod this file to 777. service mysqld restart. From the MySQL admin panel I can see that the "log_slow_queries" variable is OFF! Also, no logs are created. Thanks in advance! Best Regards, Panos.
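    A hedged note on the config itself: the slow_query_log_file variable only exists from MySQL 5.1 onward, so on a 5.0 server the option that takes the file name is log-slow-queries; a minimal sketch of the my.cnf lines, which still need a server restart to take effect:

        [mysqld]
        log-slow-queries = /myfolder/slowq.log
        long_query_time  = 5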

    Read the article

  • Transient mysqlcheck errors about "size of datafile" (file too small)

    - by Adam Backstrom
    Running mysqlcheck on a live database is giving me transient errors like this one: mydatabase.mytable error : Size of datafile is: 500719688 Should be: 501000484 error : Corrupt When I run the command again or check the table one-off using mysql, it's listed as OK. Is this just a side effect of running checks on live tables? Is it possible that data is not flushed, hence the strange discrepancy? We moved several databases this morning by shutting down mysqld on the source and rsyncing files across to the new server, but these are all MyISAM tables so I don't believe the two things are related. (But I mention it just in case.)
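    One low-impact way to re-check a table that is being written to, sketched here; flushing first closes the table so its header and index blocks are written out before the check runs:

        -- Close and reopen the table, then check it again;
        -- on a busy server the FLUSH may wait briefly for running statements.
        FLUSH TABLES mydatabase.mytable;
        CHECK TABLE mydatabase.mytable;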

    Read the article

  • "Unverifiable code failed policy check" for a closed source assembly

    - by Jason
    I'm attempting to dynamically load some (purchased) assemblies from resource streams in a C# program during an MSI installation routine, but I'm getting "Unverifiable code failed policy check". I read some tips online about compiling the embedded assembly with /clr:safe, but I don't have that option. Is there a way to work around this policy check? Thanks.

    Read the article

  • SharePoint 2010: CheckSuspiciousPhysicalPath exception

    - by Tommy Jakobsen
    I just installed SharePoint 2010 on my Windows Server 2008 R2 x64 server with SQL Server 2008. Everything is installed on this single server, and I configured SharePoint 2010 as described here: http://sharepoint.microsoft.com/blogs/fromthefield/Lists/Posts/Post.aspx?ID=112 Everything installed correctly, and the SQL databases were created. But when I try to access the administration site through http://server:47632/_admin/adminconfigintro.aspx, I get the following exception:

        [HttpException]
        at System.Web.Util.FileUtil.CheckSuspiciousPhysicalPath(String physicalPath)
        at System.Web.CachedPathData.GetConfigPathData(String configPath)
        at System.Web.CachedPathData.GetConfigPathData(String configPath)
        at System.Web.CachedPathData.GetConfigPathData(String configPath)
        at System.Web.HttpContext.GetFilePathData()
        at System.Web.Configuration.CustomErrorsSection.GetSettings(HttpContext context, Boolean canThrow)
        at System.Web.HttpResponse.ReportRuntimeError(Exception e, Boolean canThrow, Boolean localExecute)
        at System.Web.HttpRuntime.FinishRequest(HttpWorkerRequest wr, HttpContext context, Exception e)

    The same error occurs when accessing http://server:47632/. Do you have any ideas what's causing this? I haven't been able to find anything about this issue. Edit 1: I just tried reinstalling SharePoint 2010 while monitoring the error log. When I run the configuration wizard, the following errors show up in the event log: The site /sites/Help could not be created. The following exception occurred: Dependency feature with id 5f3b0127-2f1d-4cfd-8dd2-85ad1fb00bfc for feature 'BaseSite' (id: b21b090c-c796-4b0f-ac0f-7ef1659c20ae) is not installed.. Safe mode did not start successfully. This page has encountered a critical error. Contact your system administrator if this problem persists. Safe mode did not start successfully. This page has encountered a critical error. Contact your system administrator if this problem persists. The site /sites/Help could not be created. The following exception occurred: Dependency feature with id 2ed1c45e-a73b-4779-ae81-1524e4de467a for feature 'BaseSite' (id: b21b090c-c796-4b0f-ac0f-7ef1659c20ae) is not installed.. The Execute method of job definition Microsoft.Office.InfoPath.Server.Administration.FormsMaintenanceJobDefinition (ID 59158daa-d0b6-458f-bd0a-7d1d713ac743) threw an exception. More information is included below. Access to the path 'C:\ProgramData\Microsoft\SharePoint\Config\319280433bd74178b6b1fd945c064698' is denied. The Execute method of job definition Microsoft.Office.InfoPath.Server.Administration.FormsMaintenanceJobDefinition (ID b3f45215-4d18-44f2-84bd-cc16bc339dd7) threw an exception. More information is included below. Access to the path 'C:\ProgramData\Microsoft\SharePoint\Config\319280433bd74178b6b1fd945c064698' is denied. Any ideas?

    Read the article

  • Dijkstra's Banker's Algorithm

    - by idea_
    Could somebody please provide a step-through approach to solving the following problem using the Banker's Algorithm? How do I determine whether a "safe-state" exists? What is meant when a process can "run to completion"? In this example, I have four processes and 10 instances of the same resource.

                      Resources Allocated | Resources Needed
        Process A              1          |        6
        Process B              1          |        5
        Process C              2          |        4
        Process D              4          |        7
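    The safety check itself is mechanical, so a small worked sketch may help; it assumes the "Resources Needed" column above is each process's maximum claim (so the remaining need is maximum minus allocated), which is the usual textbook reading. "Run to completion" then means a process whose remaining need fits in the currently available instances can finish and hand its allocation back.

        import java.util.ArrayList;
        import java.util.HashSet;
        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;
        import java.util.Optional;
        import java.util.Set;

        public class BankersCheck {
            // Returns a safe completion order if one exists, for a single resource type.
            static Optional<List<String>> safeSequence(int total,
                                                       Map<String, Integer> allocated,
                                                       Map<String, Integer> maximum) {
                int available = total - allocated.values().stream().mapToInt(Integer::intValue).sum();
                Map<String, Integer> need = new LinkedHashMap<>();
                allocated.forEach((p, a) -> need.put(p, maximum.get(p) - a));

                List<String> order = new ArrayList<>();
                Set<String> finished = new HashSet<>();
                while (finished.size() < allocated.size()) {
                    String runnable = null;
                    for (String p : need.keySet()) {
                        if (!finished.contains(p) && need.get(p) <= available) { runnable = p; break; }
                    }
                    if (runnable == null) {
                        return Optional.empty();          // nobody can finish: not a safe state
                    }
                    available += allocated.get(runnable); // it runs to completion and releases its resources
                    finished.add(runnable);
                    order.add(runnable);
                }
                return Optional.of(order);                // a safe sequence exists
            }

            public static void main(String[] args) {
                Map<String, Integer> allocated = new LinkedHashMap<>();
                allocated.put("A", 1); allocated.put("B", 1); allocated.put("C", 2); allocated.put("D", 4);
                Map<String, Integer> maximum = new LinkedHashMap<>();
                maximum.put("A", 6); maximum.put("B", 5); maximum.put("C", 4); maximum.put("D", 7);
                // With 10 instances: available = 2, and C (remaining need 2) can finish first.
                System.out.println(safeSequence(10, allocated, maximum)); // Optional[[C, B, A, D]]
            }
        }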

    Read the article

  • Singleton pattern in web applications

    - by ryudice
    I'm using a singleton pattern for the datacontext in my web application so that I don't have to instantiate it every time; however, I'm not sure how web applications work. Does IIS open a thread for every connected user? If so, what would happen if my singleton is not thread safe? Also, is it OK to use a singleton pattern for the datacontext? Thanks.
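    The question is about an ASP.NET data context, but the concern is language-agnostic: the server handles concurrent requests on separate worker threads, so a shared singleton only helps if the object itself tolerates concurrent use; for objects that do not, one instance per request is the safer default. A Java sketch of the usual lazy, thread-safe construction (the holder idiom); the class and its contents are illustrative only:

        // A lazily created, thread-safe singleton via the initialization-on-demand
        // holder idiom; the JVM guarantees Holder is initialized exactly once.
        public final class SharedCache {
            private final java.util.concurrent.ConcurrentMap<String, Object> entries =
                    new java.util.concurrent.ConcurrentHashMap<>();

            private SharedCache() {}                    // construct only through instance()

            private static final class Holder {
                static final SharedCache INSTANCE = new SharedCache();
            }

            public static SharedCache instance() {
                return Holder.INSTANCE;
            }

            public Object get(String key)             { return entries.get(key); }
            public void put(String key, Object value) { entries.put(key, value); }
        }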

    Read the article

  • Interpreting w3wp.exe thread info: does mscorwks.dll!StrongNameErrorInfo+0x7688 have a negative impact?

    - by Robert
    I am trying to interpret the meaning of "mscorwks.dll!StrongNameErrorInfo+0x7688". I guess it means that the assembly loaded by mscorwks.dll has no StrongName? If yes, does this have any negative impact on a web application? Is it safe to assume that the thread count of 107 means the web application has needed a maximum of 107 concurrent threads to handle incoming requests?

    Read the article

  • T-SQL: @@IDENTITY, SCOPE_IDENTITY(), OUTPUT and other methods of retrieving last identity

    - by Terrapin
    I have seen various methods used when retrieving the value of a primary key identity field after insert:

        declare @t table
        (
            id int identity primary key,
            somecol datetime default getdate()
        )
        insert into @t default values
        select SCOPE_IDENTITY() -- returns 1
        select @@IDENTITY       -- returns 1

    Returning a table of identities following insert:

        Create Table #Testing
        (
            id int identity,
            somedate datetime default getdate()
        )
        insert into #Testing
        output inserted.*
        default values

    What method is proper or better? Is the OUTPUT method scope-safe? The second code snippet was borrowed from SQL in the Wild.

    Read the article

  • Matlab Simulink version control with multiple developers

    - by Jon Mills
    We're using Matlab Simulink for model development (and Real-Time Workshop autocoding) within a team of several developers. We currently use Visual Source Safe (yes, I know it's terrible) for version control, using locks to prevent conflicting changes. We'd like to migrate our programme to a different version control system (svn, hg or git), but we're concerned about performing merges and diffs on Simulink .mdl files. Does anybody have useful experience in performing merges on Simulink files?

    Read the article

  • ArrayCollection versus Vector Objects in FLEX

    - by Vetsin
    Can anyone tell me the applicable differences between an ArrayCollection and a Vector in flex? I'm unsure if I should be using one over the other. I saw that Vector is type safe and that makes me feel better, but are there disadvantages? public var ac:ArrayCollection = new ArrayCollection(); versus public var vec:Vector.<String> = new Vector.<String>(); Thanks.

    Read the article

  • Can I query a DOM Document with an XPath expression from multiple threads safely?

    - by Dan
    I plan to use a dom4j DOM Document as a static cache in an application where multiple threads can query the document. Taking into account that the document itself will never change, is it safe to query it from multiple threads? I wrote the following code to test it, but I am not sure that it actually proves that the operation is safe:

        package test.concurrent_dom;

        import org.dom4j.Document;
        import org.dom4j.DocumentException;
        import org.dom4j.DocumentHelper;
        import org.dom4j.Element;
        import org.dom4j.Node;

        /**
         * Hello world!
         */
        public class App extends Thread {
            private static final String xml = "<Session>"
                    + "<child1 attribute1=\"attribute1value\" attribute2=\"attribute2value\">"
                    + "ChildText1</child1>"
                    + "<child2 attribute1=\"attribute1value\" attribute2=\"attribute2value\">"
                    + "ChildText2</child2>"
                    + "<child3 attribute1=\"attribute1value\" attribute2=\"attribute2value\">"
                    + "ChildText3</child3>"
                    + "</Session>";
            private static Document document;
            private static Element root;

            public static void main(String[] args) throws DocumentException {
                document = DocumentHelper.parseText(xml);
                root = document.getRootElement();

                Thread t1 = new Thread() {
                    public void run() {
                        while (true) {
                            try {
                                sleep(3);
                            } catch (InterruptedException e) {
                                e.printStackTrace();
                            }
                            Node n1 = root.selectSingleNode("/Session/child1");
                            if (!n1.getText().equals("ChildText1")) {
                                System.out.println("WRONG!");
                            }
                        }
                    }
                };
                Thread t2 = new Thread() {
                    public void run() {
                        while (true) {
                            try {
                                sleep(3);
                            } catch (InterruptedException e) {
                                e.printStackTrace();
                            }
                            Node n1 = root.selectSingleNode("/Session/child2");
                            if (!n1.getText().equals("ChildText2")) {
                                System.out.println("WRONG!");
                            }
                        }
                    }
                };
                Thread t3 = new Thread() {
                    public void run() {
                        while (true) {
                            try {
                                sleep(3);
                            } catch (InterruptedException e) {
                                e.printStackTrace();
                            }
                            Node n1 = root.selectSingleNode("/Session/child3");
                            if (!n1.getText().equals("ChildText3")) {
                                System.out.println("WRONG!");
                            }
                        }
                    }
                };

                t1.start();
                t2.start();
                t3.start();
                System.out.println("Hello World!");
            }
        }

    Read the article

  • How many inserts can you have in a SQL transaction?

    - by Mav
    I have a task that will require me to use a transaction to ensure that many inserts are either all completed or the entire update is rolled back. I am concerned about the amount of data that needs to be inserted in this transaction and whether this will have a negative effect on the server. We are looking at about 10,000 records in table1 and 60,0000 records into table2. Is this safe to do in a single transaction?
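    Tens of thousands of rows are generally well within what a single transaction can handle; what matters more is keeping the transaction short (set-based inserts rather than row-by-row) and making the error path explicit. A minimal T-SQL sketch; the table, column and staging names are placeholders:

        -- Wrap both loads so either everything commits or everything rolls back
        SET XACT_ABORT ON;   -- any runtime error aborts and dooms the transaction
        BEGIN TRY
            BEGIN TRANSACTION;

            INSERT INTO table1 (col1, col2)
            SELECT col1, col2 FROM staging_table1;

            INSERT INTO table2 (col1, col2)
            SELECT col1, col2 FROM staging_table2;

            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
            RAISERROR('bulk load failed - transaction rolled back', 16, 1);
        END CATCH;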

    Read the article
