Search Results

Search found 20869 results on 835 pages for 'things i hate'.


  • Suspending/Screen Going Off When Still In Use (Ubuntu & Arch)

    - by luke
    I have a laptop (HP Pavilion G6) that was running Ubuntu and for a while now (at least six months) has had a problem where it randomly suspends while still in use, with a full battery and while on the charger. Since the problem started under Ubuntu, I first tried to disable suspend every way I could find (GUI settings plus the dconf editor). That didn't work — it kept suspending — so I ended up switching to Arch Linux. Unfortunately, not long after switching to Arch I hit the same problem. So I again modified the settings, this time in /etc/systemd/logind.conf, to prevent it from suspending, and this time it worked — kind of. Now the screen goes off instead, and I have to change to a different tty (using Ctrl-Alt-Fx, which I also sometimes had to do in Ubuntu when waking from suspend) to get the screen back on. The strange thing is that this only happens when running Linux distros, and only occasionally (it may happen once or twice a week at most) — but when it does happen, it can happen multiple times in a row. And it only seems to happen while I am actively using the machine. That may just mean it hasn't yet happened while idle, but generally, if I leave it running something or playing a video, it doesn't occur; it only occurs while I'm using it, regardless of which program I'm in (it has happened in Firefox, Vim, even inside a VirtualBox VM). At first I thought it could be CPU temperature, but after monitoring it I found it occurred much of the time with the CPU below 50 °C. I then checked /var/log/* but could not see anything related to the suspend — only a few standard entries from when it woke up. I am really out of ideas and hoping someone can help. Thanks in advance.
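
    For reference, a minimal sketch of the logind.conf settings that tell systemd-logind to ignore suspend triggers and idle timeouts (these option names are standard systemd-logind settings; whether they address this particular laptop's behaviour is an assumption):

        # /etc/systemd/logind.conf
        [Login]
        HandleSuspendKey=ignore
        HandleLidSwitch=ignore
        IdleAction=ignore

        # then restart logind to apply:
        # systemctl restart systemd-logind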

    Read the article

  • Linux networking: how to redirect incoming connections from an old server to a new server?

    - by aliz
    Hi, I'm in the process of moving my old server to a new server, but I will keep the old server running for database replication, load balancing, etc. Each server has a separate internet connection with a static IP, and they are connected to each other through a local Ethernet link. I've got Ubuntu 8.04 32-bit running on the old server and Debian 6.0 64-bit on the new one. The Shorewall firewall is installed on both servers. There are outdoor devices that periodically send data to port 43597 on the old server's IP address. I can run multiple instances of the network service that receives the device data on one server, but only on different ports. Here's the question: how can I run the service on the new server and have connections arriving at the old server redirected to it, while new devices can still connect to the new server's IP address — preferably on the same port and the same service — until all devices are updated to send to the new server? I've tried a Shorewall DNAT rule, but it seems the new server's default route would have to be changed to the Ethernet link, which breaks other things. I also found the redir utility but haven't tried it yet. Is there a best practice or simple solution for such a scenario that I'm not aware of? Thanks in advance.
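
    For what it's worth, a minimal iptables sketch of the usual workaround — DNAT on the old server combined with masquerading, so replies flow back through the old server and the new server's default route is left untouched (the LAN address 192.168.0.2 is hypothetical):

        # On the old server: forward port 43597 to the new server over the LAN link
        iptables -t nat -A PREROUTING  -p tcp --dport 43597 -j DNAT --to-destination 192.168.0.2:43597
        # Masquerade so the new server's replies return via the old server,
        # not out of its own internet connection
        iptables -t nat -A POSTROUTING -p tcp -d 192.168.0.2 --dport 43597 -j MASQUERADE
        # Make sure forwarding is enabled
        echo 1 > /proc/sys/net/ipv4/ip_forward

    In Shorewall terms this corresponds to a DNAT rule plus a matching SNAT/MASQUERADE entry for the LAN zone.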

    Read the article

  • Hyper-V VSS writer not making current copies [migrated]

    - by Martinnj
    I'm using diskshadow to back up live Hyper-V machines on a Windows 2008 server. The backup consists of three scripts: the first creates the shadow copies and exposes them, the second uses robocopy to copy them to a remote location, and the third unexposes the shadow copies again. The first script — the one that runs correctly but fails to do what it's supposed to:

        # DiskShadow script file to backup VM from a Hyper-V host
        # First, delete any shadow copies of the drives. System Drives needs to be included.
        Delete Shadows volume C:
        Delete Shadows volume D:
        Delete Shadows volume E:
        # Ensure that shadow copies will persist after DiskShadow has run
        set context persistent
        # make sure the path already exists
        set verbose on
        begin backup
        add volume D: alias VirtualDisk
        add volume C: alias SystemDrive
        # verify the "Microsoft Hyper-V VSS Writer" writer will be included in the snapshot
        # NOTE: The writer GUID is exclusive for this install/machine, must be changed on other machines!
        writer verify {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
        create
        end backup
        # Backup is exposed as drive X: make sure your drive letter X is not in use
        Expose %VirtualDisk% X:
        Exit

    The next script is just a robocopy, and then an unexpose. Now, when I run the script above, I get no errors from it, except that the "BITS" writer has been excluded because none of its components are included. That's okay, because I really only need the Hyper-V writer. I also double-checked the GUID for the writer; it's correct. During the time when the Hyper-V writer becomes active, two things happen on the guest machines: the Debian/Linux machine goes to a saved state and restores when done — all fine — and the Windows guests report "creating VSS snapshot sets" or something similar. Then X: gets exposed and I can copy the .vhd files over. The problem is that, for some reason, the VHD files I get seem to be old copies: they are missing files, users and updates that are on the actual machines. I also tried putting the machines into a saved state manually; it didn't change the outcome. I hope someone here has an idea of how to solve this.
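
    As a sanity check, it may be worth confirming the Hyper-V writer's state before and after a run — a stale or errored writer can silently hand back an older snapshot (the commands are standard Windows VSS tooling; reading the output this way for this case is an assumption):

        :: List all VSS writers with their state and last error; the
        :: "Microsoft Hyper-V VSS Writer" entry should report State: Stable
        :: and Last error: No error before the diskshadow run
        vssadmin list writers
        :: Afterwards, the shadow copy timestamps should match the run time
        vssadmin list shadows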

    Read the article

  • Which default Database Systems come installed in Microsoft VS2010 Express?

    - by Tonygts
    Appreciate all advice on the following questions. Which database systems (MS SQL 2008, MS SQL Compact, or others) come installed with the VS2010 Express edition? SQL Server 2008 R2 Express is free — can we install it and integrate it with VS2010 Express? How do we uninstall the databases that already came installed? I have installed VS2010 Express on Windows 7 — just the VS2010 components (VB, C#, C++ and Web Developer) — without installing any other things like SQL Express. In the Control Panel's "Programs & Features" window, the installed list is shown below:

        Microsoft SQL Server 2008 Setup Support Files
        Microsoft SQL Server 2008 Browser
        Microsoft SQL Server VSS Writer
        Microsoft SQL Server Database Publishing Wizard 1.4
        Microsoft ASP.NET MVC 2 - VWD Express 2010 Tools
        Microsoft SQL Server 2008 Management Objects
        Microsoft SQL Server Compact 3.5 SP2 ENU
        Microsoft SQL Server System CLR Types
        Microsoft Silverlight 3 SDK
        Microsoft ASP.NET MVC 2
        Microsoft Visual Studio 2010 ADO.NET Entity Framework Tools
        Visual Studio 2010 Tools for SQL Server Compact 3.5 SP2 ENU
        Web Deployment Tool
        Microsoft Visual Web Developer 2010 Express - ENU
        Microsoft Visual C++ 2010 Express - ENU
        Microsoft Visual C# 2010 Express - ENU
        Microsoft Visual Basic 2010 Express - ENU
        Microsoft SQL Server 2008

    As you can see, Microsoft SQL Server 2008 (last line) and, near the top, Microsoft SQL Server Compact 3.5 SP2 ENU and many of their related SQL components, such as Microsoft SQL Server 2008 Management Objects, are also installed. These were actually installed by installing VS2010 Express, but I have no idea how to use them or verify their existence from VS2010. Also, do I have to uninstall them before I install SQL Server 2008 R2, which I believe is the latest version? And what tool is needed to manage and create data sources and tables?
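
    One way to see what is actually live, rather than merely listed as installed, is to query for running instances from a command prompt — a quick sketch (sqlcmd ships with the SQL Server tools; the instance name .\SQLEXPRESS is the usual default and an assumption here):

        :: List SQL Server instances visible from this machine
        sqlcmd -L
        :: Connect with Windows authentication and print the engine version
        sqlcmd -S .\SQLEXPRESS -E -Q "SELECT @@VERSION"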

    Read the article

  • Windows Server 2008 is stuck at "configuring updates - stage 3 of 3 - 0% complete"

    - by Chris
    This has happened the last two times I've run updates on this system, and I really have no idea what is going on. It is installing only a month's worth of updates. The machine responds only to ping and no services are up, so I can't view the system remotely (I have to hook up a monitor to see this message). In the past I've just restarted the system at this point and it eventually finished updating. I want to know what I can do to avoid this situation, how to diagnose what is going on, and how to get some kind of remote access during the updates. Edit: I can start the machine in safe mode (where I did nothing but back up some files). I restarted and it no longer tries to do a Windows update; it just goes to the desktop, where everything seems extremely broken. I can click on some things, but I can't launch most programs. I guess all I can do at this point is a system restore or something. Edit: Re-installed Windows on this system yesterday. That's my usual solution to issues I don't feel like diagnosing, like this one.
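
    For anyone diagnosing the same hang rather than reinstalling, the servicing logs are usually the first stop — a sketch of where to look from safe mode or a recovery prompt (these are the standard Windows servicing log locations):

        :: The Component-Based Servicing log records each update stage:
        findstr /i /c:"error" %windir%\Logs\CBS\CBS.log
        :: The Windows Update client log shows which package was in flight:
        type %windir%\WindowsUpdate.log | more
        :: Once booted, check system file integrity:
        sfc /scannow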

    Read the article

  • MySQL server high traffic makes websites really slow or unable to load

    - by Holapress
    Lately we have been having a lot of problems with our MySQL server: websites become really slow, or fail to load at all. The server is a dedicated machine that runs only our MySQL database. I have been running tests using a profiler (JetProfiler) and a stress-testing tool (loadUI). If I use loadUI to open 50 simultaneous connections to one of our websites that runs a fairly big query, the website already becomes unable to load. One of the things that worries me is that JetProfiler always shows a Threads_connected of 1.00, and it seems that when it hits around 2.00 I'm unable to connect. The three big peaks are from my loadUI tests: the first was 15 simultaneous connections, which left me still able to load the website, just really slowly; the second was 40 simultaneous connections, which already made it impossible to load; and the third was 100 connections, which also kept it from loading. Another thing that worries me is that JetProfiler says all the queries in use are full table scans — could that be the problem? The website I run as a test executes three queries: one for a menu that returns around 1000 rows, one for the ads with around 560 rows, and a big one to fetch posts with around 7000 rows (see the screenshot below). I have also monitored the server's CPU and there seems to be no problem there; even when I open a lot of connections with loadUI, the CPU stays low. I can't figure out the main cause of the websites being unable to load under high traffic — if anyone has suggestions for testing, or ideas about what might cause the problem, please let me know.
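
    Full table scans on every query would be the obvious first suspect — a small sketch of how one might confirm and fix that (the table and column names are hypothetical stand-ins for the menu/posts tables):

        -- See how MySQL executes the heavy query; "type: ALL" means a full scan
        EXPLAIN SELECT * FROM posts WHERE category_id = 3 ORDER BY created_at DESC;
        -- Add an index covering the filter and sort columns
        ALTER TABLE posts ADD INDEX idx_category_created (category_id, created_at);
        -- And compare the real connection ceiling with what the profiler shows
        SHOW GLOBAL STATUS LIKE 'Threads_connected';
        SHOW VARIABLES LIKE 'max_connections';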

    Read the article

  • Does a high run queue length average result in poor performance for a web server?

    - by Domino
    I'm trying to narrow down the list of suspects for web servers that perform moderately well most of the time, with occasional bouts of poor performance. I'm analyzing the data collected and summarized by sar. I've noticed a few things, one of which is a high number of tasks in the run queue:

        10:15:01 AM   runq-sz  plist-sz   ldavg-1   ldavg-5  ldavg-15   blocked
        10:25:01 AM         2       150      0.05      0.05      0.06         0
        10:35:01 AM         4       149      0.08      0.12      0.09         0
        10:45:01 AM         6       150      0.13      0.19      0.15         0
        10:55:01 AM         1       150      0.08      0.10      0.13         0
        11:05:01 AM         4       150      0.20      0.35      0.23         0
        11:15:01 AM         3       149      0.02      0.09      0.15         0
        11:25:01 AM         7       149      0.04      0.05      0.11         0
        11:35:01 AM         4       150      0.14      0.15      0.13         0
        11:45:01 AM         6       150      0.27      0.18      0.16         0
        11:55:01 AM         5       150      0.08      0.10      0.13         0
        12:05:01 PM         3       149      0.35      0.40      0.26         0
        12:15:01 PM        19       155      0.02      0.10      0.16         1
        12:25:01 PM         2       150      0.00      0.07      0.12         0
        12:35:02 PM         3       151      0.58      0.24      0.17         0
        12:45:01 PM         8       150      0.02      0.13      0.15         0
        12:55:01 PM         6       149      0.81      0.29      0.18         0
        01:05:01 PM         3       148      0.00      0.09      0.13         0
        01:15:01 PM         7       149      0.00      0.04      0.11         0

    I believe these are 10-minute averages. Is this an indicator that the web server is not performing as fast as it could if the average run queue length were lower?
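
    Since 10-minute sar intervals smooth out short bursts, finer-grained sampling during a bad spell can show whether the run queue actually lines up with CPU saturation — a sketch using the standard sysstat/procps tools:

        sar -q 1 60        # 1-second samples of run-queue size and load
        mpstat -P ALL 1    # per-CPU utilisation; one saturated core can hide in averages
        pidstat 1          # which processes are runnable or burning CPU each second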

    Read the article

  • mod_perl custom configuration directives don't work when placed in .htaccess and there is a <Location>

    - by al_l_ex
    I'm trying to complete Redmine's feature request #2693: Use Redmine.pm to authenticate for any directory (1). I don't have much knowledge of all these things and need help. Redmine uses the mod_perl module Redmine.pm for authentication and authorization. This module defines several custom configuration directives. I've successfully modified the patch from (1), and it works when all the config is in <Location>:

        <Location /digischrank/test>
            AuthType basic
            AuthName "Digischrank Test"
            Require valid-user
            PerlAccessHandler Apache::Authn::Redmine::access_handler
            PerlAuthenHandler Apache::Authn::Redmine::authen_handler
            RedmineDSN "DBI:mysql:database=SomedaTaBAse;host=localhost"
            RedmineDbUser "SoMeuSer"
            RedmineDbPass "SomePaSS"
            RedmineProject "digischrank"
        </Location>

    But when I move one of these directives (RedmineProject, see (1)) into an .htaccess file, Redmine.pm doesn't see it! I've tried changing <Location> to <Directory> and adding AllowOverride All. Then the directives from .htaccess are visible, but the remaining ones from <Directory> are not. I don't want to move all the directives to each .htaccess. When I add a <Location> in addition to the <Directory>, again only the directives from <Location> are visible. As far as I know, the directives should be merged. Am I missing something?
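
    One possible angle (an assumption, not a confirmed fix): with mod_perl custom directives, where a directive may legally appear — and how values from <Location>, <Directory> and .htaccess get merged — is governed by the req_override and any merge callback in the directive's declaration. A sketch of what such a declaration looks like in a module like Redmine.pm:

        use Apache2::Module ();
        use Apache2::Const -compile => qw(OR_AUTHCFG TAKE1);

        # Sketch of a mod_perl directive declaration; OR_AUTHCFG is what
        # permits use inside .htaccess under AllowOverride AuthConfig.
        my @directives = (
            {
                name         => 'RedmineProject',
                req_override => Apache2::Const::OR_AUTHCFG,
                args_how     => Apache2::Const::TAKE1,
                errmsg       => 'RedmineProject <project identifier>',
            },
        );
        Apache2::Module::add(__PACKAGE__, \@directives);

    If the module provides no DIR_MERGE routine, per-directory values can be replaced wholesale rather than merged, which would match the behaviour described.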

    Read the article

  • Matched or unmatched drives for RAID arrays?

    - by Will
    Looking around, there is conflicting information on this, with some strongly suggesting one or the other. From my understanding, the issue with matched drives is that the wear on both drives is more or less the same, so the potential for the second drive failing with, or very soon after, the first is pretty high. People also claim matched drives give substantially higher performance. However, assuming the unmatched drives are more or less the same (e.g. two 1 TB SATA II 7200 rpm drives with 32 MB cache), would the minor differences between, say, a Seagate and a Western Digital one (say one has a 128 MB/s read rate and the other a 150 MB/s read rate, plus various other minor differences) actually cause any notable performance loss — i.e. potentially worse than two matched 128 MB/s drives — or does RAID not really care, giving you an essentially optimal result (e.g. up to 278 MB/s total read speed for RAID 0 and 1), and similarly for other RAID levels with more "unmatched" drives (5 and 1+0 come to mind as possibilities)? Also, I couldn't find much info on how this differs between RAID setups, e.g. RAID 0 or RAID 1, software or hardware RAID, etc. I'm assuming such things have an effect, and that it's not all the same for RAID in general?

    Read the article

  • NFS on top of GFS2 - does it work?

    - by Matthew
    We're currently using a NoSQL derivative called Splunk to receive our data. The software supports something called "search head pooling", in which the job-dispatching engine is housed on several servers that share a common storage point. Originally our intention was to use a clustered filesystem like GFS2, because of its low latency, stability, and ease of setup. We set up GFS2, and it works with no issues. However, when we try to run the software, it tries to create lock files and a bunch of other things that their support team can't quite explain; the ultimate feedback from them was that they only support NFS. Our network administration team heavily frowns on NFS (lack of stability, file locking issues, etc.). So I was thinking about setting up NFS on each server in the cluster to act as a wedge layer between the GFS2 filesystem and the software: basically configure each server to export the GFS2 filesystem's mountpoint via NFS, and then tell each server to connect to that NFS share. That way we aren't introducing any single points of failure, as we would if a dedicated NFS server went down, but the vendor still gets their "required" NFS share. I'm just brainstorming ways around this, so please tear it apart :)
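
    For the sake of discussion, a minimal sketch of that loopback arrangement on one node (the mount points are hypothetical):

        # /etc/exports on each cluster node — export the GFS2 mountpoint to itself
        /mnt/gfs2    localhost(rw,sync,no_root_squash)

        # apply and mount the share back over NFS where the software expects it
        exportfs -ra
        mount -t nfs localhost:/mnt/gfs2 /opt/splunk/shp

    One design point worth testing early: locks taken through the loopback NFS mount on one node must still be coherently visible on the other nodes via GFS2's DLM — that interaction is the crux of whether this wedge works at all.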

    Read the article

  • Setting a subdomain to access home machine with windows remote desktop

    - by ianhales
    I'm trying to connect remotely to my home machine through Windows Remote Desktop (amongst other things, but this is currently my primary focus). I can do this fine using my home WAN's static IP (thank god for cable!) with port forwarding, but I would like to access it via a subdomain of my website (e.g. home.mydomain.co.uk). In the cPanel for my hosting account, I've gone into DNS Zones and altered the A record to point to my WAN's IP, which I thought would do the job, but I still cannot connect. When I ping the subdomain, I get my web host's IP, which I guess is to be expected, as I believe the host domain's DNS is consulted first and then my server handles the redirection of traffic to the IP in the A record. Is this the correct idea? Do A record changes suffer from the same propagation delays as other DNS record changes, as I suppose that could explain it? (By the way, this thread confirms my thought that setting the A record should be enough: Hostmonster Subdomain redirected to home server IP: How to ssh into home server using subdomain)
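
    A quick way to separate a caching delay from a mis-set record is to ask the zone's authoritative nameserver directly — a sketch (the nameserver hostname is a placeholder for whatever the hosting account actually uses):

        dig +short home.mydomain.co.uk A
        # what your local resolver sees (possibly cached)
        dig +short @ns1.hostingprovider.example home.mydomain.co.uk A
        # what the zone actually says right now

    If the second query already returns the WAN IP, it's just TTL-bound propagation; A records are ordinary DNS records and carry the same caching delays as any other record type.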

    Read the article

  • DFS Root namespace is RDWR for all users

    - by Patrick
    We have an existing DFS Replication and Namespace group that we use to serve the company's files. This has been operating fine for some time now, and continues to do so; however, a situation arose yesterday afternoon that has left us stumped. We have our namespace presented as: \\domain.co.uk\public\[8 or 9 folders that are mapped to the users in the business]. We had a problem this morning that meant a number of users started mapping their AD home drive directly to the \\domain.co.uk\public directory, and we found that they had read/write access to it. This rapidly became a problem, as at least one director saved some moderately sensitive documents in there, and basically anyone could read them. I've tidied up that specific problem with some deft scripting and a slight modification of group policy. However, I would like to make \public read-only; the trouble is I can't work out where the ACLs for that folder are held. All the folders presented as \\domain.co.uk\public\[folder] are 'real' folders on logical volumes on our DFS servers, so they are secured with groups applied via the 'Security' tab. I'd like to do the same on \public, but I can't find it. I have looked through, amongst other things, \Sysvol\domain.co.uk, but can't find it, and after a lot of clicking and a bit of reading I can't see how to lock it down. Any thoughts?
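
    For context: the namespace root is backed by an ordinary folder on each namespace server — by default under C:\DFSRoots — so that is where its NTFS ACL lives. A sketch of tightening it (the path and group names are the common defaults, i.e. assumptions to verify):

        :: On each namespace server hosting the root:
        icacls C:\DFSRoots\Public
        :: shows the current ACL; then, for example, leave ordinary users read-only
        icacls C:\DFSRoots\Public /grant "Domain Users":(RX)
        icacls C:\DFSRoots\Public /remove:g "Everyone"

    Any change would need repeating on every server that hosts the namespace root, since each holds its own copy of the folder.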

    Read the article

  • Find which files an Apache process is writing to?

    - by Haluk
    We have an Apache process that becomes I/O-bound from time to time. Using atop, we can see it is a write operation; using lsof -p <PID> we can see a list of files open by the httpd process. First we thought the log files must be the problem, so we turned them off just to test. However, the write operations still continue. We will continue testing a few other things — for instance, we use PHP session variables a lot, so maybe the PHP session files are getting all the writes. But is there a way to quickly identify the files being written to by the httpd process? That way we can focus our efforts on those files. UPDATE: We used the strace command as suggested. Here are two lines from the output:

        write(23, "\27\0\0\0\3SET CHARACTER SET utf8", 27) = 27
        write(23, "\17\0\0\0\3SET NAMES utf8", 19) = 19

    We do not have a MySQL process on this server — so is strace also showing what is being written to a network socket? UPDATE 2: During high I/O load, the process that consumes most of the write resources gives the following output to strace -e trace=write -p <PID>:

        --- SIGCHLD (Child exited) @ 0 (0) ---
        write(9, "!", 1) = 1
        write(19, "OPTIONS * HTTP/1.0\r\nUser-Agent: Apache (internal dummy connection)\r\n\r\n", 70) = 70

    However, I cannot figure out where these are being written to.
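
    The file-descriptor numbers in those write() calls can be resolved directly, which answers both updates — a quick sketch:

        # strace shows writes to fd 23; see what that descriptor actually is:
        ls -l /proc/<PID>/fd/23
        # a socket shows up as something like: 23 -> socket:[1829462]
        # cross-reference the same descriptor in lsof (FD is the 4th column):
        lsof -p <PID> | awk '$4 ~ /^23/'

    A socket target would confirm that the SET NAMES writes are MySQL protocol going out over the network (e.g. to a remote database server), not file I/O on this box.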

    Read the article

  • Troubleshooting really slow login on a (Linux) machine

    - by Peeter Joot
    Within the last couple of weeks, any attempt to log in to a specific Linux server has become really slow. Once I've logged in, things appear to run without significant delay, but some other login-like activities (like starting a new screen session) are slow. The machine's been rebooted a couple of times recently and that hasn't helped. It doesn't appear to be $PATH search (where $PATH can sometimes include bad NFS mounts), which I've seen historically in our environment. I've also tried completely removing my .profile/.bash*/... type init files to rule out anything bad there. I also see the slow login for at least one other userid on the system. One thing I've noticed is the following message when trying to exit from a screen terminal:

        Utmp slot not found -> not removed

    and I am wondering if this is related (having a vague recollection that utmp has something to do with login). Any idea what that message means, or how to fix it, and whether it would be related? Failing that, what sort of problem-determination tools are available to investigate what is slowing down this login process?
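
    In the spirit of that last question, a sketch of how one might watch where the time goes during session setup (the user name is a placeholder):

        # Time a minimal login path and see how long the stall is:
        time su - someuser -c true
        # Trace it with timestamps; long gaps before connect()/open() calls
        # tend to point at DNS, NIS/LDAP lookups, or bad NFS mounts:
        strace -f -tt -e trace=connect,open,stat -o /tmp/login.trace su - someuser -c true
        grep -n 'connect(' /tmp/login.trace | head

    Slow reverse DNS, an unreachable name service, and stale utmp/wtmp entries are all classic causes that this kind of trace tends to expose.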

    Read the article

  • Intel Motherboard Lightning Victim Dies Hard

    - by Stetson RDT
    Today I have a more hardware-related question. I have an Intel board, and I really do not know which board it is; I built the machine for a relative, but he forgot to keep the documentation. Long story short, the computer was disconnected during a lightning storm, but a lightning strike travelled in via the Ethernet cable (it was directly connected to a power brick of the kind commonly seen on long-distance ISP wireless transmitters), and the motherboard was shocked. I am attempting to get this PC going again. The problem is as follows: the computer will randomly reboot, right in the middle of anything, as it pleases. It may load to EFI (or whatever the BIOS is nowadays), may load to the bootloader, may even get to the OS — but before 5 minutes are up, the system will always die. Out of curiosity, I plugged my voltmeter into a Molex connector. On the 5 V side, it reads a good, consistent +5.13 V. On the 12 V side, it fluctuates, as follows: upon immediate startup, it rises to 12.11-12.13 V. It will then do one of two things: either immediately drop to 12.04-12.05 V, or hover for about a minute at 12.11-12.13 V and then drop. It seems the longer the voltage stays at 12.11-12.13 V, the shorter the machine stays running. Also, the POST codes, whenever the machine locks up but does not die hard, seem to be between "AA" and "AC". Does this make any sense to anybody? Do you all think this motherboard is salvageable? It was an expensive bugger, and I'd prefer not to replace it.

    Read the article

  • Log connections to program

    - by Zac
    Besides using iptables to log incoming connections, is there a way to log established inbound connections to a service that you don't have the source for (supposing the service doesn't log such things on its own)? What I want to do is gather some information on who's connecting, to be able to tell things like what times of day the service is used the most, where in the world the main user base is, etc. I am aware I could use netstat and just hook it up to a cron script, but that might not be accurate, since the script could only run as frequently as once a minute. Here is what I am thinking right now: write a program that constantly polls netstat, looking for established connections that didn't appear in the previous poll — but this idea seems like a waste of CPU time, since there may be no new connection. Or write a wrapper program that accepts inbound connections on whatever port the service runs on — but then I wouldn't know how to pass each connection along to the real service. Edit: it just occurred to me that this question might be better suited to Stack Overflow, though I am not certain. Sorry if this is the wrong place.
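
    One low-overhead option worth mentioning: capture just the inbound connection handshakes rather than polling — a sketch (port 5000 and eth0 are hypothetical stand-ins for the service's port and interface):

        # Record every inbound SYN (new connection attempt) to the service port,
        # with kernel timestamps and source addresses, into a pcap file:
        tcpdump -n -i eth0 -w /var/log/svc-syns.pcap \
            'tcp dst port 5000 and tcp[tcpflags] & (tcp-syn|tcp-ack) == tcp-syn'

    The capture holds exactly the time-of-day and origin data described, and a later pass with tcpdump -r (or any pcap tool) can aggregate it by hour or by source network.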

    Read the article

  • How to perform this Windows 7 permissions change on many files via GUI or command line

    - by hippietrail
    After using my external hard drive on another Windows 7 computer to tweak photos with Windows Live Photo Gallery and then upload them to Facebook, I found the modified images were no longer visible on the original Windows 7 computer. I'm not sure if the things I subsequently tried to get it working changed anything, but I do know this is the sequence of actions that makes the permissions of the modified files match those of the unmodified files:

        1. Right-click on the broken image file and select "Properties".
        2. On the "Security" tab, press the "Advanced" button.
        3. On the "Permissions" tab, press the "Continue" button with the shield icon on it.
        4. Tick the box marked "Include inheritable permissions from this object's parent".
        5. Click the "Remove" button to remove the only current entry: "Type: Allow, Name: Administrators (XYZ\Administrators), Permission: Full control, Inherited From:
        6. OK on the "Permissions" tab.
        7. OK on the "Security" tab.

    Now, this same procedure does not work at the folder level — it results in "access denied" dialogs. I'm looking for some way to perform this exact modification on all the images I edited on the other computer, as sketched below. I'm happy to use the Windows GUI in Explorer or any other included tools, and I'm happy to use the Windows command line. I'd prefer not to use a third-party tool, since I'd have to satisfy myself that it isn't doing anything else. I'm not looking for a different way to change permissions to other settings to make an external drive full of photos editable on multiple computers — at least not in this question.
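
    The built-in icacls tool can express those same two steps — re-enable inheritance and remove the explicit Administrators entry — across many files at once. A sketch (the drive letter and folder are hypothetical):

        :: For everything under the photos folder, enable inherited permissions
        :: and strip the explicit Administrators grant:
        icacls "E:\Photos" /t /c /inheritance:e
        icacls "E:\Photos" /t /c /remove:g "Administrators"

    Here /t recurses, /c continues past errors, /inheritance:e re-enables inheritance from the parent, and /remove:g deletes the explicit grant — the command-line equivalents of steps 4 and 5 above.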

    Read the article

  • Never getting a JSON response when running server-side PHP proxy script but I do with others

    - by Dohk
    I'm on PHP 5.3.4 and Apache 2.2, btw. So I'm using (or trying to use) Simple PHP Proxy (Simple PHP Proxy). I enter a URL at his example page at SPP Example Page and it works fine: I see the JSON response and all the headers. However, when I copy the exact URL, changing only the host to localhost, I get both empty headers and no JSON. Assuming that the script on his site is the same one I downloaded, could this be due to a multitude of things, or a setting in Apache and/or php.ini? So, for example, this gets me a ton of info back:

        benalman.com/code/projects/php-simple-proxy/ba-simple-proxy.php?url=http://github.com/&full_headers=1&full_status=1

    Now changing to localhost:

        http://localhost/ba-simple-proxy.php?url=http://github.com/&full_headers=1&full_status=1

        {"headers":[],"status":{"url":"https:\/\/github.com\/","content_type":"text\/html","http_code":301,"header_size":194,"request_size":182,"filetime":-1,"ssl_verify_result":0,"redirect_count":1,"total_time":0.094,"namelookup_time":0,"connect_time":0.047,"pretransfer_time":0,"size_upload":0,"size_download":185,"speed_download":1968,"speed_upload":0,"download_content_length":185,"upload_content_length":0,"starttransfer_time":0,"redirect_time":0.047,"certinfo":[]},"contents":null}

    I even went basic and just used some curl, and of course got empty objects back — nothing other than false for my content and the URL I set in my JSON. Any help is deeply appreciated, as are any ideas.
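
    Worth noting in that JSON: http_code is 301 and contents is null, i.e. the local cURL stopped at github.com's redirect to HTTPS instead of following it. A sketch of how one might verify the local PHP/cURL setup (assuming standard php and curl binaries are on the PATH):

        # Does the local PHP have cURL at all, and is URL fopen allowed?
        php -r 'var_dump(function_exists("curl_init"), ini_get("allow_url_fopen"));'
        # Reproduce what the proxy does, this time following redirects:
        curl -sIL "http://github.com/" | head
        # In the proxy's own cURL handle, the relevant option would be:
        # curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);

    If following redirects fixes it, one plausible difference from the hosted copy is an open_basedir or safe_mode restriction, under which PHP 5.3 disables CURLOPT_FOLLOWLOCATION.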

    Read the article

  • Problems reviving an old PC, including a graphics card issue

    - by Mick
    I have a PC that seemed to have died years ago which I am trying to revive. It has a dual-core Athlon processor and a Gigabyte motherboard. It had two dual-output graphics cards, and I have long since forgotten which output prints the diagnostic information as the PC starts up. I also suspect that the resolution set on all the monitors was probably higher than my current single monitor can display. The motherboard also has a built-in graphics card, so I thought it might be simplest to remove both graphics cards and plug my monitor into the onboard graphics just while I get things going. Does that seem sensible? Now the other problem: the PC has two hard drives, and I have no idea which one is the primary it attempts to boot from. When I power up, the fan comes on and I hear some chuga-chuga-pause, chuga-chuga-pause, repeating indefinitely. I'm not sure which device is making the noise. There are no beeps at any time, and I see nothing on the screen at any time, not even for a second. Any suggestions? EDIT: If I start up the PC without the power connected to the CD-ROM drive, there is no chuga-chuga noise.

    Read the article

  • Linux 2.6.24-gentoo-r3-comtrance on x86_64: high usage for unknown reasons

    - by Dorjan
    Hello everyone. I'm a complete rookie when it comes to all things Linux-related, so please treat me as such and assume I know nothing. That being said, my top says this:

        top - 12:08:03 up 11 days, 15:36,  0 users,  load average: 5.47, 5.53, 5.46
        Tasks: 296 total,   2 running, 294 sleeping,   0 stopped,   0 zombie
        Cpu(s):  6.3%us,  1.4%sy,  0.0%ni, 71.3%id, 20.6%wa,  0.0%hi,  0.3%si,  0.0%st
        Mem:   8176880k total,  8118236k used,    58644k free,    89312k buffers
        Swap:  1004052k total,        0k used,  1004052k free,  7235652k cached

          PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
         1229 root     15  -5     0    0    0 D    1  0.0 199:28.63 kjournald
         2946 root     20   0  1716  676  552 D    1  0.0 145:02.94 syslogd
        14553 root     20   0  2644 1268  876 R    1  0.0   0:00.34 top
        14609 postfix  20   0  7896 1884 1460 D    1  0.0   0:00.02 bounce
        14630 postfix  20   0  7896 1876 1452 R    0  0.0   0:00.00 bounce

    And my hard drives say:

        > df -k
        Filesystem           1K-blocks      Used Available Use% Mounted on
        /dev/sda3              4925556   4474836    200508  96% /
        /dev/sda5               489992     36090    428602   8% /tmp
        /dev/sda6            377951852 236171160 122581816  66% /var
        none                   4088440         0   4088440   0% /dev/shm

    It has been like this for a few days now. I don't know what is causing the high server load (it's normally around 1.3). Can anyone give any tips on how to track down the culprit? Many thanks,
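
    A hint visible in that output (a reading, not a certain diagnosis): 20.6%wa iowait, kjournald and the postfix workers sitting in state D, and a root filesystem at 96% suggest the load average is driven by processes stuck waiting on disk rather than by CPU. A sketch of how to confirm:

        iostat -x 5                            # per-device utilisation; look for high %util and await
        ps axo stat,pid,user,comm | grep '^D'  # processes in uninterruptible disk wait
        du -xh / | sort -h | tail -20          # what is filling the 96%-full root filesystem

    (iostat comes with the sysstat package, which may need installing on a minimal Gentoo system.)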

    Read the article

  • Why do I get swap space related errors when I still have lots of free memory in Solaris 10?

    - by Tom Duckering
    I am seeing a few of my services suffering/crashing with errors along the lines of "Error allocating memory" or "Can't create new process", etc. I'm slightly confused by this, since the logs show that at the time the system had lots of free memory available (around 26 GB in one case) and was not particularly stressed in any other way. After noting a JVM crash with a similar error that added the query "Out of swap space?", I dug a little deeper. It turns out that someone configured our zone with a 2 GB swap file. Our zone doesn't have capped memory and currently has access to as much of the 128 GB of RAM as it needs; our SAs are planning to cap this at 32 GB when they get the chance. My current thinking is that while there is memory aplenty for the OS to allocate, the swap space seems grossly undersized (based on other answers here). It seems as though Solaris wants to make sure there's enough swap space in case things have to be swapped out (i.e. it reserves the swap space). Is this thinking right, or is there some other reason I get memory-allocation errors with this large amount of memory free and a seemingly undersized swap space?
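
    That reading matches how Solaris accounts for virtual memory: writable anonymous pages must be reservable against the swap pool (physical memory plus swap devices), so allocations can fail while plenty of RAM is free. A sketch of how to inspect and extend it (the file path and size are hypothetical):

        swap -s        # summary: bytes allocated, reserved, used, available
        swap -l        # configured swap devices and their sizes
        # Add swap on the fly if the reservation pool is the bottleneck:
        mkfile 16g /export/swapfile
        swap -a /export/swapfile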

    Read the article

  • Nginx Multiple If Statements Cause Memory Usage to Jump

    - by Justin Kulesza
    We need to block a large number of requests by IP address with nginx. The requests are proxied by a CDN, so we cannot block on the actual client IP address (it would be the IP address of the CDN, not the actual client). Instead we have $http_x_forwarded_for, which contains the IP we need to block for a given request. Similarly, we cannot use iptables, as blocking the IP address of the proxied client would have no effect. We need to use nginx to block requests based on the value of $http_x_forwarded_for. Initially, we tried multiple simple if statements: http://pastie.org/5110910. However, this caused our nginx memory usage to jump considerably: we went from somewhere around a 40 MB resident size to over a 200 MB resident size. If we changed things up and created one large regex that matched the necessary IP addresses, memory usage was fairly normal: http://pastie.org/5110923. Keep in mind that we're trying to block many more than 3 or 4 IP addresses — more like 50 to 100, which may be included in several (20+) nginx server configuration blocks. Thoughts? Suggestions? I'm interested both in why memory usage would spike so greatly using multiple if blocks, and in whether there are better ways to achieve our goal.
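
    For comparison, the construct usually recommended for exactly this job is nginx's geo module, which builds a single lookup structure instead of a chain of per-request if evaluations — a sketch (the addresses are documentation-range placeholders):

        geo $http_x_forwarded_for $blocked {
            default           0;
            203.0.113.7       1;
            198.51.100.0/24   1;
        }

        server {
            ...
            if ($blocked) {
                return 403;
            }
        }

    Because the geo block is defined once at http level, the 20+ server blocks can all share the single $blocked variable rather than duplicating rule chains.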

    Read the article

  • Ubuntu 11.04 fresh install - "Input signal out of range" or "Mode not supported..."

    - by Dennis
    I recently installed Ubuntu 11.04 from a CD .iso. The installation went fine. Upon completion I rebooted, and after a second or two I got a black screen with the message "Input signal out of range" — and there it sits. I've read a few things about how this could be related to screen resolution, refresh rate, etc. For the heck of it I tried a different monitor. The result is the same, but the message provides some clues: "Mode not supported - H: 92.7 kHz, V: 58.3 Hz" (the latter is Hz, not kHz). So my thought is that I should be able to boot the 11.04 install disc in "Try Ubuntu" mode and edit whatever file the install created, putting in the correct values. The problem is, I'm not sure what I am supposed to edit. I looked at the xorg.conf file, but it is so minimal at this point that I am not sure it's where I want to go. By the way, the monitor is an I-Inc ix191a. Anyone have any ideas on how to get around this?
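
    One conventional fix is to pin the sync ranges in xorg.conf so the driver can never pick a 92.7 kHz mode the panel rejects — a sketch (the ranges shown are typical for a 19-inch 1280x1024 LCD and are an assumption; the monitor's manual is authoritative):

        Section "Monitor"
            Identifier  "Monitor0"
            HorizSync   30.0 - 80.0
            VertRefresh 56.0 - 75.0
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Monitor    "Monitor0"
            SubSection "Display"
                Modes  "1280x1024"
            EndSubSection
        EndSection

    From the live CD, the file to create or extend would be /etc/X11/xorg.conf on the installed root partition.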

    Read the article

  • Server not resolving after restart

    - by DomainSoil
    I restarted our server today, and now cannot for the life of me get anything to resolve. I suspect it has something to do with our routes; I've tried numerous Google results to no avail. Here is as far as I've gotten:

        [root@www ~]# route -n
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.1.101   0.0.0.0         255.255.255.255 UH    0      0        0 eth1
        0.0.0.0         192.168.1.101   0.0.0.0         UG    0      0        0 eth1

    Things you need to know: our server (CentOS 6.3) runs two virtual machines, one live and one development. They mirror each other as much as possible, but I can't find where I've gone wrong with the live server; the dev server works fine.

        [root@www ~]# ifconfig eth1
        eth1      Link encap:Ethernet  HWaddr xx:xx:xx:xx:xx:xx
                  inet6 addr: xxxx:xxx:xxxx:xxxx:xxxx/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:118206 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:165 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:7825749 (7.4 MiB)  TX bytes:7146 (69.2 KiB)
                  Interrupt:28

        [root@www ~]# /etc/init.d/network status
        Configured devices:
        lo Auto eth0
        Currently active devices:
        lo eth1

    If there is any other information you need, please don't hesitate to ask!
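
    Given that the symptom is specifically resolution, it may help to separate DNS from routing first — a quick sketch:

        cat /etc/resolv.conf          # are any nameservers configured after the restart?
        ping -c1 8.8.8.8              # can we reach an external IP at all (routing/link)?
        dig @8.8.8.8 example.com      # does resolution work against an outside resolver?

    Two oddities visible in the output above, offered as a guess: eth1 has an IPv6 link-local address but no IPv4 address, and the init script lists eth0 as the configured device while eth1 is the active one. A renamed NIC (e.g. a stale /etc/udev/rules.d/70-persistent-net.rules entry on CentOS 6) would explain both.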

    Read the article

  • In need of assistance for recovering a lost partition

    - by Tek
    The program that has worked best for me is Active@ Partition Recovery. I'm so close, and yet so far, from recovering my data. Okay, so here's what happens. In the following screenshot (I blanked out a folder and filename containing profanity, in case some of you are at work :P), it detects the partition I accidentally deleted, with 100% of my data listed. Of course, I didn't write ANY data to that drive after the deletion. But when I click "recover" and the recovery process finishes, the partition that was just recovered shows up in Windows... EMPTY. The program seems able to see my lost files, but when I recover the partition, Windows doesn't seem to agree =( Things I tried: I ran chkdsk /r /f after recovering the partition; apparently it couldn't find any errors. I tried other software like TestDisk to recover the partition, but they all act like Windows: they detect the (missing) partition, but when I browse it there are no files. The partition is there, along with all the file and data information. The sector information is also in the screenshot — is there any way I can use this to my advantage in recovering my data? Other information: dual boot, Win8 / Ubuntu 12.10 x64; 1 TB internal desktop drive; GPT layout; NTFS-formatted drive; 64K allocation size.
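
    Two avenues that fit a GPT/NTFS setup like this, sketched with the GPT fdisk / TestDisk tool set (the device name /dev/sdb is hypothetical — double-check before running anything):

        # Snapshot the current partition table before experimenting further:
        sudo sgdisk --backup=/tmp/sdb-gpt.bak /dev/sdb
        # Inspect the table; GPT keeps a backup copy at the end of the disk,
        # and gdisk's recovery menu (r) can restore a deleted entry from it:
        sudo gdisk -l /dev/sdb
        # Last resort if the filesystem metadata really is gone: carve files
        # directly, ignoring the partition table entirely:
        sudo photorec /dev/sdb

    Since the tools can see the files but the mounted result shows nothing, a mismatched NTFS boot sector or MFT mirror is one plausible culprit; TestDisk's "Advanced > Boot" repair targets exactly that.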

    Read the article
