Search Results

Search found 800 results on 32 pages for 'locks'.

Page 23/32

  • Server configurations for hosting MySQL database

    - by shyam
    I have a web application which uses a MySQL database hosted on a virtual server. I've been using this server since I started the application, when the database was really small. Now it has grown and the server is not able to handle the db, causing frequent db errors. I'm planning to get a new server and I need suggestions for that. Like I said, the db is now 9 GB and is growing considerably fast. There are a number of tables with millions of rows, which are frequently updated and queried. The most frequent error the db shows is "Lock wait timeout exceeded". Previously there used to be "The total number of locks exceeds the lock table size" errors too, but I could avoid those by increasing the InnoDB buffer pool size. Please suggest what configurations I should look for in the server I'm going to buy. I read somewhere that the db should ideally have a buffer pool larger than the size of its data, so in my case I guess I'd need memory greater than 9 GB. What other things should I look for in the server? Just tell me if I should give you more info about the
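
    A quick way to size the new box before buying is to compare the data set with the current InnoDB settings. A minimal sketch, assuming shell access to the MySQL host (names and paths are illustrative):

      # How big is the data, per schema, in GB?
      mysql -e "SELECT table_schema,
                       ROUND(SUM(data_length + index_length)/1024/1024/1024, 2) AS size_gb
                FROM information_schema.tables GROUP BY table_schema;"
      # Current buffer pool and lock-wait settings
      mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
      mysql -e "SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';"
      # On MySQL of this vintage, raising innodb_buffer_pool_size means editing my.cnf and
      # restarting mysqld, so budget RAM for the pool plus the OS and everything else.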

    Read the article

  • java.lang.OutOfMemoryError: unable to create new native thread

    - by Brad
    I consistently get this exception when trying to run my JUnit tests on my Mac:

      java.lang.OutOfMemoryError: unable to create new native thread
      at java.lang.Thread.start0(Native Method)
      at java.lang.Thread.start(Thread.java:658)
      at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:727)
      at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:657)
      at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92)
      at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:197)
      at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:184)
      at java.security.AccessController.doPrivileged(Native Method)
      at com.google.appengine.tools.development.ApiProxyLocalImpl.doAsyncCall(ApiProxyLocalImpl.java:172)
      at com.google.appengine.tools.development.ApiProxyLocalImpl.makeAsyncCall(ApiProxyLocalImpl.java:138)

    The same set of unit tests passes perfectly fine on Ubuntu and Windows. Some information about my system resources on the Mac:

      $ ulimit -a
      core file size (blocks, -c) 0
      data seg size (kbytes, -d) unlimited
      file size (blocks, -f) unlimited
      max locked memory (kbytes, -l) unlimited
      max memory size (kbytes, -m) unlimited
      open files (-n) 1024
      pipe size (512 bytes, -p) 1
      stack size (kbytes, -s) 8192
      cpu time (seconds, -t) unlimited
      max user processes (-u) 266
      virtual memory (kbytes, -v) unlimited
      $ java -version
      java version "1.6.0_24"
      Java(TM) SE Runtime Environment (build 1.6.0_24-b07-334-10M3326)
      Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02-334, mixed mode)

    The reason I don't think this is an application issue is that the same tests pass in different environments. I have tried setting the heap to 1024m and 512m and setting the stack to 64k and 128k (and each of these combinations) with no luck. My open files limit was originally 256 and I have bumped this to 1024. I have been googling around for a bit and all posts say to decrease heap size and increase stack size, but that doesn't seem to help. Anyone have any more ideas? EDIT: Here is some environment information from my Ubuntu box:

      $ ulimit -a
      core file size (blocks, -c) 0
      data seg size (kbytes, -d) unlimited
      scheduling priority (-e) 20
      file size (blocks, -f) unlimited
      pending signals (-i) 16382
      max locked memory (kbytes, -l) 64
      max memory size (kbytes, -m) unlimited
      open files (-n) 1024
      pipe size (512 bytes, -p) 8
      POSIX message queues (bytes, -q) 819200
      real-time priority (-r) 0
      stack size (kbytes, -s) 8192
      cpu time (seconds, -t) unlimited
      max user processes (-u) unlimited
      virtual memory (kbytes, -v) unlimited
      file locks (-x) unlimited
      $ java -version
      java version "1.6.0_24"
      Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
      Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
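
    One difference between the two environments is the max user processes limit (266 on the Mac versus unlimited on Ubuntu). Whether it is the culprit is an assumption, but it is cheap to test; a sketch with illustrative values:

      # Per-shell: raise the process cap before running the tests
      ulimit -u          # show the current limit (266 here)
      ulimit -u 1024     # raise it for this shell session only
      # System-wide on macOS (soft and hard limits; requires root)
      sudo launchctl limit maxproc 1024 2048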

    Read the article

  • Windows 7 Blank Screen on Boot / Login

    - by Greg
    I have a new system that's having a few problems... sometimes (seems to be when the PC is cold, i.e. has been switched off for a while, though that could be my imagination) I get a blank blue screen when I boot up. The system boots normally and auto-logs-in. The desktop loads and I'm even able to launch applications, but then everything disappears and the screen goes to the default Windows desktop blue colour (not the desktop image, just a plain blue with no mouse cursor). At this point the machine completely locks up - I'm unable to even toggle Num Lock and have to hold in the power button for 5 seconds to kill it. Interestingly, if I manage to launch some applications before it goes blank, they will usually crash... sometimes explorer.exe will crash too. When I reboot, the system is fine and stable. I've installed the latest graphics drivers and run memtest86+ for 6 passes (and counting) with no errors. The system specs are:
      CPU: Intel i7 2.66 @ 3.4GHz
      RAM: 6GB (3 * 2GB DDR3)
      HDD: 128GB Crucial M225 SSD
      Motherboard: Gigabyte EX58-UD3R
      Gfx: ATI Radeon Sapphire 5870 1GB
    Note: There are a few similar questions but I haven't found one that matches my symptoms.

    Read the article

  • Computer locking up, looking for bootable hardware diagnostic tool.

    - by Carl Menke
    Well, today I helped my friend build a computer. All went pretty well until we got to installing Win7. The thing is, it was crashing constantly. I adjusted pretty much every setting in the BIOS and removed as much hardware as possible to try and prevent a crash. No dice. So far I've tried running an Ubuntu live CD without the hard drive installed. Nope, crashed on boot. And then I just tried Microsoft's RAM utility disc and it eventually locked up on that too (the RAM passed, though). So it seems to me like it's either the CPU (AMD Phenom II X3) or the motherboard that could be bad, but I don't know how to test them individually for problems. I thought it could be an overheating issue, but the BIOS reports that the CPU temp is fine, idling around 34C. Any advice or diagnostic disc that could help me out? TL;DR: Computer locks up frequently during use (cannot even boot/install an operating system), memory is fine, probably CPU or mobo, BIOS says CPU temps are fine. What should I try?

    Read the article

  • Cannot access drive in Windows 7 after scandisk lockup, but can in safe mode....

    - by Matt Thompson
    I ran scandisk on my external USB drive due to the inability to delete a few files. Windows asked me if I wanted to unmount the drive before the scan, warning me that it would be unusable until the scan was finished, and I said yes. During the scan, my machine locked up, and I was forced to reboot it. When it came up, I was unable to access the drive, getting an error that "L: is not accessible, access is denied". Computer Management sees the drive, and it has the proper amount of disk space filled. I booted into safe mode, and can access the drive with no problems, and I noticed that in Explorer all the folders have locks on them. I booted back into Windows, but still could not access the drive, getting the same error as above. However, if I right-click on the drive, select Properties, go to Customize, and in the folder pictures area select Choose File, a window opens up that shows the root of the directory, with all the folders able to be accessed - but again, the icon is the folder icon with a lock on it. I can even copy files from the drive to another. So, the files are not gone, and Windows can obviously access the drive no matter what it thinks, so there has to be a problem with the flag Windows put on the drive when it ran the original scan that failed. I was able to run a scan both in safe mode with no problems, and in Windows. In Windows, I received the cannot-access error the first time I ran scandisk on it, but if I try again, it works fine. Any ideas on how to clear the flag that Windows set, so I can access the drive normally again?

    Read the article

  • Git fails to push with error 'out of memory'

    - by jwir3
    I'm using gitosis on a server that has a low amount of memory, specifically around 512 MB. When I try to push a large folder (it happens to be a backup from an Android phone), I get:

      me@corellia:~/Configs/$ git push origin master
      Counting objects: 18, done.
      Delta compression using up to 8 threads.
      Compressing objects: 100% (14/14), done.
      fatal: Out of memory, malloc failed
      MiB | 685 KiB/s
      error: pack-objects died of signal 13
      error: failed to push some refs to 'git@dagobah:Configs'

    I've been searching the web, and notably found http://www.mail-archive.com/[email protected]/msg01747.html as well as http://git.661346.n2.nabble.com/Out-of-memory-error-during-git-push-td5443705.html but these don't seem to help me for two reasons: 1) I am not actually out of memory when I push. When I run 'top' during the push, I get:

      24262 git 18 0 16204 6084 1096 S 2 1.2 0:00.12 git-unpack-obje

    Also, during the push, if I run head /proc/meminfo, I get:

      MemTotal: 524288 kB
      MemFree: 289408 kB
      Buffers: 0 kB
      Cached: 0 kB
      SwapCached: 0 kB
      Active: 0 kB
      Inactive: 0 kB
      HighTotal: 0 kB
      HighFree: 0 kB
      LowTotal: 524288 kB

    So, it seems that I have enough memory free, but it's actually still failing, and I'm not enough of a git guru to figure out what is happening. I would appreciate it if someone could give me a hand here and tell me what could be causing this problem, and what I can do to solve it. Thanks! EDIT: The output of running the ulimit -a command:

      scottj@dagobah:~$ ulimit -a
      core file size (blocks, -c) 0
      data seg size (kbytes, -d) unlimited
      scheduling priority (-e) 0
      file size (blocks, -f) unlimited
      pending signals (-i) 204800
      max locked memory (kbytes, -l) 32
      max memory size (kbytes, -m) unlimited
      open files (-n) 1024
      pipe size (512 bytes, -p) 8
      POSIX message queues (bytes, -q) 819200
      real-time priority (-r) 0
      stack size (kbytes, -s) 10240
      cpu time (seconds, -t) unlimited
      max user processes (-u) 204800
      virtual memory (kbytes, -v) unlimited
      file locks (-x) unlimited
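
    One commonly suggested workaround for low-memory git servers is to cap how much memory pack-objects may use on the remote side; a sketch, assuming shell access to the gitosis account on the server (the values are illustrative, not tuned):

      # Run as the git/gitosis user on the server
      git config --global pack.threads 1          # single delta-compression thread
      git config --global pack.windowMemory 32m   # cap the delta window memory
      git config --global pack.deltaCacheSize 32m
      git config --global pack.packSizeLimit 64m  # split packs instead of building one huge pack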

    Read the article

  • How to install Red Hat Enterprise Linux on Apple Macbook Pro MacBookPro4,1

    - by Todd V. Rovito
    I have a one-year-old MacBook Pro that I am trying to get RHEL 5.4 installed on via Boot Camp. No matter what I do, I can't get the installer to boot. I have tried multiple DVDs and even verified the install works on a new MacBook Pro. Most of the time the installer simply locks up. I usually use "linux text" with all-generic-ide on the boot line. I removed the ide parameter and just used "linux text". The result I get is that a bunch of kernel messages appear, then the background turns blue and a thin text box pops up saying it's loading ata... something; it disappears too fast for me to read. Then the machine freezes. I pressed the Alt-function keys to see if I could look at the system log; here is what it says:

      Alt-F3 says "trying to mount CD device hda"
      Alt-F4 says:
        status error: hda: lastFailedSense
        hda: Failed opcode was: unknown
        hda: Lost interrupt
        hda: Drive not ready for command
        ide-cd: command 0x3 timed out

    Above this junk it looks like it found the partition, because it knew it was 20 GB and listed it as /dev/sda3. I think it has something to do with the CD drive; is that possible? Thanks again for the support. PS: I posted in the Apple support forums (Apple.com Support Discussions, Boot Camp, Installation and Storage) and didn't get an answer.
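
    Since the errors all point at the IDE/CD layer, a combination sometimes suggested for installers of this era is to disable DMA and ACPI at the boot prompt; these kernel parameters exist, but whether they help on this particular hardware is an assumption:

      # At the RHEL installer boot: prompt, on one line, e.g.
      linux text all-generic-ide ide=nodma acpi=off noapic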

    Read the article

  • What would cause my Windows machine to seize?

    - by Coltin
    I run Windows XP SP2 on a desktop I built (I know that doesn't provide much, but I can't say "on a Dell X103938R" or something). It has 'seized' about 7 times in its year-and-a-half life. Everything freezes. I can't move the mouse cursor and the keyboard seems unresponsive. I can turn the monitor on and off and it will hold the last image. The light for the mouse is responsive if I move it. The keyboard lights change when pressed (Caps Lock, etc). I've waited up to ten minutes for a change, nothing. I haven't connected any activity to the seizing. It's happened when all I was "running" were fullscreen programs (games), just checking email, or once when I was sitting at my desktop (I was reading a book and when I tried to use the mouse, nothing). I've never been able to figure it out. I have to hard reset, and then it's fine. It doesn't run a file system check or anything (not sure if Windows does that). No error when I load up the computer, nothing. If I had uTorrent open, it will have to recheck the torrent files to make sure they weren't corrupted, though. (It's not always open when it seizes either.) I'm using an AMD Athlon 5400+ with an NVidia GeForce 8600 GT, if that helps. I'm using two hard drives, a 500 GB Western Digital and a 1 TB Hitachi.

    Read the article

  • Mac SMB connections to Windows 2003 server, leaving Open Files

    - by Bruce Garlock
    We have several Mac clients (both 10.5 and 10.6) mounting a share from a Windows 2003 server. At least once a day, our archivist will go into this share to archive items from it to the backup server. Most of the time she has no issues: she copies the folder to the archive server and, when it's done, deletes it from this share. Then she will come upon one, and it will say she doesn't have permission. When I go into the open sessions, it will say that a particular user has a READ lock on the file in Windows 2003. Of course, this person does not have the file open, and the only way we can delete it is to close the open session on the file. My thoughts:
      1. The Mac likes to "sprinkle" hidden resource forks on SMB servers, and possibly, when the Mac that last wrote to that share closes the file, these leftover files still exist.
      2. Windows 2003 has a bug that doesn't properly release the oplock on the file?
      3. Steve Ballmer just doesn't like Macs, so he wants to annoy everyone by not releasing file locks :-)
    What can be done about this? It happens every day, and sometimes several times per day! Many thanks, Bruce
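
    If the stray Finder metadata and ._ AppleDouble files are part of the problem, a Mac-side tweak that is often suggested (an assumption here, not a confirmed fix for the stuck READ locks) is to stop the Finder writing metadata to network shares and to clean up what is already there:

      # Stop Finder creating .DS_Store files on network volumes (per user; log out/in afterwards)
      defaults write com.apple.desktopservices DSDontWriteNetworkStores -bool true
      # Merge/remove stray ._ AppleDouble files on the mounted share (path is illustrative)
      dot_clean -m /Volumes/ShareName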

    Read the article

  • Not able to run Firefox on a headless Ubuntu Server 9.10

    - by Julio J.
    I need to run Firefox on my server in order to execute some Selenium tests from Hudson. I would love not to have to install a complete GUI, so I installed Xvfb in order to fake one (that's my understanding of it; correct me if my assumptions are wrong). After some time trying to make it work, I'm stuck with the following situation:

      $ sudo Xvfb -ac :99 &
      [dix] Could not init font path element /var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType, removing from list!
      (EE) config/hal: NewInputDeviceRequest failed (2)
      (EE) config/hal: NewInputDeviceRequest failed (2)
      (EE) config/hal: NewInputDeviceRequest failed (2)
      (EE) config/hal: NewInputDeviceRequest failed (2)
      (EE) config/hal: NewInputDeviceRequest failed (2)
      $ firefox
      [dix] Could not init font path element /var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType, removing from list!
      [config/dbus] couldn't register object path
      (EE) config/hal: NewInputDeviceRequest failed (2)
      (EE) config/hal: NewInputDeviceRequest failed (2)
      (EE) config/hal: NewInputDeviceRequest failed (2)
      (EE) config/hal: NewInputDeviceRequest failed (2)
      (EE) config/hal: NewInputDeviceRequest failed (2)
      Xlib: extension "RANDR" missing on display ":99.0".
      GConf Error: Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash. See http://projects.gnome.org/gconf/ for information. (Details - 1: Failed to get connection to session: /bin/dbus-launch terminated abnormally without any error message)

    I'm running Firefox without installing it from the repositories, and I'm getting a socket timeout when I try to run the Selenium tests, so I guess the problem is in Firefox and Xvfb. I have already installed the package gconf-defaults-service - GNOME configuration database system (system defaults service), which some forums suggest as a fix, but in my case it doesn't work. Any explanation of the problem, and ways of solving it without installing a full GUI, would be very helpful.
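
    The GConf error usually means there is no session D-Bus on the fake display; a minimal sketch of the usual headless setup (display number and geometry are arbitrary):

      # Virtual framebuffer on :99
      Xvfb :99 -screen 0 1280x1024x24 -ac &
      export DISPLAY=:99
      # Give Firefox a session bus so GConf can start
      dbus-launch firefox &
      # If xvfb-run is installed, it does the display bookkeeping in one step
      xvfb-run -a firefox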

    Read the article

  • Maximum limit of filepointer in php reached and not changeable

    - by mlaug
    I have a server with the current 5.3.x version of PHP installed. We are running a really simple and small socket server in PHP that connects to a lot of clients, so we needed to raise the open-file limit. That has already been done on the server for the user that runs the service:

      # ulimit -a
      core file size (blocks, -c) 0
      data seg size (kbytes, -d) unlimited
      scheduling priority (-e) 0
      file size (blocks, -f) unlimited
      pending signals (-i) 29879
      max locked memory (kbytes, -l) 64
      max memory size (kbytes, -m) unlimited
      open files (-n) 8192
      pipe size (512 bytes, -p) 8
      POSIX message queues (bytes, -q) 819200
      real-time priority (-r) 0
      stack size (kbytes, -s) 8192
      cpu time (seconds, -t) unlimited
      max user processes (-u) 29879
      virtual memory (kbytes, -v) unlimited
      file locks (-x) unlimited

    and we compiled PHP with --enable-fd-setsize=8192. Still, we are getting this once in a while in our logs:

      [19-Nov-2012 09:24:23 Europe/Berlin] PHP Warning: socket_select(): You MUST recompile PHP with a larger value of FD_SETSIZE. It is set to 1024, but you have descriptors numbered at least as high as 1024. --enable-fd-setsize=2048 is recommended, but you may want to set it to equal the maximum number of open files supported by your system, in order to avoid seeing this error again at a later date.

    Does anyone know how to configure the Unix server and PHP correctly to get this working? I found a bug report, but it is from 2006 and marked as "not a bug": https://bugs.php.net/bug.php?id=37025&edit=1
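
    FD_SETSIZE is a compile-time constant, so one workaround sometimes suggested (an assumption, since build options vary by PHP version) is to force it through CFLAGS and rebuild:

      # Rebuild PHP with a larger select() descriptor set
      export CFLAGS="-D FD_SETSIZE=8192"
      ./configure --with-your-usual-options   # placeholder for the existing build flags
      make clean && make && sudo make install
      # And confirm the runtime limit for the user actually running the socket server
      ulimit -n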

    Read the article

  • SBD killing both cluster nodes when there are even small SAN network problems

    - by Wieslaw Herr
    I am having problems with stonith SBD in an openais-based cluster. Some background: the active/passive cluster has two nodes, node1 and node2. They are configured to provide an NFS service to users. To avoid problems with split-brain, they are both configured to use SBD. SBD is using two 1MB disks available to the hosts via a multipath fibre-channel network. The problems start if something happens to the SAN network. For example, today one of the Brocade switches got rebooted and both nodes lost 2 out of 4 paths to each disk, which resulted in both nodes committing suicide and rebooting. This, of course, was highly undesirable because a) there were paths left, and b) even if the switch were out for only 10-20 seconds, a reboot cycle of both nodes takes 5-10 minutes and all NFS locks would be lost. I tried increasing the SBD timeout values (to 10 sec+ values, dump attached at the end), however a "WARN: Latency: No liveness for 4 s exceeds threshold of 3 s" hints that something isn't working as I would expect it to. Here is what I would like to know:
      a) Is SBD working as it should, killing nodes even though 2 paths are still available?
      b) If not, is the attached multipath.conf file correct? The storage controller we use is an IBM SVC (IBM 2145); should there be any specific configuration for it (as in multipath.conf.defaults)?
      c) How should I go about increasing the timeouts in SBD?
    Attachments: multipath.conf and sbd dump (http://hpaste.org/69537)
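
    For question (c), the SBD timeouts live in the on-disk header and are set when the slots are created; a sketch of dumping and recreating them with longer values (the device path and numbers are illustrative, and msgwait is conventionally kept at roughly twice the watchdog timeout):

      # Inspect the current header, including Timeout (msgwait) and Timeout (watchdog)
      sbd -d /dev/mapper/sbd_disk1 dump
      # Recreate the header with msgwait=20s and watchdog=10s (repeat per SBD disk)
      sbd -d /dev/mapper/sbd_disk1 -4 20 -1 10 create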

    Read the article

  • Clustering filesystem for small files

    - by viraptor
    Hi, I'm looking for a distributed filesystem which I could use for storing lots of small files (<1MB usually). What I want to get is:
      - 2 servers which have the fs mounted themselves and mirror the data
      - locking support (among reachable nodes)
      - some kind of best-effort automatic resynchronisation after one node goes down and comes back again
    What I mean by the resync is that I'm OK with both servers doing read/write operations even if they split-brain. I'm also OK if a local process obtains a lock when the other host is not reachable. From the resync I expect only a file-level consistent view after a while - that is, if file x is modified on both nodes during a split-brain, I don't really care which one is available after they join again, as long as it's a full file, not one block coming from node1 and another block from node2. Is there a solution like that out there? I see that Gluster has some problems with file locks (even in 3.1). I also noticed that OCFS2 will panic if both nodes split-brain. What other filesystem would allow me to do what I want?
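
    For comparison, the two-node mirrored setup described above maps onto a GlusterFS replicated volume; a sketch (host and brick names are made up), with the caveat that the file-locking issues mentioned for 3.1 still apply:

      # On either node, with glusterd running on both
      gluster peer probe node2
      gluster volume create smallfiles replica 2 transport tcp \
          node1:/export/brick1 node2:/export/brick1
      gluster volume start smallfiles
      # Each node then mounts the volume locally
      mount -t glusterfs localhost:/smallfiles /mnt/smallfiles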

    Read the article

  • My client's solution of a Windows SBS 2011 VM on an Ubuntu host and VirtualBox is pinning the host CPU

    - by Scott Stamp
    Here's my situation: I've got a client hosting two servers (one of them a VM), with the host providing VMware Zimbra and the guest running Windows Small Business Server 2011. Unfortunately, the person before me had configured this setup as follows.
      Host: Ubuntu Desktop Edition 10.04 (I know, again, not my choice) running VMware Zimbra
        - 8GB of RAM
        - On-board RAID1 of two 320GB Seagate Barracuda drives for the OS
        - Software RAID5 of four 500GB WD Caviar Black drives on MDADM for bulk storage (sorry, I don't know the model #)
        - A relatively competent quad-core Intel Core i7 CPU from the Nehalem architecture (not suspicious of this as the bottleneck)
      Guest: Windows Small Business Server 2011
        - 4GB of RAM
        - Host-equivalent CPU allocation
        - VDI file for OS hosted on the on-board RAID, VDI file for storage hosted on the on-board RAID
    For some reason when running, the VM locks up when sitting nearly idle, and the VirtualBox process reports values of 240%+ in top (how is that even possible?!). Anyone have any ideas or suggestions? I'm totally stumped on this one. Happy to provide whatever logs you'd like to take a look at. Ideally I'd drop VirtualBox and provision this with VMware Workstation, but the client has objected to the (very nominal) costs involved. If hardware needs to be purchased to help, it will be, but we're considering upgrades a last resort at this time. Thanks in advance! *fingers crossed*
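
    One cheap knob to try before buying hardware is VirtualBox's per-VM execution cap, which bounds how much host CPU the guest may burn; a sketch (the VM name and percentages are assumptions):

      # With the VM powered off
      VBoxManage modifyvm "SBS2011" --cpus 2 --cpuexecutioncap 80
      # Check what the VM is currently configured with
      VBoxManage showvminfo "SBS2011" | grep -i -e cpu -e exec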

    Read the article

  • Why does Outlook 2007 lose connection to Exchange when Windows 7 64-bit turns off display?

    - by Greg R.
    The problem: When Windows 7 puts the display to sleep, Outlook 2007 and also Microsoft Office Communicator 2005 lose the connection to the Exchange server. When I unlock the computer, Outlook is logged out of Exchange and prompts me for credentials (although usually I have to restart Outlook to get it to reconnect). The network connection is still active, e.g. other applications don't lose their connection to the network or Internet when Windows 7 puts the display to sleep. I'm using a Dell E5400 notebook running Windows 7 Enterprise 64-bit with Outlook 2007 connecting to a corporate Exchange server (not sure if it's Exchange 2007 or 2010). The Dell is typically docked and connected via DVI (through the dock) to two Dell monitors. The Power Options in Windows 7 are set as follows:
      Turn Off The Display: 15 minutes
      Put The Computer To Sleep: never
    Those are the "Plugged In" settings, but the problematic behavior is the same when running on battery. When Windows 7 turns off the display, it automatically locks the computer. E.g., I have to re-enter my credentials to access the machine. This is per corporate policy. The equivalent setup on my previous Dell notebook running Windows XP SP3 did not result in this problem with Outlook 2007 or Office Communicator 2005 connecting to the very same Exchange server. The problem began when I switched to the new Dell E5400 with Windows 7.

    Read the article

  • Server stops responding, can't find issue?

    - by Corey W
    I've had a pretty basic server up and running CentOS with a webserver/database, and have noticed that it has locked up a few times in the middle of the night. It seems to happen randomly. When it locks up I can SSH in (although it seems to hang once connected), but can't access cPanel/WHM and have to reboot the server to get everything back up. Checking the messages log, I see the below like clockwork every 5 minutes 1 second, and then it just stops logging anything until I reboot. I can't seem to find any log showing an issue. Is there somewhere I can check to try to figure out what is happening? Could this be caused by the CPU being maxed?

      Nov 17 08:01:35 s1 pure-ftpd: (__cpanel__service__auth__ftpd__Q13SKrtaCJCHjBezTfU8Iqmsi@127.0.0.1) [INFO] Logout.
      Nov 17 08:06:36 s1 pure-ftpd: ([email protected]) [INFO] New connection from 127.0.0.1
      Nov 17 08:06:36 s1 pure-ftpd: ([email protected]) [INFO] __cpanel__service__auth__ftpd__mxidFBSnQXmR0QzqSxlqrXLIH0CmJ0GPh9bZ5V3 is now logged in
      Nov 17 08:06:37 s1 pure-ftpd: (__cpanel__service__auth__ftpd__mxidBDaCgnqSxlqrXLIH0CmJ0GPh9bZ5V3@127.0.0.1) [INFO] Logout.
      Nov 17 08:11:37 s1 pure-ftpd: ([email protected]) [INFO] New connection from 127.0.0.1
      Nov 17 08:11:38 s1 pure-ftpd: ([email protected]) [INFO] __cpanel__service__auth__ftpd__T4B7F71acf1dsdJSeJHdqKNcbOdpzNnN_GttgcM is now logged in
      Nov 17 08:11:38 s1 pure-ftpd: (__cpanel__service__auth__ftpd__T4B7F71acf1KNcbOdpzNnN_GttgcM@127.0.0.1) [INFO] Logout.
      Nov 17 08:16:38 s1 pure-ftpd: ([email protected]) [INFO] New connection from 127.0.0.1
      Nov 17 08:16:38 s1 pure-ftpd: ([email protected]) [INFO] __cpanel__service__auth__ftpd__W5C1RzumtaNwe4cU8Lt1 is now logged in
      Nov 17 08:16:38 s1 pure-ftpd: ([email protected]) [INFO] Logout.
      Nov 17 09:10:58 s1 kernel: imklog 4.6.2, log source = /proc/kmsg started.
      Nov 17 09:10:58 s1 rsyslogd: [origin software="rsyslogd" swVersion="4.6.2" x-pid="1094" x-info="http://www.rsyslog.com"] (re)start
      Nov 17 09:10:58 s1 kernel: Initializing cgroup subsys cpuset
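
    Since nothing reaches the logs once the box wedges, it can help to have sysstat sampling in the background so there is history to look at after the reboot; a minimal sketch for a stock CentOS install:

      # Install and enable periodic collection (10-minute samples by default)
      yum install -y sysstat
      chkconfig sysstat on && service sysstat start
      # After the next lockup/reboot, look at load, memory and swap around the time it died
      sar -q    # run queue / load average history
      sar -r    # memory usage history
      sar -W    # swapping history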

    Read the article

  • How can I print from my Lion Mac mini to my Windows XP machine, with simple file sharing?

    - by Jules
    I have quite a complicated setup, perhaps, and a lot of history on this issue; I'm hoping that I don't have to buy a new printer. I've got an HP Wireless USB Print Server, which requires client software; I can't just use it as an IP printer. The HP software is pretty poor on the Mac and is no longer supported, and it often locks up the print server and takes considerable effort to actually print something, let alone if a Windows machine attaches to it first. My printer is an Epson Stylus R285. However, the Windows client software is fine and we can print from Windows 7 / XP without problems. We have simple file sharing set up, as this is the only way I could get Windows XP to talk to Windows 7. However, I can't seem to get my Mac mini to connect as anything other than a guest to my XP machine, to connect to the shared printer. I'm now considering some kind of internet printing, as that seems the simplest solution, but I'm not sure what will work with my setup.
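
    If the XP machine shares the Epson directly, the Mac can usually reach it through CUPS over SMB without the HP client software; a hedged sketch (host, share, credentials and PPD names are placeholders for this setup):

      # Add the Windows-shared printer to CUPS on the Mac
      lpadmin -p EpsonR285 -E \
          -v smb://WORKGROUP/XP-HOSTNAME/EpsonShare \
          -P /Library/Printers/PPDs/Contents/Resources/EPSON_R285.ppd.gz
      # Print a test page
      lpr -P EpsonR285 /etc/hosts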

    Read the article

  • How can I pause console output in rxvt?

    - by Javid Jamae
    I'm running rxvt in Cygwin on a Windows box. This is how I invoke it: rxvt -sr -sl 2500 -sb -geometry 90x30 -tn rxvt -fn "Lucida Console-14" -e /usr/bin/bash --login -i Anyone know how to pause the console output in rxvt? I can use Ctrl-S / Ctrl-Q to pause / un-pause, but this won't work if a script is already running and spewing output to stdout. Highlighting the terminal window with the mouse doesn't seem to work like with other consoles such as the standard Cygwin console or the Windows command prompt console. Some sort of scroll lock would be nice, but I can't seem to find any way to do this. I know I could just pipe my output to a file, but I want a way to pause the output for something that I didn't expect to explode with console output. Basically I want to scroll back while it's running without it constantly moving me to the bottom of the output buffer as it updates more data to stdout. I don't particularly care if the solution given actually pauses the script (like when you highlight the mouse in the Windows command window), or just scroll-locks and lets me scroll while it's still running the underlying script, though I'd like to know how to do both if it's possible.
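
    A workaround that doesn't depend on rxvt itself is to run the shell inside a terminal multiplexer, whose scrollback/copy mode freezes the visible output while the job keeps running; a sketch:

      # Start the session under GNU screen (or tmux)
      screen -S work
      # Inside screen:  Ctrl-a [   enters copy/scrollback mode - the display stops following new output
      #                 Esc        leaves copy mode and jumps back to the live bottom
      # tmux equivalent: tmux new -s work, then Ctrl-b [ to scroll and q to resume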

    Read the article

  • How to kill tasks in Windows 7 when even Task Manager won't open or respond?

    - by endolith
    Occasionally one of my computers will get so bogged down that everything locks up, Ctrl+Alt+Del doesn't work, Task Manager won't open, or they work but are opening so slowly that it will take hours or days to shut down other processes and regain control of the computer, etc. Is there a way to, for instance, force Task Manager to be highest priority so it always opens immediately with Ctrl+Shift+Esc even when some other process/driver is hogging the CPU? Is there some other program that can run in the background and open immediately like this? This question isn't about fixing "underlying problems". No matter how much memory you have, it's still possible for a rogue process to eat it all up and lock up the computer in page-fault thrashing, hog the CPU, etc. This question is about how to take back control of the computer when that happens. Basically, when these kinds of lock-ups happen, I want to open some kind of task manager that pauses every other process and allows me to kill one of them, and then lets everything resume so I can save my work, etc. Otherwise my only option is to hold down the power button. Antifreeze is supposed to do exactly what I want, pausing all other applications and starting a task manager to kill the offender, but in my testing it actually does neither.

    Read the article

  • How Do I Stop NFS Clients from Using All of the NFS Server's Resources?

    - by Ken S.
    I have a v4 NFS server running on Ubuntu 12.04 LTS. It is the main repository for the web assets that four external nginx webservers mount to serve up to site visitors. These client servers connect to it via a read-only mount. Each of these RO servers has this displayed when I check the mounts:

      10.0.0.90:/assets on /var/www/assets type nfs4 (ro,addr=10.0.0.90,clientaddr=0.0.0.0)

    The NFS master's /etc/exports file contains entries like this for each server:

      /mnt/lvm-ext4 10.0.0.40(ro,fsid=0,insecure,no_subtree_check,async)

    The problem that I'm seeing is that these clients are eventually utilizing all the RAM on the NFS server and causing it to crash. If I run watch free -m, I can watch the used memory creep up until it's all used, and then see the free buffers/cache entry creep down to near zero before the server eventually locks up, requiring a reboot. There is some sort of memory leak somewhere that is causing this, and the optimal solution would be to find it and fix it, but in the meantime I need to find a way to have the NFS server protect itself from connected clients using all its RAM. There must be some sort of setting that limits the resources the clients can use, but I can't seem to find it. I've tried adjusting the values for rsize and wsize but they don't seem to help or be related. Thanks for any tips.
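
    Two knobs that are at least cheap to try while hunting the leak: explicit transfer sizes on the read-only client mounts and a smaller nfsd thread count on the server. A sketch for Ubuntu 12.04 (values are illustrative, not tuned):

      # Client side: remount with explicit, modest rsize/wsize
      mount -t nfs4 -o ro,rsize=32768,wsize=32768 10.0.0.90:/assets /var/www/assets
      # Server side: drop the number of nfsd threads, then restart the service
      sudo sed -i 's/^RPCNFSDCOUNT=.*/RPCNFSDCOUNT=8/' /etc/default/nfs-kernel-server
      sudo service nfs-kernel-server restart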

    Read the article

  • How to determine the root cause of a system lockup on Ubuntu 8.04 LTS?

    - by jdt141
    I'm currently working on a project that involves setting up a PC/104 stack and running Ubuntu 8.04 LTS. We need to use the PC/104 stack because it's an embedded application, and we're required to use a DeviceNet peripheral card to communicate with other devices. (DeviceNet is just a protocol on top of CAN.) Anyway, the following hardware is on the stack:
      - Kontron MOPSPM104 with a 1GHz Intel Celeron processor
      - ConnectTech FlashDrive/104 4GB Industrial Temp (-40 to +85 C)
      - Woodhead (Molex) PC104DVNIO DeviceNet card
      - A run-of-the-mill 104 power supply
    The Kontron board offers two serial ports, one VGA out, and two USB ports. The DeviceNet card is an ISA card. Because of this (per the user's guide for the Kontron board), I have manually set the IRQs in the BIOS to be appropriately configured, and turned off ACPI both in the BIOS and via the appropriate flag in GRUB. I've installed Ubuntu 8.04 desktop, 32-bit. The problem I'm having is that, from time to time, the entire 104 stack locks up. This only seems to happen in two cases, both while running GNOME: when we run a custom application that uses the DeviceNet card, the system will lock up, or (more frequently) when we're running Firefox and either surfing for some information or trying to test it, typically by streaming video from an IP camera. The reason I ask this question is that I cannot determine the root cause of this lockup. The IRQs appear to be correctly configured in the BIOS and as the kernel sees them, and nothing is logged to dmesg. If you all could help me determine the root cause of this lockup, I would greatly appreciate it. Thanks.
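
    When nothing reaches dmesg or the logs, it sometimes helps to watch the lockup live and to double-check how the kernel actually routed the ISA card's interrupt; a sketch of things to capture beforehand (the boot flags are assumptions to test one at a time, not a known fix):

      # How the kernel sees the IRQs the BIOS was told to reserve for the ISA DeviceNet card
      cat /proc/interrupts
      dmesg | grep -i -e irq -e acpi
      # Boot flags worth trying individually on GRUB's kernel line:
      #   acpi=off noapic irqpoll
      # Keep a log tail running on a second VT or serial console so the last messages survive a hang
      tail -f /var/log/messages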

    Read the article

  • Safely remove external USB drive fails due to $extend

    - by moontear
    When connecting an external USB 3.0 hard drive to my USB 3.0 ports, I can never safely remove it. Somehow Windows always keeps the journal files open: "always" as in, this time I only connected the drive, copied a 10GB VM, and wanted to disconnect it afterwards (like 15 minutes after copying, so all copying was done). As you can see, there is no other program keeping a handle on the disk besides System. I tried restarting explorer.exe as well as RemoveDrive.exe from Uwe Sieber. No luck; the locks on the hard drive always remain. My only solution is to just unplug it (though I'm afraid of damaging the data) or restart the computer (always helps, doesn't it?). Might it have something to do with my only having an SSD as the internal drive while the external disk is a regular drive? Might it have something to do with the USB 3.0 drivers (NEC Electronics USB Hub)? I never have this problem when using the regular USB 2.0 ports. Any ideas on how to properly unmount the disk?

    Read the article

  • Server Recovery from Denial of Service

    - by JMC
    I'm looking at a server that might be misconfigured to handle Denial of Service. The database was knocked offline during the attack and was unable to restart itself once the attack subsided. Details of the attack: the attacker, either intentionally or unintentionally, sent thousands of search queries using the application's search query URL within a couple of seconds. It looks like the server was overwhelmed, and that caused the database to log the message shown in mysql.log below. Server specs: 1.5GB of dedicated memory. Are there any obvious misconfigurations here that I'm missing?

      **mysql.log**
      121118 20:28:54 mysqld_safe Number of processes running now: 0
      121118 20:28:54 mysqld_safe mysqld restarted
      121118 20:28:55 [Warning] option 'slow_query_log': boolean value '/var/log/mysqld.slow.log' wasn't recognized. Set to OFF.
      121118 20:28:55 [Note] Plugin 'FEDERATED' is disabled.
      121118 20:28:55 InnoDB: The InnoDB memory heap is disabled
      121118 20:28:55 InnoDB: Mutexes and rw_locks use GCC atomic builtins
      121118 20:28:55 InnoDB: Compressed tables use zlib 1.2.3
      121118 20:28:55 InnoDB: Using Linux native AIO
      121118 20:28:55 InnoDB: Initializing buffer pool, size = 512.0M
      InnoDB: mmap(549453824 bytes) failed; errno 12
      121118 20:28:55 InnoDB: Completed initialization of buffer pool
      121118 20:28:55 InnoDB: Fatal error: cannot allocate memory for the buffer pool
      121118 20:28:55 [ERROR] Plugin 'InnoDB' init function returned error.
      121118 20:28:55 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
      121118 20:28:55 [ERROR] Unknown/unsupported storage engine: InnoDB
      121118 20:28:55 [ERROR] Aborting

      **ulimit -a**
      core file size (blocks, -c) 0
      data seg size (kbytes, -d) unlimited
      scheduling priority (-e) 0
      file size (blocks, -f) unlimited
      pending signals (-i) 13089
      max locked memory (kbytes, -l) 64
      max memory size (kbytes, -m) unlimited
      open files (-n) 1024
      pipe size (512 bytes, -p) 8
      POSIX message queues (bytes, -q) 819200
      real-time priority (-r) 0
      stack size (kbytes, -s) 8192
      cpu time (seconds, -t) unlimited
      max user processes (-u) 1024
      virtual memory (kbytes, -v) unlimited
      file locks (-x) unlimited

      **httpd.conf**
      StartServers 10
      MinSpareServers 8
      MaxSpareServers 12
      ServerLimit 256
      MaxClients 256
      MaxRequestsPerChild 4000

      **my.cnf**
      innodb_buffer_pool_size=512M
      # Increase Innodb Thread Concurrency = 2 * [numberofCPUs] + 2
      innodb_thread_concurrency=4
      # Set Table Cache
      table_cache=512
      # Set Query Cache_Size
      query_cache_size=64M
      query_cache_limit=2M
      # A sort buffer is used for optimizing sorting
      sort_buffer_size=8M
      # Log slow queries
      slow_query_log=/var/log/mysqld.slow.log
      long_query_time=2
      #performance_tweak
      join_buffer_size=2M

      **php.ini**
      memory_limit = 128M
      post_max_size = 8M
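
    The errno 12 (ENOMEM) on the 512 MB buffer-pool mmap is easy to reproduce on a 1.5 GB box once Apache fans out under a request flood: a rough worst-case budget and one hedged mitigation (per-child sizes, paths and numbers are illustrative):

      # Worst case under the flood, roughly:
      #   Apache: MaxClients 256 x ~20-30 MB per PHP-enabled child  >> 1.5 GB on its own
      #   MySQL:  512 MB buffer pool + per-connection sort/join buffers
      # so when mysqld_safe restarts mysqld mid-flood, mmap(549453824) fails with ENOMEM.
      # Capping Apache and shrinking the pool keeps the sum under physical RAM:
      sudo sed -i 's/^MaxClients .*/MaxClients 40/' /etc/httpd/conf/httpd.conf
      sudo sed -i 's/^innodb_buffer_pool_size=.*/innodb_buffer_pool_size=256M/' /etc/my.cnf
      sudo service httpd restart && sudo service mysqld restart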

    Read the article

  • Windows File Access Denied

    - by Tom
    I seem to have a general problem with "access denied" on Windows. It manifests itself every time when, e.g.:
      - my bat file calls a compiler that creates a file on disk
      - my bat file renames a file
    But I also have files downloaded (Firefox) to the Windows desktop where Windows gives me "access denied" if I try to delete the file. I tried disabling AVG and making an exception in the AVG resident shield. (I have also checked with Task Manager and Winternals Process Explorer that it is not a still-running process that could be causing the locks.) Windows 7. My user account is an administrator. All files are created by the same user account. The problem is recent, but some things I first noticed yesterday (when I started calling .bat files again, which I have used for many years). I have tried: starting e.g. Windows Explorer with "run as administrator", but that makes no difference; right-click - Properties - Security and changing permissions/ownership (I also get "access denied" when trying this, so it does not help). Here is a screenshot of what happens if I try to change the security of a "locked" file. (The problem here is that the locking occurs continuously, every time the file is created.) If I click on it, it states I am not the owner, which baffles me as I just created it. (Yes, through a .bat file calling executables that create the file, but all running under my administrator user account. Interestingly, after having this dialog open, the file sometimes suddenly seems to allow me to delete it.)

    Read the article

  • Installed Bunch of New Fonts on Windows 7 - Now None Show Up and System Lags

    - by Josh Stodola
    So I went to install about 5,000 fonts on my Windows 7 64-bit machine. It was slow to install them, and I had to leave. I came back and my PC was shut down, and I had to go through the Windows recovery BS when I powered it on. Now my computer runs EXTREMELY slowly, and any program that has a font menu locks up my whole machine (nothing in Microsoft Office works). When I go to "Fonts" in the Control Panel, it says 0 items. I went through all of the font settings trying to get them to appear. Nothing helps. I tried to bring up the Character Map and that froze up my machine too. How can I fix this? If I do not get this issue resolved soon, I am wiping this drive and going back to XP (and probably never purchasing another version of Windows again). I never had any issues with XP and have had nothing but performance problems since switching to Windows 7. My quad-core Intel Extreme with 8GB of RAM should never flinch at the kind of work that I do, and something simple like playing a song off an external HD takes up to five seconds on Windows 7. Unbelievable that I had to pay for this crap!

    Read the article
