Search Results

Search found 4921 results on 197 pages for 'ps ohm'.

Page 142/197 | < Previous Page | 138 139 140 141 142 143 144 145 146 147 148 149  | Next Page >

  • How to debug a kernel created using ubuntu-vm-builder?

    - by user265592
    Aim: trying to perform a code walkthrough of which functions get called for sending and receiving packets over the network. I am building a kernel and using gdb for debugging/tracing purposes. I have built a VM using the following command:

        time sudo ubuntu-vm-builder qemu precise --arch 'amd64' --mem '1024' --rootsize '4096' --swapsize '1024' --kernel-flavour 'generic' --hostname 'ubuntu' --components 'main' --name 'Bob' --user 'ubuntu' --pass 'ubuntu' --bridge 'br0' --libvirt 'qemu:///system'

    And I can run the VM successfully in QEMU using the following command:

        qemu-system-x86_64 -smp 1 -drive file=tmpGgEOzK.qcow2 "$@" -net nic -net user -serial stdio -redir tcp:2222::22

    Now I want to debug the kernel using gdb. For this I need an executable with debug symbols (vmlinux), which apparently I don't have, as the vm-builder never asked for any such options and simply created a .qcow2 file. Question 1: Am I taking the correct approach to solve the problem, and is there an easier way to do it? Question 2: Is there a way to debug this kernel using GDB? P.S.: I don't have hardware support for KVM. Please correct me if I am wrong. Thanks.
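
    A common approach here (sketched below; the symbol file source and breakpoint are illustrative): boot the guest under QEMU's built-in gdb stub and attach from the host with a vmlinux that matches the guest kernel, e.g. from the corresponding linux-image dbgsym package, or from a kernel tree you compiled yourself.

        # start the VM frozen, with a gdb stub listening on tcp/1234
        qemu-system-x86_64 -smp 1 -drive file=tmpGgEOzK.qcow2 -net nic -net user -s -S

        # from another shell, attach with matching debug symbols
        gdb ./vmlinux
        (gdb) target remote :1234
        (gdb) break netif_receive_skb    # a function on the packet receive path
        (gdb) continue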

    Read the article

  • Help setting up a secondary authoritative DNS server

    - by GLB03
    We have three authoritative DNS servers and three recursive/caching DNS servers on my campus.

    Authoritative servers:
    DNS1 - Windows 2003
    DNS2 - old Red Hat (being replaced with a newer version)
    DNS3 - Windows 2008 (I installed)

    Caching and recursive resolvers:
    Server1 - Windows 2003
    Server2 - CentOS 5.2 (I installed)
    Server3 - CentOS 5.3 (I installed)

    I am replacing DNS2 with a newer Red Hat version, but have no documentation on how it was implemented. I have set up caching and Windows authoritative servers, but not a Linux secondary authoritative server. I have a perl script from the original server that pulls data from our DNS1 server. We use djbdns and tinydns on our Linux servers. Our network engineer says the DNS2 server I am replacing is an authoritative server that doesn't need to do caching, but the only instructions I can find are for an authoritative server that does caching as well. Can someone point me in the right direction? I thought I was on the right track using these instructions, but when I query my new DNS server I get "No response from server". I have temporarily disabled iptables to eliminate it as an issue. Output of ps -aux | grep dns:

        avahi     3493  0.0  0.2  2600 1272 ?     Ss   Apr24  0:05 avahi-daemon: running [newdns2.local]
        root      5254  0.0  0.1  3920  680 pts/0 R+   09:56  0:00 grep dns
        root      6451  0.0  0.0  1528  308 ?     S    Apr29  0:00 supervise tinydns
        dnslog    6454  0.0  0.0  1540  308 ?     S    Apr29  0:00 multilog t ./main
        tinydns   9269  0.0  0.0  1652  308 ?     S    Apr29  0:00 /usr/local/bin/tinydns
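
    For reference, a plain djbdns secondary usually pulls zones from the master with axfr-get and rebuilds the data.cdb that tinydns serves; also note that tinydns answers only on the IP in its env/IP file, which is a common cause of "No response from server". A rough sketch (zone name, master IP and service paths are placeholders for your setup):

        # pull the zone from the master (DNS1) via AXFR
        tcpclient 10.0.0.1 53 axfr-get example.edu data.example data.example.tmp

        # merge into the data file and rebuild data.cdb
        cat data.example > /etc/tinydns/root/data
        cd /etc/tinydns/root && make     # runs tinydns-data

        # check which address tinydns is actually bound to
        cat /etc/tinydns/env/IP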

    Read the article

  • Installing XAMPP on a system that already has MySQL

    - by Charith
    I'm rather new to PHP and XAMPP. I have a computer with MySQL Server and MySQL Workbench installed, as I was working with Java and NetBeans. Now I want to use my computer for developing PHP and other web stuff too. I installed XAMPP successfully, but when I try to access phpMyAdmin, it gives me an error saying the MySQL server rejected its connection. I tried stopping my current MySQL service and installing it again; however, XAMPP has its own MySQL server in its installation path too. I tried configuring config.inc.php to use my existing installation of MySQL, which is on a separate path, but I failed. Can anyone please instruct me how to configure XAMPP to use my existing MySQL server for everything and ignore the one installed with it? I don't want two MySQL services running on my system and clashing in future. I'll be glad if anyone can explain what is best to use when you're developing Java, PHP, C and all the stuff on the same machine. P.S.: I have been given a password for my existing MySQL server (user = root), as is usual when installing MySQL alone.
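
    For what it's worth, pointing XAMPP's phpMyAdmin at an existing MySQL service usually comes down to a few lines in config.inc.php; the values below are illustrative, and XAMPP's bundled MySQL should be left stopped so only one server owns port 3306:

        $cfg['Servers'][$i]['host']      = '127.0.0.1';   // existing MySQL server
        $cfg['Servers'][$i]['port']      = '3306';
        $cfg['Servers'][$i]['auth_type'] = 'cookie';      // prompt for the root password
        $cfg['Servers'][$i]['user']      = 'root';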

    Read the article

  • Some process/service/driver repeatedly presses F5

    - by VitalyB
    Hi everyone, I have the strangest problem, which started 2 days ago (Windows 7, 64-bit). SOMETHING causes my F5 key to be constantly pressed. Rebooting helps, but only for a while; it keeps coming back. So far I've tried to disconnect and reconnect the keyboard (physically); however, disconnecting the keyboard doesn't actually do anything. Reconnecting it again causes the F5-pressing to stop, but not for very long (seconds/minutes). I'd really like to avoid a binary search over the programs (closing processes, switching keyboards, etc.) before I can, at the very least, identify the source of the keypress. Is there an application that can show me what is causing a key press? E.g. is it the keyboard driver, or some process that executes SendKey repeatedly for reasons unknown? Thanks! P.S. FYI, a held-down F5 causes the strangest side effects. Task Manager refreshes very, very quickly (as F5 is refresh), the desktop is constantly flickering, and all the browsers stop working as they keep trying to refresh. I was lucky to find out what the heck was happening only because I started Notepad and saw the current date/time appearing constantly. If not for that, I'd still be wondering.

    Read the article

  • Apple Magic Trackpad 3-Finger Drop Lag

    - by activestylus
    After enabling three-finger dragging for my Trackpad, I notice that it drags well, but when I release there is about 1-2 seconds of lag before it actually drops. I understand this is supposed to be a feature: when you run out of space to drag, you have time to move your hand. But for those of us power users who move really fast, this is a BUG, not a feature. There should be some way to turn it off! For some perspective, I personally own a Fingerworks trackpad as well (the company Apple bought to make the Trackpad) and it does not suffer from this problem. Drops are instantaneous no matter what program I am in. This is hugely frustrating for me, because I thought I was upgrading, yet Apple's version does not perform as well as the Fingerworks model (which I purchased in 2004). I actually made a short video illustrating the problem and why it is so frustrating for anyone who uses the pad as an artistic tool. Does anyone here face this problem? If not, how would you recommend that I address Apple directly about this? PS - I already looked at this thread and the conclusion does not help me. I do not have one-finger drag enabled. PPS - I understand that for most people this is not an issue because they use the 'click' feature of the Trackpad. However, after years of using Fingerworks and never having to click, I find that it slows me down.

    Read the article

  • Problems compiling coreutils-8.5 on Solaris 5.10 on Intel platform

    - by PP
    I am having trouble compiling coreutils-8.5 on Solaris 5.10 on the Intel platform using cc. Firstly, I had the following error during ./configure:

        checking whether <wchar.h> uses 'inline' correctly... no
        configure: error: <wchar.h> cannot be used with this compiler (/tool/sunstudio12.1/bin/cc -xc99=all -g -D_REENTRANT).

    This seemed similar to the problem in this question. The solution was to edit configure and replace the reference to -xc99=all with -xc99=all,no_lib. This permitted configure to complete. Then I ran /usr/sfw/bin/gmake and it progressed until I received the following message:

        Making all in src
        gmake[2]: Entering directory `/home/peterp/src/coreutils-8.5/src'
        gmake all-am
        gmake[3]: Entering directory `/home/peterp/src/coreutils-8.5/src'
          CCLD   chroot
        Undefined                 first referenced
         symbol                       in file
        eaccess                       ../lib/libcoreutils.a(euidaccess.o)
        ld: fatal: Symbol referencing errors. No output written to chroot

    What could cause this problem? PS: I was only compiling coreutils because I wanted colour ls.
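
    One way to script the configure workaround, plus an untested guess at the link error: gnulib's euidaccess falls back to its own implementation when configure is told that eaccess() is unavailable, which would sidestep the unresolved symbol on Solaris 10. Both steps are assumptions to verify:

        # apply the wchar.h workaround without hand-editing configure
        perl -pi -e 's/-xc99=all\b/-xc99=all,no_lib/g' configure

        # tell configure not to rely on eaccess() so gnulib substitutes its own
        ./configure CC=/tool/sunstudio12.1/bin/cc ac_cv_func_eaccess=no
        /usr/sfw/bin/gmake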

    Read the article

  • Nginx and automatic updates

    - by Desmond Hume
    I'm on Ubuntu 12.04.1 with unattended-upgrades configured for automatic security updates, and I installed Nginx by first adding

        deb http://nginx.org/packages/ubuntu/ lucid nginx
        deb-src http://nginx.org/packages/ubuntu/ lucid nginx

    to the /etc/apt/sources.list file, just as suggested by the official wiki, and then running

        sudo apt-get update
        sudo apt-get install nginx

    which installed Nginx with all the standard modules. But now I think I could make good use of one or two of the optional Nginx modules, like the gzip precompression module or some security-related one. So far, I see two ways of adding an optional module to Nginx: one is compiling and installing from the source code, and the other is described in this article. So, which of the ways should I choose so that automatic updates still run for and apply to Nginx and its optional modules? Or should I create a cron job with a command/script specific to Nginx instead of using the unattended-upgrades utility? Can I choose between full updates and security-only updates being automatically applied to the standard and optional modules? And finally, is there a possibility to automatically update Nginx's modules on the fly (without any connections being dropped), like the documentation suggests is possible with

        sudo kill -USR2 $( cat /run/nginx.pid )

    P.S. Actually I'm not certain whether the unattended-upgrades utility would automatically update the standard modules in the first place; not enough time has passed since Nginx was installed to say for sure.
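
    On the unattended-upgrades side, packages from nginx.org are only picked up automatically if their archive origin is whitelisted. The origin label below is a guess; confirm the real one with apt-cache policy first:

        # find the Origin/Suite the nginx.org repository announces
        apt-cache policy nginx

        # /etc/apt/apt.conf.d/50unattended-upgrades, alongside the Ubuntu entries
        Unattended-Upgrade::Allowed-Origins {
                "Ubuntu:precise-security";
                "nginx:lucid";    // assumed origin:archive pair for nginx.org
        };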

    Read the article

  • Exchange Connector Won't Send to External Domains

    - by sisdog
    I'm a developer trying to get my .NET application to send emails out through our Exchange server. I'm not an Exchange expert, so I'll qualify that up front! We've set up a receive connector in Exchange that has the following properties: Network: allows all IP addresses via port 25. Authentication: the Transport Layer Security and Externally Secured checkboxes are checked. Permission Groups: the Anonymous Users and Exchange Servers checkboxes are checked. But when I run this PowerShell statement right on our Exchange server, it works when I send to a local domain address and fails when I send to a remote domain.

    WORKS:

        C:\Windows\system32> Send-MailMessage -To [email protected] -From [email protected] -Subject testing -Body testing -SmtpServer OURSERVER

    (BTW: my value for OURSERVER=boxname.domainname.local. This is the same fully-qualified name that shows up in our Exchange Management Shell when I launch it.)

    FAILS:

        C:\Windows\system32> Send-MailMessage -To [email protected] -From [email protected] -Subject testing -Body testing -SmtpServer OURSERVER
        Send-MailMessage : Mailbox unavailable. The server response was: 5.7.1 Unable to relay
        At line:1 char:17
        + Send-Mailmessage <<<< -To [email protected] -From [email protected] -Subject testing -Body himom -SmtpServer FTI-EX
            + CategoryInfo : InvalidOperation: (System.Net.Mail.SmtpClient:SmtpClient) [Send-MailMessage], SmtpFailedRecipientException
            + FullyQualifiedErrorId : SmtpException,Microsoft.PowerShell.Commands.SendMailMessage

    EDIT: Following @TheCleaner's advice, I ran Add-ADPermission on the relay connector and it didn't help:

        [PS] C:\Windows\system32> Get-ReceiveConnector "Allowed Relay" | Add-ADPermission -User "NT AUTHORITY\ANONYMOUS LOGON" -ExtendedRights "Ms-Exch-SMTP-Accept-Any-Recipient"
        Identity              User                  Deny   Inherited
        --------              ----                  ----   ---------
        FTI-EX\Allowed Relay  NT AUTHORITY\ANON...  False  False

    Thanks for the help. Mark
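
    "5.7.1 Unable to relay" for remote domains usually means the session is being answered by the Default receive connector rather than the relay connector: when several connectors listen on port 25, Exchange hands the session to the one whose RemoteIPRanges matches the client most specifically. A hedged sketch (the IP is a placeholder for the machine submitting mail):

        Set-ReceiveConnector "Allowed Relay" -RemoteIPRanges 192.168.1.50

        Get-ReceiveConnector "Allowed Relay" | Add-ADPermission `
            -User "NT AUTHORITY\ANONYMOUS LOGON" `
            -ExtendedRights "ms-Exch-SMTP-Accept-Any-Recipient"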

    Read the article

  • Hyper-V on 2012 R2: starting a gen-1 VM causes the host to freeze

    - by sputnik
    I've searched a lot to resolve the following issue, but nothing has helped. My problem is that starting a first-gen VM locks up the whole host; only a hard reset helps. A second-gen VM starts and runs perfectly. The freezes happened on 3 different VMs (FreeBSD, Ubuntu, Windows Server 2008 R2), while Windows 8.1 on a second-gen config works perfectly. I'm using this PC mainly as a workstation. No event-log errors nor dumps are generated. My system: Windows Server 2012 R2; FX-8350, not overclocked; ASRock 870 Extreme R2 (crappy board, IMHO); 32GB DDR3 1866@1600 (my motherboard, despite its claimed "support" for 1866 RAM, won't run it at full speed); 120GB SSD; 4.5TB Storage Spaces device. I don't think it's due to my system, because VMware Workstation was running without problems. Did I forget to configure something? Any help is appreciated. P.S.: Even deactivating C1E, C6 and C&Q didn't work. P.P.S.: With no virtual network adapter set, the system still locks up. Creating a first-gen VM without any HDDs or network and launching it works; attaching a boot DVD causes the host to freeze. The host freezes as the gen-1 VM begins to boot, no matter whether it boots from DVD or HDD.

    Read the article

  • Fix Video timelines

    - by Josh
    So, I have been going through and ripping all of my DVDs, and it seems that the way to get the highest quality out of these is to have DVD Shrink decrypt, rip, and decompress the DVDs. After that I usually end up with a high-quality (large) set of .vob files in a classic DVD structure. Then I use a Python script that I wrote to automate the process of finding the title sequence and then combining all of the title sequence's .vob files into one file (similar to the "copy /b" command in Windows), and then changing the extension to .mpg (a more widely supported format than .vob). This allows me to get a high-quality rip in about 40 min. The problem comes in playing the files. I need all of the ripped DVDs to play on my media computer using Windows Media Center, but Windows Media Center (and VLC, for that matter) thinks the video files are anywhere from 5 min. to 0 min. long. That is not a problem in itself (the video will still play all the way through), but if you pause it, when it is unpaused the video starts all the way over (also, fast forward and rewind don't work). I suspect that something is wrong with the way the timeline is encoded in the video file; various forums on the internet recommended using VirtualDub to fix the errors. But when I try to open the file, VirtualDub says that the file is not in MPEG-1 encoding and may be MPEG-2. Is there any way to fix this? PS: I am aware that there is a similar question, but it hasn't had any activity for 2 months and deals more with wmv files.
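
    Byte-concatenating VOBs keeps each file's original MPEG timestamps, so players see a timeline that resets partway through, which matches the bogus durations and broken seeking. Remuxing fixes the container clock without re-encoding; a sketch using ffmpeg's concat protocol (file names assumed):

        ffmpeg -fflags +genpts -i "concat:VTS_01_1.VOB|VTS_01_2.VOB|VTS_01_3.VOB" \
               -c copy movie.mpg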

    Read the article

  • Clarification on signals (sighup), jobs, and the controlling terminal

    - by asolberg
    So I've read two different perspectives and I'm trying to figure out which one is right. 1) Some sources online say that signals sent from the controlling terminal are ONLY sent to the foreground process group. That means if you want a process to continue running in the background after you log out, it is sufficient to simply suspend the job (ctrl-Z) and resume it in the background (bg). Then you can log out and it will continue to run, because SIGHUP is only sent to the foreground job. See: http://blog.nelhage.com/2010/01/a-brief-introduction-to-termios-signaling-and-job-control/ ...In addition, if any signal-generating character is read by a terminal, it generates the appropriate signal to the foreground process group.... 2) Other sources claim you need to use the "nohup" command at the time the program is executed, or failing that, issue a "disown" command during execution to remove it from the jobs table that listens for SIGHUP. They say if you don't do this, when you log out your process will also exit, even if it is running in a background process group. For example: http://docstore.mik.ua/orelly/unix3/upt/ch23_11.htm ...If I log out anyway, the shell sends my background job a HUP signal... In my own experiments with Ubuntu Linux it seems like 1) is correct. I executed "sleep 20 &", then logged out, logged back in and did a "ps aux". Sure enough, the sleep command was still running. So then why do so many people seem to believe number 2? And if all you have to do is place a job in the background to keep it running, why do so many people use "nohup" and "disown"?
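
    Both camps are partly right: bash only sends SIGHUP to running background jobs on a clean logout if the huponexit shell option is set (it is off by default on most distros, which matches the experiment above), whereas a real terminal hangup, e.g. an SSH connection dropping, does HUP the shell and its jobs, which is what nohup and disown guard against. Easy to check:

        shopt huponexit      # usually reports "off"
        sleep 600 &
        exit                 # clean logout: the sleep survives while huponexit is off

        shopt -s huponexit   # turn it on and repeat: now the job gets SIGHUP on logout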

    Read the article

  • Need to have access to my office PC from my laptop hopping through two VPN servers

    - by Andriy Yurchuk
    Here's an illustration of what I have (http://clip2net.com/s/2fvar):

    1. My office PC, with its IP of 123.45.e.f.
    2. The office VPN, which I will connect to from my VPS to get to my office PC.
    3. My own VPS, which I use as: a client to connect to the office VPN (through vpnc, which creates a tun0 with the 123.45.c.d IP address), and a VPN server my laptop can connect to (OpenVPN, tun1, 10.8.0.1).
    4. My own laptop, used as a VPN client to connect to the VPS's OpenVPN server (this creates a tun0 with the 10.8.0.2 IP address).

    Now what I have to do is allow my laptop to connect to at least my office PC, but preferably to the whole 123.45.x.x subnet. Please advise on how best to configure OpenVPN, routing, iptables or whatever else is needed on my VPS so that my laptop can gain access to my office PC. P.S. The reason I'm hopping through my VPS is that while connected to the office WiFi I cannot access my office PC and I cannot connect to the office VPN (which is another way to access my office PC). The only way I have to access my PC from the office WiFi is hopping through an outside network.
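
    A minimal sketch of the VPS side (interface names as in the question; the office subnet is assumed to be 123.45.0.0/16): enable forwarding between the two tunnels, push the office route to OpenVPN clients, and NAT so the office VPN only ever sees the VPS's own tunnel address:

        # enable forwarding on the VPS
        sysctl -w net.ipv4.ip_forward=1

        # in the OpenVPN server config, hand the office route to the laptop:
        #   push "route 123.45.0.0 255.255.0.0"

        # let traffic flow between the tunnels and NAT it towards the office
        iptables -A FORWARD -i tun1 -o tun0 -j ACCEPT
        iptables -A FORWARD -i tun0 -o tun1 -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o tun0 -j MASQUERADE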

    Read the article

  • 502 Bad Gateway error after failed requests using Passenger

    - by Nicolas Buduroi
    I've got a staging server running nginx 1.0.5 with a Rails 3.1 app under Passenger 3.0.9. The problem is that a request sent just after one that hit an application error returns 502 Bad Gateway. To test it I've set up a simple controller with an action that just raises a dummy exception. One request will show the Rails error message and the next one will show nginx's 502 Bad Gateway error; then it goes back to the Rails application error, and so on. While investigating this problem I've found that load testing the application (which increases the number of application processes) makes the issue disappear, that is, until the extra processes are shut down, when it reappears. I've tried setting the passenger_min_instances option, but doing so doesn't change anything; in this case, each time an application error happens one instance is killed, while after load testing all instances are kept alive. P.S.: Some people on my team told me that they've seen the 502 error even when there's no application error, but I've not been able to reproduce that. Update: I just found out how to display the response status codes using ab, and most of them are 502!

    Read the article

  • How to find the process that's using 100% of CPU

    - by Gabriel
    As I'm looking at htop and top I see that my processor usage is always 100%, but I cannot see any process that is using that much CPU. htop shows me only 1-2 processes that use around 5% CPU time. Is there a way to find the processes that use that much CPU time? Here is the output of ps -eo pcpu,pid,user,args | sort -r -k1 | less:

        %CPU   PID USER  COMMAND
         0.8 20413 root  jsvc.exec -user tomcat -cp ./bootstrap.jar -Djava.endorsed.dirs=../common/endorsed -outfile ../logs/catalina.out -errfile ../logs/catalina.err -verbose org.apache.catalina.startup.Bootstrap -security
         0.3   631 mysql /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql --pid-file=/var/lib/mysql/mysql.pid --skip-external-locking
         0.2  3380 root  /usr/local/apache/bin/httpd -k restart -DSSL
         0.2 24698 root  tailwatchd
         0.2 22472 root  /usr/local/jdk/bin/java -Djava.util.logging.config.file=/usr/local/jakarta/tomcat/conf/logging.properties -Dfile.encoding=UTF8 -XX:MaxPermSize=128m -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.endorsed.dirs=/usr/local/jakarta/tomcat/common/endorsed -classpath /usr/local/jakarta/tomcat/bin/bootstrap.jar -Dcatalina.base=/usr/local/jakarta/tomcat -Dcatalina.home=/usr/local/jakarta/tomcat -Djava.io.tmpdir=/usr/local/jakarta/tomcat/temp org.apache.catalina.startup.Bootstrap start
         0.1 32095 root  cpanellogd - processing bandwidth
         0.0  9733 root  sleep 1m
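
    Worth knowing: the pcpu column from ps is averaged over each process's entire lifetime, so a process that only recently started spinning can show a near-zero figure there. Sampling over a real interval is more telling (pidstat comes from the sysstat package, which may need installing):

        pidstat 1 5                  # per-process CPU over five 1-second samples
        top -b -d 1 -n 3 | head -40  # batch-mode top snapshots

        # also check where the time goes overall: user, system, iowait, steal
        vmstat 1 5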

    Read the article

  • Windows Firewall Software to Filter Transit Traffic

    - by soonts
    I need to test my networking code for the Nintendo Wii under conditions where some specific Internet server is not available. The Wii is connected to my PC with a crossover Ethernet cable. The PC has 2 NICs and is connected to a hardware router with an Ethernet cable; the hardware router serves as NAT and has the Internet connection on its uplink. I put the Wii on the same LAN as the PC by using a Windows XP network bridge. I can observe the Wii's network traffic using e.g. the Wireshark sniffer. Is there a software firewall that can selectively filter out transit traffic (e.g. block outgoing TCP connections to 123.45.67.89 on port 443)? I tried Outpost Pro 2009 and Comodo. Outpost blocks all transit traffic with its implicit "block transit packet" rule; if transit traffic is explicitly allowed by creating a system-wide low-level rule, then it is allowed completely and no other filter can selectively block it. The Comodo firewall only processes rules when the packet has localhost's IP as either source or destination, allowing the rest of the traffic. Any ideas? Thanks in advance! P.S. The platform is Windows XP 32-bit; no other OS is allowed. Windows ICS (Internet Connection Sharing) doesn't work since the Wii is unable to connect; besides, I don't like the idea of adding one more level of NAT.

    Read the article

  • My home box as my own host?

    - by Majid
    Hi all, I have a 512 kb/s DSL service at home. I do not have a static IP, but I can get one if I pay my ISP some extra. Now, if I get the static IP, can I make my home box act as my internet host? What else do I need? Thanks. P.S. I know that, if it is possible at all, the site I make available this way might be slow; that is alright. My question is whether it is possible at all. Edit: I need this for very small traffic. I am a PHP developer, and for my projects I am often asked to provide a demo. I currently use free hosting for this purpose, but it is down most of the time and support is non-existent. So I thought to set up my home computer as my test server. With this, please note that: I will only occasionally have 'visitors', and that will be one or possibly two visitors at any time. These demos are to showcase functionality, so no big images are served and page views will normally generate under 100KB of traffic.

    Read the article

  • Permission issue with Apache

    - by Aamir Adnan
    Environment details: Amazon EC2, Ubuntu 12.04, Django + mod_wsgi + Python 2.6, web server: apache2. I have mounted a 10GB EBS volume on an instance at /mnt/ebs1/. After mounting the volume and formatting it, I placed all my project files in /mnt/ebs1/project. The wsgi file is at /mnt/ebs1/project/apache/django.wsgi. The content of the wsgi file is:

        import os, sys
        sys.path.insert(0, '/mnt/ebs1/project')
        sys.path.insert(1, '/mnt/ebs1')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'project.configs.common.settings'
        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()

    My httpd.conf file looks like this:

        LoadModule wsgi_module /usr/lib/apache2/modules/mod_wsgi.so
        WSGIPythonHome /usr/bin/python2.6
        WSGIScriptAlias / /mnt/ebs1/project/apache/django.wsgi
        <Directory /mnt/ebs1/project>
            Order allow,deny
            Allow from all
        </Directory>
        <Directory /mnt/ebs1/project/apache>
            Order allow,deny
            Allow from all
        </Directory>
        Alias /static/ /mnt/ebs1/project/static/
        <Directory /mnt/ebs1/project/static>
            Order deny,allow
            Allow from all
        </Directory>

    The above configuration gives me "Forbidden: You don't have permission to access / on this server." I tried to find the user that is running Apache using ps aux: it is www-data, with group www-data. I have tried to change the ownership of /mnt/ebs1 and its subdirectories using chown -R www-data:www-data /mnt/ebs1, but that still does not solve the problem. Can anyone tell me what I am doing wrong or have missed?
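
    Since the Directory blocks already allow everything, the usual culprit is directory traverse permission: www-data needs execute (x) on every directory from / down to the wsgi file, including the /mnt/ebs1 mount point itself. A quick check and a hedged fix:

        # show owner and mode of every component as Apache would walk the path
        namei -l /mnt/ebs1/project/apache/django.wsgi

        # grant traverse on the mount point and parents if any component lacks it
        chmod o+x /mnt /mnt/ebs1 /mnt/ebs1/project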

    Read the article

  • How does Linux determine the SCSI address of a disk?

    - by Chris Sears
    Greetings, I'm working with RHEL 5.5 guest VMs under VMware ESX 4. When I configure the virtual disks in the VM hardware settings, each disk has a SCSI address in the format "N:M". For example, "1:3" would mean SCSI host number 1 and SCSI target ID 3. When I look at the disk info from the VM's BIOS or a Windows OS, the detected SCSI address info matches up with the virtual hardware settings. But under Linux, the SCSI address components don't match up, at least not completely or consistently. I've tried the three supported virtual SCSI and SAS drivers and they all seem to be "broken", but in different ways. Here's a list of the virtual hardware addresses vs what was detected under Linux with each of the drivers:

        Driver     vHW Addr   Linux Addr
        --------   --------   ----------
        LSI SAS    0:0        0:0
        LSI SAS    0:3        0:1
        LSI SAS    0:6        0:2
        LSI SCSI   1:1        2:1
        LSI SCSI   1:4        2:4
        LSI SCSI   1:7        2:7
        pvSCSI     2:2        1:2
        pvSCSI     2:5        1:5
        pvSCSI     2:8        1:8

    My main question is why does this happen under Linux? The next question is: how do I get it fixed or fix it myself? If I was going to guess, I'd say it's an issue with how the kernel is handing out the SCSI host number and how the Linux SCSI driver (included with VMware tools) is detecting the SCSI target number. Perhaps the order the drivers are loaded also has something to do with the issue. I'm guessing this would not involve udev, but I could be wrong. Any thoughts would be appreciated. Thanks! PS. My environment is VMware, but I don't need an answer for these drivers specifically. I imagine this might be a problem with any SCSI driver under Linux.
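
    That guess is on the right track: Linux allocates SCSI host numbers in the order the HBA drivers register with the SCSI midlayer, so they need not match the virtual hardware numbering. You can see which driver owns each host number through sysfs (lsscsi, if installed, summarizes the same data):

        for h in /sys/class/scsi_host/host*; do
            echo "$h -> $(cat $h/proc_name)"
        done

        lsscsi -H    # e.g. [0] mptsas, [1] vmw_pvscsi, [2] mptspi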

    Read the article

  • recursive grep started at / hangs

    - by Martin
    I have used the following grep search pattern on multiple platforms:

        grep -r -I -D skip 'string_to_match' /

    For example on FreeBSD 8.0, FreeBSD 6.4 and Debian 6.0 (squeeze). The command does a recursive search starting from the root directory, assumes that binary files do not contain 'string_to_match', and skips devices, sockets and named pipes. FreeBSD 8.0 and FreeBSD 6.4 use GNU grep version 2.5.1 and Debian 6.0 uses GNU grep version 2.6.3. On FreeBSD 6.4, the last information printed to stderr was "grep: /dev/cuad0: Device busy". After this grep just idles, as according to "top -m io -o total" the I/O usage of grep is nonexistent. The same behavior occurs under FreeBSD 8.0, but there the last information sent to stderr is "grep: /tmp/.wine-0: Permission denied" on my installation. In the case of Debian, the last output to stderr is "grep: /proc/sysrq-trigger: Input/output error". If I check the I/O usage of the grep process under Debian, it is the following:

        root@Debian:~# iotop -bp 22439
        Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
          TID  PRIO  USER  DISK READ   DISK WRITE  SWAPIN  IO     COMMAND
        22439  be/4  root  0.00 B/s    0.00 B/s    0.00 %  0.00 % grep -r -I -D skip 10.10.10.99 /
        (the same two lines repeat for each sample until interrupted with ^C)

    What might cause this? Is there a way to view which file grep is currently processing in case lsof is not present? I'm able to use lsof under Debian, and it looks like the problematic file name there is "0xc6b2c230 file struct, ty=0, op=0xc0d34120"; I'm not sure what that is. I'm not able to use lsof or fstat under FreeBSD. PS: I know I could use the find utility, but that is not the question.
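
    On Linux, the open file descriptors under /proc answer "what is grep reading right now" without lsof (the PID below is the one from the question):

        # Debian: list what the running grep currently has open
        ls -l /proc/22439/fd

        # where it is blocked in the kernel, if the kernel supports this file
        cat /proc/22439/stack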

    Read the article

  • AGP 4x slot running at 2x

    - by Wesley
    Hi all, Here are the specs of the machine related to this question: AMD Athlon XP 2400+ @ 2 GHz / 2x 512MB PC3200 DDR RAM / 3D Fuzion 128MB GeForce 6200 AGP / WD 160GB IDE HDD / FIC AM37 (mobo) / Codegen 350W PSU / Windows XP Pro SP3 So, for some odd reason, the GeForce 6200 that is in the AGP 4x slot is not running at 4x, but 2x. The card itself supports AGP 4x and even 8x. When I enter the BIOS, the options for the AGP slot only allow 2x as the max. Before, the max option for the AGP slot in the BIOS was 4x. (I do get the 4x option after I reset CMOS, but it does not stick after a restart.) I don't exactly know what the problem could be, but a few weeks ago, this machine's hard drive was reformatted and got a fresh install of XP. Also, I did install Rivatuner, but I never overclocked it at all. (I uninstalled it after thinking that it had done something.) Otherwise, I cannot figure out the cause of the problem. Does anybody have an idea why this is happening? (Would I have to do another reformat?) Thanks in advance. PS: the other options for the AGP slot are aperture size (which I tried changing, but it did nothing) and FastWrite (enabled by default).

    Read the article

  • Dovecot (on Mac OS X Server 3 - Mavericks) giving “Error: Failed to autocreate mailbox INBOX: Permission denied”

    - by user2965240
    I recently upgraded my server to Mavericks and I thought everything had gone perfectly. As it turns out, that was not exactly the case. After everything worked fine for a week or so, I created a new user and my mail server flat out stopped working. Most frustratingly, there were no log errors (in any of the many mail logs) that shed any light on the cause. After much trial and error, I finally got my mail server functioning (partially) by reinstalling Server and restoring the mail folder from the previous day's backup (yay for backups). However, as is often the case, there were MANY permission issues. After slogging through various permission problems, I believe that my system is now receiving mail; however, none of my users can check it, because every time someone attempts to log in the server generates the following error: Error: Failed to autocreate mailbox INBOX: Permission denied. I would assume that this is yet another permissions problem, but after much searching I still cannot resolve it. To be clear, I don't want Dovecot to autocreate any mailboxes at this point, because all of the mailboxes should already exist. Any help on this would be sooooooo greatly appreciated. P.S. On a side note: why is it that repairing permissions never takes care of difficult issues like these? That would be awfully helpful...
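
    When Dovecot reports that it cannot "autocreate" INBOX, it is usually just failing to open the mail location for that user, so checking what mail_location resolves to and who owns that tree narrows it down. A sketch: the path below is the usual Server.app default, and the _dovecot:mail ownership is an assumption to verify before running any chown:

        # what Dovecot thinks the mail location is
        doveconf mail_location

        # who owns the store (default on OS X Server is roughly this path)
        sudo ls -la /Library/Server/Mail/Data/mail

        # ownership fix if the restore reset it -- verify the owner first
        sudo chown -R _dovecot:mail /Library/Server/Mail/Data/mail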

    Read the article

  • ESXi 5 monitoring

    - by user134880
    I'm new to the ESXi/VMware world and have a few questions about monitoring an ESXi host. Right now I'm using a trial version of ESXi 5.1. I can see performance charts in the vSphere client. Cool. But I want to export this raw data (raw, meaning CSV, TXT or some other format I can parse later) to another server, so I can parse the data there and build custom charts. (Please do not advise trying vCenter; I need custom charts, etc.) I could run esxtop in batch mode and use that data... But... How do the vSphere client performance charts work? Where does the client get the data for the charts? If I use esxtop batch mode it will add extra load to the server; is it possible to use the same source the vSphere client uses for its charts? There is a /var/lib/vmware/hostd/stats/hostAgentStats-20.stats file. It looks like a binary format. As I understand it, that is exactly the data I need? Any ideas how to parse it? Thanks! PS: Maybe someone knows where to find (if such a thing exists) info about the processes running on an ESXi host?
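
    For context: the vSphere client pulls its chart data from the host agent (hostd), which also maintains those .stats files, while esxtop reads the VMkernel counters directly. In practice the usual export route is esxtop batch mode, which writes plain CSV you can copy off and parse; a sketch:

        # 360 samples, 10 s apart (about an hour), CSV to a file
        esxtop -b -d 10 -n 360 > /tmp/esxtop-stats.csv

        # list the processes/worlds running on the host
        esxcli system process list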

    Read the article

  • FreeBSD rc.d script doesn't work when starting up

    - by kastermester
    I am trying to write an rc.d script to start fastcgi-mono-server4 on FreeBSD when the computer starts up, in order to run it with nginx. The script works when I execute it while logged in on the server, but when booting I get the following message:

        eval: -applications=192.168.50.133:/:/usr/local/www/nginx: not found

    The script looks as follows:

        #!/bin/sh

        # PROVIDE: monofcgid
        # REQUIRE: LOGIN nginx
        # KEYWORD: shutdown

        . /etc/rc.subr

        name="monofcgid"
        rcvar="monofcgid_enable"
        stop_cmd="${name}_stop"
        start_cmd="${name}_start"
        start_precmd="${name}_prestart"
        start_postcmd="${name}_poststart"
        stop_postcmd="${name}_poststop"
        command=$(which fastcgi-mono-server4)
        apps="192.168.50.133:/:/usr/local/www/nginx"
        pidfile="/var/run/${name}.pid"

        monofcgid_prestart() {
            if [ -f $pidfile ]; then
                echo "monofcgid is already running."
                exit 0
            fi
        }

        monofcgid_start() {
            echo "Starting monofcgid."
            ${command} -applications=${apps} -socket=tcp:127.0.0.1:9000 &
        }

        monofcgid_poststart() {
            MONOSERVER_PID=$(ps ax | grep mono/4.0/fastcgi-m | grep -v grep | awk '{print $1}')
            if [ -f $pidfile ]; then
                rm $pidfile
            fi
            if [ -n $MONOSERVER_PID ]; then
                echo $MONOSERVER_PID > $pidfile
            fi
        }

        monofcgid_stop() {
            if [ -f $pidfile ]; then
                echo "Stopping monofcgid."
                kill $(cat $pidfile)
                echo "Stopped monofcgid."
            else
                echo "monofcgid is not running."
                exit 0
            fi
        }

        monofcgid_poststop() {
            rm $pidfile
        }

        load_rc_config $name
        run_rc_command "$1"

    In case it is not already super clear, I am fairly new to both FreeBSD and sh scripts, so I'm kind of prepared for some obvious little detail I overlooked. I would very much like to know exactly why this is failing and how to solve it, but I'm also open to ideas if anyone has a better way of accomplishing this.
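
    The boot-time error itself points at the cause: rc scripts run early with a minimal PATH, so $(which fastcgi-mono-server4) comes back empty at boot, and the shell then tries to execute the -applications=... argument as if it were the command, which is exactly the message shown. Hardcoding the full path (location assumed; check where the binary actually lives on your box) should fix it:

        # PATH at boot does not include /usr/local/bin, so avoid `which`
        command="/usr/local/bin/fastcgi-mono-server4"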

    Read the article

  • Strange battery behavior on laptop

    - by EpsilonVector
    My laptop has been behaving rather strangely lately, and I was hoping to get some idea as to what may be causing such symptoms. The problem: when charging, every minute or so it loses connectivity with the AC adapter for a split second, and regains it immediately. When this happens, the little light that indicates the computer is plugged in does flicker off and back on. I checked the adapter by replacing the battery on my laptop, and this indeed solves the problem, so it is probably the battery that is at fault, not the adapter (I also tried moving the adapter's wire around just to make sure it had nothing to do with torn wires). I suppose the obvious solution is to get a new battery, but as far as battery defects go, this is a rather strange one: it loses the connection with the adapter but still powers the computer, and changing the power setting to a balanced plan (it was maximum performance) seems to have solved the problem too. Is there a chance this is not simply the battery, but some kind of other electronic defect? And if not, what can cause it to behave so strangely? PS: I tried to recalibrate it; that didn't help.

    Read the article

  • IPtables: DNAT not working

    - by GetFree
    On a CentOS server I have, I want to forward port 8080 to a third-party web server, so I added this rule:

        iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination thirdparty_server_ip:80

    But it doesn't seem to work. In an effort to debug the process, I added these two LOG rules:

        iptables -t mangle -A PREROUTING -p tcp --src my_laptop_ip --dport ! 22 -j LOG --log-level warning --log-prefix "[_REQUEST_COMING_FROM_CLIENT_] "
        iptables -t nat -A POSTROUTING -p tcp --dst thirdparty_server_ip -j LOG --log-level warning --log-prefix "[_REQUEST_BEING_FORWARDED_] "

    (The --dport ! 22 part is there just to filter out the SSH traffic so that my log file doesn't get flooded.) According to this page, the mangle/PREROUTING chain is the first one to process incoming packets and the nat/POSTROUTING chain is the last one to process outgoing packets. And since the nat/PREROUTING chain comes in the middle of the other two, the three rules should do this: the rule in mangle/PREROUTING logs the incoming packets; the rule in nat/PREROUTING modifies the packets (it changes the destination IP and port); the rule in nat/POSTROUTING logs the modified packets about to be forwarded. Although the first rule does log incoming packets coming from my laptop, the third rule doesn't log the packets that are supposed to be modified by the second rule. It does log, however, packets produced on the server itself, so I know the two LOG rules are working properly. Why are the packets not being forwarded, or at least why are they not being logged by the third rule? PS: there are no more rules than those three. All other chains in all tables are empty and have policy ACCEPT.
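
    Two things must hold before a DNATed packet ever reaches nat/POSTROUTING: the kernel must be forwarding, and the FORWARD chain must accept the rewritten packet (the FORWARD policy here is ACCEPT, which leaves forwarding itself as the prime suspect). And unless the third-party server routes its replies back through this box, the source address also needs NAT. A sketch consistent with the rules above:

        sysctl -w net.ipv4.ip_forward=1

        iptables -A FORWARD -p tcp -d thirdparty_server_ip --dport 80 -j ACCEPT

        # make the replies come back through this server
        iptables -t nat -A POSTROUTING -p tcp -d thirdparty_server_ip --dport 80 -j MASQUERADE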

    Read the article
