Search Results

Search found 12376 results on 496 pages for 'active pattern'.


  • debian VM refusing all traffic apart from http

    - by james lewis
    I've got a VM with a fresh install of Debian (wheezy) and I've installed node and mongo on it. The VM is using a bridged network connection, so I was expecting to be able to point my host machine's browser at the IP address of the Debian VM (port 1337 for my node example or port 28017 for my mongo status page) and see one of the two services (node or mongo). My requests are refused though. As far as I can tell, Debian allows all traffic by default and you have to manually configure iptables to drop traffic. I've checked iptables and it says it's set up to allow anything through. It looks like this:

      root@devbox:/home/jlewis# iptables -L
      Chain INPUT (policy ACCEPT)
      target     prot opt source               destination
      Chain FORWARD (policy ACCEPT)
      target     prot opt source               destination
      Chain OUTPUT (policy ACCEPT)
      target     prot opt source               destination

    As a test I set up nginx and I was able to get to the nginx landing page from my host with no problems, so obviously http traffic is allowed. I then set nginx up to forward all traffic upstream to mongo - no problems there, I was able to see the status page. I then did the same for my example node server and again, no problems. So http traffic is fine, but all other traffic is blocked. Anyone know why Debian might be refusing all other traffic, other than iptables being set up to drop it?

    EDIT - output from netstat -nltp:

      Active Internet connections (only servers)
      Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
      tcp   0      0      127.0.0.1:28017    0.0.0.0:*          LISTEN   1762/mongod
      tcp   0      0      0.0.0.0:51028      0.0.0.0:*          LISTEN   1541/rpc.statd
      tcp   0      0      0.0.0.0:22         0.0.0.0:*          LISTEN   2462/sshd
      tcp   0      0      127.0.0.1:1337     0.0.0.0:*          LISTEN   2794/node
      tcp   0      0      127.0.0.1:25       0.0.0.0:*          LISTEN   2274/exim4
      tcp   0      0      127.0.0.1:27017    0.0.0.0:*          LISTEN   1762/mongod
      tcp   0      0      0.0.0.0:111        0.0.0.0:*          LISTEN   1510/rpcbind
      tcp   0      0      0.0.0.0:80         0.0.0.0:*          LISTEN   2189/nginx
      tcp6  0      0      :::22              :::*               LISTEN   2462/sshd
      tcp6  0      0      :::45335           :::*               LISTEN   1541/rpc.statd
      tcp6  0      0      ::1:25             :::*               LISTEN   2274/exim4
      tcp6  0      0      :::111             :::*               LISTEN   1510/rpcbind
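
    One detail the netstat output above makes visible: node (port 1337) and mongod (ports 27017/28017) are bound to 127.0.0.1, so they only accept connections from the VM itself, while nginx and sshd are bound to 0.0.0.0. A minimal probe that can be run from the host to confirm this; the VM address below is a placeholder, not taken from the question:

      import socket

      VM_ADDR = "192.168.1.50"   # placeholder for the VM's bridged IP address
      PORTS = {80: "nginx", 1337: "node", 28017: "mongo status page"}

      for port, name in PORTS.items():
          s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          s.settimeout(3)
          try:
              s.connect((VM_ADDR, port))
              print(f"{name:17} port {port}: reachable")
          except (socket.timeout, OSError) as exc:
              # "Connection refused" here, with port 80 reachable, points at the
              # 127.0.0.1 bind address rather than at iptables.
              print(f"{name:17} port {port}: {exc!r}")
          finally:
              s.close()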

    Read the article

  • Ubuntu lucid, maverick high iowait

    - by netom
    I'm using Ubuntu, and I've got the same problem with Lucid and Maverick. From time to time, especially a few minutes after boot, the iowait goes between 50-100% and the box is unusable. Everything that tries to access the disk freezes. I have the following setup:

      Hard disk:
        Model Family:     Western Digital Caviar Green family
        Device Model:     WDC WD15EADS-00P8B0
        Serial Number:    WD-WMAVU0391287
        Firmware Version: 01.00A01
        User Capacity:    1.500.301.910.016 bytes

    I have a quad core Intel Core2 Q6600 processor and 4G of memory. When the high iowait occurs, usually 4 processes are active: kdmflush (two procs), jbd2/dm-0-8, jbd2/db-1-8, and a few more starving user processes of course. I know this from top and iotop. Any suggestions about why this is happening? There are a lot of q/a-s about Linux and high iowait, but none of them helped so far. I even tweaked the hard disk not to park the head every 8 seconds (Load cycle count is 50334!!!!! :o), but nothing. The problem persists. Thank you in advance.
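
    If it helps to see which process is actually generating the writes during a spike, a crude watcher along these lines (using the third-party psutil package; the threshold and interval are arbitrary choices) reports the top writers whenever iowait crosses a limit - roughly what iotop shows interactively:

      import psutil   # third-party package

      INTERVAL = 5        # seconds between samples (arbitrary)
      THRESHOLD = 50.0    # iowait percentage treated as a spike (arbitrary)

      def write_bytes_by_pid():
          """Map pid -> (name, bytes written so far), skipping unreadable processes."""
          out = {}
          for p in psutil.process_iter(["name"]):
              try:
                  out[p.pid] = (p.info["name"], p.io_counters().write_bytes)
              except (psutil.AccessDenied, psutil.NoSuchProcess):
                  pass
          return out

      prev = write_bytes_by_pid()
      while True:
          iowait = psutil.cpu_times_percent(interval=INTERVAL).iowait   # Linux-only field
          cur = write_bytes_by_pid()
          if iowait >= THRESHOLD:
              top = sorted(((b - prev.get(pid, ("", 0))[1], name, pid)
                            for pid, (name, b) in cur.items()), reverse=True)[:5]
              print(f"iowait {iowait:.0f}% - top writers in the last {INTERVAL}s:")
              for delta, name, pid in top:
                  print(f"  {name} (pid {pid}): {delta / 2**20:.1f} MiB written")
          prev = cur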

    Read the article

  • How can I connect an integrated webcam with VirtualBox?

    - by Mike Stumpf
    I am trying to use a Windows XP VM for VirtualBox on my Windows 8.1 laptop. I have tried the usual attaching USB device but I get an error saying "USB device is busy with previous request". My webcam is not active in any applications and this happens after a clean reboot of the host, the guest, and VirtualBox. Here are the details:

    Host
      - HP Pavilion 17 Notebook PC (stock)
      - Windows 8.1
      - AMD A10-5750M APU
      - HP Truevision HD (integrated webcam)

    VM
      - I got the VM here: http://www.modern.ie/en-us/virtualization-tools

    VirtualBox
      - VirtualBox 4.3.12 installed
      - VirtualBox Extension Pack installed
      - Guest Additions are installed for 4.3.12
      - Enable USB Controller is checked
      - It does not matter if Enable USB 2.0 Controller is checked or not
      - It does not matter if a USB device filter is set up for the webcam or not

    Here is the error message:

      Failed to attach the USB device DDFEQ01G45BFBV HP Truevision HD [0004] to the virtual machine IE8 - WinXP.
      USB device 'DDFEQ01G45BFBV HP Truevision HD' with UUID {7a2e2a45-974d-482b-9b4e-9f9abbcd0ebb} is busy with a previous request. Please try again later.
      Result Code: E_INVALIDARG (0x80070057)
      Component: HostUSBDevice
      Interface: IHostUSBDevice {173b4b44-d268-4334-a00d-b6521c9a740a}
      Callee: IConsole {8ab7c520-2442-4b66-8d74-4ff1e195d2b6}

    I read on some VirtualBox forums that disabling USB 2.0 support in the host BIOS solved their issue, but I wanted to know if there were any other ideas before I muck around in there. Thanks

    Read the article

  • ASA Slow IPSec Performance with Inconsistent Window Size

    - by Brent
    I have an IPSec link between two sites over ASA 5520s running 8.4(3), and I am getting extremely poor performance when traffic passes over the IPSec VPN. CPU on the devices is ~13%, memory at 408 MB, and there are 2 active VPN sessions; the load on both of the devices is particularly low. Latency between the two sites is ~40ms.

      Wireshark capture of a file transfer between the two hosts over the firewall IPSec VPN, performing at 10MBPS; note the changing window size: http://imgur.com/wGTB8Cr
      Wireshark capture of a file transfer between the two hosts through the firewall but not over IPSec, performing at 55MBPS, with a constant window size: http://imgur.com/EU23W1e

    I'm seeing an inconsistent window size when transferring over the IPSec VPN, ranging from 46,796 to 65,535. When performing at 55+MBPS, the window size is consistently 65,535. Does this point to a problem in my configuration of the IPSec VPN on the ASA, or to a layer 1/2 issue? Using ping xxxxxx -f -l I finally get a non-fragmented reply at 1418 bytes, so 1418 + 28 for IP/ICMP headers = 1446. I know that I have 1500 set on the ASA and Ethernet. I do have "Force Maximum segment size for TCP proxy connection to be" "1380" bytes set under Configuration > Advanced > TCP Options on the ASA. Using iperf, I am getting a "TCP Window Full" every few seconds and ~3 MBPS performance: http://imgur.com/elRlMpY

      show run on the ASA: http://pastebin.com/uKM4Jh76
      show cry accelerator stats: http://pastebin.com/xQahnqK3
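
    One sanity check worth adding (reading the quoted MBPS figures as megabits per second, which is an assumption): if the window really tops out at 65,535 bytes over the tunnel - that is, the TCP window-scale option is not surviving the VPN path - then at ~40 ms of latency a single stream cannot exceed roughly 13 Mbit/s, which is in the neighbourhood of the observed rate. A rough calculation, not a diagnosis of this particular ASA configuration:

      # Ceiling for a single TCP stream with no window scaling: one full
      # 65,535-byte window delivered per round trip.
      rtt_seconds = 0.040        # ~40 ms latency quoted above
      window_bytes = 65535       # largest window without the window-scale option

      ceiling_bits_per_second = window_bytes / rtt_seconds * 8
      print(f"ceiling = {ceiling_bits_per_second / 1e6:.1f} Mbit/s")   # ~13.1 Mbit/s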

    Read the article

  • 8007064c(2011) and 80280007(2009) persistent after all known repairs

    - by tiu44
    I'm on Windows 7 Home x64, and have run into a major issue with Live Messenger (which I use daily). I have full offline installers for both the 2011 and the last Wave 3 2009 (14.0.8117.0416) suites. Both give the following errors:

      Live Essentials 2011 offline installer (official):
        An unknown error occured.
        Error: 0x8007064c
        Source: WLXSuite

      WL 2009 offline installer (official):
        You already have a more recent version of Windows Live.
        Error: OnCatalogResult:0x80280007
        Next steps: If you want to install this older version, first uninstall any later versions that are on your computer.
        Get help with this error

    The 2011 installer also says it is updating Messenger; I don't select anything else. The 2009 installer says there is a newer version that needs to be uninstalled, even after the following procedures. The MS help pages provided all basically lead to using uninstall from the Control Panel, from which I've uninstalled all Live components, including the watcom safety scanner and portable SQL. I've followed online instructions for manually deleting folders from Program Files(x86), AppData, and some others under \User\All Users and the one account on the machine. I've used CCleaner 3.01, ASC 3.7.3 and Beta 4 with deep scan, along with deleting folders, and checked their uninstallers for Live components too, and none were there. The wlmuninstaller.exe tool reports nothing, but after a failed install it finds something, yet fails to clean it even with admin privileges. The same errors still occur after all of that. Google searching I see people on forums suggesting reinstalling the OS because MS doesn't even know how to fix this, but I'm hoping someone here can help.

    NOTE: I don't have System Restore or any other state-freeze utilities going, and I don't have any real-time AV going (I sometimes scan with Defender, anti-rootkits, and online scanners).
    NOTE2: I posted this on windowslivehelp.com before checking whether the place was active, hoping I can get help here. Thanks

    Read the article

  • Spots appear in a rectangle area on screen, ubuntu gnome 13.04, nvidia driver

    - by frozen-flame
    I am using Ubuntu GNOME 13.04 with the nvidia-310 driver installed. My GPU is a GeForce GTX 650. Strange spots frequently appear on screen, with the following traits:

      - Spots are restricted to one or two rectangular areas at any instant.
      - When typing, the pattern of spots changes; they may increase, or all disappear when one key is pressed. Mouse movement also has an influence.
      - The problem lasts for the whole boot. The only way I can get rid of it is to reboot. It can be seen as soon as I reach the desktop, if it appears at all.
      - Simultaneously, the "power off" option is missing from the top-right menu of GNOME 3.
      - There is never such a problem when using Windows 7 on the same computer, nor on Ubuntu with the Nouveau driver.
      - Rarely, half of the screen becomes black.

    I googled a lot. Similar conditions are described, but with no confirmed solution. The uninstall-reinstall strategy does not work. Any clue towards solving this will be appreciated.

    Read the article

  • CentOS - mdadm raid1 drive won't mount to default location

    - by danny
    I'm running CentOS 5.5. The system, boot, swap, etc. are all on /dev/sda, and I have two identical single-partition drives, /dev/sdb1 and /dev/sdc1, that are configured in RAID1 (using mdadm). It was working fine (configured to mount to /mnt/data in the fstab file), and I recently let yum install a couple of automatic updates without paying attention to what they were, and now it doesn't work. RAID is working fine (dmesg shows it gets loaded correctly). mdstat shows:

      # cat /proc/mdstat
      Personalities : [raid1]
      md0 : active raid1 sdc1[1] sdb1[0]
            XXXX blocks [2/2] [UU]
      unused devices: <none>

    Additionally, I can mount it anywhere other than its default directory (i.e. the following works, and I can read data off the drives):

      # mount /dev/md0 /mnt/data2
      EXT3-fs warning: mounting fs with errors, running e2fsck is recommended

    But when I run the following I get:

      # mount -a
      mount: /dev/sdb1 already mounted or /mnt/data busy

    It says nothing is mounted when I try to umount /dev/sdb1 or umount /mnt/data, so I assume it's the second of those errors. However, lsof | grep mnt shows nothing. The weird thing is that I can save files in /mnt/data. So something is obviously mounted there, but when I try to umount it I get the error that nothing is mounted. /etc/mtab doesn't mention any of the partitions or files I am trying to work with, and fstab just has that one line I mentioned above that is supposed to mount my RAID partition. Again, it was all working fine until I let yum run those updates. On Google I've found a few things about dmraid interfering with mdadm after an update, but I yum remove'd dmraid and rebooted and it didn't help. I'm really confused and need to get this working to get on with my work!

    Read the article

  • Subversion Edge LDAP (require CAC Certificate not Username and Password)

    - by Frank Hale
    What I've Done: I've successfully installed and configured Subversion Edge 3.1.2 with LDAP support on a Windows 2008 server. I have configured LDAP users and am able to use LDAP credentials to work on repositories just fine. No issues whatsoever. Works great!

    What I Want To Do: I've been searching for several hours now in hopes of finding some information on how to configure the Subversion Edge server to require client certificates for user authentication against an LDAP environment. I have not found anything yet that gives me an indication of how to do it. I know there are SVN clients that are capable of prompting for CAC certificates, but I cannot figure out how to set my server up to require it. NOTE: CAC authentication is already set up and working in the Windows environment.

    Desired Outcome: When running svn commands that require authentication against my Subversion Edge server, I want it to prompt me for my CAC certificate instead of my Active Directory username and password. If anyone has any information on this I'd greatly appreciate it.

    EDIT: I'm still digging, so if I find out anything I'll update this question with what I found.

    Read the article

  • Sonicwall TZ210 - Set up public wifi on separate subnet & interface

    - by thomasjbarrett
    I want to set up a public wifi by connecting another router to the X6 interface, and put it on a separate subnet (192.168.10.0/24) and in the DMZ zone to keep it away from the regular LAN. I believe I have the network settings correct: the router has acquired the IP and DNS information from the TZ210, and the TZ210 shows it as an active DHCP lease. X6 is in the DMZ. I now have a routing/NAT/firewall problem, since I can't get any traffic to travel from the subnet to the internet. I can't get to any external websites and can't ping the TZ210 from the subnet. X0 is the regular LAN, and X1 is the WAN. Looking for any tips or tutorials on this. Here are my current relevant rules:

    Routing
      Source: X6 Subnet | Destination: Any | Service: Any | Gateway: Default Gateway | Interface: X6
      Source: Any | Destination: X6 Subnet | Service: Any | Gateway: 0.0.0.0 | Interface: X6

    NAT Policies
      Source Original: Any, Translated: WAN IP | Destination Original: Any, Translated: Original | Inbound: X6 | Outbound: X1
      Source Original: Any, Translated: U0 IP | Destination Original: Any, Translated: Original | Inbound: X6 | Outbound: U0

    Firewall
      DMZ -> LAN : Deny All
      DMZ -> WAN : Allow All
      LAN -> DMZ : Allow All
      WAN -> DMZ : Allow All

    Read the article

  • Apache Process question about RAM usage

    - by Andrew Fashion
    So every time I load a new page, I notice a new httpd process opens, and each process says it's using anywhere from 2-4.5% of memory. Does that mean every single process is running at that time using 2-4% of RAM? It's a brand new server and I'm the only one on the server at the moment. Or does it mean all the other processes are dying, and only the new one is active? Because 4% of my 2048MB of RAM is already 82MB for just one process!?!? Let me know, because I am trying to determine what I need to beef my server up with in order to handle high loads of traffic. I expect to get 20,000 uniques per day on launch. I am currently running a dual quad Xeon server with only 2GB of RAM; I will upgrade to 8GB or more shortly. Let me know what you suggest! Thank you

      [root@D18634 log]# top | grep 'httpd'
      11315 apache 15 0 362m 82m 24m R 12.3 4.1 0:03.00 httpd
      11310 apache 16 0 322m 41m 21m S  5.7 2.1 0:02.98 httpd
      11315 apache 15 0 362m 83m 25m S 24.3 4.1 0:03.73 httpd
      11319 apache 16 0 324m 42m 20m R  1.0 2.1 0:01.85 httpd
      11319 apache 16 0 362m 82m 23m R 78.5 4.1 0:04.21 httpd
      11321 apache 16 0 323m 44m 23m S 35.3 2.2 0:04.13 httpd
      11319 apache 15 0 361m 82m 23m S  8.3 4.1 0:04.46 httpd
      11321 apache 15 0 323m 44m 23m S 35.9 2.2 0:05.21 httpd
      11313 apache 15 0 324m 41m 19m S 48.6 2.1 0:03.23 httpd
      11322 apache 16 0 354m 72m 20m R 11.0 3.6 0:05.11 httpd
      11322 apache 16 0 354m 72m 20m S 23.9 3.6 0:05.83 httpd
      11314 apache 16 0 355m 75m 22m R 18.3 3.7 0:04.64 httpd
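
    For interpreting the numbers above: top's RES (and %MEM) counts pages that Apache workers share with each other (code, shared libraries), so simply summing the per-process percentages overstates real usage. A rough comparison, sketched with the third-party psutil package rather than anything from the original post (PSS is Linux-only and usually needs root):

      import psutil   # third-party package; PSS needs Linux and usually root

      rss_total = 0
      pss_total = 0
      for proc in psutil.process_iter(["name"]):
          if proc.info["name"] != "httpd":
              continue
          try:
              mem = proc.memory_full_info()   # has uss/pss fields on Linux
          except (psutil.AccessDenied, psutil.NoSuchProcess):
              continue
          rss_total += mem.rss
          pss_total += mem.pss

      print(f"naive RSS sum: {rss_total / 2**20:.0f} MiB")
      print(f"PSS sum      : {pss_total / 2**20:.0f} MiB  (shared pages counted once)")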

    Read the article

  • Using IIS7 as a reverse proxy

    - by Eric Petroelje
    I'm setting up a server at home to host a few small websites. One of them is .NET based and needs IIS; the others are PHP based and need Apache. So, I have both IIS 7 and Apache 2.2.x installed on my server, with IIS on port 80 and Apache running on port 8080. I would like to set up IIS to work as a reverse proxy, forwarding the requests for the Apache sites to port 8080 and serving the requests for the .NET site itself, based on the host headers. Like this:

      www.mydotnetsite.com/*  -> IIS -> serve from IIS
      www.myapachesite.com/*  -> IIS -> forward to apache on port 8080
      www.myothersite.com/*   -> IIS -> forward to apache on port 8080

    I did a bit of googling and it seemed like the Application Request Routing feature would do what I needed, but I can't seem to get it to work the way I want it to. I can get it to forward ALL traffic to the Apache server, and I can get it to forward traffic with a specific URL pattern to the Apache server, but I can't seem to get it to forward based on the host headers (e.g. "forward all requests for www.apachesite.com -> localhost:8080"). So the question is, how would I go about configuring ARR to do this? Or do I need a different tool? I'm also open to using Apache as the reverse proxy and forwarding the .NET site requests to IIS instead, if that's easier (running Apache on port 80 and IIS on 8080).

    Read the article

  • Architecture for highly available MySQL with automatic failover in physically diverse locations

    - by Warner
    I have been researching high availability (HA) solutions for MySQL between data centers. For servers located in the same physical environment, I have preferred dual master with heartbeat (floating VIP) using an active-passive approach. The heartbeat runs over both a serial connection and an ethernet connection. Ultimately, my goal is to maintain this same level of availability, but between data centers. I want to fail over dynamically between both data centers without manual intervention and still maintain data integrity. There would be BGP on top, and web clusters in both locations, which would have the potential to route to the databases on both sides. If the Internet connection went down at site 1, clients would route through site 2, to the web cluster, and then to the database in site 1 if the link between both sites were still up. With this scenario, due to the lack of a physical (serial) link, there is a higher chance of split brain. If the WAN went down between both sites, the VIP would end up on both sites, where a variety of unpleasant scenarios could introduce desync. Another potential issue I see is the difficulty of scaling this infrastructure to a third data center in the future. The network layer is not a focus; the architecture is flexible at this stage. Again, my focus is a solution for maintaining data integrity as well as automatic failover with the MySQL databases. I would likely design the rest around this. Can you recommend a proven solution for MySQL HA between two physically diverse sites? Thank you for taking the time to read this. I look forward to reading your recommendations.

    Read the article

  • Copy from CDROM is very slow in Ubuntu

    - by ???
    I'm using this command to copy a CD-ROM image:

      # dd if=/dev/sr0 of=./maverick.iso

    But it's very slow, at about 350 kbytes/sec. I've searched Google and tried the command:

      # hdparm -vi /dev/sr0
      /dev/sr0:
       HDIO_DRIVE_CMD(identify) failed: Bad address
       IO_support   =  1 (32-bit)
       readonly     =  0 (off)
       readahead    = 256 (on)
       HDIO_GETGEO failed: Inappropriate ioctl for device
       Model=DVD-ROM UJDA775, FwRev=DA03, SerialNo=
       Config={ Fixed Removeable DTR<=5Mbs DTR>10Mbs nonMagnetic }
       RawCHS=0/0/0, TrkSize=0, SectSize=0, ECCbytes=0
       BuffType=unknown, BuffSize=unknown, MaxMultSect=0
       (maybe): CurCHS=0/0/0, CurSects=0, LBA=yes, LBAsects=0
       IORDY=yes, tPIO={min:180,w/IORDY:120}, tDMA={min:120,rec:120}
       PIO modes:  pio0 pio1 pio2 pio3 pio4
       DMA modes:  sdma0 sdma1 sdma2 mdma0 mdma1 mdma2
       UDMA modes: udma0 udma1 *udma2
       AdvancedPM=no
       Drive conforms to: ATA/ATAPI-5 T13 1321D revision 3: ATA/ATAPI-1,2,3,4,5
       * signifies the current active mode

    Seems like DMA is already on. And a device test gives:

      # hdparm -t /dev/sr0
      /dev/sr0:
       Timing buffered disk reads: 2 MB in 6.58 seconds = 311.10 kB/sec

      # sudo hdparm -tT /dev/sr0
      /dev/sr0:
       Timing cached reads:        2 MB in 2.69 seconds = 760.96 kB/sec
       Timing buffered disk reads: 4 MB in 5.19 seconds = 789.09 kB/sec

    The CD-ROM device and disc should be okay because I can copy it very fast in Windows, using the UltraISO utility. So I guess there is something not configured right in Ubuntu - but what?
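
    Part of the slowness may simply be dd's default 512-byte block size, which is punishing on optical drives. A minimal sketch of the same copy with 1 MiB reads (equivalent in spirit to dd if=/dev/sr0 of=./maverick.iso bs=1M; it needs enough privileges to read /dev/sr0):

      # 1 MiB reads instead of dd's default 512-byte blocks; a multiple of the
      # 2048-byte CD sector size.
      CHUNK = 1024 * 1024

      with open("/dev/sr0", "rb") as src, open("maverick.iso", "wb") as dst:
          while True:
              buf = src.read(CHUNK)
              if not buf:
                  break
              dst.write(buf)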

    Read the article

  • Multi-IP address zimbra server DNS PTR records and spam

    - by David Fraser
    We have a mail server running Zimbra (ZCS 6.0.8). The server has 5 active public IP addresses in the same subnet (.226-.230). I currently have A records for each of these (host0.domain.com..host4.domain.com), with the main host.domain.com of the machine pointing to .226. Our host has ended up being listed on the SORBS DUHL list (even though it's in a server farm). According to them, you can get removed quickly by checking that your host has an MX record, an A record, and a PTR record that points back to the hostname given in the MX record. I tried setting the PTR records so that each of these addresses resolved back to its A record (i.e. .228 had a PTR to host2.domain.com). However, I then got mail rejected by other servers, because when Postfix (under Zimbra control) sends out mail, it uses the main hostname for the HELO - there doesn't seem to be any way to override it. So the PTR records currently say host.domain.com for all 5 IP addresses. What's the correct way to handle this? Should I have an A record for the domain that points to all the IP addresses (for round-robin handling)? I'm nervous of changes that could cause problems, so I'm wondering what the standard way to handle a multiple-IP-address mail server is.
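
    A small forward-confirmed-reverse-DNS check along these lines can verify the property described in the delisting instructions above - each PTR name resolving back to the address it came from, with the HELO name among them. The addresses below are placeholders, not the real .226-.230 block:

      import socket

      # Placeholders for the five public addresses; not the real IPs.
      ADDRESSES = ["198.51.100.226", "198.51.100.227", "198.51.100.228",
                   "198.51.100.229", "198.51.100.230"]

      for addr in ADDRESSES:
          try:
              ptr_name, _, _ = socket.gethostbyaddr(addr)             # PTR lookup
              _, _, forward_ips = socket.gethostbyname_ex(ptr_name)   # A lookup of that name
          except (socket.herror, socket.gaierror) as exc:
              print(f"{addr}: lookup failed ({exc})")
              continue
          verdict = "OK" if addr in forward_ips else "MISMATCH"
          print(f"{addr}: PTR={ptr_name} A={forward_ips} -> {verdict}")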

    Read the article

  • Postfix + procmail - delivery fails because "can't create user output file" - on CentOS 6.2

    - by jshin47
    I verified that my postfix installation / relaying setup worked. Now I am having trouble with procmail. I have it wired to postfix with the following command:

      mailbox_command = /usr/bin/procmail -f -a "$USER"

    I have nothing in my procmail config but the following:

      LOGFILE=/var/procmailrc/log

    And I send an email to a recipient that previously worked (before I attached procmail). Now it fails with this error:

      Apr  6 14:07:05 localhost postfix/qmgr[15194]: D0C3DFF6E1: from=<[email protected]>, size=938, nrcpt=1 (queue active)
      Apr  6 14:07:05 localhost postfix/local[1953]: D0C3DFF6E1: to=<[email protected]>, orig_to=<postmaster>, relay=local, delay=0.05, delays=0.02/0.01/0/0.02, dsn=5.2.0, status=bounced (can't create user output file. Command output: procmail: Couldn't create "/var/spool/mail/nobody" procmail: Couldn't read "//root" )
      Apr  6 14:07:05 localhost postfix/bounce[1955]: warning: D0C3DFF6E1: undeliverable postmaster notification discarded
      Apr  6 14:07:05 localhost postfix/qmgr[15194]: D0C3DFF6E1: removed

    It seems like there is some sort of permissions issue, but I do not know what the problem is, nor do I understand how I would go about diagnosing it further. The logfile that I specified is empty, by the way. How can I make procmail + postfix work?

    Read the article

  • Why do I often have to refresh pages I navigate to once for them (or content in them) to load?

    - by GetOutOfBox
    I have noticed a bizarre pattern when using my PC: when I open a link to a website, it will often take a very long time to load, or time out. Sometimes content on the website will be drawn, but again, it seems to get "stuck" for an unusual amount of time before finishing. Most affected is YouTube; almost every time I navigate to a YouTube video from another website such as Google, the video will not begin playing, but will instead just display the player controls with a black screen where the video should be and the buffering symbol, usually before displaying an error such as "The video failed to load". The unusual part of this problem is that whenever this happens, refreshing the page always causes it to load almost immediately the second time around, without any problems. Note that I'm not talking about how some browsers will briefly redraw whatever has been cached when the page is refreshed or loading is stopped, but about the second load of the website being faster. I have done my best to rule out some of the obvious causes:

      - My Windows 7 desktop computer is the only device that seems to be affected. I use Firefox on it (latest version, Flash updated, etc).
      - My connection has more than enough bandwidth (30 megabits down, 4 up), and I've even tried QoSing all other devices to make sure this isn't happening due to usage spikes.
      - Wireshark is not showing any clearly unusual network activity (i.e. frequently dropped packets).
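
    One low-effort thing to measure, given that a refresh always succeeds immediately: whether the first DNS lookup is what stalls (a slow or flaky first resolver response would be cached afterwards, which matches the symptom). A small timing probe, with arbitrary example hostnames:

      import socket
      import time

      HOSTS = ["www.youtube.com", "www.google.com", "s.ytimg.com"]   # arbitrary examples

      for host in HOSTS:
          start = time.monotonic()
          try:
              socket.getaddrinfo(host, 443)
              status = "resolved"
          except socket.gaierror as exc:
              status = f"failed ({exc})"
          print(f"{host:20} {status} after {time.monotonic() - start:.2f}s")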

    Read the article

  • Procmail Postfix issue

    - by Blucreation
    Our server is running CentOS and uses Postfix:

      Nov  1 11:31:52 webserver postfix/smtpd[30424]: 822A91872F: client=unknown[5.133.168.42], sasl_method=PLAIN, [email protected]
      Nov  1 11:31:52 webserver postfix/cleanup[30427]: 822A91872F: message-id=<[email protected]>
      Nov  1 11:31:52 webserver postfix/qmgr[1067]: 822A91872F: from=<[email protected]>, size=620, nrcpt=1 (queue active)
      Nov  1 11:31:52 webserver postfix/virtual[30505]: 822A91872F: to=<[email protected]>, relay=virtual, delay=0.12, delays=0.12/0/0/0, dsn=2.0.0, status=sent (delivered to maildir)
      Nov  1 11:31:52 webserver postfix/qmgr[1067]: 822A91872F: removed
      Nov  1 11:31:52 webserver postfix/smtpd[30424]: disconnect from unknown[5.133.168.42]

    I have this in my /etc/postfix/main.cf:

      mailbox_command = /usr/bin/procmail -a "$EXTENSION"

    My /etc/procmailrc contains:

      PATH="/usr/bin"
      SHELL="/bin/bash"
      LOGFILE="/var/log/procmail.log"
      VERBOSE="YES"
      LOG="#TEST#"

    I don't think procmail is picking up my procmailrc, as nothing ever gets logged for normal emails. If I type this:

      procmail DEFAULT=/dev/null VERBOSE=yes LOGFILE=/var/log/procmail.log /dev/null </dev/null

    I get entries in my log file, so I know procmail is working. Am I doing something wrong? Am I missing something? I eventually want my rule to call a php script only if the subject contains "SUPPORT TICKET" and the recipient is "[email protected]", but that's for once I have this issue solved.
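
    For the eventual rule, the matching logic is just two conditions: the recipient address and a subject containing "SUPPORT TICKET". A sketch of that logic using Python's standard email parser, offered only as an illustration of the match the procmail recipe will need to express (the support address is a placeholder, since the real one is redacted above):

      import sys
      from email import message_from_binary_file
      from email.header import decode_header, make_header

      SUPPORT_ADDR = "support@example.com"   # placeholder - the real address is redacted above

      def is_support_ticket(raw) -> bool:
          msg = message_from_binary_file(raw)
          subject = str(make_header(decode_header(msg.get("Subject", ""))))
          recipient = msg.get("To", "").lower()
          return "SUPPORT TICKET" in subject and SUPPORT_ADDR in recipient

      if __name__ == "__main__":
          # e.g. pipe a raw message in:  ./check_ticket.py < message.eml
          print("match" if is_support_ticket(sys.stdin.buffer) else "no match")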

    Read the article

  • Mac: window manager frozen, have ssh access

    - by Bernd
    I have a Mac which regularly runs into a problem. The user interface stops responding, showing a "frozen" user interface. The mouse is still moving, but clicking does not trigger anything. This happens about once a week. The solution so far is to force switch-off the Mac and reboot it. I have ssh root access to the Mac. Killing (kill -9) the active application has no visible impact on what is shown on the screen. Any ideas on how to diagnose this? Is there a way to restart the window manager from the ssh shell? Killing /System/Library/Frameworks/ApplicationServices.framework/Frameworks/CoreGraphics.framework/Resources/WindowServer seems not to be possible. The Mac is an early 2008 iMac and runs Lion with the latest updates. /Library/Logs/DiagnosticReports is empty.

    Update: The problem remains after the update to Mountain Lion. The WindowServer process is in "uninterruptible wait" state ("U" flag in ps output set):

      imac:~ root# ps ax|awk "NR==1|| /WindowServer/"|grep -v awk
        PID TT STAT     TIME COMMAND
         86 ?? Us   50:51.69 /System/Library/Frameworks/ApplicationServices.framework/Frameworks/CoreGraphics.framework/Resources/WindowServer -daemon

    Any idea for diagnosing what blocks the process? Any idea for "waking up" the process?

    Read the article

  • JVM system time runs faster than HP UNIX OS system time

    - by winston
    Hello, I have the following output from a simple debug jsp:

      Weblogic Startup Since:    Friday, October 19, 2012, 08:36:12 AM
      Database Current Time:     Wednesday, December 12, 2012, 11:43:44 AM
      Weblogic JVM Current Time: Wednesday, December 12, 2012, 11:45:38 AM

    Line 1 is a variable recorded during WebLogic webapp startup. Line 2 is the output of the database query select sysdate from dual; Line 3 is the output of the Java code new Date(). I have checked with the shell date command that line 2 agrees with the OS time. The output of line 3 is mysterious; I don't know how the JVM arrives at it. On another machine with the same setup, the same jsp outputs this:

      Weblogic Startup Since:    Tuesday, December 11, 2012, 02:29:06 PM
      Database Current Time:     Wednesday, December 12, 2012, 11:51:48 AM
      Weblogic JVM Current Time: Wednesday, December 12, 2012, 11:51:50 AM

    And another machine:

      Weblogic Startup Since:    Monday, December 10, 2012, 05:00:34 PM
      Database Current Time:     Wednesday, December 12, 2012, 11:52:03 AM
      Weblogic JVM Current Time: Wednesday, December 12, 2012, 11:52:07 AM

    Findings: the pattern shows that the longer WebLogic has been up, the larger the discrepancy between OS time and JVM time. Can anybody help with the HP JVM? On HP-UX, NTP is run daily. Anyway, here are the server versions:

      HP-UX machinex B.11.31 U ia64 2426956366 unlimited-user license
      java version "1.6.0.04"
      Java(TM) SE Runtime Environment (build 1.6.0.04-jinteg_28_apr_2009_04_46-b00)
      Java HotSpot(TM) Server VM (build 11.3-b02-jre1.6.0.04-rc2, mixed mode)
      WebLogic Server Version: 10.3.2.0

    Java properties:

      java.runtime.name=Java(TM) SE Runtime Environment
      java.runtime.version=1.6.0.04-jinteg_28_apr_2009_04_46-b00
      java.vendor=Hewlett-Packard Co.
      java.vendor.url=http\://www.hp.com/go/Java
      java.version=1.6.0.04
      java.vm.name=Java HotSpot(TM) 64-Bit Server VM
      java.vm.info=mixed mode
      java.vm.specification.vendor=Sun Microsystems Inc.
      java.vm.vendor="Hewlett-Packard Company"
      sun.arch.data.model=64
      sun.cpu.endian=big
      sun.cpu.isalist=ia64r0
      sun.io.unicode.encoding=UnicodeBig
      sun.java.launcher=SUN_STANDARD
      sun.jnu.encoding=8859_1
      sun.management.compiler=HotSpot 64-Bit Server Compiler
      sun.os.patch.level=unknown
      os.name=HP-UX
      os.version=B.11.31

    Read the article

  • Logmein does not work at home?

    - by Littlet-ENG
    I've been using LogMeIn successfully in many situations and have had very good success. Our company has a LogMeIn Pro account. I have used this to share my desktop with customers. At work, I have had no problem with my laptop. At home, one program (SolidWorks) that I need to share with my customers will not display the active screen. I spent 45 min on the phone with both the CAD software support and LogMeIn support with no help. I need help in narrowing down what the problem is on my computer. The SolidWorks support guys got another remote-sharing product to work, so it's not the program. I can get LogMeIn to work at the office, so it's not the settings of the LogMeIn Pro account. The LMI people say it's a setting on my computer(?).

      - Internet is fast enough at home
      - Can't narrow down the problem
      - Changed graphical settings and that didn't work

    Any suggestions?

    Read the article

  • Unable to ping domain.local, but can ping server.domain.local

    - by Force Flow
    I have a single Windows 2008 server running Active Directory, Group Policy, and DNS. DHCP is running from the firewall (this is because there are multiple branch locations and each location has its own firewall supplying DHCP; but for this problem, the server and workstation are at the same location). On an XP workstation, if I try to visit \\domain.local or ping domain.local, the workstation can't find it. A ping returns "Ping request could not find host domain.local". If I try to visit \\server or \\server.domain.local, or ping server or server.domain.local, I'm able to connect normally. If I ping or visit domain.local on the server itself, I'm able to connect normally. A records are in place in the DNS service for server, domain.local, and server.domain.local. A reverse lookup zone is also enabled and PTR records are in place. If I wait 20-30 minutes, I am eventually able to ping and visit domain.local - but, when attempting to ping, it takes 30 seconds to return an IP address. I am also unable to join a new workstation to the domain during this wait period. If I try, the error message returned is "network path not found". Is there something I'm missing?

    Read the article

  • unable to destroy windows 2008 r2 failover cluster after SAN rebuild

    - by Zack
    I created a Windows 2008 R2 failover cluster for a SQL 2008 active/passive cluster. This two-node cluster was using a SAN device for a quorum disk resource as well as an MSDTC resource. Well... I decided to reconfigure the SAN device, but I didn't destroy the cluster first. Now that the quorum disk and MSDTC disk are completely gone, the cluster is obviously not working. But I can't even destroy the cluster and start again. I've tried from the Windows clustering tool, as well as the command line. I was able to get the cluster service to start using the "/fixquorum" parameter. After doing this I was able to remove the passive node from the cluster, but it wouldn't let me destroy the cluster because the default resource group and MSDTC are still attached as resources. I tried to delete these resources from both the GUI tool and the command line. It either freezes for several minutes and crashes the program, or - once - it even BSOD'd the server. Can someone advise on how to destroy this cluster so I can start over?

    Read the article

  • MySQL not releasing temp file descriptors

    - by Wakaru44
    Since a few days ago, we've been experiencing some serious problems with our MySQL installation: MySQL keeps opening temporary files (normal behaviour) but these files are never released. The consequence is that, eventually, the disk space is exhausted and we have to restart the service and clean up /tmp manually. Using lsof, we see something like this:

      mysqld 16866 mysql  5u REG 8,3 0  692 /tmp/ibyWJylQ (deleted)
      mysqld 16866 mysql  6u REG 8,3 0  707 /tmp/ibf5adsT (deleted)
      mysqld 16866 mysql  7u REG 8,3 0  728 /tmp/ibGjPRyW (deleted)
      mysqld 16866 mysql  8u REG 8,3 0 5678 /tmp/ibMQDLMZ (deleted)
      mysqld 16866 mysql 13u REG 8,3 0 5679 /tmp/ibQAnM42 (deleted)

    Maybe it's not related, but when we shut down the server, the files are finally freed, and we can see the following warnings in the MySQL log:

      121029  7:44:27 [Warning] /usr/local/mysql/bin/mysqld: Forcing close of thread 1333 user: 'xxx'
      121029  7:44:27 [Warning] /usr/local/mysql/bin/mysqld: Forcing close of thread 1156 user: 'yyy'
      121029  7:44:27 [Warning] /usr/local/mysql/bin/mysqld: Forcing close of thread 1151 user: 'zzz'

    where 'xxx', 'yyy' and 'zzz' are distinct MySQL users (and the only 3 users with active connections to the database). We have a few theories:

      - There is a problem in the OS that keeps file handles open. Could it be that the OS "delete" operation blocks the threads until shutdown? This may explain the warning at shutdown and the fact that files are finally deleted when the process dies.
      - Until now, data sets were so small that temp files were relatively small and there was enough time to release the file handles without exhausting disk space.

    We are using MySQL 5.5 on RHEL 6.2 with the default kernel.
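
    To quantify how much space those deleted-but-still-open temporary files are pinning down (they no longer show up in /tmp listings but keep their blocks until the descriptor closes), a small /proc walk like this can help; it is only a diagnostic sketch and should be run as root:

      import os

      total = 0
      for pid in filter(str.isdigit, os.listdir("/proc")):
          try:
              with open(f"/proc/{pid}/comm") as f:
                  if f.read().strip() != "mysqld":
                      continue
              fd_dir = f"/proc/{pid}/fd"
              for fd in os.listdir(fd_dir):
                  path = os.path.join(fd_dir, fd)
                  target = os.readlink(path)
                  if target.endswith("(deleted)"):
                      size = os.stat(path).st_size    # size of the still-open file
                      total += size
                      print(f"pid {pid} fd {fd}: {target} ({size} bytes)")
          except OSError:
              continue

      print(f"space held by deleted-but-open files: {total / 2**20:.1f} MiB")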

    Read the article

  • Netgear GS724Tv3 and link aggregation Mac OS X Server 10.6.8

    - by Manca Weeks
    I need to link-aggregate 2 sets of ports on the Netgear GS724T with my Apple server tower (latest generation). I have 2 built-in ports and 2 ports on a PCIe ethernet card. It is not obvious to me how to properly configure the Netgear end. I have access to the Netgear box through its web interface; I just don't know how to set the settings properly. I tried going to Netgear for help, but they said my software support has expired. I bought this unit on their recommendation - they say it is compatible with the 802.3ad protocol. I cannot locate any references to this protocol in the manual, and I noticed some people in forums say that this device is actually not compatible with 802.3ad and that Netgear is misleading potential customers by saying it is. Any help will be appreciated. Thanks, M

    My own answer - posted as an edit because of restrictions on my user: OK folks, it turns out one must use a Windows machine on this one or nothing makes sense. I was unable to get much farther than viewing the default inactive LAGs because in Firefox and Safari on Mac things don't behave - i.e. the Apply buttons (supposedly JavaScript) don't work. You can view the configurations, but none of the modifications you make stick. Then, in Switching > LAGs, choose the ports to include and make sure you switch the LAG type from Static to LACP, and all is well. I haven't tested the performance of the config yet, but both sides appear to be happy with the configuration: the Apple server says the link is active, and so does the Netgear. I will report any other discoveries. Thanks to all who read and to user84104 for responding. M

    Read the article

  • Why is my global security group being filtered out of my logon token?

    - by Jay Michaud
    While investigating the effects of filtered tokens on my file permissions, I noticed that one of my global security groups is being filtered in addition to the regular system-defined filtered groups. My Active Directory environment is a single-domain forest on the Windows Server 2003 functional level. I'll call the domain "mydomain.example.com". I am logged onto a Windows Server 2008 Enterprise Edition machine (not a domain controller) as a member of the "MYDOMAIN\Domain Admins" group and the "MYDOMAIN\MySecurityGroup" global security group (among others). When I run "whoami /groups" from an elevated command prompt, I see the full list of groups to which my account belongs, as expected. When I run "whoami /groups" from a regular, non-elevated command prompt, I see the same list of groups, but the following groups are described as "Group used for deny only":

      1. BUILTIN\Administrators
      2. MYDOMAIN\Schema Admins
      3. MYDOMAIN\Offer Remote Assistance Helpers
      4. MYDOMAIN\MySecurityGroup

    Numbers 1 through 3 above are expected based on Microsoft documentation; number 4 is not. The "MYDOMAIN\MySecurityGroup" global security group is a group that I created. It contains three non-built-in global security groups, and these security groups contain only non-built-in user accounts. (That is, I created all of the accounts and groups that are members of the "MYDOMAIN\MySecurityGroup" global security group.) There are other, similar groups of which my account is a member that are not being filtered out of my logon token, and this group is not granted any specific user rights in the security settings of this computer or in Group Policy. What would cause this one group to be filtered out of my logon token?

    Read the article
