Search Results

Search found 5619 results on 225 pages for 'starts'.

  • How to install 32-bit libraries using Debian Testing

    - by bgoodr
    Question: What is the way to determine, ahead of time and without doing a full install of 64-bit Debian Testing NETINST, when Debian Testing has 32-bit libraries available, fully working, and installable, so that the following command works without broken package errors?

        apt-get install ia32-libs ia32-libs-gtk

    The errors that occur when 32-bit libraries are not available, still in some broken state, or otherwise broken are detailed below. I have already concluded that "just install Stable" is my stop-gap measure for now, but I would like to know the answer to the above question so as to avoid a lengthy installation process only to run into these problems at the very end.

    Details: I downloaded the 64-bit Debian Testing netinst a couple of days ago. This was "Jessie", built 20131014-06:07, via http://tinyurl.com/lejpa. This is a weekly testing build. Yes, I know I should expect problems, and I did. I managed to get it completely installed and was able to boot into GNOME, but could not get past the 32-bit library problem. The problem starts when I attempt to install the 32-bit libraries via:

        apt-get install ia32-libs ia32-libs-gtk

    which returns:

        root@breath:~# apt-get install ia32-libs ia32-libs-gtk
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         ia32-libs : Depends: ia32-libs-i386 but it is not installable
         ia32-libs-gtk : Depends: ia32-libs-i386 but it is not installable
                         Depends: ia32-libs-gtk-i386 but it is not installable
        E: Unable to correct problems, you have held broken packages.

    I then found an old (2012 is old to me) answer at "ia32-libs : Depends: ia32-libs-i386 but it is not installable" and even tried what they suggested there, which was:

        dpkg --add-architecture i386
        apt-get update

    After executing the above, I tried again but got:

        root@breath:~# apt-get install ia32-libs ia32-libs-gtk
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         ia32-libs : Depends: ia32-libs-i386
         ia32-libs-gtk : Depends: ia32-libs-i386
        E: Unable to correct problems, you have held broken packages.
        root@breath:~#

    And then I tried this:

        root@breath:~# dpkg --get-selections | grep hold

    And that returned nothing. Not only are there broken packages, the system doesn't even know which packages are broken, so Debian Stable is the only solution I know of right now. Hence my question above.
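
    A hedged sketch of one way to probe this ahead of time from any existing Debian system pointed at the testing repositories: apt-get's -s flag only simulates the transaction, so dependency breakage surfaces without installing anything.

        # enable the foreign architecture and refresh the package lists
        dpkg --add-architecture i386
        apt-get update
        # -s (--simulate) resolves the full dependency tree but makes no
        # changes, so "unmet dependencies" errors appear here, before any
        # lengthy real installation
        apt-get -s install ia32-libs ia32-libs-gtk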

  • Using Amazon S3 for multiple remote data site uploads, securely

    - by Aitch
    I've been playing about with Amazon S3 a little for the first time and like what I see, for various reasons relating to my potential use case.

    We have multiple (online) remote server boxes harvesting sensor data that is regularly uploaded every hour or so (rsync'ed) to a VPS server. The number of remote server boxes is growing regularly and is forecast to keep growing (hundreds). The servers are geographically dispersed. They are also automatically built, and therefore generic, with standard tools and nothing bespoke per location. The data is many hundreds of files per day.

    I want to avoid a situation where I need to provision more VPS storage, or additional servers, every time we hit the VPS capacity limit, after every N server deployments, whatever N might be. The remote servers can never be considered fully secure, since we don't know what might happen to them when we are not looking. Our current solution is a bit naive and simply restricts inbound rsync, over ssh only, to known MAC-address-named directories and a known public key. There are plenty of holes to pick in this, I know.

    Let's say I write or use a script like s3cmd/s3sync to push up the files:

    - Would I need to manage hundreds of access keys and have each server customized to include its own? (Doable, but key management becomes nightmarish?)
    - Could I restrict inbound connections somehow (e.g. by MAC address), or just allow write-only access to any client running the script? (I could deal with a flood of data if someone got into a system?)
    - Having a bucket per remote machine does not seem feasible due to bucket limits?
    - I don't think I want to use a single common key: if one machine were breached, a malicious hacker could potentially get access to the filestore key and start deleting data for all clients, correct?

    I hope my inexperience has not blinded me to some other solution that might be suggested! I've read lots of examples of people using S3 for backup, but can't really find anything about this sort of data collection, unless my Google terminology is wrong...

    I've written more than I should here; perhaps it can be summarised thus: in a perfect world I just want one of our techs to install a new remote server into a location and have it automagically start sending files home with little or no intervention, while minimising risk. Pipedream or feasible?

    TIA, Aitch
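
    A hedged sketch of what the upload side could look like with one bucket and a per-host prefix; the bucket name, paths, and the assumption of per-host credentials (provisioned at build time and limited by an IAM policy to write-only access under the host's own prefix) are all illustrative, not a definitive design.

        #!/bin/sh
        # hypothetical values baked in when the server is auto-built
        BUCKET=sensor-data-example
        HOST_ID=$(hostname)
        # push pending files under this host's own prefix; the trailing
        # slash makes s3cmd treat the remote side as a directory, and the
        # credentials in ~/.s3cfg would be scoped to this prefix only
        s3cmd put /var/data/outgoing/* "s3://$BUCKET/$HOST_ID/"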

  • Slow Windows Explorer on Windows 7

    - by MadBoy
    I have a laptop with an i7 (4 cores), 8GB RAM, and an SSD (OCZ Vertex 3 MaxIOPS) which, in testing I did just now, does 400 MB/s+ read/write. However, the responsiveness of Windows Explorer is far from perfect. Opening Computer or Documents and going into folders is very slow (1-5 seconds). I don't have any viruses or spyware, and I have tried changing folder properties to optimize the view for General Items. I tried disabling the Search Indexer, but it made search in Outlook 2010 crawl and had no other effect. Even double-clicking on a file takes some time to open things up (like clicking a Word document). I don't have any drives mapped, and my computer is not joined to a domain. I have multiple VPN connections that I connect to, but they all have default gateways disabled. I tried using CCleaner and some Windows 7 tweak apps to disable some things. I am a power user using Visual Studio, Tortoise SVN, and other developer/administration apps. Any non-obvious ideas?

    Edit: I've been trying to pinpoint where the issue comes from, and it seems that straight after reboot Windows Explorer opens very fast; when I load 3-4 programs (Royal TS, Visual Studio, Outlook) it's noticeably slower, and the more programs I have open, the worse it gets. After I start closing programs it starts working better, and if I leave 2 open it's fast again. I tried doing some research with DiskMon and other Sysinternals programs but couldn't find anything suspicious. Below are stats during normal usage with a lot of programs open:

    - RAM usage with a lot of programs open and no swap file (I disabled it for testing): 6.95GB
    - CPU usage: 15%, none of the cores takes more than 50% (I have VS 2010 open x 4)

        HD Tune Pro: OCZ-VERTEX3 MI Benchmark
        Test capacity: full
        Read transfer rate
        Transfer Rate Minimum : 363.9 MB/s
        Transfer Rate Maximum : 505.5 MB/s
        Transfer Rate Average :
        Access Time :
        Burst Rate :
        CPU Usage :

        HD Tune Pro: OCZ-VERTEX3 MI File Benchmark
        Drive C: Transfer rate test
        File Size: 500 MB
        Sequential read                    484102 KB/s
        Sequential write                   444714 KB/s
        Random read                          7779 IOPS
        Random write                        16888 IOPS
        Random read (queue depth = 32)      73007 IOPS
        Random write (queue depth = 32)     69790 IOPS

        HD Tune Pro: OCZ-VERTEX3 MI Random Access
        Test capacity: full
        Read test
        Transfer size   operations/sec   avg. access time   max. access time   avg. speed
        512 bytes       3260 IOPS        0.306 ms           2.106 ms             1.592 MB/s
        4 KB            4161 IOPS        0.240 ms           2.006 ms            16.256 MB/s
        64 KB           2382 IOPS        0.419 ms           2.367 ms           148.934 MB/s
        1 MB             449 IOPS        2.225 ms           4.197 ms           449.407 MB/s
        Random           809 IOPS        1.235 ms           6.551 ms           410.527 MB/s

        HD Tune Pro: OCZ-VERTEX3 MI Extra Tests
        Test capacity: full
        Random seek                 3975 IOPS   0.252 ms     1.941 MB/s
        Random seek 4 KB            4245 IOPS   0.236 ms    16.583 MB/s
        Butterfly seek              4086 IOPS   0.245 ms     1.995 MB/s
        Random seek / size 64 KB    3812 IOPS   0.262 ms    58.606 MB/s
        Random seek / size 8 MB      120 IOPS   8.348 ms   485.737 MB/s
        Sequential outer            4524 IOPS   0.221 ms   282.721 MB/s
        Sequential middle           4429 IOPS   0.226 ms   276.818 MB/s
        Sequential inner            5504 IOPS   0.182 ms   344.000 MB/s
        Burst rate                  4472 IOPS   0.224 ms   279.475 MB/s

  • Custom initrd init script: how to create /dev/initctl

    - by Posco Grubb
    I have a virtual machine (the VMM is Xen 3.3) equipped with two IDE HDDs (/dev/hda and /dev/hdb). The root file system is on /dev/hda1, where Scientific Linux 5.4 is installed. /dev/hdb contains an empty ext2 file system. I want to protect the root file system from writes by the VM by using aufs (AnotherUnionFS) to layer a writable file system on top of it. Changes to / will be written to the file system on /dev/hdb. (Furthermore, outside the VM, the file backing /dev/hda will also be set to read-only permissions, so the VMM should prevent the VM from modifying it at that level.)

    (The purpose of this setup: to be able to corrupt a virtual machine using software-implemented fault injection, but preserve the file system image, in order to quickly reboot the VM to a fault-free state.)

    How do I get an initrd init script to do the necessary mounts to create the union file system? I've tried two approaches:

    1. I've tried modifying the nash script that mkinitrd creates, but I don't know what setuproot and switchroot do, or how to make them use my aufs as the new root. Apparently, nobody else here knows either. (EDIT: I take that back.)

    2. I've tried building a LiveCD (using linux-live-6.3.0) and then modifying the Bash /linuxrc script from the generated initrd. I got the mounts correct, but the final /sbin/init complains about /dev/initctl. Specifically, my /linuxrc mounts the aufs at /union. The last few lines of /linuxrc effectively do the following:

        cd /union
        mkdir -p mnt/live
        pivot_root . mnt/live
        exec sbin/chroot . sbin/init </dev/console >/dev/console 2>&1

    When init starts, it outputs something like "init: /dev/initctl: No such file or directory". What is supposed to create this FIFO? I found no such filename in the original linuxrc and liblinuxlive scripts. I tried creating it via "mkfifo /dev/initctl", but then init complained about a timeout opening or writing to the FIFO.

    Would appreciate any help or pointers. Thanks.
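
    For reference, a hedged sketch of the union-mount step such a /linuxrc might perform before the pivot; the branch directories are illustrative, not taken from the post.

        # read-only lower branch from the root disk, writable upper
        # branch from the scratch disk
        mkdir -p /ro /rw /union
        mount -o ro /dev/hda1 /ro
        mount /dev/hdb /rw
        # aufs stacks the writable branch over the read-only one;
        # writes land on /rw, reads fall through to /ro
        mount -t aufs -o br=/rw=rw:/ro=ro none /union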

  • Benchmark MySQL Cluster using flexAsynch: No free node id found for mysqld(API)?

    - by quanta
    I am going to benchmark MySQL Cluster using flexAsynch, following this guide. Details below:

        mkdir /usr/local/mysqlc732/
        cd /usr/local/src/mysql-cluster-gpl-7.3.2
        cmake . -DCMAKE_INSTALL_PREFIX=/usr/local/mysqlc732/ -DWITH_NDB_TEST=ON
        make
        make install

    Everything works fine until this step:

        # /usr/local/mysqlc732/bin/flexAsynch -t 1 -p 80 -l 2 -o 100 -c 100 -n
        FLEXASYNCH - Starting normal mode
        Perform benchmark of insert, update and delete transactions
        1 number of concurrent threads
        80 number of parallel operation per thread
        100 transaction(s) per round
        2 iterations
        Load Factor is 80%
        25 attributes per table
        1 is the number of 32 bit words per attribute
        Tables are with logging
        Transactions are executed with hint provided
        No force send is used, adaptive algorithm used
        Key Errors are disallowed
        Temporary Resource Errors are allowed
        Insufficient Space Errors are disallowed
        Node Recovery Errors are allowed
        Overload Errors are allowed
        Timeout Errors are allowed
        Internal NDB Errors are allowed
        User logic reported Errors are allowed
        Application Errors are disallowed
        Using table name TAB0
        NDBT_ProgramExit: 1 - Failed

    ndb_cluster.log:

        WARNING -- Failed to allocate nodeid for API at 127.0.0.1.
        Returned error: 'No free node id found for mysqld(API).'

    I have also recompiled with -DWITH_DEBUG=1 -DWITH_NDB_DEBUG=1. How can I run flexAsynch in debug mode?

        # /usr/local/mysqlc732/bin/flexAsynch -h
        FLEXASYNCH
        Perform benchmark of insert, update and delete transactions
        Arguments:
          -t               Number of threads to start, default 1
          -p               Number of parallel transactions per thread, default 32
          -o               Number of transactions per loop, default 500
          -l               Number of loops to run, default 1, 0=infinite
          -load_factor     Number Load factor in index in percent (40 -> 99)
          -a               Number of attributes, default 25
          -c               Number of operations per transaction
          -s               Size of each attribute, default 1
                           (PK is always of size 1, independent of this value)
          -simple          Use simple read to read from database
          -dirty           Use dirty read to read from database
          -write           Use writeTuple in insert and update
          -n               Use standard table names
          -no_table_create Don't create tables in db
          -temp            Create table(s) without logging
          -no_hint         Don't give hint on where to execute transaction coordinator
          -adaptive        Use adaptive send algorithm (default)
          -force           Force send when communicating
          -non_adaptive    Send at a 10 millisecond interval
          -local           1 = each thread its own node,
                           2 = round robin on node per parallel trans,
                           3 = random node per parallel trans
          -ndbrecord       Use NDB Record
          -r               Number of extra loops
          -insert          Only run inserts on standard table
          -read            Only run reads on standard table
          -update          Only run updates on standard table
          -delete          Only run deletes on standard table
          -create_table    Only run Create Table of standard table
          -drop_table      Only run Drop Table on standard table
          -warmup_time     Warmup Time before measurement starts
          -execution_time  Execution Time where measurement is done
          -cooldown_time   Cooldown time after measurement completed
          -table           Number of standard table, default 0
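
    The warning usually means the cluster configuration has no spare [api] slot for a new client to claim. A hedged sketch of the kind of config.ini change that addresses it, assuming a management-node layout like the guide's; the section contents are illustrative:

        # config.ini excerpt -- empty [api] sections are unbound slots
        # that any host (here, flexAsynch on localhost) can claim
        [api]
        [api]
        [api]

    After editing, the management node would need a restart (e.g. with --reload) for the new slots to take effect.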

  • How to add an iptables rule with source IP address

    - by ???
    I have a bash script that starts with this:

        if [[ $EUID -ne 0 ]]; then
           echo "Permission denied (are you root?)."
           exit 1
        elif [ $# -ne 1 ]
        then
           echo "Usage: install-nfs-server <client network/CIDR>"
           echo "$ bash install-nfs-server 192.168.1.1/24"
           exit 2
        fi;

    I then try to add the iptables rules for NFS as follows:

        iptables -A INPUT -i eth0 -p tcp -s $1 --dport 111 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 111 -m state --state ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -p udp -s $1 --dport 111 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p udp --sport 111 -m state --state ESTABLISHED -j ACCEPT
        service iptables save
        service iptables restart

    I get the error:

        Try `iptables -h' or 'iptables --help' for more information.
        Bad argument `111'
        Try `iptables -h' or 'iptables --help' for more information.
        Bad argument `111'
        Saving firewall rules to /etc/sysconfig/iptables:          [  OK  ]
        Flushing firewall rules:                                   [  OK  ]
        Setting chains to policy ACCEPT: filter                    [  OK  ]
        Unloading iptables modules:                                [  OK  ]
        Applying iptables firewall rules:                          [  OK  ]
        Loading additional iptables modules: ip_conntrack_netbios_ns  [  OK  ]

    When I open /etc/sysconfig/iptables, these are the rules:

        # Generated by iptables-save v1.3.5 on Mon Mar 26 08:00:42 2012
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [466:54208]
        :RH-Firewall-1-INPUT - [0:0]
        -A INPUT -j RH-Firewall-1-INPUT
        -A FORWARD -j RH-Firewall-1-INPUT
        -A OUTPUT -o eth0 -p tcp -m tcp --sport 111 -m state --state ESTABLISHED -j ACCEPT
        -A OUTPUT -o eth0 -p udp -m udp --sport 111 -m state --state ESTABLISHED -j ACCEPT
        -A OUTPUT -o eth0 -p tcp -m tcp --sport 111 -m state --state ESTABLISHED -j ACCEPT
        -A OUTPUT -o eth0 -p udp -m udp --sport 111 -m state --state ESTABLISHED -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -p udp -m state --state NEW -m udp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
        -A RH-Firewall-1-INPUT -i lo -j ACCEPT
        -A RH-Firewall-1-INPUT -p icmp -m icmp --icmp-type any -j ACCEPT
        -A RH-Firewall-1-INPUT -p esp -j ACCEPT
        -A RH-Firewall-1-INPUT -p ah -j ACCEPT
        -A RH-Firewall-1-INPUT -d 224.0.0.251 -p udp -m udp --dport 5353 -j ACCEPT
        -A RH-Firewall-1-INPUT -p udp -m udp --dport 631 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m tcp --dport 631 -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
        -A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
        COMMIT
        # Completed on Mon Mar 26 08:00:42 2012

        "/etc/sysconfig/iptables" 32L, 1872C

    Note that none of the saved rules carry the -s source address. I've also tried:

        iptables -I RH-Firewall-1-INPUT 1 -m state --state NEW -m tcp -p tcp --source $1 --dport 111 -j ACCEPT
        iptables -I RH-Firewall-1-INPUT 2 -m udp -p udp --source $1 --dport 111 -j ACCEPT
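
    The mangled messages (Bad argument `111') are consistent with stray carriage returns in the script (e.g. a file saved with Windows line endings) or with word-splitting around an unquoted $1 - a hedged diagnosis, not a confirmed one. A quick check and cleanup, assuming the script is named install-nfs-server:

        # reveal hidden carriage returns, if any ("^M" at line ends)
        cat -v install-nfs-server | grep -n '\^M'
        # strip DOS line endings in place
        sed -i 's/\r$//' install-nfs-server
        # and quote the positional parameter wherever it is used:
        # iptables -A INPUT -i eth0 -p tcp -s "$1" --dport 111 ...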

  • USB mouse disconnecting and reconnecting randomly and often

    - by Marc
    Specs: Q6600, EVGA 780i SLI motherboard, 4GB RAM, Logitech MX518, Windows 7 64-bit, EVGA GTX 260, 650W power supply (single rail).

    The problem I am having is that my mouse will disconnect and reconnect (I even hear the sounds from Windows), and the light on the bottom of the mouse turns off and back on as it starts working again. It really sucks to be playing a game (it happens on the desktop as well) and have the mouse just die for a few seconds and come back. Sometimes it will not happen for days, and other times it will do it 2 or more times within 15 seconds.

    I have tried two different wired mice and multiple USB ports (on the front of the computer and the back; I have also used a USB hub, plugged in a card that connects to the USB headers on the motherboard and adds a few USB ports to the back, and even bought a USB 2.0 PCI card - none of it helped). Nothing else seems to reconnect like this: my USB keyboard has never once cut out like the mouse does, and neither have any of the other devices I have connected (webcam, USB hub, various devices connected through USB cables, and the IR receiver for the Windows Media Center remote). I have disconnected all USB devices except for my keyboard and mouse, and the problem still occurs. I guess it could be something wrong with my motherboard, but since no other devices behave like this, I'm hoping it is some kind of driver conflict. Installing Logitech's drivers has had no effect. At first it seemed that going into Device Manager and uninstalling "HID-compliant mouse" (that and "Logitech MX518" are listed) would fix it, but it doesn't seem to work anymore, or at least not every time (it keeps reinstalling). I have googled "usb mouse disconnects and reconnects" and it seems to be fairly common, but none of those cases were resolved.

    To summarize the key points:

    - It happens with or without the drivers installed
    - It has happened with multiple mice on the same computer
    - BIOS is the latest version (P08)
    - Motherboard drivers are the latest version
    - Device Manager lists no problems on any USB devices
    - It happens with every USB port, even add-on USB cards
    - It happens when all USB devices aside from mouse and keyboard are unplugged

    I read that maybe it is an IRQ conflict and tried to look into that, but I did not really know what was going on and didn't see anything obviously wrong. Thanks for any help guys, it's driving me crazy!
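
    One commonly suggested mitigation for intermittent USB drop-outs on Windows 7 - offered here as a hedged guess, not a confirmed fix - is disabling USB selective suspend. From an elevated command prompt (the GUIDs are the commonly documented identifiers for the USB settings subgroup and the selective-suspend setting; verify them against powercfg -query on the machine):

        rem disable USB selective suspend on AC and battery, then re-apply
        powercfg -setacvalueindex SCHEME_CURRENT 2a737441-1930-4402-8d77-b2bebba308a3 48e6b7a6-50f5-4782-a5d4-53bb8f07e226 0
        powercfg -setdcvalueindex SCHEME_CURRENT 2a737441-1930-4402-8d77-b2bebba308a3 48e6b7a6-50f5-4782-a5d4-53bb8f07e226 0
        powercfg -setactive SCHEME_CURRENT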

  • Use launchctl to fire an AppleScript script periodically

    - by Daktari
    I have written an AppleScript that lets me back up a particular file. The script runs fine inside AppleScript Editor: it does what it's supposed to do perfectly. So far so good. Now I'd like to run this script at timed intervals, so I use launchctl and a .plist to make this happen. That's where the trouble starts:

    - the script is loaded at the set interval by launchctl
    - AppleScript Editor (when open) brings its window (with that script) to the foreground, but no code is executed
    - when AppleScript Editor is not running, nothing seems to happen at all

    Any ideas as to why this is not working?

    After editing (as per Daniel Beck's suggestions), my plist now looks like:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>KeepAlive</key>
            <false/>
            <key>Label</key>
            <string>com.opera.autosave</string>
            <key>ProgramArguments</key>
            <array>
                <string>osascript</string>
                <string>/Users/user_name/Library/Scripts/opera_autosave_bak.scpt</string>
            </array>
            <key>StartInterval</key>
            <integer>30</integer>
        </dict>
        </plist>

    and the AppleScript I'm trying to run:

        on appIsRunning(appName)
            tell application "System Events" to (name of processes) contains appName
        end appIsRunning

        --only run this script when Opera is running
        if appIsRunning("Opera") then
            set base_path to "user_name:Library:Preferences:Opera Preferences:sessions:"
            set autosave_file to "test.txt"
            set autosave_file_old to "test_old.txt"
            set autosave_file_older to "test_older.txt"
            set autosave_file_oldest to "test_oldest.txt"
            set autosave_path to base_path & autosave_file
            set autosave_path_old to base_path & autosave_file_old
            set autosave_path_older to base_path & autosave_file_older
            set autosave_path_oldest to base_path & autosave_file_oldest
            set copied_file to "test copy.txt"
            set copied_path to base_path & copied_file
            tell application "Finder"
                duplicate file autosave_path
                delete file autosave_path_oldest
                set name of file autosave_path_older to autosave_file_oldest
                set name of file autosave_path_old to autosave_file_older
                set name of file copied_path to autosave_file_old
            end tell
        end if
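
    A hedged first thing to try: launchd jobs do not necessarily inherit the shell's PATH, so spelling out /usr/bin/osascript in ProgramArguments removes one variable. The plist location below is an assumption (a typical per-user LaunchAgents path), not taken from the post:

        # unload the job, edit ProgramArguments so the first <string> reads
        # /usr/bin/osascript instead of osascript, then load it again
        launchctl unload ~/Library/LaunchAgents/com.opera.autosave.plist
        launchctl load ~/Library/LaunchAgents/com.opera.autosave.plist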

  • Bash Completion Script Help

    - by inxilpro
    So I'm just starting to learn about bash completion scripts, and I started to work on one for a tool I use all the time. First I built the script using a fixed list of options:

        _zf_comp() {
            local cur prev actions
            COMPREPLY=()
            cur="${COMP_WORDS[COMP_CWORD]}"
            prev="${COMP_WORDS[COMP_CWORD-1]}"
            actions="change configure create disable enable show"
            COMPREPLY=($(compgen -W "${actions}" -- ${cur}))
            return 0
        }
        complete -F _zf_comp zf

    This works fine. Next, I decided to dynamically create the list of available actions. I put together the following command:

        zf | grep "Providers and their actions:" -A 100 | grep -P "^\s*\033\[36m\s*zf" | awk '{gsub(/[[:space:]]*/, "", $3); print $3}' | sort | uniq | awk '{sub("\-", "\\-", $1); print $1}' | tr \\n " " | sed 's/^ *\(.*\) *$/\1/'

    which basically does the following:

    1. Grabs all the text in the "zf" command output after "Providers and their actions:"
    2. Grabs all the lines that start with "zf" (I had to do some fancy work here because the ZF command prints in color)
    3. Grabs the second piece of the command and removes any spaces from it (the spaces part is probably not needed any more)
    4. Sorts the list
    5. Gets rid of any duplicates
    6. Escapes dashes (I added this when trying to debug the problem - probably not needed)
    7. Trims all newlines
    8. Trims all leading and trailing spaces

    The above command produces:

        $ zf | grep "Providers and their actions:" -A 100 | grep -P "^\s*\033\[36m\s*zf" | awk '{gsub(/[[:space:]]*/, "", $3); print $3}' | sort | uniq | awk '{sub("\-", "\\-", $1); print $1}' | tr \\n " " | sed 's/^ *\(.*\) *$/\1/'
        change configure create disable enable show
        $

    So it looks to me like it's producing the exact same string as I had in my original script. But when I do:

        _zf_comp() {
            local cur prev actions
            COMPREPLY=()
            cur="${COMP_WORDS[COMP_CWORD]}"
            prev="${COMP_WORDS[COMP_CWORD-1]}"
            actions=`zf | grep "Providers and their actions:" -A 100 | grep -P "^\s*\033\[36m\s*zf" | awk '{gsub(/[[:space:]]*/, "", $3); print $3}' | sort | uniq | awk '{sub("\-", "\\-", $1); print $1}' | tr \\n " " | sed 's/^ *\(.*\) *$/\1/'`
            COMPREPLY=($(compgen -W "${actions}" -- ${cur}))
            return 0
        }
        complete -F _zf_comp zf

    my autocompletion starts acting up. First, it won't autocomplete anything with an "n" in it, and second, when it does autocomplete ("zf create" for example) it won't let me backspace over the completed command. The first issue I'm completely stumped on. The second I'm thinking might have to do with escape characters from the colored text. Any ideas? It's driving me crazy!
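
    A hedged simplification that may sidestep both symptoms: strip the ANSI color codes once, up front, so no escape bytes ever reach COMPREPLY. GNU sed's \x1b escape is assumed, and the awk column number is a guess that may need adjusting to zf's actual output format:

        _zf_comp() {
            local cur actions
            COMPREPLY=()
            cur="${COMP_WORDS[COMP_CWORD]}"
            # delete color escape sequences, then harvest the action word
            # from lines that start with "zf" (column guess: field 2)
            actions=$(zf 2>/dev/null \
                | sed 's/\x1b\[[0-9;]*m//g' \
                | awk '/^[[:space:]]*zf[[:space:]]/ {print $2}' \
                | sort -u)
            COMPREPLY=($(compgen -W "${actions}" -- "${cur}"))
            return 0
        }
        complete -F _zf_comp zf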

  • Ubuntu Lucid: Erratic screen behaviour after boot

    - by fgysin
    In short: about 50% of the time I have a screwed-up monitor setup after reboot; the other 50% of the time it is totally correct.

    Now the longer version: I updated my machine from 9.04 to 10.04 (via 9.10). At first I ran into some monitor problems (I have a 3-monitor setup) because of the known bug in the new X server driver regarding Xinerama. This messes up behaviour if the mouse goes left of or above screen number 0, i.e. I had to make my left-most monitor screen 0. Everything worked out fine in the end: I got my 3-monitor setup back, with Xinerama enabled, giving me one big desktop stretched over 3 screens.

    Now the fun part: every time I start up my machine, only one of the 3 monitors gets a signal and wakes up: the system only recognizes the left-most monitor (screen 0) and crams all the desktop stuff into that one screen. If I go into nvidia-settings I only see one physical device, although all 3 are connected and have power. When I look into xorg.conf I can still see my old setup with 3 devices, 3 screens, Xinerama active, etc., but I was totally unable to get the 3 monitors to work. (I tried unplugging monitors, reconfiguring the whole NVIDIA setup, ...)

    But it gets even better: when I restart my machine (i.e. choose the restart option from the Ubuntu menu), it shuts down and tries to restart. The restart then gets stuck after showing the Ubuntu splash screen with the 'loading bar' (the moving dots thingy), and I am forced to kill the machine by cutting power. But after the power cut, the machine boots up normally, and suddenly I get my 3-monitor setup back, working. That is, until the next time I shut down and start up, when it all starts over again and I only have one monitor... (see above)

    I really have a hard time seeing where the error is. It must be that the restart boot somehow differs from the 'normal' boot. But the fact that it gets stuck and I need to cut power, which then basically triggers a 'normal' boot, does not really support this theory...

    My setup (please tell me if you need further info):

    - 3 monitors as 3 screens as one desktop (with Xinerama)
    - 2 NVIDIA cards, where screens 0 and 1 are on card 0 and screen 2 is on card 1
    - Ubuntu 10.04 Lucid Lynx (updated from 9.10, 9.04, ...)

    I would appreciate every idea on the subject; at the moment I really don't have any clue what to do...

  • Windows 8.1 Insufficient storage available to create shadow copy

    - by Bob.at.SBS
    [Note: After I entered the problem statement, I found this question, which is apparently the same problem. Maybe one of us will get a good answer...]

    I have used the "Windows 7 File Recovery" tool under Windows 8 to create system image backups to an external USB hard drive. I built a new Windows 8.1 machine, and I want to create my first system image backup of that machine to the same USB hard drive. The "Windows 7 File Recovery" tool is gone in Windows 8.1, but wbAdmin is alive and well:

        wbAdmin start backup -backupTarget:\\?\Volume{2a2b...994f} -allCritical -quiet

    fails with this text displayed:

        wbadmin 1.0 - Backup command-line tool
        (C) Copyright 2013 Microsoft Corporation. All rights reserved.
        Retrieving volume information...
        This will back up (EFI System Partition),(C:),Recovery (300.00 MB) to
        \\?\Volume{2a2b1255-3a86-11e3-be86-b8ca3a83994f}.
        The backup operation to F: is starting.
        Creating a shadow copy of the volumes specified for backup...
        Summary of the backup operation:
        The backup operation stopped before completing.
        The backup operation stopped before completing.
        Detailed error: ERROR - A Volume Shadow Copy Service operation error
        has occurred: (0x8004231f)
        Insufficient storage available to create either the shadow copy
        storage file or other shadow copy data.

    For reference:

    - The EFI System Partition is 100 MB
    - The Recovery partition is 300 MB
    - The C: partition is 1.72 TB, NTFS, 218 GB used, 1.51 TB free
    - The destination drive is 1.81 TB, NTFS, 678 GB used, 1.15 TB free

    I've fiddled with vssadmin resize shadowstorage, with no change in the error. vssadmin list shadowstorage displays:

        Shadow Copy Storage association
           For volume: (C:)\\?\Volume{37a0...263}\
           Shadow Copy Storage volume: (C:)\\?\Volume{37a0...263}\
           Used Shadow Copy Storage space: 2.39 GB (0%)
           Allocated Shadow Copy Storage space: 2.81 GB (0%)
           Maximum Shadow Copy Storage space: 531 GB (30%)

        Shadow Copy Storage association
           For volume: (F:)\\?\Volume{2a2...94f}\
           Shadow Copy Storage volume: (F:)\\?\Volume{2a2...94f}\
           Used Shadow Copy Storage space: 334 GB (17%)
           Allocated Shadow Copy Storage space: 337 GB (18%)
           Maximum Shadow Copy Storage space: UNBOUNDED (922154758%)

    (Yeah, the "percent calculation" for UNBOUNDED is seriously bogus.)

    I've run SFC /verifyonly and it seems happy. I've verified that the "Volume Shadow Copy" service starts when I start the backup operation. Any suggestions?
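
    For reference, a hedged example of the resize that is usually tried for 0x8004231f on the destination volume - giving its shadow storage a concrete, larger ceiling instead of UNBOUNDED. The size is illustrative, not a recommendation:

        vssadmin resize shadowstorage /for=F: /on=F: /maxsize=400GB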

  • Hyper-V Machine drifts time all over, even with NTP

    - by MichaelGG
    Resolved: The problem was Hyper-V on that machine. I removed Hyper-V, installed VMware Server, and ran the same VM. The time sync issues went away (< 100 ms difference after a day).

    My setup is like this:

    - HYV1 - Hyper-V machine (non-domain) - sync irrelevant
    - AD1 - VM AD server on HYV1, synced to time.nist.gov. Hyper-V time sync off.
    - S1 - Physical machine, synced to domain.
    - S2 - Physical machine running Hyper-V, synced to domain.
    - V1 - Linux VM on S2, synced to AD1. No Hyper-V integration.

    AD1 and S1 sync fine - the stripchart shows less than 100 ms difference. S2 drifts like crazy. Here's a bit of the stripchart against AD1:

        18:33:22 d:+00.0010138s o:+05.4101899s
        18:33:24 d:+00.0010138s o:+05.4319765s
        18:33:26 d:+00.0000000s o:+05.4788429s
        18:33:28 d:+00.0000000s o:+05.6089942s
        18:33:30 d:+00.0010138s o:+05.7240269s
        18:33:32 d:+00.0000000s o:+06.0421911s
        18:33:34 d:+00.0081104s o:+06.5613708s
        18:33:37 d:+00.0000000s o:+06.9096594s
        18:33:39 d:+00.0000000s o:+06.8867838s
        18:33:41 d:+00.0010127s o:+06.8936401s

    In 20 seconds, it drifted over a second. If I manually reset it to within 1 second, within a few minutes it'll be back to drifting about 2 seconds. Overnight it went from ~2 s to ~5 s. The Linux VM inside S2 has perfect sync with AD1. Here's the config:

        C:\Users\mgg>w32tm /dumpreg /subkey:Parameters
        Value Name              Value Type      Value Data
        ------------------------------------------------------------
        ServiceDll              REG_EXPAND_SZ   %systemroot%\system32\w32time.dll
        ServiceMain             REG_SZ          SvchostEntry_W32Time
        ServiceDllUnloadOnStop  REG_DWORD       1
        Type                    REG_SZ          NT5DS
        NtpServer               REG_SZ          ad01.mydomain ad02.mydomain

        C:\Users\mgg>w32tm /dumpreg /subkey:Config
        Value Name              Value Type      Value Data
        -----------------------------------------------------------
        FrequencyCorrectRate    REG_DWORD       4
        PollAdjustFactor        REG_DWORD       5
        LargePhaseOffset        REG_DWORD       50000000
        SpikeWatchPeriod        REG_DWORD       900
        LocalClockDispersion    REG_DWORD       9
        HoldPeriod              REG_DWORD       5
        PhaseCorrectRate        REG_DWORD       1
        UpdateInterval          REG_DWORD       30000
        EventLogFlags           REG_DWORD       2
        AnnounceFlags           REG_DWORD       5
        TimeJumpAuditOffset     REG_DWORD       28800
        MinPollInterval         REG_DWORD       2
        MaxPollInterval         REG_DWORD       8
        MaxNegPhaseCorrection   REG_DWORD       -1
        MaxPosPhaseCorrection   REG_DWORD       -1
        MaxAllowedPhaseOffset   REG_DWORD       300

    I looked at the event log, and apart from warnings about sync (after it gets way out of sync), there are no other warnings. How can I go about troubleshooting this? It's the only machine having this problem; all the other machines (physical and virtual) are fine.

    Edit: To clarify: the VM (AD1) has integration turned off and syncs to time.nist.gov. AD1 is fine. It's the physical machine S2 that can't stay synced to AD1 and drifts all over. All the other physical servers are able to sync to AD1 just fine.

    Update: So, it appears to be an issue with running the VM. The clock slips slowly with the VM off; turned on, it immediately starts losing seconds. I set the VM to only use half the resources, and that seems to have slightly mitigated it, for now. Thanks!

  • 403 Forbidden Error when trying to view localhost on Apache

    - by misbehavens
    I think my Apache must be all screwed up. I don't know if it ever worked. I just upgraded to Snow Leopard, and the first step in this tutorial is to start Apache and check that it's working by opening http://localhost. It starts fine, but when I go to localhost I get a 403 Forbidden error. I don't know where to start figuring out how to fix it, so I wonder if a fresh install of Apache would do the trick. What do you think?

    Update: I found some error logs in /private/var/log/apache2/. Found this in one of them. Not sure what it means:

        [Tue Nov 10 17:53:08 2009] [notice] caught SIGTERM, shutting down
        [Tue Nov 10 21:49:17 2009] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
        Warning: DocumentRoot [/usr/docs/dummy-host.example.com] does not exist
        Warning: DocumentRoot [/usr/docs/dummy-host2.example.com] does not exist
        httpd: Could not reliably determine the server's fully qualified domain name, using Andrews-Mac-Pro.local for ServerName
        mod_bonjour: Skipping user 'andrew' - cannot read index file '/Users/andrew/Sites/index.html'.
        [Tue Nov 10 21:49:19 2009] [notice] Digest: generating secret for digest authentication ...
        [Tue Nov 10 21:49:19 2009] [notice] Digest: done
        [Tue Nov 10 21:49:19 2009] [notice] Apache/2.2.11 (Unix) mod_ssl/2.2.11 OpenSSL/0.9.8k DAV/2 PHP/5.3.0 configured -- resuming normal operations

    Update: I also found something in the dummy-host.example.com-error_log file. I didn't set up these dummy-host things, by the way. Is this the default configuration?

        [Tue Nov 10 21:59:57 2009] [error] [client ::1] client denied by server configuration: /usr/docs

    Update: Woohoo! I found the file with the virtual host definitions: /etc/apache2/extra/httpd-vhosts.conf. It had those two dummy virtual host settings in it. I added a localhost virtual host - not sure if this is necessary, but since it wasn't working before, I did it anyway. After removing the old virtual hosts, adding my new localhost virtual host, and restarting Apache, it seems to work. So I guess whenever I want to add a virtual host, I only need to add it to this file? Or is there a hosts file somewhere, like there is on Linux?

    Update: Yes, there is an /etc/hosts file that needs to be changed too. Add the virtual host name to that file.
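
    For reference, a hedged sketch of what such a localhost vhost entry might look like (the DocumentRoot assumes Snow Leopard's stock web root; adjust to taste), followed by a syntax check and restart:

        # appended to /etc/apache2/extra/httpd-vhosts.conf (illustrative)
        <VirtualHost *:80>
            ServerName localhost
            DocumentRoot "/Library/WebServer/Documents"
        </VirtualHost>

        # then verify the configuration and restart Apache
        sudo apachectl configtest
        sudo apachectl restart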

  • Has the hardware in my modem gone bad?

    - by Tyler Scott
    I contacted CenturyLink about my modem recently and received useless and unrelated information. The problem seems to be that the modem will no longer save settings, the web interface is unusable except in Internet Explorer for some reason, and the modem keeps resetting. CenturyLink claimed it had to do with signal strength, but I checked and it is currently between "good" and "outstanding" according to this. All of the lights remain green even when it starts acting up and I lose internet, and shortly before it crashes and reboots. Does anyone have any idea what is going on or what I can do to fix it? (Asking CenturyLink again is obviously not going to help.)

    Update 1: Accessing the syslog from the web interface causes a crash. After it reboots, the log looks as follows:

        01/01/1970 12:01:29 AM  Ethernet      Ethernet client connected, ip(192.168.0.2), mac(1c:6f:65:4c:6d:3b)
        01/01/1970 12:01:38 AM  Wireless      802.11 client connected, ip(192.168.0.18), mac(d0:df:c7:c2:73:ca)
        01/01/1970 12:01:41 AM  System Event  Line 0: VDSL2 link up, Bearer 0, us=20128, ds=40127
        01/01/1970 12:01:43 AM  dhcp6s[2028]  dhcp6_ctl_authinit: failed to open /etc/dhcp6sctlkey: No such file or directory
        01/01/1970 12:01:50 AM  dhcp6s[2469]  dhcp6_ctl_authinit: failed to open /etc/dhcp6sctlkey: No such file or directory
        01/01/1970 12:01:52 AM  radvd[2306]   poll error: Interrupted system call
        01/01/1970 12:01:56 AM  PPP Link      PPP server detected.
        01/01/1970 12:01:56 AM  PPP Link      PPP session established.
        01/01/1970 12:01:56 AM  PPP Link      PPP LCP UP.
        01/01/1970 12:01:56 AM  System Event  Received valid IP address from server. Connection UP.
        06/05/2014 08:16:01 AM  radvd[2511]   poll error: Interrupted system call
        06/05/2014 08:16:03 AM  System Event  Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:03 AM  System Event  Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:03 AM  System Event  Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:03 AM  System Event  Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:03 AM  System Event  Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:03 AM  System Event  Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:03 AM  System Event  Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:03 AM  System Event  Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:03 AM  System Event  Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:04 AM  System Event  Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:04 AM  dhcp6s[3236]  dhcp6_ctl_authinit: failed to open /etc/dhcp6sctlkey: No such file or directory
        06/05/2014 08:16:08 AM  Wireless      802.11 client connected, ip(192.168.0.7), mac(44:6d:57:c4:d7:08)

    I can also get it to crash on various other pages. I am guessing the web server is unstable.

  • Kickstart based installation of RHEL 6.4 from USB

    - by Peter
    I want to set up some brand-new servers with RHEL 6.4. The servers do not have a DVD drive, so I have to use USB for the installation. I already have a custom ISO with a kickstart file that I use flawlessly on servers with a DVD drive. I used iso2usb to move the ISO to my USB stick. When I boot from the USB stick, the ks file is found and anaconda starts, but then it stops with the following error:

        The installation source given by device ['sda1'] could not be found.
        Please check your parameters and try again.

    Notes:

    - The USB stick IS sda.
    - My custom ISO file is renamed to linux.iso by iso2usb and is present in the root directory of the USB stick.
    - The kickstart file has the following entry:

        harddrive --partition=sda1 --dir=/

    Please help me automate the installation with kickstart.

    Edit 1: This is the anaconda.log file:

        09:01:57,029 INFO    : no /etc/zfcp.conf; not configuring zfcp
        09:01:57,259 INFO    : created new libuser.conf at /tmp/libuser.4rAbps with instPath="/mnt/sysimage"
        09:01:57,259 INFO    : anaconda called with cmdline = ['/usr/bin/anaconda', '--stage2', 'hd:sda1:///images/install.img', '--dlabel', '--kickstart', '/tmp/ks.cfg', '--graphical', '--selinux', '--lang', 'en_US.UTF-8', '--keymap', 'us', '--repo', 'hd:sda1:/']
        09:01:57,260 INFO    : Display mode = g
        09:01:57,260 INFO    : Default encoding = utf-8
        09:01:59,444 DEBUG   : X server has signalled a successful start.
        09:01:59,446 INFO    : Starting window manager, pid 1345.
        09:01:59,537 INFO    : Starting graphical installation.
        09:01:59,741 INFO    : Detected 7968M of memory
        09:01:59,741 INFO    : Swap attempt of 7968M
        09:02:00,840 INFO    : ISCSID is /usr/sbin/iscsid
        09:02:00,840 INFO    : no initiator set

    Edit 2: This is the part of the anaconda log that indicates it found the USB stick and mounted it for stage2:

        09:01:47,918 INFO    : starting STEP_STAGE2
        09:01:47,918 INFO    : partition is sda1, dir is //images/install.img
        09:01:47,918 INFO    : mounting device sda1 for hard drive install
        09:01:48,005 INFO    : Path to stage2 image is /mnt/isodir///images/install.img
        09:01:54,214 INFO    : mounted loopback device /mnt/runtime on /dev/loop0 as /tmp/install.img
        09:01:54,214 INFO    : Looking for updates for HD in /mnt/isodir///images/updates.img
        09:01:54,214 INFO    : Looking for product for HD in /mnt/isodir///images/product.img
        09:01:54,227 INFO    : got stage2 at url hd:sda1:///images/install.img
        09:01:54,254 INFO    : Loading SELinux policy
        09:01:54,700 INFO    : getting ready to spawn shell now
        09:01:54,975 INFO    : Running anaconda script /usr/bin/anaconda
        09:01:56,882 INFO    : _Fedora is the highest priority installclass, using it
        09:01:56,921 INFO    : Running kickstart %%pre script(s)
        09:01:56,922 WARNING : '/bin/sh' specified as full path
        09:01:56,926 INFO    : All kickstart %%pre script(s) have been run

  • Windows 7 file-based backup service

    - by Ben Voigt
    I'm looking for a good replacement for Lazy Mirror, since it doesn't support Windows 7 well.

    Pros:

    - One of the things I really loved about Lazy Mirror is that it always maintains a "full" backup, but does so by only copying modified files. As each file was copied, the old version got archived (moved to an out-of-the-way location). So after mirroring ran, there'd be a complete copy of the file system, which could even be booted if necessary. At the same time, extra space on the backup media was used to store as many older versions of files as possible, without wasting space storing multiple copies of the same version. It seems that with Windows 7 backup, there'd be wasted space storing the same data in both the system image and the file backup.
    - It was completely file-based, but also aware of the registry (it had a feature to dump the live registry to hive files in the correct format).
    - The backups were normal NTFS filesystems; no special tool was needed to read them.
    - It automatically cleaned out the oldest previous versions when space ran out (unlike Windows 7 backup, which apparently simply starts failing when the backup media fills).
    - It copied all file attributes, including security.

    Cons:

    - It doesn't deal well with junction points, symbolic links, and hard links.
    - It didn't run as a service without lots of help from firesrv or srvany, and then you couldn't interact with the GUI. Running as a service was necessary to be able to mirror protected OS files.
    - It didn't have open-file handling, except for registry hives.
    - I guess that the file-by-file archive and replacement could leave mismatched sets of files if the mirror was interrupted. This would be the advantage of incremental backup techniques that require the old full backup plus all intermediate incremental backups to restore. But I don't see this as much of a problem: you'd really only have a boot failure if you had a mixture of pre- and post-service-pack files, and I can run a full image backup using another tool before applying a service pack.

    Does anyone know of a tool that does both full-system backup and storage of old versions of files like Lazy Mirror did (without storing the same data multiple times), and can also run as a service in Windows 7? Free is best of course, but a reasonably priced paid program would be fine too. It would be absolutely awesome if it also triggered a backup/mirror pass when a particular external drive was plugged in, and generated popup warnings if backups hadn't been run recently.

  • Workaround with paths in vista

    - by Argiropoulos Stavros
    Not really a programming question, but I think it's quite related. I have an .exe running on my Windows XP PC. This exe needs a file in the same directory in order to run, and it has no problem finding it on XP, BUT on Vista (I tried this on several machines, and it works on some of them) it fails to run. I'm guessing there is a problem finding the path. (The program is written in BASIC - yes, I know...) I attach the code below. Can you think of any workarounds? Thank you.

    The exe is located in c:\tools. The program runs in the Windows console (it starts, but during execution it cannot find a custom file of type .TOP made by the creator of the program). Note that some of the string literals below are Greek text that did not survive the encoding; they are left as-is.

        ' PROGRAMM TOP11.BAS
        DEFDBL A-Z
        CLS
        LOCATE 1, 1
        COLOR 14, 1
        FOR i = 1 TO 80
        PRINT "±";
        NEXT i
        LOCATE 1, 35: PRINT "?? TOP11 ??"
        PRINT " €€‚—‚„‘ ’— ‹„’†‘„— ‘’† „”€„€ ’†‘ ‡€€‘‘†‘ ‰€ † „.‚.‘.€. "
        COLOR 7, 0
        PRINT "-------------------------------------------------------------------------------"
        PRINT
        INPUT "ƒ?©« «¦¤ ©¬¤«?©«? ¤???... : ", Factor#
        INPUT "¤¦£ ¨®e¦¬ [.TOP] : ", topfile$
        VIEW PRINT 7 TO 25
        file1$ = topfile$ + ".TOP"
        file2$ = topfile$ + ".T_P"
        file3$ = "Syntel"
        OPEN file3$ FOR OUTPUT AS #3
        PRINT #3, " ‘¬¤«?©«?? ¤??? = " + STR$(Factor#) + " †‹„‹†€: " + DATE$
        CLOSE #3
        command1$ = "copy" + " " + file1$ + " " + file2$
        SHELL command1$
        '’¦ ¨®e¦ .TOP ¤« ¨a­«  £ «¤ ?«a?¥ .T_P
        OPEN file2$ FOR INPUT AS #1
        OPEN file1$ FOR OUTPUT AS #2
        bb$ = " \\\ \      ,  ###.####  ###.####  ####.###  ##.### "
        DO
            LINE INPUT #1, Line$
            Line$ = RTRIM$(LTRIM$(Line$))
            icode$ = LEFT$(Line$, 1)
            IF icode$ = "1" THEN
                Line$ = " " + Line$
                PRINT #2, Line$
                PRINT Line$
            ELSEIF icode$ = "2" THEN
                Line$ = " " + Line$
                PRINT #2, Line$
                PRINT Line$
            ELSEIF icode$ = "3" THEN
                Number$ = MID$(Line$, 3, 6)
                Hangle = VAL(MID$(Line$, 14, 9))
                Zangle = VAL(MID$(Line$, 25, 9))
                Distance = VAL(MID$(Line$, 36, 9))
                Distance = Distance * Factor#
                Height = VAL(MID$(Line$, 48, 6))
                PRINT #2, USING bb$; icode$; Number$; Hangle; Zangle; Distance; Height
                PRINT USING bb$; icode$; Number$; Hangle; Zangle; Distance; Height
            ELSE
            END IF
        LOOP UNTIL EOF(1)
        VIEW PRINT
        CLS
        LOCATE 1, 1
        PRINT " *** ’„‘ ’“ ‚€‹‹€’‘ *** "
        END
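
    Since the program opens its data files with bare names, it resolves them against the current working directory, which on Vista may differ from the executable's folder (for example when launched via a shortcut). A hedged workaround that needs no changes to the BASIC source: a small wrapper batch file, placed next to the exe, that forces the working directory before launching. The file name is hypothetical.

        @echo off
        rem TOP11.BAT - wrapper placed in c:\tools next to TOP11.EXE
        rem %~dp0 expands to the drive and path of this batch file,
        rem so the program always starts with its own folder as the
        rem working directory
        cd /d %~dp0
        TOP11.EXE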

  • iCloud stuff stops working while connected to OpenVPN [closed]

    - by Taco Bob
    I have a fairly simple OpenVPN setup on an OpenVZ VPS with Ubuntu 11.10. The client is the Viscosity client on Mac OS X 10.8.2, and after some testing, we can rule out the client as part of the problem.

    Everything has been working fine except for Apple's iCloud stuff. Web surfing, email, FTP, NNTP, and Skype all work as expected. It's ONLY the iCloud services that cease to function. If I connect to the VPN, iCloud stuff stops working: I no longer get anything in Messages, Calendar items don't get updated, and Notifications stop working. If I disconnect, the iCloud stuff all starts working again. Connect again, iCloud stops working.

    Here's the server.conf:

        status openvpn-status.log
        log /var/log/openvpn.log
        verb 4
        port 1194
        proto udp
        dev tun
        ca /etc/openvpn/ca.crt
        cert /etc/openvpn/server.crt
        key /etc/openvpn/server.key
        dh /etc/openvpn/dh1024.pem
        server 10.9.8.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        push "redirect-gateway def1"
        push "dhcp-option DNS 10.9.8.1"
        keepalive 10 120
        duplicate-cn
        cipher BF-CBC
        comp-lzo
        user nobody
        group nogroup
        persist-key
        persist-tun
        tun-mtu 1500
        mssfix 1400

    I'm using iptables in a script, and it's also fairly simplistic:

        iptables -F
        iptables -t nat -F
        iptables -t mangle -F
        iptables -A FORWARD -i tun0 -o venet0 -j ACCEPT
        iptables -A FORWARD -i venet0 -o tun0 -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        iptables -A INPUT -p tcp --dport 1194 -j ACCEPT
        iptables -A INPUT -p udp --dport 1194 -j ACCEPT
        iptables -t nat -A POSTROUTING -s 10.9.8.0/24 -j SNAT --to-source <server's public ip>
        echo 1 > /proc/sys/net/ipv4/ip_forward

    I tried forwarding ports as well, with no success:

        iptables -A FORWARD -p tcp -d 10.9.8.0/24 --dport 5222:5230 -j ACCEPT
        iptables -t nat -A PREROUTING -p tcp --dport 5222:5230 -j DNAT --to-destination 10.9.8.6

    I am also sometimes behind a double-NAT situation that I have no control over: client -> work VPN -> my OpenVPN box -> internet, or client -> AirPort Express -> ISP (which is doing NAT) -> my OpenVPN box -> internet. Those two situations are just a fact of life where I am, and I cannot change them. I do have full control over my client and the OpenVPN server.

    I am completely out of ideas. I have posted a similar query on the OpenVPN forums, but it hasn't appeared yet and seems to still be sitting in their moderation queue. I tried the freenode IRC channels, but nobody was awake, so here I am. I have googled extensively for this and can find nothing related. Help me get iCloud stuff working again!
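
    Apple's push services ride on TCP 5223 out to Apple's 17.0.0.0/8 network, so a hedged way to narrow this down is to watch whether those connections ever traverse the VPS at all; the interface names match the post's ruleset:

        # watch for push-service traffic arriving from the tunnel
        tcpdump -ni tun0 'tcp port 5223'
        # and confirm it leaves, NATed, on the public interface
        tcpdump -ni venet0 'tcp port 5223'

    If SYNs arrive on tun0 but replies never come back, lowering mssfix (e.g. to 1300) in server.conf is a cheap experiment to rule out fragmentation over the double-NAT paths.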

  • CPU Lockup when loading folder of bookmarks in Firefox

    - by Gary M. Mugford
    I am running Firefox 3.6 on WinXP SP3, on a dual-core machine with 4GB of memory. I am also running Avast! free antivirus and the free ZoneAlarm firewall, both latest versions.

    Within the last month, my service provider basically forced me to upgrade to a DOCSIS 3.0-compliant modem (they offered me a deal I couldn't turn down). At that point, I also upgraded to FF 3.6. Basically, I am not unhappy with many aspects of this switchover, EXCEPT: when I now load a folder of bookmarks (anywhere from 10 to 38), I get nothing like the load times I experienced a couple of months back. It's taking minutes rather than seconds, and the first bookmark in one of the folders, GMail, rarely loads before timing out.

    I have used the old trick of powering off the cable modem before my day's work; this used to fix 'load-lag' in the old days. I have switched from my ISP's DNS server to Google, to OpenDNS, and back. Nothing seems to work currently. It's not my DNS cache: that's been flushed, and secondary computers also have the same issue when loading folders.

    I have watched the CPU usage: loading a folder will send VSMon (ZoneAlarm) usage over 40 percent and AvastSvc (Avast!) over 30, and Firefox will then push the needle to 100. There's a brief burst by SVCHost when the others falter in devouring cycles. Then everything subsides to single digits once the last tab is loaded. The only other nominal nastiness is VSMon ALWAYS hitting 50 percent when ANY program starts downloading content from the internet. If I shut down ZoneAlarm (and VSMon with it), the same slow loading takes place, but this time System runs at 50 percent plus, again driving the usage to 100 percent.

    I have my doubts that FF 3.6 vs. FF 3.5 is the issue, since the other computers are still running 3.5 and suffer the same problem. Those computers are on, but inactive, most of the time, being backups. Obviously, when the CPU hits 100, I can't do much of anything in Firefox OR in other programs. Video playback through WMPC or VLC is extremely choppy, although it doesn't seem to affect the audio.

    Any ideas what I can try next? Thanks, GM

  • Windows cannot find the host name "download.microsoft.com" using DNS

    - by joedotnot
    When trying to download a file found on the Microsoft downloads center that starts with, for example, http://download.microsoft.com/download/6/8/7/(some_GUID)/(some_file_name.ext), I get a timeout with "Internet Explorer cannot display the webpage". More information says:

        Internet connectivity has been lost.
        The website is temporarily unavailable.
        The Domain Name Server (DNS) is not reachable.
        The Domain Name Server (DNS) does not have a listing for the website's domain.
        If this is an HTTPS (secure) address, click Tools, click Internet Options,
        click Advanced, and check to be sure the SSL and TLS protocols are enabled
        under the security section.

    Diagnose Connection Problems says:

        Windows cannot find the host name "download.microsoft.com" using DNS

    Bear with me while I expand on the problem. It all started when I tried to download Windows XP Mode for my Windows 7 machine. I went to the Virtual PC site, then through the motions of Windows Genuine Advantage, which validated OK, but when it redirects to grab the file, it just times out with the above error. (NB: I also tried with the latest Chrome and Firefox, but to no avail due to the Genuine Advantage stuff, so I decided to stick with IE.)

    I am behind an ADSL2+ modem router, connecting via wireless (Win 7 Pro laptop), so I hopped over to the desktop connected via Ethernet (Vista Business) - same result. I began to think the download.microsoft.com site was down. So I gave it a break and read up on EDNS, flushing the cache, the hosts file, etc. I tried again an hour later on the Win 7 machine; still no go. So I turned off the Win 7 (software) firewall, and lo and behold, I could connect and grab any files from download.microsoft.com. (Nice - so we have a Micro$0ft firewall preventing access to a Micro$0ft website; no wonder my auto-updates kept failing, but that's another story.)

    But I am still not happy that the desktop connected via Ethernet cannot get to download.microsoft.com, even though I turned off all firewalls, defenders, antivirus, etc. What is so special / specific about the URL download.microsoft.com? Any other site is OK, including www.microsoft.com. Any networking guru know what's REALLY going on, and how can I get the desktop to connect?

        ping download.microsoft.com
        Ping request could not find host download.microsoft.com.
        Please check the name and try again.

    Pinging google.com or even www.microsoft.com works and gives me an IP address. NB: On the wireless laptop, ping download.microsoft.com works; I get xxxx.ms.akamai.net [202.7.177.33].
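
    A hedged way to separate a resolver problem from a local one on the desktop: query a specific public DNS server directly and compare with the machine's default, then clear the local cache. (8.8.8.8 is just one well-known resolver; any reachable DNS server works for the comparison.)

        rem resolve via the configured DNS server
        nslookup download.microsoft.com
        rem resolve via an explicit external resolver for comparison
        nslookup download.microsoft.com 8.8.8.8
        rem flush the local DNS cache before retesting
        ipconfig /flushdns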

  • SQL 2008 R2 Named Instance Client Connectivity Issues?

    - by Jerry Dodge
    We're upgrading our software from SQL 2000 to 2008 R2. Our customers will be installing an update which uninstalls 2000 and installs 2008 R2 under the same instance. If no instance existed, no instance name will be set (default). The problem starts with the customers who have a named SQL instance.

    Starting in 2008 R2 (not sure about earlier versions), for some reason a client connecting to the server by its instance name is unsuccessful. I'm testing from Management Studio - if this can't connect, nothing can. I browse network servers and find the specific server\instance in the list. But upon trying to connect to an instance name like MyServer\INST, I get:

        A network-related or instance-specific error occurred while establishing
        a connection to SQL Server. The server was not found or was not accessible.
        Verify that the instance name is correct and that SQL Server is configured
        to allow remote connections. (provider: SQL Network Interfaces, error: 26 -
        Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)

    I do in fact have the TCP/IP and Named Pipes protocols enabled; this was the first thing I did. When I connect to the server using a comma (,) and a port number, like MyServer,49195, it works just fine. So it appears that client computers are simply unable to resolve the instance name. This has happened on all our installations of SQL 2008 R2 and from all client computers, including Win 7, XP, Vista, Server 2008, and Server 2003. We never experienced such issues on earlier versions of SQL. The problem even persists if the firewalls and antiviruses are all disabled.

    Now, this is a large update which we will soon distribute to all our customers, and we want to minimize the interaction they need with us to get it installed. We absolutely hate the idea of using a port number, because it will always be different, and we would have to modify each client to point to this server/port. Some of our customers may have hundreds of client computers. How do I make client connections to a named SQL instance work again? After all, this is the whole purpose of named instances - if a client can't connect to an instance by its name, then what is it even named for?

    EDIT: It was suggested to make sure SQL Browser is running, so I checked, and it is running. The server is also able to connect to itself (locally); only external connections are refused.

    UPDATE: After more careful checking, I learned the firewall wasn't completely disabled during testing; upon disabling it completely, everything works. So it appears that SQL Browser was being blocked by the firewall from external clients.
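
    Given the update - the firewall was blocking SQL Browser - a hedged example of the usual exception, since instance-name resolution happens over UDP 1434 (the rule name is illustrative; on older systems without advfirewall, the equivalent netsh firewall syntax applies):

        netsh advfirewall firewall add rule name="SQL Server Browser (UDP 1434)" dir=in action=allow protocol=UDP localport=1434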

  • Adobe Reader not loading form content

    - by wullxz
    We have an FDL file which is used to offer an online application facility. The FDL is filled out and sent to a mailbox. When I open the received file, Adobe Reader starts, loads the document in Internet Explorer (I had to change my default browser because it doesn't work in Chrome - the customer uses IE as default), and displays a warning that Adobe Reader has blocked the connection to the server where the initial document is saved. I can then click on "Trust this document once" (translated by me!) or "Add this host to trusted hosts" (also translated by me!). The second option doesn't work at all. The first option works but is a little bit annoying.

    I looked into Adobe Reader's options (Edit -> Preferences ("Voreinstellungen" in German) -> the last option, Security (Enhanced)) and found the possibility to add hosts, files, and directories, or to let Adobe Reader use the "Trusted Sites" list from the Internet Options. When I add the website either to Trusted Sites or to the trusted list in Adobe Reader's options, the warning no longer pops up, but the content of the input boxes prefilled by the applicant doesn't show up on Windows 7 - though it does show up on Windows XP. The screenshot referenced here shows the settings window described in this paragraph; the big input box at the bottom normally holds the trusted files/directories/hosts list.

    System information: Windows 7 Enterprise x64, Adobe Reader X, multiple IE versions (mine is the latest, but there is also IE 7 or 8).

    How do I get Adobe Reader to load the content of the form? This behaviour can be reproduced on a PC: when opening an FDF from a command line, the form fields are blank even though there is data in the FDF and the PDF is located in a manually entered trusted folder.

    Steps to reproduce:

    1. Clean-install a Windows 7 PC (or use a virtual box)
    2. Map a network drive to a shared folder with a subfolder, e.g. c:\test\docs becomes m:\docs
    3. Set security permissions to allow full control to everyone
    4. Add an FDF and a matching PDF file in the subfolder
    5. Manually add m:\docs to each of the trusted folders in the Trust Manager registry settings
    6. Ensure that Enhanced Security is on
    7. Run a command line to open the FDF file

    Expected result: the PDF is opened in Adobe Reader with the form fields filled out with data.

    Actual results:

    - The PDF is opened with blank fields
    - A 'yellow bar' appears, asking to add the document to trusted locations

    It appears that Adobe Reader XI is ignoring the privileged-locations entries in the registry. Adding the document via the 'yellow bar' adds the individual document, within the same folder, to the privileged locations, but it means the process has to be repeated for every document that needs to be opened from that folder.

    Read the article

  • Legacy VB6 application under Win7 SQL error

    - by Shial
    We have a rather unfortunate legacy application at work. Written originally in VB6, it predates anybody in our IT department by at least 5 years. We have a contracted developer for ongoing maintenance, and where he can he rewrites sections into .NET code (I'm not sure about his techniques here; this is a side job alongside his regular work as an IBM engineer). The application works fine (such as it is) under Windows XP. We have only a couple of Windows 7 machines, mainly for testing, and there the application seems to run into a wall: things like the background not loading and SQL errors, even when running as administrator.

    Running an SQL trace from the ODBC control panel shows several interesting things. The application makes a connection to the database successfully at first, where it runs a query to determine whether it is running the correct version. This query works fine:

        558-1af0 ENTER SQLExecDirectW
            HSTMT    0x020D7548
            WCHAR *  0x04C8F0F0 [ 115] "SELECT count(*) c FROM tblSoftwareVersion WHERE fldSoftwareVersion = '123456' AND fldSoftwareName = 'Application.VB'"
            SDWORD   115

        BMS 558-1af0 EXIT SQLExecDirectW with return code 1 (SQL_SUCCESS_WITH_INFO)
            HSTMT    0x020D7548
            WCHAR *  0x04C8F0F0 [ 115] "SELECT count(*) c FROM tblSoftwareVersion WHERE fldSoftwareVersion = '123456' AND fldSoftwareName = 'Application.VB'"
            SDWORD   115

    It then seems to drop its connection and can't find the ODBC connection, despite connecting to the same DB. From the trace it looks like it configures the connection, then starts firing off SQLFreeStmt calls to unbind and close out, and then, when the application tries to do its thing, there is no connection:

        558-1af0 ENTER SQLFreeStmt
            HSTMT    0x020D7548
            UWORD    2 <SQL_UNBIND>

        BMS 558-1af0 EXIT SQLFreeStmt with return code 0 (SQL_SUCCESS)
            HSTMT    0x020D7548
            UWORD    2 <SQL_UNBIND>

    Then this happens when I try to do something that pulls data:

        558-1af0 ENTER SQLDriverConnectW
            HDBC     0x020DDA00
            HWND     0x00000000
            WCHAR *  0x73EF8634 [ -3] "******\ 0"
            SWORD    -3
            WCHAR *  0x73EF8634
            SWORD    -3
            SWORD *  0x00000000
            UWORD    0 <SQL_DRIVER_NOPROMPT>

        BMS 558-1af0 EXIT SQLDriverConnectW with return code -1 (SQL_ERROR)
            HDBC     0x020DDA00
            HWND     0x00000000
            WCHAR *  0x73EF8634 [ -3] "******\ 0"
            SWORD    -3
            WCHAR *  0x73EF8634
            SWORD    -3
            SWORD *  0x00000000
            UWORD    0 <SQL_DRIVER_NOPROMPT>
            DIAG [IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified (0)

    Nearly all of my searching on this issue turns up programming problems where the connection string is wrong. The only thing different in this scenario, though, is Windows 7; I know the connection string is fine since it works on the XP machines. The VB components are supposed to still be functional under Win7. My computer runs 32-bit Win7 and my VP's runs 64-bit Win7, and both have the same problem, so that can be ruled out. I have already tried reinstalling the SQL Native Client and the VB runtime as well as the application in question. Hopefully I can find a solution and not have to resort to using the XP VM.
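    The IM002 "data source name not found" diagnostic is worth chasing on the 64-bit machine in particular: a 32-bit VB6 process only sees 32-bit DSNs, and the ODBC Administrator that opens from the Control Panel on 64-bit Windows manages the 64-bit ones. A quick diagnostic sketch, assuming a system DSN (user DSNs live under HKCU instead):

        rem 64-bit system DSNs (what the default ODBC Administrator manages):
        reg query "HKLM\SOFTWARE\ODBC\ODBC.INI\ODBC Data Sources"

        rem 32-bit system DSNs, the only ones a 32-bit VB6 app can see on x64:
        reg query "HKLM\SOFTWARE\Wow6432Node\ODBC\ODBC.INI\ODBC Data Sources"

        rem 32-bit ODBC Administrator, for (re)creating the DSN where the app looks:
        %SystemRoot%\SysWOW64\odbcad32.exe

    If the DSN shows up in the first query but not the second, recreating it through the SysWOW64 administrator should clear the IM002 error on that machine; the 32-bit Win7 box failing the same way would then point at a separate issue, such as the driver named in the DSN not being installed there.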

    Read the article

  • Web Service gets unavailable after several concurrent calls

    - by Roman
    We are testing the GoDaddy Virtual Data Center and have run into a very strange issue where our web site becomes unavailable. GoDaddy support keeps saying the issue is in our web server settings, but looking at the results of our tests, I doubt it.

    TEST ENVIRONMENT
    Virtual data center with Windows, hosted at GoDaddy.com. All servers run Windows Server 2008 R2 Datacenter with IIS 7. Server One has IP address 10.1.0.4 and Server Two has IP address 10.1.0.3; both servers are on a private network not visible from outside. A port forward with IP address 50.62.13.174 is assigned to Server One.

    TEST DESCRIPTION
    JMeter is used as the client app to simulate 30 concurrent users sending 100 SOAP requests each, with an interval of 1 second between requests. HTTP link used for testing: http://50.62.13.174/v2/webservices.asmx

    TEST ONE
    The test is run from a computer in our office. Almost immediately after JMeter starts, the link above becomes unavailable in a browser. After the test completes, the link remains unavailable in a browser for about 5 more minutes. Remote Desktop keeps working, so we can still connect to Server One remotely. About 5 minutes after test completion, the link becomes available in a browser again.

    TEST TWO
    The test is run from Server Two (which is part of our virtual data center). It works very well, with no visible delays in processing, and the link stays available in a browser the whole time.

    TEST THREE
    The test is run from Server One itself using localhost. The result is the same as in TEST TWO: no issues.

    TEST FOUR
    We repeated TEST ONE from other computers we have in different countries, all with the same result as TEST ONE.

    CONCLUSION
    Since the test works well from Server Two but does not work from outside our virtual data center, we suspect the network or its capacity: the behaviour looks as if requests from outside get stuck somewhere before reaching our virtual data center. Has anybody had similar issues in the past? Is there a chance that something is wrong with our server settings?
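    For anyone reproducing this from multiple vantage points, the same plan can be driven headless from the command line, which makes it easy to run identical tests from the office, from Server Two, and from the remote machines; soap-test.jmx and results.jtl below are placeholder names for the test plan and output file:

        rem Non-GUI JMeter run; the thread group (30 users x 100 requests,
        rem 1 s pacing) is defined inside the placeholder plan file
        jmeter -n -t soap-test.jmx -l results.jtl

    Running it headless also removes the JMeter GUI as a variable when comparing results between the inside and outside test locations.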

    Read the article

  • Too many TIME_WAIT state connections!

    - by Hamza
    I've been reading about this everywhere all day, and from what I've gathered, TIME_WAIT is a relatively harmless state; it's supposed to be harmless even when there are too many. But if they're jumping to the numbers I've been seeing for the past 24 hours, something is really wrong!

        [root@1 ~]# netstat -nat | awk '{print $6}' | sort | uniq -c | sort -n
              1 established)
              1 Foreign
             12 CLOSE_WAIT
             15 LISTEN
             64 LAST_ACK
            201 FIN_WAIT2
            334 CLOSING
            605 ESTABLISHED
            816 SYN_RECV
            981 FIN_WAIT1
          26830 TIME_WAIT

    That number fluctuates from 20,000 to 30,000+ (so far, the maximum I've seen is 32,000), and what worries me is that they're all different IP addresses from all sorts of random locations. Now, this is supposed to be (or was supposed to be) a DDoS attack. I know this for a fact, but I won't go into the boring details. It started out as a DDoS, and it did impact my server's performance for a couple of minutes. After that, everything was back to normal: server load is normal, internet traffic is normal, no server resource is being abused, and my sites load fine.

    I also have iptables disabled, and there's an odd issue with that too. Every time I enable the firewall/iptables, my server starts experiencing packet loss, and lots of it: about 50%-60% of packets are lost. It happens within an hour or within a few hours of enabling the firewall, and as soon as I disable it, ping responses from all the locations I test from start clearing up and become stable again. Very strange.

    The TIME_WAIT counts have been fluctuating at those numbers since yesterday, for 24 hours now, and although they haven't impacted performance in any way, it's disturbing enough. My current tcp_fin_timeout value is 30 seconds, down from the default 60 seconds; however, that seems not to help at all. Any ideas or suggestions? Anything at all would be appreciated, really!
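    One detail worth knowing: on Linux, tcp_fin_timeout does not control TIME_WAIT at all; it only governs how long orphaned sockets linger in FIN_WAIT2, while the TIME_WAIT interval is compiled into the kernel at 60 seconds. That explains why lowering it from 60 to 30 changed nothing here. A minimal sketch of the knobs that actually relate to the numbers above:

        # tcp_fin_timeout only affects FIN_WAIT2, not TIME_WAIT:
        sysctl net.ipv4.tcp_fin_timeout

        # Let new outgoing connections reuse sockets stuck in TIME_WAIT;
        # generally considered safe, unlike tcp_tw_recycle, which breaks
        # clients behind NAT and is best left off:
        sysctl -w net.ipv4.tcp_tw_reuse=1

        # With 816 connections sitting in SYN_RECV, SYN cookies are worth
        # having on as well:
        sysctl -w net.ipv4.tcp_syncookies=1

    Since each TIME_WAIT entry costs only a small amount of kernel memory, 20,000-30,000 of them is mostly cosmetic on its own; the SYN_RECV count is the more interesting number if the DDoS is still trickling in.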

    Read the article
