Search Results

Search found 5462 results on 219 pages for 'continue'.


  • Can't get network bridging to work

    - by Antonis Christofides
    I'm trying to get network bridging to work on Debian squeeze (I'm experimenting in order to make a QEMU/KVM virtual machine that will be visible to the outside network as if it were a distinct machine). The problem is that when I type brctl addif br0 eth0, I lose connectivity to the network until I type brctl delif br0 eth0. More specifically, here's how my machine looks before I do anything (essentially eth0 is listening on 147.102.160.153):

        root@laura:/home/anthony# ip addr show
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever
        2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether 8c:73:6e:db:1c:1b brd ff:ff:ff:ff:ff:ff
            inet 147.102.160.153/24 brd 147.102.160.255 scope global eth0
            inet6 2001:648:2000:a0:8e73:6eff:fedb:1c1b/64 scope global dynamic
               valid_lft 2591848sec preferred_lft 604648sec
            inet6 fe80::8e73:6eff:fedb:1c1b/64 scope link
               valid_lft forever preferred_lft forever
        3: wlan0: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN qlen 1000
            link/ether 4c:ed:de:8e:44:d7 brd ff:ff:ff:ff:ff:ff
        4: vboxnet0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
            link/ether 0a:00:27:00:00:00 brd ff:ff:ff:ff:ff:ff
        5: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
            link/ether ee:7c:88:59:d0:e8 brd ff:ff:ff:ff:ff:ff

    Now let me add the bridge:

        root@laura:/home/anthony# brctl addbr br0
        root@laura:/home/anthony# ip tuntap add dev tap0 mode tap
        root@laura:/home/anthony# ip link set tap0 up
        root@laura:/home/anthony# brctl addif br0 tap0

    Up to here everything continues to work normally. Finally, I try to add eth0 to the bridge:

        root@laura:/home/anthony# brctl addif br0 eth0

    At this point, I no longer have a network connection. If I try to ping something, it says "Destination Host Unreachable". The output of ip addr show seems normal:

        root@laura:/home/anthony# ip addr show
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever
        2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether 8c:73:6e:db:1c:1b brd ff:ff:ff:ff:ff:ff
            inet 147.102.160.153/24 brd 147.102.160.255 scope global eth0
            inet6 2001:648:2000:a0:8e73:6eff:fedb:1c1b/64 scope global dynamic
               valid_lft 2591908sec preferred_lft 604708sec
            inet6 fe80::8e73:6eff:fedb:1c1b/64 scope link
               valid_lft forever preferred_lft forever
        [snip wlan0, vboxnet0 and pan0, which are down and irrelevant]
        8: br0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
            link/ether 16:30:f2:67:ab:75 brd ff:ff:ff:ff:ff:ff
        9: tap0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
            link/ether 16:30:f2:67:ab:75 brd ff:ff:ff:ff:ff:ff
            inet6 fe80::1430:f2ff:fe67:ab75/64 scope link
               valid_lft forever preferred_lft forever

    Also:

        root@laura:/home/anthony# route -n
        Kernel IP routing table
        Destination      Gateway          Genmask          Flags  Metric  Ref  Use  Iface
        147.102.160.0    0.0.0.0          255.255.255.0    U      1       0    0    eth0
        169.254.0.0      0.0.0.0          255.255.0.0      U      1000    0    0    eth0
        0.0.0.0          147.102.160.200  0.0.0.0          UG     0       0    0    eth0

    I can't understand what I'm doing wrong. I want the machine to continue to listen on 147.102.160.153 on eth0, and in addition to that I want a tap0 interface, bridged to eth0, that will be available to the guest machine so that the latter listens on another IP address (say 147.102.160.205). (If there's another way to achieve what I want, I'm also interested.)
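    A plausible explanation, inferred only from the output above rather than confirmed: once eth0 is enslaved to br0, the kernel expects the host's IP address and default route to live on br0 rather than on the enslaved port, and br0 is also still DOWN in the listing. A minimal sketch of the conventional fix follows, reusing the address 147.102.160.153/24 and gateway 147.102.160.200 from the question:

        # sketch: move the address and default route from the bridge port to the bridge itself
        ip addr flush dev eth0                                      # eth0 becomes a plain bridge port
        ip link set br0 up
        ip addr add 147.102.160.153/24 brd 147.102.160.255 dev br0
        ip route add default via 147.102.160.200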

  • Understanding NFS4 (Linux server)

    - by drumfire
    I've been a bit bothered by NFS4 on Linux. Some information 'out there' seems to conflict with other information, and other information appears hard to find. So here are a couple of things that caught my attention; hopefully someone out there can shed some light on this. This question focuses exclusively on NFS4 without Kerberos etc.

    1. Exports

    There is ambiguous information in the exports manpage on the structure of /etc/exports. To quote from exports(5):

        Also, each line may have one or more specifications for default options after the path name, in the form of a dash ("-") followed by an option list. The option list is used for all subsequent exports on that line only.

    What does "subsequent exports on that line only" mean?

    1.2 fsid=0 not required anymore?

    I was searching for fsid when I found a comment on the linux-nfs list stating fsid=0 is not required anymore. Now I'm just confused: do I need it with nfs4 or not?!

    2. Non-exported directory still mountable

    Say I have the following tree:

        /exp
        /exp/users
        /exp/distr
        /exp/distr/archlinux
        /exp/distr/debian

    And I have the following entries in fstab:

        /dev/disk/by-label/users  /mnt/users  ext4  defaults  0 0
        /dev/disk/by-label/distr  /mnt/distr  ext4  defaults  0 0
        /mnt/users                /exp/users  none  bind      0 0
        /mnt/distr                /exp/distr  none  bind      0 0

    And my exports file is exactly this:

        /exp        192.168.1.0/24(fsid=0,rw,async,no_subtree_check,no_root_squash)
        /exp/distr  192.168.1.0/24(rw,async,no_subtree_check,no_root_squash)

    And exportfs -arv shows:

        exporting 192.168.1.0/24:/exp/distr
        exporting 192.168.1.0/24:/exp

    Then why am I able to do this on a client and get no error:

        mount -t nfs4 server:/exp/users /tmp/test

    even though /exp/users is not exported? I didn't export this directory, and while I don't see the contents of /dev/disk/by-label/users unless I specify crossmnt, I am still able to write to the directory. Everything I write there goes to the underlying directory of /exp/users, which can be seen when I umount /exp/users; ls /exp/users.

    3. The odd case of showmount -d server

    As stated by rpc.mountd(8), this command should display directories that are either currently mounted by clients, or stale entries in /var/lib/nfs/rmtab, as can be read:

        The rpc.mountd daemon registers every successful MNT request by adding an entry to the /var/lib/nfs/rmtab file. When receiving a UMNT request from an NFS client, rpc.mountd simply removes the matching entry from /var/lib/nfs/rmtab, as long as the access control list for that export allows that sender to access the export. (...) Note, however, that there is little to guarantee that the contents of /var/lib/nfs/rmtab are accurate. A client may continue accessing an export even after invoking UMNT. If the client reboots without sending a UMNT request, stale entries remain for that client in /var/lib/nfs/rmtab.

    After reading this I surely wonder:

    - Isn't it terribly insecure to just expose this type of client information?
    - Aren't unaware server admins bound to have an rmtab with a lot of stale clients?
    - Is this the reason that clients that mount nfs4 directories with mount -v get to see output like "nothing was mounted" even though something was mounted?

    I have a lot of other questions regarding nfs4, but I'll keep it at this for the moment.. :)
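    On the first question, a small illustration of the quoted syntax may help; this is a hedged sketch with hypothetical paths and hosts, not taken from the poster's configuration:

        # the "-ro,async" list sets defaults only for the host specs that follow it on this line
        /srv/pub  trusted.example.com(rw)  -ro,async  192.168.1.0/24  10.0.0.0/8(no_subtree_check)

    Here trusted.example.com gets rw because it precedes the dash list, while the two networks after it inherit ro,async unless they override an option explicitly; a second export line starts fresh with no defaults.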

  • Courier Maildrop error user unknown. Command output: Invalid user specified

    - by cad
    Hello, I have a problem with maildrop. I have read dozens of webs/howtos/emails but couldn't solve it. My objective is moving spam messages automatically to a spam folder. My email server is working perfectly; it marks spam in the subject and headers using SpamAssassin. My box has:

    - Ubuntu 9.04
    - Web: Apache2 + PHP5 + MySQL
    - MTA: Postfix 2.5.5 + SpamAssassin + virtual users using MySQL
    - IMAP: Courier 0.61.2 + Courier AuthLib
    - WebMail: SquirrelMail

    I have read that I could use SquirrelMail directly (not a good idea), procmail or maildrop. As I already have maildrop in the box (from Courier), I have configured the server to use maildrop (added an entry in the transport table for a virtual domain). I found this error in email:

        This is the mail system at host foo.net
        I'm sorry to have to inform you that your message could not be delivered to one or more recipients. It's attached below.
        For further assistance, please send mail to postmaster.
        If you do so, please include this problem report. You can delete your own text from the attached returned message.
        The mail system
        <[email protected]>: user unknown. Command output: Invalid user specified.
        Final-Recipient: rfc822; [email protected]
        Action: failed
        Status: 5.1.1
        Diagnostic-Code: x-unix; Invalid user specified.
        ---------- Forwarded message ----------
        From: test <[email protected]>
        To: [email protected]
        Date: Sat, 1 May 2010 19:49:57 +0100
        Subject: fail fail

    And this in the logs (note the bounced maildrop delivery in the middle):

        May 1 18:50:18 foo.net postfix/smtpd[14638]: connect from mail-bw0-f212.google.com[209.85.218.212]
        May 1 18:50:19 foo.net postfix/smtpd[14638]: 8A9E9DC23F: client=mail-bw0-f212.google.com[209.85.218.212]
        May 1 18:50:19 foo.net postfix/cleanup[14643]: 8A9E9DC23F: message-id=<[email protected]>
        May 1 18:50:19 foo.net postfix/qmgr[14628]: 8A9E9DC23F: from=<[email protected]>, size=1858, nrcpt=1 (queue active)
        May 1 18:50:23 foo.net postfix/pickup[14627]: 1D4B4DC2AA: uid=5002 from=<[email protected]>
        May 1 18:50:23 foo.net postfix/cleanup[14643]: 1D4B4DC2AA: message-id=<[email protected]>
        May 1 18:50:23 foo.net postfix/pipe[14644]: 8A9E9DC23F: to=<[email protected]>, relay=spamassassin, delay=3.8, delays=0.55/0.02/0/3.2, dsn=2.0.0, status=sent (delivered via spamassassin service)
        May 1 18:50:23 foo.net postfix/qmgr[14628]: 8A9E9DC23F: removed
        May 1 18:50:23 foo.net postfix/qmgr[14628]: 1D4B4DC2AA: from=<[email protected]>, size=2173, nrcpt=1 (queue active)
        May 1 18:50:23 foo.net postfix/pipe[14648]: 1D4B4DC2AA: to=<[email protected]>, relay=maildrop, delay=0.22, delays=0.06/0.01/0/0.15, dsn=5.1.1, status=bounced (user unknown. Command output: Invalid user specified. )
        May 1 18:50:23 foo.net postfix/cleanup[14643]: 4C2BFDC240: message-id=<[email protected]>
        May 1 18:50:23 foo.net postfix/qmgr[14628]: 4C2BFDC240: from=<>, size=3822, nrcpt=1 (queue active)
        May 1 18:50:23 foo.net postfix/bounce[14651]: 1D4B4DC2AA: sender non-delivery notification: 4C2BFDC240
        May 1 18:50:23 foo.net postfix/qmgr[14628]: 1D4B4DC2AA: removed
        May 1 18:50:24 foo.net postfix/smtp[14653]: 4C2BFDC240: to=<[email protected]>, relay=gmail-smtp-in.l.google.com[209.85.211.97]:25, delay=0.91, delays=0.02/0.03/0.12/0.74, dsn=2.0.0, status=sent (250 2.0.0 OK 1272739824 37si5422420ywh.59)
        May 1 18:50:24 foo.net postfix/qmgr[14628]: 4C2BFDC240: removed

    My config files:

    - http://lar3d.net/main.cf (/etc/postfix)
    - http://lar3d.net/master.c (/etc/postfix)
    - http://lar3d.net/local.cf (/etc/spamassasin)
    - http://lar3d.net/maildroprc (maildroprc)

    If I change the master.cf line (as suggested here)

        maildrop  unix  -  n  n  -  -  pipe
          flags=DRhu user=vmail argv=/usr/lib/courier/bin/maildrop -d ${recipient}

    to

        maildrop  unix  -  n  n  -  -  pipe
          flags=DRhu user=vmail argv=/usr/lib/courier/bin/maildrop -d vmail ${recipient}

    I get the email in /home/vmail/MailDir instead of the correct dir (/home/vmail/foo.net/info/.SPAM). After reading a lot I have some guesses, but I'm not sure:

    - Maybe I have to install userdb?
    - Maybe it's something related to MySQL, but everything is working OK.
    - If I try procmail I will face the same problem...
    - What are the flags DRhu for? I couldn't find documentation about them.
    - In some places I found the maildrop line with more parameters: flags=DRhu user=vmail argv=/usr/lib/courier/bin/maildrop -d $ ${recipient} ${extension} ${recipient} ${user} ${nexthop} ${sender}

    I am really lost and don't know how to continue. If you have any idea or need another config file, please let me know. Thanks!!!
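    For what it's worth, a minimal maildroprc sketch for the stated goal (filing SpamAssassin-tagged mail into the .SPAM maildir). The X-Spam-Flag test is an assumption about how the spam is marked, and the folder paths are the ones given in the question:

        # deliver tagged spam to the virtual user's .SPAM maildir (trailing slash = maildir)
        if (/^X-Spam-Flag: *YES/)
        {
            to "/home/vmail/foo.net/info/.SPAM/"
        }
        # otherwise fall through to the normal maildir
        to "/home/vmail/foo.net/info/"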

  • Debian Lenny to Debian Squeeze upgrade problems

    - by Roland Soós
    Hi! Yesterday I made a dist-upgrade on my Debian Lenny server. I thought it would be as easy as a usual upgrade, but it's not. I got a lot of problems after the update:

        # apt-get upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies:
          linux-image-2.6-amd64 : Depends: linux-image-2.6.32-5-amd64 but it is not installed
        E: Unmet dependencies. Try using -f.

    Then I tried the suggestion:

        # apt-get -f install
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following packages were automatically installed and are no longer required:
          libio-compress-base-perl libatk1.0-0 libts-0.0-0 libmime-types-perl libc-client2007b
          libgtk2.0-common libxfixes3 libgsf-1-common hicolor-icon-theme libfile-remove-perl
          libxcomposite1 libltdl3-dev libneon27 libmd5-perl libwmf0.2-7 libilmbase6 libatk1.0-data
          djvulibre-desktop libdirectfb-1.0-0 fam libxinerama1 libcroco3 libopenexr6 libgsf-1-114
          libmail-box-perl libdjvulibre21 openssl-blacklist librsvg2-2 libio-compress-zlib-perl
          libsysfs2 libbeecrypt6 libxdamage1 libobject-realize-later-perl libuser-identity-perl
          libgtk2.0-bin libxi6 libxcursor1 portmap libxrandr2 libgtk2.0-0
        Use 'apt-get autoremove' to remove them.
        The following extra packages will be installed:
          linux-image-2.6.32-5-amd64
        Suggested packages:
          linux-doc-2.6.32
        The following NEW packages will be installed:
          linux-image-2.6.32-5-amd64
        0 upgraded, 1 newly installed, 0 to remove and 121 not upgraded.
        98 not fully installed or removed.
        Need to get 0 B/28.6 MB of archives.
        After this operation, 103 MB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings:
                LANGUAGE = (unset),
                LC_ALL = (unset),
                LANG = "hu_HU.UTF-8"
            are supported and installed on your system.
        perl: warning: Falling back to the standard locale ("C").
        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: Nincs ilyen fájl vagy könyvtár
        Preconfiguring packages ...
        (Reading database ... 37915 files and directories currently installed.)
        Unpacking linux-image-2.6.32-5-amd64 (from .../linux-image-2.6.32-5-amd64_2.6.32-30_amd64.deb) ...
        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: Nincs ilyen fájl vagy könyvtár
        dpkg: error processing /var/cache/apt/archives/linux-image-2.6.32-5-amd64_2.6.32-30_amd64.deb (--unpack):
         failed in write on buffer copy for backend dpkg-deb during `./lib/modules/2.6.32-5-amd64/kernel/sound/pci/hda/snd-hda-codec-realtek.ko': No space left on device
        configured to not write apport reports
        dpkg-deb: subprocess paste killed by signal (Broken pipe)
        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: Nincs ilyen fájl vagy könyvtár
        Running postrm hook script /sbin/update-grub.
        Searching for GRUB installation directory ... found: /boot/grub
        Searching for default file ... found: /boot/grub/default
        Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst
        Searching for splash image ... none found, skipping ...
        Found kernel: /boot/vmlinuz-2.6.26-2-amd64
        Updating /boot/grub/menu.lst ... done
        Examining /etc/kernel/postrm.d .
        run-parts: executing /etc/kernel/postrm.d/initramfs-tools 2.6.32-5-amd64 /boot/vmlinuz-2.6.32-5-amd64
        Errors were encountered while processing:
         /var/cache/apt/archives/linux-image-2.6.32-5-amd64_2.6.32-30_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

        # dpkg-reconfigure locales
        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings:
                LANGUAGE = (unset),
                LC_ALL = (unset),
                LANG = "hu_HU.UTF-8"
            are supported and installed on your system.
        perl: warning: Falling back to the standard locale ("C").
        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: Nincs ilyen fájl vagy könyvtár
        /usr/sbin/dpkg-reconfigure: locales is broken or not fully installed

    Then I got stuck. Do you have any idea how I could solve this?
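    The decisive line above is the "No space left on device" while unpacking the kernel (the Hungarian "Nincs ilyen fájl vagy könyvtár" lines are locale errors meaning "No such file or directory"). A hedged sketch of the usual first steps, using standard Debian tooling rather than anything from the original thread:

        df -h / /boot /var           # confirm which filesystem is actually full
        apt-get clean                # empty /var/cache/apt/archives
        apt-get -f install           # retry the interrupted kernel installation
        dpkg-reconfigure locales     # regenerate hu_HU.UTF-8 once dpkg works again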

  • Process not Listed by PS or in /proc/

    - by Hammer Bro.
    I'm trying to figure out how to operate a rather large Java program, 'prog'. I go to its /bin/ dir and configure its setenv.sh and prog.sh to use local directories and my current user account, then I try to run it via "./prog.sh start". Here are all the relevant bits of prog.sh:

        USER=(my current account)
        _CMD="/opt/jdk/bin/java -server -Xmx768m -classpath "${CLASSPATH}" -jar "${DIR}/prog.jar""

        case "${ACTION}" in
        start)
            nohup su ${USER} -c "exec ${_CMD} >>${_LOGFILE} 2>&1" >/dev/null &
            echo $! >${_PID}
            echo "Prog running. PID="`cat ${_PID}`
            ;;
        stop)
            PID=`cat ${_PID} 2>/dev/null`
            echo "Shutting down prog: ${PID}"
            kill -QUIT ${PID} 2>/dev/null
            kill ${PID} 2>/dev/null
            kill -KILL ${PID} 2>/dev/null
            rm -f ${_PID}
            echo "STOPPED `date`" >>${_LOGFILE}
            ;;

    When I actually do ./prog.sh start, it starts. But I can't find it at all on the process list, nor can I kill it manually using the same command the shell script uses. But I can tell it's running, because if I do ./prog.sh stop, it stops (and some temporary files elsewhere clean themselves out).

        ./prog.sh start
        Prog running. PID=1234
        ps eaux | grep 1234
        ps eaux | grep -i prog.jar
        ps eaux >> pslist.txt
        (It's not there either by PID or any clear name I can find: prog, java or jar.)
        cd /proc/1234/
        -bash: cd: /proc/1234/: No such file or directory
        kill -QUIT 1234
        kill 1234
        kill -KILL 1234
        -bash: kill: (1234) - No such process
        ./prog.sh stop
        Shutting down prog: 1234

    As far as I can tell, the process is running yet not in any way listed by the system. I can't find it in ps or /proc/, nor can I kill it. But the shell script can still stop it properly. So my question is: how can something like this happen? Is the process supremely hidden, actually unlisted, or am I just missing it in some fashion? I'm trying to figure out what makes this program tick, and I can barely prove that it's ticking!

    Edit: ps eu | grep prog.sh (after having restarted, so a random PID):

        50038 19381 0.0 0.0 4412 632 pts/3 S+ 16:09 0:00 grep prog.sh HOSTNAME=machine.server.com TERM=vt100 SHELL=/bin/bash HISTSIZE=1000 SSH_CLIENT=::[STUFF] 1754 22 CVSROOT=:[DIR] SSH_TTY=/dev/pts/3 ANT_HOME=/opt/apache-ant-1.7.1 USER=[USER] LS_COLORS=[COLORS] SSH_AUTH_SOCK=[DIR] KDEDIR=/usr MAIL=[DIR] PATH=[DIRS] INPUTRC=/etc/inputrc PWD=[PWD] JAVA_HOME=/opt/jdk1.6.0_21 LANG=en_US.UTF-8 SSH_ASKPASS=/usr/libexec/openssh/gnome-ssh-askpass M2_HOME=/opt/apache-maven-2.2.1 SHLVL=1 HOME=[~] LOGNAME=[USER] SSH_CONNECTION=::[STUFF] LESSOPEN=|/usr/bin/lesspipe.sh %s G_BROKEN_FILENAMES=1 _=/bin/grep OLDPWD=[DIR]

    I just realized that the stop) part of prog.sh isn't actually a guarantee that the process it claims to be stopping is running -- it just tries to kill the PID, suppresses all output, deletes the temporary file and manually inserts STOPPED into the log file. So I'm no longer so certain that the process is always running when I ps for it, although the code sample above indicates that it at least runs erratically. I'll continue looking into this undocumented behemoth when I return to work tomorrow.
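    One observation worth flagging (an inference from the script above, not from the thread): in the start) branch, $! records the PID of the backgrounded su wrapper, while the JVM itself runs as su's child, so the PID file need not match any java process. A hedged sketch for hunting the process down:

        pgrep -fl prog.jar                  # match against the full command line, not just "java"
        ps auxwwf | grep '[p]rog.jar'       # unlimited width; the [p] trick keeps grep from matching itself
        ls -l /proc/$(pgrep -f prog.jar)/cwd 2>/dev/null   # inspect the surviving process directly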

  • Bizarre and very specific Internet connection loss

    - by Synetech
    Yesterday (Friday, September 21, 2012), my Internet connection started acting up. After some testing, I confirmed a very specific and baffling set of symptoms:

    - Internet connection goes away every 25-35 minutes (I did not confirm the exact interval, but it seems to be about 30 mins)
    - Only some protocols are affected: HTTP*, P2P, etc. stop working; FTP, etc. continue to work
    - When it's stopped, I cannot even ping the router or cable-modem IPs or view their firmware pages
    - Domain names and IPs are irrelevant (for protocols that stop working, neither works; for those that still work, both work)
    - Resetting the router fixes it for another 30 minutes
    - Keeping the connection idle or active doesn't seem to make a difference (nor does the bandwidth usage in that period)
    - Connecting directly to the cable-modem allows it to work indefinitely
    - Disconnecting the router from the cable-modem works indefinitely (no Internet connection obviously, but I can still access the router IP and firmware page)
    - Connecting the router to the cable-modem, but putting the modem on standby, also works indefinitely
    - Same problem with both a wireless laptop and a wired (on any port) desktop (both Windows 7; will try to test Windows XP when possible)

    Nothing had changed in the days leading up to the issue. No modifications to the networking configuration or the router; there were not even any Windows updates except for an MSSE definition update. Waiting does not fix it, nor does any amount of fiddling with anything; only resetting the router fixes it for 30 minutes (resetting the cable-modem doesn't work either). I tried cleaning the pins in the router's plugs, but that didn't help, which was not really a surprise since I was not getting a lost-connection error.

    Obviously my first thought was that the router was having a problem, and this is borne out by some tests. The problem is that when it drops, it is not a full drop, since I can still do things like ftp ftp.mcafee.com, which means that the connection and DNS are still working. Moreover, if it were the router, then why does it stay alive indefinitely when not connected to the cable-modem (i.e., no outside influence)? The problem doesn't seem to be either the cable-modem or the router, but rather an interaction between the two, like something from the outside (port scan? hacker? ISP?) that is triggering a problem in the router. I see that there have been a couple of vulnerabilities for the DI-524, but those were a while back and should be fixed since I have the latest firmware for it. I don't think it's my ISP (Rogers), since I have been using the router for several years without problem and can connect indefinitely when bypassing it. But I can't rule them out, since that is one of the only possible things that could have suddenly changed.

    Does anybody have any ideas of explanations, fixes, or tests? (I note that when I opened the router, I heard a very high-pitched noise from somewhere near the capacitors/ferrite ring which I don't think I heard the last time I opened it a few years ago; but then, if it were that, why would it affect only a very small, specific set of functions?)

  • Multiple data centers and HTTP traffic: DNS Round Robin is the ONLY way to assure instant fail-over?

    - by vmiazzo
    Hi, multiple A records pointing to the same domain seem to be used almost exclusively to implement DNS Round Robin as a cheap load-balancing technique. The usual warning against DNS RR is that it is not good for high availability: when one IP goes down, clients will continue to use it for minutes. A load balancer is often suggested as a better choice. Both claims are not completely true:

    - When the traffic is HTTP, most of the HTML browsers are able to automatically try the next A record if the previous one is down, without a new DNS look-up. Read here chapter 3.1 and here.
    - When multiple data centers are involved, DNS RR is the only option to distribute traffic across them.

    So, is it true that, with multiple data centers and HTTP traffic, the use of DNS RR is the ONLY way to assure instant fail-over when one data center goes down? Thanks, Valentino

    Edit: Of course each data center has a local load balancer with hot spare. It's OK to sacrifice session affinity for an instant fail-over. AFAIK the only way for a DNS to suggest one data center instead of another is to reply with just the IP (or IPs) associated with that data center. If the data center becomes unreachable, then all those IPs are also unreachable. This means that, even if smart HTML browsers are able to instantly try another A record, all the attempts will fail until the local cache entry expires and a new DNS lookup is done, fetching the new working IPs (I assume the DNS automatically suggests a new data center when one fails). So, "smart DNS" cannot assure instant fail-over. Conversely, a DNS round-robin permits it: when one data center fails, the smart HTML browsers (most of them) instantly try the other cached A records, jumping to another (working) data center. So, DNS round-robin doesn't assure session affinity or the lowest RTT, but it seems to be the only way to assure instant fail-over when the clients are "smart" HTML browsers.

    Edit 2: Some people suggest TCP Anycast as a definitive solution. In this paper (chapter 6) it is explained that Anycast fail-over is related to BGP convergence. For this reason Anycast fail-over can take from 15 minutes down to 20 seconds to complete; 20 seconds is possible on networks where the topology was optimized for this. Probably only CDN operators can grant such fast fail-overs.

    Edit 3: I did some DNS look-ups and traceroutes (maybe some expert can double-check) and:

    - The only CDN using TCP Anycast seems to be CacheFly; other operators like CDN networks and BitGravity use CacheFly. It seems that their edges cannot be used as reverse proxies, therefore they cannot be used to grant instant failover.
    - Akamai and LimeLight seem to use geo-aware DNS. But! They return multiple A records, and from traceroutes it seems that the returned IPs are on the same data center. So, I'm puzzled how they can offer a 100% SLA when one data center goes down.
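    For concreteness, the setup under discussion looks roughly like this in zone-file terms; the name, reserved example IPs and TTL below are illustrative only, not the poster's records:

        ; round robin across two data centers, with a short TTL to limit staleness
        www.example.com.  60  IN  A  192.0.2.10      ; data center 1
        www.example.com.  60  IN  A  198.51.100.10   ; data center 2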

  • How can I work around problems with certificate configuration in Remote Desktop Services?

    - by Michael Steele
    I am setting up a Remote Desktop Services farm, and am having trouble configuring certificates for it to use. A demonstration of the problem I'm seeing can be found in Step #4. At this point I am convinced that there are problems with the user interface, and am looking for ways around them. Is there any way to configure certificates in Remote Desktop Services so that the settings hold and are reflected in the GUI? If not, is there any way for me to verify that the settings are correct?

    Step #1 - Create certificate to be used

    I've configured a certificate to use with RD Web Access. The certificate is stored within the Certificates MMC on my RD Connection Broker, and I am configuring the farm from that computer. I found by letting RD Web Access generate its own certificate that the following properties are required:

    - Enhanced Key Usage: Server Authentication, Client Authentication (this may not be required, but the self-signed certificate includes it)
    - Key Usage: Digital Signature, Key Agreement
    - Subject Alternative Name: DNS Name=domain.com

    Detour about self-signed certificate generation

    As a quick detour, I was able to work around a problem with creating self-signed certificates using PowerShell. The documentation for the New-RDCertificate cmdlet gives the following example:

        PS C:\> $password = ConvertTo-SecureString -String "password" -AsPlainText -Force
        PS C:\> New-RDCertificate -Role RDWebAccess -DnsName "test-rdwa.contoso.com" -Password $password -ConnectionBroker rdcb.contoso.com -ExportPath "c:\test-rdwa.pfx"

    Typing this into the shell will result in an error message claiming that a function, Get-Server, cannot be found. Prior to using New-RDCertificate, you must import the RemoteDesktop module with Import-Module RemoteDesktop.

    Step #2 - Observe out-of-box behavior

    The first time you visit the Deployment Properties dialog box, by navigating to Server Manager - Remote Desktop Services - Collections and selecting "Edit Deployment Properties" from the "TASKS" dropdown list in the "COLLECTIONS" grouping, the Level field is listed as "Not Configured", which is misleading: if I understand correctly, all three of the role services are actually using a self-signed certificate. For the RD Web Access role this can be verified by visiting the website, and the certificate being used also appears in the Certificates MMC.

    Step #3 - Assign new certificate

    The Deployment Properties dialog box will allow me to select my existing certificate. The certificate must be placed within the local computer's Certificates MMC in the "Personal" certificate store. The private key will need to be exportable, and you will need to provide the password. I temporarily exported my certificate to a file named temp.pfx with a password, and then imported it into Remote Desktop Services from there. Once this is done the GUI will indicate that it is ready to accept the new configuration, and once I click the "Apply" button, the GUI indicates success. This can be verified by visiting the RD Web Access web site a second time: there is no certificate error.

    Step #4 - The GUI fails to maintain its state

    If the GUI is closed and reopened, all of these settings appear to be lost. Actually, the certificate I configured is still being used, and I am able to continue accessing the RD Web Access site without any certificate errors. Oddly, if I use the "Create new certificate..." button to generate a self-signed certificate, this window will update to an "Untrusted" level, and that setting will then be maintained through the opening and closing of the Deployment Properties dialog box.

    Is there anything I can do to have my settings appear to stick? I feel like something is wrong when the GUI claims I haven't fully configured certificates.
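    As a possible workaround sketch (assuming the certificate cmdlets of the RemoteDesktop module are available, as on Server 2012 R2; the paths, password and broker name below reuse the question's placeholders), the certificate can be applied and read back from PowerShell instead of the GUI:

        Import-Module RemoteDesktop
        $password = ConvertTo-SecureString -String "password" -AsPlainText -Force
        # apply the exported pfx to a role without touching the dialog box
        Set-RDCertificate -Role RDWebAccess -ImportPath "C:\temp.pfx" -Password $password -ConnectionBroker rdcb.contoso.com
        # read back what the deployment actually has, independent of the GUI's state
        Get-RDCertificate -ConnectionBroker rdcb.contoso.com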

  • Conflict in Debian packages

    - by Alaa Alomari
    I have a Debian 4 server (I know it is very old):

        cat /etc/issue
        Debian GNU/Linux 4.0 \n \l

    I have the following in /etc/apt/sources.list:

        deb http://debian.uchicago.edu/debian/ stable main
        deb http://ftp.debian.org/debian/ stable main
        deb-src http://ftp.debian.org/debian/ stable main
        deb http://security.debian.org/ stable/updates main

        apt-get upgrade
        Reading package lists... Done
        Building dependency tree... Done
        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies.
          libt1-5: Depends: libc6 (= 2.7) but 2.3.6.ds1-13etch10+b1 is installed
          locales: Depends: glibc-2.11-1 but it is not installable
        E: Unmet dependencies. Try using -f.

    Now it shows that I have Debian 6!!

        cat /etc/issue
        Debian GNU/Linux 6.0 \n \l

    EDIT: I have tried:

        apt-get update
        Get: 1 http://debian.uchicago.edu stable Release.gpg [1672B]
        Hit http://debian.uchicago.edu stable Release
        Ign http://debian.uchicago.edu stable/main Packages/DiffIndex
        Hit http://debian.uchicago.edu stable/main Packages
        Get: 2 http://security.debian.org stable/updates Release.gpg [836B]
        Hit http://security.debian.org stable/updates Release
        Get: 3 http://ftp.debian.org stable Release.gpg [1672B]
        Ign http://security.debian.org stable/updates/main Packages/DiffIndex
        Hit http://security.debian.org stable/updates/main Packages
        Hit http://ftp.debian.org stable Release
        Ign http://ftp.debian.org stable/main Packages/DiffIndex
        Ign http://ftp.debian.org stable/main Sources/DiffIndex
        Hit http://ftp.debian.org stable/main Packages
        Hit http://ftp.debian.org stable/main Sources
        Fetched 3B in 0s (3B/s)
        Reading package lists... Done

        apt-get dist-upgrade
        Reading package lists... Done
        Building dependency tree... Done
        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies.
          libt1-5: Depends: libc6 (= 2.7) but 2.3.6.ds1-13etch10+b1 is installed
          locales: Depends: glibc-2.11-1
        E: Unmet dependencies. Try using -f.

        apt-get -f install
        Reading package lists... Done
        Building dependency tree... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          gcc-4.4-base libbsd-dev libbsd0 libc-bin libc-dev-bin libc6
        Suggested packages:
          glibc-doc
        Recommended packages:
          libc6-i686
        The following packages will be REMOVED
          libc6-dev libedit-dev libexpat1-dev libgcrypt11-dev libjpeg62-dev libmcal0-dev
          libmhash-dev libncurses5-dev libpam0g-dev libsablot0-dev libtool libttf-dev
        The following NEW packages will be installed
          gcc-4.4-base libbsd-dev libbsd0 libc-bin libc-dev-bin
        The following packages will be upgraded:
          libc6
        1 upgraded, 5 newly installed, 12 to remove and 349 not upgraded.
        7 not fully installed or removed.
        Need to get 0B/5050kB of archives.
        After unpacking 23.1MB disk space will be freed.
        Do you want to continue [Y/n]? y
        Preconfiguring packages ...
        dpkg: regarding .../libc-bin_2.11.3-2_i386.deb containing libc-bin:
         package uses Breaks; not supported in this dpkg
        dpkg: error processing /var/cache/apt/archives/libc-bin_2.11.3-2_i386.deb (--unpack):
         unsupported dependency problem - not installing libc-bin
        Errors were encountered while processing:
         /var/cache/apt/archives/libc-bin_2.11.3-2_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Now it seems there is a conflict!! How can I fix it? And is it true that the server has become Debian 6?? Thanks for your help
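    A hedged reading of the symptom, for context: "stable" is a moving alias, so these sources now point at Debian 6 (squeeze) even though the installed system is still Debian 4 (etch) -- /etc/issue presumably changed only because apt already pulled in squeeze's base-files package, and etch's dpkg predates the Breaks field, hence the error. A sources.list pinned to the codename would stop the skew; the archive host below is where etch packages live nowadays, but verify it before relying on it:

        # pin to the codename instead of the moving "stable" alias
        deb http://archive.debian.org/debian/ etch main
        deb-src http://archive.debian.org/debian/ etch main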

  • Unison synchronization problem. Roots are not identical after synchronization.

    - by binary255
    Hi. When I synchronize two folders using Unison, only one of the roots seems to be affected. Below is all the information I would think is necessary to figure out why it is working like it is. I'm using:

        $ unison -version
        unison version 2.27.57

    from the Ubuntu repositories.

    My work laptop:

        $ echo $UNISONLOCALHOSTNAME
        worklaptop
        $ pwd
        /home/userfoo
        $ ls -lAR .unison*
        .unison:
        total 8
        drwxr-xr-x 2 userfoo userfoo 4096 2010-04-26 11:39 backups
        -rw-r--r-- 1 userfoo userfoo  231 2010-04-26 11:38 default.prf

        .unison/backups:
        total 0

        .unisonroot:
        total 0
        $ cat .unison/default.prf
        # Roots of the synchronization
        root = /home/userfoo/.unisonroot
        root = ssh://devel//home/userbar/.unisonroot
        path = *
        backuplocation = central
        backupdir = /home/.unison/backups
        backupprefix = $VERSION.bak
        $ mkdir .unisonroot/aDirectoryFrom-$UNISONLOCALHOSTNAME
        $ echo something >.unisonroot/aFileFrom-$UNISONLOCALHOSTNAME
        $ ls .unisonroot/
        aDirectoryFrom-worklaptop  aFileFrom-worklaptop

    And the Ubuntu server I want to synchronize with:

        $ echo $UNISONLOCALHOSTNAME
        workcmpuserbardevel
        $ pwd
        /home/userbar
        $ ls -lAR .unison*
        .unison:
        total 4
        drwxr-xr-x 2 userbar userbar 4096 2010-04-26 11:38 .unison

        .unison/.unison:
        total 0

        .unisonroot:
        total 0
        $ mkdir .unisonroot/aDirectoryFrom-$UNISONLOCALHOSTNAME
        $ echo something >.unisonroot/aFileFrom-$UNISONLOCALHOSTNAME
        $ ls .unisonroot/
        aDirectoryFrom-workcmpuserbardevel  aFileFrom-workcmpuserbardevel

    I perform the Unison synchronization:

        $ echo $UNISONLOCALHOSTNAME
        worklaptop
        $ unison
        Contacting server...
        Connected [//worklaptop//home/userfoo/.unisonroot -> //workcmpuserbardevel//home/userbar/.unisonroot]
        Looking for changes
        Warning: No archive files were found for these roots, whose canonical names are:
            /home/userfoo/.unisonroot
            //workcmpuserbardevel//home/userbar/.unisonroot
        This can happen either because this is the first time you have synchronized these roots, or because you have upgraded Unison to a new version with a different archive format.
        Update detection may take a while on this run if the replicas are large.
        Unison will assume that the 'last synchronized state' of both replicas was completely empty. This means that any files that are different will be reported as conflicts, and any files that exist only on one replica will be judged as new and propagated to the other replica. If the two replicas are identical, then no changes will be reported.
        If you see this message repeatedly, it may be because one of your machines is getting its address from DHCP, which is causing its host name to change between synchronizations. See the documentation for the UNISONLOCALHOSTNAME environment variable for advice on how to correct this.
        Donations to the Unison project are gratefully accepted: http://www.cis.upenn.edu/~bcpierce/unison
        Press return to continue.[<spc>]
        Waiting for changes from server
        Reconciling changes
        local          workcmps...
        dir      ----> aDirectoryFrom-worklaptop  [f]
        file     ----> aFileFrom-worklaptop  [f]
        Proceed with propagating updates? [] y
        Propagating updates
        UNISON 2.27.57 started propagating changes at 11:49:14 on 26 Apr 2010
        [BGN] Copying aDirectoryFrom-worklaptop from /home/userfoo/.unisonroot to //workcmpuserbardevel//home/userbar/.unisonroot
        [BGN] Copying aFileFrom-worklaptop from /home/userfoo/.unisonroot to //workcmpuserbardevel//home/userbar/.unisonroot
        [END] Copying aDirectoryFrom-worklaptop
        [END] Copying aFileFrom-worklaptop
        UNISON 2.27.57 finished propagating changes at 11:49:14 on 26 Apr 2010
        Saving synchronizer state
        Synchronization complete  (2 items transferred, 0 skipped, 0 failures)

    And then I check the .unisonroot directory on the computer I started the synchronization from:

        $ ls .unisonroot/
        aDirectoryFrom-worklaptop  aFileFrom-worklaptop

    And on the server:

        $ echo $UNISONLOCALHOSTNAME
        workcmpuserbardevel
        $ ls .unisonroot/
        aDirectoryFrom-worklaptop  aFileFrom-worklaptop
        aDirectoryFrom-workcmpuserbardevel  aFileFrom-workcmpuserbardevel

    As can be seen above, the contents of the laptop .unisonroot have not changed while the server's .unisonroot has. The desired result would have been that the two folders ended up identical, holding the union of the contents of the two roots.
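    A hedged way to confirm the divergence directly (the paths and the ssh alias "devel" are taken from the profile above):

        # list both roots and diff them; any output means the roots are not identical
        ssh devel 'ls -A /home/userbar/.unisonroot' | sort >remote.txt
        ls -A /home/userfoo/.unisonroot | sort >local.txt
        diff local.txt remote.txt

    A second plain unison run, or one with -debug all, should also show whether Unison even considers the server-side files to be changes worth propagating.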

  • All client browsers repeatedly asking for NTLM authentication when running through local proxy server

    - by Marko
    All client browsers repeatedly ask for NTLM authentication when running through a local proxy server. When pointing browsers through the local proxy to the internet, some but not all clients are repeatedly prompted to authenticate to the proxy server. I have inspected the headers using Firefox Live Headers as well as Fiddler, and in all cases the authentication prompts happen when requesting SSL resources. An example of this would be as follows:

        GET http://gmail.google.com/mail/ HTTP/1.1
        Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, application/x-shockwave-flash, application/x-ms-application, application/x-ms-xbap, application/vnd.ms-xpsdocument, application/xaml+xml, */*
        Accept-Language: en-gb
        User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)
        Accept-Encoding: gzip, deflate
        Proxy-Connection: Keep-Alive
        Host: gmail.google.com

        GET http://gmail.google.com/mail/ HTTP/1.1
        Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, application/x-shockwave-flash, application/x-ms-application, application/x-ms-xbap, application/vnd.ms-xpsdocument, application/xaml+xml, */*
        Accept-Language: en-gb
        User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)
        Accept-Encoding: gzip, deflate
        Proxy-Connection: Keep-Alive
        Host: gmail.google.com
        Proxy-Authorization: NTLM TlRMTVNTUAABAAAAB7IIogkACQAvAAAABwAHACgAAAAFASgKAAAAD1dJTlhQMUdGTEFHU0hJUDc=

        GET http://gmail.google.com/mail/ HTTP/1.1
        Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, application/x-shockwave-flash, application/x-ms-application, application/x-ms-xbap, application/vnd.ms-xpsdocument, application/xaml+xml, */*
        Accept-Language: en-gb
        User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)
        Accept-Encoding: gzip, deflate
        Proxy-Connection: Keep-Alive
        Proxy-Authorization: NTLM TlRMTVNTUAADA (more stuff goes here, I cut it short)
        Host: gmail.google.com

    At this point the username and password prompt has appeared in the browser. It does not matter what is typed into this box -- correct credentials, random nonsense -- the browser does not accept anything, and the prompt continues to pop up. If I press cancel, I sometimes get an HTTP 407 error, but on other occasions when I click cancel the website proceeds to download and display normally. This is repeatable with some clients running through my proxy server, but in other cases it does not happen at all. In the cases where a client computer works normally, the only difference I can see is that the third request for the SSL resource comes back with a 200 response, see below:

        CONNECT gmail.google.com:443 HTTP/1.0
        User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0; MALC)
        Proxy-Connection: Keep-Alive
        Content-Length: 0
        Host: gmail.google.com
        Pragma: no-cache
        Proxy-Authorization: NTLM TlRMTVNTUAADAAAAGAAYAIAAAA

        A SSLv3-compatible ClientHello handshake was found.

    I have tried resetting user accounts as well as computer accounts in Active Directory. The user accounts and passwords being used are correct, and the passwords have been reset so they are not out of sync. I have removed the clients and even the proxy server from the domain, and rejoined them. I have also installed a completely separate proxy server and get exactly the same problem when I point clients to it on a different IP address.
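    A hedged way to take the browsers out of the equation, using curl's NTLM proxy support; the proxy host, port and DOMAIN\user below are placeholders to fill in:

        # replay the failing case from the command line and watch the 407/200 exchange
        curl -v -x http://proxyhost:8080 --proxy-ntlm --proxy-user 'DOMAIN\user:password' https://gmail.google.com/

    If curl authenticates where the browsers loop, the problem more likely lies in the browsers' NTLM negotiation than in the accounts themselves.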

  • Windows XP Video Configuration Issues

    - by Matt
    Recently I had my motherboard burn out on me. Needing the machine for work, I purchased a different motherboard and installed that. Generally a reinstall of Windows is good at that point, but I am not in a position to do that, so I just decided I would live with it for now. When I can log in, everything works fine; what doesn't work is getting to the log-in prompt to begin with.

    Basically, when I first installed the new mobo, every time I rebooted the machine I would not get the Windows login prompt. One of the monitors would receive a signal but the screen would be black. Moving the mouse would not show the cursor, and hitting the up arrow key, typing my password and hitting enter (which will normally log you in without the mouse) wouldn't change anything. I would then change the monitor configuration around (two LCDs and a CRT) and reboot, and at least one of the monitors would work and display the login prompt. I could then go into display properties and turn on the other monitors. However, if I rebooted again, I would get the black screen on one monitor again, and would then have to change the configuration again to one not used before and redo the manual setup at that point. I think Windows saves the configurations, so I had to keep giving it new ones. Needless to say, I've been trying not to turn off my machine.

    Early this week I actually got the prompt to come up without playing musical monitors. Thinking everything was getting better, I found no harm in rebooting to install the latest Windows updates. Boy was I wrong. Now no matter what I do I can't get a Windows log-in prompt to display. I've tried almost every conceivable combination. The new mobo has onboard video, so I set that in the BIOS to be the primary video (yeah, the BIOS screen always displays fine; it's not until Windows boots that there is a problem). Still no luck. I have two other graphics cards in the machine which I'm using, and I tried all kinds of configurations between those and onboard, but I still get this black screen of death. I read somewhere that deleting the video drivers would reset the configurations, so I logged into safe mode (which works on one monitor) and uninstalled the display drivers. Still no luck, and when I booted back into safe mode, it wanted to install new hardware and the display adapters weren't there, as expected.

    Anyone have any ideas? A fresh install would be a pain, and I might be getting my old board back from RMA soon, so I'm not sure I want to go through with that just yet. The only thing I can think of is to continue trying other combinations, like physically removing the graphics cards. They are both EVGA 8600 cards, and the Windows boot screen does display, FWIW.

  • Choice and setup of version control

    - by Peter M
    I am about to set up a new laptop and in the process transition to a new version control system as part of a general cleanup. Currently I use a centralized version control system (yes it is VSS, and yes I know all the pros and cons of that system, but as a single-user system it works well for me). I have very few requirements for a new system and I am free to choose among any of the current mainstream players, but cost constraints will push me towards OSS. Some of my requirements are:

    - Runs on a single machine (i.e. the laptop in question) under Windows
    - I am not sharing things with other developers or workers -- this is more for my own historical benefit
    - I want to version source code, documentation and binary files
    - I have a large hierarchy of projects that are unrelated (see below)
    - I have files within the hierarchy that don't need to be controlled (but could be)
    - Some projects use Visual Studio, so some integration there could be nice
    - There could be some sharing of files between jobs
    - I generally only need a small amount of branching in code files

    The directory hierarchy that I have at the moment is somewhat like:

        Root
        |
        |--Customer #1
        |  |
        |  |--Job #1
        |  |  |
        |  |  |--Data files received from Customer for Job (not controlled)
        |  |  |--Documentation files (controlled)
        |  |  |--Project information files (not controlled - but could be)
        |  |  |--Software Project Files (controlled)
        |  |  |--Scratch dir for job (not controlled)
        |  |
        |  |--Job #2
        |  |  (same structure as above)
        |
        |--Customer #2
        |  |..
        |
        |--Customer #n
        |..

    Currently I have about 22 customers with differing numbers of projects underneath them. At the moment I have a single VSS repository based at the root of the directory structure. If I kept with a centralized system (i.e. SVN) I believe that I should keep the same approach and continue with a single repository based from the root dir. Is this a valid approach?

    However, if I move to a distributed tool then I am unsure of how I should handle the situation. My initial guess is that I should not have a repository based on the root of my entire directory structure -- but that is a guess, so I really don't know how valid it is. Should I pitch a distributed approach at the Root, Customer, Job or sub-Job directory level?

    Also, what I am not clear on with distributed tools (and perhaps with SVN as well) is whether I can branch parts of a repository. For example, I can see branching source code in software projects as being useful, but branching my documentation as not being useful. So if I pitch a repository at the Job level, can I just branch the Software Project Files? Or would all files in that Job be branched?

    Every time I look at distributed tools I get a nagging feeling that they are not suited to my style of setup. I am uncomfortable with the idea of having to manually set up something like 50 to 80 separate repositories (if I pitch at the Job level, or 20+ if at the Customer level) within my directory hierarchy. This feeling also extends to having all those repositories scattered around as well -- however, I do have a backup strategy that I trust, so this latter feeling is pretty well unfounded.

    So what advice can you all give me? Thanks in advance!
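    To make the per-Job option concrete, here is a hedged sketch using git -- one plausible choice among the OSS candidates under discussion, not a recommendation from the thread; the directory names come from the tree above:

        # one repository per job, leaving the uncontrolled directories out
        cd "Root/Customer #1/Job #1"
        git init
        printf '%s\n' \
            'Data files received from Customer for Job/' \
            'Scratch dir for job/' >.gitignore
        git add .
        git commit -m "Initial import of Job #1"

    Note that branching in git is repository-wide, which bears on the branching question: with a repository at the Job level, a branch covers documentation and code alike, and only a repository pitched at the Software Project Files level would let code branch independently.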

  • Bugzilla : No SASL mechanism found

    - by niteshsinha
    I am using Bugzilla on Windows 7, installed with the unofficial Bugzilla installer. I followed the steps accordingly and gave valid credentials wherever required. I open Bugzilla and try to create a new account, but I get the following error:

        Software error:
        No SASL mechanism found
         at C:/Program Files/Bugzilla/perl/perl/site/lib/Authen/SASL.pm line 77
         at C:/Program Files/Bugzilla/perl/perl/lib/Net/SMTP.pm line 143

    I ran checksetup.pl and found that Authen::SASL and SMTP are both available on my machine. The output of checksetup.pl is as follows:

        * This is Bugzilla 3.6.3 on perl 5.10.1
        * Running on Win7 Build 7600

        Checking perl modules...
        Checking for CGI.pm (v3.33)               ok: found v3.49
        Checking for Digest-SHA (any)             ok: found v5.48
        Checking for TimeDate (v2.21)             ok: found v2.24
        Checking for DateTime (v0.28)             ok: found v0.53
        Checking for DateTime-TimeZone (v0.79)    ok: found v1.10
        Checking for DBI (v1.41)                  ok: found v1.609
        Checking for Template-Toolkit (v2.22)     ok: found v2.22
        Checking for Email-Send (v2.16)           ok: found v2.198
        Checking for Email-MIME (v1.861)          ok: found v1.903
        Checking for Email-MIME-Encodings (v1.313) ok: found v1.313
        Checking for Email-MIME-Modifier (v1.442) ok: found v1.903
        Checking for URI (any)                    ok: found v1.52

        Checking available perl DBD modules...
        Checking for DBD-Pg (v1.45)               ok: found v2.16.1
        Checking for DBD-mysql (v4.00)            ok: found v4.012
        Checking for DBD-Oracle (v1.19)           not found

        The following Perl modules are optional:
        Checking for GD (v1.20)                   ok: found v2.44
        Checking for Chart (v2.1)                 ok: found v2.4.1
        Checking for Template-GD (any)            ok: found v1.56
        Checking for GDTextUtil (any)             ok: found v0.86
        Checking for GDGraph (any)                ok: found v1.44
        Checking for XML-Twig (any)               ok: found v3.34
        Checking for MIME-tools (v5.406)          ok: found v5.427
        Checking for libwww-perl (any)            ok: found v5.834
        Checking for PatchReader (v0.9.4)         ok: found v0.9.5
        Checking for perl-ldap (any)              ok: found v0.39
        Checking for Authen-SASL (any)            ok: found v2.15
        Checking for RadiusPerl (any)             ok: found v0.17
        Checking for SOAP-Lite (v0.710.06)        ok: found v0.710.10
        Checking for JSON-RPC (any)               ok: found v0.95
        Checking for Test-Taint (any)             ok: found v1.04
        Checking for HTML-Parser (v3.40)          ok: found v3.64
        Checking for HTML-Scrubber (any)          ok: found v0.08
        Checking for Email-MIME-Attachment-Stripper (any) ok: found v1.316
        Checking for Email-Reply (any)            ok: found v1.202
        Checking for TheSchwartz (any)            not found
        Checking for Daemon-Generic (any)         not found
        Checking for mod_perl (v1.999022)         not found

        ***********************************************************************
        * OPTIONAL MODULES                                                    *
        ***********************************************************************
        * Certain Perl modules are not required by Bugzilla, but by           *
        * installing the latest version you gain access to additional        *
        * features.                                                           *
        *                                                                     *
        * The optional modules you do not have installed are listed below,    *
        * with the name of the feature they enable. Below that table are the  *
        * commands to install each module.                                    *
        ***********************************************************************
        *       MODULE NAME * ENABLES FEATURE(S)                              *
        ***********************************************************************
        *       TheSchwartz * Mail Queueing                                   *
        *    Daemon-Generic * Mail Queueing                                   *
        *          mod_perl * mod_perl                                        *
        ***********************************************************************
        * Note For Windows Users                                              *
        ***********************************************************************
        * In order to install the modules listed below, you first have to run *
        * the following command as an Administrator:                          *
        *                                                                     *
        *   ppm repo add theory58S http://cpan.uwinnipeg.ca/PPMPackages/10xx/ *
        *                                                                     *
        * Then you have to do (also as an Administrator):                     *
        *                                                                     *
        *   ppm repo up theory58S                                             *
        *                                                                     *
        * Do that last command over and over until you see "theory58S" at the *
        * top of the displayed list.                                          *
        ***********************************************************************
        COMMANDS TO INSTALL OPTIONAL MODULES:
           TheSchwartz: ppm install TheSchwartz
        Daemon-Generic: ppm install Daemon-Generic
              mod_perl: ppm install mod_perl

        Reading ./localconfig...
        Checking for DBD-mysql (v4.00)            ok: found v4.012
        Checking for MySQL (v4.1.2)               ok: found v5.1.44-community-log
        Removing existing compiled templates...
        Precompiling templates...done.
        Now that you have installed Bugzilla, you should visit the 'Parameters' page (linked in the footer of the Administrator account) to ensure it is set up as you wish - this includes setting the 'urlbase' option to the correct URL.
        Press any key to continue . . .

    Please tell me what I should do. Please note: I am running behind a corporate proxy, SSL/TLS is not used internally, but I am supplying the smtpUser and smtpPass as well.
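    A hedged diagnostic, since the error comes from Net::SMTP asking Authen::SASL for a usable mechanism: it is worth checking, from the same Perl that Bugzilla uses, whether a LOGIN/PLAIN mechanism can be constructed at all. The interpreter path below assumes the installer's layout shown in the error, and the mechanism list is an assumption to adjust to what the SMTP server offers:

        "C:\Program Files\Bugzilla\perl\perl\bin\perl" -MAuthen::SASL -e "print Authen::SASL->new(mechanism => 'LOGIN PLAIN')->client_new('smtp', 'localhost')->mechanism"

    If that prints nothing or dies, the Authen::SASL::Perl backend modules are the likely missing piece, even though Authen::SASL itself shows up as installed in checksetup.pl.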

  • Installing Chrome in CentOS 6.2 (Final)

    - by usjes
    I need to install Chrome on a dedicated CentOS server that I only access via ssh; it doesn't have X or any graphical stuff. I need it to be able to pack extensions using google-chrome --pack-extension. I tried adding this to /etc/yum.repos.d/google.repo:

        [google-chrome]
        name=google-chrome - 32-bit
        baseurl=http://dl.google.com/linux/chrome/rpm/stable/i386
        enabled=1
        gpgcheck=1
        gpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub

    and then yum install google-chrome-stable, but there's a huge list of dependency problems. How can I install Chrome without breaking anything else?

    UPDATE: OK, I installed perl-CGI from an .rpm because yum couldn't find it. Now the dependencies resolve and it shows me this list of packages to install:

        Dependencies Resolved

        =============================================================================================
         Package                      Arch    Version                           Repository      Size
        =============================================================================================
        Installing:
         google-chrome-stable         x86_64  19.0.1084.52-138391               google-chrome   35 M
        Installing for dependencies:
         ConsoleKit                   x86_64  0.4.1-3.el6                       base            82 k
         ConsoleKit-libs              x86_64  0.4.1-3.el6                       base            17 k
         GConf2                       x86_64  2.28.0-6.el6                      base           964 k
         ORBit2                       x86_64  2.14.17-3.1.el6                   base           168 k
         bc                           x86_64  1.06.95-1.el6                     base           110 k
         cdparanoia-libs              x86_64  10.2-5.1.el6                      base            47 k
         cups                         x86_64  1:1.4.2-44.el6_2.3                updates        2.3 M
         dbus                         x86_64  1:1.2.24-5.el6_1                  base           207 k
         desktop-file-utils           x86_64  0.15-9.el6                        base            47 k
         ed                           x86_64  1.1-3.3.el6                       base            72 k
         eggdbus                      x86_64  0.6-3.el6                         base            91 k
         foomatic                     x86_64  4.0.4-1.el6_1.1                   base           251 k
         foomatic-db                  noarch  4.0-7.20091126.el6                base           980 k
         foomatic-db-filesystem       noarch  4.0-7.20091126.el6                base           4.4 k
         foomatic-db-ppds             noarch  4.0-7.20091126.el6                base            19 M
         ghostscript                  x86_64  8.70-11.el6_2.6                   updates        4.4 M
         ghostscript-fonts            noarch  5.50-23.1.el6                     base           751 k
         gstreamer                    x86_64  0.10.29-1.el6                     base           764 k
         gstreamer-plugins-base       x86_64  0.10.29-1.el6                     base           942 k
         gstreamer-tools              x86_64  0.10.29-1.el6                     base            23 k
         iso-codes                    noarch  3.16-2.el6                        base           2.4 M
         lcms-libs                    x86_64  1.19-1.el6                        base           100 k
         libIDL                       x86_64  0.8.13-2.1.el6                    base            83 k
         libXScrnSaver                x86_64  1.2.0-1.el6                       base            19 k
         libXfont                     x86_64  1.4.1-2.el6_1                     base           128 k
         libXv                        x86_64  1.0.5-1.el6                       base            21 k
         libfontenc                   x86_64  1.0.5-2.el6                       base            24 k
         libgudev1                    x86_64  147-2.40.el6                      base            59 k
         libmng                       x86_64  1.0.10-4.1.el6                    base           165 k
         libogg                       x86_64  2:1.1.4-2.1.el6                   base            21 k
         liboil                       x86_64  0.3.16-4.1.el6                    base           121 k
         libtheora                    x86_64  1:1.1.0-2.el6                     base           129 k
         libvisual                    x86_64  0.4.0-9.1.el6                     base           135 k
         libvorbis                    x86_64  1:1.2.3-4.el6_2.1                 updates        168 k
         mailx                        x86_64  12.4-6.el6                        base           234 k
         man                          x86_64  1.6f-29.el6                       base           263 k
         mesa-libGLU                  x86_64  7.11-3.el6                        base           201 k
         nvidia-graphics195.30-libs   x86_64  195.30-120.el6                    atrpms          13 M
         openjpeg-libs                x86_64  1.3-7.el6                         base            59 k
         pax                          x86_64  3.4-10.1.el6                      base            69 k
         phonon-backend-gstreamer     x86_64  1:4.6.2-20.el6                    base           125 k
         polkit                       x86_64  0.96-2.el6_0.1                    base           158 k
         poppler                      x86_64  0.12.4-3.el6_0.1                  base           557 k
         poppler-data                 noarch  0.4.0-1.el6                       base           2.2 M
         poppler-utils                x86_64  0.12.4-3.el6_0.1                  base            73 k
         portreserve                  x86_64  0.0.4-4.el6_1.1                   base            22 k
         qt                           x86_64  1:4.6.2-20.el6                    base           4.0 M
         qt-sqlite                    x86_64  1:4.6.2-20.el6                    base            50 k
         qt-x11                       x86_64  1:4.6.2-20.el6                    base            12 M
         qt3                          x86_64  3.3.8b-30.el6                     base           3.5 M
         redhat-lsb                   x86_64  4.0-3.el6.centos                  base            24 k
         redhat-lsb-graphics          x86_64  4.0-3.el6.centos                  base            12 k
         redhat-lsb-printing          x86_64  4.0-3.el6.centos                  base            11 k
         sgml-common                  noarch  0.6.3-32.el6                      base            43 k
         time                         x86_64  1.7-37.1.el6                      base            26 k
         tmpwatch                     x86_64  2.9.16-4.el6                      base            31 k
         xdg-utils                    noarch  1.0.2-17.20091016cvs.el6          base            58 k
         xml-common                   noarch  0.6.3-32.el6                      base           9.5 k
         xorg-x11-font-utils          x86_64  1:7.2-11.el6                      base            75 k
         xz                           x86_64  4.999.9-0.3.beta.20091007git.el6  base           137 k
         xz-lzma-compat               x86_64  4.999.9-0.3.beta.20091007git.el6  base            16 k

        Transaction Summary
        =============================================================================================
        Install      62 Package(s)

    Is it safe to continue and install all that, or could I break something already installed?
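    One hedged safety net for the "is it safe" worry: yum on CentOS 6 records each transaction, so an install of this size can usually be rolled back as a unit afterwards, package availability permitting:

        yum history list          # find the transaction ID of the Chrome install
        yum history undo <ID>     # remove everything that transaction pulled in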

  • How to set up stunnel so that Gmail can use my own SMTP server to send messages

    - by igorhvr
    I am trying to set up Gmail to send messages using my own SMTP server. I am doing this by running stunnel in front of a server that is not SSL-enabled. I am able to use my own SSL-enabled SMTP client against the server just fine. Unfortunately, however, Gmail seems unable to connect to my stunnel port: it simply closes the connection right after it is established, and I get a "SSL socket closed on SSL_read" in my server logs. On the Gmail side, I get a "We are having trouble authenticating with your other mail service. Please try changing your SSL settings. If you continue to experience difficulties, please contact your other email provider for further instructions." message. Any help / tips on figuring this out will be appreciated. My certificate is self-signed; could this perhaps be related to the problem I am experiencing? I pasted the entire SSL session (logs from my server) below.

    2011.01.02 16:56:20 LOG7[20897:3082491584]: Service ssmtp accepted FD=0 from 209.85.210.171:46858
    2011.01.02 16:56:20 LOG7[20897:3082267504]: Service ssmtp started
    2011.01.02 16:56:20 LOG7[20897:3082267504]: FD=0 in non-blocking mode
    2011.01.02 16:56:20 LOG7[20897:3082267504]: Option TCP_NODELAY set on local socket
    2011.01.02 16:56:20 LOG7[20897:3082267504]: Waiting for a libwrap process
    2011.01.02 16:56:20 LOG7[20897:3082267504]: Acquired libwrap process #0
    2011.01.02 16:56:20 LOG7[20897:3082267504]: Releasing libwrap process #0
    2011.01.02 16:56:20 LOG7[20897:3082267504]: Released libwrap process #0
    2011.01.02 16:56:20 LOG7[20897:3082267504]: Service ssmtp permitted by libwrap from 209.85.210.171:46858
    2011.01.02 16:56:20 LOG5[20897:3082267504]: Service ssmtp accepted connection from 209.85.210.171:46858
    2011.01.02 16:56:20 LOG7[20897:3082267504]: FD=1 in non-blocking mode
    2011.01.02 16:56:20 LOG6[20897:3082267504]: connect_blocking: connecting 127.0.0.1:25
    2011.01.02 16:56:20 LOG7[20897:3082267504]: connect_blocking: s_poll_wait 127.0.0.1:25: waiting 10 seconds
    2011.01.02 16:56:20 LOG5[20897:3082267504]: connect_blocking: connected 127.0.0.1:25
    2011.01.02 16:56:20 LOG5[20897:3082267504]: Service ssmtp connected remote server from 127.0.0.1:3701
    2011.01.02 16:56:20 LOG7[20897:3082267504]: Remote FD=1 initialized
    2011.01.02 16:56:20 LOG7[20897:3082267504]: Option TCP_NODELAY set on remote socket
    2011.01.02 16:56:20 LOG5[20897:3082267504]: Negotiations for smtp (server side) started
    2011.01.02 16:56:20 LOG7[20897:3082267504]: RFC 2487 not detected
    2011.01.02 16:56:20 LOG5[20897:3082267504]: Protocol negotiations succeeded
    2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): before/accept initialization
    2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 read client hello A
    2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 write server hello A
    2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 write certificate A
    2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 write certificate request A
    2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 flush data
    2011.01.02 16:56:20 LOG5[20897:3082267504]: CRL: verification passed
    2011.01.02 16:56:20 LOG5[20897:3082267504]: VERIFY OK: depth=2, /C=US/O=Equifax/OU=Equifax Secure Certificate Authority
    2011.01.02 16:56:20 LOG5[20897:3082267504]: CRL: verification passed
    2011.01.02 16:56:20 LOG5[20897:3082267504]: VERIFY OK: depth=1, /C=US/O=Google Inc/CN=Google Internet Authority
    2011.01.02 16:56:20 LOG5[20897:3082267504]: CRL: verification passed
    2011.01.02 16:56:20 LOG5[20897:3082267504]: VERIFY OK: depth=0, /C=US/ST=California/L=Mountain View/O=Google Inc/CN=smtp.gmail.com
    2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 read client certificate A
    2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 read client key exchange A
    2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 read certificate verify A
    2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 read finished A
    2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 write change cipher spec A
    2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 write finished A
    2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 flush data
    2011.01.02 16:56:20 LOG7[20897:3082267504]: 1 items in the session cache
    2011.01.02 16:56:20 LOG7[20897:3082267504]: 0 client connects (SSL_connect())
    2011.01.02 16:56:20 LOG7[20897:3082267504]: 0 client connects that finished
    2011.01.02 16:56:20 LOG7[20897:3082267504]: 0 client renegotiations requested
    2011.01.02 16:56:20 LOG7[20897:3082267504]: 1 server connects (SSL_accept())
    2011.01.02 16:56:20 LOG7[20897:3082267504]: 1 server connects that finished
    2011.01.02 16:56:20 LOG7[20897:3082267504]: 0 server renegotiations requested
    2011.01.02 16:56:20 LOG7[20897:3082267504]: 0 session cache hits
    2011.01.02 16:56:20 LOG7[20897:3082267504]: 0 external session cache hits
    2011.01.02 16:56:20 LOG7[20897:3082267504]: 0 session cache misses
    2011.01.02 16:56:20 LOG7[20897:3082267504]: 0 session cache timeouts
    2011.01.02 16:56:20 LOG6[20897:3082267504]: SSL accepted: new session negotiated
    2011.01.02 16:56:20 LOG6[20897:3082267504]: Negotiated ciphers: RC4-MD5 SSLv3 Kx=RSA Au=RSA Enc=RC4(128) Mac=MD5
    2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL socket closed on SSL_read
    2011.01.02 16:56:20 LOG7[20897:3082267504]: Socket write shutdown
    2011.01.02 16:56:20 LOG5[20897:3082267504]: Connection closed: 167 bytes sent to SSL, 37 bytes sent to socket
    2011.01.02 16:56:20 LOG7[20897:3082267504]: Service ssmtp finished (0 left)
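    Since the handshake itself completes (both "write finished" and "read finished" appear above) and Gmail only drops the link at the first application read, the self-signed certificate is the most likely suspect: Gmail's send-through-another-server feature is widely reported to require a server certificate that chains to a trusted CA. For reference, a minimal sketch of the kind of stunnel service section involved; the paths and names below are assumptions for illustration, not taken from the setup above:

    ; /etc/stunnel/stunnel.conf (hypothetical paths)
    cert = /etc/stunnel/mail.example.com.pem   ; ideally a CA-signed certificate, with intermediates concatenated in
    key = /etc/stunnel/mail.example.com.key

    [ssmtp]
    accept = 465
    connect = 127.0.0.1:25
    protocol = smtp   ; matches the "Negotiations for smtp (server side)" lines in the log

    A quick way to see exactly what Gmail sees is openssl s_client -connect yourhost:465 -showcerts; if the chain does not end at a trusted root, that would explain the disconnect.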

    Read the article

  • Ubuntu 14.04, OpenLDAP TLS problems

    - by larsemil
    So I have set up an OpenLDAP server using this guide here. It worked fine. But as I want to use sssd, I also need TLS to be working for LDAP. So I looked into and followed the TLS part of the guide, and I never got any errors; slapd started fine again. BUT: it does not seem to work when I try to use LDAP over TLS.

    root@server:~# ldapsearch -x -ZZ -H ldap://83.209.243.253 -b dc=daladevelop,dc=se
    ldap_start_tls: Protocol error (2)
    additional info: unsupported extended operation

    Turning up the debug level a few notches returns some more information:

    root@server:~# ldapsearch -x -ZZ -H ldap://83.209.243.253 -b dc=daladevelop,dc=se -d 5
    ldap_url_parse_ext(ldap://83.209.243.253)
    ldap_create
    ldap_url_parse_ext(ldap://83.209.243.253:389/??base)
    ldap_extended_operation_s
    ldap_extended_operation
    ldap_send_initial_request
    ldap_new_connection 1 1 0
    ldap_int_open_connection
    ldap_connect_to_host: TCP 83.209.243.253:389
    ldap_new_socket: 3
    ldap_prepare_socket: 3
    ldap_connect_to_host: Trying 83.209.243.253:389
    ldap_pvt_connect: fd: 3 tm: -1 async: 0
    ldap_open_defconn: successful
    ldap_send_server_request
    ber_scanf fmt ({it) ber:
    ber_scanf fmt ({) ber:
    ber_flush2: 31 bytes to sd 3
    ldap_result ld 0x7f25df51e220 msgid 1
    wait4msg ld 0x7f25df51e220 msgid 1 (infinite timeout)
    wait4msg continue ld 0x7f25df51e220 msgid 1 all 1
    ** ld 0x7f25df51e220 Connections:
    * host: 83.209.243.253 port: 389 (default) refcnt: 2 status: Connected
    last used: Fri Jun 6 08:52:16 2014
    ** ld 0x7f25df51e220 Outstanding Requests:
    * msgid 1, origid 1, status InProgress
    outstanding referrals 0, parent count 0
    ld 0x7f25df51e220 request count 1 (abandoned 0)
    ** ld 0x7f25df51e220 Response Queue: Empty
    ld 0x7f25df51e220 response count 0
    ldap_chkResponseList ld 0x7f25df51e220 msgid 1 all 1
    ldap_chkResponseList returns ld 0x7f25df51e220 NULL
    ldap_int_select
    read1msg: ld 0x7f25df51e220 msgid 1 all 1
    ber_get_next
    ber_get_next: tag 0x30 len 42 contents:
    read1msg: ld 0x7f25df51e220 msgid 1 message type extended-result
    ber_scanf fmt ({eAA) ber:
    read1msg: ld 0x7f25df51e220 0 new referrals
    read1msg: mark request completed, ld 0x7f25df51e220 msgid 1
    request done: ld 0x7f25df51e220 msgid 1
    res_errno: 2, res_error: <unsupported extended operation>, res_matched: <>
    ldap_free_request (origid 1, msgid 1)
    ldap_parse_extended_result
    ber_scanf fmt ({eAA) ber:
    ldap_parse_result
    ber_scanf fmt ({iAA) ber:
    ber_scanf fmt (}) ber:
    ldap_msgfree
    ldap_err2string
    ldap_start_tls: Protocol error (2)
    additional info: unsupported extended operation
    ldap_free_connection 1 1
    ldap_send_unbind
    ber_flush2: 7 bytes to sd 3
    ldap_free_connection: actually freed

    So no good information there either. In /var/log/syslog I get:

    Jun 6 08:55:42 master slapd[21383]: conn=1008 fd=23 ACCEPT from IP=83.209.243.253:56440 (IP=0.0.0.0:389)
    Jun 6 08:55:42 master slapd[21383]: conn=1008 op=0 EXT oid=1.3.6.1.4.1.1466.20037
    Jun 6 08:55:42 master slapd[21383]: conn=1008 op=0 do_extended: unsupported operation "1.3.6.1.4.1.1466.20037"
    Jun 6 08:55:42 master slapd[21383]: conn=1008 op=0 RESULT tag=120 err=2 text=unsupported extended operation
    Jun 6 08:55:42 master slapd[21383]: conn=1008 op=1 UNBIND
    Jun 6 08:55:42 master slapd[21383]: conn=1008 fd=23 closed

    If I portscan the host I get the following:

    Starting Nmap 6.40 ( http://nmap.org ) at 2014-06-06 08:56 CEST
    Nmap scan report for h83-209-243-253.static.se.alltele.net (83.209.243.253)
    Host is up (0.0072s latency).
    Not shown: 996 closed ports
    PORT STATE SERVICE
    22/tcp open ssh
    80/tcp open http
    389/tcp open ldap
    636/tcp open ldapssl

    But when I check the certs:

    root@master:~# openssl s_client -connect daladevelop.se:636 -showcerts -state
    CONNECTED(00000003)
    SSL_connect:before/connect initialization
    SSL_connect:unknown state
    140244859233952:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:177:
    ---
    no peer certificate available
    ---
    No client certificate CA names sent
    ---
    SSL handshake has read 0 bytes and written 317 bytes
    ---
    New, (NONE), Cipher is (NONE)
    Secure Renegotiation IS NOT supported
    Compression: NONE
    Expansion: NONE
    ---

    And I feel like I am clearly out in deep water, not knowing at all where to go from here. Any hints appreciated on what to do, or how to get better debug logging...

    EDIT: This is my config, dumped with slapcat from cn=config, and it does not mention anything about TLS at all. I have inserted my certinfo.ldif:

    root@master:~# cat certinfo.ldif
    dn: cn=config
    add: olcTLSCACertificateFile
    olcTLSCACertificateFile: /etc/ssl/certs/cacert.pem
    -
    add: olcTLSCertificateFile
    olcTLSCertificateFile: /etc/ssl/certs/daladevelop_slapd_cert.pem
    -
    add: olcTLSCertificateKeyFile
    olcTLSCertificateKeyFile: /etc/ssl/private/daladevelop_slapd_key.pem

    And when doing that I only got this as an answer:

    root@master:~# sudo ldapmodify -Y EXTERNAL -H ldapi:/// -f certinfo.ldif
    SASL/EXTERNAL authentication started
    SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
    SASL SSF: 0
    modifying entry "cn=config"

    So still no wiser.
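    The syslog lines are actually the giveaway: OID 1.3.6.1.4.1.1466.20037 is the StartTLS extended operation, and slapd only accepts it when it has a working TLS configuration, so "unsupported extended operation" means the olcTLS* attributes never took effect in cn=config. A possible way to check, sketched under the assumption of a standard Ubuntu cn=config layout:

    # do the three attributes actually exist in cn=config?
    ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config -LLL olcTLSCACertificateFile olcTLSCertificateFile olcTLSCertificateKeyFile
    # if they are missing, re-apply the LDIF and watch for an error after "modifying entry":
    ldapmodify -Y EXTERNAL -H ldapi:/// -f certinfo.ldif
    # slapd usually runs as the openldap user; confirm it can read the key file:
    sudo -u openldap cat /etc/ssl/private/daladevelop_slapd_key.pem > /dev/null && echo key readable

    If the attributes are present but slapd cannot read the key (a common failure on Ubuntu, often fixed by adding the openldap user to the ssl-cert group), slapd may still refuse StartTLS; restarting slapd after fixing permissions is worth a try.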

    Read the article

  • Installing .NET Framework 4 Client Profile breaks Windows Update

    - by Richard
    I have a Samsung NC-10 netbook with a fresh install of Windows 7 Home Premium 32-bit (it only has 2GB of RAM). If Microsoft .NET Framework 4 Client Profile is installed on it, Windows Update will always return error code 8024402F ("Windows Update encountered an unknown error"). As soon as I uninstall it, Windows Update works just fine again. Out of the four computers in this house, only this netbook has the problem. My question is: How can I get the .NET Framework 4 Client Profile installed on my netbook and continue to have a functioning Windows Update?

    ----- More information -----

    The hard-drive recently died on my netbook so I replaced it with a nice new SSD and did a fresh installation of Windows 7 Home Premium (SP1) - along with all the updates. At some point I found that, when I ran Windows Update, I was greeted with error code 8024402F ("Windows Update encountered an unknown error"). Looking in C:\Windows\WindowsUpdate.log, I saw the following issue:

    WARNING: ECP: Failed to validate cab file digest downloaded from http://download.windowsupdate.com/msdownload/update/software/dflt/2012/02/4913552_4a5c9563d1f58c77f30d0d5c9999e4b8bff3ab21.cab with error 0x80091007
    WARNING: ECP: This roundtrip contained some optimized updates which failed. New Update count = 0, Old Count = 3
    FATAL: ProcessCoreMetadata did not return any update to be committed
    WARNING: Sync of Updates: 0x8024402f
    WARNING: SyncServerUpdatesInternal failed: 0x8024402f

    When I downloaded the CAB from the URL listed and opened it, it contained a file called 4913552.txt. A search on Google suggested that it's related to Microsoft .NET Framework 4 Client Profile. Other people had reported problems with it breaking Windows Update, but they were running Windows XP.

    I tried the steps outlined on the Microsoft site for this error code, but it reported that there was nothing wrong. I also found this superuser question. I tried all the answers listed but none of them made any difference. My router doesn't block ActiveX, changing my internet settings in IE made no difference, assuming it was a corrupted update repository didn't do anything (except wipe my update history), my date and time were correct, switching to Google's DNS didn't work, and neither did disabling IPv6.

    Figuring that this update was corrupted, I repaired it and nothing changed. In desperation I un-installed it and Windows Update started working again! Brilliant! I then downloaded the full version from the Microsoft website, installed it and, thankfully, Windows Update continued to work just fine.

    A week later I turn on my netbook and Windows Update is broken again, with exactly the same error message and log entries as before. Repairing .NET Framework 4 Client Profile did nothing; removing it entirely solved the problem again.

    Thinking this might be some odd Windows installation issue, I formatted the hard-drive and re-installed Windows. Same problem as before: as soon as .NET Framework 4 Client Profile ended up on the netbook, Windows Update stopped working and reported error 8024402F. As soon as it was un-installed, everything worked just fine again.

    There are three other machines in this house and all of them have working Windows Update and this Client Profile. Does anyone know why it causes this netbook to break and, more importantly, how I can fix it?
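    One hedged observation: 0x80091007 is CRYPT_E_BAD_HASH ("the hash value is not correct"), so the downloaded cab is failing digest validation, which usually means something between this machine and download.windowsupdate.com is serving a stale or altered copy; the other three machines may simply be hitting a different cache or proxy path. These are generic checks, not a confirmed fix for this netbook:

    rem run from an elevated command prompt
    netsh winhttp show proxy
    rem if a proxy is listed here that the working machines do not use, clear it:
    netsh winhttp reset proxy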

    Read the article

  • Windows 7 inbuilt and 3rd party (de)fragmentation related queries

    - by Karan
    I have a pretty good idea of how files end up getting fragmented. That said, I just copied ~3,200 files of varying sizes (from a few KB to ~20GB) from an external USB HDD to an internal, freshly formatted (under Windows 7 x64), NTFS, 2TB, 5400RPM, WD, SATA, non-system (i.e. secondary) drive, filling it up 57%. Since it should have been very much possible for each file to have been stored in one contiguous block, I expected the drive to be fragmented not more than 1-2% at most after this rather lengthy exercise (unfortunately this older machine doesn't support USB 3.0).

    Windows 7's inbuilt defrag utility told me after a quick analysis that the drive was fragmented only 1% or so, which dovetailed neatly with my expectations. However, just out of curiosity I downloaded and ran the latest portable x64 version of Piriform's Defraggler, and was shocked to see the drive being reported as being ~85% fragmented! The portable version of Auslogics Disk Defrag also agreed with Defraggler, and both clearly expected to grind away for ~10 hours to completely defragment the drive.

    1) How in blazes could the inbuilt and 3rd party defrag utils disagree so badly? I mean, 10-20% variance is probably understandable, but 1% and 85% are miles apart! This Engineering Windows 7 blog post states:

    In Windows XP, any file that is split into more than one piece is considered fragmented. Not so in Windows Vista if the fragments are large enough – the defragmentation algorithm was changed (from Windows XP) to ignore pieces of a file that are larger than 64MB. As a result, defrag in XP and defrag in Vista will report different amounts of fragmentation on a volume. ...

    [Please read the entire post so the quote is not taken out of context.] Could it simply be that the 3rd party defrag utils ignore this post-XP change and continue to use analysis algos similar to those XP used?

    2) Assuming that the 3rd party utils aren't lying about the real extent of fragmentation (which Windows is downplaying post-XP), how could the files have even got fragmented so badly given they were just copied over afresh to an empty drive?

    3) If vastly differing analysis algos explain the yawning gap, which do I believe? I'm no defrag fanatic for sure, but 85% is enough to make me seriously consider spending 10 hours defragging this drive. On the other hand, 1% reported by Windows' own defragger clearly implies that there is no cause for concern and defragging would actually have negative consequences (as per the post). Is Windows' assumption valid and should I just let it be, or will there be any noticeable performance gains after running one of the 3rd party utils for 10 hours straight?

    4) I see that out of the box Windows 7 defrag is scheduled to run weekly. Does anyone know whether it defrags every single time, or only if its analysis reveals a fragmentation percentage over a set threshold? If the latter, what is this threshold and can it be changed, maybe via a Registry edit?

    Thanks for reading through (my first query on this wonderful site!) and for any helpful replies. Also, if you're answering question #3, please keep in mind that any speed increases post defragging with 3rd party utils vis-à-vis Windows' inbuilt program should not include pre-Vista (preferably pre-Win7) examples. Further, examples of programs that made your system boot faster won't help in this case, since this is a non-system drive (although one that'll still be used daily).
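    For questions 1 and 4, a couple of hedged, read-only checks (flags per Windows 7's built-in defrag.exe and Task Scheduler; run from an elevated prompt):

    rem Windows' own analysis, with verbose statistics for the volume:
    defrag D: /A /V
    rem inspect the out-of-the-box weekly defrag task and when it last ran:
    schtasks /query /tn "\Microsoft\Windows\Defrag\ScheduledDefrag" /v

    The verbose analysis prints the fragmentation figures Windows actually uses, which makes the comparison against the third-party numbers more concrete; if the built-in tool counts far fewer fragments, the 64MB rule quoted above is the likely explanation for the gap.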

    Read the article

  • Can I make my drives visible and change their partition type without losing my data?

    - by user165408
    I have made a lot of mistakes and now I cannot see my hard disk, nor can I start my operating system on my laptop. All my passwords and important files are on the HDD, without any backup. I followed this course of action:

    1. Changed my hard disk partitions to dynamic, just to get a 5th partition. (1st mistake)
    2. Decreased the partitions to 4 again.
    3. Backed up the operating system from the 4th to the 3rd partition with Norton Ghost.
    4. Booted from a live CD for Windows XP.
    5. Formatted the 4th partition and moved all my important data from the 1st and 2nd partitions to the 4th partition.
    6. Deleted the 1st and 2nd partitions and created 1 partition from half of the empty space. So I had just 3 partitions, with empty space between the 1st and 2nd partitions.
    7. Tried to install Windows 8 to the first partition, but it did not allow it because the partition is dynamic. It did not allow installing to the other partitions either.
    8. Tried to install Windows XP to the 1st partition, but it said that if I continued I could not use the other drives, so I backed out of installing it.
    9. Booted from the Windows XP live CD, then grew the 1st partition into just under 400 MB of the empty space. I thought it would become one adjacent partition, but it was shown as 2 partitions. In My Computer I see just 3 drives.
    10. Using Norton Ghost I recovered my OS to the 1st partition. (2nd mistake: it was on the 4th partition originally.)
    11. Booted from the Windows XP live CD; I tried to install bcdedit into the live CD environment but it did not work.
    12. Then I tried to install EaseUS Partition Master Home Edition. It installed with errors; when I started it, it showed me an error like "there is no hard disk". I looked in My Computer and my drives were not there.
    13. Booted from the Norton Ghost CD, and it did not show me my drives either, though before I was able to see them. I checked the partition numbers shown by the Norton Ghost utility and they still have the same numbers, so I ought to see my drives, but I cannot see them now.

    My hard disk is now shown as external dynamic, so I cannot see any drive in My Computer in the live Windows XP. There are two options: the first is to import the external disk and the second is to convert the disk to basic. Will they delete my data? I fear booting from CDs like the Windows XP live CD, the Norton Ghost CD, and the operating system CD/DVD, because they may overwrite a few MB of their data onto my data. These recovery tools already exist on the Windows XP live CD (The Ultimate Boot CD for Windows); can any of them help me?

    CompuAppa SwissKnife V3
    DBXtract
    Disk Investigator
    Fab's AutoBackup 2.0
    FileRecovery
    Floppy Repair
    Free Undelete
    Handy Recovery
    Recovery Manager
    Restoration
    Restoration Help File by UBCD4Win
    UnChk
    Unstoppable Copier

    Finally: How can I make my drives visible again without losing my data? How can I convert my dynamic partitions to basic without losing my data?
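    As far as I know, the two options do very different things: importing a dynamic disk that shows up as foreign/external is designed to preserve the volumes and data on it, while converting a dynamic disk back to basic with the built-in Windows tools requires deleting its volumes first (some third-party tools claim to convert in place). Before choosing either, a hedged, read-only look at what Windows currently sees costs nothing; none of these diskpart commands change the disk:

    diskpart
    DISKPART> list disk
    DISKPART> select disk 0
    DISKPART> detail disk
    DISKPART> list volume
    DISKPART> exit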

    Read the article

  • dpkg: error processing /var/cache/apt/archives/python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--unpack)

    - by udo
    I had an issue (Question 199582) which was resolved. Unfortunately I am stuck at this point now. Running:

    root@X100e:/var/cache/apt/archives# apt-get dist-upgrade
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Calculating upgrade... Done
    The following NEW packages will be installed:
      file libexpat1 libmagic1 libreadline6 libsqlite3-0 mime-support python python-minimal python2.6 python2.6-minimal readline-common
    0 upgraded, 11 newly installed, 0 to remove and 0 not upgraded.
    Need to get 0B/5,204kB of archives.
    After this operation, 19.7MB of additional disk space will be used.
    Do you want to continue [Y/n]? Y
    (Reading database ... 6108 files and directories currently installed.)
    Unpacking python2.6-minimal (from .../python2.6-minimal_2.6.6-5ubuntu1_i386.deb) ...
    new installation of python2.6-minimal; /usr/lib/python2.6/site-packages is a directory which is expected a symlink to /usr/local/lib/python2.6/dist-packages.
    please find the package shipping files in /usr/lib/python2.6/site-packages and file a bug report to ship these in /usr/lib/python2.6/dist-packages instead
    aborting installation of python2.6-minimal
    dpkg: error processing /var/cache/apt/archives/python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--unpack):
     subprocess new pre-installation script returned error exit status 1
    Errors were encountered while processing:
     /var/cache/apt/archives/python2.6-minimal_2.6.6-5ubuntu1_i386.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    results in the above error. Running:

    root@X100e:/var/cache/apt/archives# dpkg -i python2.6-minimal_2.6.6-5ubuntu1_i386.deb
    (Reading database ... 6108 files and directories currently installed.)
    Unpacking python2.6-minimal (from python2.6-minimal_2.6.6-5ubuntu1_i386.deb) ...
    new installation of python2.6-minimal; /usr/lib/python2.6/site-packages is a directory which is expected a symlink to /usr/local/lib/python2.6/dist-packages.
    please find the package shipping files in /usr/lib/python2.6/site-packages and file a bug report to ship these in /usr/lib/python2.6/dist-packages instead
    aborting installation of python2.6-minimal
    dpkg: error processing python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--install):
     subprocess new pre-installation script returned error exit status 1
    Errors were encountered while processing:
     python2.6-minimal_2.6.6-5ubuntu1_i386.deb

    results in the above error. Running:

    root@X100e:/var/cache/apt/archives# dpkg -i --force-depends python2.6-minimal_2.6.6-5ubuntu1_i386.deb
    (Reading database ... 6108 files and directories currently installed.)
    Unpacking python2.6-minimal (from python2.6-minimal_2.6.6-5ubuntu1_i386.deb) ...
    new installation of python2.6-minimal; /usr/lib/python2.6/site-packages is a directory which is expected a symlink to /usr/local/lib/python2.6/dist-packages.
    please find the package shipping files in /usr/lib/python2.6/site-packages and file a bug report to ship these in /usr/lib/python2.6/dist-packages instead
    aborting installation of python2.6-minimal
    dpkg: error processing python2.6-minimal_2.6.6-5ubuntu1_i386.deb (--install):
     subprocess new pre-installation script returned error exit status 1
    Errors were encountered while processing:
     python2.6-minimal_2.6.6-5ubuntu1_i386.deb

    is not able to fix this. Any clues how to fix this?
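    The preinst script is aborting for exactly the reason it prints: it expects /usr/lib/python2.6/site-packages to be a symlink to /usr/local/lib/python2.6/dist-packages, and on this system it is a real directory. A hedged sketch of the commonly suggested workaround (first check what is inside the directory so nothing is lost):

    ls -la /usr/lib/python2.6/site-packages
    # move the stray directory aside and create the symlink the preinst script expects
    mv /usr/lib/python2.6/site-packages /usr/lib/python2.6/site-packages.old
    ln -s /usr/local/lib/python2.6/dist-packages /usr/lib/python2.6/site-packages
    apt-get -f install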

    Read the article

  • Vista 64-bit, DISK BOOT FAILURE

    - by weka
    So I have this Acer Aspire AX3200-U3600A with Windows Vista (64-bit). Every night I turn it off and turn it back on in the morning. Around three weeks ago, I did a fresh factory reimage. Good as new. Then around two days ago, when I turned it on, I noticed it was running extremely slow. As in, it would often freeze up while I had multiple applications open, when it usually never froze up. So I decided to restart my computer. Big mistake. My computer froze right after I clicked shut down. I waited a while. Nothing. Waited some minutes. Nope. I decided to shut it down by pressing the power button.

    Here is where the problems begin. When I turned it back on, I saw the Windows logo and loading bar and then it loaded to black. I turned it off again forcefully by the power button, and then once more... then I got:

    AMD Data Change... Update New Data to DMI!

    then later the screen clears and I get:

    AHCI Option ROM BIOS Revision: 01.05.92 Date: 02-19-2008
    Copyright (c) 2006-2008 Phoenix Technologies, LTD
    Port 01: Reset Port Error!!
    Port 02:

    then the screen clears again, but this time this loads from the bottom:

    Nvidia Boot Agent 249.0542 (copyright stuff... blah blah)
    PXE-E61: Media test failure, check cable.
    PXE-M0F: Exiting Nvidia Boot Agent
    DISK BOOT FAILURE, INSERT SYSTEM DISK AND PRESS ENTER.

    So I try to go into Safe Mode. Well, first of all it doesn't load as fast. After it loads disk.sys from windows/drivers, it will wait a while (2-3 mins) THEN load. However, it loads the Acer eRecovery Management Tool. I have three options: Reset computer to factory default, Restore computer from user's backup, or Exit. However, the top two options are gray and disabled, whereas the Exit is in blue and definitely clickable. So obviously Safe Mode is not there...

    A strong thing to note: In the beginning, when all of this started, I did a Boot Windows Normal from pressing F8 and I got to my desktop! It logged me in. I could see the icons on my files. However my desktop was extremely slow, as in when I clicked on the Start menu, it would wait a while, then load up the menu with JUST the gradient, no text or icons... so as you can see... it saw my HDD?

    Also, before anyone says: I have NO USB plugged in. My mouse and keyboard are not USB inputs, I assure you. And this came without a recovery CD, AND when I went into the BIOS to change the BOOT ORDER, I did NOT see a CD-ROM option. And when I tried pressing ALT+F10 to get into Acer eRecovery Management, the top two options were disabled as well.

    But sometimes on start-up I get:

    Windows has encountered a problem communicating with a device connected to your computer. This error can be caused by unplugging a removable storage device such as an external USB drive while the device is in use, or by faulty hardware such as a hard drive or CD-ROM drive that is failing. Make sure any removable storage is properly connected and then restart your computer. If you continue to receive this error message, contact the hardware manufacturer.
    Status: 0xc00000e9
    Info: An unexpected I/O error has occurred.

    Then I tried Last Known Good Configuration Settings; that gives me a BSOD. What should I do?
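    The combination of "Port 01: Reset Port Error!!", the intermittent 0xc00000e9 I/O error, and the disk dropping out of the boot order points at a failing hard disk or a bad SATA connection rather than a Windows problem. With no optical drive option in the BIOS, a Linux live USB stick is one hedged way to check the drive's health before trusting it further (the device name below is an assumption; it may differ):

    # from a Linux live USB session (smartmontools assumed installed or installable)
    sudo smartctl -H /dev/sda    # overall SMART health verdict
    sudo smartctl -A /dev/sda    # watch Reallocated_Sector_Ct and Current_Pending_Sector

    If SMART reports a failing verdict or those counters are climbing, the priority becomes copying the data off (for example with ddrescue) and replacing the disk.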

    Read the article

  • Setting environment variables in OS X

    - by Percival Ulysses
    Despite the warning that questions that can be answered are preferred, this question is more a request for comments. I apologize for this, but I feel that it is valuable nonetheless.

    The problem of setting up environment variables such that they are available for GUI applications has been around since the dawn of Mac OS X. The solution with ~/.MacOSX/environment.plist never satisfied me because it was not reliable, and bash-style globbing wasn't available. Another solution is the use of Login Hooks with a suitable shell script, but these are deprecated. The Apple-approved way to get the functionality provided by login hooks is the use of Launch Agents. I provided a launch agent that is located in /Library/LaunchAgents/:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <key>Label</key>
        <string>user.conf.launchd</string>
        <key>Program</key>
        <string>/Users/Shared/conflaunchd.sh</string>
        <key>ProgramArguments</key>
        <array>
            <string>~/.conf.launchd</string>
        </array>
        <key>EnableGlobbing</key>
        <true/>
        <key>RunAtLoad</key>
        <true/>
        <key>LimitLoadToSessionType</key>
        <array>
            <string>Aqua</string>
            <string>StandardIO</string>
        </array>
    </dict>
    </plist>

    The real work is done in the shell script /Users/Shared/conflaunchd.sh, which reads ~/.conf.launchd and feeds it to launchctl:

    #! /bin/bash
    #filename="$1"
    filename="$HOME/.conf.launchd"
    if [ ! -r "$filename" ]; then
        exit
    fi
    eval $(/usr/libexec/path_helper -s)
    while read line; do
        # skip lines that only contain whitespace or a comment
        if [ ! -n "$line" -o `expr "$line" : '#'` -gt 0 ]; then continue; fi
        eval launchctl $line
    done <"$filename"
    exit 0

    Notice the call of path_helper to get PATH set up right. Finally, ~/.conf.launchd looks like this:

    setenv PATH ~/Applications:"${PATH}"
    setenv TEXINPUTS .:~/Documents/texmf//:
    setenv BIBINPUTS .:~/Documents/texmf/bibtex//:
    setenv BSTINPUTS .:~/Documents/texmf/bibtex//:
    # Locale
    setenv LANG en_US.UTF-8

    These are launchctl commands; see its manpage for further information. Works fine for me (I should mention that I'm still a Snow Leopard guy); GUI applications such as texstudio can see my local texmf tree.

    Things that can be improved: The shell script has a #filename="$1" in it. This is not accidental, as the file name should be fed to the script by the launch agent as an argument, but that doesn't work. It is possible to put the script in the launch agent itself. I am not sure how secure this solution is, as it uses eval with user-provided strings. It should be mentioned that Apple intended a somewhat similar approach by putting stuff in ~/launchd.conf, but it is currently unsupported as of this date and OS (see the manpage of launchd.conf). I guess that things like globbing would not work as they do in this proposal.

    Finally, I would mention the sources I used as information on Launch Agents, but StackExchange doesn't let me [1], [2], [3]. Again, I am sorry that this is not a real question; I still hope it is useful.
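    A hedged way to confirm the agent actually ran in the current Aqua session (subcommand names per launchctl's manpage; not specific to this setup):

    launchctl list | grep user.conf.launchd   # the agent's label should appear in the list
    launchctl getenv PATH                     # should show the value set via setenv above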

    Read the article

  • x11vnc working in Ubuntu 10.10

    - by pablorc
    I'm trying to start x11vnc on Ubuntu 10.10 (my server is on Amazon EC2), but I get the following error:

    $ sudo x11vnc -forever -usepw -httpdir /usr/share/vnc-java/ -httpport 5900 -auth /usr/sbin/gdm
    25/11/2010 13:29:51 passing arg to libvncserver: -httpport
    25/11/2010 13:29:51 passing arg to libvncserver: 5900
    25/11/2010 13:29:51 -usepw: found /home/ubuntu/.vnc/passwd
    25/11/2010 13:29:51 x11vnc version: 0.9.10 lastmod: 2010-04-28 pid: 3504
    25/11/2010 13:29:51 XOpenDisplay(":0.0") failed.
    25/11/2010 13:29:51 Trying again with XAUTHLOCALHOSTNAME=localhost ...
    25/11/2010 13:29:51 ***************************************
    25/11/2010 13:29:51 *** XOpenDisplay failed (:0.0)

    *** x11vnc was unable to open the X DISPLAY: ":0.0", it cannot continue.
    *** There may be "Xlib:" error messages above with details about the failure.

    Some tips and guidelines:

    ** An X server (the one you wish to view) must be running before x11vnc is started: x11vnc does not start the X server. (however, see the -create option if that is what you really want).

    ** You must use -display <disp>, -OR- set and export your $DISPLAY environment variable to refer to the display of the desired X server.
    - Usually the display is simply ":0" (in fact x11vnc uses this if you forget to specify it), but in some multi-user situations it could be ":1", ":2", or even ":137". Ask your administrator or a guru if you are having difficulty determining what your X DISPLAY is.

    ** Next, you need to have sufficient permissions (Xauthority) to connect to the X DISPLAY. Here are some Tips:
    - Often, you just need to run x11vnc as the user logged into the X session. So make sure to be that user when you type x11vnc.
    - Being root is usually not enough because the incorrect MIT-MAGIC-COOKIE file may be accessed. The cookie file contains the secret key that allows x11vnc to connect to the desired X DISPLAY.
    - You can explicitly indicate which MIT-MAGIC-COOKIE file should be used by the -auth option, e.g.:
      x11vnc -auth /home/someuser/.Xauthority -display :0
      x11vnc -auth /tmp/.gdmzndVlR -display :0
      you must have read permission for the auth file. See also '-auth guess' and '-findauth' discussed below.

    ** If NO ONE is logged into an X session yet, but there is a greeter login program like "gdm", "kdm", "xdm", or "dtlogin" running, you will need to find and use the raw display manager MIT-MAGIC-COOKIE file. Some examples for various display managers:
      gdm: -auth /var/gdm/:0.Xauth
           -auth /var/lib/gdm/:0.Xauth
      kdm: -auth /var/lib/kdm/A:0-crWk72
           -auth /var/run/xauth/A:0-crWk72
      xdm: -auth /var/lib/xdm/authdir/authfiles/A:0-XQvaJk
      dtlogin: -auth /var/dt/A:0-UgaaXa
    Sometimes the command "ps wwwwaux | grep auth" can reveal the file location. Starting with x11vnc 0.9.9 you can have it try to guess by using: -auth guess (see also the x11vnc -findauth option.) Only root will have read permission for the file, and so x11vnc must be run as root (or copy it). The random characters in the filenames will of course change and the directory the cookie file resides in is system dependent.
    See also: http://www.karlrunge.com/x11vnc/faq.html

    I've already tried some -auth options but the error persists. I have gdm running. Thank you in advance.
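    Two things stand out here. First, -auth expects an Xauthority cookie file, and /usr/sbin/gdm is the gdm binary, not a cookie; the help text above shows the kind of path that is meant (e.g. /var/lib/gdm/:0.Xauth). Second, on a stock EC2 server there is often no X server running at all, in which case XOpenDisplay will fail no matter which -auth file is used. A hedged sketch of both avenues (package name per Ubuntu; not verified on this particular instance):

    # if no X server exists, let x11vnc create a virtual one (needs Xvfb installed):
    sudo apt-get install xvfb
    x11vnc -create -forever -usepw
    # if gdm really is running, find its cookie or let x11vnc guess:
    ps wwwwaux | grep auth
    sudo x11vnc -display :0 -auth guess -forever -usepw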

    Read the article

  • Network update solutions for a company of ~20 (5 local, 15 remote)?

    - by Margaret
    Hi all. This is probably going to be a bit up in the air, because we're still in the "reaching towards solutions" phase, but I figured I'd see what you guys had to say. Plus I honestly know very little about systems and what is good and bad practice.

    My organisation has always more or less worked on the concept of local machines; since it primarily employed contractors who were working from home, each of those people was largely responsible for their own machine and backup procedures and the like. We're now expanding, though we're still reasonably small (we're up to about 20 staff members). Most people still work remotely, but we have a central office where about five people are working. We're getting large enough that we're starting to think it would be a good idea to have a central file server, and things like that; if someone gets hit by a bus, we want someone else to know where to look for the files to continue their work.

    A lot of the people who work for us remotely work on projects for other companies as well, so I don't want to force them to log in to our server whenever they're on a network. But I do want to make connecting as painless as possible, to improve utilisation.

    The other thing is that we're getting more people who would like to remote into the office server and do their work there. Our current remote connection application is an SSH install that allows people access to the network; the problem is, it's a black box to me, and I've never understood how to even connect to it (despite supposedly being de facto sysadmin). Thus far I've been able to bounce questions about how to get it working to the guy who does know it well, but he's leaving the company soon. So we probably need a solution for this that I actually understand. We were knocking around the idea of implementing a VPN with some form of remote desktop, and someone mentioned that this was largely a matter of purchasing a router capable of it; I'm not sure of the truth of that statement.

    This is what we have in the office:

    - Two shiny new i7 servers, each running Windows Server 2008. The precise eventual layout is still being debated a little, but the current suggestion is that one does the primary database crunching, while the other is a warm backup of the databases, along with running Reporting Services. They currently have SQL Server 2008 installed on them, which is being connected to via the 'sa' account. We're hoping to make each person use their own account (preferably one tied to the 'central' password we set up, so we can use Windows Authentication).
    - An older server, running XP Pro, that we are currently using as a test bed for a project that requires access to older versions of software. This machine is also being used to take backups, but I'm thinking of moving that functionality elsewhere.
    - A spare desktop from a guy who left the company (XP Pro). We're thinking of bumping up the hard disk space and using it as the magical file server that's going to solve everything.
    - Assorted desktops, laptops, etc., at least one for each person in the office (a mix of Win XP and Win 7; occasionally a person who normally works remotely might drop in to the office and bring a laptop bearing Vista, but it's pretty rare).

    All are set up as local user accounts at the moment; I don't know if that's the best arrangement. Purchasing more hardware is not a big problem, but we figure we might as well make use of what we've got first.

    Is Active Directory a big magic wand that's going to solve all the world's problems? Is there some other arrangement we should be looking to instead?

    Read the article

< Previous Page | 185 186 187 188 189 190 191 192 193 194 195 196  | Next Page >