Search Results

Search found 22480 results on 900 pages for 'internet archive'.

  • Multiple Homed Windows 2008 Server / Windows 7 Client

    - by Daniel Scott
    I have a small Windows 2008 network with some Windows 7 clients. The clients are both laptops with docking stations, and I would like them to communicate with the Windows 2008 server (for file sharing) through the wired network while they're docked. Internet connectivity for all machines (clients and server) is via a wireless LAN, so the wireless adapter in the Windows 7 clients stays active while they're docked. When the laptops are un-docked, it would be nice to still be able to contact the Windows 2008 server for print sharing (and slower file sharing) - hence the server also being on the wireless LAN. The Windows 2008 server is running Active Directory, DHCP and DNS. It controls DHCP leases on the wired network and holds the DNS records for "myserver.mycompany.local", which is what the file-sharing clients connect to. Ideally I'd like the DNS records to return the wired IP first, so that this is the address the laptops will attempt initially - but there doesn't seem to be a way to do that? At present the server's IP on the wireless LAN comes out of an nslookup above the wired LAN IP.

    The multi-homing works perfectly - but in the wrong order! Switch on the wireless LAN and ping myserver, and it goes to the wireless IP. Disable the wireless on the client and do the same ping again, and after a couple of seconds it starts pinging the wired address. Does anyone have any suggestions on how to make this work in a predictable order - or even whether it can work at all?

    Alternative 1? If it can't work, would this work instead: remove the wireless adapter from the server, put a wireless router/bridge on the wired network (set up to route to/from the wireless LAN's subnet), then configure the clients with two routes to the (now) single IP of the server, with metrics favouring direct communication over the wired LAN first?

    Alternative 2? Should I instead single-home the laptops so all of their connectivity is via the wired LAN while they're docked (and route via the Windows 2008 server - or a dedicated wireless bridge/router)? My concern here is that I'd like undocking to be seamless - if the clients are in the middle of downloading something from the internet, I wouldn't want whatever they're doing interrupted as they switch IP addresses onto the wireless network. Perhaps this isn't the case and I'm concerned over nothing? Any thoughts? :)

    UPDATE: I seem to have cracked it (at least the DNS entries come out in the order I hope for, and pinging the server with various combinations of wired, wireless and both interfaces enabled uses the IP I want). I set the binding order of the NICs on the server (which is acting as Domain Controller, DHCP and DNS server) so that the wired NIC is before the wireless adapter. (Start -- type "Network Interfaces" -- select "View Network Connections" -- press Alt to show the classic dropdown menus -- Advanced -- Advanced Settings.) Now an nslookup (from the client) of the server's hostname returns the wired IP first, followed by the wireless IP, and the wired IP seems to be used whenever it's contactable. Incidentally, the metrics on the wired and wireless routes (on the client) also favour the wired LAN (based on Windows' automatically assigned metrics) - but this was always the case, even when I was having trouble getting the wired IP favoured. I'm not entirely sure if this is coincidence - or if a DNS server running on Windows, handing back IP addresses for itself, does actually take the binding order of its own network interfaces into account?
    It would be interesting to hear from someone who can confirm or deny that (or confirm that the binding order on the server plays a role for some other reason).

  • How to transfer data between two networks efficiently

    - by Tono Nam
    I would like to transfer files between two places over the internet. Right now I have a VPN and I am able to browse, download and transfer files, so my question is not really how to transfer the files; instead, I would like to use the most efficient approach, because the two places constantly share a lot of data. The reason I want to get rid of the VPN is that it is too slow: high upload speed is very expensive or impossible to get on residential connections, so I would like to use a different approach.

    I was thinking about using a program such as http://www.dropbox.com. The problem with Dropbox is that the free tier only gives you 2 GB of storage. I think the plans they offer are OK, and I might be willing to pay to get that increase in speed, but I am concerned about the transfer speed: Dropbox uploads the file to their server, then sends it from the server to the other location. I would like it even faster, lol.

    Anyway, I was thinking: why not create a program myself? This is the algorithm I had in mind - let me know if it sounds too crazy (remember, my goal is to transfer files as fast as possible). Things I would use in this algorithm:

        A server on the internet called S (it has fast download and upload speeds; I pay to host a website and some services there, and I want to take advantage of it)
        Client A at location 1
        Client B at location 2

    So let's say 20 large files are created at location 1 and need to be transferred to location 2. Client A compresses the files with the highest compression ratio possible, then starts sending data via UDP to client B. Because I am using UDP, I include a sequence number in each packet. Server S helps speed things up: for example, every time a packet is lost, server S informs client A that it needs to resend that packet. I think this approach will increase the transfer rate.

    I do not know whether it is possible to start sending data while it is still being compressed, or to start decompressing data before all of it has been received. Maybe it would be faster to start sending the files right away without compressing them. If I knew I would always be sending large text files, I would obviously use compression, but I need this as a general algorithm. So I guess my questions are: could using UDP instead of TCP increase performance, with an extra server keeping track of lost packets? And how should I compress files before sending? Compressing a 1 GB file at the highest compression ratio takes about an hour - I would like to take advantage of that time by sending the data while it is being compressed.
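
    Incidentally, streaming the compression is exactly what a Unix pipeline gives you: tar emits the archive as a stream, gzip compresses it on the fly, and ssh carries it to the other side, so compression and transfer overlap instead of running one after the other. A minimal sketch, assuming SSH access between the two locations (hostnames and paths are placeholders):

        # on client A: compress while sending; no finished .tar.gz ever sits on disk
        tar -cf - /data/outgoing | gzip -1 | ssh user@location2 'gunzip | tar -xf - -C /data/incoming'

        # for repeated syncs of mostly-unchanged data, rsync adds delta transfer
        # on top of on-the-fly compression (-z)
        rsync -avz /data/outgoing/ user@location2:/data/incoming/

    gzip -1 trades ratio for speed, which usually wins when the network, not the CPU, is the bottleneck.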

  • Zero sized tar.gz file found inside a tar.gz file

    - by PavanM
    My current directory contains a single file, like this:

        $ ls -l
        -rw-r--r-- 1 root staff 8 May 28 09:10 pavan

    Now I want to tar and gzip this file, like so:

        $ tar -cvf - * 2>/dev/null | gzip -vf9 > pavan.tar.gz 2>/dev/null

    (I am aware I am creating the zipped file in the same directory as the original file.) When I run the above tar/gzip command around 20 times, a few times I observe that the final tarred and zipped pavan.tar.gz contains a ZERO-sized pavan.tar.gz file. I am not sure how this zero-sized file is getting into the archive. Note: I am NOT running the tar/gzip command on an already existing tar.gz file; I always make sure that the directory has only one file before running the command. On googling, as described here, I suspected that the tar.gz being created was also becoming part of the file being archived. But in my case, gzip is the one creating the final file, and by the time gzip runs, tar should be done tarring. This is happening on AIX, but I've used the Linux tag too, to draw more attention, as I guess the problem is platform independent.
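
    For what it's worth, the most likely culprit is a race in the pipeline itself: the shell creates pavan.tar.gz (via the > redirection) at roughly the same moment tar expands the * glob, so on some runs the glob sees the brand-new, still-empty output file and tar archives it. Taking the output file out of the glob's reach removes the race entirely; a hedged sketch using the same names as above:

        # option 1: exclude the output file explicitly (GNU tar syntax; AIX's
        # native tar may lack --exclude)
        tar -cf - --exclude='pavan.tar.gz' * 2>/dev/null | gzip -9 > pavan.tar.gz

        # option 2: write the archive outside the directory being archived
        tar -cf - * 2>/dev/null | gzip -9 > ../pavan.tar.gz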

  • How to install Subversion on a 1&1 server from Windows?

    - by Miles M.
    I would like to start using Unfuddle for my project on a 1&1 server. I have never used Subversion or source control before, so I read a lot of documentation about it, but each time I get lost at the very beginning: I've downloaded the latest version of Subversion, but every tutorial describes a different procedure. First, I saw on a lot of tutorials that you have to enter command lines. Is that ONLY for Linux? Like here: http://chwalisz.org/2007/08/05/subversion-on-11-shared-hosting/

    I also found something completely different on some websites; I think (correct me if I'm wrong) these are the Windows tutorials, deeply different from the Linux one:

        http://www.codinghorror.com/blog/2008/04/setting-up-subversion-on-windows.html
        http://geekswithblogs.net/emanish/archive/2006/06/14/81905.aspx
        http://better-scm.shlomifish.org/subversion/Svn-Win32-Inst-Guide.html

    And I don't understand: Do I still have to put the Subversion files on the server? Do I have to install Apache? Where - on my computer or on my server? I'm working with WampServer, so I think I already have Apache installed, right? When they say it is for Windows, do they mean Windows servers or your own OS? Because my servers are on Linux. How could I install Subversion on a 1&1 Linux server from my Windows 7 computer? Thanks - that's a lot of questions, but it's really messy in my mind and I can't find anything clear.
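
    Two hedged observations. First, Unfuddle is a hosted service: the repository lives on their servers, so for that setup you may only need an SVN client (e.g. TortoiseSVN) on your Windows machine, with nothing installed on the 1&1 server at all. Second, if you do want a repository on the Linux server itself, the usual first step is to log in over SSH from Windows (e.g. with PuTTY) and see whether Subversion is already installed - many shared hosts ship it. A sketch, with the hostname as a placeholder:

        # log in to the 1&1 server from your Windows machine
        ssh user@your-1and1-host.example
        # check whether the Subversion binaries are already present
        which svn svnadmin || echo "Subversion not installed"
        # if they are, creating a repository in your home directory is one command:
        svnadmin create ~/svnrepo
        # ...which a Windows client such as TortoiseSVN can then reach as:
        #   svn+ssh://user@your-1and1-host.example/home/user/svnrepo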

  • How to access remote LAN machines through an IPsec / xl2tpd VPN (maybe iptables related)

    - by Simon
    I'm trying to set up an IPsec / xl2tpd VPN for our office, and I'm having some problems accessing the remote local machines after connecting to the VPN. I can connect, and I can browse internet sites through the VPN, but as said, I'm unable to connect to, or even ping, the local machines. My network setup is something like this:

        INTERNET --- eth0 [ ROUTER / VPN ] eth2 --- LAN

    These are some traceroutes from behind the VPN:

        traceroute to google.com (173.194.78.94), 64 hops max, 52 byte packets
         1  192.168.1.80 (192.168.1.80)  74.738 ms  71.476 ms  70.123 ms
         2  10.35.192.1 (10.35.192.1)  77.832 ms  77.578 ms  77.865 ms
         3  10.47.243.137 (10.47.243.137)  78.837 ms  85.409 ms  76.032 ms
         4  10.47.242.129 (10.47.242.129)  78.069 ms  80.054 ms  77.778 ms
         5  10.254.4.2 (10.254.4.2)  86.174 ms  10.254.4.6 (10.254.4.6)  85.687 ms  10.254.4.2 (10.254.4.2)  85.664 ms

        traceroute to 192.168.1.3 (192.168.1.3), 64 hops max, 52 byte packets
         1  * * *
         2  * traceroute: sendto: No route to host
            traceroute: wrote 192.168.1.3 52 chars, ret=-1
            * traceroute: sendto: Host is down
            traceroute: wrote 192.168.1.3 52 chars, ret=-1
            * traceroute: sendto: Host is down
         3  traceroute: wrote 192.168.1.3 52 chars, ret=-1
            * traceroute: sendto: Host is down
            traceroute: wrote 192.168.1.3 52 chars, ret=-1

    These are my iptables rules:

        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
        # allow lan to router traffic
        iptables -A INPUT -s 192.168.1.0/24 -i eth2 -j ACCEPT
        # ssh
        iptables -A INPUT -p tcp --dport ssh -j ACCEPT
        # vpn
        iptables -A INPUT -p 50 -j ACCEPT
        iptables -A INPUT -p ah -j ACCEPT
        iptables -A INPUT -p udp --dport 500 -j ACCEPT
        iptables -A INPUT -p udp --dport 4500 -j ACCEPT
        iptables -A INPUT -p udp --dport 1701 -j ACCEPT
        # dns
        iptables -A INPUT -s 192.168.1.0/24 -p tcp --dport 53 -j ACCEPT
        iptables -A INPUT -s 192.168.1.0/24 -p udp --dport 53 -j ACCEPT
        iptables -t nat -A POSTROUTING -j MASQUERADE
        # logging
        iptables -I INPUT 5 -m limit --limit 1/min -j LOG --log-prefix "iptables denied: " --log-level 7
        # block all other traffic
        iptables -A INPUT -j DROP

    And here are some firewall log lines:

        Dec 6 11:11:57 router kernel: [8725820.003323] iptables denied: IN=ppp0 OUT= MAC= SRC=192.168.1.81 DST=192.168.1.3 LEN=60 TOS=0x00 PREC=0x00 TTL=255 ID=62174 PROTO=UDP SPT=61910 DPT=53 LEN=40
        Dec 6 11:12:29 router kernel: [8725852.035826] iptables denied: IN=ppp0 OUT= MAC= SRC=192.168.1.81 DST=224.0.0.1 LEN=44 TOS=0x00 PREC=0x00 TTL=1 ID=15344 PROTO=UDP SPT=56329 DPT=8612 LEN=24
        Dec 6 11:12:36 router kernel: [8725859.121606] iptables denied: IN=ppp0 OUT= MAC= SRC=192.168.1.81 DST=224.0.0.1 LEN=44 TOS=0x00 PREC=0x00 TTL=1 ID=11767 PROTO=UDP SPT=63962 DPT=8612 LEN=24
        Dec 6 11:12:44 router kernel: [8725866.203656] iptables denied: IN=ppp0 OUT= MAC= SRC=192.168.1.81 DST=224.0.0.1 LEN=44 TOS=0x00 PREC=0x00 TTL=1 ID=11679 PROTO=UDP SPT=57101 DPT=8612 LEN=24
        Dec 6 11:12:51 router kernel: [8725873.285979] iptables denied: IN=ppp0 OUT= MAC= SRC=192.168.1.81 DST=224.0.0.1 LEN=44 TOS=0x00 PREC=0x00 TTL=1 ID=39165 PROTO=UDP SPT=62625 DPT=8612 LEN=24

    I'm pretty sure the problem is iptables-related, but after trying a lot of different configurations I was unable to find the right one. Any help will be greatly appreciated ;). Kind regards, Simon.

    EDIT: This is my routing table:

        default          62.43.193.33.st  0.0.0.0          UG  100  0  0  eth0
        62.43.193.32     *                255.255.255.224  U   0    0  0  eth0
        192.168.1.0      *                255.255.255.0    U   0    0  0  eth2
        192.168.1.81     *                255.255.255.255  UH  0    0  0  ppp0
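
    Looking at the log lines, the denied packets arrive on ppp0 (the L2TP session) and are destined for 192.168.1.x hosts, yet the rule set only ever ACCEPTs traffic in the INPUT chain - nothing allows packets to be forwarded from the ppp interfaces to eth2. A hedged sketch of the rules that are typically missing in this kind of setup (interface names taken from the logs above; verify against your own):

        # let VPN clients (ppp+) reach the LAN behind eth2, and allow replies back
        iptables -A FORWARD -i ppp+ -o eth2 -j ACCEPT
        iptables -A FORWARD -i eth2 -o ppp+ -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

        # make sure the kernel forwards at all
        sysctl -w net.ipv4.ip_forward=1

        # if VPN clients get addresses out of the LAN subnet itself (192.168.1.81
        # here), the router must also answer ARP for them on the LAN side
        sysctl -w net.ipv4.conf.eth2.proxy_arp=1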

  • How do you handle data archiving?

    - by 20th Century Boy
    Backups are one thing, but long-term archival is another. For example, you might be required to store emails for 7 years, or keep all project data indefinitely. I used to save archives to tape, but then I've had tapes get destroyed (drives ripping the tape out). So... write to 2 tapes, I hear you say. Is that what others do? Have 2 (or more) tapes of the same data for redundancy?

    But then the other issue is that tapes cannot usually be read by different backup software vendors. E.g. if you go from Arcserve to Backup Exec to Commvault over 10 years, you would need to keep all 3 systems so that you could restore old data. Likewise for hardware: old tapes might not be barcoded, might not be compatible with the new library, etc. So do you keep old tape hardware AND old software, just in case you might need to restore a 10-year-old file? Or, when you move to a new backup system, do you migrate all archived data to the new system and re-archive it onto new tapes? That could be a huge job. Any thoughts?

  • PostgreSQL continuous archiving not running archive_command

    - by Whatsit
    I've been trying to set up continuous archiving for a simple test PostgreSQL 9.0 database, as per the documentation. In postgresql.conf I've set:

        wal_level = archive
        archive_mode = on
        archive_command = 'touch /home/myusername/backup/testtouch'
        archive_timeout = 30s

    ...and restarted PostgreSQL. The file listed by touch never appears. I can manually run the touch command and it works as expected. If I try to create a backup, it waits forever for the archive_command. In psql:

        postgres=# SELECT pg_start_backup('touchtest');
         pg_start_backup
        -----------------
         0/14000020
        (1 row)

        postgres=# SELECT pg_stop_backup();
        NOTICE: pg_stop_backup cleanup done, waiting for required WAL segments to be archived
        WARNING: pg_stop_backup still waiting for all required WAL segments to be archived (60 seconds elapsed)
        HINT: Check that your archive_command is executing properly. pg_stop_backup can be cancelled safely, but the database backup will not be usable without all the WAL segments.

    What would cause this? How can I troubleshoot it? Additional info: running on CentOS 5.4, PostgreSQL 9.0.2 installed as root.
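
    For comparison, the archive_command suggested in the PostgreSQL documentation copies each finished WAL segment using the %p (path) and %f (file name) placeholders, and the command's exit status is how the server decides whether archiving succeeded. A hedged sketch of a more conventional setup, plus quick checks that the archiver is firing at all (the log path is a guess for a CentOS layout):

        # postgresql.conf - copy each segment, refusing to overwrite existing ones
        archive_command = 'test ! -f /home/myusername/backup/%f && cp %p /home/myusername/backup/%f'

        # is the archiver process alive, and what does the server log say?
        ps aux | grep archiver
        tail -f /var/lib/pgsql/data/pg_log/*.log

    One thing worth ruling out here: the target directory must be writable by the user the postgres server actually runs as (not root), and on CentOS, SELinux can veto writes into a home directory even when the permissions look right.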

  • Problems using Mesa demos

    - by Rodnower
    Hello, I successfully installed Mesa with "yum install Mesa*" and downloaded the MesaDemos-7.8.tar.gz archive. Now I am trying to follow the instructions from "Mesa3d.org - Download / Install - Compiling and Installing - 1.5 Running the demos", but in progs/demos there are only *.c files, and when I try to compile them I get many similar errors like:

        gears.c:(.text+0x54): undefined reference to `glShadeModel'

    I guess this is a very noob question, and I understand that there is probably a very simple solution, but I don't have any idea... At the beginning of the file there are all the necessary #includes:

        #include <math.h>
        #include <stdlib.h>
        #include <stdio.h>
        #include <string.h>
        #include <GL/glut.h>

    So I have some questions: Is there a Mesa forum on the web? Are there precompiled demos somewhere? Is there a site with well-described examples of using Mesa? What do I need to compile these examples? I have CentOS 5. Thank you in advance.
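
    An "undefined reference" at that stage is a linker error, not a missing header: the GL and GLUT libraries have to be named at link time. A hedged sketch of the usual compile line for these demos (assuming the Mesa and GLUT development packages are installed):

        # link explicitly against OpenGL, GLU, GLUT and the math library
        gcc gears.c -o gears -lGL -lGLU -lglut -lm
        ./gears

    If GL/glut.h itself turns out to be missing, something like "yum install freeglut-devel" should provide it.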

  • Duplicity on a ReadyNAS

    - by Jason Swett
    Has anyone here run Duplicity on a ReadyNAS? I'm trying, but here's what I get:

        duplicity full --encrypt-key="ABC123" /home/jason/ scp://[email protected]//gob
        Invalid SSH password
        Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 [email protected]' failed (attempt #1)

    I've also found this post that says the "Invalid SSH password" message doesn't actually mean an invalid SSH password. This would make sense, because I'm not using an SSH password; I'm using a public key. I can ssh, ftp, sftp and rsync into my ReadyNAS just fine. (Actually, to be more accurate, I can get past authentication with ssh, ftp and sftp, but I can't actually do anything past that. Regardless, that's enough to tell me that "Invalid SSH password" is bogus. Rsync works with no problems.) The post I found says the command will work as soon as the directory at the end of the scp URL exists, but I don't know how to check for that. I know the share gob exists on my ReadyNAS, and I know it's writable, because I'm writing to it with rsync. Also, here is the verbose output:

        Using archive dir: /home/jason/.cache/duplicity/3bdd353b29468311ffa8485160da6873
        Using backup name: 3bdd353b29468311ffa8485160da6873
        Import of duplicity.backends.rsyncbackend Succeeded
        Import of duplicity.backends.sshbackend Succeeded
        Import of duplicity.backends.localbackend Succeeded
        Import of duplicity.backends.botobackend Succeeded
        Import of duplicity.backends.cloudfilesbackend Succeeded
        Import of duplicity.backends.giobackend Succeeded
        Import of duplicity.backends.hsibackend Succeeded
        Import of duplicity.backends.imapbackend Succeeded
        Import of duplicity.backends.ftpbackend Succeeded
        Import of duplicity.backends.webdavbackend Succeeded
        Import of duplicity.backends.tahoebackend Succeeded
        Main action: full
        ================================================================================
        duplicity 0.6.10 (September 19, 2010)
        Args: /usr/bin/duplicity full --encrypt-key=ABC123 -v9 /home/jason/ scp://[email protected]//gob
        Linux gob 2.6.35-22-generic #33-Ubuntu SMP Sun Sep 19 20:34:50 UTC 2010 i686
        /usr/bin/python 2.6.6 (r266:84292, Sep 15 2010, 15:52:39) [GCC 4.4.5]
        ================================================================================
        Using temporary directory /tmp/duplicity-cridGi-tempdir
        Registering (mkstemp) temporary file /tmp/duplicity-cridGi-tempdir/mkstemp-ztuF5P-1
        Temp has 86334349312 available, backup will use approx 34078720.
        Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 [email protected]' (attempt #1)
        State = sftp, Before = '[email protected]'s'
        State = sftp, Before = ''
        Invalid SSH password
        Running 'sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 [email protected]' failed (attempt #1)

    Any ideas as to what's going wrong?
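
    One way to see what duplicity is choking on is to run the exact sftp command from the log by hand and poke around - if the path part of the URL is wrong, sftp will say so directly. A hedged sketch (a placeholder host stands in for the redacted address above):

        # run the same sftp invocation duplicity uses
        sftp -oServerAliveInterval=15 -oServerAliveCountMax=2 jason@readynas
        # then, at the sftp> prompt:
        #   cd /gob      <- does the absolute path exist?
        #   ls

    Note that //gob in a duplicity URL means the absolute path /gob; on many ReadyNAS units the shares actually live under /c (so /c/gob), which would make scp://jason@readynas//c/gob the URL worth trying.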

  • Unzipping archives, preserving folder hierarchy

    - by Hydrangea
    I've got a problem and am not sure what it is, but I hope someone can help me think this through, because this has me stumped. Backstory: I wrote a Java app (Android) that unzips some zip files downloaded from the network. Until now, this was working great. Then, this week, the archives that I'm creating on my PC (in Ubuntu 12.04) unzip on the Android phone into a flat hierarchy instead of preserving the folders. I'm creating the archives the same way (right-click on the folder -> Compress), but even though my old archives (created in 10.04) still unzip as expected, the new ones don't. On Ubuntu, the new zip files look the same to me as the old ones, and when unzipped on my PC the folders in these new archives are restored the same as the old ones... it's the Android app that extracts the old ones fine and the new ones flat. What I really want to know, though, is what the difference between the archives is.

    Question: How could one determine why one zip archive is extracted with the folder hierarchy preserved, when an identical one (to all appearances on Ubuntu 12.04) is extracted with no hierarchy? Are there different ways in which a .zip file can "have" folders, but Ubuntu doesn't distinguish between them?
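
    There is indeed more than one way for a zip to "have" folders: directories can be stored as explicit entries in their own right (names ending in /), or exist only implicitly in the slash-separated paths of the file entries - and an unzip routine that only creates directories for explicit entries will flatten archives of the second kind. Comparing the two archives' internal listings makes the difference visible; a hedged sketch:

        # list every entry; explicit directory entries end with '/'
        unzip -l old.zip
        unzip -l new.zip

        # zipinfo shows per-entry detail ('d' in the first column marks a
        # directory entry)
        zipinfo old.zip
        zipinfo new.zip

    If the new archives turn out to lack directory entries, the robust fix on the Java side is to create each file's parent directories yourself (e.g. calling mkdirs() on the entry's parent) rather than relying on directory entries being present.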

  • Robocopy launches and then hangs/just sits there

    - by NateO
    I'm setting up an archive process to store old files on an external hard drive. The computer in question is running Windows 7 Pro 32-bit. We have a server folder with 150,000+ files in it, most of which are pretty small (below 200k). I'm trying to use robocopy in a batch file to do this. It was working fine the other day; now all it does upon launch is sit there. It shows me all the options and whatnot, and also lists the number of files in the directory and the directory itself, but it never gets past that line. If I switch the destination to the local C drive, it eventually starts copying files. Is there something in my batch file that needs to change? Or could there be a problem with the external Western Digital drive that I'm using? The WD drive currently holds about 175,000 files. Here is the one-line batch file I have:

        robocopy "\\cgifp01\Prepress\Public\ImportedPDF" "E:\OldFiles" *.* /R:2 /W:10 /MINAGE:15 /MOV /B /XJ /XF "blank_test.pdf"

    Thanks for any tips or ideas. Nate

  • Exchange 2010: Find Move Request Log after move request completes

    - by gravyface
    EDIT: significantly changed my question here to streamline it a bit. I've gone ahead and used 100 as my corrupted item count and run it from the Exchange Shell.

    So the trail of tears continues with my SBS 2003 to 2011 migration: all the mailboxes have moved mailbox store from OLDSERVER to NEWSERVER, with the Local Move Requests completing successfully, except for one. What I'd like to do now is review the previous move request log files: while they were in progress, I could right-click -> Properties -> Log -> View Log File, but now that they're completed, that's not available. Nor can I use:

        Get-MoveRequestStatistics <user> -includereport | fl MoveReport

    ...as the move request has now completed, and it errors out with "couldn't find a move request that corresponds...". Basically what I'd like to do is present the list of bad items to the user, so that they're aware of which items didn't come across and, if anything important was lost, can check their current OST, an archive .pst, etc. to recover it if possible. If this all needs to be wrapped up in a batch of Exchange PowerShell commands that pipe the output to log files on disk somewhere, I'm all ears, and would appreciate it for the next migration we do.

  • Installing Tomcat on CentOS 5

    - by andybaird
    Disclaimer: I am not a server admin; I am a Windows user who has led a life of sinful installation wizards and drag and drop. I'm attempting to install Tomcat on CentOS 5, hosted on a MediaTemple dedicated virtual server. I basically followed this guide:

        Installed jpackage and configured the yum.repos.d jpackage file to set enabled=1
        Used yum to install java (yum install java)
        Downloaded the binary distribution of Tomcat with "wget http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.14/bin/apache-tomcat-6.0.14.tar.gz"
        Set JAVA_HOME to point at the JDK location I found with "export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0/"
        Gunzipped/untarred the Tomcat files and ran ./startup.sh to start the Tomcat server

    That is supposed to put the Tomcat server at myserver.com:8080 - however, I just get a "could not contact host" error when I try to browse to it (or when I try 'curl localhost:8080' from SSH). After I type ./startup.sh, here is the console output:

        [root@myserver bin]# ./startup.sh
        Using CATALINA_BASE:   /root/apache-tomcat-6.0.14
        Using CATALINA_HOME:   /root/apache-tomcat-6.0.14
        Using CATALINA_TMPDIR: /root/apache-tomcat-6.0.14/temp
        Using JRE_HOME:        /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0/
        [root@myserver bin]#

    Is there a step I have missed here?

    Edit: I've now discovered, by looking at the log, that the following error is occurring:

        Error occurred during initialization of VM
        Could not reserve enough space for object heap
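
    That VM error means the JVM could not allocate its default initial heap - a common failure on virtual servers whose memory limit is well below what the JVM assumes is available. Capping the heap explicitly when starting Tomcat usually gets past it; a hedged sketch (the sizes are guesses to tune against the VPS's actual memory):

        # give Tomcat an explicit, modest heap before starting it
        export JAVA_OPTS="-Xms64m -Xmx128m"
        ./startup.sh

        # then check that something is listening on 8080
        curl -I http://localhost:8080/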

  • How to set up TightVNC Java viewer index.html on web server?

    - by penyuan
    I've got the Java TightVNC viewer applet set up with the provided index.html on my Mac OS X 10.6.3 with web sharing enabled. Using a remote computer I was able to get to the webpage, but I only see a white box with an X (for error?) where the viewer is supposed to be. Any ideas on how to get this to work? I've tried setting the port (in index.html) to 5900 and 5901; neither worked. Is either of these the default VNC port for Mac OS X 10.6.3? Also, I've activated Screen Sharing and Remote Login in System Preferences, allowing VNC viewers to connect. Here is the code for my index.html:

        <HTML>
        <TITLE>
        TightVNC desktop
        </TITLE>
        <APPLET CODE="classes/VncViewer.class" ARCHIVE="classes/VncViewer.jar"
                WIDTH="1440" HEIGHT="900">
        <PARAM NAME="PORT" VALUE="5900">
        <PARAM NAME="Scaling factor" VALUE="50">
        </APPLET>
        <BR>
        <A href="http://www.tightvnc.com/">TightVNC site</A>
        </HTML>

    Again, I can get to this page, but the applet doesn't seem to work; the Java console also doesn't say anything. Thanks in advance for your help!
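
    For what it's worth, the VNC server that Screen Sharing provides on OS X listens on port 5900, and it's easy to confirm that from a Terminal on the Mac before suspecting the applet. A couple of hedged checks:

        # is anything listening on the standard VNC port?
        nc -z localhost 5900 && echo "VNC server is listening on 5900"

        # or list the listening TCP sockets and look for 5900
        lsof -nP -iTCP -sTCP:LISTEN | grep 5900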

  • Ubuntu 12 crashed and took down network

    - by Leopd
    We recently set up a new Ubuntu 12.04 LTS server on our network. It's not fully configured, so it's not doing much beyond sshd and a default apache2 install. But this evening it appears to have crashed: it wasn't responding to the network or the keyboard. The worst part is, it took down the entire network. My knowledge of the network stack below OSI layer 3 is very limited, so the rest confuses me. While this machine was physically connected to the network, no other machine could connect to the outside internet. While things were broken, running arp showed our gateway's IP address (10.0.1.1) listed as "invalid". Unplugging the server from the network fixed the problem, and plugging it back in broke it again. So the crashed server was advertising itself as owning the gateway's IP address? There's nothing at all in syslog during the time when it was causing problems. Any ideas about how to figure out what went wrong, or what we can do to prevent it from happening again? I'm hesitant to even put the machine back on the network right now.

    Update: It crashed again, and I ran tcpdump -penn arp (thanks bahamat!) for several minutes and got this (timestamps and duplicate lines removed):

        00:1e:65:f8:dc:24 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 60: Request who-has 10.0.1.1 tell 10.0.2.191, length 46
        00:1e:65:f8:dc:24 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 60: Request who-has 10.0.1.44 tell 10.0.2.191, length 46
        60:d8:19:d4:71:d6 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 60: Request who-has 10.0.1.1 tell 10.0.2.125, length 46
        d4:9a:20:04:e9:78 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Request who-has 192.168.1.1 tell 192.168.1.100, length 28

    Update 2: When the network is functioning properly, arping -c4 10.0.1.1 returns this:

        ARPING 10.0.1.1
        60 bytes from c0:c1:c0:77:25:8e (10.0.1.1): index=0 time=267.982 usec
        60 bytes from c0:c1:c0:77:25:8e (10.0.1.1): index=1 time=422.955 usec
        60 bytes from c0:c1:c0:77:25:8e (10.0.1.1): index=2 time=299.215 usec
        60 bytes from c0:c1:c0:77:25:8e (10.0.1.1): index=3 time=366.926 usec
        --- 10.0.1.1 statistics ---
        4 packets transmitted, 4 packets received, 0% unanswered (0 extra)

    When the bad server is plugged in, arping -c4 10.0.1.1 returns:

        ARPING 10.0.1.1
        --- 10.0.1.1 statistics ---
        4 packets transmitted, 0 packets received, 100% unanswered (0 extra)

    Context: 10.0.x.x is the main subnet; 10.0.1.1 is the main internet gateway; 10.0.1.44 is a printer; 10.0.2.* devices are all laptops/workstations. I have no idea what's using the 192.168.x.x subnet - your guesses are at least as good as mine. A VM on a workstation? A misconfigured WAP? Somebody re-sharing wifi? A machine that failed to DHCP? The offending Ubuntu server's MAC address ends in cd:80, so it isn't listed in the dump. It should DHCP to 10.0.3.3. Thanks for any help. This ARP stuff is all voodoo to me. Packets just go to IP addresses, right? ;)
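
    One way to catch an ARP hijacker red-handed is duplicate address detection: arping -D asks "who has this IP?" without claiming it and prints every responder's MAC, which you can compare against the real gateway's c0:c1:c0:77:25:8e seen above. A hedged sketch to run from a healthy machine while the problem is happening (the interface name is a guess):

        # ask who currently claims the gateway IP
        sudo arping -D -I eth0 -c 4 10.0.1.1

        # and watch ARP claims for the gateway address live
        sudo tcpdump -penn -i eth0 arp and host 10.0.1.1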

  • How do you set up FTP with IIS Manager users in an NLB environment with shared IIS configs?

    - by William Jens
    I've set up a 2-node NLB cluster and used the following to share IIS configs between them: http://blogs.technet.com/b/meamcs/archive/2012/05/30/configuring-iis-7-5-shared-configuration.aspx

    The IIS configs and content are located on a network share via a UNC path. This works: updating IIS settings on one node is visible on the other node, and my website works on the individual nodes and on the cluster as a whole. I'm able to set up an FTP site and successfully connect with my Windows login. However, I want to use IIS Manager authentication as defined in: http://www.iis.net/learn/publish/using-the-ftp-service/configure-ftp-with-iis-manager-authentication-in-iis-7

    I've tried using "Network Service" with the FTP COM object, as well as a dedicated user account that exists on all three hosts, but every time I try to log in with an IIS user I get something like the following:

        IISWMSVC_AUTHENTICATION_UNABLE_TO_READ_CONFIG
        An unexpected error occurred while retrieving the authentication information.
        Exception: System.Runtime.InteropServices.COMException (0x8007052E): Filename: Error:
           at Microsoft.Web.Administration.Interop.AppHostWritableAdminManager.GetAdminSection(String bstrSectionName, String bstrSectionPath)
           at Microsoft.Web.Administration.Configuration.GetSectionInternal(ConfigurationSection section, String sectionPath, String locationPath)
           at Microsoft.Web.Management.Server.ConfigurationAuthenticationProvider.GetSection(ServerManager serverManager)
        Process: dllhost User=NT AUTHORITY\NETWORK SERVICE

    Can anyone point me in the right direction here?

  • Installing Trac on Windows under Apache 2.2?

    - by Warren P
    Trac is a Python-powered bug-tracking and project-management app. According to Trac's wiki, there are several options for installing Trac: a standalone server (tracd), or under a dedicated web server using one of these options:

        FastCGI - not available on Windows
        mod_wsgi - no version of mod_wsgi available for Apache 2.2.22 and Python 2.7.3-amd64 that actually runs on my system!
        mod_python - no longer recommended, as mod_python is not actively maintained anymore
        CGI - should not be used, as the performance is far from optimal

    That leaves me with zero ways to run Trac on Windows. Apache 2.2.22 with mod_wsgi loading crashes the Apache 2.2 service on startup without any error logs; disabling the line in the Apache configuration that loads mod_wsgi restores sanity. I just want an installation of Trac on Windows with authentication enabled. I am unable to get authentication to work using basic tracd, like this:

        tracd -p 8000 --basic-auth="c:\tmp,c:\tmp\Passwords.md5.txt,mycompany" c:\tmp\RootFolder

    And I am unable to get mod_wsgi installed. I'm going to keep trying to figure out a combination that works; I suspect I should have installed 32-bit Python instead of 64-bit Python to start with. Did I do wrong to install Python 2.7.3 64-bit? I tried again with all 32-bit components, and still can't get mod_wsgi to work with Apache 2.2.22. I'm going to try to compile mod_wsgi myself with Visual C++ Express 2010, but it seems to me that it ought to be easier than this to get Trac running on Windows, with authentication. Is there a way to run Trac on Windows, under Apache, with authentication? The last "Trac on Windows" article died in 2008, leaving only this Internet Archive link for "Trac on Windows" setup.
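
    One detail that often trips people up with tracd's --basic-auth is the first field: as I understand it, it should be the project's base directory name (or, in recent versions, * for all projects), not a filesystem path like c:\tmp. A hedged sketch against the paths used above:

        # first field = the project directory's base name, not its full path
        tracd -p 8000 --basic-auth="RootFolder,c:\tmp\Passwords.md5.txt,mycompany" c:\tmp\RootFolder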

  • DNS Server Behind NAT

    - by Bryan
    I've got a BIND 9 DNS server sitting behind a NAT firewall; assume the internet-facing IP is 1.2.3.4. There are no restrictions on outgoing traffic, and port 53 (TCP/UDP) is forwarded from 1.2.3.4 to the internal DNS server (10.0.0.1). There are no iptables rules on either the VPS or the internal BIND 9 server. From a remote Linux VPS located elsewhere on the internet, nslookup works fine:

        # nslookup foo.example.com 1.2.3.4
        Server:  1.2.3.4
        Address: 1.2.3.4#53

        Name: foo.example.com
        Address: 9.9.9.9

    However, when using the host command on the remote VPS, I receive the following output:

        # host foo.example.com 1.2.3.4
        ;; reply from unexpected source: 1.2.3.4#13731, expected 1.2.3.4#53
        ;; reply from unexpected source: 1.2.3.4#13731, expected 1.2.3.4#53
        ;; connection timed out; no servers could be reached.

    From the VPS, I can establish a connection (using telnet) to 1.2.3.4:53. From the internal DNS server (10.0.0.1), the host command works fine:

        # host foo.example.com 127.0.0.1
        Using domain server:
        Name: 127.0.0.1
        Address: 127.0.0.1#53
        Aliases:

        foo.example.com has address 9.9.9.9

    Any suggestions as to why the host command on my VPS is complaining about the reply coming back from another port, and what I can do to fix it?

    Further info - from a Windows host external to the network:

        >nslookup foo.example.com 1.2.3.4
        DNS request timed out. timeout was 2 seconds
        Server:  UnKnown
        Address: 1.2.3.4
        DNS request timed out. timeout was 2 seconds
        DNS request timed out. timeout was 2 seconds
        DNS request timed out. timeout was 2 seconds
        DNS request timed out. timeout was 2 seconds
        *** Request to UnKnown timed-out

    This is a default install of BIND from Ubuntu 12.04 LTS, with around 11 zones configured.

        $ named -v
        BIND 9.8.1-P1

    TCP dump (filtered) from the internal DNS server:

        20:36:29.175701 IP pc.external.com.57226 > dns.example.com.domain: 1+ PTR? 4.3.2.1.in-addr.arpa. (45)
        20:36:29.175948 IP dns.example.com.domain > pc.external.com.57226: 1 Refused- 0/0/0 (45)
        20:36:31.179786 IP pc.external.com.57227 > dns.example.com.domain: 2+[|domain]
        20:36:31.179960 IP dns.example.com.domain > pc.external.com.57227: 2 Refused-[|domain]
        20:36:33.180653 IP pc.external.com.57228 > dns.example.com.domain: 3+[|domain]
        20:36:33.180906 IP dns.example.com.domain > pc.external.com.57228: 3 Refused-[|domain]
        20:36:35.185182 IP pc.external.com.57229 > dns.example.com.domain: 4+ A? foo.example.com. (45)
        20:36:35.185362 IP dns.example.com.domain > pc.external.com.57229: 4*- 1/1/1 (95)
        20:36:37.182844 IP pc.external.com.57230 > dns.example.com.domain: 5+ AAAA? foo.example.com. (45)
        20:36:37.182991 IP dns.example.com.domain > pc.external.com.57230: 5*- 0/1/0 (119)

    TCP dump from the client during the query:

        21:24:52.054374 IP pc.external.com.43845 > dns.example.com.53: 6142+ A? foo.example.com. (45)
        21:24:52.104694 IP dns.example.com.29242 > pc.external.com.43845: UDP, length 95
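
    The final tcpdump is the smoking gun: the query goes to port 53, but the reply leaves the NAT with source port 29242, and a resolver is entitled to discard replies that don't come from the port it queried (host is simply stricter about this than nslookup). If the NAT device happens to be a Linux box, one hedged fix is to pin the translated source port for traffic leaving port 53 (the external interface name is an assumption):

        iptables -t nat -A POSTROUTING -o eth0 -p udp --sport 53 -j SNAT --to-source 1.2.3.4:53
        iptables -t nat -A POSTROUTING -o eth0 -p tcp --sport 53 -j SNAT --to-source 1.2.3.4:53

    If the NAT is an appliance instead, the equivalent knob is often called something like "consistent NAT" or a static one-to-one NAT for the DNS server.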

  • How to backup a NAS drive to a USB drive?

    - by Tim Murphy
    How would you back up 600+ GB of data on a NAS (network-attached storage) drive to an external USB drive? The NAS drive does not contain mission-critical data; nonetheless I wish to make weekly copies of it, just in case. The NAS drive is almost exclusively used as an archive dump and is rarely updated. However, the backup strategy used must have a simple restore procedure, so I can confidently say the data now on the NAS drive is exactly how it was at the time of backup. I did try xcopy, but it seemed like it would take many, many hours and eventually crashed with insufficient memory. http://www.ctunion.com/node/114 suggests I would need to use xxcopy instead, due to folder/file name lengths. My concern with xcopy/xxcopy is the length of time they take; I'm hoping something else is faster. The NAS drive is a D-Link DNS-313 with a 1 TB drive installed, connected to the router via an Ethernet cable. The USB drive is a Seagate 1 TB. It can be connected to Windows Vista (preferred) or Windows 7 PCs. Both PCs are usually connected wirelessly; however, an Ethernet cable can be used during the backup to speed up the process.
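
    Since only a small fraction of an archive dump changes from week to week, an incremental tool that copies just the differences will beat any full-copy approach after the first run. rsync is the usual choice, and a plain mirror keeps the simple-restore property wanted here, because the destination is an ordinary file tree. A hedged sketch, assuming rsync is available somewhere in the chain - e.g. cwRsync on the Windows PC, or a shell on the NAS itself, which runs Linux (the mount paths are placeholders):

        # mirror the NAS share onto the USB drive; only changed files move
        rsync -av --delete /mnt/nas/archive/ /mnt/usb/nas-backup/

        # restoring is the same command with the two sides swapped
        rsync -av /mnt/usb/nas-backup/ /mnt/nas/archive/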

  • How do I install the latest version of packages in Ubuntu?

    - by Roman
    For example, I want to install the latest version of numpy. I type the following: "sudo apt-get install python-numpy". When I type this the first time it installs something, and when I type it a second time it says that I already have the latest version of numpy. However, I see that my version of numpy is 1.1.1, and I know that is NOT the latest version. Why does this happen, and how can this problem be solved? I can find the *.tar.gz file with the latest version, extract the files from the archive, and then run one of the scripts somewhere among the extracted files. But I do not like this way; it is too complicated. I do not know where I should put all these files, which dependencies I should install before I run the installation script, where numpy will be put after installation, and so on. Is there an easy way to get the latest version of numpy?
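
    apt-get is behaving as designed: "latest" means the newest version packaged for your Ubuntu release, which can lag far behind upstream. To get a newer numpy than the distribution carries, the usual route is Python's own package installer, which handles the download, build and placement for you. A hedged sketch (package names from the python-pip era of Ubuntu):

        # install pip with apt, then let pip pull the current numpy from PyPI
        sudo apt-get install python-pip python-dev build-essential
        sudo pip install --upgrade numpy

        # confirm which version you ended up with
        python -c "import numpy; print(numpy.__version__)"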

  • Integration of SharePoint 2010 with TFS2010

    - by Kabir Rao
    We have performed the following steps so far:

        Installed TFS 2010 10.0.30319.1 (RTM) on Windows Server 2008 R2 Enterprise (app tier)
        Installed SQL 2008 SP1 with Cumulative Update 2 on Windows Server 2008 R2 Enterprise (data tier)
        Reporting Services is installed on the app tier

    After this installation worked fine, we installed SharePoint 2010 on the app tier. We then followed http://blogs.msdn.com/b/team_foundation/archive/2010/03/06/configuring-sharepoint-server-2010-beta-for-dashboard-compatibility-with-tfs-2010-beta2-rc.aspx for configuration. We are not able to perform the last step described in the link, as the following error occurred:

        TF249063: The following Web service is not available: http://apptier:31254/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. The underlying error is: The remote server returned an error: (404) Not Found. Verify that the following URL points to a valid SharePoint Web application and that the application is available: http://apptier:31254. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application.

    We have also noticed that the Document folder in the team project has a red X. Please help. Thanks upfront.

  • Can't connect to research.microsoft.com on home Qwest DSL connection

    - by rakingleaves
    I have a puzzling issue regarding accessing research.microsoft.com from my home Qwest DSL connection. By default, I frequently get timeouts when accessing research.microsoft.com from Firefox, Safari, or Chrome on my Mac. I also cannot access the site from Internet Explorer in a Windows VM. However, I am able to access the site through proxify.com, so I know the site is not down. Furthermore, I haven't noticed problems accessing other sites (in particular, www.microsoft.com works fine). Also, I can access research.microsoft.com when I'm connected to networks other than my home Qwest DSL connection. Together, the above make me suspect a problem with either my router (AirPort Express) or, more likely, my ISP. Does anyone have any thoughts on how I can narrow down the problem further? I could call my ISP and tell them the above, but my feeling is that probably won't get me very far. I can get by browsing research.microsoft.com through a proxy, but it would be nice to figure out what's going on here and fix the problem. Oh, the only relevant discussion I found via Google was here: http://forums.whirlpool.net.au/forum-replies-archive.cfm/1311734.html

    Update: Thanks to those who have tried to help! I found one other thing while googling that may be vaguely relevant: http://thedaneshproject.com/posts/supportmicrosoftcom-not-working-behind-squid/ - disabling the Accept-Encoding header in Firefox didn't actually make a difference for me. I just thought the above might spark some other ideas about how mishandling of HTTP headers somewhere could cause this problem. Thanks again!

    Another update: In case anyone is still thinking about this: I've found that I can't surf research.microsoft.com using the links text-based browser, but I can reliably download individual files with wget. Maybe that helps?
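
    The symptom pattern - small exchanges succeed, full pages hang, and only on one DSL line - fits a path-MTU problem: PPPoE DSL links often have an MTU below 1500, and if ICMP "fragmentation needed" messages get filtered somewhere along the path, large packets silently vanish for some destinations only. A couple of hedged checks from the Mac (-D is the OS X ping's "don't fragment" switch):

        # find the largest unfragmented payload that survives (1472 = 1500 - 28)
        ping -D -s 1472 -c 3 research.microsoft.com
        # if that dies, walk the size down (1464, 1452, ...) until replies return

        # headers-only vs. full-body fetch, to compare small and large responses
        curl -I http://research.microsoft.com/
        curl -o /dev/null http://research.microsoft.com/

    If a smaller size gets through where 1472 does not, lowering the MTU in the AirPort Express's WAN/PPPoE settings would be the thing to try.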

  • Git fails to push with error 'out of memory'

    - by jwir3
    I'm using gitosis on a server that has a low amount of memory, specifically around 512 MB. When I try to push a large folder (which happens to be a backup from an Android phone), I get:

        me@corellia:~/Configs/$ git push origin master
        Counting objects: 18, done.
        Delta compression using up to 8 threads.
        Compressing objects: 100% (14/14), done.
        fatal: Out of memory, malloc failed
        ... MiB | 685 KiB/s
        error: pack-objects died of signal 13
        error: failed to push some refs to 'git@dagobah:Configs'

    I've been searching the web, and notably found http://www.mail-archive.com/[email protected]/msg01747.html as well as http://git.661346.n2.nabble.com/Out-of-memory-error-during-git-push-td5443705.html, but these don't seem to help me, for two reasons: 1) I am not actually out of memory when I push. When I run 'top' during the push, I get:

        24262 git 18 0 16204 6084 1096 S 2 1.2 0:00.12 git-unpack-obje

    Also, if I run cat /proc/meminfo during the push, I get:

        MemTotal:   524288 kB
        MemFree:    289408 kB
        Buffers:         0 kB
        Cached:          0 kB
        SwapCached:      0 kB
        Active:          0 kB
        Inactive:        0 kB
        HighTotal:       0 kB
        HighFree:        0 kB
        LowTotal:   524288 kB

    So it seems that I have enough memory free, but the push is still failing, and I'm not enough of a git guru to figure out what is happening. I would appreciate it if someone could give me a hand here and tell me what could be causing this problem, and what I can do to solve it. Thanks!

    EDIT: The output of running the ulimit -a command:

        scottj@dagobah:~$ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 204800
        max locked memory       (kbytes, -l) 32
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 10240
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 204800
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited
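
    Since the malloc failure happens on the 512 MB server side, one hedged thing to try is capping git's pack memory in the server repository's configuration. These are real git settings, though whether they rescue this particular push - and the right values - is a guess to tune:

        # on the server, run inside the bare repository (e.g. the gitosis Configs.git):
        git config pack.windowMemory "32m"
        git config pack.packSizeLimit "64m"
        git config pack.threads "1"
        git config pack.deltaCacheSize "32m"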

  • `:Zone.Identifier` files keep on appearing in Windows XP virtual machine

    - by Jonathan Reno
    I have a Windows XP Home Edition guest and a Linux Mint 13 host. I use VirtualBox, and the ~/Public directory is shared with the guest. I sometimes use IE on the guest system to download files (until I get a better Windows browser). All of the downloaded files go to the L:\ drive (the ~/Public directory). When they finish downloading, Windows Explorer adds a :Zone.Identifier file for each file I download. When I extract a downloaded ZIP archive on the guest (on drive L:\), Windows creates a :Zone.Identifier file for every file in the extracted directory. This even occurs if I use the host to move a file to the ~/Public directory. The shared ~/Public directory is on an ext4 partition; the colon character is supposed to be illegal in file names on Windows, but not on an ext4 partition. Is there any way to stop Windows from putting all this rubbish on my filesystem? (I might have to create a shell script to clean up after Windows' act.) By the way, if I were running a Mac OS X host (where colons are illegal file name characters), this would be even more horrendous.
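
    On the "shell script to clean up" idea: since the :Zone.Identifier names are just ordinary files on ext4, a one-liner can sweep them. A hedged sketch for the Mint host:

        # dry run: list the droppings under the shared folder
        find ~/Public -name '*:Zone.Identifier' -type f

        # then delete them
        find ~/Public -name '*:Zone.Identifier' -type f -delete

    The cleaner fix would be on the Windows side: the Attachment Manager's "do not preserve zone information in file attachments" policy stops the streams being written in the first place, though setting it on XP Home (which lacks gpedit) is a registry exercise.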

  • "Password Server: Stopped" on Mac OS Lion Server. Stops with error -1 during startup

    - by V1ru8
    Since I restored Open Directory from an archive (my server crashed and the DB was corrupt), the password server does not start anymore. The log looks like this:

        Feb 14 2012 21:41:20 156746us Mac OS X Password Service version 376.1 (pid = 2438) was started at: Tue Feb 14 21:41:20 2012.
        Feb 14 2012 21:41:20 156801us RunAppThread Created
        Feb 14 2012 21:41:20 156852us RunAppThread Started
        Feb 14 2012 21:41:20 156879us Initializing Server Globals ...
        Feb 14 2012 21:41:20 163094us Initializing Networking ...
        Feb 14 2012 21:41:20 163196us Initializing TCP ...
        Feb 14 2012 21:41:20 191790us SASL is using realm "SERVER.HOME.POST-NET.CH"
        Feb 14 2012 21:41:20 191847us Starting Central Thread ...
        Feb 14 2012 21:41:20 191860us Starting other server processes ...
        Feb 14 2012 21:41:20 191873us StartCentralThreads: 1 threads to stop
        Feb 14 2012 21:41:20 191905us Initializing TCP ...
        Feb 14 2012 21:41:20 191954us Starting TCP/IP Listener on ethernet interface, port 106
        Feb 14 2012 21:41:20 192012us Starting TCP/IP Listener on ethernet interface, port 3659
        Feb 14 2012 21:41:20 192048us Starting TCP/IP Listener on interface lo0, port 106
        Feb 14 2012 21:41:20 192082us Starting TCP/IP Listener on interface lo0, port 3659
        Feb 14 2012 21:41:20 192117us StartCentralThreads: Created 4 TCP/IP Connection Listeners
        Feb 14 2012 21:41:20 192132us Starting UNIX domain socket listener /var/run/passwordserver
        Feb 14 2012 21:41:20 193034us CRunAppThread::StartUp: caught error -1.
        Feb 14 2012 21:41:20 193056us ** ERROR: The Server received an error during startup. See error log for details.
        Feb 14 2012 21:41:20 193075us RunAppThread::StartUp() returned: 4294967295
        Feb 14 2012 21:41:20 193107us Stopping server processes ...
        Feb 14 2012 21:41:20 193119us Stopping Network Processes ...
        Feb 14 2012 21:41:20 193131us Deinitializing networking ...
        Feb 14 2012 21:41:20 193149us Server Processes Stopped ...
        Feb 14 2012 21:41:20 193165us RunAppThread Stopped
        Feb 14 2012 21:41:20 193202us Aborting Password Service. See error log.

    The error log repeats the following:

        Feb 14 2012 21:41:50 409022us Server received error -1 during startup.
        Feb 14 2012 21:41:50 409141us Aborting Password Service.

    Does anyone have an idea what's wrong here and how I can fix it?
