Search Results

Search found 38522 results on 1541 pages for 'single source'.

  • Windows - Website inaccessible only on Windows PCs in LAN

    - by DorentuZ
    For several days now, a website hasn't been accessible on a single PC in the LAN. On the other PCs, it works just fine. And it's just a single website that's not accessible, as far as I know. The website generates a timeout in every web browser I've tried (IE8, Firefox and Chrome). However, traceroute, nmap and telnet all work just fine. I've even tried multiple user accounts and safe mode, but that didn't work either. As a side note: using a Linux live CD did work and I could access the website without any problems. The hosts file is the Windows default, and the IP and DNS settings on the network adapter are normal as well. No strange processes are running and no viruses were found. According to TCPView and netstat there are connections to the domain, but every request in the browser results in a timeout. Any idea what's happening? Update: All of the computers on the network running Windows (any version) are showing this problem now. The website is still working under Linux and Mac OS X. So it has to be related to some kind of Windows update (although I haven't installed any on one computer in the past week, which I've set to do manual updates only).
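
    Since telnet can reach the site but every browser times out, one narrowing step is to check whether a raw HTTP request over that same telnet connection actually gets a response; the hostname below is only a placeholder for the real site:

        telnet www.example-site.com 80
        GET / HTTP/1.1
        Host: www.example-site.com

    (Press Enter twice after the Host line; if headers come back, the TCP path and the server are fine and the problem sits closer to how the Windows browsers are handling HTTP on those machines.)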

    Read the article

  • Interpreting and using the Asterisk "timing test" command

    - by zigg
    Timing is very important for certain kinds of applications in Asterisk. If DAHDI is the timing source, the dahdi_test command can be used to check the timing provided by the DAHDI kernel module. If dahdi_test returns exclusively measurements above 99.975%, the DAHDI timing source is generally considered good. Since Asterisk 1.6, new timing sources have become available, such as pthread and timerfd. The accuracy of these timing sources seems to be measurable with the Asterisk CLI timing test command: localhost*CLI> timing test Attempting to test a timer with 50 ticks per second. Using the 'timerfd' timing module for this test. It has been 1000 milliseconds, and we got 50 timer ticks My concern is that timing 50 ticks seems to be a considerably less stressful test than dahdi_test's 8192 samples in 8000 ms, particularly since just about every system I've tried it on, virtual or otherwise, can handle it. I can ask timing test to ramp it up to what I think are dahdi_test's standards: localhost*CLI> timing test 1024 Attempting to test a timer with 1024 ticks per second. Using the 'timerfd' timing module for this test. It has been 1000 milliseconds, and we got 1024 timer ticks This will indeed break down a bit depending on the system I'm using, usually with a decrease in timer ticks. But I'm not sure whether it's useful to stress it to this level. Is there authoritative guidance on using and interpreting the timing test command to ensure that a given Asterisk system has a timing source that will work well?
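
    For scripted monitoring, the same CLI test can be driven from a shell and the tick count compared against the requested rate; this is only a sketch, and the output parsing assumes the exact wording shown above:

        #!/bin/sh
        # run the Asterisk CLI timing test at 1024 ticks/sec and report what came back
        RATE=1024
        OUT=$(asterisk -rx "timing test $RATE")
        TICKS=$(echo "$OUT" | grep -o 'we got [0-9]*' | grep -o '[0-9]*')
        echo "requested $RATE ticks, got ${TICKS:-none}"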

    Read the article

  • .NET 3.5 installation comes up with Error 0x800F0906, then 0x800F081F using DISM

    - by Austin Meadows
    I've recently tried installing .NET 3.5 for an application on Windows 8.1. I used the OS's built-in prompt to download/install .NET 3.5 and always get error code 0x800F0906. Upon further research, I found I would have to pop in my Windows 8 CD and install it with this command, where "E:\" is where my CD is mounted: Dism /online /enable-feature /featurename:NetFx3 /All /Source:E:\sources\sxs /LimitAccess This and any derivative of it (e.g., removing /LimitAccess) has not worked for me and has either given me the same error code (0x800F0906) or a different one, 0x800F081F. I've even copied the sxs folder to my hard drive, just in case something was going on with the CD drive, only to have the same results. In that case, I used this command line: Dism /online /enable-feature /featurename:NetFx3 /All /Source:C:\dotnet35 /LimitAccess I find this surreal because in both cases the files are indeed there, but DISM thinks they're not. Here's the CBS.log file. Any ideas on how to fix this? Any help is very appreciated :) EDIT: I now have a proper dism.log file; I'm not sure what happened to the last one or why it did that. Here's the link to the new log file. It's interesting to note that it doesn't recognize some of the arguments, such as "featurename" or "source".
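
    For reference, the same feature can also be turned on from an elevated PowerShell prompt on Windows 8.1; this is just the cmdlet equivalent of the DISM line above (the E:\sources\sxs path is the one from the question), not a different fix:

        Enable-WindowsOptionalFeature -Online -FeatureName NetFx3 -All -LimitAccess -Source "E:\sources\sxs"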

    Read the article

  • Unable to get prosody running on Ubuntu 10.04 (lua issues)

    - by user90374
    All this is performed on Ubuntu 10.04.4 LTS Server. I installed Lua 5.1.4 following this procedure - http://ubuntuforums.org/showthread.php?t=1874860 I installed prosody with the following command (after downloading the package) - sudo dpkg -i prosody_0.8.2-1_i386.deb After installation, I get the error below. I have tried using luarocks and sudo apt-get install as suggested to fix these, but it still keeps showing me these errors. Selecting previously deselected package prosody. (Reading database ... 59416 files and directories currently installed.) Unpacking prosody (from prosody_0.8.2-1_i386.deb) ... Setting up prosody (0.8.2-1) ... * Starting Prosody XMPP Server prosody ************** Prosody was unable to find luaexpat This package can be obtained in the following ways: Source: www.keplerproject.org/luaexpat/ Debian/Ubuntu: sudo apt-get install liblua5.1-expat0 luarocks: luarocks install luaexpat luaexpat is required for Prosody to run, so we will now exit. More help can be found on our website, at prosody.im/doc/depends ************ Prosody was unable to find luasocket This package can be obtained in the following ways: Source: www.tecgraf.puc-rio.br/~diego/professional/luasocket/ Debian/Ubuntu: sudo apt-get install liblua5.1-socket2 luarocks: luarocks install luasocket luasocket is required for Prosody to run, so we will now exit. More help can be found on our website, at prosody.im/doc/depends ************ Prosody was unable to find LuaSec This package can be obtained in the following ways: Source: www.inf.puc-rio.br/~brunoos/luasec/ Debian/Ubuntu: prosody.im/download/start#debian_and_ubuntu luarocks: luarocks install luasec SSL/TLS support will not be available More help can be found on our website, at prosody.im/doc/depends [fail] invoke-rc.d: initscript prosody, action "start" failed. dpkg: error processing prosody (--install): subprocess installed post-installation script returned error exit status 1 Processing triggers for man-db ... Processing triggers for ureadahead ... Errors were encountered while processing: prosody Thanks a lot for your patience and answers.
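
    For reference, the error text itself names the missing Lua libraries and the Ubuntu packages that provide them, so the packaged route it suggests would look something like this (LuaSec is optional unless TLS is needed; per the output it can come from luarocks or the Prosody repository instead):

        sudo apt-get install liblua5.1-expat0 liblua5.1-socket2
        # then re-run the install so the postinst script can start the server
        sudo dpkg -i prosody_0.8.2-1_i386.deb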

    Read the article

  • Flushing iptables broke my pipe, how can I save my instance?

    - by Niels
    I was setting up my iptables when I performed an iptables -F and my SSH pipe broke. This is the last output of my session: root@alfapaints:~# iptables -L Chain INPUT (policy DROP) target prot opt source destination ACCEPT all -- anywhere anywhere ACCEPT tcp -- anywhere anywhere state NEW,ESTABLISHED tcp dpt:2222 ACCEPT tcp -- li465-68.members.linode.com anywhere state NEW,ESTABLISHED tcp dpt:nrpe ACCEPT tcp -- anywhere anywhere tcp dpt:9200 state NEW,ESTABLISHED ACCEPT tcp -- anywhere anywhere tcp dpt:http state NEW,ESTABLISHED ACCEPT udp -- anywhere anywhere udp spt:domain Chain FORWARD (policy DROP) target prot opt source destination Chain OUTPUT (policy DROP) target prot opt source destination ACCEPT all -- anywhere anywhere ACCEPT tcp -- anywhere anywhere state ESTABLISHED tcp spt:2222 ACCEPT tcp -- anywhere anywhere state ESTABLISHED tcp spt:nrpe ACCEPT tcp -- anywhere anywhere tcp spt:9200 state ESTABLISHED ACCEPT tcp -- anywhere anywhere tcp spt:http state ESTABLISHED ACCEPT udp -- anywhere anywhere udp dpt:domain root@alfapaints:~# iptables -F Write failed: Broken pipe I tested my connection just before and I was able to connect with SSH. Now I did an nmap scan and not a single port is open anymore. My VPS is running on VMware ESXi; could a reboot help? Or, if not, could I attach and mount the disk to another VM to save the data? Does anybody have some advice? And maybe an explanation of what happened or what could have caused my pipe to break? PS: I didn't save my rules in iptables' config directories, but used a file I stored in ~/rules.config to apply my rules like this: iptables-restore < rules.config. So a reboot would probably help? Thanks a lot in advance.
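
    For context: with the chain policies set to DROP (as the iptables -L output above shows), flushing removes all of the ACCEPT rules but leaves the DROP policies in place, so every packet, including the SSH session, is presumably being dropped. Since the rules were only loaded from ~/rules.config and not saved as boot-time config, a reboot should come back up with empty, default-ACCEPT tables. The usual safe flush sequence resets the policies first; a sketch:

        iptables -P INPUT ACCEPT
        iptables -P FORWARD ACCEPT
        iptables -P OUTPUT ACCEPT
        iptables -F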

    Read the article

  • Possible to IPSec VPN Tunnel Public IP Addresses?

    - by caleban
    A customer uses an IBM SAS product over the internet. Traffic flows from the IBM hosting data center to the customer network through Juniper VPN appliances. IBM says they're not tunneling private IP addresses. IBM says they're tunneling public IP addresses. Is this possible? What does this look like in the VPN configuration and in the packets? I'd like to know what the source/destination ip/ports would look like in the encrypted tunneled IPSec Payload and in the IP packet carrying the IPSec Payload. IPSec Payload: source:1.1.1.101:1001 destination:2.2.2.101:2001 IP Packet: source:1.1.1.1:101 destination:2.2.2.1:201 Is it possible to send public IP addresses through an IPSec VPN tunnel? Is it possible for IBM to send a print job from a server on their network using the static-nat public address over a VPN to a printer at a customer network using the printer's static-nat public address? Or can a VPN not do this? Can a VPN only work with interesting traffic from and to private IP addresses?

    Read the article

  • Seeking web-based FTP client for very large file upload

    - by Paul M. Nguyen
    I have looked around for these for some time... the limits imposed by the web server and/or the dynamic programming environment (e.g. PHP) are far too restrictive for the application I'm working on. We need to be able to move large graphics and video files to and from clients (ranging from tens of MB to a few GB in a single file). Plain FTP with a proper desktop client will do the trick, and we're hosting this in Amazon EC2 with EBS. User management will be done from the office via webmin. Users are chroot-jailed into their home dir by proftpd. net2ftp will work for many clients, but we often need to move single files that approach 1GB or exceed 2-3GB, which is way out of the range of any HTTP-based uploader. So we turn to Java or Flash - can they do it: establish an FTP connection from within the web browser and grab a huge file? There are licensed applets and such out there, but none seem convincing. Again, I'm looking for some code that can speak FTP and read (& write?) the local disk, that is delivered in a web browser, and can move single files of 2GB+. The reason for having a web-based interface to FTP is to skip the software installation step for our clients. I will consider proper desktop client software as long as it's "portable", covers at least Windows and Mac, and can be easily configured by non-technical users in a hurry.

    Read the article

  • Slow File Copy observed copying 40GB files across network to iSCSI device

    - by Rick
    Here's a curious one for the gurus: Setup: Source Machine: Windows Server 2003 R2 machine with local hard drive. VHD file of 40GB. 1 x 1Gbps network card, Cat6 cable, switch. Target Machine: Windows Server 2008 R2 machine with iSCSI connection to iSCSI target on separate machine (1TB, RAID5). 1 x 1Gbps network card, Cat6 cable, connected to same switch as for Source Machine. Second 1Gbps network card, Cat6 cable, connected via isolated switch to the iSCSI target. Switches are Netgear JGS524 model (web managed). If I copy from the Win2003R2 machine to the Win2008R2 machine's local drive I get 40GB in 45 minutes, 36 seconds. If I copy from the Win2008R2 machine to the iSCSI target (local drive to iSCSI target) I get 40GB in 37 minutes, 56 seconds. If I copy from the Win2003R2 machine to the iSCSI target via the Win2008R2 machine I get 40GB in 3 hours, 50 minutes, 24 seconds. All copies were done via the following command issued on the Win2008R2 box: XCOPY <source> <target> /J XCOPY /J - Copies using unbuffered I/O. Recommended for very large files. So, what's the bit I'm missing here? Why does a back-to-back copy take in total 1 hour, 23 minutes, 32 seconds when a "straight through" copy takes almost 3 times as long? Switches show no errors, and the network hovers around the 3% utilisation mark for the duration of the copy (whereas the "back-to-back" copies are around the 25% utilisation mark). What have I missed?

    Read the article

  • Selenium server won't start

    - by moff
    I'm getting the following error when trying to start selenium: C:\Temp\selenium-server-1.0.3>java -jar selenium-server.jar 22:02:07.615 INFO - Java: Sun Microsystems Inc. 16.0-b13 22:02:07.617 INFO - OS: Windows 7 6.1 x86 22:02:07.625 INFO - v2.0 [a2], with Core v2.0 [a2] 22:02:07.811 INFO - RemoteWebDriver instances should connect to: http://127.0.0.1:4444/wd/hub 22:02:07.813 INFO - Version Jetty/5.1.x 22:02:07.815 INFO - Started HttpContext[/selenium-server/driver,/selenium-server/driver] 22:02:07.817 INFO - Started HttpContext[/selenium-server,/selenium-server] 22:02:07.818 INFO - Started HttpContext[/,/] 22:02:07.866 INFO - Started org.openqa.jetty.jetty.servlet.ServletHandler@2bbd86 22:02:07.867 INFO - Started HttpContext[/wd,/wd] 22:02:07.870 WARN - Failed to start: [email protected]:4444 Exception in thread "main" org.openqa.jetty.util.MultiException[java.net.SocketException: Unrecognized Windows Sockets error: 0: JVM_Bind] at org.openqa.jetty.http.HttpServer.doStart(HttpServer.java:686) at org.openqa.jetty.util.Container.start(Container.java:72) at org.openqa.selenium.server.SeleniumServer.start(SeleniumServer.java:396) at org.openqa.selenium.server.SeleniumServer.boot(SeleniumServer.java:234) at org.openqa.selenium.server.SeleniumServer.main(SeleniumServer.java:198) java.net.SocketException: Unrecognized Windows Sockets error: 0: JVM_Bind at java.net.PlainSocketImpl.socketBind(Native Method) at java.net.PlainSocketImpl.bind(Unknown Source) at java.net.ServerSocket.bind(Unknown Source) at java.net.ServerSocket.(Unknown Source) at org.openqa.jetty.util.ThreadedServer.newServerSocket(ThreadedServer.java:391) at org.openqa.jetty.util.ThreadedServer.open(ThreadedServer.java:477) at org.openqa.jetty.util.ThreadedServer.start(ThreadedServer.java:503) at org.openqa.jetty.http.SocketListener.start(SocketListener.java:204) at org.openqa.jetty.http.HttpServer.doStart(HttpServer.java:716) at org.openqa.jetty.util.Container.start(Container.java:72) at org.openqa.selenium.server.SeleniumServer.start(SeleniumServer.java:396) at org.openqa.selenium.server.SeleniumServer.boot(SeleniumServer.java:234) at org.openqa.selenium.server.SeleniumServer.main(SeleniumServer.java:198) Java is installed: C:\Temp\selenium-server-1.0.3>java -version java version "1.6.0_18" Java(TM) SE Runtime Environment (build 1.6.0_18-b07) Java HotSpot(TM) Client VM (build 16.0-b13, mixed mode, sharing) Thanks in advance
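
    A JVM_Bind failure on the listener usually means the port can't be bound, most often because something else is already listening on it; a quick hedged check on Windows (4444 is the default port shown in the log, and the PID filter value below is whatever the first command reports):

        netstat -ano | findstr :4444
        REM replace 1234 below with the PID that netstat reported for the listener
        tasklist /FI "PID eq 1234"

    If the port really is taken, the server can also be started on a different port, e.g. java -jar selenium-server.jar -port 5555.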

    Read the article

  • Haproxy not properly passing on X-Forwarded-For header

    - by JesseP
    I have backend web servers that receive requests by way of haproxy-nginx-fastcgi. The web app used to see multiple IPs coming through in the X-Forwarded-For header, chained together with commas (most original IP on the left). At some point in the recent past (just noticed, so not sure what caused it) something changed, and now I'm only seeing a single IP passed in the header to my web application. I've tried with haproxy 1.4.21 and 1.4.22 (recent upgrade) with the same behavior. Haproxy has the forwardfor header set: option forwardfor Nginx fastcgi_params config defines this header to be passed to the app: fastcgi_param HTTP_X_FORWARDED_FOR $http_x_forwarded_for; Anyone have any ideas on what might be going wrong here? EDIT: I just started logging the $http_x_forwarded_for variable in nginx logs, and nginx is only ever seeing a single IP, which shouldn't ever be the case, as we should always see our haproxy IP added in there, right? So the issue must either be in nginx's handling of the variable coming in, or haproxy not building it properly. I'll keep digging... EDIT #2: I enabled request and response header logging in HAProxy, and it is not spitting anything out for X-Forwarded-For, which seems very odd: Oct 10 10:49:01 newark-lb1 haproxy[19989]: 66.87.95.74:47497 [10/Oct/2012:10:49:01.467] http service/newark2 0/0/0/16/40 301 574 - - ---- 4/4/3/0/0 0/0 {} {} "GET /2zi HTTP/1.1" O Here are the options I set for this in my frontend: mode http option httplog capture request header X-Forwarded-For len 25 capture response header X-Forwarded-For len 25 option httpclose option forwardfor EDIT #3: It really seems like haproxy is munging the header and just passing on a single one to the backend. This is having a fair impact on our production service, so if anyone has any ideas it would be greatly appreciated. I'm stumped... :(
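
    One way to pin down which side is dropping the chain is to log, on the nginx side, both the connecting address and the header exactly as it arrives, so the haproxy-added value can be compared with what the client claims to have sent; a minimal sketch (the format name and log path are made up, and the log_format line belongs in the http block):

        log_format xff '$remote_addr received "$http_x_forwarded_for"';
        access_log /var/log/nginx/xff.log xff;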

    Read the article

  • Debian VM refusing all traffic apart from HTTP

    - by james lewis
    I've got a VM with a fresh install of Debian (wheezy) and I've installed node and mongo on it. The VM is using a bridged network connection, so I was expecting to be able to point my host machine's browser at the IP address of the Debian VM (port 1337 for my node example or port 28017 for my mongo status page) and see one of the two services (node or mongo). My requests are refused though. As far as I can tell Debian allows all traffic by default and you have to manually configure iptables to drop traffic. I've checked iptables and it says it's set up to allow anything through. It looks like this: root@devbox:/home/jlewis# iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination As a test I set up nginx and I was able to get to the nginx landing page from my host with no problems, so obviously HTTP traffic is allowed. I then set nginx up to forward all traffic upstream to mongo - no problems there, I was able to see the status page. I then did the same for my example node server and again, no problems. So HTTP traffic is fine, but all other traffic is blocked. Anyone know why Debian might be refusing all other traffic, other than iptables being set up to drop it? EDIT - output from netstat -nltp: Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 127.0.0.1:28017 0.0.0.0:* LISTEN 1762/mongod tcp 0 0 0.0.0.0:51028 0.0.0.0:* LISTEN 1541/rpc.statd tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 2462/sshd tcp 0 0 127.0.0.1:1337 0.0.0.0:* LISTEN 2794/node tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 2274/exim4 tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN 1762/mongod tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1510/rpcbind tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 2189/nginx tcp6 0 0 :::22 :::* LISTEN 2462/sshd tcp6 0 0 :::45335 :::* LISTEN 1541/rpc.statd tcp6 0 0 ::1:25 :::* LISTEN 2274/exim4 tcp6 0 0 :::111 :::* LISTEN 1510/rpcbind
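
    For what it's worth, the netstat output above shows nginx and sshd bound to 0.0.0.0 but node (port 1337) and mongod (27017/28017) bound to 127.0.0.1 only, which would produce exactly this behaviour without iptables being involved. If the node service is the stock example server, the loopback bind is explicit in the code and would need changing to something like this (a sketch, not the actual file):

        // sketch: bind the example server to all interfaces instead of loopback only
        var http = require('http');
        http.createServer(function (req, res) {
          res.writeHead(200, {'Content-Type': 'text/plain'});
          res.end('Hello World\n');
        }).listen(1337, '0.0.0.0');

    mongod's listen address is similarly controlled by the bind_ip setting in its config file.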

    Read the article

  • Kernel Logging disabled?

    - by Tiffany Walker
    uname -a Linux host 2.6.32-279.9.1.el6.i686 #1 SMP Tue Sep 25 20:26:47 UTC 2012 i686 i686 i386 GNU/Linux And startup scripts: ls /etc/init.d/ abrt-ccpp certmonger dovecot irqbalance matahari-broker mdmonitor nfs proftpd rpcbind single ypbind abrtd cgconfig functions kdump matahari-host messagebus nfslock psacct rpcgssd smartd abrt-oops cgred haldaemon killall matahari-network mysqld ntpd qpidd rpcidmapd sshd acpid cpuspeed halt ktune matahari-rpc named ntpdate quota_nld rpcsvcgssd sssd atd crond httpd lfd matahari-service netconsole oddjobd rdisc rsyslog sysstat auditd csf ip6tables lvm2-lvmetad matahari-sysconfig netfs portreserve restorecond sandbox tuned autofs cups iptables lvm2-monitor matahari-sysconfig-console network postfix rngd saslauthd udev-post But since installing CSF/LFD I am getting nothing. LFD does not create lfd.log, nor are any firewall blocks being logged in /var/log/messages. This is not natural. I looked for klogd, but maybe I am looking in the wrong place to see if it is enabled? ls /etc/init.d/syslog ls: cannot access /etc/init.d/syslog: No such file or directory Also noticed no syslog? Also noticed this: csf -d 84.113.21.201 Adding 84.113.21.201 to csf.deny and iptables DROP... iptables: No chain/target/match by that name. iptables: No chain/target/match by that name. I've never seen this before and this is a dedicated box. Also: ./csftest.pl Testing ip_tables/iptable_filter...OK Testing ipt_LOG...OK Testing ipt_multiport/xt_multiport...OK Testing ipt_REJECT...OK Testing ipt_state/xt_state...OK Testing ipt_limit/xt_limit...OK Testing ipt_recent...OK Testing xt_connlimit...OK Testing ipt_owner/xt_owner...OK Testing iptable_nat/ipt_REDIRECT...OK Testing iptable_nat/ipt_DNAT...OK RESULT: csf should function on this server iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination
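
    One note: on EL6-based systems the syslog service is rsyslog rather than a script named syslog or klogd, and rsyslog does appear in the init.d listing above, so checking it directly may be more telling; a quick hedged check:

        service rsyslog status
        chkconfig --list rsyslog
        # restart it and watch whether new entries appear
        service rsyslog restart
        tail -f /var/log/messages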

    Read the article

  • Not playing .mov files in WMP, not loading .mov files in Windows Live Movie Maker. What am I missing?

    - by royatl
    My friend needs to create a video, and she has some source files in .MOV format. I assume they are H.264. She can view the files with QuickTime. She has a laptop running Vista and a just-downloaded version of Windows Live Movie Maker (which I'll call LMM for short). LMM shows an 'X' icon when she tries to add one of these files to it. My machine runs Windows 7 Pro and a slightly earlier build of LMM, and has no problems editing video with these source files. I assume she's missing a codec, but what can I tell her? I've looked at the answer that mentions a QuickTime DirectShow Source Filter Plugin. It mentions only playback through WMP, not editing via LMM, but is that what she needs? I didn't have to load anything like that. That project's now done (she punted and used iMovie on a Mac). But I did gain another clue. She could play a 720p .MOV file, but these were 1920x1080 files.

    Read the article

  • Redirect local, not internal, requests using SuSEfirewall2 or an iptables rule

    - by James
    I have a server that is running a web application deployed on Tomcat and is sitting in a test network. We're running SuSE 11 SP1 and have some redirection rules for incoming requests. For example, we don't bind port 80 in Tomcat's server.xml file; instead we listen on port 9640 and have a configuration line in SuSEfirewall2 to redirect port 80 to 9640. This is because Tomcat doesn't run as root and can't open up port 80. My web application needs to be able to make requests to port 80 since that is the port it will be using when deployed. What rule can I add so that local requests get redirected by iptables? I tried looking at this question: How do I redirect one port to another on a local computer using iptables? but the suggestions there didn't seem to help me. I tried running tcpdump on eth0 and then connecting to my local IP address (not 127.0.0.1, but the actual address) but I didn't see any activity. I did see activity if I connected from an external machine. Then I ran tcpdump on lo, again tried to connect and this time I saw activity. So this leads me to believe that any requests made to my own IP address locally aren't getting handled by iptables. Just for reference, here's what my NAT table looks like now: Chain PREROUTING (policy ACCEPT) target prot opt source destination REDIRECT tcp -- anywhere anywhere tcp dpt:http redir ports 9640 REDIRECT tcp -- anywhere anywhere tcp dpt:xfer redir ports 9640 REDIRECT tcp -- anywhere anywhere tcp dpt:https redir ports 8443 Chain POSTROUTING (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination
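
    That observation matches how netfilter handles locally generated packets: they never traverse PREROUTING, so the REDIRECT rules above only apply to traffic arriving on an interface, while local connections go through the OUTPUT chain of the nat table instead. A hedged sketch of the kind of extra rule that usually covers the local case (192.0.2.10 is only a placeholder for the machine's own address):

        iptables -t nat -A OUTPUT -p tcp -d 192.0.2.10 --dport 80 -j REDIRECT --to-ports 9640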

    Read the article

  • Export-Mailbox Error

    - by tuck918
    All, I am using export-mailbox to move some data and it is working fine until I get this error: StatusMessage : Error occurred in the step: Moving messages. Failed to copy messages to the destination mailbox store with error: MAPI or an unspecified service provider. ID no: 00000000-0000-00000000 This is the command I am using: export-mailbox -identity mailboxA -targetmailbox mailboxB -targetfolder folderA -allowmerge We are on SP2 and I am running this under an account that is not a domain or enterprise admin. The account has Exchange Server Administrator permission on both the source and target Exchange mailbox servers. The account is a member of the local Administrators group on both the source and target Exchange mailbox servers. This account has Full Access permission on both the target and source servers. The issue happens at any time and I am only trying to run this on one mailbox, the only mailbox I need to run it on. The event log entry is "Error Exchange Migration Export Mailbox Event 1008". The log under migration logs just shows that it was running okay, then it gives the same error as above: "Error was found for mailboxA ([email protected]) because: Error occurred in the step: Moving messages. Failed to copy messages to the destination mailbox store with error: MAPI or an unspecified service provider. ID no: 00000000-0000-00000000, error code: -1056749164" Any ideas on what to do/try?

    Read the article

  • How do I setup routing for 2 companies with different Internet connections on the same LAN?

    - by Clint Miller
    Here's the setup: 2 companies (A & B) share office space and a LAN. A 2nd ISP is brought in and company A wants its own Internet connection (ISP A) and company B wants its own Internet connection (ISP B). VLANs are deployed internally to separate the 2 companies' networks (company A: VLAN 1, company B: VLAN 2, shared VOIP: VLAN 3). With separate VLANs it's simple enough to use separate DHCP servers (or separate scopes on the same server) to assign the default gateway to each company's gateway for their Internet connection. Static routes can be created on each gateway to point traffic destined for the other company's VLAN or the voice VLAN so that all nodes are reachable as expected. However, I think this is a form of asymmetrical routing, right? (The path from node A1 to node B1 is not the same as the path back from node B1 to node A1.) Can I set up policy-based routing to correct this? In that case, can I assign the same default gateway to every device on all VLANs and create a routing policy on an L3 switch to look at the source address and forward traffic to the appropriate next hop? In that case, I want the routing logic to go like this: If the destination address is known, forward the traffic (traffic destined for a different VLAN). If the destination address is unknown, forward the traffic to ISP A's gateway if the source address is on VLAN A; or forward the traffic to ISP B's gateway if the source address is on VLAN B. Am I thinking about this problem in the correct way? Is there another way to solve this problem that I am overlooking?
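
    As a concrete illustration of the source-based idea (written in Linux iproute2 syntax only because it is compact; the subnets and gateway addresses are made up, and a real deployment would use the L3 switch's own policy-routing configuration):

        # company A's VLAN goes out via ISP A, company B's via ISP B
        ip rule add from 10.0.1.0/24 lookup 101
        ip rule add from 10.0.2.0/24 lookup 102
        ip route add default via 10.0.1.254 table 101   # ISP A gateway
        ip route add default via 10.0.2.254 table 102   # ISP B gateway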

    Read the article

  • What is the fastest way to copy all data to a new, larger hard drive?

    - by SUPER user
    I was certain this would have been covered before, but I cannot find an answer amongst all the almost-duplicates that come up; sorry if I've missed something obvious. I have a full 320 GB disk inside my machine, a new 1 TB disk to replace it, and a USB 2.0 chassis. It is only data on a single partition, no OS/apps involved, and the old drive will be kept somewhere as backup (no secure wiping etc). The simple option would be to put the new disk in the USB chassis, copy files, then swap them over. But for USB pen drives, reading is around 4x faster than writing. If the same is true for a USB SATA chassis (is it?) then it would be significantly faster to swap the drives first and read from the old drive over USB, right? Then the other consideration is that copying lots of files is usually slower than copying a single file of equivalent size. Is Windows 7 smart enough to do everything in a single lump like that, or is there specialised software that should be used instead? (Even if SATA-SATA copying is faster than involving USB, knowing what to do when it isn't an option is useful information.) Summary: Does a USB SATA chassis suffer from a read/write inequality? (like a USB pen drive does, but unlike a direct SATA connection) Can Windows 7 do sequential access? (I can't find confirmation if Robocopy does this.) Or is it necessary to use a bootable CD/USB with something like Clonezilla to achieve sequential copy speeds?

    Read the article

  • Websockets Server with Fault-Tolerance and Durable Message Store

    - by smitchell360
    I am starting to experiment with websockets. Does anyone know of a websockets server (open source or paid) that provides a durable store of the websocket "channel"? All of the examples that I have found do not address durability -- if a websockets server goes down, all "channel" data is lost. Services such as Pusher do not really discuss whether they address the durability issue (and I have not received a response from tech support yet). Happy to roll my own, but would rather not reinvent the wheel. EDIT: I'm not looking for websockets 101 information. That is readily available and understood. I'm looking for a server (open source or paid) that supports websockets and has a durable store for the websocket data so that, in the event that a server fails, a new server can take over where the original one left off. Two main purposes: 1. support failover scenarios contemplated by the websockets Network Working Group http://tools.ietf.org/html/draft-ibc-websocket-dns-srv-02#section-5.1 (most importantly so that missed messages are sent when a client connects to a failover server) 2. support scenarios where new subscribers must receive all past messages that were published. Of course this can be handled at the application layer... but that is not what I am looking for. EDIT: So, after some research, the following installed options seem to be the most robust: Kaazing and Migratory (http://migratory.ro). Hosted services that seem "real": Pusher (great API but no history feature yet) and PubNub (has history). All of the above services have graceful fallback to other communication methods if websockets are not available. I was not able to find any open-source server that provided "out of the box" clustering, fail-over, and a durable message store to play back history. There are some projects that may serve as good starting points, but not exactly what I am looking for.

    Read the article

  • Can't connect to vsftpd from outside network

    - by rick
    I know this has been asked many times before, but nothing seems to resolve my issue. I have vsftpd running on Ubuntu 10.04. I can connect with ftp localhost on the machine. I can connect from another machine in my network. I just cannot connect from outside. The machine is behind an AirPort Extreme managed by AirPort Utility on a Mac. Port 21 is open as per nmap: macmini:~$ nmap localhost Starting Nmap 5.21 ( http://nmap.org ) at 2011-04-10 23:49 EDT Nmap scan report for localhost (127.0.0.1) Host is up (0.00045s latency). Hostname localhost resolves to 2 IPs. Only scanned 127.0.0.1 rDNS record for 127.0.0.1: localhost.localdomain Not shown: 997 closed ports PORT STATE SERVICE 21/tcp open ftp 22/tcp open ssh 631/tcp open ipp netstat says 21 is listening: macmini:~$ netstat -lep --tcp | grep ftp (Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.) tcp 0 0 *:ftp *:* LISTEN iptables: macmini:~$ sudo iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination When I try to connect from my external IP (or a dyndns name which resolves there) it times out ("control connection timed out"). As I know very little about networking, I feel like something may jump out as clearly wrong?

    Read the article

  • Backup plan for Linux webserver in small business?

    - by radman
    Hi, I am currently in the process of writing a backup plan for the webserver in use by my business. I am very new to this area and have a few ideas about how things should work, but am unsure of what tools to use and what sort of restore process is appropriate. I'm looking for something relatively simplistic, and it doesn't have to be 100% paranoid, just enough to give me a reliable backup. Speed is not of the essence and there is not going to be a live fallback in place. The backup will be onto a single HDD that will be stored onsite (no option for offsite as yet). Backups will be taking place weekly. I am constrained by both time and money, which is why I'm aiming for a good-enough solution. Is taking an image of the webserver system drive periodically and using that as the backup appropriate? Should I be testing that the backups restore correctly every time that I perform one? This is a bit broad, but what setup would you use if you were in my place, given the services I am running? Should I add additional machines and split the services? Any advice is much appreciated! See below for server details. Webserver platform: Linux (Ubuntu Server). Running: mail server, svn server, MediaWiki, WordPress, Apache webserver. Hardware: single 500 GB SATA drive. Architecture: single machine behind router (with firewall), accessible to the internet.
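
    Given those constraints (weekly, one onsite disk, good-enough rather than paranoid), one common minimal approach is a dated rsync copy of the data and configs plus a database dump, rather than a full disk image; restores are then just copying files back. This is a sketch only: the mount point, directory paths and database credentials are assumptions that would need adjusting to the actual layout.

        #!/bin/sh
        # weekly backup sketch - mount point, paths and credentials are assumptions
        DEST=/mnt/backup/$(date +%Y-%m-%d)
        mkdir -p "$DEST"
        rsync -aH /etc /var/www /var/lib/mediawiki /srv/svn /var/mail "$DEST/"
        # MediaWiki and WordPress data live in the database, so dump it too
        mysqldump -u root -p'PASSWORD' --all-databases | gzip > "$DEST/mysql.sql.gz"

    Testing a restore at least occasionally (for example, unpacking a dump and serving the wiki from a scratch VM) is cheap insurance that the backups are actually usable, even if it isn't done after every run.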

    Read the article

  • Nmap XML parsing with PowerShell

    - by Craig620
    I am trying to parse the XML output from Nmap and isolate just the host address and the vendor from the osmatch. I've actually done that with the following: select-xml -path nmap.xml -xpath "nmaprun/host/address/@addr|nmaprun/host/os/osmatch/osclass/@vendor" | select -expandproperty node Which produces: #text ----- 10.20.30.1 HP 10.20.30.2 Linux 10.20.30.3 HP What I was not expecting is that it would jam it all into a single column. Silly me would like the address in one column and the vendor in another column. I would like: #addr #vendor ----- ------- 10.20.30.1 HP 10.20.30.2 Linux 10.20.30.3 HP In the several hours I spent learning XPath today, I also realized that this file has a single address for each host, but multiple OS guesses for each host. I would also like to use only the first OS guess in the output. I tried using: -xpath "(nmaprun/host/os/osmatch/osclass/@vendor)[1]" But that truncates the whole data set to a single line of output, instead of limiting it to the first osclass element of each host. I also tried changing the parens to surround only the @vendor element, like .../(@vendor)[1] and .../(@vendor[1]), but both fail with "Expression must evaluate to a node-set." Thanks in advance
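
    One hedged way to get separate columns and only the first OS guess is to iterate over the host nodes and build one object per host, rather than selecting the two attribute sets independently; this sketch assumes PowerShell 3.0+ and the element structure implied by the XPath above (address elements carrying an addrtype attribute, and os/osmatch/osclass under each host):

        [xml]$nmap = Get-Content nmap.xml
        $nmap.nmaprun.host | ForEach-Object {
            # first IPv4 address for the host
            $addr   = ($_.address | Where-Object { $_.addrtype -eq 'ipv4' } | Select-Object -First 1).addr
            # first osclass only, so each host yields one row
            $vendor = ($_.SelectNodes('os/osmatch/osclass') | Select-Object -First 1).vendor
            [PSCustomObject]@{ addr = $addr; vendor = $vendor }
        } | Format-Table -AutoSize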

    Read the article

  • Proper way to use hiera with puppetlabs-spec-helper?

    - by Lee Lowder
    I am trying to write some rspec tests for my modules. Most of them now use hiera. I have a .fixtures.yml: fixtures: repositories: stdlib: https://github.com/puppetlabs/puppetlabs-stdlib.git hiera-puppet: https://github.com/puppetlabs/hiera-puppet.git symlinks: mongodb: "#{source_dir}" and a spec/classes/mongodb_spec.rb: require 'spec_helper' describe 'mongodb', :type => 'class' do context "On an Ubuntu install, admin and single user" do let :facts do { :osfamily => 'Debian', :operatingsystem => 'Ubuntu', :operatingsystemrelease => '12.04' } end it { should contain_user('XXXX').with( { 'uid' => '***' } ) should contain_group('XXXX').with( { 'gid' => '***' } ) should contain_package('mongodb').with( { 'name' => 'mongodb' } ) should contain_service('mongodb').with( { 'name' => 'mongodb' } ) } end end but when I run the spec test, I get: # rake spec /usr/bin/ruby1.8 -S rspec spec/classes/mongodb_spec.rb --color F Failures: 1) mongodb On an Ubuntu install, admin and single user Failure/Error: should contain_user('XXXX').with( { 'uid' => '***' } ) LoadError: no such file to load -- hiera_puppet # ./spec/fixtures/modules/hiera-puppet/lib/puppet/parser/functions/hiera.rb:3:in `function_hiera' # ./spec/classes/mongodb_spec.rb:15 Finished in 0.05415 seconds 1 example, 1 failure Failed examples: rspec ./spec/classes/mongodb_spec.rb:14 # mongodb On an Ubuntu install, admin and single user rake aborted! /usr/bin/ruby1.8 -S rspec spec/classes/mongodb_spec.rb --color failed Tasks: TOP => spec_standalone (See full trace by running task with --trace) Module spec testing is relatively new, as is hiera. So far I have been unable to find any suitable solutions. (The back and forth on puppet-dev was interesting, but not helpful.) What changes do I need to make to get this to work? Installing puppet from a gem and hacking on rubylib isn't a viable solution due to corporate policy. I am using Ubuntu 12.04 LTS + Puppet 2.7.17 + hiera 0.3.0.

    Read the article

  • Splitting the build across the network?

    - by Dandikas
    Is there a known solution for splitting the build process across machines on the network? Use case: We are an average software development company. We own around 50 development workstations (quad core 2.66 GHz, 4 GB RAM, 200 GB RAID). Needless to say, at any single moment not every machine is loaded to the max. There are 5 to 15 projects running simultaneously at any single moment. Obviously all of them are continuously built on the server, then deployed to the proper environment. A single project build takes from 3 to 15 minutes. The problem: Whenever we build 5 projects in a row, the last project is going to be ready after around 25 - 50 minutes. Building in parallel does not solve the problem (the build is only a part of the game; then you need to deploy, run tests, etc.). YES, the correct solution is to add another build server, but "That involves buying new expensive hardware, and we already spent a lot!". Yeah, right (damn them)! Anyway. What about splitting the build among developers' workstations? Let's say whenever we need to build project "A" we check 5 workstations and start the build on all that are not overloaded. The build can be cancelled by a developer if he really needs all the power of his machine, as long as there is at least 1 machine that is still building. After the build is finished, deployment can be performed to a proper environment (hosted on some server, not on a workstation :) ). The bigger the company the more this makes sense to me. Anyone tried something like this? Are there any good practices? Any helpful software?

    Read the article

  • Why is my Drupal registration email considered spam by Gmail? (headers included)

    - by Jasper
    I just created a Drupal website on a uni.cc subdomain that is also brand-new (it has barely had 24 hours to propagate). However, when I signed up for a test account, the confirmation email was marked as spam by Gmail. Below are the headers of the email, which may provide some clues. Delivered-To: *my_email*@gmail.com Received: by 10.213.20.84 with SMTP id e20cs81420ebb; Mon, 19 Apr 2010 08:07:33 -0700 (PDT) Received: by 10.115.65.19 with SMTP id s19mr3930949wak.203.1271689651710; Mon, 19 Apr 2010 08:07:31 -0700 (PDT) Return-Path: <[email protected]> Received: from bat.unixbsd.info (bat.unixbsd.info [208.87.242.79]) by mx.google.com with ESMTP id 12si14637941iwn.9.2010.04.19.08.07.31; Mon, 19 Apr 2010 08:07:31 -0700 (PDT) Received-SPF: pass (google.com: best guess record for domain of [email protected] designates 208.87.242.79 as permitted sender) client-ip=208.87.242.79; Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of [email protected] designates 208.87.242.79 as permitted sender) [email protected] Received: from nobody by bat.unixbsd.info with local (Exim 4.69) (envelope-from <[email protected]>) id 1O3sZP-0004mH-Ra for *my_email*@gmail.com; Mon, 19 Apr 2010 08:07:32 -0700 To: *my_email*@gmail.com Subject: Account details for Test at YuGiOh Rebirth MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8; format=flowed; delsp=yes Content-Transfer-Encoding: 8Bit X-Mailer: Drupal Errors-To: info -A T- yugiohrebirth.uni.cc From: info -A T- yugiohrebirth.uni.cc Message-Id: <[email protected]> Date: Mon, 19 Apr 2010 08:07:31 -0700 X-AntiAbuse: This header was added to track abuse, please include it with any abuse report X-AntiAbuse: Primary Hostname - bat.unixbsd.info X-AntiAbuse: Original Domain - gmail.com X-AntiAbuse: Originator/Caller UID/GID - [99 500] / [47 12] X-AntiAbuse: Sender Address Domain - bat.unixbsd.info X-Source: X-Source-Args: /usr/local/apache/bin/httpd -DSSL X-Source-Dir: gmh.ugtech.net:/public_html/YuGiOhRebirth

    Read the article
