Search Results

Search found 11460 results on 459 pages for 'ip failover'.

  • SQL Server Analysis Services, DNS, AD, Kerberos, Connection Issues

    - by ScaleOvenStove
    Running into a very weird issue while converting servers to Windows 2008/SQL 2008. I have a brand-new server, SERVER_A, set up with Win2k8/SQL2k8, which works, and a server SERVER_B running Windows 2003/SQL 2005. I want to migrate from SERVER_B to SERVER_A. I have all the databases, cubes, etc. set up on SERVER_A and it is mimicking SERVER_B's functionality. Since users are using Excel to connect to SSAS, their connection strings have SERVER_B in them. What I want to do is change DNS on the network so that the name SERVER_B points at the IP of SERVER_A. I have successfully done this with another server, SERVER_C, but I need to do it with SERVER_B. What I found with SERVER_C was that after changing DNS, I had to remove SERVER_C from AD, and then it worked: I could connect to SERVER_C (DB), SERVER_C (SSAS default instance) and SERVER_C (SSAS named instance), and all of it was actually connecting to SERVER_A. I tried to do the same with SERVER_B, and no luck. I changed DNS, removed it from AD, and it wouldn't connect. I found out that there were some SPNs set up in AD, so I removed those and tried again. I could then connect to SERVER_B (DB) and SERVER_B (SSAS named instance), but not SERVER_B (SSAS default instance). I could connect to SERVER_B (SSAS default instance WITH the port number), but I need to be able to connect without the port number. I am at a loss as to why I can't connect to the default instance without a port number. I'm not sure if it is SPNs in AD, another AD issue, or something else. I'm pretty sure it isn't something on the server (because SERVER_C works!). Any insight or suggestions would be greatly helpful!
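
    One hedged avenue to investigate, since Kerberos is in the mix (a theory, not a confirmed diagnosis): a default SSAS instance is reached on TCP 2383 without the SQL Browser service, and the client requests a Kerberos ticket for the SPN class MSOLAPSvc.3 against the name in the connection string. If that SPN is stale or missing for the alias SERVER_B, the portless connection can fail while the port-qualified one behaves differently. A sketch with setspn (account names and domain suffix are placeholders):

        :: list SPNs still registered for the old name
        setspn -Q MSOLAPSvc.3/SERVER_B
        setspn -Q MSOLAPSvc.3/SERVER_B.domain.local

        :: if stale entries exist, delete them, then register the alias
        :: against the account actually running SSAS on SERVER_A
        setspn -D MSOLAPSvc.3/SERVER_B OLDACCOUNT
        setspn -S MSOLAPSvc.3/SERVER_B DOMAIN\ssas_svc
        setspn -S MSOLAPSvc.3/SERVER_B.domain.local DOMAIN\ssas_svc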

  • DNS Does Not Register at Off-site Locations

    - by Russ Warren
    First of all, let me give you the specifics of our setup:
      - Windows Small Business Server 2008 domain, with all applicable updates on the DC
      - The DC does DHCP for the main site
      - The DC does DNS for all sites
      - 3 sites, including our headquarters where the DC is located
      - All sites are connected through OpenVPN SSL tunnels terminated by an Untangle box at each site
      - The 2 remote sites use the Untangle box as the DHCP server for their subnet, which assigns the DC as the primary DNS server
      - A collection of Windows XP and Windows 7 workstations connected to the domain
    Here's the issue: all of the workstations at the main site register with the DNS server on the domain controller fine. As they grab an IP from the DHCP server, it updates the DNS server with the new host record. I have 2 systems (each at a different remote site) that fail to register with the DNS server. I've attempted the following troubleshooting steps:
      - Confirmed the network adapter is using the DC as a DNS server
      - Confirmed 2-way traffic is possible between the DC and the workstation
      - Verified the "Register with DNS server" setting was checked in the adapter properties
      - Attempted ipconfig /registerdns and received no errors
    For the time being, I have set up a DHCP reservation for these systems and manually created host records. This seems to work fine, but I need a solution for any new systems that go out there.
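
    Two hedged things worth checking on the failing clients: whether their primary DNS suffix matches the AD zone (dynamic registration quietly fails when the client's full name doesn't belong to any zone the DC will update), and what the DNS Client event log records after a forced registration. Hostnames below are placeholders:

        :: on an affected remote workstation (elevated cmd)
        ipconfig /all                :: check "Primary Dns Suffix" matches the AD zone
        nslookup dc1.example.local   :: confirm the DC answers DNS over the tunnel
        ipconfig /registerdns
        :: then check the System event log for DNS Client events (e.g. 8015),
        :: which name the exact record and server the registration failed against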

  • VMware ARP/Mac Networking

    - by Ross Wilson
    Hi guys, I am very interested in how VMware networking works. I have scoured the VMware website and read their data sheets, which has given me some basic knowledge; I now have some questions. Let's assume that we have a physical server running the VMware hypervisor, and the physical server is running a virtual machine. The physical box has one physical NIC. The NIC is connected to a switch, as is a desktop client. Now, this is where my first question lies. The VM has an IP address: 192.168.1.1. How do desktop clients on the network communicate with this VM? So, the client pings 192.168.1.1. The ping packet is sent to the switch. The switch checks its MAC address table and sees that 192.168.1.1 is associated with the MAC address of the physical NIC. Correct? I then assume that the ping packet is sent to the server's physical NIC, where the hypervisor routes the packet to the VM that's using 192.168.1.1? Please could you give me a rundown of how VM networking works? Many thanks, Ross
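
    A hedged illustration, assuming the VM uses bridged networking (the usual arrangement on server hypervisors): the VM's virtual NIC has a MAC address of its own, so it is that MAC, not the physical NIC's, that answers the client's ARP request for 192.168.1.1. The physical switch merely learns that this MAC sits behind the port leading to the host, and the virtual switch delivers matching frames to the VM. This is observable from the desktop client:

        # ping the VM, then inspect the ARP cache
        ping 192.168.1.1
        arp -a
        # expected: 192.168.1.1 maps to the VM's *virtual* MAC (VMware OUIs
        # such as 00:50:56 or 00:0C:29), not to the host NIC's MAC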

  • VirtualHost not using correct SSL certificate file

    - by Shawn Welch
    I got a doozy of a setup with my virtual hosts and SSL. I found the problem; now I need a solution. The way I have my virtual hosts and server names set up, the LAST VirtualHost directive is associating its SSL certificate file with the ServerName, regardless of IP address or ServerAlias. In this case, SSL on www.site1.com is using the cert file that is established on the last VirtualHost, www.site2.com. Is this how it is supposed to work? This seems to be happening because both of them use the same ServerName, but I wouldn't have thought that would be a problem. I am using the same ServerName on purpose and I really can't change that, so I need a good fix for this. Yes, I could buy another UCC SSL and put them both on it, but I have already done that; these are actually UCC SSLs already. They just happen to be two different UCC SSLs.

        <VirtualHost 11.22.33.44:80>
            ServerName somename
            ServerAlias www.site1.com
            UseCanonicalName On
            RewriteEngine On
            RewriteOptions Inherit
        </VirtualHost>

        <VirtualHost 11.22.33.44:443>
            ServerName somename
            ServerAlias www.site1.com
            UseCanonicalName On
            SSLEngine on
            SSLCertificateFile /usr/local/apache/conf/ssl.crt/cert1.crt
            SSLCertificateKeyFile /usr/local/apache/conf/ssl.key/cert1.key
            SSLCertificateChainFile /usr/local/apache/conf/chain/gd_bundle.crt
            RewriteEngine On
            RewriteOptions Inherit
        </VirtualHost>

        <VirtualHost 55.66.77.88:80>
            ServerName somename
            ServerAlias www.site2.com
            UseCanonicalName On
            RewriteEngine On
            RewriteOptions Inherit
        </VirtualHost>

        <VirtualHost 55.66.77.88:443>
            ServerName somename
            ServerAlias www.site2.com
            UseCanonicalName On
            SSLEngine on
            SSLCertificateFile /usr/local/apache/conf/ssl.crt/cert2.crt
            SSLCertificateKeyFile /usr/local/apache/conf/ssl.key/cert2.key
            SSLCertificateChainFile /usr/local/apache/conf/chain/gd_bundle.crt
            RewriteEngine On
            RewriteOptions Inherit
        </VirtualHost>
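
    One hedged diagnostic before changing anything: ask Apache how it has actually parsed and grouped these vhosts, since identical ServerName values can make two vhosts indistinguishable during name-based matching.

        # real Apache command: dumps the parsed vhost table
        apachectl -S

        # a sketch of one possible fix (an assumption, not a confirmed cure):
        # give each SSL vhost a unique ServerName so the duplicate name can
        # never collapse them together, and keep the public name as an alias
        <VirtualHost 11.22.33.44:443>
            ServerName site1-internal   # placeholder, any unique name
            ServerAlias www.site1.com somename
            ...
        </VirtualHost>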

  • Postfix rewrite sender: why doesn't this work

    - by Nick Coleman
    I have server A with an IP address only and a dummy FQDN (on the basis that all machines should have a FQDN): pants.net.invalid. All mail is relayed through another server elsewhere, which works fine. On server A, Postfix rewrites the sender address with smtp_generic_maps = hash:/etc/postfix/generic. According to the rewriting manual at http://www.postfix.org/ADDRESS_REWRITING_README.html#remote, this should rewrite the Sender address on all outgoing external mail:

        $ cat /etc/postfix/generic
        @pants.net.invalid    [email protected]

    but it does not. postmap -q [email protected] returns nothing. This works:

        [email protected]    [email protected]

    It seems as though it is doing exact matching even though I specify type hash:. Clearly I am misunderstanding the manual. I don't want to use regex or pcre tables because there are only a couple of users (root and two others) and I don't want the overhead. I can specify the users exactly and it works. But I would like to know what I am misunderstanding, for future reference. Thanks.
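
    A hedged explanation consistent with the generic(5) documentation: postmap -q performs a single literal key lookup, whereas Postfix's generic mapping tries the full address first and then falls back to the bare @domain key, so the wildcard entry only "misses" in manual queries. A quick check, assuming the map lives at the path above:

        # hash: tables must be recompiled after every edit
        postmap /etc/postfix/generic

        # query the @domain key literally -- this is the one lookup that
        # postmap -q actually performs
        postmap -q "@pants.net.invalid" hash:/etc/postfix/generic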

  • Why can't I unblock postgres with shorewall?

    - by ryeguy
    I can't seem to unblock the port needed for Postgres using Shorewall. I am developing a PHP app on my Windows machine here, and then I upload it to my Linux box to actually use it. The Linux box runs the PHP files as well as hosting the DB server. Since I need it working from both machines, my PHP code refers to the database by the full IP instead of localhost. I can easily connect to Postgres from my Windows machine, but ironically, my PHP app can't connect to Postgres even though it's on the same box. Here's what I have in /etc/shorewall/rules:

        #macro/action       src   dest
        PostgreSQL/ACCEPT   net   $FW
        PostgreSQL/ACCEPT   loc   $FW
        PostgreSQL/ACCEPT   loc   dmz
        PostgreSQL/ACCEPT   net   dmz
        PostgreSQL/ACCEPT   loc   net
        PostgreSQL/ACCEPT   dmz   $FW
        PostgreSQL/ACCEPT   dmz   loc
        PostgreSQL/ACCEPT   dmz   net
        PostgreSQL/ACCEPT   dmz   dmz

    Clearly I have a ton of crap there. The first line is all I needed to let my Windows machine connect. All the lines after it are me just trying everything to get it to work. What am I missing?
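
    A hedged observation: none of those rules covers traffic from the firewall box to itself, which is exactly what PHP connecting to the local Postgres over the machine's own IP generates. A minimal sketch, assuming the default $FW-to-$FW policy isn't ACCEPT:

        # /etc/shorewall/rules -- allow the box to reach its own Postgres
        PostgreSQL/ACCEPT   $FW   $FW

        # then reload
        shorewall restart

    Alternatively, pointing the PHP connection string at 127.0.0.1 sidesteps the zone rules entirely, provided postgresql.conf has listen_addresses set to include localhost.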

  • rsync over ssh backup failing after relocation of server

    - by OlduvaiHand
    I've got two FreeBSD machines set up; one serves video data and the other is the backup for the first. At this point I've got around 4TB of data. I add files to the video server a few at a time, and was planning to use rsync over ssh to keep the backup machine up to date. The initial, large backup went fine with both machines hooked up to the same subnet at the lab. Then, when I moved the backup machine off-site (but still on the university network), I attempted a sync without changing anything other than the IP (as the machine is now on a different subnet) and got the following error:

        2010/03/22 15:55:21 [1260] rsync: connection unexpectedly closed (6340840244 bytes received so far) [receiver]
        2010/03/22 15:55:21 [1260] rsync error: error in rsync protocol data stream (code 12) at io.c(601) [receiver=3.0.7]
        2010/03/22 15:55:21 [1258] rsync: connection unexpectedly closed (60 bytes received so far) [generator]
        2010/03/22 15:55:21 [1258] rsync error: unexplained error (code 255) at io.c(601) [generator=3.0.7]

    The script that handles the backup hasn't been changed, nor has the crontab that invokes it. Does anyone have any ideas about what might be causing the hiccup? I was under the impression that it might have something to do with the ssh connection timing out or something along those lines, but am not entirely clear on how to diagnose the cause of the problem.
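
    If the timeout hunch is right (an assumption, since the logs don't say), ssh keepalives are a cheap test: they keep packets flowing while rsync spends long stretches scanning 4TB without transferring data, which is exactly when idle-timeout firewalls between subnets tend to cut the session. Paths and hostname below are placeholders for whatever the backup script uses:

        rsync -az --partial \
            -e "ssh -o ServerAliveInterval=30 -o ServerAliveCountMax=6" \
            /data/video/ backup.example.edu:/data/video/

    If the drops keep happening at a similar byte count each run, a path-MTU problem between the subnets becomes the more likely suspect.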

  • Samba server NETBIOS name not resolving, WINS support not working

    - by Eric
    When I try to connect to my CentOS 6.2 x86_64 server's Samba shares using the address \\REPO (NetBIOS name REPO), it times out and shows an error; if I connect directly via IP, it works fine. Furthermore, my server does not work correctly as a WINS server, despite my Samba settings being correct for it (see below for details). If I stop the iptables service, things work properly. I'm using this page as a reference for which ports to use: http://www.samba.org/samba/docs/server_security.html Specifically:

        UDP/137 - used by nmbd
        UDP/138 - used by nmbd
        TCP/139 - used by smbd
        TCP/445 - used by smbd

    I really, really want to keep the secure iptables design I have below, but just fix this particular problem.

    smb.conf:

        [global]
        netbios name = REPO
        workgroup = AWESOME
        security = user
        encrypt passwords = yes

        # Use the native linux password database
        #passdb backend = tdbsam

        # Be a WINS server
        wins support = yes

        # Make this server a master browser
        local master = yes
        preferred master = yes
        os level = 65

        # Disable print support
        load printers = no
        printing = bsd
        printcap name = /dev/null
        disable spoolss = yes

        # Restrict who can access the shares
        hosts allow = 127.0.0. 10.1.1.

        [public]
        path = /mnt/repo/public
        create mode = 0640
        directory mode = 0750
        writable = yes
        valid users = mangs repoman

    iptables configure script:

        # Remove all existing rules
        iptables -F

        # Set default chain policies
        iptables -P INPUT DROP
        iptables -P FORWARD DROP
        iptables -P OUTPUT DROP

        # Allow incoming SSH
        iptables -A INPUT -i eth0 -p tcp --dport 22222 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 22222 -m state --state ESTABLISHED -j ACCEPT

        # Allow incoming HTTP
        #iptables -A INPUT -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
        #iptables -A OUTPUT -o eth0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT

        # Allow incoming Samba
        iptables -A INPUT -i eth0 -p udp --dport 137 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p udp --sport 137 -m state --state ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -p udp --dport 138 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p udp --sport 138 -m state --state ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -p tcp --dport 139 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 139 -m state --state ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth0 -p tcp --dport 445 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 445 -m state --state ESTABLISHED -j ACCEPT

        # Make these rules permanent
        service iptables save
        service iptables restart
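
    A hedged theory that fits "works with iptables off": NetBIOS name lookups arrive as broadcasts, and conntrack cannot pair nmbd's unicast reply with a broadcast query, so the reply leaves the box in state NEW and the ESTABLISHED-only OUTPUT rule drops it. Two things worth testing:

        # 1. load the NetBIOS name-service conntrack helper so broadcast
        #    replies are tracked as RELATED (module name on CentOS 6)
        modprobe nf_conntrack_netbios_ns

        # 2. and/or let replies from UDP 137 leave even when conntrack
        #    classifies them as NEW
        iptables -A OUTPUT -o eth0 -p udp --sport 137 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT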

  • Windows 7 image in VMware will allow network connections out but not HTTP

    - by Ormis
    I am currently trying to create a set of images to deploy on my network, but I've run into a snag. When I create my own Windows 7 image I can successfully use NAT to reach the network, but whenever I try to access a webpage I get nothing. To be more specific: all firewalls/iptables are disabled on my host machine, my virtual machine, and my network. I can do lookups and all addresses respond correctly (I'm even using Google's DNS). On the host OS I have full connectivity. On the virtual machine I can ping any device I want and all addresses resolve correctly, but within a browser I cannot reach any page via hostname or IP. It feels almost like port 80 is being blocked, but I can't find any reason that would be the case. If anyone has had this occur before, I would love some insight into the problem. I understand this question is a bit out of the norm for Stack Overflow, but I've run out of ideas. Thank you for any help you can provide.
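
    Two quick checks that would separate a TCP/80 problem from a client-side proxy problem, offered as a sketch (the target IP is a placeholder for any web server the VM can already ping):

        :: enable the telnet client once (Windows 7 ships it disabled)
        dism /online /Enable-Feature /FeatureName:TelnetClient

        :: a blank session means TCP/80 connects fine
        telnet 203.0.113.10 80

        :: stale system-wide proxy settings break browsers but not ping/DNS
        netsh winhttp show proxy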

  • Providing access to a Samba server for VPN clients

    - by Kamil Kisiel
    We have some Windows users who connect to our network via VPN from home. They need to be able to connect to our Samba server and access a mapped network drive just as they do when they are on our LAN. The complication is that VPN clients are placed on a subnet other than our office LAN, and behind a firewall. What's the easiest way for me to allow them to still connect to the network share? The solutions I've seen so far involve setting up a WINS server for name resolution purposes and then tunnelling a bunch of the NetBIOS traffic through the firewall. However, that means I'd have to set up the VPN DHCP server to hand out the WINS address, something I'm not even sure is possible on the Cisco hardware we have. I'm thinking there must be an easier way. Should I use an LMHOSTS file? Or just map by IP address? Also, I'm not terribly familiar with Windows networking, so which ports would I need to pass through my firewall in order to get the file sharing through?
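
    For what it's worth as a sketch: modern Windows clients speak SMB directly over TCP 445, which needs no WINS, NetBIOS or LMHOSTS at all if the drive is mapped by IP. Assuming the firewall permits TCP 445 from the VPN subnet to the server (the address below is a placeholder):

        net use Z: \\10.0.5.10\share /persistent:yes

    TCP 139 (and UDP 137/138) only come into play if clients fall back to NetBIOS-style name resolution and session setup.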

  • Sharing two SSL wildcard certificates in memory in nginx

    - by hvtilborg
    I have an nginx server running with two IP addresses, say 1.2.3.4 and 4.3.2.1. Besides that, there are two wildcard SSL certificates, for *.example.net (i.e. wc1, pointing to 1.2.3.4) and *.sub.example.net (i.e. wc2, pointing to 4.3.2.1). The nginx docs mention that you can share a wildcard certificate between server instances like this:

        ssl_certificate     wc1.crt;
        ssl_certificate_key wc1.key;

        server {
            listen 1.2.3.4:443;
            server_name www.example.net;
            ssl on;
            ...
        }

        server {
            listen 1.2.3.4:443;
            server_name test.example.net;
            ssl on;
            ...
        }

    However, I was wondering whether this same construct is possible to use with the second wildcard certificate too. Both domains have around 500 subdomains. Do they not get mixed up, since the ssl_certificate construct is now global?
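
    A sketch of one way out, based on how nginx inheritance works (directives set at http level apply only where a server block doesn't override them): declare wc2 inside the server blocks bound to the second IP, and they never mix with the http-level wc1.

        server {
            listen 4.3.2.1:443;
            server_name www.sub.example.net;   # placeholder name
            ssl on;
            ssl_certificate     wc2.crt;
            ssl_certificate_key wc2.key;
            ...
        }

    Certificates are loaded once at configuration time and selected per matched server block, not per subdomain, so 500 names per wildcard shouldn't add memory pressure.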

  • 1 document pending in printer queue in System Tray that won't go away

    - by White Phoenix
    Edit: I didn't want to do it, but I restarted my computer - cleared the problem right away. Though if I were running a Windows 2003/2008 server, I would hate to have to restart the domain controller just to get rid of this irritating problem. If I run into this problem again, I'm going to try that remove printer/reinstall printer thing. Thanks for your help @Psychogeek. Points for your attempt. Running Windows 7, 32-bit Professional. My printer is an HP OfficeJet Wireless 8500. It's connected to my network wirelessly through TCP/IP as a standalone device. I was having some print problems awhile back and had to do some print spooler stuff as part of my troubleshooting (stopping the Print Spooler service, clearing the print spooler files from C:\Windows\System32\spool\PRINTERS and then restarting the service). I've finally narrowed it down to it being application specific, so that's that. However, as a leftover from all that troubleshooting, my printer icon is stuck in the tray - when I mouseover the icon, Windows says that there is 1 document(s) pending for my username. However, when I open up that printer's queue, there's nothing in there. I restarted the Printer Spooler service and also checked C:\Windows\System32\spool\PRINTERS if there's anything in there - nothing. I did a quick Google search and an answer from one of those "reps" at the Microsoft Socialnet site says for me to uninstall and reinstall the printer. The funny thing is, when I send print jobs, they print just fine - that 1 mystery document stuck in queue isn't stopping anything from happening. Short of having to do that, are there any other quick troubleshooting steps I may be missing?
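
    For reference, the manual spooler reset the poster describes, as an elevated cmd sequence (standard Windows paths), so a reboot isn't needed next time:

        net stop spooler
        del /Q %systemroot%\System32\spool\PRINTERS\*
        net start spooler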

  • OpenVPN: ERROR: could not read Auth username from stdin

    - by user56231
    I managed to set up OpenVPN, but now I want to integrate a username/password authentication method. Even though I haven't added auth-nocache to the server config, whenever I try to connect, the client side returns the following message:

        ERROR: could not read Auth username from stdin

    My server.conf file contains basic stuff; everything works up until I try to implement this form of authentication:

        mode server
        dev tun
        proto tcp
        port 1194
        keepalive 10 120
        plugin /usr/lib/openvpn/openvpn-auth-pam.so login
        client-cert-not-required
        username-as-common-name
        auth-user-pass-verify /etc/openvpn/auth.pl via-env
        ca /etc/openvpn/easy-rsa/2.0/keys/ca.crt
        cert /etc/openvpn/easy-rsa/2.0/keys/server.crt
        key /etc/openvpn/easy-rsa/2.0/keys/server.key
        dh /etc/openvpn/easy-rsa/2.0/keys/dh1024.pem
        user nobody
        group nogroup
        server 10.8.0.0 255.255.255.0
        persist-key
        persist-tun
        #persist-local-ip
        status openvpn-status.log
        verb 3
        client-to-client
        push "redirect-gateway def1"
        push "dhcp-option DNS 10.8.0.1"
        log-append /var/log/openvpn
        comp-lzo

    I searched all over the net for a solution, and all the answers seem to be related to the auth-nocache parameter, which I haven't set. The directive auth-user-pass-verify /etc/openvpn/auth.pl via-env points to a script which is executed to perform the authentication; a false authentication should result in exit 1 and a true one in exit 0. For testing, auth.pl returns exit 0 no matter what the input is, but it seems the file is never executed before the error is raised. auth.pl file contents:

        #!/usr/bin/perl
        my $user = $ENV{username};
        my $passwd = $ENV{password};
        printf("$user : $passwd\n");
        exit 0;

    Any ideas?
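
    One hedged reading of that error: it is produced by the client, not the server. The client has to be told to collect credentials, otherwise it hits end-of-input while reading the username from stdin, and the server-side auth.pl never runs. A minimal client-side sketch:

        # in the client's config/.ovpn
        auth-user-pass
        # or non-interactively, from a file with the username on line 1 and
        # the password on line 2 (path is a placeholder):
        auth-user-pass /etc/openvpn/credentials

    If the client runs as a service with no terminal attached, the file form (or the management interface) is the usual workaround.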

  • BGP path prepended route not listed anywhere

    - by Julien Vehent
    We have a simple multi-homed setup with two routers that advertise our AS to two ISPs. The second ISP (ISP B) is only used as a backup for when ISP A goes down, so we prepended our AS 3 times on that route. I spent a couple of hours this morning poking at looking-glass routers all over the internet, and none of them list our backup route with the prepended path. I checked the South African internet exchange, the London internet exchange, the Oregon internet exchange and a couple dozen ISPs. All of them have multiple routes through ISP A, often with 3 or 4 hops. The route through ISP B should appear somewhere, with 5 or 6 hops, but I couldn't find it anywhere. (I checked the full BGP tables on the looking-glass routers, using show ip bgp 65000.) My questions are:
      1. Is there a limit to the length of an AS path after which most routers will simply discard the route?
      2. Is our backup route even going to work when ISP A goes down, if no router knows about it?
      3. Our two routers are connected over iBGP. Would it be possible that the route through ISP B is not announced because the iBGP session prefers the route through ISP A? This is what non-exist-map and advertise-map are supposed to do, but neither is used on either router.
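
    A relevant piece of BGP behavior, offered as a reasoning step rather than a diagnosis: BGP speakers propagate only their single best path, so every AS between you and a looking glass passes along the short path via ISP A and silently drops the prepended alternative; the prepended announcement is typically visible only inside ISP B itself, whose own looking glass is the place to confirm it is alive. For comparison, a typical Cisco-style prepend toward ISP B looks like this (the ASN matches the poster's example; the neighbor address is a placeholder):

        route-map TO-ISPB permit 10
         set as-path prepend 65000 65000 65000
        !
        router bgp 65000
         neighbor 203.0.113.1 route-map TO-ISPB out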

  • Is there a way to force a spam filter to change its policy, or to remove it as a recognized spam service?

    - by Alvin Caseria
    As per MXToolbox, I have 1 blacklist still active, and it has been for quite some time now: UCEPROTECT Level 1, which runs a 7-day policy counted from the last spam mail. This is too strict compared to the 98 other spam filters out there, as per MXToolbox (or at least compared to the other 4 that detected the problem). I have no problem with our e-mail, since it is hosted locally, but our domain is hosted outside the country and runs on a different IP. I contacted the host, but since it is the spam filter's rule, there's nothing to be done but wait. I do believe services like spam filters should at least be bound by guidelines and standards in this matter; otherwise, problems delivering valid e-mails (after the fix) will be disastrous. Is there a way to force UCEPROTECT to change their policy, or to have them removed as a recognized spam service, apart from contacting them in case they do not answer? Currently they charge for fast removal if you pay by PayPal. I'm still looking for a guideline/standard on how they should operate regarding this matter. Appreciate the help.

  • Karmic iptables missing kernel modules on OpenVZ container

    - by luison
    After an unsuccessful P2V migration of my Ubuntu server to an OpenVZ container, which I am stuck with, I thought I would give a reinstall a try, based on a clean OpenVZ template for Ubuntu 9.10 (from the OpenVZ wiki). When I try to load my iptables rules on the VM, I get errors which I believe are related to kernel modules not being loaded on the VM from the /vz/XXX.conf template model. I've been testing with a few posts I found, but I was stuck on this error:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Could not load /lib/modules/2.6.24-10-pve/modules.dep: No such file or directory
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I read about the template not loading all iptables modules, so I added modules to the XXX.conf of the VZ virtual machine like this:

        IPTABLES="ip_tables iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length ip_conntrack ip_conntrack_ftp ip_conntrack_irc ipt_conntrack ipt_state ipt_helper iptable_nat ip_nat_ftp ip_nat_irc"

    As the error remained, I read that I should rebuild dependencies on the virtual machine with depmod -a, but this returned an error:

        WARNING: Couldn't open directory /lib/modules/2.6.24-10-pve: No such file or directory
        FATAL: Could not open /lib/modules/2.6.24-10-pve/modules.dep.temp for writing: No such file or directory

    So I read about creating the directory empty and re-running depmod -a. I now don't get the dependency error, but I get the following instead, and I don't have a clue how to proceed:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Module ip_tables not found.
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I understand that iptables rules have to be different on the VM, and perhaps some of the rules we are trying to apply (from our physical server) are not compatible, but these are just source-IP and destination-port checks that I would like to have available. I've heard that there are no issues with this on the CentOS template, so I understand it is to do with the VM config. Any help would be greatly appreciated.
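
    A hedged pointer: an OpenVZ container shares the host's kernel, so depmod/modprobe inside the container can never succeed; the modules have to be loaded on the hardware node and then granted to the container. A sketch (container ID 101 is a placeholder):

        # on the hardware node
        modprobe -a ip_tables iptable_filter iptable_mangle ipt_state iptable_nat

        # grant them to the container and restart it so the change takes effect
        vzctl set 101 --iptables "ip_tables iptable_filter iptable_mangle ipt_state iptable_nat" --save
        vzctl restart 101

    Note that the failure is specifically about the 'raw' table, which many OpenVZ kernels simply don't expose to containers; if so, the line referencing it needs to be dropped from the iptables-restore file rather than fixed with modules.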

  • How to set up QoS on ADSL router (terracom) for prioritizing browsing

    - by DBZ_A
    I want to configure the ADSL router which connects 10+ machines to the internet. I want to give maximum priority to browsing (ports 80, 443) and set low priority for BitTorrent etc. (port 42180). I have been experimenting with settings, but with no luck. There are three settings which confuse me; here is my current understanding:
      - 802.1 Priority - related to the LAN level (802.1p); possible values 0-7, where higher numbers mean higher priority.
      - 'Mark traffic priority' - clueless about this.
      - IPP/DS - IP Precedence; possible values 0-7, of which 6 and 7 are reserved, so set 5 for highest priority. Or, when using DSCP, set 46 for highest priority.
    Please help me in getting this done... There is a similar question for another model of router here, but with fewer confusing options :) How to configure QoS on home router
    Update: from discussion on another thread, QoS can control only upstream traffic (from the router to the internet). While this may in turn affect the downstream traffic rate, there is no direct control over data coming into the router.

  • Vyatta masquerade out bridge interface

    - by miquella
    We have set up a Vyatta Core 6.1 gateway on our network with three interfaces:

        eth0 - 1.1.1.1    - public gateway/router IP (to the public upstream router)
        eth1 - 2.2.2.1/24 - public subnet (connected to a second firewall, 2.2.2.2)
        eth2 - 10.10.0.1/24 - private subnet

    Our ISP provided the 1.1.1.1 address for us to use as our gateway. The 2.2.2.1 address is so the other firewall (2.2.2.2) can communicate with this gateway, which then routes the traffic out through the eth0 interface. Here is our current configuration:

        interfaces {
            bridge br100 {
                address 2.2.2.1/24
            }
            ethernet eth0 {
                address 1.1.1.1/30
                vif 100 {
                    bridge-group {
                        bridge br100
                    }
                }
            }
            ethernet eth1 {
                bridge-group {
                    bridge br100
                }
            }
            ethernet eth2 {
                address 10.10.0.1/24
            }
            loopback lo {
            }
        }
        service {
            nat {
                rule 100 {
                    outbound-interface eth0
                    source {
                        address 10.10.0.1/24
                    }
                    type masquerade
                }
            }
        }

    With this configuration it routes everything, but the source address after masquerading is 1.1.1.1, which is correct, because that's the interface the rule is bound to. But because of some of our requirements here, we need it to source from the 2.2.2.1 address instead (what's the point of paying for a class C public subnet if the only address we can send from is our gateway!?). I've tried binding to br100 instead of eth0, but it doesn't seem to route anything if I do that. I imagine I'm just missing something simple. Any thoughts?
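
    A hedged sketch in Vyatta 6.1-era syntax (worth verifying against the release, since NAT keywords changed in later versions): replace masquerade, which always stamps the outbound interface's own address, with source NAT and an explicit outside address.

        set service nat rule 100 type source
        set service nat rule 100 outbound-interface eth0
        set service nat rule 100 source address 10.10.0.0/24
        set service nat rule 100 outside-address address 2.2.2.1
        commit

    One caveat (an assumption): the upstream router must route 2.2.2.0/24 back toward this gateway's eth0, or replies to the translated address will never return.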

  • Setting up 802.1X wireless connection on OSX

    - by hizki
    I am an OSX user; I have Snow Leopard 10.6.5 and an updated AirPort (version 5.5.2). I am trying to connect to my university's wireless network, which uses 802.1X security that I am having trouble configuring... There are instructions here for connecting with Windows XP, Windows 7 and Linux. Can someone please tell me what I should do to set up this network on my Mac? I have had previous success in setting up this network, but I have no idea what I did that made it work. Since I updated my AirPort (to version 5.5.2) it has worked only rarely, and very slowly... Before the update, even when it worked, it never remembered my password.
    Update: I have already tried creating a new "location", removed all the 802.1X user profiles and all the remembered networks, and made sure that in the TCP/IP tab 'Configure IPv4' is set to "Using DHCP". I also moved /Library/Preferences/SystemConfiguration/preferences.plist to my desktop in an attempt to force the system to create a new set of settings. Still I can't get the connection to work.

  • Is there a small business router that shows bandwidth usage graphs in the admin panel?

    - by Robert Drake
    I support a large number of public libraries that are having their networks upgraded in response to a grant application. These libraries are generally home to between 6 and 15 computers and have little or no tech support, either onsite or contracted remotely. In order to justify current and future purchases, a number of the libraries have requested routers that can provide bandwidth usage graphs they can show to their managing boards. Is there a small business router that displays traffic graphs in its administration web interface? The router needs to support DHCP and basic firewalling; no other features are required. Further, the reports just need to show overall trends: it is not necessary to break traffic down by IP, by protocol/application, or by time of day, just an overall week-to-week, month-to-month trend line. I'm familiar with MRTG/PRTG and other tools that collect SNMP data from the router, but the libraries don't have the expertise for the configuration. I've considered installing the Tomato firmware on some cheap home/home-office routers, but a commercial product that can simply be purchased would be significantly simpler, and the library boards would be much more likely to approve the purchase of a commercial product over a 'hacked' one. Any assistance would be appreciated.

  • How can I use two Internet connections in Ubuntu?

    - by Martin
    My goal is to be able to do something like this:

        curl google.com --interface ppp0
        curl google.com --interface p2p2

    ppp0 is a DSL connection, and p2p2 is a separate direct Internet connection. Currently I can only get one of these to work at a time; when I enable one, the other stops working.

    /etc/network/interfaces:

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # DSL
        auto p2p1
        iface p2p1 inet manual

        auto dsl-provider
        iface dsl-provider inet ppp
        pre-up /sbin/ifconfig p2p1 up # line maintained by pppoeconf
        provider dsl-provider

        # DIRECT
        auto p2p2
        iface p2p2 inet dhcp

    ifconfig:

        lo    Link encap:Local Loopback
              inet addr:127.0.0.1 Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING MTU:65536 Metric:1

        p2p1  Link encap:Ethernet
              inet6 addr: fe80::20a:ebff:fe21:99c6/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

        p2p2  Link encap:Ethernet
              inet addr:192.168.1.101 Bcast:192.168.1.255 Mask:255.255.255.0
              inet6 addr: fe80::20a:ebff:fe17:1249/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

        ppp0  Link encap:Point-to-Point Protocol
              inet addr:53.193.231.167 P-t-P:53.193.224.1 Mask:255.255.255.255
              UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1492 Metric:1

    route -n:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref Use Iface
        0.0.0.0         0.0.0.0         0.0.0.0         U     0      0   0   ppp0
        10.0.10.0       0.0.0.0         255.255.255.0   U     0      0   0   eth2
        53.193.224.1    0.0.0.0         255.255.255.255 UH    0      0   0   ppp0
        192.168.1.0     0.0.0.0         255.255.255.0   U     0      0   0   p2p2

    By default, only ppp0 works. If I run "route add default gw 192.168.1.1 p2p2" then I can use p2p2 but ppp0 stops working. If I then run "route add default gw 53.193.224.1 ppp0" then I can use ppp0 again but p2p2 stops working. What can I do to be able to use both interfaces selectively?
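
    A hedged sketch of the usual answer, using iproute2 policy routing so that each source address gets its own default route (table numbers are arbitrary; the addresses come from the ifconfig output above):

        # per-interface routing tables: 100 = DSL, 200 = direct line
        ip route add default dev ppp0 table 100
        ip route add default via 192.168.1.1 dev p2p2 table 200

        # pick the table based on the source address each interface owns
        ip rule add from 53.193.231.167 table 100
        ip rule add from 192.168.1.101 table 200

    Since curl --interface binds the socket to that interface's address, the matching rule then selects the right default route, and both commands should work at once. The ppp0 address changes on reconnect, so in practice the ip rule would be refreshed from an /etc/ppp/ip-up.d/ hook.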

  • New XPC: No video, no ethernet link, but drive spins

    - by Mike Pennington
    I bought a Shuttle XPC SH67H3 with integrated video. I installed:
      - an Intel i5-2450P
      - 16GB DDR3 RAM
      - a SATA hard drive from my old Linux server that is still bootable
    I have both power connectors plugged into the motherboard. I realize that the Intel i5-2450P doesn't have video capabilities; however, the drive spins like it's doing something useful, and it seems like I should get an ethernet link light when I fire this up. I plan to run this headless anyway, so it would be really nice if I could figure out how to run it without a video card at all. I know the IP address and login info for the Linux install on the disk. I plugged in speakers, but get no BIOS beeps when I power it up. Shuttle's BIOS manual has nothing in it that indicates I should have problems in this configuration. My questions:
      - Is there a reason the missing video card would block usage of the ethernet port?
      - Are there settings on the motherboard/BIOS I can change to get this working?

  • Setting Up Apache as a Forward Proxy with Caching

    - by Karl
    I am trying to set up Apache as a forward proxy with caching, but it does not seem to be working correctly. Getting Apache working as a forward proxy was no problem, but no matter what I do it is not caching anything, to disk or memory. I already checked to make sure nothing in the mods-enabled directory conflicts with mod_cache (I ended up commenting it all out), and I also tried moving all of the caching-related directives into the configuration file for mod_cache. In addition, I set up logging for caching requests, but nothing is being written to those logs. Below is my Apache config; any help would be greatly appreciated!!

        <VIRTUALHOST *:8080>
            ProxyRequests On
            ProxyVia On

            #ErrorLog "/var/log/apache2/proxy-error.log"
            #CustomLog "/var/log/apache2/proxy-access.log" common
            CustomLog "/var/log/apache2/cached-requests.log" common env=cache-hit
            CustomLog "/var/log/apache2/uncached-requests.log" common env=cache-miss
            CustomLog "/var/log/apache2/revalidated-requests.log" common env=cache-revalidate
            CustomLog "/var/log/apache2/invalidated-requests.log" common env=cache-invalidate
            LogFormat "%{cache-status}e ..."

            # This path must be the same as the one in /etc/default/apache2
            CacheRoot /var/cache/apache2/mod_disk_cache

            # This will also cache local documents. It usually makes more sense to
            # put this into the configuration for just one virtual host.
            CacheEnable disk /
            #CacheHeader on
            CacheDirLevels 3
            CacheDirLength 5

            ##<IfModule mod_mem_cache.c>
            #    CacheEnable mem /
            #    MCacheSize 4096
            #    MCacheMaxObjectCount 100
            #    MCacheMinObjectSize 1
            #    MCacheMaxObjectSize 2048
            #</IfModule>

            <Proxy *>
                Order deny,allow
                Deny from all
                Allow from x.x.x.x
                # IP above hidden for this post

                <filesMatch "\.(xml|txt|html|js|css)$">
                    ExpiresDefault A7200
                    Header append Cache-Control "proxy-revalidate"
                </filesMatch>
            </Proxy>
        </VIRTUALHOST>

    Thank you once again!
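
    One hedged observation from the mod_cache documentation, worth testing here: CacheEnable with a bare path of / matches the server's own URL space, while a forward proxy keys requests by absolute URL, so the proxied scheme has to be enabled explicitly.

        # cache proxied http:// URLs (the documented forward-proxy form
        # of CacheEnable)
        CacheEnable disk http://

        # and confirm the disk cache provider itself is loaded -- mod_cache
        # alone is only the framework (Debian/Ubuntu, Apache 2.2):
        # a2enmod cache disk_cache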

  • Limit bandwidth on a network of computers

    - by Joseph34123
    We have a network in an office of approximately 10 computers, and sometimes when someone downloads something for work (e.g. syncing email with an attachment, or Dropbox), everyone slows down. I could set a limit on each computer in the office, but the problem is that we also have an open WiFi network, and I cannot access the computers that use it. We have one main DSL router, a Netopia 3347, and another router connected to it by wire for the public WiFi, a Linksys WRT110. I cannot change this setup and don't want to. There are 2 approaches here:
      1. Set a download limit on each office computer, in which case I still need to find out how to set a download limit for the WiFi network, for example in the router settings. It's a Linksys router and I did not see an option for this, so maybe I need a new router for the WiFi?
      2. Don't bother with each computer; instead put some "specific hardware" or another computer in front of the router, so all traffic goes through it, and then assign a maximum speed to each IP address (see the sketch below).
    The question is: what hardware do I need?
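
    If approach 2 turns into a small Linux box sitting in line between the LAN and the router, per-IP caps need no special hardware; a minimal tc sketch (interface name, address and rates are all placeholders, and shaping acts on egress, so download limits live on the LAN-facing NIC):

        # root HTB qdisc; unmatched traffic falls into class 1:20
        tc qdisc add dev eth0 root handle 1: htb default 20
        tc class add dev eth0 parent 1: classid 1:1 htb rate 6mbit

        # one capped class for a single workstation
        tc class add dev eth0 parent 1:1 classid 1:10 htb rate 1mbit ceil 2mbit
        tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
            match ip dst 192.168.1.50/32 flowid 1:10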

  • svn through nginx: Commit failed, path not found

    - by Alaa Alomari
    I have built an svn server behind my nginx webserver. My nginx configuration is:

        server {
            listen 80;
            server_name svn.mysite.com;

            location / {
                access_log off;
                proxy_pass http://svn.mysite.com:81;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include fastcgi_params;
            }
        }

    Now, I can svn co and svn up normally without having any problem, but when I try to commit I get an error:

        $ svn up
        At revision 1285.
        $ svn info
        Path: .
        URL: http://svn.mysite.com/elpis-repo/crons
        Repository Root: http://svn.mysite.com/elpis-repo
        Repository UUID: 5303c0ba-bda0-4e3c-91d8-7dab350363a1
        Revision: 1285
        Node Kind: directory
        Schedule: normal
        Last Changed Author: alaa
        Last Changed Rev: 1280
        Last Changed Date: 2012-04-29 10:18:34 +0300 (Sun, 29 Apr 2012)
        $ svn st
        M       config.php
        $ svn ci -m "Just a test, add blank line to config" config.php
        Sending        config.php
        svn: Commit failed (details follow):
        svn: File 'config.php' is out of date
        svn: '/elpis-repo/!svn/bc/1285/crons/config.php' path not found

    If I svn co on port 81 (my proxy_pass, which is Apache) and then svn ci, it works smoothly! But why doesn't it work when I use nginx to accomplish it? Any idea is highly appreciated.
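
    One known gotcha with WebDAV behind a proxy, offered strictly as an assumption worth testing: DAV methods such as COPY/MOVE carry a Destination header with the URL the client used (port 80), which the backend on port 81 can reject as a nonexistent path. A commonly cited nginx workaround rewrites that header on the way through:

        location / {
            proxy_pass http://svn.mysite.com:81;
            proxy_set_header Host $host;

            # rewrite the WebDAV Destination header to the backend's port
            set $fixed_destination $http_destination;
            if ($http_destination ~* ^(http://)svn\.mysite\.com(/.*)$) {
                set $fixed_destination $1svn.mysite.com:81$2;
            }
            proxy_set_header Destination $fixed_destination;
        }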
