Search Results

Search found 40644 results on 1626 pages for 'ms access 2003'.


  • network design to segregate public and staff

    - by barb
    My current setup has: a pfSense firewall with 4 NICs (and the potential for a 5th), one 48-port 3Com switch, and one 24-port HP switch; I'm willing to purchase more. Subnet 1 is the edge (Windows Server 2003 for VPN through Routing and Remote Access); subnet 2 is the LAN, with one WS2003 domain controller/DNS/WINS etc., one WS2008 file server, one WS2003 box running Vipre anti-virus and Time Limit Manager (which controls client computer use), and about 50 PCs. I am looking for a network design that separates clients from staff. I could do two totally isolated subnets, but I'm wondering if there is anything in between, so that staff and clients could share some resources such as printers and the anti-virus server, and staff could access client resources but not vice versa. I guess what I'm asking is: can you configure subnets and/or VLANs like this? 1) edge, for VPN; 2) services, available to all other internal networks; 3) staff, which can access services and clients; 4) clients, which can access services but not staff. By access/non-access, I mean stronger separation than domain usernames and passwords. A rough sketch of that policy as firewall rules is shown below.
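
    pfSense builds its ruleset from the GUI, so the pf-style rules below only illustrate the logic of the four-segment design; every interface name and subnet here is invented:

      # segment subnets (placeholders)
      services_net = "10.0.2.0/24"
      staff_net    = "10.0.3.0/24"
      clients_net  = "10.0.4.0/24"
      block in all
      # staff may reach services and clients
      pass in on em2 from $staff_net to { $services_net, $clients_net } keep state
      # clients may reach services only; no rule passes clients -> staff
      pass in on em3 from $clients_net to $services_net keep state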


  • Continuing permissions issues - ASP.net, IIS 7, Server 2008 - 0x80070005 (http 500.19) error

    - by Re-Pieper
    I created an ASP.NET MVC web application and I am trying to set up IIS. The error: HTTP error 500.19, error code 0x80070005, "Cannot read configuration file due to insufficient permissions", config file: C:\inetpub\wwwroot\BudgetManagerMain\BudgetManager\web.config. If I set the AppPool to use 'administrator', I have no problems and can access the site just fine. If I set it to NETWORK SERVICE (or anything else, including self-created admin or non-admin user accounts), I get the above error. Things I have tried: the identity for the application pool named 'test' is 'NetworkService'; set full-access permissions for wwwroot and all child files/folders; verified effective permissions, and NETWORK SERVICE has full access. Authentication on my site is set to anonymous, running under the application pool identity. I do not have any physical path credentials set on the website. Confirmed the website is set to run under the application pool named 'test'. Using Process Monitor, here is a summary of what I found on the ACCESS DENIED event: EVENT TAB: Class: File System Operation: CreateFile Result: Access Denied Path: ..\web.config Desired Access: Generic Read Disposition: Open Options: Synchronous IO Non-Alert, Non-Directory file Attributes: N ShareMode: Read AllocationSize: n/a PROCESS TAB: ...lots of stuff that seems irrelevant User: NT AUTHORITY\NETWORK SERVICE
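
    One thing worth trying, as a sketch (the folder path is taken from the error message above; adjust to the actual site root): since the Process Monitor trace shows the Generic Read being denied at the file system, grant NETWORK SERVICE inheritable read/execute on the application folder explicitly from an elevated prompt and re-test.

      rem hedged sketch; (OI)(CI) makes the grant inherit to files and subfolders
      icacls "C:\inetpub\wwwroot\BudgetManagerMain" /grant "NETWORK SERVICE":(OI)(CI)RX /T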


  • Wget works, Ping doesn't

    - by derty
    There are some anomalies on a Virtuozzo-virtualized Debian 4 (I know, I'm going to upgrade this one ASAP, but there are dependencies). We run some websites on this one. A few days ago, exim4 wasn't able to send mail to SOME people. I'll use live.com as an example domain! So some of these people got mail and some didn't. Some of the mails got stuck in the queue, and after 2 days they went out!! My Nagios never showed problems with the internet connection or disk space. Then I wanted to install "dig" to see how the box resolves the DNS request, and this Debian tells me it doesn't know dig. Long story short: Debian is able to download sites by exact IP, or even with wget live.com, but it is not able to ping live.com. I'm 99% sure that the networking is right, and the routing too! Some examples of my attempts below: wget live.com downloads the site ping live.com ping http://www.live.com ping http://live.com returns: ping: unknown host live.com EDIT: I now use heise.de instead of live.com, and I found out I can ping the heise.de server by using its IP address. myserver:~# ping 193.99.144.85 PING 193.99.144.85 (193.99.144.85) 56(84) bytes of data. 64 bytes from 193.99.144.85: icmp_seq=1 ttl=248 time=12.7 ms 64 bytes from 193.99.144.85: icmp_seq=2 ttl=248 time=12.6 ms 64 bytes from 193.99.144.85: icmp_seq=3 ttl=248 time=12.9 ms 64 bytes from 193.99.144.85: icmp_seq=4 ttl=248 time=13.1 ms 64 bytes from 193.99.144.85: icmp_seq=5 ttl=248 time=13.1 ms --- 193.99.144.85 ping statistics --- 5 packets transmitted, 5 received, 0% packet loss, time 4001ms rtt min/avg/max/mdev = 12.671/12.924/13.163/0.238 ms EDIT 2: myserver:/etc/apt# dig heise.de ; <<>> DiG 9.3.4-P1.2 <<>> heise.de ;; global options: printcmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40551 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 5, ADDITIONAL: 3 ;; QUESTION SECTION: ;heise.de. IN A ;; ANSWER SECTION: heise.de. 2266 IN A 193.99.144.80 ;; AUTHORITY SECTION: heise.de. 1622 IN NS ns.pop-hannover.de. heise.de. 1622 IN NS ns.s.plusline.de. heise.de. 1622 IN NS ns.plusline.de. heise.de. 1622 IN NS ns2.pop-hannover.net. heise.de. 1622 IN NS ns.heise.de. ;; ADDITIONAL SECTION: ns.plusline.de. 265 IN A 212.19.48.14 ns.pop-hannover.de. 5113 IN A 193.98.1.200 ns2.pop-hannover.net. 15150 IN A 62.48.67.66 ;; Query time: 2 msec ;; SERVER: 193.200.112.80#53(193.200.112.80) ;; WHEN: Tue Oct 9 13:03:50 2012 ;; MSG SIZE rcvd: 216
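
    Since dig answers fine while ping cannot resolve the same name, the libc resolver path (which ping uses) is the suspect rather than DNS itself. A hedged sketch of the checks, with the hostname taken from the question:

      grep '^hosts' /etc/nsswitch.conf   # expect something like: hosts: files dns
      cat /etc/resolv.conf               # is a reachable nameserver listed?
      getent hosts heise.de              # exercises the same lookup path as ping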


  • Pfsense 2.1 OpenVPN can't reach servers on the LAN

    - by Lucas Kauffman
    I have a small network set up like this: I have a pfSense box for connecting my servers to the WAN; they are using NAT from the LAN to the WAN. I have an OpenVPN server using TAP to allow remote workers to be put on the same LAN network as the servers. They connect through the WAN IP to the OVPN interface. The LAN interface also serves as the gateway for the servers to get an internet connection, and has an IP of 10.25.255.254. The OVPN interface and the LAN interface are bridged in BR0. Server A has an IP of 10.25.255.1 and is able to connect to the internet. Client A is connecting through the VPN and is assigned an IP address on its TAP interface of 10.25.24.1 (I reserved a /24 within the 10.25.0.0/16 for VPN clients). The firewall currently allows any-any connections from OVPN towards LAN and vice versa. Currently when I connect, all routes seem fine on the client side: Destination Gateway Genmask Flags Metric Ref Use Iface 300.300.300.300 0.0.0.0 255.255.255.0 U 0 0 0 eth0 10.25.0.0 10.25.255.254 255.255.0.0 UG 0 0 0 tap0 10.25.0.0 0.0.0.0 255.255.0.0 U 0 0 0 tap0 0.0.0.0 300.300.300.300 0.0.0.0 UG 0 0 0 eth0 I can ping the LAN interface: root@server:# ping 10.25.255.254 PING 10.25.255.254 (10.25.255.254) 56(84) bytes of data. 64 bytes from 10.25.255.254: icmp_req=1 ttl=64 time=7.65 ms 64 bytes from 10.25.255.254: icmp_req=2 ttl=64 time=7.49 ms 64 bytes from 10.25.255.254: icmp_req=3 ttl=64 time=7.69 ms 64 bytes from 10.25.255.254: icmp_req=4 ttl=64 time=7.31 ms 64 bytes from 10.25.255.254: icmp_req=5 ttl=64 time=7.52 ms 64 bytes from 10.25.255.254: icmp_req=6 ttl=64 time=7.42 ms But I can't ping past the LAN interface: root@server:# ping 10.25.255.1 PING 10.25.255.1 (10.25.255.1) 56(84) bytes of data. From 10.25.255.254: icmp_seq=1 Redirect Host(New nexthop: 10.25.255.1) From 10.25.255.254: icmp_seq=2 Redirect Host(New nexthop: 10.25.255.1) I ran a tcpdump on my em1 interface (the LAN interface, which has the IP of 10.25.255.254): tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on em1, link-type EN10MB (Ethernet), capture size 96 bytes 08:21:13.449222 IP 10.25.24.1 > 10.25.255.1: ICMP echo request, id 23623, seq 10, length 64 08:21:13.458211 ARP, Request who-has 10.25.255.1 tell 10.25.24.1, length 28 08:21:14.450541 IP 10.25.24.1 > 10.25.255.1: ICMP echo request, id 23623, seq 11, length 64 08:21:14.458431 ARP, Request who-has 10.25.255.1 tell 10.25.24.1, length 28 08:21:15.451794 IP 10.25.24.1 > 10.25.255.1: ICMP echo request, id 23623, seq 12, length 64 08:21:15.458530 ARP, Request who-has 10.25.255.1 tell 10.25.24.1, length 28 08:21:16.453203 IP 10.25.24.1 > 10.25.255.1: ICMP echo request, id 23623, seq 13, length 64 So traffic is reaching the LAN interface, but it's not getting past it: there is no answer from the 10.25.255.1 host. I'm not sure what I'm missing.
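
    One thing worth checking, offered as a sketch: on FreeBSD-based pfSense, which interfaces a bridge filters on is controlled by sysctls, and a mismatch between where the allow rules live (BR0 vs. the member NICs) can make traffic die exactly at the bridge even with any-any rules. From the pfSense shell:

      sysctl net.link.bridge.pfil_member   # 1 = rules are applied on member NICs
      sysctl net.link.bridge.pfil_bridge   # 1 = rules are applied on BR0 itself
      # if the allow rules exist only on BR0, filtering must happen there, e.g.:
      # sysctl net.link.bridge.pfil_member=0 ; sysctl net.link.bridge.pfil_bridge=1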


  • Rate limit a wireless interface

    - by Jamie Hankins
    I have access to my router's SSH and iptables. I want to rate-limit my guest network to 1 Mb/s so guests can't guzzle my bandwidth. rai1 RTWIFI SoftAP ESSID:"GuestNetwork" Nickname:"" Mode:Managed Channel=6 Access Point: :F9 Bit Rate=300 Mb/s wdsi0 RTWIFI SoftAP ESSID:"YouCan'tTouchThis" Nickname:"" Mode:Managed Channel=6 Access Point: :F8 Bit Rate=300 Mb/s wdsi1 RTWIFI SoftAP ESSID:"YouCan'tTouchThis" Nickname:"" Mode:Managed Channel=6 Access Point: :F9 Bit Rate=300 Mb/s wdsi2 RTWIFI SoftAP ESSID:"YouCan'tTouchThis" Nickname:"" Mode:Managed Channel=6 Access Point: Not-Associated Bit Rate:300 Mb/s wdsi3 RTWIFI SoftAP ESSID:"YouCan'tTouchThis" Nickname:"" Mode:Managed Channel=6 Access Point: Not-Associated Bit Rate:300 Mb/s I'm just wondering what command I need to limit it. I tried the iwconfig limit command, but it failed. Thanks
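
    iwconfig has no bandwidth-limit option; shaping is normally done with tc. A minimal HTB sketch capping egress on the guest interface (interface name taken from the output above, 1mbit being the stated target):

      tc qdisc add dev rai1 root handle 1: htb default 10
      tc class add dev rai1 parent 1: classid 1:10 htb rate 1mbit ceil 1mbit
      # this shapes traffic *to* the guests (their downloads); their uploads
      # would need ingress policing or shaping on the upstream interface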


  • NIC Bonding/balance-rr with Dell PowerConnect 5324

    - by Branden Martin
    I'm trying to get NIC bonding to work with balance-rr so that three NIC ports are combined, so that instead of getting 1 Gbps we get 3 Gbps. We are doing this on two servers connected to the same switch. However, we're only getting the speed of one physical link. We are using 1 Dell PowerConnect 5324, SW version 2.0.1.3, Boot version 1.0.2.02, HW version 00.00.02. Both servers are CentOS 5.9 (Final) running OnApp Hypervisor (CloudBoot) Server 1 is using ports g5-g7 in port-channel 1. Server 2 is using ports g9-g11 in port-channel 2. Switch show interface status Port Type Duplex Speed Neg ctrl State Pressure Mode -------- ------------ ------ ----- -------- ---- ----------- -------- ------- g1 1G-Copper -- -- -- -- Down -- -- g2 1G-Copper Full 1000 Enabled Off Up Disabled Off g3 1G-Copper -- -- -- -- Down -- -- g4 1G-Copper -- -- -- -- Down -- -- g5 1G-Copper Full 1000 Enabled Off Up Disabled Off g6 1G-Copper Full 1000 Enabled Off Up Disabled Off g7 1G-Copper Full 1000 Enabled Off Up Disabled On g8 1G-Copper Full 1000 Enabled Off Up Disabled Off g9 1G-Copper Full 1000 Enabled Off Up Disabled On g10 1G-Copper Full 1000 Enabled Off Up Disabled On g11 1G-Copper Full 1000 Enabled Off Up Disabled Off g12 1G-Copper Full 1000 Enabled Off Up Disabled On g13 1G-Copper -- -- -- -- Down -- -- g14 1G-Copper -- -- -- -- Down -- -- g15 1G-Copper -- -- -- -- Down -- -- g16 1G-Copper -- -- -- -- Down -- -- g17 1G-Copper -- -- -- -- Down -- -- g18 1G-Copper -- -- -- -- Down -- -- g19 1G-Copper -- -- -- -- Down -- -- g20 1G-Copper -- -- -- -- Down -- -- g21 1G-Combo-C -- -- -- -- Down -- -- g22 1G-Combo-C -- -- -- -- Down -- -- g23 1G-Combo-C -- -- -- -- Down -- -- g24 1G-Combo-C Full 100 Enabled Off Up Disabled On Flow Link Ch Type Duplex Speed Neg control State -------- ------- ------ ----- -------- ------- ----------- ch1 1G Full 1000 Enabled Off Up ch2 1G Full 1000 Enabled Off Up ch3 -- -- -- -- -- Not Present ch4 -- -- -- -- -- Not Present ch5 -- -- -- -- -- Not Present ch6 -- -- -- -- -- Not Present ch7 -- -- -- -- -- Not Present ch8 -- -- -- -- -- Not Present Server 1: cat /etc/sysconfig/network-scripts/ifcfg-eth3 DEVICE=eth3 HWADDR=00:1b:21:ac:d5:55 USERCTL=no BOOTPROTO=none ONBOOT=yes MASTER=onappstorebond SLAVE=yes cat /etc/sysconfig/network-scripts/ifcfg-eth4 DEVICE=eth4 HWADDR=68:05:ca:18:28:ae USERCTL=no BOOTPROTO=none ONBOOT=yes MASTER=onappstorebond SLAVE=yes cat /etc/sysconfig/network-scripts/ifcfg-eth5 DEVICE=eth5 HWADDR=68:05:ca:18:28:af USERCTL=no BOOTPROTO=none ONBOOT=yes MASTER=onappstorebond SLAVE=yes cat /etc/sysconfig/network-scripts/ifcfg-onappstorebond DEVICE=onappstorebond IPADDR=10.200.52.1 NETMASK=255.255.0.0 GATEWAY=10.200.2.254 NETWORK=10.200.0.0 USERCTL=no BOOTPROTO=none ONBOOT=yes cat /proc/net/bonding/onappstorebond Ethernet Channel Bonding Driver: v3.4.0-1 (October 7, 2008) Bonding Mode: load balancing (round-robin) MII Status: up MII Polling Interval (ms): 100 Up Delay (ms): 0 Down Delay (ms): 0 Slave Interface: eth3 MII Status: up Speed: 1000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 00:1b:21:ac:d5:55 Slave Interface: eth4 MII Status: up Speed: 1000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 68:05:ca:18:28:ae Slave Interface: eth5 MII Status: up Speed: 1000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 68:05:ca:18:28:af Server 2: cat /etc/sysconfig/network-scripts/ifcfg-eth3 DEVICE=eth3 HWADDR=00:1b:21:ac:d5:a7 USERCTL=no BOOTPROTO=none ONBOOT=yes MASTER=onappstorebond SLAVE=yes cat /etc/sysconfig/network-scripts/ifcfg-eth4 
DEVICE=eth4 HWADDR=68:05:ca:18:30:30 USERCTL=no BOOTPROTO=none ONBOOT=yes MASTER=onappstorebond SLAVE=yes cat /etc/sysconfig/network-scripts/ifcfg-eth5 DEVICE=eth5 HWADDR=68:05:ca:18:30:31 USERCTL=no BOOTPROTO=none ONBOOT=yes MASTER=onappstorebond SLAVE=yes cat /etc/sysconfig/network-scripts/ifcfg-onappstorebond DEVICE=onappstorebond IPADDR=10.200.53.1 NETMASK=255.255.0.0 GATEWAY=10.200.3.254 NETWORK=10.200.0.0 USERCTL=no BOOTPROTO=none ONBOOT=yes cat /proc/net/bonding/onappstorebond Ethernet Channel Bonding Driver: v3.4.0-1 (October 7, 2008) Bonding Mode: load balancing (round-robin) MII Status: up MII Polling Interval (ms): 100 Up Delay (ms): 0 Down Delay (ms): 0 Slave Interface: eth3 MII Status: up Speed: 1000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 00:1b:21:ac:d5:a7 Slave Interface: eth4 MII Status: up Speed: 1000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 68:05:ca:18:30:30 Slave Interface: eth5 MII Status: up Speed: 1000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 68:05:ca:18:30:31 Here are the results of iperf. ------------------------------------------------------------ Client connecting to 10.200.52.1, TCP port 5001 TCP window size: 27.7 KByte (default) ------------------------------------------------------------ [ 3] local 10.200.3.254 port 53766 connected with 10.200.52.1 port 5001 [ ID] Interval Transfer Bandwidth [ 3] 0.0-10.0 sec 950 MBytes 794 Mbits/sec
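
    One detail that can explain the single-link result by itself: iperf's default test is one TCP stream, and once the switch's port-channel hashes per flow, any single stream is pinned to one member link regardless of balance-rr on the servers. A hedged re-test with parallel streams (server IP from the output above):

      iperf -c 10.200.52.1 -P 3   # -P 3 = three parallel streams across the bond

    If the aggregate still stays near 1 Gbps, it is also worth verifying that the Dell port-channels are static LAGs without LACP, which is what balance-rr expects.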


  • linux routing bug?

    - by Balázs Pozsár
    I have been struggling with this not-easily-reproducible issue for a while. I am using Linux kernel v3.1.0, and sometimes routing to a few IP addresses does not work. What seems to happen is that instead of sending the packet to the gateway, the kernel treats the destination address as local and tries to get its MAC address via ARP. For example, now my current IP address is 172.16.1.104/24, the gateway is 172.16.1.254: # ifconfig eth0 eth0 Link encap:Ethernet HWaddr 00:1B:63:97:FC:DC inet addr:172.16.1.104 Bcast:172.16.1.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:230772 errors:0 dropped:0 overruns:0 frame:0 TX packets:171013 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:191879370 (182.9 Mb) TX bytes:47173253 (44.9 Mb) Interrupt:17 # route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 172.16.1.254 0.0.0.0 UG 0 0 0 eth0 172.16.1.0 0.0.0.0 255.255.255.0 U 1 0 0 eth0 I can ping a few addresses, but not 172.16.0.59: # ping -c1 172.16.1.254 PING 172.16.1.254 (172.16.1.254) 56(84) bytes of data. 64 bytes from 172.16.1.254: icmp_seq=1 ttl=64 time=0.383 ms --- 172.16.1.254 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms root@pozsybook:~# ping -c1 172.16.0.1 PING 172.16.0.1 (172.16.0.1) 56(84) bytes of data. 64 bytes from 172.16.0.1: icmp_seq=1 ttl=63 time=5.54 ms --- 172.16.0.1 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 5.545/5.545/5.545/0.000 ms root@pozsybook:~# ping -c1 172.16.0.2 PING 172.16.0.2 (172.16.0.2) 56(84) bytes of data. 64 bytes from 172.16.0.2: icmp_seq=1 ttl=62 time=7.92 ms --- 172.16.0.2 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 7.925/7.925/7.925/0.000 ms root@pozsybook:~# ping -c1 172.16.0.59 PING 172.16.0.59 (172.16.0.59) 56(84) bytes of data. From 172.16.1.104 icmp_seq=1 Destination Host Unreachable --- 172.16.0.59 ping statistics --- 1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms When trying to ping 172.16.0.59, I can see in tcpdump that an ARP request was sent: # tcpdump -n -i eth0|grep ARP tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes 15:25:16.671217 ARP, Request who-has 172.16.0.59 tell 172.16.1.104, length 28 and /proc/net/arp has an incomplete entry for 172.16.0.59: # grep 172.16.0.59 /proc/net/arp 172.16.0.59 0x1 0x0 00:00:00:00:00:00 * eth0 Please note that 172.16.0.59 is accessible from this LAN from other computers. Does anyone have any idea of what's going on? Thanks. Update, replying to the comments below: there are no interfaces besides eth0 and lo; the ARP request cannot be seen on the other end, but that's how it should work, since the main problem is that an ARP request should not even be sent in the first place; and the problem persists even if I add an explicit route with the command "route add -host 172.16.0.59 gw 172.16.1.254 dev eth0".
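
    A hedged diagnostic sketch: ask the kernel directly which route it would pick for that destination, and clear any stale neighbor or cached-route state before re-testing (iproute2 commands):

      ip route get 172.16.0.59        # should print "via 172.16.1.254 dev eth0"
      ip neigh show dev eth0          # look for the INCOMPLETE entry
      ip neigh flush dev eth0         # drop cached ARP entries
      ip route flush cache            # drop cached routing decisions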


  • Hanging of host network connections when starting KVM guest on bridge

    - by Chris Phillips
    Hi, I've a KVM system upon which I'm running a network bridge directly between all VM's and a bond0 (eth0, eth1) on the host OS. As such, all machines are presented on the same subnet, available outside of the box. The bond is doing mode 1 active / passive, with an arp_ip_target set to the default gateway, which has caused some issues in itself, but I can't see the bond configs mattering here myself. I'm seeing odd things most times when I stop and start a guest on the platform, in that on the host I lose network connectivity (icmp, ssh) for about 30 seconds. I don't lose connectivity on the other already running VM's though... they can always ping the default GW, but the host can't. I say "about 30 seconds" but from some tests it actually seems to be 28 seconds usually (or at least, I lose 28 pings...) and I'm wondering if this somehow relates to the bridge config. I'm not running STP on the bridge at all, and the forwarding delay is set to 1 second, path cost on the bond0 lowered to 10 and port priority of bond0 also lowered to 1. As such I don't think that the bridge should ever be able to think that bond0 is not connected just fine (as continued guest connectivity implies) yet the IP of the host, which is on the bridge device (... could that matter?? ) becomes unreachable. I'm fairly sure it's about the bridged networking, but at the same time as this happens when a VM is started there are clearly loads of other things also happening so maybe I'm way off the mark. Lack of connectivity: # ping 10.20.11.254 PING 10.20.11.254 (10.20.11.254) 56(84) bytes of data. 64 bytes from 10.20.11.254: icmp_seq=1 ttl=255 time=0.921 ms 64 bytes from 10.20.11.254: icmp_seq=2 ttl=255 time=0.541 ms type=1700 audit(1293462808.589:325): dev=vnet6 prom=256 old_prom=0 auid=42949672 95 ses=4294967295 type=1700 audit(1293462808.604:326): dev=vnet7 prom=256 old_prom=0 auid=42949672 95 ses=4294967295 type=1700 audit(1293462808.618:327): dev=vnet8 prom=256 old_prom=0 auid=42949672 95 ses=4294967295 kvm: 14116: cpu0 unimplemented perfctr wrmsr: 0x186 data 0x130079 kvm: 14116: cpu0 unimplemented perfctr wrmsr: 0xc1 data 0xffdd694a kvm: 14116: cpu0 unimplemented perfctr wrmsr: 0x186 data 0x530079 64 bytes from 10.20.11.254: icmp_seq=30 ttl=255 time=0.514 ms 64 bytes from 10.20.11.254: icmp_seq=31 ttl=255 time=0.551 ms 64 bytes from 10.20.11.254: icmp_seq=32 ttl=255 time=0.437 ms 64 bytes from 10.20.11.254: icmp_seq=33 ttl=255 time=0.392 ms brctl output of relevant bridge: # brctl showstp brdev brdev bridge id 8000.b2e1378d1396 designated root 8000.b2e1378d1396 root port 0 path cost 0 max age 19.99 bridge max age 19.99 hello time 1.99 bridge hello time 1.99 forward delay 0.99 bridge forward delay 0.99 ageing time 299.95 hello timer 0.50 tcn timer 0.00 topology change timer 0.00 gc timer 0.04 flags vnet5 (3) port id 8003 state forwarding designated root 8000.b2e1378d1396 path cost 100 designated bridge 8000.b2e1378d1396 message age timer 0.00 designated port 8003 forward delay timer 0.00 designated cost 0 hold timer 0.00 flags vnet0 (2) port id 8002 state forwarding designated root 8000.b2e1378d1396 path cost 100 designated bridge 8000.b2e1378d1396 message age timer 0.00 designated port 8002 forward delay timer 0.00 designated cost 0 hold timer 0.00 flags bond0 (1) port id 0001 state forwarding designated root 8000.b2e1378d1396 path cost 10 designated bridge 8000.b2e1378d1396 message age timer 0.00 designated port 0001 forward delay timer 0.00 designated cost 0 hold timer 0.00 flags I do see the new port listed as 
learning, but in line with the forward delay, only for 1 or 2 seconds when polling the brctl output on a loop. All pointers, tips or stabs in the dark appreciated.
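
    One theory worth testing against this, offered only as a sketch: a Linux bridge adopts the numerically lowest MAC address among its ports, so a freshly created vnet* tap can change the bridge's own MAC when a guest starts, stranding the host's IP (which lives on the bridge) until neighbors re-learn it; roughly the 28-30 seconds observed. Pinning the bridge MAC would rule this in or out (the address below is taken from the bridge id in the brctl output):

      ip link set dev brdev address b2:e1:37:8d:13:96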


  • Why is group write permission ignored in Ubuntu?

    - by NorthPole
    I want my user to have full access to the local Apache root folder, and I also want the Apache user to have full access to the same folder. What I did was create a new group called DevGroup and I added my user and www-data there. Also I changed the permissions to 770 to allow full group access. But now it won't allow me or the Apache user any kind of access to the folder. Here is what I get with ls: drwxrwx--- 12 root DevGroup 4096 Sep 27 17:34 testFolder Which seems perfect but when I try as a user to access the file I get this: var/www$ ls testFolder/ ls: cannot open directory testFolder/: Permission denied Also when I try to access a page in the folder from a browser: [Thu Sep 27 17:47:16 2012] [error] [client 127.0.0.1] PHP Fatal error: Unknown: Failed opening required '/var/www/testFolder/foo.php' (include_path='.:/usr/share/php:/usr/share/pear') in Unknown on line 0 What's the problem and how can I fix it?
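
    A likely cause, as a sketch: supplementary group changes only take effect for new login sessions, and Apache likewise only picks up www-data's new group on restart. Quick checks:

      id                            # does the *current* session list DevGroup?
      newgrp DevGroup               # or log out and back in; then retry the ls
      sudo service apache2 restart  # so www-data's new membership takes effect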


  • logrotate> removing delaycompress function: should I compress the last log myself?

    - by user120006
    I'm removing the delaycompress option from my logrotate script. Before running logrotate again, should I compress the last log myself? This is the current situation: -rw-r----- 1 root adm 4,7M 5 mag 18:38 access.log -rw-r----- 1 root adm 5,2M 29 apr 05:44 access.log.1 -rw-r----- 1 root adm 473K 22 apr 05:45 access.log.2.gz -rw-r----- 1 root adm 605K 15 apr 05:44 access.log.3.gz -rw-r----- 1 root adm 588K 8 apr 05:44 access.log.4.gz The question is: should I compress "access.log.1" and THEN launch logrotate? Or will logrotate understand that I removed the "delaycompress" option and fix things itself?
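
    As far as I know, logrotate only compresses the file it has just rotated, so a leftover uncompressed access.log.1 is best compressed by hand. A hedged sketch (the config path is an assumption):

      gzip access.log.1                        # becomes access.log.1.gz
      logrotate -d /etc/logrotate.d/apache2    # -d = dry run; sanity-check first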


  • Allowing users in from an IP address without certificate client authentication

    - by John
    I need to allow access to my site without SSL client certificates from my office network, and with SSL client certificates from outside. Here is my configuration: <Directory /srv/www> AllowOverride All Order deny,allow Deny from all # office network static IP Allow from xxx.xxx.xxx.xxx SSLVerifyClient require SSLOptions +FakeBasicAuth AuthName "My secure area" AuthType Basic AuthUserFile /etc/httpd/ssl/index Require valid-user Satisfy Any </Directory> When I'm inside the network and have a certificate, I can access. When I'm inside the network and don't have a certificate, I can't access; it requires a certificate. When I'm outside the network and have a certificate, I can't access; it shows me the basic login screen. When I'm outside the network and don't have a certificate, I can't access; it shows me the basic login screen. And the following configuration works perfectly: <Directory /srv/www> AllowOverride All Order deny,allow Deny from all Allow from xxx.xxx.xxx.xxx AuthUserFile /srv/www/htpasswd AuthName "Restricted Access" AuthType Basic Require valid-user Satisfy Any </Directory>
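
    A sketch of one common approach, untested here: with SSLVerifyClient set to require, the TLS handshake itself demands a certificate before Satisfy Any is ever consulted. Switching it to optional keeps FakeBasicAuth mapping a presented certificate's DN to a user, lets the office IP in via Satisfy Any, and leaves outsiders without certificates stuck at basic auth:

      <Directory /srv/www>
          AllowOverride All
          Order deny,allow
          Deny from all
          Allow from xxx.xxx.xxx.xxx
          SSLVerifyClient optional
          SSLOptions +FakeBasicAuth
          AuthName "My secure area"
          AuthType Basic
          AuthUserFile /etc/httpd/ssl/index
          Require valid-user
          Satisfy Any
      </Directory>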



  • SQL SERVER – Microsoft SQL Server Migration Assistant V6.0 Released

    - by Pinal Dave
    Every company makes a different decision about its database when it starts, but as they move forward they mature and make decisions based on their experience and the best interests of the organization. Similarly, quite a few organizations make different decisions on databases, like Sybase, MySQL, Oracle or Access, and as time passes they learn that they now want to move to a different platform. Microsoft makes this easy for SQL Server professionals by releasing various Migration Assistant tools. Last week, Microsoft released Microsoft SQL Server Migration Assistant v6.0. Here are the different tools released last week to migrate various products to SQL Server. Microsoft SQL Server Migration Assistant v6.0 for Sybase SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from Sybase Adaptive Server Enterprise (ASE) to SQL Server and Azure SQL DB. SSMA automates all aspects of migration, including migration assessment analysis, schema and SQL statement conversion, data migration, as well as migration testing. Microsoft SQL Server Migration Assistant v6.0 for MySQL SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from MySQL to SQL Server and Azure SQL DB. SSMA automates all aspects of migration, including migration assessment analysis, schema and SQL statement conversion, data migration, as well as migration testing. Microsoft SQL Server Migration Assistant v6.0 for Oracle SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from Oracle to SQL Server and Azure SQL DB. SSMA automates all aspects of migration, including migration assessment analysis, schema and SQL statement conversion, data migration, as well as migration testing. Microsoft SQL Server Migration Assistant v6.0 for Access SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from Access to SQL Server. SSMA for Access automates conversion of Microsoft Access database objects to SQL Server database objects, loads the objects into SQL Server and Azure SQL DB, and then migrates data from Microsoft Access to SQL Server and Azure SQL DB. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SQL Migration


  • Token based Authentication for WCF HTTP/REST Services: Authorization

    - by Your DisplayName here!
    In the previous post I showed how token based authentication can be implemented for WCF HTTP based services. Authentication is the process of finding out who the user is – this includes anonymous users. Then it is up to the service to decide under which circumstances the client has access to the service as a whole or to individual operations. This is called authorization. By default, my framework does not allow anonymous users and will deny access right in the service authorization manager. You can, however, turn anonymous access on – technically that means that instead of denying access, an anonymous principal is placed on Thread.CurrentPrincipal. You can flip that switch in the configuration class that you can pass into the service host/factory. var configuration = new WebTokenWebServiceHostConfiguration {     AllowAnonymousAccess = true }; But this is not enough; in addition you also need to decorate the individual operations to allow anonymous access as well, e.g.: [AllowAnonymousAccess] public string GetInfo() {     ... } Inside these operations you might have an authenticated or an anonymous principal on Thread.CurrentPrincipal, and it is up to your code to decide what to do. Side note: being a security guy, I like this opt-in approach to anonymous access much better than all those opt-out approaches out there (like the Authorize attribute – or this.). Claims-based Authorization Since there is a ClaimsPrincipal available, you can use the standard WIF claims authorization manager infrastructure – either declaratively via ClaimsPrincipalPermission or programmatically (see also here). [ClaimsPrincipalPermission(SecurityAction.Demand,     Resource = "Claims",     Operation = "View")] public ViewClaims GetClientIdentity() {     return new ServiceLogic().GetClaims(); } In addition you can also turn off per-request authorization (see here for background) via the config and just use the “domain specific” instrumentation. While the code is not 100% done, you can download the current solution here. HTH (Wanna learn more about federation, WIF, claims, tokens etc.? Click here.)


  • Web API, JavaScript, Chrome & Cross-Origin Resource Sharing

    - by Brian Lanham
    The team spent much of the week working through issues related to Chrome running on Windows 8 consuming cross-origin resources using Web API. We thought it was resolved on day 2 but it resurfaced the next day. We definitely resolved it today though. I believe I do not fully understand the situation, but I am going to explain what I know in an effort to help you avoid and/or resolve a similar issue. References We referenced many sources during our trial-and-error troubleshooting. These are the links we referenced, in order of applicability to the solution: Zoiner Tejada JavaScript and other material from -> http://www.devproconnections.com/content1/topic/microsoft-azure-cors-141869/catpath/windows-azure-platform2/page/3 WebDAV Where I learned about “Accept” –> http://www-jo.se/f.pfleger/cors-and-iis? IT Hit Tells about NOT using ‘*’ –> http://www.webdavsystem.com/ajax/programming/cross_origin_requests Carlos Figueira Sample back-end code (newer) –> http://code.msdn.microsoft.com/windowsdesktop/Implementing-CORS-support-a677ab5d (older version) –> http://code.msdn.microsoft.com/CORS-support-in-ASPNET-Web-01e9980a Background As a measure of protection, Web designers (W3C) and implementers (Google, Microsoft, Mozilla) made it so that a request, especially a JSON request (but really any URL), sent from one domain to another will only work if the requestee “knows” about the requester and allows requests from it. So, for example, if you write an ASP.NET MVC Web API service and try to consume it from multiple apps, the browsers used may (will?) indicate that you are not allowed by showing an “Access-Control-Allow-Origin” error indicating the requester is not allowed to make requests. Internet Explorer (big surprise) is the odd-hair-colored step-child in this mix. It seems that, running locally at least, IE allows this for development purposes. Chrome and Firefox do not. In fact, Chrome is quite restrictive. Notice the images below. IE shows data (a tabular view with one row for each day of a week) while Chrome does not (trust me, neither does Firefox). Further, the Chrome developer console shows an XmlHttpRequest (XHR) error. Screen captures from IE (left) and Chrome (right). Note that Chrome does not display data and the console shows an XHR error. Why does this happen? The Web browser submits these requests and processes the responses, and each browser is different. Okay, so, IE is probably the only one that’s truly different. However, Chrome has a specific process of performing a “pre-flight” check to make sure the service can respond to an “Access-Control-Allow-Origin” or Cross-Origin Resource Sharing (CORS) request. So basically, the sequence is, if I understand correctly: 1) Page Loads –> 2) JavaScript Request Processed by Browser –> 3) Browser Prepares to Submit Request –> 4) [Chrome] Browser Submits Pre-Flight Request –> 5) Server Responds with HTTP 200 –> 6) Browser Submits Request –> 7) Server Responds with Data –> 8) Page Shows Data This situation occurs for both GET and POST methods. Typically, GET methods are called with query string parameters, so there is no data posted. Instead, the requesting domain needs to be permitted to request data, but generally nothing more is required. POSTs, on the other hand, send form data. Therefore, more configuration is required (you’ll see the configuration below). AJAX requests are not friendly with this (POSTs) either, because they don’t post in a form. How to fix it.
The team went through many iterations of self-hair removal and we think we finally have a working solution.  The trial-and-error approach eventually worked and we referenced many sources for the information.  I indicate those references above.  There are basically three (3) tasks needed to make this work. Assumptions: You are using Visual Studio, Web API, JavaScript, and have Cross-Origin Resource Sharing, and several browsers. 1. Configure the client Joel Cochran centralized our “cors-oriented” JavaScript (from here). There are two calls including one for GET and one for POST function(url, data, callback) {             console.log(data);             $.support.cors = true;             var jqxhr = $.post(url, data, callback, "json")                 .error(function(jqXhHR, status, errorThrown) {                     if ($.browser.msie && window.XDomainRequest) {                         var xdr = new XDomainRequest();                         xdr.open("post", url);                         xdr.onload = function () {                             if (callback) {                                 callback(JSON.parse(this.responseText), 'success');                             }                         };                         xdr.send(data);                     } else {                         console.log(">" + jqXhHR.status);                         alert("corsAjax.post error: " + status + ", " + errorThrown);                     }                 });         }; The GET CORS JavaScript function (credit to Zoiner Tejada) function(url, callback) {             $.support.cors = true;             var jqxhr = $.get(url, null, callback, "json")                 .error(function(jqXhHR, status, errorThrown) {                     if ($.browser.msie && window.XDomainRequest) {                         var xdr = new XDomainRequest();                         xdr.open("get", url);                         xdr.onload = function () {                             if (callback) {                                 callback(JSON.parse(this.responseText), 'success');                             }                         };                         xdr.send();                     } else {                         alert("CORS is not supported in this browser or from this origin.");                     }                 });         }; The POST CORS JavaScript function (credit to Zoiner Tejada) Now you need to call these functions to get and post your data (instead of, say, using $.Ajax). Here is a GET example: corsAjax.get(url, function(data) { if (data !== null && data.length !== undefined) { // do something with data } }); And here is a POST example: corsAjax.post(url, item); Simple…except…you’re not done yet. 2. Change Web API Controllers to Allow CORS There are actually two steps here.  Do you remember above when we mentioned the “pre-flight” check?  Chrome actually asks the server if it is allowed to ask it for cross-origin resource sharing access.  So you need to let the server know it’s okay.  This is a two-part activity.  a) Add the appropriate response header Access-Control-Allow-Origin, and b) permit the API functions to respond to various methods including GET, POST, and OPTIONS.  OPTIONS is the method that Chrome and other browsers use to ask the server if it can ask about permissions.  Here is an example of a Web API controller thus decorated: NOTE: You’ll see a lot of references to using “*” in the header value.  For security reasons, Chrome does NOT recognize this is valid. 
[HttpHeader("Access-Control-Allow-Origin", "http://localhost:51234")] [HttpHeader("Access-Control-Allow-Credentials", "true")] [HttpHeader("Access-Control-Allow-Methods", "ACCEPT, PROPFIND, PROPPATCH, COPY, MOVE, DELETE, MKCOL, LOCK, UNLOCK, PUT, GETLIB, VERSION-CONTROL, CHECKIN, CHECKOUT, UNCHECKOUT, REPORT, UPDATE, CANCELUPLOAD, HEAD, OPTIONS, GET, POST")] [HttpHeader("Access-Control-Allow-Headers", "Accept, Overwrite, Destination, Content-Type, Depth, User-Agent, X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control")] [HttpHeader("Access-Control-Max-Age", "3600")] public abstract class BaseApiController : ApiController {     [HttpGet]     [HttpOptions]     public IEnumerable<foo> GetFooItems(int id)     {         return foo.AsEnumerable();     }     [HttpPost]     [HttpOptions]     public void UpdateFooItem(FooItem fooItem)     {         // NOTE: The fooItem object may or may not         // (probably NOT) be set with actual data.         // If not, you need to extract the data from         // the posted form manually.         if (fooItem.Id == 0) // However you check for default...         {             // We use NewtonSoft.Json.             string jsonString = context.Request.Form.GetValues(0)[0].ToString();             Newtonsoft.Json.JsonSerializer js = new Newtonsoft.Json.JsonSerializer();             fooItem = js.Deserialize<FooItem>(new Newtonsoft.Json.JsonTextReader(new System.IO.StringReader(jsonString)));         }         // Update the set fooItem object.     } } Please note a few specific additions here: * The header attributes at the class level are required.  Note all of those methods and headers need to be specified but we find it works this way so we aren’t touching it. * Web API will actually deserialize the posted data into the object parameter of the called method on occasion but so far we don’t know why it does and doesn’t. * [HttpOptions] is, again, required for the pre-flight check. * The “Access-Control-Allow-Origin” response header should NOT NOT NOT contain an ‘*’. 3. Headers and Methods and Such We had most of this code in place but found that Chrome and Firefox still did not render the data.  Interestingly enough, Fiddler showed that the GET calls succeeded and the JSON data is returned properly.  We learned that among the headers set at the class level, we needed to add “ACCEPT”.  Note that I accidentally added it to methods and to headers.  Adding it to methods worked but I don’t know why.  We added it to headers also for good measure. [HttpHeader("Access-Control-Allow-Methods", "ACCEPT, PROPFIND, PROPPA... [HttpHeader("Access-Control-Allow-Headers", "Accept, Overwrite, Destin... Next Steps That should do it.  If it doesn’t let us know.  What to do next?  * Don’t hardcode the allowed domains.  Note that port numbers and other domain name specifics will cause problems and must be specified.  If this changes do you really want to deploy updated software?  Consider Miguel Figueira’s approach in the following link to writing a custom HttpHeaderAttribute class that allows you to specify the domain names and then you can do it dynamically.  There are, of course, other ways to do it dynamically but this is a clean approach. http://code.msdn.microsoft.com/windowsdesktop/Implementing-CORS-support-a677ab5d


  • Posting to Facebook Page using C# SDK from "offline" app

    - by James Crowley
    If you want to post to a Facebook page using the Facebook Graph API and the Facebook C# SDK from an "offline" app, there are a few steps you should be aware of. First, you need to get an access token that your Windows service or app can permanently use. You can get this by visiting the following URL (all on one line), replacing [ApiKey] with your application's Facebook API key. http://www.facebook.com/login.php?api_key=[ApiKey]&connect_display=popup&v=1.0 &next=http://www.facebook.com/connect/login_success.html&cancel_url=http://www.facebook.com/connect/login_failure.html &fbconnect=true&return_session=true&req_perms=publish_stream,offline_access,manage_pages&return_session=1 &sdk=joey&session_version=3 In the parameters of the URL you get redirected to, this will give you an access key. Note, however, that this only gives you an access key to post to your own profile page. Next, you need to get a separate access key to post to the specific page you want to access. To do this, go to https://graph.facebook.com/[YourUserId]/accounts?access_token=[AccessTokenFromAbove] You can find your user id in the URL when you click on your profile image. On this page, you will then see a list of page IDs and corresponding access tokens for each Facebook page. Using the appropriate pair, you can then use code like this: var app = new Facebook.FacebookApp(_accessToken); var parameters = new Dictionary<string, object> { { "message" , promotionInfo.TagLine }, { "name" , promotionInfo.Title }, { "description" , promotionInfo.Description }, { "picture", promotionInfo.ImageUrl.ToString() }, { "caption" , promotionInfo.TargetUrl.Host }, { "link" , promotionInfo.TargetUrl.ToString() }, { "type" , "link" }, }; app.Post(_targetId + "/feed", parameters); And you're done!


  • The Mystery of the Vanishing Disk Space

    - by Oddthinking
    My disk space is dwindling by about 2GB a day! I only have a few more days before I run out of space. $ df -h Filesystem Size Used Avail Use% Mounted on /dev/sda4 143G 126G 11G 93% / udev 491M 4.0K 491M 1% /dev tmpfs 200M 696K 199M 1% /run none 5.0M 0 5.0M 0% /run/lock none 499M 144K 499M 1% /run/shm /dev/sda2 1.9G 580M 1.2G 33% /tmp /dev/sda1 92M 29M 58M 33% /boot I have been searching for the biggest directories/log files, deleting and compressing. But I am still losing the war. Finally, I realised I have a big misunderstanding: julian@server1:~$ sudo du -h / | tail -n 1 16G / All of my files in / only add up to 16 GB. That leaves 110 GB unaccounted for! Clearly I have a misunderstanding: I thought the '/dev/sda4' line represented all the files visible from '/'. What should I be reading to understand where the other storage has gone? More details: I have an Ubuntu 11.10 server that was set up by data-center staff. It is running my own code (which is fairly prolific with log files, but otherwise doesn't store much stuff on the drive), duplicity for backups (which tends to store a lot of signature files), and various other standard services, like Apache, Nagios, etc. They are very lightly used. It has been up for about 4 months without a reboot. I lied about the du output (simplified it for effect). It also complained about not being able to access GVFS and the du process's own resources. I believe they are irrelevant: du: cannot access `/home/julian/.gvfs': Permission denied du: cannot access `/proc/10841/task/10841/fd/4': No such file or directory du: cannot access `/proc/10841/task/10841/fdinfo/4': No such file or directory du: cannot access `/proc/10841/fd/4': No such file or directory du: cannot access `/proc/10841/fdinfo/4': No such file or directory
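
    A common explanation for df and du disagreeing by this much: space pinned by files that were deleted while a process still holds them open (df counts the blocks, du cannot see the paths). A quick hedged check:

      sudo lsof +L1 | head -n 20   # open files whose on-disk link count is 0
      # restarting whichever daemon holds them (or rotating its logs with
      # copytruncate) releases the space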


  • Development teams do not scale

    - by Matt Watson
    Recently I have been thinking about how development teams don't scale very well. The bigger a team and the product get, the more time the team spends fixing software bugs. This means they spend more time doing troubleshooting and debugging as they grow. The problem is that since developers don't typically have access to production servers, there is a bottleneck in the process when doing production troubleshooting. For a team that has 10 developers, I would guess that 0-2 of them have access to production servers. If that team grows to 20 people, it is probably still the same 0-2 people that have production access. This means that those 2 key people are a bottleneck, and the team does not scale correctly as you add more resources. All those new developers want is to help track down and fix software bugs, but they don't have the visibility to do it. So they end up being less productive and frustrated, because they really want to fix the problems. The people who do have production access end up spending too much of their time doing troubleshooting instead of working on new projects. The solution is to remove the bottlenecks and get those people working on more important tasks. Stackify can solve this problem by giving all the developers read-only access to production servers. This allows them to access the information they need to do troubleshooting on their own.


  • Ditch cPanel / WHM in favour of manual seup

    - by BWRic
    We currently use cPanel / WHM on a reseller account but are looking at getting a dedicated server. My first thought was to duplicate this setup on the dedicated box to allow us to quickly create new accounts. It'll be a managed server, so they'll have set up the LAMP stack. I'm curious whether I actually need cPanel and WHM. We don't use many of the features of cPanel / WHM, just creating accounts and databases; clients do not have FTP access. I'm no sysadmin and come from a Windows / GUI background, but have some knowledge of setting up development servers. WHM: Creating accounts. I presume this sets up the Apache virtual host, FTP access and DNS settings. I've some knowledge of editing the Apache files to create virtual hosts. Am I correct in thinking that as long as the DNS is pointing to the server IP and the virtual host is configured, the server can serve the (PHP) pages? I'm not sure I need per-site FTP access, as only we will have access, so I could have server-wide/htdocs-only access to view all the sites. The company that supplies the dedicated hosts would also provide their own DNS management tool, so I don't need the cPanel one. MySQL: Creating users and databases. We use cPanel to create the MySQL users and databases. As it's a dedicated box and I can have root access, I think this could be replaced by SQLyog for DB management and phpMyAdmin for user management. Do I need cPanel, or can I get by editing a few text files for creating the accounts, then use the MySQL tools for databases? Or am I missing something major with how the sites are configured?
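
    For scale, a hedged sketch of what WHM's "create account" reduces to by hand on a Debian/Ubuntu-style Apache; every name below is a placeholder:

      # /etc/apache2/sites-available/client1.conf:
      #   <VirtualHost *:80>
      #       ServerName client1.example.com
      #       DocumentRoot /var/www/client1
      #   </VirtualHost>
      a2ensite client1 && service apache2 reload
      mysql -u root -p -e "CREATE DATABASE client1db;
        CREATE USER 'client1'@'localhost' IDENTIFIED BY 'changeme';
        GRANT ALL PRIVILEGES ON client1db.* TO 'client1'@'localhost';"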


  • IP failover with 2 nodes on different subnet: cannot ping virtual IP from second node?

    - by quanta
    I'm going to setup redundant failover Redmine: another instance was installed on the second server without problem MySQL (running on the same machine with Redmine) was configured as master-master replication Because they are in different subnet (192.168.3.x and 192.168.6.x), it seems that VIPArip is the only choice. /etc/ha.d/ha.cf on node1 logfacility none debug 1 debugfile /var/log/ha-debug logfile /var/log/ha-log autojoin none warntime 3 deadtime 6 initdead 60 udpport 694 ucast eth1 node2.ip keepalive 1 node node1 node node2 crm respawn /etc/ha.d/ha.cf on node2: logfacility none debug 1 debugfile /var/log/ha-debug logfile /var/log/ha-log autojoin none warntime 3 deadtime 6 initdead 60 udpport 694 ucast eth0 node1.ip keepalive 1 node node1 node node2 crm respawn crm configure show: node $id="6c27077e-d718-4c82-b307-7dccaa027a72" node1 node $id="740d0726-e91d-40ed-9dc0-2368214a1f56" node2 primitive VIPArip ocf:heartbeat:VIPArip \ params ip="192.168.6.8" nic="lo:0" \ op start interval="0" timeout="20s" \ op monitor interval="5s" timeout="20s" depth="0" \ op stop interval="0" timeout="20s" \ meta is-managed="true" property $id="cib-bootstrap-options" \ stonith-enabled="false" \ dc-version="1.0.12-unknown" \ cluster-infrastructure="Heartbeat" \ last-lrm-refresh="1338870303" crm_mon -1: ============ Last updated: Tue Jun 5 18:36:42 2012 Stack: Heartbeat Current DC: node2 (740d0726-e91d-40ed-9dc0-2368214a1f56) - partition with quorum Version: 1.0.12-unknown 2 Nodes configured, unknown expected votes 1 Resources configured. ============ Online: [ node1 node2 ] VIPArip (ocf::heartbeat:VIPArip): Started node1 ip addr show lo: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet 192.168.6.8/32 scope global lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever I can ping 192.168.6.8 from node1 (192.168.3.x): # ping -c 4 192.168.6.8 PING 192.168.6.8 (192.168.6.8) 56(84) bytes of data. 64 bytes from 192.168.6.8: icmp_seq=1 ttl=64 time=0.062 ms 64 bytes from 192.168.6.8: icmp_seq=2 ttl=64 time=0.046 ms 64 bytes from 192.168.6.8: icmp_seq=3 ttl=64 time=0.059 ms 64 bytes from 192.168.6.8: icmp_seq=4 ttl=64 time=0.071 ms --- 192.168.6.8 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3000ms rtt min/avg/max/mdev = 0.046/0.059/0.071/0.011 ms but cannot ping virtual IP from node2 (192.168.6.x) and outside. Did I miss something? PS: you probably want to set IP2UTIL=/sbin/ip in the /usr/lib/ocf/resource.d/heartbeat/VIPArip resource agent script if you get something like this: Jun 5 11:08:10 node1 lrmd: [19832]: info: RA output: (VIPArip:stop:stderr) 2012/06/05_11:08:10 ERROR: Invalid OCF_RESK EY_ip [192.168.6.8] http://www.clusterlabs.org/wiki/Debugging_Resource_Failures Reply to @DukeLion: Which router receives RIP updates? When I start the VIPArip resource, ripd was run with below configuration file (on node1): /var/run/resource-agents/VIPArip-ripd.conf: hostname ripd password zebra debug rip events debug rip packet debug rip zebra log file /var/log/quagga/quagga.log router rip !nic_tag no passive-interface lo:0 network lo:0 distribute-list private out lo:0 distribute-list private in lo:0 !metric_tag redistribute connected metric 3 !ip_tag access-list private permit 192.168.6.8/32 access-list private deny any
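
    A hedged troubleshooting sketch: the whole VIPArip design stands or falls on the /32 RIP advertisement actually reaching, and being accepted by, a router between 192.168.3.x and 192.168.6.x, so that is worth verifying first:

      tcpdump -ni eth0 udp port 520   # on node2: are RIPv2 updates arriving at all?
      ip route get 192.168.6.8        # on node2: which nexthop would be used?
      # on the inter-subnet router, RIP must be enabled and accepting host
      # routes, or the VIP will only ever be reachable on node1 itself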


  • ubuntu eth0 not reconnecting after cable unplugged

    - by Alex
    I'm running kubuntu 9.10 w/ gnome, I have a static IP defined in /etc/network/interfaces When I unplugged my network cable and rebooted, then reconnected the network cable I was not able to connect. I tried using sudo ifup eth0, and then ifconfig and it seemed as though the IP address had been assigned and I was connected, but I wasn't. I then did ifdown eth0, and again ifup eth0. For some reason I'm not able to access the network. Furthermore, I also attempted to connect via wlan, and was able to connect to the wireless network, but cannot "see" the network. I can't transfer data or access the internet or anything on the network including the router. How do I resolve this? topsy@monolyth:~$ ifconfig eth0 Link encap:Ethernet HWaddr 00:1c:25:1c:df:70 inet addr:192.168.1.145 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::21c:25ff:fe1c:df70/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:5720 errors:0 dropped:0 overruns:0 frame:0 TX packets:565 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:378035 (378.0 KB) TX bytes:46832 (46.8 KB) Memory:fe000000-fe020000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:4 errors:0 dropped:0 overruns:0 frame:0 TX packets:4 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:240 (240.0 B) TX bytes:240 (240.0 B) By access the network I mean the local network as well as the internet. topsy@monolyth:~$ ping 192.168.1.1 PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data. 64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=9.14 ms 64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=1.24 ms 64 bytes from 192.168.1.1: icmp_seq=3 ttl=64 time=1.01 ms 64 bytes from 192.168.1.1: icmp_seq=4 ttl=64 time=1.00 ms [snip... all OK, icmp_seq from 5-30, time between 0.981-1.25ms] ^C --- 192.168.1.1 ping statistics --- 30 packets transmitted, 30 received, 0% packet loss, time 29035ms rtt min/avg/max/mdev = 0.971/1.300/9.140/1.458 ms topsy@monolyth:~$ route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 192.168.1.0 * 255.255.255.0 U 0 0 0 eth0 link-local * 255.255.0.0 U 1000 0 0 eth0 default 192.168.1.1 0.0.0.0 UG 100 0 0 eth0 root@monolyth:~# cat /etc/resolv.conf # Generated by NetworkManager
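
    A hedged sketch of the usual recovery steps when a static entry in /etc/network/interfaces and NetworkManager fight over eth0, plus a DNS check, since the resolv.conf shown above is empty apart from the comment (which alone would break anything name-based):

      sudo service network-manager stop             # let ifupdown own eth0
      sudo ifdown eth0 && sudo ifup eth0
      echo "nameserver 192.168.1.1" | sudo tee /etc/resolv.conf
      ping -c1 192.168.1.1 && ping -c1 google.com   # gateway vs. DNS/internet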


  • How to move Mailboxes over from old Exchange 2007 to new EBS 2008 network?

    - by Qwerty
    This q is similar to: http://serverfault.com/questions/39070/how-to-move-exchange-2003-mailbox-or-store-from-2003-to-2007-on-separate-networks Basically I am trying to move our Exchange mailboxes over to a test domain that is hosting EBS 2008 with Exchange 2007. We plan to move as soon as we can once we have our Exchange data over. I have tried moving a DB with mailboxes over, but cannot get it to mount in the new Exchange in any way possible, including mounting it onto a recovery store. From my understanding, the ONLY prerequisite for moving Exchange DBs across is that it must have the same organizational name (unlike previous versions of Exchange). If anyone has any insight as to why I cannot mount and simply reattach the mailboxes, please give me an idea as to what could be wrong. It should be as simple as this. Note that the DBs I have are in a clean state. I cannot use ExMerge because I am not running any mailboxes on 2003. I have also tried using a 32-bit Vista machine with the Export-Mailbox cmdlet to extract mailboxes, but anything I do results in permission errors. I have tried to troubleshoot these with no success. I am running as full admin with the proper Exchange roles, and yet it still gives me access denied errors: Export-Mailbox : MapiExceptionNetworkError: Unable to make admin interface connection to server. (hr=0x80040115, ec=-2147221227) Also some errors show in the management console: get-MailboxDatabase Completed Warning: ERROR: Could not connect to the Microsoft Exchange Information Store service on server TATOOINE.baytech.local. One of the following problems may be occurring: 1- The Microsoft Exchange Information Store service is not running. 2- There is no network connectivity to server TATOOINE.baytech.local. 3- You do not have sufficient permissions to perform this command. The following permissions are required to perform this command: Exchange View-Only Administrator and local administrators group for the target server. 4- Credentials have been cached for an unpriviledged user. Try removing the entry for this server from Stored User Names and Passwords. Why I have to use a 32-bit machine to export a simple .pst file is beyond me... So yeah, I am now out of ideas and any help would be great! Thanks in advance.
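
    For reference, a hedged sketch of the export path that eventually has to work (Exchange 2007 cmdlet, run from the Exchange Management Shell on the 32-bit machine, which needs Outlook installed for PST output, as an account holding the Exchange Server Administrator role plus local admin on the mailbox server; the user and path are placeholders):

      Export-Mailbox -Identity someuser -PSTFolderPath C:\PSTs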



  • WSUS is not using Akamai CDN for syncronisation source

    - by Geekman
    I've just installed a WSUS onto our network, and I'm currently doing the initial sync. I've found that WSUS does not seem to be talking to an Akamai cache, but rather to MS directly. This is contrary to what I've always thought regarding Windows Update traffic. Here is a tcpdump of our WSUS server doing the initial sync; as you can see, it's speaking with 65.55.194.221. For me to speak to this IP, I have to go over international transit links, which is of course not ideal. 18:42:31.279757 IP 65.55.194.221.https > XXXX.XXXX.XXXX.XXXX.50888: Flags [.], seq 4379374:4380834, ack 289611, win 256, length 1460 18:42:31.279759 IP 65.55.194.221.https > XXXX.XXXX.XXXX.XXXX.50888: Flags [.], seq 4380834:4382294, ack 289611, win 256, length 1460 18:42:31.279762 IP 65.55.194.221.https > XXXX.XXXX.XXXX.XXXX.50888: Flags [.], seq 4382294:4383754, ack 289611, win 256, length 1460 18:42:31.279764 IP 65.55.194.221.https > XXXX.XXXX.XXXX.XXXX.50888: Flags [P.], seq 4383754:4384144, ack 289611, win 256, length 390 18:42:31.279793 IP XXXX.XXXX.XXXX.XXXX.50888 > 65.55.194.221.https: Flags [.], ack 4369154, win 23884, length 0 18:42:31.279888 IP XXXX.XXXX.XXXX.XXXX.50888 > 65.55.194.221.https: Flags [.], ack 4377914, win 23884, length 0 18:42:31.280015 IP XXXX.XXXX.XXXX.XXXX.50888 > 65.55.194.221.https: Flags [.], ack 4384144, win 23884, length 0 And yet, if I ping download.windowsupdate.com it seems to resolve to a local (national) Akamai node just fine: root@some-node:~# ping download.windowsupdate.com PING a26.ms.akamai.net (210.9.88.48) 56(84) bytes of data. 64 bytes from a210-9-88-48.deploy.akamaitechnologies.com (210.9.88.48): icmp_req=1 ttl=59 time=1.02 ms 64 bytes from a210-9-88-48.deploy.akamaitechnologies.com (210.9.88.48): icmp_req=2 ttl=59 time=1.10 ms Why is this? And how can I change it (if possible)? I know that I can manually specify a WSUS source to sync with instead of picking the default MS Update, like I currently have... But it seems like I shouldn't have to do this. NOTE: I haven't confirmed whether a WUA speaks with Akamai; I'm just looking at WSUS, as all WUAs will use our internal WSUS from now on. We'll be looking to join an IX shortly with the hope of peering with an Akamai cache and having very fast access to Windows Updates. Before I let this drive my motivation for an IX at all, I first want to confirm it's actually possible for WSUS to speak with an Akamai cache. I know this is somewhat networking-related, but I feel like it has more to do with WSUS than anything, so someone who knows WSUS better than me will likely be able to figure this out.
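
    One distinction that may explain the capture, offered as a hedged note: WSUS synchronization (the metadata catalog) talks to Microsoft's update endpoints directly, while only the update content files are fetched from the CDN-fronted download hostname; if that is right, the sync traffic above will never hit Akamai no matter what. A quick comparison from the WSUS server:

      nslookup download.windowsupdate.com
      nslookup 65.55.194.221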

