Search Results

Search found 11316 results on 453 pages for 'ip geolocation'.


  • NIC light is turned off after boot on Redhat 4.6 server

    - by hoffmandirt
    I have a 2950 blade server set up with Red Hat 4.6 installed. I cannot get the NIC to work properly after reinstalling Linux. I activated the NIC, but the NIC light will not turn on when I plug the network cable into the hub. The status light on the hub will not turn on either. If I run ifconfig, the NIC status is UP. Also, I can ping the IP address that I assign to the Linux machine, but I can't ping anything else that is plugged into the hub. When I reboot the system, the NIC light will stay on until the system fully boots and then it will turn off again. Is there something else that I need to do to get the NIC working? It appears to be disabled even though ifconfig says that it is UP. Maybe I need to configure something within the blade server (iDRAC)?
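
    A hedged diagnostic sketch, assuming the interface is eth0: ethtool (or the older mii-tool) reports whether the driver sees carrier at all, independent of the IP configuration.

        # does the driver report link?
        ethtool eth0 | grep "Link detected"
        mii-tool eth0
        # confirm the interface is set to come up at boot
        cat /etc/sysconfig/network-scripts/ifcfg-eth0

    If "Link detected: no" persists with the cable plugged in, the fault is below the IP layer (cable, hub port, or the blade chassis pass-through), which would match a link light that goes out even while ifconfig shows UP.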

    Read the article

  • AXFR problem using Gradwell secondary DNS

    - by Roaders
    Hi All

    I use Gradwell.com to provide secondary DNS, but I keep getting e-mails along the lines of the following saying that it's not working:

        You have asked us to provide a secondary DNS service for the following domain(s) Unfortunately, the primary DNS server(s) you specified are not permitting the necessary zone transfers from our servers, or they are not answering "SOA" queries for your domain correctly.

    I have gone through the support procedure and they weren't that helpful. They have suggested the following:

        Our secondline team have suggested setting the AXFR to use another machine. This will ensure that the transfer is not locked down to one machine and should allow any machine to make the request

    I don't really know what AXFR is, and I only have 1 production machine, so I can't set the AXFR to use another one! In previous support correspondence we confirmed that I am allowing transfers to the correct IP and that I have the correct ports open on the firewall. I am running Windows Server 2003. What can I do to try and get these zone transfers working? Thanks
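
    You can reproduce Gradwell's two checks yourself with dig; a hedged sketch, with example.com / ns1.example.com standing in for the real zone and primary:

        # does the primary answer SOA queries for the zone?
        dig @ns1.example.com example.com SOA
        # will it hand out a zone transfer? (AXFR runs over TCP port 53)
        dig @ns1.example.com example.com AXFR

    Run the AXFR query from an outside host if possible; "Transfer failed." there, while the same query works from the LAN, points at the firewall or at the Windows DNS zone-transfer ACL (zone properties, Zone Transfers tab) still being restricted to a single address.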

    Read the article

  • Port Forwarding a Specific Port (e.g. 22)

    - by Jerry Blair
    I'm still confused about establishing an SSH connection (port 22) between two computers on different internal networks. For example: I am on my computer with internal IP address IIP-1, connected to my router RT-1. There are 10 IIPs connected to RT-1. I want to establish an SSH connection to IIP-3 which is connected to router RT-2. There are 10 IIPs connected to RT-2. At any time, there can be multiple SSH connections between IIPs on RT-1 and RT-2. Since I only have port 22 available, I don't know which SSH session is talking between which IIPs. I looked at a couple of similar questions but am still unclear on the solution. Thanks much, Jerry
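
    For what it's worth, the routers keep sessions apart by the (source IP, source port) pair, so several simultaneous SSH connections through port 22 are not ambiguous. A hedged sketch with hypothetical addresses, showing the usual way to reach one specific internal host:

        # on RT-2 (hypothetical public address 203.0.113.2), configure:
        #   external TCP port 2203  ->  IIP-3, port 22
        # then, from any machine behind RT-1:
        ssh -p 2203 user@203.0.113.2

    Each internal host behind RT-2 gets its own external port (2201, 2202, ...), and reply traffic is matched back to the right outbound session by the NAT table on each router.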

    Read the article

  • How to reset password for Dell PowerConnect 2708?

    - by oherrala
    I do as the user manual says and press the "Managed Mode" button to get into unmanaged mode, and then press the "Managed Mode" button again for managed mode. This should reset the device to factory defaults and username "Admin" with no password. However, the device resets (I think) and I can access the web console at IP 192.168.2.1, but the username and password don't work. Maybe the device doesn't reset after all, or maybe the username/password was changed in some firmware upgrade? What should I do to get into the management of this switch?

    Read the article

  • Routing weirdness - traceroute 'vanishes' en route

    - by The Journeyman geek
    I'm attempting to set up one of my boxes as a server (again), but I'm having some odd connection issues: the box itself connects fine to the internet, but trying to connect to my external IP address seems to result in the trace getting 'lost' partway. http://pastebin.com/HCQAGbvn - this is a traceroute from another system that's connected to another ISP. StarHub is my own ISP, while I also have access to a system on SingTel. I'm wondering if my ISP is messing around with routing, or if something very odd is going on. As you'll note, the traceroute doesn't reach me. If it helps, I use a dd-wrt router.
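
    To separate ICMP filtering from a genuine routing problem, it can help to probe with TCP and capture at the destination; a hedged sketch, with 203.0.113.10 standing in for the external address:

        # TCP traceroute (needs root; some ISPs drop UDP/ICMP probes but pass TCP SYNs)
        traceroute -T -p 80 203.0.113.10
        # meanwhile, on the server behind the dd-wrt router:
        tcpdump -n 'icmp or tcp port 80'

    If nothing shows up in the capture, the packets are being dropped upstream rather than arriving and going unanswered.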

    Read the article

  • VFS: file-max limit 1231582 reached

    - by Rick Koshi
    I'm running a Linux 2.6.36 kernel, and I'm seeing some random errors. Things like

        ls: error while loading shared libraries: libpthread.so.0: cannot open shared object file: Error 23

    Yes, my system can't consistently run an 'ls' command. :( I note several errors in my dmesg output:

        # dmesg | tail
        [2808967.543203] EXT4-fs (sda3): re-mounted. Opts: (null)
        [2837776.220605] xv[14450] general protection ip:7f20c20c6ac6 sp:7fff3641b368 error:0 in libpng14.so.14.4.0[7f20c20a9000+29000]
        [4931344.685302] EXT4-fs (md16): re-mounted. Opts: (null)
        [4982666.631444] VFS: file-max limit 1231582 reached
        [4982666.764240] VFS: file-max limit 1231582 reached
        [4982767.360574] VFS: file-max limit 1231582 reached
        [4982901.904628] VFS: file-max limit 1231582 reached
        [4982964.930556] VFS: file-max limit 1231582 reached
        [4982966.352170] VFS: file-max limit 1231582 reached
        [4982966.649195] top[31095]: segfault at 14 ip 00007fd6ace42700 sp 00007fff20746530 error 6 in libproc-3.2.8.so[7fd6ace3b000+e000]

    Obviously, the file-max errors look suspicious, being clustered together and recent.

        # cat /proc/sys/fs/file-max
        1231582
        # cat /proc/sys/fs/file-nr
        1231712    0    1231582

    That also looks a bit odd to me, but the thing is, there's no way I have 1.2 million files open on this system. I'm the only one using it, and it's not visible to anyone outside the local network.

        # lsof | wc
          16046  148253 1882901
        # ps -ef | wc
            574    6104   44260

    I saw some documentation saying:

        file-max & file-nr: The kernel allocates file handles dynamically, but as yet it doesn't free them again. The value in file-max denotes the maximum number of file handles that the Linux kernel will allocate. When you get lots of error messages about running out of file handles, you might want to increase this limit.

        Historically, the three values in file-nr denoted the number of allocated file handles, the number of allocated but unused file handles, and the maximum number of file handles. Linux 2.6 always reports 0 as the number of free file handles -- this is not an error, it just means that the number of allocated file handles exactly matches the number of used file handles.

        Attempts to allocate more file descriptors than file-max are reported with printk, look for "VFS: file-max limit reached".

    My first reading of this is that the kernel basically has a built-in file descriptor leak, but I find that very hard to believe. It would imply that any system in active use needs to be rebooted every so often to free up the file descriptors. As I said, I can't believe this would be true, since it's normal to me to have Linux systems stay up for months (even years) at a time. On the other hand, I also can't believe that my nearly-idle system is holding over a million files open.

    Does anyone have any ideas, either for fixes or further diagnosis? I could, of course, just reboot the system, but I don't want this to be a recurring problem every few weeks. As a stopgap measure, I've quit Firefox, which was accounting for almost 2000 lines of lsof output (!) even though I only had one window open, and now I can run 'ls' again, but I doubt that will fix the problem for long. (edit: Oops, spoke too soon. By the time I finished typing out this question, the symptom was/is back.) Thanks in advance for any help.

    And another update: My system was basically unusable, so I decided I had no option but to reboot. But before I did, I carefully quit one process at a time, checking /proc/sys/fs/file-nr after each termination. I found that, predictably, the number of open files gradually went down as I closed things down. Unfortunately, it wasn't a large effect. Yes, I was able to clear up 5000-10000 open files, but there were still over 1.2 million left. I shut down just about everything: all interactive shells except for the one ssh I left open to finish closing down, httpd, even nfs service. Basically everything in the process table that wasn't a kernel process, and there were still an appalling number of files apparently left open. After the reboot, I found that /proc/sys/fs/file-nr showed about 2000 files open, which is much more reasonable. Starting up 2 Xvnc sessions as usual, along with the dozen or so monitoring windows I like to keep open, brought the total up to about 4000 files. I can see nothing wrong with that, of course, but I've obviously failed to identify the root cause. I'm still looking for ideas, since I definitely expect it to happen again.

    And another update, the next day: I watched the system carefully, and discovered that /proc/sys/fs/file-nr showed a growth of about 900 open files per hour. I shut down the system's only NFS client for the night, and the growth stopped. Mind you, it didn't free up the resources, but it did at least stop consuming more. Is this a known bug with NFS? I'll be bringing the NFS client back online today, and I'll narrow it down further. If anyone is familiar with this behavior, feel free to jump in with "Yeah, NFS4 has this problem, go back to NFS3" or something like that.
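
    One way to pin the growth on the NFS path is to sample the allocated-handle count over time and compare it against when the client is connected; a minimal sketch:

        # append a timestamped reading once a minute
        while true; do
            echo "$(date '+%F %T') $(cat /proc/sys/fs/file-nr)"
            sleep 60
        done >> /var/log/file-nr.log

    The first field of file-nr is the allocated-handle count; a steady ~900/hour climb that stops when the NFS client unmounts would confirm the correlation described above.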

    Read the article

  • "Request not supported" in IPCONFIG (WinXP SP3)

    - by pablog
    In a customer PC (Windows XP SP3), suddenly the network went down: the network adapter appears with an error mark. I replaced the network card, but the new one does the same thing. When I enter IPCONFIG, XP shows this error (in standard and safe mode):

        Internal error occurred
        Request not supported
        Unable to query host name

    If I start the system with a boot cd the PC runs fine, so the problem seems to be in the XP installation. I tried:

        - uninstalling and reinstalling the network card in the Device Manager
        - disabling and re-enabling the card
        - netsh int ip reset
        - netsh winsock reset catalog
        - a couple of "reset" programs (WinsockxpFix.exe, etc)

    with no luck. Is there any way to fix it without reinstalling XP? TIA, Pablo

    Read the article

  • FreeBSD: Samba performance over GBit-Ethernet

    - by Axel Gneiting
    I'm using a FreeBSD NAS with RAID-Z. I can read ~300MB/s from the ZFS disks to /dev/null on the box, but only get about 50MB/s over GBit-Ethernet with SMB to Windows 7 (Samba 3.5.6). Both systems have Intel-PCIe-NICs and are connected directly. Samba is configured to use AIO and I already tried to tune TCP/IP:

        kern.ipc.maxsockbuf=16777216
        net.inet.tcp.sendspace=1048576
        net.inet.tcp.recvspace=1048576
        net.inet.tcp.sendbuf_max=8388608
        net.inet.tcp.recvbuf_max=8388608
        net.inet.tcp.delayed_ack=0

    Any ideas what's causing the bottleneck? I think the link should handle 100 MB/s easily.
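
    Before digging deeper into the network stack, it may be worth ruling out a few Samba-side knobs; a hedged smb.conf sketch (the values are starting points, not tuned numbers):

        [global]
            use sendfile = yes
            aio read size = 16384
            aio write size = 16384
            socket options = TCP_NODELAY SO_RCVBUF=131072 SO_SNDBUF=131072

    Measuring the raw link with iperf between the two hosts also splits the problem in half: if iperf tops out near 50 MB/s too, the bottleneck is the NIC/driver/TCP path rather than Samba.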

    Read the article

  • How do we keep Active Directory resilient across multiple sites?

    - by Alistair Bell
    I handle much of the IT for a company of around 100 people, spread across about five sites worldwide. We're using Active Directory for authentication, mostly served to Linux (CentOS 5) systems via LDAP. We've been suffering through a spate of events where the IP tunnel between the two major sites goes down and the secondary domain controller at one site can't contact the primary domain controller at the other. It seems that the secondary domain controller starts denying user authentication within minutes of losing connectivity to the primary. How do we make the secondary domain controller more resilient to downtime? Is there a way for it to cache the entire directory and/or at least keep enough information locally to survive a multi-hour disconnection? (We're all in a single organizational unit if that makes any difference.) (The servers here are Windows Server 2003; don't assume that we set this up correctly. I'm a software engineer, not an IT specialist.)

    Read the article

  • Mount TMPFS instead of ro /dev

    - by schiggn
    I am working on an ARM-based embedded system with a custom Debian Linux based on kernel 2.6.31. In the final system, the root file system is stored as squashfs on flash. Now, the folder /dev is created by udev, but since there is no hotplugging functionality needed and boot time is critical, I wanted to delete udev and "hard code" the /dev folder (read here, page 5). Because I still need to change parameters of the devices (with ioctl / sysfs), that approach does not work for me in this case. So I thought of mounting a tmpfs on /dev and changing the parameters there. Is this possible? And how to do it best? My approach would be:

        1. delete /dev from the RFS
        2. create a tar containing basic devices
        3. mount tmpfs on /dev
        4. untar the tar file into /dev
        5. change parameters

    Could this work? Do you see any problems? I found out that you can mount on top of an already mounted mount point; is it somehow possible just to take the data with you while mounting the new file system? If so, that would be very convenient! Thanks

    Update: I just tried that out, but I'm stuck at a certain point. I packed all my devices into devices.tar, put it into /usr of my squashfs, and added the following lines to mountkernfs.sh, which is executed right after INIT.

        #mount /dev on tmpfs
        echo -n "Mounting /dev on tmpfs..."
        mount -o size=5M,mode=0755 -t tmpfs tmpfs /dev
        mknod -m 600 /dev/console c 5 1
        mknod -m 600 /dev/null c 1 3
        echo "done."
        echo -n "Populating /dev..."
        tar -xf /usr/devices.tar -C /dev
        echo "done."

    This works fine on the version over NFS; if I place printf's in the code, I can see it executing, and if I comment out the extracting part, it complains about missing devices.

    Booting OK:

        mmc0: new high speed SDHC card at address 0007
        mmcblk0: mmc0:0007 SD04G 3.67 GiB
        mmcblk0: p1
        IP-Config: Unable to set interface netmask (-22).
        Looking up port of RPC 100003/2 on 192.168.1.234
        Looking up port of RPC 100005/1 on 192.168.1.234
        VFS: Mounted root (nfs filesystem) on device 0:14.
        Freeing init memory: 136K
        INIT: version 2.86 booting
        Mounting /dev on tmpfs...done.
        Populating /dev...done.
        Initializing /var...done.
        Setting the system clock.
        System Clock set to: Thu Sep 13 11:26:23 UTC 2012.
        INIT: Entering runlevel: 2
        UBI: attaching mtd8 to ubi0

    Commenting out the extraction of the tar:

        mmc0: new high speed SDHC card at address 0007
        mmcblk0: mmc0:0007 SD04G 3.67 GiB
        mmcblk0: p1
        IP-Config: Unable to set interface netmask (-22).
        Looking up port of RPC 100003/2 on 192.168.1.234
        Looking up port of RPC 100005/1 on 192.168.1.234
        VFS: Mounted root (nfs filesystem) on device 0:14.
        Freeing init memory: 136K
        INIT: version 2.86 booting
        Mounting /dev on tmpfs...done.
        Populating /dev...done.
        Initializing /var...done.
        Setting the system clock.
        Cannot access the Hardware Clock via any known method.
        Use the --debug option to see the details of our search for an access method.
        Unable to set System Clock to: Thu Sep 13 12:24:00 UTC 2012 ... (warning).
        INIT: Entering runlevel: 2
        libubi: error!: cannot open "/dev/ubi_ctrl"

    So far so good. But if I pack the whole story into a squashfs and boot from there, it acts strangely. It tells me while booting that it is unable to open an initial console, and it throws errors on mounting the UBIFS devices, but finally provides a login anyway. On top of that, my echo's are not executed. If I then log in, /dev is mounted as tmpfs as desired and all the devices reside inside. When I redo the "mount" command to mount the UBIFS partitions, it executes without problem and is usable.

    From squashfs:

        VFS: Mounted root (squashfs filesystem) readonly on device 31:15.
        Freeing init memory: 136K
        Warning: unable to open an initial console.
        mmc0: new high speed SDHC card at address 0007
        mmcblk0: mmc0:0007 SD04G 3.67 GiB
        mmcblk0: p1
        UBIFS error (pid 484): ubifs_get_sb: cannot open "ubi1_0", error -19

    Additionally, a part of the rest of the boot scripts is still executed, but not all of them. Does anyone have a clue why? Other question: is 5 MB enough/too much for /dev?
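
    On the "take the data with you" question: a hedged alternative to the tar file, assuming the squashfs image still carries a static /dev, is to copy it into a fresh tmpfs and swap that in with mount --move:

        # build the new /dev off to the side
        mount -t tmpfs -o size=5M,mode=0755 tmpfs /mnt/newdev
        cp -a /dev/. /mnt/newdev/          # copy the existing device nodes
        mount --move /mnt/newdev /dev      # swap the writable /dev into place

    That keeps whatever nodes the read-only image already has (console and null included), so the mknod calls are only needed for anything missing from the image.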

    Read the article

  • Multiple Subnets on home network... would this work?

    - by rockinthesixstring
    We are looking at renting out the basement suite in our home and want to offer internet as part of the package. I however do not want the downstairs tenant to have access to our network (home office = private data). We currently have a pfSense firewall as our gateway, and a Windows Server 2003 box is doing our primary DHCP (192.168.0.0/24) and DNS. Here's what I'd like to do: I'd like to set up another subnet on the DHCP server (192.168.1.0/24), and hook in another wireless router (as access point only) addressed as 192.168.1.1... from there the router will hook into our primary switch and then out through the firewall. Will Server 2003 (if I add a 192.168.1.x/24 IP to the NIC) serve DHCP to the devices that connect to the new router, and will it isolate that from our network? Thanks in advance... I'm very new to multiple subnets.

    Read the article

  • Redhat doesn't set my desired hostname on reboot

    - by tomdee
    I have a Red Hat (EL5) server that I need to change the hostname on. I'm trying to put it back into a known state to help with server provisioning activities. As part of changing the hostname, I'm updating /etc/sysconfig/network and /etc/hosts. I also have an explicit call to hostname. My desired state is that the server thinks its hostname is "localhost", and a call to "hostname" returns "localhost". The problem I'm having is that when I reboot, the hostname is reverted to "localhost.companyname.com", which is not what I want. How do I ensure that the hostname is set up as just "localhost" when I reboot?

    My /etc/sysconfig/network file contains:

        NETWORKING=yes
        HOSTNAME=localhost
        GATEWAY=123.123.123.123  #I do have a proper IP address here

    My /etc/hosts file contains:

        127.0.0.1   localhost.localdomain localhost
        172.21.1.1  localhost.companyname.com localhost
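
    One hedged lead: on EL5 the stock ifup-post script treats a hostname of "localhost" or "localhost.localdomain" as unset and replaces it with the result of a reverse DNS lookup on the interface address, which would explain the reversion to localhost.companyname.com. Worth checking:

        # does ifup-post rewrite the hostname after the interface comes up?
        grep -n -B2 -A4 localhost /etc/sysconfig/network-scripts/ifup-post
        # and is any interface file pinning a name of its own?
        grep -i hostname /etc/sysconfig/network-scripts/ifcfg-*

    If that code path is the culprit, removing the 172.21.1.1 PTR record (or short-circuiting that block of the script) should keep the name at "localhost" across reboots.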

    Read the article

  • Weighted round robins via TTL - possible?

    - by Joe Hopfgartner
    I currently use DNS round robin for load balancing, which works great. The records look like this (I have a TTL of 120 seconds):

        ;; ANSWER SECTION:
        orion.2x.to.        116    IN    A    80.237.201.41
        orion.2x.to.        116    IN    A    87.230.54.12
        orion.2x.to.        116    IN    A    87.230.100.10
        orion.2x.to.        116    IN    A    87.230.51.65

    I learned that not every ISP / device treats such a response the same way. For example, some DNS servers rotate the addresses randomly or always cycle through them. Some just propagate the first entry, others try to determine which is best (regionally near) by looking at the IP address. However, if the userbase is big enough (spread over multiple ISPs etc.) it balances pretty well. The discrepancy from the highest to the lowest loaded server hardly ever exceeds 15%.

    However, now I have the problem that I am introducing more servers into the system, and they don't all have the same capacities. I currently only have 1 Gbps servers, but I want to work with 100 Mbit and also 10 Gbps servers too. So what I want is to introduce a server with 10 Gbps with a weight of 100, a 1 Gbps server with a weight of 10, and a 100 Mbit server with a weight of 1. I used to add servers twice to bring more traffic to them (which worked nicely; the bandwidth almost doubled). But adding a 10 Gbit server 100 times to DNS is a bit ridiculous.

    So I thought about using the TTL. If I give server A a 240-second TTL and server B only 120 seconds (which is about the minimum to use for round robin, as a lot of DNS servers clamp to 120 if a lower TTL is specified... so I have heard), I think something like this should occur in an ideal scenario:

        First 120 seconds:
        50% of requests get server A -> keep it for 240 seconds
        50% of requests get server B -> keep it for 120 seconds

        Second 120 seconds:
        50% of requests still have server A cached -> keep it for another 120 seconds
        25% of requests get server A -> keep it for 240 seconds
        25% of requests get server B -> keep it for 120 seconds

        Third 120 seconds:
        25% will get server A (from the 50% of server A that now expired) -> cache 240 sec
        25% will get server B (from the 50% of server A that now expired) -> cache 120 sec
        25% will have server A cached for another 120 seconds
        12.5% will get server B (from the 25% of server B that now expired) -> cache 120 sec
        12.5% will get server A (from the 25% of server B that now expired) -> cache 240 sec

        Fourth 120 seconds:
        25% will have server A cached -> cache for another 120 secs
        12.5% will get server A (from the 25% of B that now expired) -> cache 240 secs
        12.5% will get server B (from the 25% of B that now expired) -> cache 120 secs
        12.5% will get server A (from the 25% of A that now expired) -> cache 240 secs
        12.5% will get server B (from the 25% of A that now expired) -> cache 120 secs
        6.25% will get server A (from the 12.5% of B that now expired) -> cache 240 secs
        6.25% will get server B (from the 12.5% of B that now expired) -> cache 120 secs
        12.5% will have server A cached -> cache another 120 secs

    I think I lost something at this point, but I think you get the idea... As you can see, this gets pretty complicated to predict and it will for sure not work out like this in practice. But it should definitely have an effect on the distribution!

    I know that weighted round robin exists and is controlled by the authoritative server. It just cycles through DNS records when responding and returns DNS records with a set probability that corresponds to the weighting. My DNS server does not support this, and my requirements are not that precise. If it doesn't weight perfectly it's okay, but it should go in the right direction. I think using the TTL field could be a more elegant and easier solution - and it doesn't require a DNS server that controls this dynamically, which saves resources - which is in my opinion the whole point of DNS load balancing vs hardware load balancers.

    My question now is: are there any best practices / methods / rules of thumb to weight round robin distribution using the TTL attribute of DNS records?

    Edit: The system is a forward proxy server system. The amount of bandwidth (not requests) exceeds what one single server with Ethernet can handle. So I need a balancing solution that distributes the bandwidth to several servers. Are there any alternative methods to using DNS? Of course I can use a load balancer with fibre channel etc., but the costs are ridiculous, and it also only increases the width of the bottleneck and does not eliminate it. The only thing I can think of is anycast (is it anycast or multicast?) IP addresses, but I don't have the means to set up such a system.
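
    For reference, this is roughly what a zone with per-record TTLs would look like (hypothetical addresses from the documentation range):

        ; a sketch, not a recommendation
        orion.2x.to.    240    IN    A    203.0.113.1   ; 10 Gbps box, cached longer
        orion.2x.to.    120    IN    A    203.0.113.2   ; 1 Gbps box
        orion.2x.to.    120    IN    A    203.0.113.3   ; 1 Gbps box

    One caveat that works against the whole scheme: RFC 2181 section 5.2 declares differing TTLs within a single RRset an error and tells resolvers to treat the records as if they all carried the same (typically the lowest) TTL, so many caches will flatten the 240 down to 120 and the weighting effect largely disappears.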

    Read the article

  • Mixed network, Linux-to-Linux hostname resolution issues

    - by James
    At work we have a Windows SBS domain at the heart of our network, which is otherwise all Windows PCs. The domain controller is acting as a DNS server for these computers. I have recently added some personal-use Linux machines to the network, without joining them to the domain. I have set up Samba with "wins server" pointing to the domain controller, which lets the Windows boxes resolve the Linux hostnames just fine. I also have resolvconf set up with the domain controller as a nameserver and the local domain as a searched domain, which lets the Linux boxes resolve the Windows hostnames just fine. However, the Linux boxes will not resolve other Linux hostnames at all. Given that I don't have control over the DNS server (I am not the network admin) and that at least one of the Linux boxes is not an always-on machine and is likely to change its LAN IP frequently (via DHCP), what service am I missing to make their hostnames visible to each other?
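
    One hedged option that sidesteps the DC entirely is mDNS between the Linux boxes (Debian/Ubuntu-style package names assumed):

        # on each Linux box
        sudo apt-get install avahi-daemon libnss-mdns
        # then resolve peers as <hostname>.local
        ping otherbox.local

    libnss-mdns hooks into /etc/nsswitch.conf (hosts: files mdns4_minimal [NOTFOUND=return] dns), so .local lookups work for every application without touching the domain's DNS, and they follow DHCP address changes automatically.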

    Read the article

  • private address in traceroute results

    - by misteryes
    I use traceroute to check paths to a remote host, and I notice that there are some private IPs, like 10.230.10.1:

        bash-4.0# traceroute -T 132.227.62.122
        traceroute to 132.227.62.122 (132.227.62.122), 30 hops max, 60 byte packets
         1  194.199.68.161 (194.199.68.161)  1.103 ms  1.107 ms  1.097 ms
         2  sw-ptu.univ.run (10.230.10.1)  1.535 ms  1.625 ms  2.172 ms
         3  sw-univ-gazelle.univ.run (10.10.20.1)  6.891 ms  6.937 ms  6.927 ms
         4  10.10.5.6 (10.10.5.6)  1.544 ms  1.517 ms  1.518 ms

    Why are there private addresses near the host? What purpose do these private addresses serve? I mean, why do they put the public IP behind private IPs? Thanks!

    Read the article

  • Exposing a WebServer behind a firewall without Port Forwarding

    - by pbreault
    We are deploying web applications in Java using Tomcat on client machines across the country. Once they are installed, we want to allow remote access to these web applications through a central server, but we do not want our clients to have to open ports on their routers. Is there a way to tunnel the HTTP traffic so that people connected to the central server can access the web applications that are behind a firewall? The central server has a static IP address and we have full control over it. Right now it is a Windows box, but it could be changed to a Linux box if necessary. Our clients are running Windows XP and up. We don't need to access the filesystem; we only want to access the web application through a browser. We have looked at reverse SSH tunneling, but it has a scaling problem since every packet would have to pass through the central server.
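
    For reference, the reverse tunnel being weighed would look something like this per client (hypothetical names; Tomcat assumed on 8080):

        # on each client, dial out and publish the local webapp on the central box:
        ssh -N -R 8001:localhost:8080 tunnel@central.example.com
        # central's sshd_config needs:
        #   GatewayPorts yes    (so the forwarded port is reachable beyond loopback)

    Each client claims its own remote port (8001, 8002, ...), and the scaling concern stands: every byte traverses the central server, which is inherent to any relay scheme that avoids inbound ports at the client sites.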

    Read the article

  • 3CX behind UT7.1 using a callcentric.com SIP account

    - by Corey
    Has anyone had any luck with getting 3CX working behind UT7.1 with a SIP account from callcentric.com? I am willing to reset my current UT box back to defaults, and start from there. I have a static public IP assigned to the external interface. My internal addressing is 192.168.76.0. My 3CX box has 192.168.76.17. Would anyone be willing to give me a step by step of changes to make in UT / 3CX? I currently have my UT box unplugged, and have replaced it with a Linksys unit. I have port forwarding set up for:

        TCP/UDP 5060    -> 192.168.76.17
        UDP 9000-9049   -> 192.168.76.17

    ... and everything works great. I also have additional external IPs available if that helps.

    Read the article

  • Discover MAC address

    - by Kami
    I have to set up a bunch of servers! I need to discover their MAC addresses in the following situation:

        MacBookPro >----------< Server

    I'm directly connected (not behind a router/switch) to the server. I have no clue about the IP address the server is using in its default setup. I can't use a display connected to the server to view its network card config. How can I discover the MAC address of the server's network card? I'm looking for a command line tool. If something exists in MacPorts, that's also OK!
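
    A hedged sketch using tcpdump (shipped with OS X; assumes the wired interface is en0): most NICs announce themselves with ARP or DHCP broadcasts shortly after link-up, and the -e flag prints the source MAC.

        # watch for ARP and DHCP traffic from the directly attached server
        sudo tcpdump -i en0 -e -n 'arp or (udp and port 67)'

    Unplug/replug the cable (or power-cycle the server) to provoke a gratuitous ARP or DHCP DISCOVER; the Ethernet header in the capture gives you the server's MAC, and usually its self-assigned IP as well.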

    Read the article

  • Routing to a Terminal Services Cluster

    - by Dave
    I am trying to connect to a load-balanced Windows 2008 R2 cluster using Remote Desktop Services. I have no trouble connecting to the servers' IP addresses (.253.16 and .253.17) or the cluster address (.253.20) from inside the subnet (.253). The trouble is when I try to connect from the other subnet (.251). I can remote to the other non-clustered servers (.253.12 and .253.15) inside the .253 subnet from .251 without an issue. I receive a ping reply from the cluster and the other servers when I am on the .251 subnet. But when I try to connect via remote desktop, it times out, and only to the IPs on the cluster (.20, .17, .16). My ASA 5510 handling the routing reports this message in the log:

        Deny TCP (no connection) from 192.168.251.2/4283 to 192.168.253.16/3389 flag FIN PSH ACK

    Here is a picture if it helps: http://dl.dropbox.com/u/4217864/terminal%20server.jpg Thanks for any help

    Read the article

  • xauth error with ssh X Forwarding

    - by bdk
    From my (Debian) desktop machine, I am trying to ssh into a Debian server with ssh -X remote-ip. After logging into the remote host, I get:

        /usr/bin/X11/xauth:  creating new authority file /root/.Xauthority
        /usr/bin/X11/xauth: (stdin):1:  bad display name "unix:10.0" in "remove" command
        /usr/bin/X11/xauth: (stdin):2:  bad display name "unix:10.0" in "add" command

    And the X forwarding doesn't work. From my desktop I can ssh -X into other Debian servers and it works fine. I found a lot of threads discussing similar issues on Google, but they all seem to fade out without a solution, and the simple things suggested there, like exporting DISPLAY or setting xhost +, don't seem to make a difference.
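
    Since the failure happens on the server side (xauth rejecting the display name it is handed), a hedged first check is the remote sshd configuration:

        # on the problem server
        grep -iE 'x11forwarding|x11uselocalhost|xauthlocation' /etc/ssh/sshd_config
        # sane defaults look like:
        #   X11Forwarding yes
        #   X11UseLocalhost yes

    Also worth ruling out: a stray DISPLAY exported in the remote root's shell init files, and an xauth binary on that server that differs in version from the ones on the servers that work.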

    Read the article

  • What ports does Advantage Database Server need?

    - by asherber
    I have an application which uses ADS and I am attempting to deploy it in a Windows network environment with a rather restrictive firewall. I am having a problem configuring firewall ports appropriately. ADS lives on \\server, and it's listening on port 1234. When \\client tries to connect to \\server\tables, I get Error 6420 (Discovery process failed). When \\client tries to connect to \\server:1234\tables, I get error 6097, bad IP address specified in the connection path. \\server is pingable from \\client, and I can telnet to \\server:1234. If I try to connect from a client machine inside the firewall, either connection path works fine. It seems there must be something else I need to open in the firewall. Any ideas? Thanks, Aaron.

    Edit: I should have specified that the firewall is open to \\server:1234 specifically for TCP traffic. Is UDP involved here in some way?

    Read the article

  • Mounting to external NFS from a KVM VM.

    - by jbfink
    I've got a machine acting as a KVM host and another machine that NFS-exports to that KVM host. I'd like one of the internal VMs on the KVM host to be able to mount the NFS share. I can export to the KVM host IP fine and do a mount, but it doesn't work for the internal VM; the mount fails with "reason given by server: Permission denied". I've already tried to re-export the NFS share from host to VM, but apparently doing two levels of NFS is not a Good Idea. Anyone know how I might get this working?
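
    A hedged guess at the cause: NAT between the VM and the NFS server usually rewrites the source port to something above 1023, and nfsd's default "secure" option rejects requests from non-reserved ports with exactly this permission error. A sketch of an /etc/exports line relaxing that (and covering the VM subnet, shown here as libvirt's default, in case the VMs carry their own addresses):

        # "insecure" = accept requests from source ports >= 1024
        /export   192.168.122.0/24(rw,sync,no_subtree_check,insecure)

    Run exportfs -ra after editing; if the VM is NATed through the host, it may instead be the export for the host's own address that needs the insecure flag.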

    Read the article

  • Can't enable Windows XP file sharing

    - by colemanm
    The "sharing and security" option in the right-click context menu on a folder is no longer there. The File and Print sharing option is "installed" and turned on in the TCP/IP properties, but the ability to share any folders has disappeared, along with all previously shared folders. Where should I check to see what the issue is here? EDIT From comments below: "Server" service is not starting, see comment below for more... Firewall is completely disabled, too. This is XP Pro, Simple File Sharing is turned on. We've discovered that the "Server" Windows service is not starting on bootup for some reason. When attempting manual start, we get "error 2001: specified driver is invalid."

    Read the article

  • FTP in DMZ, TCP Ports for LDAP Auth

    - by sam
    Scenario:

        (outside)---(ASA5510)---(inside)
                        |         - Windows2008 DC
                      (dmz)
                        - Win2008 FTP Server

    Which ports do I need to open from the DMZ to the inside so that FTP users can be authenticated against the inside DC? I have already opened 389 (LDAP), 636 (secure LDAP) and 53 (DNS). But the FTP client always gets stuck after processing the credentials, and the FTP server logs a "logon error" in the event log. The error messages indicate that there could be an issue with closed ports. If I set the ACL to "IP", which means all ports are open, everything works fine.

    Read the article

  • How can I find a computer on my network that is doing mass mailings?

    - by Alex Ciarlill
    I was notified by my ISP that one of my machines is sending out spam. This happened about 3 months ago on a Windows machine running Cygwin that was hacked due to an SSH vulnerability. The hackers set up IIS and SMTP. I cleaned out the machine and all the services are disabled, so I think that machine is okay. I am wondering if there is any other way to identify which machine it could be coming from? The ISP has NO useful information such as source port, destination port, destination IP... nothing. I am running DD-WRT on my router, a Windows 7 PC and a Windows XP PC.
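
    DD-WRT can log exactly this at the router; a hedged sketch, run in the router's shell (rules do not persist across reboots unless saved in the firewall script):

        # log every outbound SMTP connection with its LAN source address
        iptables -I FORWARD -p tcp --dport 25 -j LOG --log-prefix "SMTP-OUT: "
        # then watch the syslog for the offending LAN IP
        logread -f | grep SMTP-OUT

    Whichever internal address shows up making unsolicited port-25 connections is the infected machine; port 587 is worth logging too in case the spammer relays through the submission port.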

    Read the article
