Search Results

Search found 7669 results on 307 pages for 'dealing with clients'.

Page 9 of 307

  • Get OpenVPN client names to resolve through dnsmasq

    - by Fake Name
    I have a pfSense box running as an OpenVPN server. Several remote devices connect through the VPN as tap devices. The VPN itself works: I can reach the remote hardware by looking up the IP assigned to each device on the pfSense router. What I'd like is to resolve the remote devices' addresses via DNS while on the local network. Note that resolution only needs to work in one direction, local network to remote device (they're backup boxes); I don't need the remote devices to resolve anything through the local DNS forwarding agent. The rest of the devices on the network that need to be reachable via DNS report their names during the DHCP process. However, the IP assignment for OpenVPN tap clients, while dynamic (which is why I need DNS), does not seem to use the local DHCP server. How can I have my OpenVPN server add records for its clients to the dnsmasq resolver? Is this setup even reasonable (I'm not familiar with OpenVPN at all)?
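    One possible direction (a sketch, not from the question itself): OpenVPN can invoke a learn-address script whenever it learns, updates, or forgets a client address, and dnsmasq re-reads any addn-hosts file when it receives SIGHUP. A minimal Python hook along those lines follows; the file locations, the dnsmasq pidfile path, and whether pfSense lets you attach such a script are all assumptions.

        #!/usr/bin/env python
        # Hypothetical learn-address hook. Wire it into the OpenVPN server
        # config with:
        #   script-security 2
        #   learn-address /etc/openvpn/learn-address.py
        # and point dnsmasq at the same file with:
        #   addn-hosts=/etc/openvpn/vpn-hosts
        import os, signal, sys

        HOSTS_FILE = '/etc/openvpn/vpn-hosts'   # assumed location
        PID_FILE = '/var/run/dnsmasq.pid'       # assumed location

        op, addr = sys.argv[1], sys.argv[2]
        name = sys.argv[3] if len(sys.argv) > 3 else None  # absent on delete

        # Rewrite the hosts file, dropping any stale entry for this address.
        lines = []
        if os.path.exists(HOSTS_FILE):
            with open(HOSTS_FILE) as f:
                lines = [l for l in f if not l.startswith(addr + ' ')]
        if op in ('add', 'update') and name:
            lines.append('%s %s\n' % (addr, name))
        with open(HOSTS_FILE, 'w') as f:
            f.writelines(lines)

        # dnsmasq re-reads its addn-hosts files on SIGHUP.
        with open(PID_FILE) as f:
            os.kill(int(f.read().strip()), signal.SIGHUP)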

    Read the article

  • XP clients can't copy to network share

    - by chewbacca76
    I have a Windows 2003 domain with a strange problem. One of our file shares is on a 2003 R2 domain controller. XP clients trying to copy files to the share always get the error "Error copying file or folder: filename could not be copied. Path too long", while Windows 7 clients work fine. Nothing unusual shows up in the event log on either the server or the client. It doesn't matter whether I access the share by FQDN or by IP, and the path, including the filename, is shorter than 20 characters, e.g. \\path\share\file.txt. Copying files to other servers works fine, and reading from the share is OK too. It started from one day to the next; one Windows update installed that day (KB2736233) was removed, but nothing changed. Thanks for any tips.

    Read the article

  • Can't enable sharing on clients in my network

    - by nahman
    I installed a Windows 2003 server on my subnet as the domain controller, with DHCP and DNS as well. The clients run Windows XP Pro and Windows 2003 Server. When I log in to a client via the domain, I don't have the option to share folders on the network! I want to share folders this way: right-click on the folder, Properties, Sharing, Share. How can I make that option appear? (If I log in to the computer as the local administrator, I do have the option.) P.S. Please be specific about how to enable it. Thanks a lot :) nahman.

    Read the article

  • How can I load one image over the network to multiple computers on boot?

    - by user754730
    A few years ago I saw this in a company, but I don't know how it was built. There was one computer acting as the server (I don't know whether it ran Windows Server or plain Windows 7) and three other computers as the clients (Windows 7). As soon as the Windows 7 clients started, they all booted the same image over the network (I don't know whether it was the same image file or just the same state) and could be worked on normally. As soon as a machine was shut down, all changes made to the system were erased. How could I build a system like this, so that I maintain one image file, keep it up to date, and feed it to the other machines on my network? Basically, it would look like this:

    Read the article

  • NFS v4, HA Migration, and stale handles on clients

    - by Karl Katzke
    I'm managing a server running NFS v4 with Pacemaker/OpenAIS. NFS is configured to use TCP. When I migrate the NFS server to another node in the Pacemaker cluster, connections from the clients hang, even though the metadata is persisted, and eventually time out after 90 seconds. After those 90 seconds, the old mountpoint becomes stale and the mounted files can no longer be accessed. The 90-second grace period seems to be part of the server configuration, not the client configuration; I see this message on the server:

        kernel: NFSD: starting 90-second grace period

    If I restart the NFS client on the client nodes after I migrate (unmounting and then remounting the share), I don't experience the problem, but connections and file transfers are still interrupted.

    Three questions:

    - What is the 90-second grace period? What is it there for?
    - How can I keep the files from going stale on the clients without remounting after I migrate the NFS server to another node?
    - Is it actually possible to migrate the NFS server without having large file uploads drop?
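    For what it's worth, on a stock Linux nfsd the lease and grace windows are tunable, and shortening them shortens the post-failover stall. The proc file names below are assumptions that vary by kernel version (nfsv4gracetime in particular is missing on older kernels), and the values only take effect if written before nfsd starts, so this is a sketch of what a cluster start script might do, not a drop-in fix.

        # Sketch: shrink the NFSv4 lease/grace windows before nfsd starts.
        # File names are kernel-dependent assumptions.
        LEASE = '/proc/fs/nfsd/nfsv4leasetime'
        GRACE = '/proc/fs/nfsd/nfsv4gracetime'   # not present on older kernels

        def set_if_present(path, seconds):
            try:
                with open(path, 'w') as f:   # only writable while nfsd is stopped
                    f.write(str(seconds))
            except IOError as e:
                print('could not set %s: %s' % (path, e))

        # A 10s window means clients stall ~10s on failover instead of 90s.
        set_if_present(LEASE, 10)
        set_if_present(GRACE, 10)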

    Read the article

  • Asterisk/FreePBX: Allow other Google Talk clients to ring when using motif module

    - by larsks
    I've recently installed FreePBX to act as a link between a SIP soft phone and my Google Talk account. It was easy to set up, and outbound calls work just fine, but I've run into two problems with inbound calls that I'm not sure how to resolve. I'm using an inbound route to forward all calls from Google to my soft phone. (1) If the soft phone is not currently registered, Asterisk answers and immediately generates a fast-busy signal (reporting CHANUNAVAIL in the logs), and the call is lost. (2) If the soft phone is registered, Asterisk "answers" the call before ringing the soft phone, which means that other Google Talk clients never ring (since from their perspective someone has already answered the call). For (1), it seems like I could use the ChanIsAvail() function (or this answer) to prevent Asterisk from answering when the phone isn't registered. However, I'm not sure what to do about (2), because the behavior I want is for Asterisk not to "answer" the call until I answer it on the soft phone. How do I configure Asterisk (ideally within the FreePBX framework) so that I can continue to receive calls at other Google Talk clients in addition to forwarding them to a SIP phone?

    Read the article

  • Sharing large (multi-GB) files with clients

    - by Tim Long
    I wasn't sure if this was the best place for this question, but I think it is squarely in the realm of the IT admin, so that's why I put it here. We need to share large files (several gigabytes) with external clients, and we need a simple way of reliably and automatically publishing those files so that clients can download them. Our organization has Windows desktops and a Windows SBS 2011 server. Sharing from our server is probably suboptimal from the client's perspective because of the low upstream bandwidth of typical ADSL (around 1 Mbps): it would take all day (9 hours for a 4 GB file) for the client to download the file. Uploading to a third-party server is good for the client but painful for us, because we then have to deal with a multi-hour upload. Uploading to a third-party server would be less problematic if it could be made reliable and automatic, e.g. something like a Groove/SharePoint Workspace where we simply drop the file in and wait for it to synchronize, but Groove has a 2 GB limit, which is not big enough. So ideally I'd like a service with the following attributes:

    - Must work for files of at least 5 GB, preferably 10 GB
    - Once the transfer is started, it must be reliable (i.e. not sensitive to disconnections and service outages) and completely automatic
    - Ideally, the sender would get a notification when the transfer completes
    - Has to work with Windows-based systems

    Any suggestions?

    Read the article

  • NFSv3 + ACL: mask is gone on clients

    - by Jorge Suárez de Lis
    I'm sharing an NFS folder among a user group. The default umask on the clients is 0700, and this is a problem because newly created files won't be readable or writable by other users. So I'm using ACLs to force the equivalent of a 0770 umask on the shared folder. This works as expected on the server, but not on the clients.

        server # getfacl /export/proyectos
        getfacl: Removing leading '/' from absolute path names
        # file: export/proyectos
        # owner: root
        # group: root
        user::rwx
        group::rwx
        other::r-x
        default:user::rwx
        default:group::rwx
        default:mask::rwx
        default:other::r-x

        server # getfacl /export/proyectos/innovacion
        getfacl: Removing leading '/' from absolute path names
        # file: export/proyectos/innovacion
        # owner: root
        # group: proyecto-innovacion
        # flags: ss-
        user::rwx
        group::rwx
        mask::rwx
        other::---
        default:user::rwx
        default:group::rwx
        default:mask::rwx
        default:other::---

    As you can see, the default mask (and, on the second directory, a specific mask as well) is applied. I mount the whole share on the client:

        172.16.54.56:/export/proyectos on /proyectos type nfs (rw,noatime,rsize=131072,wsize=131072,acregmin=10,acl,nfsvers=3,addr=172.16.54.56)

    But on the client the mask and default:mask ACLs are gone:

        client $ getfacl /proyectos/
        getfacl: Removing leading '/' from absolute path names
        # file: proyectos/
        # owner: root
        # group: root
        user::rwx
        group::rwx
        other::r-x
        default:user::rwx
        default:group::rwx
        default:other::r-x

        client $ getfacl /proyectos/innovacion
        getfacl: Removing leading '/' from absolute path names
        # file: proyectos/innovacion
        # owner: root
        # group: proyecto-innovacion
        # flags: ss-
        user::rwx
        group::rwx
        other::---
        default:user::rwx
        default:group::rwx
        default:other::---

    The mask and default:mask entries, the only ones I actually set, are missing, so the proposed approach to enforcing the umask won't work for me. Why is this happening?
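    One detail worth knowing when debugging this (a conceptual note, not a fix): per acl(5), once a file carries a POSIX ACL, the group bits of its regular mode double as the ACL mask. So even when a client can't display the mask entry itself, the mask may still be visible, and enforced, through the plain mode bits. A quick check, reusing the path from the question:

        import os, stat

        # On a file with a POSIX ACL, the group slot of st_mode holds the
        # ACL mask rather than the owning group's permission (see acl(5)).
        mode = os.stat('/proyectos/innovacion').st_mode
        mask_bits = (mode & stat.S_IRWXG) >> 3
        print('mask carried in the mode bits: %o' % mask_bits)  # 7 == rwx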

    Read the article

  • Windows clients unable to access Samba share on AD-joined Linux box every 7 days

    - by Hassle2
    The problem: every 7 days, two Windows servers become unable to access an SMB/CIFS share. Access starts working again after a handful of hours.

    The environment:

    - OpenFiler Linux box joined to a 2003 AD domain
    - A foreground app on a Win2003 server accesses the SMB/CIFS share with Windows credentials
    - Another process on a Win2008 server accesses the share via SQL Server, also with Windows credentials
    - The Samba version on the Linux box is 3.4.5; security is set to ADS
    - wbinfo and getent return the expected users and groups
    - It does not look like a double-hop issue, as it is always the same two accounts regardless of the calling user
    - There is a DNS entry for the Linux box in both the forward and reverse lookup zones
    - The Linux box's computer object in Active Directory shows that it was modified at around the same time the two clients started failing to access the share
    - Accessing the share via IP works when access by name does not
    - Rebooting the Windows server takes care of it (it's production, so it has only been restarted once)
    - Restarting smbd, winbind, and nmbd had no effect
    - Error in the Samba log for the client in question: smbd/sesssetup.c:342(reply_spnego_kerberos) Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE!

    The question: does this look like the machine account password is changing (hence the AD object showing the updated modified date), or are the two Windows clients unable to request a new ticket that works against this Linux box?

    Read the article

  • Use a generic driver on clients for a printer shared by CUPS

    - by Daniel
    I know this used to be really easy with CUPS some years ago, but nowadays I'm failing to do the following with the latest CUPS: how do you share a printer without using printer-specific drivers on the clients? The printer is a Samsung ML-2010. The important fact is that it needs a quite specific library for printing, called splix, which is installed on the server and prints well. What causes trouble is using the printer over the network. I found out how to use Avahi under Linux to advertise and discover printers via DNS-SD. But as far as I understand, the new CUPS exposes the internal driver interface on the network, which has two major issues: anybody can fiddle with the driver, and I don't trust any uncommon printer library to be "network secure"; and anyone who wants to use my printer, including guests, needs to install the specific drivers first. I remember the old days, when I could enable "Share this printer" on the CUPS server and all clients would magically detect the printer, just send their job data to the server, and let it do the driver work. From everything I've read, I guess this is related to the changes Apple introduced in CUPS, including dropping the integrated network discovery protocol. If it helps: the server is Ubuntu 12.04 LTS with CUPS 1.5.3, and the client is Arch Linux with current CUPS 1.6.1. On another box running Ubuntu, the printer was at least set up automatically, but that mechanism used the splix library too.

    Read the article

  • IIS7/ASP.NET 4.0 - 404 errors only for external clients

    - by dmcgiv
    We recently moved an ASP.NET 3.5 website to 4.0 (integrated mode), and when we deployed to the client's server (Windows Server 2008 Web Edition) we noticed that some .aspx pages serve 404 errors. What is strange is that (1) the pages exist; (2) if you browse from the server itself, the pages are served as normal, and only external clients get the 404; (3) it's the default 404 error page, not the one configured in the web.config; and (4) it only happens for some .aspx pages, and I've not been able to establish a link between the pages that are not being served externally. We are using a URL rewriter module, which I first thought might be at fault, but then I realised that only some of the failing pages are being rewritten; I've also tested removing the HTTP module, and the problem still persists. Since everything works as expected when logged on to the server, I was thinking it might be some sort of permission issue, but why would it only affect a few pages? I turned on failed request tracing, and the debug files are being generated with the expected 404 error, although at the moment I'm not sure what most of the data means, so I can't decipher what's going on internally. I'd really appreciate some help with this one.

    Read the article

  • Mail server DNS name fails to resolve for Mac clients

    - by Concordus Applications
    We have two internal DNS servers: one on a Linux server box and the other on the router. We set the Linux box as the primary DNS server via DHCP and the router as secondary. We have a few Mac clients that access our internal mail server (hostname "mail" internally). When using IMAP or SMTP against the mail server internally, the Macs will sometimes fail to locate the server. If I use nslookup I can see that "mail" points to the correct IP address and is resolved by the correct DNS server, but if I ping "mail" it fails:

        ~ (bash)$ nslookup mail
        Server:   254.254.254.206
        Address:  254.254.254.206#53

        Name:     mail.example.com
        Address:  254.254.254.205

    Note: I replaced our actual internal IP addresses with 254.254.254.*. If I wait a few minutes (3-5 minutes), the name somehow resolves itself and mail sends successfully. This happens multiple times a day. The /etc/hosts file on the Macs is the default configuration:

        ##
        # Host Database
        #
        # localhost is used to configure the loopback interface
        # when the system is booting. Do not change this entry.
        ##
        127.0.0.1       localhost
        255.255.255.255 broadcasthost
        ::1             localhost
        fe80::1%lo0     localhost

    Is there something about Mac clients I should know to prevent this failed DNS resolution? The client boxes are MacBooks running OS X 10.7.4 with 8 GB RAM and i5 CPUs; the server is Ubuntu 12.04 Server.
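    A diagnostic worth running while the failure is happening (a sketch; the hostnames are the placeholders from the question): nslookup queries the DNS server directly, while ping and ordinary apps go through the system resolver, which on OS X also consults mDNS and scoped resolvers, so the two can disagree. Comparing the short name, the FQDN, and the raw IP through the system resolver narrows down which layer is failing:

        import socket, time

        # Resolve each name through the system resolver (what IMAP/SMTP use),
        # not through DNS directly the way nslookup does.
        for name in ('mail', 'mail.example.com', '254.254.254.205'):
            t = time.time()
            try:
                addr = socket.getaddrinfo(name, None)[0][4][0]
                print('%-20s -> %-15s (%.2fs)' % (name, addr, time.time() - t))
            except socket.gaierror as e:
                print('%-20s -> FAILED %s (%.2fs)' % (name, e, time.time() - t))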

    Read the article

  • Some DHCP clients end up with the wrong DNS server

    - by Nic Waller
    The scenario:

    - DC running Windows Server 2008 R2, providing DNS and DHCP
    - Cisco 1811 router as the gateway
    - 30 Windows XP DHCP clients on the LAN

    The problem: some workstations are spontaneously switching to an incorrect DNS server. Specifically, ipconfig /all shows that they start using the gateway as a DNS server. This happens about 5-10 times a day to various computers, sometimes more than once per day.

    The workaround: repairing the connection on the XP client always fixes the problem, and the correct DNS server address is obtained.

    We lost our main DNS/DHCP machine a week ago and had to bring this one online as a spare; we've been having this issue since then. DHCP leases on both the old and new servers are configured for the "wired" (8-day) duration. There are definitely no other DHCP servers active on the LAN. So far there is no discernible pattern as to which clients will show this problem, or when. When I ran DCDIAG /test:DNS it came back clean, and manual inspection of the DNS zone shows all the records appearing as expected, with no traces of the previous machine in there.

    Update Feb 27: Added screenshots. Here is a screenshot of the DHCP scope options on the 2008 R2 server, and here is a screenshot of ipconfig /all running on a healthy host. I don't have any ailing hosts at the moment, but will grab a screen capture next time it happens.

    Update Feb 28: More screenshots. Here's a screenshot of DHCP and DNS traffic from a healthy client when repairing the local area connection. There's definitely only one server responding, but it does seem strange that the negotiation takes place twice. I'll try to get a similar capture from a sick machine this coming week.

    Update Mar 01: Caught a bad ipconfig. Here's a screenshot of ipconfig /all from a client that had this issue. It says the lease was issued this morning, but it doesn't even have an entry for the secondary DNS server I set up yesterday. Both DNS servers were discovered properly when repairing the connection.

    Update Mar 01: It even got the sysadmin! This issue finally affected my personal workstation this morning. Unfortunately I had just rebooted and wasn't running a packet dump at the time. I set up a secondary server yesterday and have been logging all DNS traffic to it. My machine had not contacted the secondary DNS server in over half an hour, which says to me that it's just spontaneously reverting to the gateway without even failing over to the secondary DNS server first. Today I swapped the order of the DNS servers in DHCP, so the secondary is primary and vice versa. I will update again once I know how that goes.
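    Since the flips are spontaneous, it may help to timestamp them precisely. A small watcher sketch that could run on a few of the XP clients (assuming Python is available on them), polling ipconfig and logging whenever the DNS list changes; the regex only catches the first server per adapter, which is enough to spot a flip to the gateway:

        import re, subprocess, time

        def dns_servers():
            out = subprocess.Popen('ipconfig /all', stdout=subprocess.PIPE,
                                   universal_newlines=True).communicate()[0]
            # Matches lines like "DNS Servers . . . . . : 192.168.1.10";
            # secondary servers on continuation lines are ignored.
            return re.findall(r'DNS Servers[ .]*: ([\d.]+)', out)

        last = None
        while True:
            current = dns_servers()
            if current != last:
                print('%s DNS servers now: %s' % (time.ctime(), current))
                last = current
            time.sleep(30)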

    Read the article

  • Setting up a wireless connection between 2 clients

    - by shadeMe
    I've got a D-Link router connected to my ADSL modem that forms the hub of my home network, and two clients: a desktop with an onboard wireless adaptor and a laptop outfitted with the same. The former runs Windows 7 x64, the latter Windows XP x86. I'd like to set up a wireless connection between the two that would allow me to play games and, to a lesser extent, transfer and stream data. How would I go about doing it?

    Read the article

  • Certain clients (IP range) cannot ping server

    - by Logman
    I just virtualized a Windows 2003 Server SP2 x32 machine that contained our help desk server (Spiceworks) and our antivirus management server (ESET RAC). The host computer originally contained the now-virtualized server: I created the VHD, wiped the system clean, installed Windows 2008 R2 x64 Datacenter, and added the virtualized 2003 guest to the Hyper-V 2008 R2 server. The server runs fine except for certain IP ranges. Local clients in 192.168.180.xxx and 192.168.181.xxx can get updates from the AV server, but NOT clients in 192.168.182.xxx, 192.168.183.xxx, 192.168.184.xxx, etc. I cannot ping the server from any clients except those in the 180. and 181. ranges. I also created two other virtualized servers (Win2008 and Win7 Pro) on the same virtual host, and at first I could not ping those either, until I went to "\Network and Sharing Center\Advanced sharing settings" and turned on file and print sharing; after that I could ping and access those guests. The 2003 server isn't quite the same, but I am sure sharing is on there. When I ping from a client in one of the failing ranges I get this: as you can see, the ping leaves our network. We have two AD/DNS servers (one in the 180. range, the other in the 181. range). Is it DNS? Both AD/DNS servers are Windows 2003, and we plan to upgrade both to 2008 R2 within a month or two, but I need to fix this issue pronto (especially the AV end). By the way, I did rename that 2003 server's (Spiceworks/AV) hostname, and I tried a CNAME, but I do not think that is the problem. EDIT: or could it be because this server existed on this hardware/computer before becoming virtualized?

    Read the article

  • Dealing with external processes

    - by Jesse Aldridge
    I've been working on a GUI app that needs to manage external processes. Working with external processes leads to a lot of issues that can make a programmer's life difficult, and I feel like maintenance on this app is taking an unacceptably long time. I've been trying to list the things that make working with external processes hard, so that I can come up with ways of mitigating the pain. This kind of turned into a rant, which I thought I'd post here in order to get some feedback and to provide some guidance to anybody thinking about sailing into these very murky waters. Here's what I've got so far:

    1. Output from the child can get mixed up with output from the parent. This can make both outputs misleading and hard to read, and it can be hard to tell what came from where. It becomes harder to figure out what's going on when things are asynchronous. Here's a contrived example:

        import textwrap, os, time
        from subprocess import Popen

        test_path = 'test_file.py'
        with open(test_path, 'w') as file:
            file.write(textwrap.dedent('''
                import time
                for i in range(3):
                    print 'Hello %i' % i
                    time.sleep(1)'''))
        proc = Popen('python -B "%s"' % test_path)
        for i in range(3):
            print 'Hello %i' % i
            time.sleep(1)
        os.remove(test_path)

    I guess I could have the child process write its output to a file, but it can be annoying to have to open up a file every time I want to see the result of a print statement. If I have the code for the child process I could add a label, something like print 'child: Hello %i', but it's annoying to do that for every print, it adds noise to the output, and of course I can't do it if I don't have access to the code. I could manually manage the process output, but then you open up a huge can of worms with threads and polling and stuff like that. A simple solution is to treat processes like synchronous functions, that is, no further code executes until the process completes. In other words, make the process block. But that doesn't work if you're building a GUI app, which brings me to the next problem...

    2. Blocking processes cause the GUI to become unresponsive.

        import textwrap, sys, os
        from subprocess import Popen
        from PyQt4.QtGui import *
        from PyQt4.QtCore import *

        test_path = 'test_file.py'
        with open(test_path, 'w') as file:
            file.write(textwrap.dedent('''
                import time
                for i in range(3):
                    print 'Hello %i' % i
                    time.sleep(1)'''))

        app = QApplication(sys.argv)
        button = QPushButton('Launch process')
        def launch_proc():
            # Can't move the window until the process completes
            proc = Popen('python -B "%s"' % test_path)
            proc.communicate()
        button.connect(button, SIGNAL('clicked()'), launch_proc)
        button.show()
        app.exec_()
        os.remove(test_path)

    Qt provides a process wrapper of its own called QProcess, which can help with this: you can connect functions to signals to capture output relatively easily. This is what I'm currently using, but I'm finding that all these signals behave suspiciously like goto statements and can lead to spaghetti code. I think I want to get sort-of blocking behavior by having the 'finished' signal from QProcess call a function containing all the code that comes after the process call. I think that should work, but I'm still a bit fuzzy on the details...

    3. Stack traces get interrupted when you go from the child process back to the parent process. If a normal function screws up, you get a nice complete stack trace with filenames and line numbers. If a subprocess screws up, you'll be lucky to get any output at all. You end up having to do a lot more detective work every time something goes wrong.

    4. Speaking of which, output has a way of disappearing when dealing with external processes. For example, if you run something via the Windows 'cmd' command, the console will pop up, execute the code, and then disappear before you have a chance to see the output; you have to pass the /k flag to make it stick around. Similar issues seem to crop up all the time. I suppose problems 3 and 4 have the same root cause: no exception handling. Exception handling is meant to be used with functions; it doesn't work with processes. Maybe there's some way to get something like exception handling for processes? I guess that's what stderr is for, but dealing with two different streams can be annoying in itself. Maybe I should look into this more...

    5. Processes can hang and stick around in the background without you realizing it. So you end up yelling at your computer because it's going so slow, until you finally bring up the task manager and see 30 instances of the same process hanging out in the background. Hanging background processes can also interfere with other instances of the process in various fun ways, such as causing permissions errors by holding a handle to a file or something like that. It seems like an easy solution would be to have the parent process kill the child on exit if the child didn't close itself. But if the parent process crashes, the cleanup code might never run and the child can be left hanging. Also, if the parent waits for the child to complete, and the child is in an infinite loop or something, you can end up with two hanging processes. This can tie in with problem 2 for extra fun, causing your GUI to stop responding entirely and forcing you to kill everything with the task manager.

    6. F***ing quotes. Parameters often need to be passed to processes, and this is a headache in itself, especially if you're dealing with file paths like 'C:/My Documents/whatever/'. If you don't have quotes, the string will often be split at the space and interpreted as two arguments. If you need nested quotes you can use ' and ", but if you need more than two layers of quoting, you have to do some nasty escaping, for example: "cmd /k 'python \'path 1\' \'path 2\''". A good solution to this problem is passing parameters as a list rather than as a single string; subprocess allows you to do this.

    7. You can't easily return data from a subprocess. You can use stdout, of course, but what if you want to throw a print in there for debugging purposes? That's going to screw up the parent if it's expecting output formatted a certain way. With functions you can print one string and return another, and everything works just fine.

    8. Obscure command-line flags and a crappy terminal-based help system. These are problems I often run into when using OS-level apps. Like the /k flag I mentioned, for holding a cmd window open: whose idea was that? Unix apps don't tend to be much friendlier in this regard. Hopefully you can use Google or Stack Overflow to find the answer you need, but if not, you've got a lot of boring reading and frustrating trial and error to do.

    9. External factors. This one's kind of fuzzy, but when you leave the relatively sheltered harbor of your own scripts to deal with external processes, you find yourself having to deal with the "outside world" to a much greater extent, and that's a scary place. All sorts of things can go wrong. Just to give a random example: the cwd in which a process is run can modify its behavior.

    There are probably other issues, but those are the ones I've written down so far. Any other snags you'd like to add? Any suggestions for dealing with these problems?
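    A minimal sketch combining two of the mitigations touched on above: list-form arguments (sidestepping the quoting problem in item 6) and labeled child output (mitigating the mixed-output problem in item 1). The script path is a made-up example:

        import subprocess

        # List-form argv: the path with spaces arrives as one argument,
        # no quoting or escaping needed.
        proc = subprocess.Popen(
            ['python', '-B', 'C:/My Documents/whatever/script.py'],
            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
            universal_newlines=True)
        out, err = proc.communicate()

        # Label each child line so it can't be mistaken for parent output.
        for line in out.splitlines():
            print('child: ' + line)
        if proc.returncode != 0:
            print('child failed (%d): %s' % (proc.returncode, err.strip()))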

    Read the article

  • VPN clients behind a proxy server

    - by Kumar P
    I want to set up a proxy server for my Windows XP desktops; we currently use Windows 2003 as the server, and the client machines use VPN clients for our work. I installed and ran FreeProxy for proxy connections, but after I installed the proxy I can't connect to the VPN. What can I do to set up a proxy server that works alongside the VPN connection? If it means changing the server from Windows to Linux, I am ready for that too.

    Read the article

  • OpenWRT LAN clients don't see each other

    - by Valentin Galea
    I'm a beginner at using OpenWRT, but I'm loving it already. Everything runs fine (internet, wifi) on all clients except for one thing: the computers in my LAN don't see each other. Pings to or between them give "destination host unreachable". This happens whether I use the machines' IPs or their assigned hostnames. I'm running Backfire 10.03.1 with default settings everywhere, except of course for the ISP-specific settings on the WAN. What's going on?

    Read the article

  • System Centre Essentials 2007 Clients Losing Status

    - by David Collie
    Clients that have been reporting their state fine suddenly start reporting "Windows 0.0" for their OS and "Not yet contacted" for their last-contacted date, even though these values were previously present and correct. Any idea why this might be happening? I've spent a long time playing with wuauclt etc., and just when I think I've fixed the problem, it comes back again! Thanks.

    Read the article

  • Automount of AFP shares on Lion clients from OS X Snow Leopard server - not working right

    - by Paul
    We recently upgraded our Snow Leopard clients to Lion. AFP shares are set up as login items. The network shares on some of the machines now take a long time (10 minutes) to come up. Several machines that were upgraded are acting normally, and three machines with clean installs of Lion (not upgraded) are also working normally. Is there a network file causing a conflict on some of these client machines? Thanks, Paul

    Read the article
