Search Results

Search found 21090 results on 844 pages for 'webservices client'.


  • How to manage configuration and software installations of non-domain Windows XP machines?

    - by Digi
    I have a large set of unattended Windows XP machines that are not connected to a domain or even to each other. I am struggling to find any tools that let me deal with all of them from one application. I am hoping to find software where I can install a client on each machine and have it essentially proxy out configuration information and possibly commands (install, uninstall, stop service, etc.) across the whole network. The closest I've come is Nagios and its client, but it cannot be used to push files or run commands remotely. Any suggestions?

    Read the article

  • How can I unify my email, calendar and tasks? (2 Exchange accounts + 1 Gmail)

    - by Assaf Stone
    This is my situation: I work as a consultant, and thus work out of multiple computers:
    - my work laptop
    - a desktop at my primary client
    - my desktop at home
    - an Android smartphone
    - an Android tablet
    Likewise, I have multiple accounts:
    - a Microsoft Exchange (2010 AFAIK) account
    - a Microsoft Exchange (2007 AFAIK) account
    - a Gmail account
    The most important thing I need is the ability to have events in one calendar affect the free/busy status of all other accounts (so that if I am busy on Monday 9am with an event from my employer's account, that time will show as busy in my client's account and in the Gmail account). The second thing I need is a unified view of all of my accounts' info: appointments, email, tasks, and contacts (in that order of importance). I've already tried Outlook synchronization tools such as gSyncit to sync both Exchange accounts with Gmail, but this creates a mess when updating appointments (deleted appointments sometimes return, timestamps revert). Is there perhaps some way to at least synchronize the free/busy state, so that all of my calendar apps and accounts will look there to see whether I can be invited? Just solving that would be well worth my while. Thanks, Assaf

    Read the article

  • How to properly uninstall/reinstall Ubuntu One on Windows XP?

    - by user73303
    I had previously installed an Ubuntu One client as a test on a Windows XP machine. Now I wanted to switch the client to a production account, but I had problems changing the email address, so I decided to do a reinstall.
    - Ran uninstall.
    - Downloaded ubuntuone-3.0.2-windows-installer.exe. It downloads and goes through the unpacking/install; strangely, some of the messages say "updating", as if it were replacing something that was already there.
    - I do not get the setup/sign-in screen. There are no ubuntu* processes running.
    - The Program Files/ubuntuone directory exists with data and dist folders. The U icon is on the desktop, pointing at ubuntuone/dist/ubuntuone-control-panel-qt.exe, but this does not run.
    - Ran uninstall again, deleted the Program Files/ubuntuone directory, removed any Ubuntu One entries from the registry, rebooted.
    - Downloaded and ran the installer again: exactly the same as above.
    How can I uninstall Ubuntu One to get a clean reinstall? Or force the install to continue after downloading/unpacking?

    Read the article

  • How do I use a URL path instead of a file path in an Open File dialog in Mac OS X or Chromium OS?

    - by Chris
    In Windows 7 (and perhaps earlier), the default "Open File" dialog box allows you to type a full URL into the "File name" field as if it were a file path, e.g. "http://www.example.com/pic.gif" instead of "C:/windows/pictures/pic.gif". When uploading a file to a website on the client side - say, an image - this allows the client to upload a picture located on a server accessible via the URL, instead of downloading the image, saving it locally, and then referencing the local copy in the "Open File" dialog. It's a great option for Windows users. I have three separate questions:
    1. What is this procedure formally called? How do I describe this succinctly so that my searches for more information are fruitful?
    2. Can something similar be done in Mac OS X, Chromium OS, or a Linux environment?
    3. If so, how?
    Thanks!

    Read the article

  • Is Samba "remote browse sync" possible across OpenVPN tunnel?

    - by John Reynolds
    I'm connecting 2 TomatoUSB (Shibby build on WNR3500L v2) routers with an OpenVPN routed connection:

        -----------------------                -----------------------
        | Router 1, subnet 20 |  <--tunnel-->  | Router 2, subnet 21 |
        -----------------------                -----------------------

    Router 1 is the OpenVPN server and Router 2 is a client. Clients attached to the routers on both subnets can ping clients on the other subnet, so the tunnel and routing work. I've enabled file sharing on both, in order to get their Samba WINS servers running. Is it possible to get name resolution across the tunnel? I've tried "remote browse sync = 192.168.21.1" in /etc/smb.conf on the server side, to no avail. I also tried using the IP address that the client gets from the OpenVPN address pool (usually 10.8.0.something), but still no joy.
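
    A minimal smb.conf sketch for the server-side router, assuming both routers run Samba's nmbd and share the workgroup name WORKGROUP (the workgroup name and broadcast address below are assumptions): "remote announce" pushes the local browse list toward the remote subnet, and "remote browse sync" usually wants the remote subnet's broadcast address or its local master browser rather than the router IP itself.

        [global]
            workgroup = WORKGROUP
            wins support = yes
            remote browse sync = 192.168.21.255
            remote announce = 192.168.21.255/WORKGROUP

    Note that both directives are carried in UDP datagrams, so they only help across the tunnel if the OpenVPN routing actually delivers directed broadcasts to the remote subnet; pointing them at the specific IP of the remote master browser avoids that dependency.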

    Read the article

  • How do I daemonify my daemon?

    - by jonobacon
    As part of the Ubuntu Accomplishments system I have a daemon that runs, as well as a client that connects to it. The daemon is written in Python (using Twisted) and provides a D-Bus service and a means of processing requests from the clients. Right now the daemon is just a program I run before I run the client; it sets up the D-Bus service and provides an API that can be used by the clients. I want to transform this into something that can be installed and run as a service for the user's session (e.g. starting on boot), with a means to start and stop it, etc. The problem is, I am not sure what I need to do to properly daemonify it so it can run as this service, so I wanted to ask if others can provide some guidance. Some things I need to ask:
    - How can I treat it as a service that is run for the current user's session (not a system service right now)?
    - How do I ensure I can start, stop, and restart this session service?
    - When packaging this, how do I ensure that it installs as a service for the user's session and is started on login, etc.?
    In responding, if you can point me to specific examples or solutions I need to implement, that would be helpful. :-) Thanks! Jono
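
    Two session-level mechanisms cover this on a 2012-era Ubuntu desktop; the file names and daemon path below are hypothetical. A D-Bus service activation file lets the session bus start the daemon on demand, the first time a client calls it:

        # /usr/share/dbus-1/services/org.ubuntu.accomplishments.service (hypothetical name)
        [D-BUS Service]
        Name=org.ubuntu.accomplishments
        Exec=/usr/bin/accomplishments-daemon

    Alternatively (or additionally), an XDG autostart entry shipped by the package starts it at login:

        # /etc/xdg/autostart/accomplishments-daemon.desktop (hypothetical name)
        [Desktop Entry]
        Type=Application
        Name=Accomplishments Daemon
        Exec=/usr/bin/accomplishments-daemon
        NoDisplay=true

    With D-Bus activation, "stop and restart" can be as simple as the daemon exposing a Quit method over its D-Bus interface; the bus relaunches it on the next client call.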

    Read the article

  • Why does this service refuse to start on Windows Server 2003?

    - by PenguinCoder
    We have a Windows 2003 server with Cebos MQ1 (ver. 7 and ver. GRI) products installed that have been operational for years. After installing the Microsoft 2010 C++ Redistributable package needed for other development, the MQ1 GRI service now fails to start. Event logs showed that two additional updates (.NET 4 and the 2010 C++ Redistributable SP2) were installed by the redistributable as well. As soon as we discovered the MQ1 service was not starting properly, we removed these three installed packages. However, the service still does not start; the dialog that pops up states "The service started then stopped." Event logs when we attempt to start the service show nothing; i.e. no errors, crashes, failures, or other information related to this service.
    Executing MQ1Serv.exe directly reports "Missing command line operation, must specify install, uninstall and company abbreviation." Running "sc query MQ1Service(GRI)" shows a clean exit, with a Win32ExitCode of 0x0. Attempting to reinstall the client or server software gives an error of "The procedure entry point ReInitializeCriticalSection could not be located in the dynamic link library KERNEL32.dll." at the "Registering Libraries" stage. At this point, further research stated that the required function is in URL.dll and advised verifying that the library is not corrupted. Running sfc /scannow on the server replaced a few DLLs, including URL.DLL, with versions from 2005. This actually broke other applications, which required a reinstall (one of them being IE 7). After the reinstall and updates, url.dll is version 7.0.5730.13 (2009) and Kernel32.dll is version 5.2.3790.4480 (2009).
    The MQ1 GRI service still will not start, giving the same error as before: "Service started then stopped". Running a disassembler on Kernel32.dll and Url.dll shows no functions named ReinitializeCriticalSection. Attempting to reinstall the MQ1 client and server and start the service again fails once more. However, setting the compatibility mode on the MQ1 client install exe to "Windows 95" actually gets the program to install. Setting the compatibility mode on the MQ1 server service does not enable it to start. I have been researching this problem for nearly a week and, besides the advice to scan and replace url.dll, have come to no successful conclusions. This service was operational prior to the 2010 C++ install, without any additional parameters or settings. Removing the C++ install and all service packs/updates it installed silently still does not correct the issue of the MQ1 GRI service not starting.
    Q: Has anyone else run into this or a similar issue while attempting to get a service initialized? What have I overlooked, or what else can I try in order to get this service started?
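
    A hedged diagnostic sketch rather than a fix: dumpbin (shipped with Visual Studio) can confirm whether a given DLL actually exports the entry point the installer is asking for, and "sc qc" shows which binary, account and dependencies the service is configured with. The service name is copied from the question; adjust quoting as needed.

        rem Does either DLL export anything like the missing entry point?
        dumpbin /exports C:\Windows\system32\kernel32.dll | findstr /i CriticalSection
        dumpbin /exports C:\Windows\system32\url.dll | findstr /i CriticalSection

        rem What binary and dependencies is the service configured with?
        sc qc "MQ1Service(GRI)"
        sc query "MQ1Service(GRI)"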

    Read the article

  • Apache certificates for some URLs not working

    - by Vegaasen
    We are having a rather strange problem with an Apache installation. Here is a short summary: currently I'm setting up Apache with HTTPS and server certificates. This is fairly easy and works straight out of the box, as expected. This is the configuration for this setup:

        Listen 443
        SSLEngine on
        SSLCertificateFile "/progs/apache/ssl/example-site.no.pem"
        SSLCertificateKeyFile "/progs/apache/ssl/example-site.no.key"
        SSLCACertificateFile "/progs/apache/ssl/ca/example_root.pem"
        SSLCADNRequestFile "/progs/apache/ssl/ca/example_intermediate.pem"
        SSLVerifyClient none
        SSLVerifyDepth 3
        SSLOptions +StdEnvVars +ExportCertData
        RequestHeader set ssl-ClientCert-Subject-CN "%{SSL_CLIENT_S_DN}s"
        RewriteEngine On
        ProxyPreserveHost On
        ProxyRequests On
        SSLProxyEngine On
        ...
        <LocationMatch /secureStuff/$>
            SSLVerifyClient require
            Order deny,allow
            Allow from All
        </LocationMatch>
        ...
        <Proxy balancer://exBalancer>
            Header add Set-Cookie "EX_ROUTE=EB.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
            BalancerMember http://10.0.0.1:7200 route=ee1 retry=300 flushpackets=off keepalive=on
            BalancerMember http://10.0.0.2:7200 route=ee2 retry=300 flushpackets=off keepalive=on status=+H
            ProxySet stickysession=EX_ROUTE scolonpathdelim=Off timeout=10 nofailover=off failonstatus=505 maxattempts=1 lbmethod=bybusyness
            Order deny,allow
            Allow from all
        </Proxy>
        RewriteCond %{REQUEST_URI} !^/index.html [NC]
        RewriteRule ^/(.*)$ balancer://exBalancer/$1 [P,NC]
        ProxyPassReverse / balancer://exBalancer/
        Header edit Set-Cookie "(.*)" "$1;HttpsOnly"
        ...

    So, everything works fine and as expected for all of the pages that are not part of the LocationMatch directive. When requesting something that matches the LocationMatch directive, I'm asked for a certificate (hence the "SSLVerifyClient require" attribute), and my browser offers all the correct certificates based on the root/intermediate chain. After choosing a certificate and clicking "OK", this is what pops up in the Apache logs, and it just spams the logs:

        [ssl:info] [pid 9530:tid 25] [client :43357] AH01998: Connection closed to child 86 with abortive shutdown (
        [Thu Oct 11 09:27:36.221876 2012] [ssl:debug] [pid 9530:tid 25] ssl_engine_io.c(1171): (70014)End of file found: [client 10.235.128.55:45846] AH02007: SSL handshake interrupted by system [Hint: Stop button pressed in browser?!]

    What is happening here? I can see this configuration working on my local machine, but not on one of our servers. There are no configuration differences between the servers, only minor application-level changes. I've tried the following:
    1) Removing CA-certificate checking (works)
    2) Adding the required CA certificate for the whole site (works)
    3) Adding "SSLVerifyClient optional" (does not work)
    4) ++
    Server/application information:
    Local (working):
    - OpenSSL 1.0.1x
    - Apache 2.4.3
    - Ubuntu
    - mpm: event
    - every configuration option turned on
    Server (failing):
    - OpenSSL 0.9.8e
    - Apache 2.4.2
    - SunOS
    - mpm: worker
    - every configuration option turned on
    Please let me know if more information is needed; I'll provide it instantly. Brief sum-up:
    - Running Apache 2.4
    - Server certificates work just fine
    - Client certificates for some Locations do not work; they fail with the errors above
    PS: Could it be related to the OpenSSL version and the "renegotiation" issues around TLS/SSLv3?
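
    The PS is a plausible lead. A per-location "SSLVerifyClient require" forces a mid-connection TLS renegotiation, and OpenSSL 0.9.8e predates the RFC 5746 secure renegotiation that modern browsers insist on, which fits an abortive shutdown right after the certificate prompt. A hedged experiment, not a confirmed fix (the buffer size is illustrative):

        <LocationMatch /secureStuff/$>
            SSLVerifyClient require
            # Needed if request bodies can arrive during renegotiation:
            SSLRenegBufferSize 1048576
        </LocationMatch>

    mod_ssl only supports secure renegotiation when built against OpenSSL 0.9.8m or later, so the realistic options on the failing SunOS box are upgrading OpenSSL, or moving the client-certificate requirement onto a dedicated VirtualHost that requires it for the whole host (no mid-connection renegotiation needed).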

    Read the article

  • Remote Desktop Encryption

    - by Kumar
    My client is RDP 6.1 (on Windows XP SP3) and the server is Windows Server 2003. I have installed an SSL certificate on the server for RDP. In the RDP settings (General tab), the encryption method is set to SSL/TLS 1.0 and the encryption level is set to "Client Compatible". I have the following questions:
    1. In this case, is it guaranteed that all communication is encrypted even when I remote login to the server? I mean, is the password encrypted?
    2. Does RDP always use some kind of encryption, even if there is no SSL certificate installed on the server? In this case I do not see the security lock in the connection bar.
    3. When I set the encryption level to "High", then I see the security lock. I believe that communication in both cases will be encrypted. Is that true?
    Thanks in advance, Kumar

    Read the article

  • Input prediction and server re-simulation

    - by Lope
    I have read plenty of articles about multiplayer principles and I have a basic client-server system set up. There is, however, one thing I am not clear on. When a player enters input, it is sent to the server, which steps back in time to check what should have happened at the time of that input, and then re-simulates the world. So far everything's clear. All the articles took shooting as an example, because it is easy to explain and pretty straightforward, but I believe movement is more complicated. Imagine the following situation: two players move towards each other.

        A------<------B

    Player A stops halfway towards the collision point, but there is a lag spike, so the command does not arrive on the server for a second or so. The state of the world on the server (and on the other clients as well) at the time the input arrives is this:

        [1]: -------AB-------

    The command arrives, we go back in time and re-simulate the world, and the result is this:

        [2]: ---AB-----------

    Player A sees situation [2], which is correct, but to everyone else the player is suddenly teleported from the position in [1] (center) to the position in [2]. Is this how this is supposed to work? The point of client prediction is to give the lagged player the feeling that everything is smooth, not to ruin the experience for the other players. The alternative is to discard the timestamp on the player's input and handle it when it arrives on the server, without going back in time. This, however, creates even more severe problems for the lagged player (even if he is lagging just a bit).

    Read the article

  • PuTTY automatically supply password

    - by Kyle Cronin
    I have a situation where I need to have PuTTY (or another SSH client for Windows) automatically log into another machine via SSH. I realize that this isn't a good idea security-wise, but unfortunately I'm constrained by the limitations of both the client and the server. The best solution would be a shortcut or script on the desktop that, when double-clicked, connects to the server and automatically logs in. Can I do this with PuTTY? I am willing to explore public key authentication, but I'm not sure where the PuTTY key resides or how to copy it to the server, as the app starts automatically upon login.
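
    Both routes exist in stock PuTTY; the paths, hostname and key file below are placeholders. The -pw flag stores the password in plain text in the shortcut, which is only defensible given the stated constraints; the key-based variant is the cleaner one.

        rem Shortcut target, password-based (password in plain text!):
        "C:\Program Files\PuTTY\putty.exe" -ssh user@server.example.com -pw secret

        rem Key-based: generate a key pair with PuTTYgen, paste the public key
        rem into ~/.ssh/authorized_keys on the server, then point PuTTY at the
        rem private .ppk file (or load it into Pageant once per session):
        "C:\Program Files\PuTTY\putty.exe" -ssh -i C:\keys\mykey.ppk user@server.example.com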

    Read the article

  • Tomcat and HTTPS connect timeout (local Proxy resolves it)

    - by smas
    I have a web application with web services running on Tomcat. I've noticed that all web service calls that connect to HTTPS URLs time out. I run this app on my localhost inside my company's network. When I redirect all my connections through Fiddler (a local proxy), everything works correctly, but I don't want to run Fiddler all the time.

        my computer -> [FIDDLER local proxy] -> [remote proxy]   // WORKS
        my computer -> [remote proxy]                            // timeout

    How can I increase Tomcat's logging to get more technical detail than just "timeout"? Is there any other way to get more information about what blocks the HTTPS URLs?
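
    The Fiddler clue suggests the JVM is not picking up the corporate proxy (Fiddler works because it registers itself as the system proxy). A sketch under that assumption, with hypothetical host/port values: the standard JVM proxy properties and the SSL debug switch go into Tomcat's startup options, e.g. in bin/setenv.sh:

        export CATALINA_OPTS="$CATALINA_OPTS \
          -Dhttp.proxyHost=proxy.corp.example  -Dhttp.proxyPort=8080 \
          -Dhttps.proxyHost=proxy.corp.example -Dhttps.proxyPort=8080 \
          -Djavax.net.debug=ssl,handshake"

    -Djavax.net.debug prints the full TLS handshake to catalina.out, which shows whether the connection dies at TCP connect (firewall/proxy) or during the handshake itself.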

    Read the article

  • Slow WLAN file transfer between server and tablet

    - by user266985
    My file server is running Ubuntu 12.04 and I'm sharing files from it over Samba. It is connected via gigabit ethernet. My desktop, running Windows 8.1, is also connected via gigabit ethernet. I can transfer files between the two and completely saturate that gigabit pipe. However, I just got a Surface Pro 2, and I'm trying to stream HD movies from my server to the device over WiFi. For some reason, I can't break much past 1.5 MB/s transferring files over the network. I've tried streaming through XBMC and a standard file copy; no difference. To add to the confusion, if I connect to my guest network and then use my VPN server (installed on the router) to access the file server, I get around 3.2 MB/s. I've been running diagnostics to determine the root cause and I think I've found it, but I have no idea what is causing it or how to fix it.

    Router: Asus RT-N66U
    Surface Pro 2 network card: Marvell Avastar 350N (driver 19/09/2013 v14.69.24044.150)
    inSSIDer: link score 100, co-channels 0, overlapping 0, 5 GHz network channel 48+44

    iperf, file server as server, Surface Pro 2 as client - TCP performance acceptable:

        ------------------------------------------------------------
        Server listening on TCP port 5001
        TCP window size: 85.3 KByte (default)
        ------------------------------------------------------------
        [  4] local 192.168.0.90 port 5001 connected with 192.168.0.56 port 57367
        [ ID] Interval       Transfer     Bandwidth
        [  4]  0.0- 1.0 sec  10.1 MBytes  84.7 Mbits/sec
        [  4]  1.0- 2.0 sec  10.4 MBytes  87.6 Mbits/sec
        [  4]  2.0- 3.0 sec  10.6 MBytes  88.8 Mbits/sec
        [  4]  3.0- 4.0 sec  10.7 MBytes  89.5 Mbits/sec
        [  4]  4.0- 5.0 sec  10.1 MBytes  84.4 Mbits/sec
        [  4]  5.0- 6.0 sec  10.2 MBytes  85.8 Mbits/sec
        [  4]  6.0- 7.0 sec  7.04 MBytes  59.1 Mbits/sec
        [  4]  7.0- 8.0 sec  10.8 MBytes  90.2 Mbits/sec
        [  4]  8.0- 9.0 sec  10.6 MBytes  89.1 Mbits/sec
        [  4]  9.0-10.0 sec  8.62 MBytes  72.3 Mbits/sec
        [  4]  0.0-10.0 sec  99.2 MBytes  83.1 Mbits/sec

    iperf, Surface Pro 2 as server, file server as client - performance poor:

        ------------------------------------------------------------
        Client connecting to 192.168.0.56, TCP port 5001
        TCP window size: 22.9 KByte (default)
        ------------------------------------------------------------
        [  3] local 192.168.0.90 port 40233 connected with 192.168.0.56 port 5001
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0- 1.0 sec  1.50 MBytes  12.6 Mbits/sec
        [  3]  1.0- 2.0 sec  1.50 MBytes  12.6 Mbits/sec
        [  3]  2.0- 3.0 sec  1.50 MBytes  12.6 Mbits/sec
        [  3]  3.0- 4.0 sec  1.25 MBytes  10.5 Mbits/sec
        [  3]  4.0- 5.0 sec  1.62 MBytes  13.6 Mbits/sec
        [  3]  5.0- 6.0 sec  1.50 MBytes  12.6 Mbits/sec
        [  3]  6.0- 7.0 sec  1.38 MBytes  11.5 Mbits/sec
        [  3]  7.0- 8.0 sec  1.50 MBytes  12.6 Mbits/sec
        [  3]  8.0- 9.0 sec  1.50 MBytes  12.6 Mbits/sec
        [  3]  9.0-10.0 sec  1.62 MBytes  13.6 Mbits/sec
        [  3]  0.0-10.1 sec  15.0 MBytes  12.4 Mbits/sec

    For some reason one direction gets capped and I haven't got a clue why. Any suggestions?
    Edit: My link speed is reported as 270 Mbps by Windows. I'm less than two metres from the router with a clear line of sight.

    Read the article

  • I try to access an NFS mount via FTP. It works, but the FTP directory listing is very slow

    - by W0bble
    I mount an NFS share using this command:

        mount -o rsize=8192,wsize=8192,timeo=14,intr serverip:/directory /mnt/directory

    The mount appears on the client as expected, and a command like "ls -a" works pretty fast on the NFS mount. But when I try to list the mounted directory via FTP, it gets very, very slow (1.250 bytes in 160,39s (0,01KB/s)). Surprisingly, downloading files via FTP from the NFS share works at normal speed. I have tested several values for the rsize and wsize parameters with no success. Both client and server are running Debian squeeze and NFSv4.

    Read the article

  • Apache mod_wsgi elegant clustering method

    - by Dr I
    I'm currently trying to build a scalable infrastructure for my Python web servers; in other words, I'm trying to find the most elegant way to build a scalable cluster to host all my Python web services. For now, I'm using these servers:
    - 1 x PuppetMaster to deploy my servers
    - 2 x Apache reverse-proxy front-end servers
    - 1 x Apache HTTPd server which hosts the Python WSGI applications, bound using mod_wsgi
    - 4 x MongoDB clustered servers
    Everything is fine concerning the reverse proxies and the DB back end: I'm able to easily add a new reverse proxy or a new DB node. My problem is the Python web server. I thought of just provisioning a new node with exactly the same configuration and an rsync replication between the two nodes, but that's not really useful in terms of deployment for my developers, etc. So if you have a solution which is as efficient and elegant as a Tomcat cluster, I'll be really happy to hear it ;-)
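
    One hedged sketch of the usual pattern: duplicate the WSGI node (Puppet already being in place makes the provisioning side easy) and let the existing front-ends balance across the copies; the hostnames below are placeholders. Deployments then go through Puppet rather than rsync between nodes.

        <Proxy balancer://wsgicluster>
            BalancerMember http://wsgi-node1.internal:80 retry=300
            BalancerMember http://wsgi-node2.internal:80 retry=300
            ProxySet lbmethod=bybusyness
        </Proxy>
        ProxyPass        / balancer://wsgicluster/
        ProxyPassReverse / balancer://wsgicluster/

    If sessions matter, add a stickysession cookie as with any mod_proxy_balancer setup, or keep the WSGI tier stateless and store session state in MongoDB.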

    Read the article

  • How to build a good service layer in ASP.NET?

    - by Swippen
    I have looked through some questions and technologies for building a good service layer, but I have some questions regarding this that I need help with. First, some information about our requirements. We currently have a number of web applications that talk to each other in a spiderweb-looking way (all talking to each other in a confusing way via web services and database data). We want to change this so that all applications go through a service layer, where we can work more with caching, encapsulate common functionality, and more. We also want this layer to have a Web API so that third-party clients can consume information from the service.
    The problem I see is that if we build the service layer with, say, MVC4 Web API, don't the applications then need to communicate with each other through the Web API, meaning we have to construct URLs and consume JSON/XML? That does not sound very efficient. I assume a better method would be working with entities and WCF to communicate between the applications, but then we might lose the Web API magic? So the question is whether there is a way to consume a service layer both as a Web API (JSON/XML) and as a more back-end service layer with entities. If we are forced to build two different service layers, we might have to duplicate some functionality, among other bad things. I hope the question is clear enough; please ask if you need any more information.

    Read the article

  • Looking for a 'WinHlp32.exe compatible' replacement for free redistribution under Vista and Windows 7

    - by richardboon
    Our software installs a package of legacy software for the client; some of it includes old .hlp files from a 3rd-party vendor, which require WinHlp32.exe (note: we have no legal right to modify the .hlp files). Those clients may only have CD/DVD and might not have internet access, etc. So I need a free 'WinHlp32.exe compatible' replacement for our redistribution under Vista and Windows 7. Background of the problem:
    - Microsoft stopped including the 32-bit Help file viewer in Windows releases beginning with Windows Vista and Windows Server 2008.
    - Starting with the release of Windows Vista and Windows Server 2008, third-party software developers are no longer authorized to redistribute WinHlp32.exe with their programs.
    http://support.microsoft.com/kb/917607

    Read the article

  • RSync over SSH hangs and fails with timeout

    - by tx2
    Client: Gentoo, GCC 4.3.4, rsync 3.0.9
    Server: Ubuntu 10.04.4 LTS, rsync 3.0.7
    Client and server are connected through the Internet, at about 2 Mbps. Ping is OK. rsync, called on any files in any direction, hangs on a random file and then, after a timeout, fails with:

        [sender] io timeout after 30 seconds -- exiting
        rsync error: timeout in data send/receive (code 30) at io.c(140) [sender=3.0.9]
        [sender] _exit_cleanup(code=30, file=io.c, line=140): about to call exit(30)

    About 1 in 10 tries passes correctly. I've tried adding the SSH options TcpRcvBufPoll=yes and KeepAlive=yes, and disabling and enabling rsync compression: no changes. How can I make rsync work properly?
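
    A hedged workaround to try before digging deeper, with placeholder paths and hostname: keep the TCP session alive at the SSH layer, give rsync a more generous timeout, and let interrupted transfers resume instead of restarting.

        rsync -av --partial --timeout=300 \
          -e "ssh -o ServerAliveInterval=15 -o ServerAliveCountMax=4" \
          /local/path/ user@server.example.com:/remote/path/

    ServerAliveInterval/ServerAliveCountMax are standard OpenSSH client options (unlike TcpRcvBufPoll, which comes from the HPN-SSH patches), and --partial plus a retry loop often rides out a flaky 2 Mbps link.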

    Read the article

  • How to identify Drafts from Inbox and Sent mails in the All Mail mailbox

    - by Subbi
    Hello, I am working on a mail client application for downloading Gmail emails, which uses the IMAP c-client library. I want to download emails from the "All Mail" mailbox folder. As you know, the All Mail folder consists of Inbox, Sent Mail and Draft mails. My requirement is to distinguish Drafts from Inbox and Sent mails. Usually, downloading the envelope of an email should give the email's draft info, but Gmail fails to set this draft info. So can you please suggest how to identify a draft? Thanks in advance, Subbi
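
    One hedged approach: Gmail's IMAP extension X-GM-LABELS exposes each message's labels (including the system labels \Inbox, \Sent and \Draft) inside All Mail, so a client can fetch those instead of relying on the envelope. A sketch of the raw exchange; the untagged responses are illustrative:

        a1 LOGIN user@gmail.com password
        a2 SELECT "[Gmail]/All Mail"
        a3 FETCH 1:* (X-GM-LABELS)
        * 1 FETCH (X-GM-LABELS (\Inbox))
        * 2 FETCH (X-GM-LABELS (\Draft))

    In c-client this likely means sending the FETCH as a custom command and parsing the extra FETCH data yourself, since the stock library predates Gmail's extensions.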

    Read the article

  • Will this SPF record restrict delivery of email for the original domain?

    - by user199421
    As part of the product we offer, we send emails on behalf of our clients. Because the emails don't come from an IP address associated with the client, they are sometimes flagged as spam. We advised some of our clients to add an SPF record approving us to send emails on their behalf, and we saw immediate improvement in deliverability rates after making the change. However, one of our clients was notified by his hosting provider that the SPF record we suggested would "slightly restrict" all emails that don't come from our servers (including our client's own servers). The record we use is this:

        v=spf1 a mx include:ourdomain.com ~all

    So my question is whether the warning we received is correct, and if so, why, and what can be done to solve it (i.e. allow sending email both from the original domain and by ourselves).
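
    For reference, a reading of that record mechanism by mechanism (the zone-file formatting and client domain name are illustrative):

        ; TXT record published at the client's domain
        client-example.com.  IN TXT  "v=spf1 a mx include:ourdomain.com ~all"
        ;  a       - hosts in the domain's own A record may send
        ;  mx      - the domain's MX hosts may send
        ;  include - anything passing ourdomain.com's SPF check may send
        ;  ~all    - everything else gets a SOFTFAIL, not a hard reject

    Since "a" and "mx" already authorize the client's own servers, and "~all" merely softfails unknown senders, the "slightly restrict" warning describes expected SPF behaviour: mail from the client's listed hosts still passes. Any other machines the client sends from would need their own mechanism (e.g. an "ip4:" entry) added to the record.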

    Read the article

  • How to make sure clients update their browser cache when my website is updated?

    - by user64204
    I am using the HTTP 1.1 Cache-Control header to implement client-side caching. Since I update my website only once a month, I would like the CSS and JS files to be cached for 30 days with Cache-Control: max-age=2592000. The problem is that the 30-day period defined by Cache-Control doesn't coincide with the website's update cycle: it starts from the moment a user visits the site and ends 30 days later, which means an update could occur in the meantime and users would be running with outdated content for a while. This could break the rendering of the website if, for instance, the HTML and CSS no longer match. How can I perform client-side caching of content for periods of several days but somehow get users to refresh their CSS/JS files after the website has been updated? One solution I could think of is that, if website updates can be scheduled, the max-age returned by the server could be decreased every day accordingly, so that no matter when people visit the website, the end of the caching period would coincide with the update of the website. But changing the server configuration every day goes against one of my sysadmin principles (once it's running, don't touch it).
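
    The common alternative (a sketch; the file names and version token are illustrative) is to version the asset URLs in the HTML. The HTML itself is served with a short max-age, and each site update changes the token, so browsers treat updated CSS/JS as brand-new URLs while unchanged assets keep their 30-day cache:

        <link rel="stylesheet" href="/css/site.css?v=2013-06-01">
        <script src="/js/app.js?v=2013-06-01"></script>

    No server reconfiguration is needed between releases; only the markup (or a template variable holding the token) changes at deploy time.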

    Read the article

  • How to export a list of addresses that email in my spam folder was sent to? [migrated]

    - by Hugo
    With a Gmail username@gmail.com address, you'll also receive email sent to username+anything@gmail.com addresses, which is very handy for creating filters. I often use a username+sitename@gmail.com address when signing up to websites, so if I end up getting lots of spam sent to that address, I know who to blame. But what's a good way to find a list of all the username+anything@ addresses in my Gmail spam folder? I'd prefer to do this within the web client if possible. Next best would be an external client such as Outlook or Opera Mail, but without having to download lots of mail if possible; I don't really want to download the spam emails themselves.
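
    Outside the web client, one hedged route is raw IMAP, since headers can be fetched without downloading message bodies. A sketch of the exchange (the folder name is Gmail's standard spam folder; credentials are placeholders):

        a1 LOGIN username@gmail.com app-password
        a2 SELECT "[Gmail]/Spam"
        a3 FETCH 1:* (BODY.PEEK[HEADER.FIELDS (TO DELIVERED-TO)])

    Piping the transcript through something like grep -oE 'username\+[^@ ]+@gmail\.com' | sort -u then yields the distinct plus-addresses that received spam.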

    Read the article

  • Do you use your personal laptop for work? [closed]

    - by davekaro
    We're trying to get our company to let us use our own personal laptops for client work. We've agreed that any code/data will be encrypted using something like TrueCrypt, in case a laptop is stolen or lost. However, the company is still skeptical and not sure they want to allow us to use our personal machines for development. They would rather buy us laptops, but we want to use MacBook Pros and they don't want to pay for them. Even if they did buy us laptops, we would still have the issue of needing to encrypt the code/data in case of theft or loss. Do you use your own laptop for work? What are the arguments for and against this?
    UPDATE: Thanks for all the responses; it's given us a lot to think about. This was originally brought up because we were asking for a "personal loan" to buy new laptops for ourselves, and then we threw in that we would use these laptops for work too, since right now we use our personal laptops occasionally, e.g. at a client site or for weekend support.

    Read the article

  • VPN Authentication Credentials (Local/Remote Identifiers) For Remote Access VPN

    - by thatidiotguy
    So I am trying to set up a remote-access VPN using the free ShrewSoft VPN client (https://www.shrew.net/software). I want to use a PSK as the authentication mechanism, combined with XAuth so that a connection requires a valid username/password combo. Under the authentication tab, this particular VPN client is asking for a Local Identity and a Remote Identity.
    The options for Local Identity Type are:
    - Fully Qualified Domain Name
    - User Fully Qualified Domain Name
    - IP Address
    - Key Identifier
    The options for Remote Identity are:
    - Any
    - Fully Qualified Domain Name
    - User Fully Qualified Domain Name
    - IP Address
    - Key Identifier
    My current thinking is that I can use the Fully Qualified Domain Name provided by the remote firewall for the Remote Identity, but I do not know what it wants for the Local Identity. Just to stress: I am not trying to set up a site-to-site VPN. Can anybody shed any light on what I am missing here? A screenshot can be provided if that would be helpful. The current error I am getting during the connection is: "IKE Responder: Proposed IKE ID mismatch".

    Read the article

  • Windows clients cannot access machine running DHCP server

    - by science9712
    I'm trying to set up a small LAN using an Ethernet switch, an Arch Linux server, and around 10 Windows XP machines. This network has no outside connections. The Arch machine has a manually configured IP address (configured with "ip addr add 192.168.0.1 dev eth0") and acts as a DHCP server (using dhcpd). This portion works great: Windows clients get IP addresses, the correct gateway settings, perfect. However, the clients cannot connect to each other or to the DHCP server. When I run "ping 192.168.0.1" on any client, I get no response; the same happens if I try to ping any other client. On the gateway machine, I can't ping any of the clients either. Any help would be much appreciated!
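
    A likely culprit, hedged as a guess from the command quoted above: "ip addr add" without a prefix length assigns a /32, so the server has no on-link route to 192.168.0.0/24 and cannot exchange unicast traffic with the clients, while DHCP still works because it rides on broadcasts. A sketch of the fix:

        ip addr del 192.168.0.1 dev eth0
        ip addr add 192.168.0.1/24 dev eth0
        ip route show    # should now list "192.168.0.0/24 dev eth0"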

    Read the article
