Search Results

Search found 16616 results on 665 pages for 'home sharing'.


  • How to configure auto-logon in Active Directory

    - by Jonas Stensved
    I need to improve our account management (using Active Directory) for a customer support site with 50+ computers. The default "AD" way is to give each user their own account. This adds up to a lot of administration: adding, disabling and enabling user accounts. To avoid this, supervisors have started to use shared "general" accounts like domain\callcenter2, and I don't like the idea of everyone knowing and sharing accounts and passwords. Our ideal solution would be to create a group of computers that requires no login by the user, i.e. the users just have to start the computer. Should I configure auto-logon with a single user account like domain\agentAccount? Is there anything else to consider if I use the same account for all users? How do I configure the actual auto-logon with a GPO on the group? Is there a "Microsoft way" without 3rd-party plugins? Or is there a better solution?
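    One possible "Microsoft way", as a sketch only: the classic Winlogon auto-logon registry values, pushed to the call-centre computers with Group Policy Preferences (Computer Configuration > Preferences > Windows Settings > Registry) filtered to a security group containing those machines. The account name and password below are placeholders, and DefaultPassword is stored in clear text, so the shared account should have minimal rights:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon]
        "AutoAdminLogon"="1"
        "DefaultDomainName"="domain"
        "DefaultUserName"="agentAccount"
        "DefaultPassword"="PlaceholderPassword"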

    Read the article

  • Permissions in OS X for iTunes library with multiple users

    - by John
    I currently have a lot of music on an external drive and my iTunes library set up from there. However, periodically, when the external drive isn't connected, iTunes will default back to the library location in my home directory. I don't want to mess with an external drive, as my Mac's HD is large enough to house the music collection. However, I have 4 family members – all with their own logins – using this same gob of music. I don't want four copies of the library, only one that all libraries reference. So, what I want to do is:
    1. Move all music files to a shared directory at /Macintosh HD/users/music. I created this directory and adjusted permissions, so all four users can read and write to it.
    2. Get all four accounts to reference this library instead of the external or local home locations.
    I am hoping I can just check the box to keep the library organized in my account, which is the admin, and let iTunes move it all, then delete the current libraries for each account and re-add from the new shared location. Will the iTunes organization process cause permissions issues – for example by restricting access to the files to my account only, removing write permissions, or any other 'gotcha'? I am having a hard time coming up with a smooth solution that won't break everything and cause me to have mega duplicates or access issues. I would prefer not to do any XML library file editing if possible. Am I dreaming?
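    A minimal permissions sketch for a shared folder like that – the path and group are assumptions (this uses /Users/Shared/Music and the built-in staff group, which local accounts normally belong to). The inheritable ACL is what keeps files written later by any of the four accounts writable by the others; plain chmod alone does not guarantee that:

        sudo chgrp -R staff "/Users/Shared/Music"
        sudo chmod -R g+rwX "/Users/Shared/Music"
        # inheritable ACL so new files/folders created by iTunes stay group-writable
        sudo chmod -R +a "group:staff allow list,add_file,search,add_subdirectory,delete_child,readattr,writeattr,file_inherit,directory_inherit" "/Users/Shared/Music"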

    Read the article

  • Lighttpd mod_accesslog not logging fastcgi requests

    - by zepatou
    I have recently installed lighttpd to serve a Python script via mod_fastcgi. Everything works fine except that the requests handled by mod_fastcgi are not logged in the access.log file (requests on port 80 are logged, though). My lighttpd version is 1.4.28 on Debian 6.0. I used the same configuration on an Ubuntu Server 10.04 with lighttpd 1.4.26 and it worked. Here is my config:

    lighttpd.conf
        server.modules = (
            "mod_access",
            "mod_alias",
            "mod_accesslog",
            "mod_compress",
        )
        server.document-root  = "/var/www/"
        server.upload-dirs    = ( "/var/cache/lighttpd/uploads" )
        server.errorlog       = "/home/log/lighttpd/error.log"
        index-file.names      = ( "index.php", "index.html", "index.htm", "default.htm", "index.lighttpd.html" )
        accesslog.filename    = "/home/log/lighttpd/access.log"
        url.access-deny       = ( "~", ".inc" )
        static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )
        server.pid-file       = "/var/run/lighttpd.pid"
        include_shell "/usr/share/lighttpd/create-mime.assign.pl"
        include_shell "/usr/share/lighttpd/include-conf-enabled.pl"

    conf-enabled/10-fastcgi.conf
        server.modules += ( "mod_fastcgi" )
        fastcgi.server = (
            "/" => (
                (
                    "min-procs"   => 1,
                    "check-local" => "disable",
                    "host"        => "127.0.0.1",  # local
                    "port"        => 3000
                ),
            )
        )

    Any idea?

    Read the article

  • Setting up virtualbox for outside access

    - by Morgan Green
    I have a computer running a server that my subdomain on my shared hosting account points to, i.e. subdomain.mydomain.org goes to my home server. What I want to do is be able to access my VirtualBox servers through that subdomain and a different port. E.g.:

    Ubuntu VirtualBox Server 1 – Username: Ubuntuhost1, Password: MyUbuntuHost1, Port: 4000, Internal IP: 192.168.1.60, External IP: 24.29.138.45
    Ubuntu VirtualBox Server 2 – Username: UbuntuHost2, Password: MyUbuntuHost2, Port: 4001, Internal IP: 192.168.1.61, External IP: 24.29.138.45

    I want to be able to access RDP number 1 through port 4000, but if I access port 4001 it should connect to the server on port 4001 – both using the same subdomain. The next issue is that even though I know the VirtualBox hosts' IP addresses from ifconfig, they don't show up on the router. If anyone knows how to configure this to work, please help me out, because I've been racking my brain over it. Here's an edit to clarify more; sorry. The ports on the router are set to forward port 4000 to internal IP 192.168.1.63 (my Ubuntu internal IP address). When I go to my router's home page the VirtualBox internal IP address doesn't show in the attached-device listing, but I set up port forwarding to the VirtualBox internal IP anyway. My end goal is that when I connect to mydomain.org on port 3389 it takes me to my host computer's server, but if I connect to mydomain.org on port 4000 it redirects to my VirtualBox server. Is this even possible? Sorry, I'm trying to clarify as much as I can; I just don't know how else to explain my issue.
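    One way to sidestep the guests not appearing on the router is to leave them on VirtualBox NAT networking and forward a host port to each guest's remote-desktop port, so the router only ever needs to forward ports 4000 and 4001 to the host machine's LAN address. A sketch, assuming the VMs are named UbuntuHost1 and UbuntuHost2 (placeholder names) and a remote-desktop service listens on 3389 inside each guest:

        VBoxManage modifyvm "UbuntuHost1" --natpf1 "rdp1,tcp,,4000,,3389"
        VBoxManage modifyvm "UbuntuHost2" --natpf1 "rdp2,tcp,,4001,,3389"

    With that in place, mydomain.org:4000 and mydomain.org:4001 reach the two guests, and mydomain.org:3389 can keep pointing at the host itself.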

    Read the article

  • Web browsing through SSH tunnel gets stuck/clogged

    - by endolith
    I use tools like Tunnelier to log into my home Tomato router through SSH, and then use it as a proxy for web browsing, a tunnel for Remote Desktop/VNC, etc. Most days it works great, but some days every page I try to view gets stuck, like the tunnel is clogged. I load a web page and it seems to be loading, then stops, with the little loading icon spinning and nothing happening. I've refreshed the page, rebooted the router, rebooted the other computers on my home network and turned off any bandwidth-hogging services on them, and turned on QoS on the router to prioritize SSH. I don't understand what's getting stuck. Rebooting or disconnecting/reconnecting the SSH tunnel improves responsiveness for a minute, but then it gets clogged again. It also seems to help if I don't do anything on the tunnel for a few minutes; then it will be responsive for a bit and then get clogged again. Trying to open a terminal console from Tunnelier is also unresponsive, so it's not just a web browsing problem. Likewise, connecting to http://192.168.1.1 in the browser (to the router's web config through its own tunnel) is also slow/laggy/halting. The real-time bandwidth reported by the router is nowhere near my DSL connection's limits, though it does show big spikes during the laggy times, and the connection is responsive when it shows low bandwidths. How do I troubleshoot something like this?
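    One thing worth ruling out is a stalled TCP session that the client never notices. If an OpenSSH client is an option alongside Tunnelier, keepalives make it detect and drop a dead tunnel instead of hanging indefinitely; a ~/.ssh/config sketch (the host name is a placeholder):

        Host home-router
            HostName myrouter.example.com   # placeholder address
            ServerAliveInterval 15          # probe the server every 15 seconds
            ServerAliveCountMax 3           # give up after 3 missed replies
            DynamicForward 1080             # SOCKS proxy for the browser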

    Read the article

  • Servers at remote sites vs. centralized servers?

    - by Boden
    Looking for some opinions here. We've got three physical locations and site-to-site VPN between all three. Currently we've got Windows domain controllers at each location, with roughly 50 clients at each. The domains are currently separate, and we're looking at integrating the three sites. Email (Exchange) will be located at the primary site, and RDP is already being used at the secondary branches to hit the app servers also located at the primary site. The bulk of the local user load at the other two sites is just file sharing. What would the main benefits and drawbacks be of replacing the local domain controllers with NAS devices, and only keeping the domain controller(s) at the primary site? (assuming upgrades are coming regardless) Under what circumstances would you choose one setup over the other?

    Read the article

  • Best practice for scaling a single application source to multiple nodes

    - by Andrew Waters
    I have an application which needs to scale horizontally to cover web and service nodes (at the moment they're all on one) but interact with the same set of databases and source files (both application code and custom assets). The database is no problem; it's already handled with replication in MongoDB. Also, the configuration of the servers is the same (100% Linux). This question is literally about sharing a filesystem between machines so that its content is always correct, regardless of the node accessing it. My two thoughts have so far been NFS and SAN – SAN being prohibitively expensive and NFS seeing some performance issues on the second node with regards to glob()ing in PHP. Does anyone have recommended strategies or other techniques that don't involve sharding data across nodes, or any potential gotchas in NFS that may cause slow disk seek times? To give you an idea of the scale, the main node initialises its application modules in ~0.01 seconds. The secondary is taking ~2.2 seconds. They're VMs inside a local virtual network in ESXi and the ping time between them is ~0.3 ms.
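    If NFS stays in the picture, most of the glob()/stat pain on the secondary node comes from per-file metadata round trips, so attribute caching and skipping atime updates are the first knobs to try; PHP's realpath cache (realpath_cache_size / realpath_cache_ttl in php.ini) helps for the same reason. A hedged /etc/fstab sketch – the host, export path and values are illustrative, not a recommendation for this exact workload:

        nfs-master:/var/www  /var/www  nfs  ro,noatime,nfsvers=3,actimeo=60,rsize=32768,wsize=32768  0 0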

    Read the article

  • Can remote LogMeIn Hamachi users access our local LAN?

    - by Kev
    Unknown to me, one of the kids has installed LogMeIn Hamachi on his PC so that he can access and play on his pal's Minecraft server, and vice versa. One of the things I did was disable the Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks on the Hamachi NIC in Windows 7's Network Connections. However, my lack of fu when it comes to these types of services is leaving me feeling a little uncomfortable about him using this. Is there anything I should be worried about here? For example, can his friends access our local LAN (which has a number of NAS boxes with unsecured shares) and get up to no good?

    Read the article

  • IIS 7, FastCGI, PHP and custom php.ini files

    - by Marlon
    I'm running PHP 5.3, FastCGI, and IIS 7 on Windows Server 2008. I have a site which I would like to configure with its own php.ini settings, but things aren't working as expected. I am following the tutorial located here. This is what I have done so far:
    1) Configured a new website with its own AppPool.
    2) Selected PHP 5.3.6 from the PHP Manager available on the website home in IIS (not the web server home, which sets the global version of PHP).
    3) Added the following lines to the <fastCgi> section of the applicationHost.config file located at system32\inetsrv\config:
        <application fullPath="C:\Program Files (x86)\PHP\v5.3\php-cgi.exe" arguments="-d open_basedir=C:\inetpub\wwwroot\kickasswebsite.com" maxInstances="4" idleTimeout="300" activityTimeout="30" requestTimeout="90" instanceMaxRequests="200" protocol="NamedPipe" queueLength="1000" flushNamedPipe="false" rapidFailsPerMinute="10">
          <environmentVariables>
            <environmentVariable name="PHPRC" value="c:\inetpub\wwwroot\kickasswebsite.com" />
          </environmentVariables>
        </application>
    4) Created a php.ini file in C:\inetpub\wwwroot\kickasswebsite.com (the root of the website) containing:
        register_globals = on
    5) Ran test.php, which simply outputs everything phpinfo() returns.
    At this point, I observe that the global setting for register_globals = off (as it should be), but the local setting for register_globals = off as well, even though I specified it differently in the php.ini file I created at the root of the site. Furthermore, I see these settings in the phpinfo() output:
        Configuration File (php.ini) Path: C:\Windows
        Loaded Configuration File: C:\Program Files (x86)\PHP\v5.3\php.ini
        Scan this dir for additional .ini files: (none)
        Additional .ini files parsed: (none)
    What am I messing up, or is there a different way to go about this?
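    As a possible alternative to fighting PHPRC: PHP 5.3 running under CGI/FastCGI also reads per-directory .user.ini files, which can override PHP_INI_PERDIR directives such as register_globals for a single site without touching the global php.ini. A sketch, assuming the default user_ini.filename setting:

        ; C:\inetpub\wwwroot\kickasswebsite.com\.user.ini
        register_globals = On

    Changes are picked up after user_ini.cache_ttl (300 seconds by default) or an application-pool recycle.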

    Read the article

  • Why does the Mac OS X (10.6) Finder stop being able to connect to Windows computers?

    - by Drarok
    We have one Mac (a MacBook Pro Unibody) amongst our Windows machines which connects to the "server" (actually a Dell running Windows XP Pro) to access documents. This works just fine most of the time, but sometimes after waking from sleep, it cannot connect to any Windows computer on the network. There are no errors (nor even any messages) in the Console application when attempting to connect, either by the Finder's network browsing or when using the "Connect To Server" menu. I have tried "Relaunch" on Finder, toggled File Sharing, and disabled and re-enabled the AirPort, but nothing makes the Mac able to connect again until I reboot it! Other computers can connect to the machines, so it is definitely the Mac at fault. Are there any workarounds that anyone has found? Is there perhaps a way to restart the Samba client?
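    One workaround worth trying before a full reboot is mounting the share from Terminal, which bypasses Finder's browsing and usually produces a concrete error message when it fails; a sketch with placeholder server, share and account names:

        mkdir -p ~/dell-docs
        mount_smbfs //winuser@dell-server/Documents ~/dell-docs
        # when finished (or before putting the Mac to sleep):
        umount ~/dell-docs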

    Read the article

  • Are SATA II and SATA 3.0 Gbps compatible?

    - by Johnny Maelstrom
    I am trying to check that if I buy a new internal HDD it will work in the NAS I am buying. Currently I'm confused about naming schemes and, once that is resolved, whether there is compatibility. I will gladly author this question to be more general if there is not already an article helping with the confusion of SATA naming and standards. I see similar, but not identical questions and will accept this as a duplicate if thought as such. The specifications on the eCommerce site for the NAS say, "Controller Interface Type Serial ATA-150"; the product home page for the manufacturer says, "Compatible with SATA and SATA II HDD". The specifications on the eCommerce site for the hard drives say, "Interface Type Serial ATA-300"; the product home page for the manufacturer says, "Interface SATA 3.0 Gbps". Wikipedia says many things about different naming conventions, the closest being, "SATA II 3.0 Gbit/s, which was colloquially referred to as "SATA 3G" [bps] or "SATA 300" [MB/s] (since 1.5 Gbit/s SATA I and 1.5 Gbit/s SATA II were referred to as both "SATA 1.5G" [b/s] or "SATA 150" [MB/s]). Therefore, they will operate with negligible differences between them." Are SATA II and SATA 3.0 Gbps the same? I feel I'm tantalisingly close to getting a definitive answer here before I purchase, but really want to clear up these naming schemes.

    Read the article

  • Custom Domain for Google App Engine and Google Apps

    - by Kevin
    I have set up and configured Google App Engine and Google Apps to use my custom domain with a CNAME 'www'. I have configured my DNS (via fasthosts.co.uk) with the CNAME and pointed it to ghs.google.com. I can access the website using the Google App Engine domain at capel-y-crwys.appspot.com but I can't access it via my custom domain www.capelycrwys.org.uk. I have allowed several days for propagation of the DNS etc. The really strange thing is I can access the app via my custom domain when I use the web browser on my Android mobile phone. I can't access the app via my custom domain from my home internet connection, my work internet connection or a friend's internet connection. I tried a few online web proxies and I can access the app via the custom domain. I posted this question on the google forums code.google.com/appengine/forum/?place=topic%2Fgoogle-appengine%2FfUP-G_0FKE4%2Fdiscussion and a commenter has said he could access the app via the custom domain. So why can't I access it directly via my home internet connection etc.? I've tried loads of google searching and even found a similar sounding post here on serverfault serverfault.com/questions/208461/custom-domain-name-server-not-found-google-app-engine-and-google-apps but it doesn't have an answer that helps me.
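    Since the site loads from the phone's network and through web proxies but not from several fixed-line connections, the first thing to compare is what each resolver actually returns for the CNAME; every answer should point at ghs.google.com. A quick check:

        dig www.capelycrwys.org.uk CNAME +short            # your ISP's resolver
        dig @8.8.8.8 www.capelycrwys.org.uk CNAME +short   # Google's public resolver, for comparison
        nslookup www.capelycrwys.org.uk                    # the equivalent check from a Windows machine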

    Read the article

  • How do I share a complete XP disk so it can be seen from a Windows 7 system? (To move all files to a new machine)

    - by Ian Ringrose
    This should be easier! (Both computers can see the internet etc., so I know the network itself is working.) I have a normal home network with a Windows XP machine on it and the new Windows 7 (64-bit) machine. So I can transfer the files to the new Windows 7 machine, I wish to share the complete disk (and all files) from the Windows XP machine and access them from the Windows 7 machine. Is there a step-by-step set of instructions for doing this anywhere? So far I have:
    - put both computers into the same workgroup
    - put the Windows 7 machine into work network mode so it can see the XP machine in the workgroup
    - shared the XP disk as read only
    But when I try to access a lot of the folders on the XP disk, I am told I am not allowed to access them. (I was not asked for any passwords by the Windows 7 machine when I accessed the XP machine. The XP machine just has its default account with no password set on it.) The XP machine runs XP Home and hence has "simple file sharing" turned on, so it seems that even if I create an admin account (with password) and connect with that account, it still comes in as "guest" on the XP machine. Choosing to share the folder I want access to rather than the top of the disk drive seems to work, but is a pain, as I need to share each user's folder with a different share name. If the new computer were not a laptop, I would just plug the hard disk from the old machine into it, but being a laptop I don't have that option.
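    Because Simple File Sharing on XP Home maps every incoming connection to Guest, the NTFS permissions on each profile folder also have to allow Everyone (which includes Guest) to read them, in addition to the share itself. A sketch run on the XP machine – the share name is a placeholder, and opening other users' profiles to Everyone obviously has privacy implications on a shared family PC:

        net share OldDisk=C:\ /remark:"whole XP disk, read only"
        cacls "C:\Documents and Settings" /T /E /G Everyone:R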

    Read the article

  • Cannot access any remote resource after connecting to Cisco VPN on Vista

    - by Deepak Singh Rawat
    I have installed Cisco VPN client version 5.0.07.0290 on Vista Business SP2. I am able to successfully connect to the VPN, but after connecting I am not able to access any resource in the VPN (like the database, other computers in the network, etc.). I have tried the following without any success:
    - older versions of the client
    - other VPN clients like Shrew Soft: same issue as the Cisco VPN client
    - disabled the Internet Connection Sharing service
    - installed the client in the root administrator account
    - ran the installer as administrator
    - ran vpngui and ipsecdialer in XP compatibility mode and as administrator
    I am not sure how to troubleshoot this issue. Can somebody please help me troubleshoot it? P.S.: I have the ZoneAlarm firewall; can that be an issue?
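    Once the tunnel is up, the usual first checks are whether the client actually installed routes for the remote subnets and handed out usable DNS servers; each can be verified from a command prompt (the address and host name below are placeholders):

        ipconfig /all                   (did the VPN adapter get an address and the expected DNS servers?)
        route print                     (are there routes to the remote subnets via the VPN adapter?)
        ping 10.0.0.5                   (test a known host on the far side by IP address)
        nslookup dbserver.corp.example  (then test name resolution separately)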

    Read the article

  • cygwin rsync over ssh very slow

    - by Waleed Hamra
    I have 2 machines running Windows XP SP3. I have Cygwin 1.7 installed on both, with rsync and ssh installed and configured using default settings as per the provided ssh-host-config and ssh-user-config programs. I moved the public keys into their respective locations, and ssh is basically working fine. I began an rsync operation using: rsync -av --delete --hard-links local_dir username@other_machine:/some_dir On both machines the processor is running near idle, with no heavy usage. I checked I/O using Process Explorer on both machines, and that too is at normal levels (1~2 MB/s), so I can't see where the bottleneck is, because network performance is awful: I'm not going over 1 MB/s, while a normal file copy using Windows sharing achieves some ~10 MB/s. What could be wrong?
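    It helps to establish whether the slowness is in ssh itself or in rsync's delta algorithm (which is comparatively expensive under Cygwin). Two hedged tests from the Cygwin shell:

        # raw ssh throughput: push ~100 MB of zeros straight through the tunnel
        dd if=/dev/zero bs=1M count=100 | ssh username@other_machine "cat > /dev/null"

        # if raw ssh is fast, retry rsync without the delta algorithm and with compression
        rsync -av --delete --hard-links --whole-file -z local_dir username@other_machine:/some_dir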

    Read the article

  • Thunderbird segmentation fault problem

    - by Dariusz Górecki
    Hey all, my Thunderbird crashes a few seconds after it starts, regardless of whether it is in safe mode or normal mode. Here is a strace log: http://pastebin.com/tccfYwcD I've searched Google, the Ubuntu forums, and the Mozilla bug tracker about this problem, but none of the answers I found helped me :/ I've tried:
    - installing the nscd daemon, as per solutions posted earlier
    - a fresh clean install of Ubuntu 10.10 (32-bit) and Thunderbird 3.1.7 from the repos on a VM – the problem still exists
    - removing all Thunderbird-related dot files and dot dirs, and setting up the profile from the beginning
    - removing Thunderbird and related packages with apt purge, and installing TB from the official .deb package
    None of these steps helps me; TB still crashes with a segmentation fault :/ I use a Gmail IMAP account. I've searched and found a few other tips on Google, but with no success. I've even tried removing, from the web GUI, the mails that arrived around the time I first noticed the problem – still no luck. I'm not using any network filesystem, I'm not sharing this account with anyone, and I don't use filesystem or home encryption either, just a clean install :/ If you guys need some more info, let me know. TB: 3.1.7 from repos and official package; Ubuntu 10.10 32-bit with Medibuntu repos, fully updated.

    Read the article

  • User-unique .vimrc file for servers as root user

    - by Scott
    I'm getting thrown into an IDE war at the office, where multiple users have root access on our servers and like to have everything their own way with VIM. Unfortunately, we have our servers locked down enough that if you want to do anything, you need root access. Obviously (although this is frowned upon), we get tired of typing sudo before each command, which requires constantly typing in the wonderfully complex passwords that are mandated on us, so naturally we all just execute the sudo su - command upon login to avoid all of this. Of course, when it comes to VIM and custom .vimrc files, we are oftentimes stepping on someone else's custom .vimrc file, and these files contain some whacked-out functionality that may override behaviour we have no idea about, much less have the patience to learn. When working as root on a Linux box, is there any way for all of us to still maintain our own .vimrc files without having to overwrite the file over and over again every time someone wants to use VIM? Ideally, a universal solution would be best – we have many virtual machines, all with VIM installed – and we do have our Microsoft Windows user-specific home directories mounted on the servers under /home/username. Any recommendations for accommodating this?
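    One low-friction sketch, assuming the admins can be talked into sudo -i instead of sudo su - (sudo -i keeps SUDO_USER in the environment; su - wipes it): a couple of lines in root's ~/.bashrc point vim at the invoking admin's own vimrc via vim's -u flag. Since the per-user home directories are mounted at /home/username on every VM, the same snippet works across all the servers:

        # /root/.bashrc - use the invoking admin's vimrc (assumes login via 'sudo -i')
        if [ -n "$SUDO_USER" ] && [ -f "/home/$SUDO_USER/.vimrc" ]; then
            alias vim="vim -u /home/$SUDO_USER/.vimrc"
        fi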

    Read the article

  • How to allow an internal server accept remote connections not through RD Gateway

    - by Matt Ahrens
    So, I help administrate a collection of servers running various Windows Server environments. We have an RD Gateway server, properly configured, to gatekeep for us. It does not have the other servers listed in its server farm category, though. I just added a refurbished server for a non-profit development environment that is sharing the rack space and port. I would like this server to be accessible via remote connection, but not require RD Gateway certification (I cannot add the users for this development server to our gateway since they do not work for the organization hosting the rack.) Is there any way for me to add this dev server as an exception to which servers require RD Gateway clearance, or otherwise let users bypass RD Gateway credentials for this one machine? Thanks, and let me know if I am misinformed on how RD Gateway works or anything. I am still learning.
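    For what it's worth, the RD Gateway requirement lives in each client's connection settings rather than on the destination server, so the dev users can connect directly as long as something (a router port forward, for instance) exposes that machine's RDP port to them. A sketch of the relevant lines from a saved .rdp file – the host name is a placeholder, and gatewayusagemethod 0 means "do not use an RD Gateway":

        full address:s:devserver.example.org:3389
        gatewayusagemethod:i:0
        gatewayhostname:s: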

    Read the article

  • After few days of server running fine with nginx it start throwing 499 and 502

    - by Abhay Kumar
    Nginx starts throwing 499 and 502 after running fine for a few days. The website is a Rails app using thin as the web server. Restarting Nginx does not seem to help. Below is the Nginx config under sites-enabled:

        upstream domain1 {
            least_conn;
            server 127.0.0.1:3009;
            server 127.0.0.1:3010;
            server 127.0.0.1:3011;
        }
        server {
            listen 80; # default_server;
            server_name xyz.com *.xyz.com;
            client_max_body_size 5M;
            access_log /home/ubuntu/www/xyz/current/log/access.log;
            root /home/ubuntu/www/xyz/current/public/;
            index index.html;
            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_read_timeout 150;
                if (!-f $request_filename) {
                    proxy_pass http://domain1;
                    break;
                }
            }
        }
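    For context while debugging: 499 is nginx noting that the client closed the connection before a response arrived, and 502 means the proxied request to the thin upstreams failed, so it is worth checking whether the three thin processes on ports 3009-3011 are still alive and accepting connections when the errors appear. A hedged set of extra directives for the location block that makes the upstream behaviour more explicit (the values are illustrative, not tuned):

        proxy_connect_timeout 5;
        proxy_send_timeout    60;
        proxy_next_upstream   error timeout http_502;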

    Read the article

  • lighttpd: why using port >= 9000 does not work properly

    - by yejinxin
    I have a lighttpd server which works normally; I can access this website from outside (non-localhost) via http://vm.aaa.com:8080. Let's just assume that it's a simple static website, without PHP or MySQL. Now I want to copy this website as a test instance (using another port) on the same machine, and I do not want to use a virtual host. So I just copied the whole file tree of the original server, including lighttpd's bin/, conf/, htdocs/, lib/ and so on, and made the required changes, including changing lighttpd.conf. What confuses me is this: if I change the port to a number less than 9000, it works perfectly, but if the port is equal to or greater than 9000, lighttpd can start, but I cannot access the new website from outside, while I can access it from INSIDE (I mean in the same LAN or localhost). The access log from inside looks like this:

        vm.aaa.com:9876 10.46.175.117 - - [08/Oct/2012:13:18:47 +0800] "GET / HTTP/1.1" 200 15 "-" " curl/7.12.1 (x86_64-redhat-linux-gnu) libcurl/7.12.1 OpenSSL/0.9.7a zlib/1.2.1.2 libidn/0.5.6"

    The command I use to start lighttpd is:

        bin/lighttpd -f conf/lighttpd.conf -m lib/ -D

    My lighttpd.conf looks like:

        server.modules = (
            "mod_access",
            "mod_accesslog",
        )
        var.rundir           = "/home/work/lighttpd_9876"
        server.port          = 9876
        server.bind          = "0.0.0.0"
        server.pid-file      = var.rundir + "/log/lighttpd.pid"
        server.document-root = var.rundir + "/htdocs/"
        var.cronolog_path    = "/home/work/lighttpd_9876/cronolog/sbin/cronolog"
        server.errorlog      = ...
        accesslog.filename   = ...
        ...

    So why is this happening? I've tried several different ports, still the same. Aren't all ports between 8000 and 65535 treated the same?
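    Since the same configuration answers fine on a low port and from inside the LAN, lighttpd itself is probably not the culprit; a host firewall rule or an upstream device/ISP policy filtering the high port is the more common cause. Quick checks on the server (the interface name is an assumption):

        netstat -tlnp | grep 9876    # confirm lighttpd is listening on 0.0.0.0:9876
        iptables -L -n -v            # look for REJECT/DROP rules that cover the port
        tcpdump -i eth0 port 9876    # see whether packets from outside arrive at all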

    Read the article

  • Ubuntu 9.10 RSA authentication: ssh fails, filezilla runs fine

    - by MariusPontmercy
    This is quite a mystery for me. I usually use passwordless RSA authentication to log into my remote *nix servers with ssh and sftp. I never had any problem until now. I cannot connect to an Ubuntu 9.10 machine:

        user@myclient$ ssh -i .ssh/Ganymede_key [email protected]
        [...]
        debug1: Host 'ganymede.server.com' is known and matches the RSA host key.
        debug1: Found key in /home/user/.ssh/known_hosts:14
        debug2: bits set: 494/1024
        debug1: ssh_rsa_verify: signature correct
        debug2: kex_derive_keys
        debug2: set_newkeys: mode 1
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug2: set_newkeys: mode 0
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug2: service_accept: ssh-userauth
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug2: key: .ssh/Ganymede_key (0xb96a0ef8)
        debug2: key: .ssh/Ganymede_key ((nil))
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug1: Next authentication method: publickey
        debug1: Offering public key: .ssh/Ganymede_key
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug1: Trying private key: .ssh/Ganymede_key
        debug1: read PEM private key done: type RSA
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug2: we did not send a packet, disable method
        debug1: Next authentication method: keyboard-interactive
        debug2: userauth_kbdint
        debug2: we sent a keyboard-interactive packet, wait for reply
        debug2: input_userauth_info_req
        debug2: input_userauth_info_req: num_prompts 1

    Then it falls back to password authentication. If I disable password authentication on the remote machine, my connection attempt just fails with "Permission denied (publickey)." The same thing happens for sftp from the command line. The "funny" thing is that the exact same RSA key works like a charm with a FileZilla sftp session instead:

        12:08:00 Trace: Offered public key from "/home/user/.filezilla/keys/Ganymede_key"
        12:08:00 Trace: Offer of public key accepted, trying to authenticate using it.
        12:08:01 Trace: Access granted
        12:08:01 Trace: Opened channel for session
        12:08:01 Trace: Started a shell/command
        12:08:01 Status: Connected to ganymede.server.com
        12:08:02 Trace: CSftpControlSocket::ConnectParseResponse()
        12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
        12:08:02 Trace: CControlSocket::ResetOperation(0)
        12:08:02 Status: Retrieving directory listing...
        12:08:02 Trace: CSftpControlSocket::SendNextCommand()
        12:08:02 Trace: CSftpControlSocket::ChangeDirSend()
        12:08:02 Command: pwd
        12:08:02 Response: Current directory is: "/root"
        12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
        12:08:02 Trace: CControlSocket::ResetOperation(0)
        12:08:02 Trace: CSftpControlSocket::ParseSubcommandResult(0)
        12:08:02 Trace: CSftpControlSocket::ListSubcommandResult()
        12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
        12:08:02 Trace: CControlSocket::ResetOperation(0)
        12:08:02 Status: Directory listing successful

    Any thoughts? M
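    Since the FileZilla session ends up in /root, that login is authenticating as root; a first sanity check is whether the command-line attempt uses the same user, and what sshd on the Ubuntu box says about the refused key (sshd's StrictModes check also silently ignores authorized_keys when the home or .ssh permissions are too loose). A hedged set of checks – the root@ username is an assumption based on the FileZilla log:

        # client side: same key, explicitly the same user as the FileZilla session
        ssh -vvv -i ~/.ssh/Ganymede_key root@ganymede.server.com

        # server side: watch the auth log during the attempt
        tail -f /var/log/auth.log

        # server side: rule out the StrictModes permission check
        chmod go-w /root && chmod 700 /root/.ssh && chmod 600 /root/.ssh/authorized_keys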

    Read the article

  • Limit number of simultaneous connections squid makes to a single server

    - by Ben Voigt
    Note: I am asking about outbound concurrent connection limits, not inbound, which is sufficiently covered on existing questions Modern browsers typically open a large number of simultaneous connections, to take advantage of the fact that TCP fairly shares bandwidth between connections. Of course, this doesn't result in fair sharing between users, so some servers have started penalizing hosts which open too many connections. This limit can be configured client-side (e.g. IE MaxConnectionsPerServer, Firefox network.http.max-connections-per-server), but the method differs for each browser and version, and many users aren't competent to adjust it themselves. So we turn to a squid transparent HTTP proxy for central management of HTTP download. How can the number of simultaneous connections from squid to a remote webserver be limited, so the webserver doesn't perceive it as abuse of concurrent connections? Ideally the limit would be per source address. Squid should accept virtually unlimited concurrent requests from the client browser, and issue them sequentially to the remote server, only N at a time, delaying (but not dropping) the others.

    Read the article

  • Printer offline until spooler service is restarted multiple times

    - by Zian Choy
    When I try to print from my ThinkPad to a printer shared through a Windows 7 Homegroup hosted by a desktop computer, I often have to restart the Print Spooler service several times before the job will go through. In particular, this problem occurs when the desktop is in sleep mode when the print job is started and is then brought out of sleep mode after the print job has been kicked off. Both computers are running Windows 7 32-bit edition with the latest patches. I have tried the following with no improvement:
    - the SNMP registry hack (see the MS KB for details)
    - following the instructions in a blog post entitled "Sharing Printers on Vista 64-bit"
    - looking at "Printer offline until spooler service is restarted"
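    Not a fix, but while the cause is being tracked down the spooler restart can at least be reduced to a double-click; saved as something like restart-spooler.cmd and run as administrator on the laptop:

        net stop spooler
        net start spooler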

    Read the article

  • How can I throttle the bandwidth consumed by Windows Automatic Updates?

    - by eleven81
    We have many Windows XP computers sharing one connection to the internet. These machines are set to download all available automatic updates and then prompt the user to install them. Whenever Patch Tuesday rolls around, our internet usage pegs out, and remains that way for most of the day, and sometimes into the following Wednesday. This hurts! I still want the machines to start to download the updates as soon as they are available, but if it takes until Thursday or Friday before the last updates are downloaded, that's still better than the latency and dropped connections we are seeing now as a result of the internet connection bottleneck. What can I do to throttle back how rapidly each machine downloads the updates, while still having them all start the download process as soon as the updates are available? I have no desire to run a WSUS server. Also, the internet connection is more than enough, whenever there are no updates to download.
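    Automatic Updates pulls its downloads through BITS, and BITS has a Group Policy setting ("Limit the maximum network bandwidth for BITS background transfers", under Computer Configuration > Administrative Templates > Network > Background Intelligent Transfer Service) that caps the transfer rate per machine. Below is a registry sketch of roughly what that policy writes; the value names are taken from the BITS policy template and should be verified in your policy editor before relying on them, and the numbers are illustrative hex dwords (0x80 = 128 Kbps, 0x8 = 08:00, 0x11 = 17:00):

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\BITS]
        ; enable throttling and cap BITS at 128 Kbps between 08:00 and 17:00
        "EnableBITSMaxBandwidth"=dword:00000001
        "MaxTransferRateOnSchedule"=dword:00000080
        "MaxBandwidthValidFrom"=dword:00000008
        "MaxBandwidthValidTo"=dword:00000011
        ; outside that window, fall back to the system maximum (unthrottled)
        "UseSystemMaximum"=dword:00000001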

    Read the article

  • Win2008 - restrict VPN user permissions

    - by Sebas
    Windows Server 2008 R2 SP1 Foundation file server with no AD, only workgroup sharing of some folders, and now an RRAS server. Shared folders are open to everyone in the office (XPs and Sevens) without accounts/passwords, but I was thinking about partially limiting access for the new "VPNuser" account. I'm new to Windows Server and its permission settings: I thought about denying access to VPNuser through NTFS rights on some folders. It doesn't work, and now I'm guessing that VPNuser is not considered a logged-on user (it doesn't appear as such) and is treated as a "guest", like the rest of the people connecting in the office. I say that because of this: http://social.technet.microsoft.com/Forums/windowsserver/en-US/ff6d3726-ff41-4d3f-9d97-5361af0206dd/vpn-users-on-server-shows-as-guest?forum=winserverNIS Also because when I create a txt file over the VPN connection, the owner field shows as "guest". Am I right? How can I set different rights for VPNuser from the rest of the "guest" users in the office?
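    Before fighting the NTFS side, it is worth confirming which account the VPN session actually authenticates as: net session, run on the server while the VPN user has a share open, lists the user name behind each connection. If it really is VPNuser (and not Guest), an explicit deny entry on the folders in question should be enough; a sketch with a placeholder path:

        net session
        icacls "D:\Shares\Private" /deny VPNuser:(OI)(CI)F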

    Read the article
