Search Results

Search found 5643 results on 226 pages for 'machines'.

Page 66/226

  • Relax Linux - it's just me! (filesystem permissions)

    - by Xeoncross
    One of my favorite things about Linux is also the most annoying - filesystem permissions. On production machines and web servers I love how everything is so secure and locked down - but on development machines it really slows me down. I'll give one example out of the many that I discover weekly.

    Like most people, I dual-boot Ubuntu and Windows so I can continue using the Adobe CS4 suite. I often design web themes and other things while I'm still in Windows. Later I'll boot into Ubuntu to take the themes and write the backend PHP for them. After mounting the Windows C: drive partition I can copy the template files over so I can begin editing them. However, thanks to Linux's desire to protect me, I find that after copying the files I end up with a totally locked set of files where even I don't have read-write permissions. So after careful consideration of the tremendous risks that the HTML files pose to me, I chmod them so that Apache and I can begin using them.

    Now granted, the chmod process isn't that hard - but after you chmod enough files per day you get sick of doing it. I'm constantly creating, fetching, editing, and removing files from my user, git repos, PHP, or other random processes. This is a personal development machine, after all; everything changes on a day-to-day basis.

    So my question is: how can I get Linux to relax about what I'm doing with my HTML/JS/PHP/TXT/SQL/etc. files so that I can work faster without constantly stopping to chmod things? I pinky-promise I won't hack into my account with an HTML file. ;)
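
    One way to cut down the repeated chmod step is to mount the Windows partition so the copied files already belong to your user. This is a minimal sketch assuming an ntfs-3g mount; the device name, mount point, and uid/gid of 1000 are assumptions - check yours with the id command:

        # /etc/fstab - mount the Windows partition owned by your user, so
        # anything copied from it keeps workable permissions
        /dev/sda2  /media/windows  ntfs-3g  uid=1000,gid=1000,fmask=0022,dmask=0022  0  0

    For files already copied over, a one-off bulk fix is less tedious than chmodding piecemeal (the ~/www path is hypothetical):

        # 664 for files, 775 for directories, whole tree at once
        find ~/www \( -type f -exec chmod 664 {} + \) -o \( -type d -exec chmod 775 {} + \)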

  • Backup strategy for developer-focused Apple environments?

    - by ewwhite
    It's interesting to see the technological split between structured corporate environments and more developer-driven/startup environments. Some of the Microsoft technologies I take for granted (VSS, Folder Redirection, etc.) simply are not available when managing the increasing number of Apple laptops I see in DevOps shops.

    I'm interested in centralized and automated backup strategies for a group of 30-40 Apple laptops. How is this typically done safely and securely, assuming these are company-owned machines (versus BYOD)? While Apple has Time Machine, it's geared toward individual computer backups and doesn't seem to work reliably in a group setting. Another issue with these workstations is the presence of Vagrant/VirtualBox VMs on the developers' systems. Time Machine and virtual machines typically don't work well unless the VMs are excluded from the backup set.

    I'd like a push-based backup process with some flexible scheduling options. I know how to handle the backend storage, but I'm not sure what needs to be presented to the client systems. Due to the nature of the data here, cloud-based backup may not be a viable option. Any suggestions about how you handle this in your environment would be appreciated.

    Edit: The virtual machine backups are no longer important. They can be excluded from the process and planning.
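
    Regarding the VM exclusion specifically (now moot per the edit), Time Machine can be told to skip those directories per machine; a quick sketch, assuming clients new enough to ship tmutil (10.7 or later) and typical VM folder locations - both assumptions:

        # exclude VM images from Time Machine on each laptop
        sudo tmutil addexclusion "$HOME/VirtualBox VMs"
        sudo tmutil addexclusion "$HOME/.vagrant.d"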

  • What web-based tool would allow a non-technical user to manage authorized_keys files on a Linux (Fedora/CentOS/Ubuntu/Debian) server?

    - by Tom H
    (Edit: clarification below) We have a number of groups of developers that change frequently, and a security policy requiring individual logins to servers using RSA or DSA public keys, which is achieved via the standard method of adding each id_dsa.pub to the user's authorized_keys file.

    I am using chef to sync the user accounts across machines. However, our previous method of using webmin to manage the user passwords is not designed for key-based auth, and hence is not easy to use for non-technical users. The developers log in from the WAN using ssh; they can either provide their own key, or an administrator will send them a private key. The development machines are located in the cloud, and we have a single server available to host the master set of accounts.

    Obviously I could deploy LDAP or another centralised authentication system, but that seems a bit overblown when webmin worked well for the simple case. It is easy to achieve synchronised users, groups and passwords across a bunch of low-security development boxes using webmin's clustered users and groups. However, looking at the currently installed webmin, it is not as easy to create the authorized keys as it is to create user accounts and passwords. (It's possible, but it's not easy - some functionality is in the usermin module, or would require some tedious steps.)

    Ideally I'd like a web interface that is pretty much dedicated to creating users and groups, can generate key pairs on the fly, and can accept pasted-in public keys to add to the user's authorized_keys file. If the tool synced the users and keys as well, that would be great, but I can use chef to do that part if the accounts are created correctly on the "master" server.
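
    Whatever front end you end up with, on the master server it only has to perform something like the following; a minimal sketch, with the username and pasted key as placeholder arguments:

        #!/bin/sh
        # add-key.sh USER "ssh-rsa AAAA... comment"
        # appends a pasted public key to a user's authorized_keys, with safe perms
        user="$1"
        pubkey="$2"
        home=$(getent passwd "$user" | cut -d: -f6)
        mkdir -p "$home/.ssh"
        printf '%s\n' "$pubkey" >> "$home/.ssh/authorized_keys"
        chmod 700 "$home/.ssh"
        chmod 600 "$home/.ssh/authorized_keys"
        chown -R "$user": "$home/.ssh"

    A small web wrapper around this (plus ssh-keygen for the generate-a-pair case) is simple enough that some shops just build it in-house.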

  • Vista WHS Client stopped resolving local names

    - by andrewcr
    I’m running Windows Home Server PP2 in my home, with 3 client computers: two XP and one Vista. I have a router that provides my local DHCP, and the server has a static IP address.

    The other day the Vista machine hung, and on reboot it stopped resolving local names. It will show the green Home Server client icon in the system tray, but if I attempt to log in to the console, I get a "This computer cannot connect to your home server" message. If I ping the server name from the command line, it does not resolve, and gives a "could not find host" message. Oddly enough, if I browse the network, I can see the server, but double-clicking on it fails. The other machines on the local network have no problems seeing the server, and the Vista machine has no problems resolving names from the internet - it just can't see any local machines.

    I'm aware that I can work around this by adding entries to my HOSTS file (it does work), but I'd like this to work the way it's "supposed" to. I'm an experienced computer user and developer, but not a networking whiz. Can anyone tell me how local name resolution is supposed to work in my environment and/or suggest ways to troubleshoot this?

    Thanks, Andy
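
    In a workgroup with no local DNS records for the machines, local names are resolved via NetBIOS broadcasts (and the HOSTS file), so that is the layer to test first. A few checks to run on the Vista machine, with SERVER and the IP standing in for your server's actual name and static address:

        rem clear the DNS resolver cache and the NetBIOS name cache
        ipconfig /flushdns
        nbtstat -R
        rem then test each resolution layer in turn
        ping SERVER
        ping 192.168.1.10
        nbtstat -a SERVER

    If the IP pings but the name queries fail, the NetBIOS side (Computer Browser service, firewall, node type) is where to dig.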

  • Setting up Red Hat Enterprise Linux Server as a mail exchange server

    - by Syedur
    I am a Unix/Linux/Windows Server noob, so keep that in mind before you throw your stones at my glass house. :P

    I have a Windows Server 2008 R2 machine that's acting as domain controller, Server A. It's also running a DNS server. I have a Red Hat Enterprise Linux Server 5.3 machine, Server B, that is intended to be the mail server. In order for mail delivery to happen, I understand that I have to set an MX record on Server A and point it to Server B. Well, I did: I manually added a host name on Server A, pointed it to Server B's IP address, and then added an MX record pointing to that host name. That didn't do the trick. After taking the above steps, I used the "dig" command on Server B to look up the MX record coming back from Server A, and it wasn't what I was expecting. What am I doing wrong here?

    I have noticed that my Windows machines that are joined to the domain (Server A) are listed under the host names, while the machines that are not joined to the domain are not listed. This is fine; I am not worried about this. What does concern me: do I have to join Server B to the domain in order for Server A to recognize it as a valid host and forward the MX properly? If so, some simple steps on how to join Server B to the domain would also help.
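
    For comparison, this is roughly the check and the answer shape you are after when querying Server A directly; the names and the server IP are placeholders for your domain:

        # ask Server A's DNS for the domain's MX record
        dig @192.168.1.1 example.local MX +short
        # expected: the MX pointing at Server B's host name, e.g.
        #   10 serverb.example.local.
        # and the host name resolving to Server B's address:
        dig @192.168.1.1 serverb.example.local A +short

    Note that Server B does not have to be joined to the domain for this to work; it only needs an A record and an MX record in Server A's DNS zone.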

  • Web filtering (Proxy or DNS) with option for users to ignore the block

    - by Jon Rhoades
    We are struggling with our users visiting infected or "attack" sites, and with phishing in general. Most of our machines are protected by an enterprise antivirus and monitoring solution (McAfee ePO), and we try to get people to use Firefox... but no AV is perfect, we have to endure personal machines as well (albeit on their own 'Plague' VLANs), and we would like to do something about phishing, as our users seem intent on disclosing their passwords to the world...

    To complicate matters, we don't want to implement a hard block, for many many reasons. Instead we would like to implement something akin to Firefox's "Reported Scam/Phish/Attack Site" page - "Get me out of here" or, crucially, "Let me in anyway" - giving the user a choice to still infect themselves if they feel like it (or to view a site that was incorrectly blacklisted). The reason we can't just use Firefox is that we have a core enterprise app only certified on IE6 & 7 - thank you, Oracle.

    Is it possible to implement this type of advisory filtering using either a proxy (in our case Squid) or DNS?

    http://serverfault.com/questions/15801/what-free-options-are-available-for-web-content-filtering
    http://serverfault.com/questions/47520/open-source-filtering-of-https-traffic

    were a good start, but they don't address the advisory aspect of the filtering.
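
    Squid can get most of the way there by itself: deny the blacklist, but send the user to a warning page whose "Let me in anyway" link re-requests the URL with a marker that a second ACL allows through. A rough sketch, assuming a blacklist file and a locally hosted warning page (both hypothetical; note the marker ends up in the query string the origin server sees):

        # advisory blocking sketch for squid.conf
        acl blacklist dstdomain "/etc/squid/blacklist.txt"
        acl bypass urlpath_regex proxy_override=1
        # requests carrying the override marker pass first
        http_access allow bypass
        # everything else on the blacklist bounces to the warning page;
        # %s is replaced with the originally requested URL
        deny_info http://warn.internal/warn.html?url=%s blacklist
        http_access deny blacklist

    The warn.html page would read the url parameter and offer a link back to it with the proxy_override marker appended - the "Let me in anyway" choice.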

  • SMC8014 to FVS338

    - by Jack
    I have an SMC8014 router/modem that Comcast provided with their business class service. It was not filtering malicious traffic as aggressively as I had hoped, so I purchased a NetGear ProSafe FVS338, put it behind the SMC8014, and put all my machines behind that. After some brief configuration, all machines can see out to the internet.

    I also have a single web server, and I have not been able to configure things so that incoming requests can reach it. This is where I need help! I would like to have the FVS338 do NAT so that I can assign a 192.168 address to my web server. I've tried everything I know of, and I can't get things going. I set the SMC8014 to have a LAN-facing IP of 10.0.0.1, and I assigned the FVS338 a WAN-facing IP of 10.0.0.2. I would like to be able to tell the SMC8014 to just forward all traffic to 10.0.0.2, but I haven't had any success. In my (unfortunately limited) understanding, what I probably want here is a static route, but I don't know how to configure one, or if this is really what I want. The SMC8014 wants a Destination IP, a Subnet Mask and a Gateway IP.

    Any help would be appreciated.
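
    If the SMC8014's static route form really is the right tool, the route that hands your internal LAN over to the FVS338 would look roughly like this (the 192.168.1.0/24 subnet is an assumption - use whatever subnet you assign behind the FVS338):

        Destination IP: 192.168.1.0     (the LAN behind the FVS338)
        Subnet Mask:    255.255.255.0
        Gateway IP:     10.0.0.2        (the FVS338's WAN interface)

    A static route only covers traffic the SMC8014 routes toward the LAN, though; for the web server you still need a port-forward (or DMZ-host) rule on the SMC8014 sending inbound TCP 80 to 10.0.0.2, plus a matching inbound rule on the FVS338 forwarding it on to the server's 192.168 address.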

  • Cisco PIX 515 IP addressing

    - by Rickard
    I have just gotten my hands on a Cisco PIX 515 (not 515E) with 3 interfaces, and am just about to start some labs. In my lab, I am using a real-life scenario from an actual setup at work. As I have no access to the device at work, I am simply trying to replicate the scenario by trial and error.

    At work, we are given two IP addresses by the provider, which are 1-to-1 NATed addresses. The addresses we are allowed to use are 10.131.35.4-5/29. Now, we have 3 servers on a DMZ, 192.168.2.2-4/24, and 17 client computers on 192.168.1.100-117/24, as well as some statically addressed devices on 192.168.1.8-18/27.

    My question is: how would I best set things up so that the machines on the DMZ get translated to 10.131.35.4 and the machines on 192.168.1.* get translated to 10.131.35.5? I don't expect or want anyone to give me a fully functional config (though I might learn from it); I'd prefer some advice, or maybe a guide on how to set it up. I hope someone can shed some light on my situation. I have been looking through Google, but I guess the search words I'm using aren't too good, as I can't find any good clues.

    Thank you very much!

    PS. Maybe I should add that I am not unfamiliar with the Cisco CLI; I prefer using it over any GUIs, so I'm not really looking for ASDM solutions. DS.
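
    Since pointers are preferred over a full config, here is just the NAT skeleton in PIX 6.x terms: two nat/global pairs, one per inside network, each PATed to its own outside address. The first pair shares 10.131.35.5 among the clients, the second shares 10.131.35.4 among the DMZ servers. The interface names are assumptions based on a common three-interface naming:

        nat (inside) 1 192.168.1.0 255.255.255.0
        global (outside) 1 10.131.35.5

        nat (dmz) 2 192.168.2.0 255.255.255.0
        global (outside) 2 10.131.35.4

    From there it is access-lists and statics for whatever must be reachable from outside.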

  • Very slow browsing shared folder XP client/host

    - by Ickster
    I have a pretty straightforward setup where I'm storing media files on an XP Pro machine, and sharing the folder to be accessed by other XP Pro machines around the house. (Typically, there's only one client accessing the share at a time, although there may be several with the share mounted.) It's been working just fine for years, but I've recently started having some problems.

    A couple of days ago, the host PC had power disconnected while it was running. It was restarted and everything seemed fine initially, but since then browsing the shared folder from client machines has been extremely slow and actually reading data is all but impossible. The problem exists in every access method I've tried: Windows Explorer, VLC dialogs, command line, etc.

    My first thought was that the disk was experiencing problems, but there are no problems viewing the files locally on the host machine. My second thought was that there was a network problem on the host machine, so I removed and reinstalled drivers for the NIC with no change. My third thought was that there might've been a problem elsewhere on the network, so I swapped out hardware to no avail.

    I'm regrouping and trying to come up with a methodical approach to figuring out what might be wrong. I would of course be thrilled if you can suggest specific problems (Microsoft KB articles, etc.) that I might check, but I'm not expecting a silver bullet. If you can help me outline an approach to identify the problem (including recommended tools, e.g., disk checkers, network analyzers, etc.) I'd greatly appreciate it.
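
    Given that this started with a power loss, two cheap first-pass checks on the host before reaching for a network analyzer (both standard XP tools):

        rem look for disk damage that local playback would not necessarily hit
        chkdsk C: /f
        rem watch NIC error counters (refresh every 5 s) while a client copies
        netstat -e 5

    If the disk is clean and the error counters stay flat while a client struggles, the next suspect is a speed/duplex mismatch between the host NIC and the switch, which is set in the adapter driver's properties.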

  • Thin client - cloud machine - to run via iPad, iPhone, most Androids etc

    - by Carl Lindberg
    I'm tired of having a MacBook that breaks down, and of having files that I constantly need to sync via Dropbox etc. between machines and different OS installations. It sucks.

    I want a thin client where I can log in from any machine - my iPhone, PC desktop, iPad, etc. - to one running machine. I would like to replace a modern, powerful desktop iMac with a thin client running via my iPad. I will connect the iPad to a keyboard/mouse too, so you get the idea. But I want to be able to use some of the Android phones as well (I guess most Android phones today have good enough performance/resolution etc. to run a thin client). Of course it has to handle sound input/output. Printing can be solved by PDF/emailing etc., so no direct communication with printer ports or USB is necessary.

    Is there such a service today? It should cost somewhere under $40/month. I will run stuff like the CPU-heavy Ableton for music production, Xcode for making iOS apps, some games, etc. And on the thin client I'd also run virtual machines - VMs of Ubuntu and Windows.

  • Need Suggestions on Backup Strategies and Alternatives?

    - by Leejo
    I'm not sure where else to post this question since it is not exactly code- or development-related... but I know Stack Overflow is very responsive to questions...

    Currently, I use Mozy Home to perform an online backup of my laptop. So far this works well, since I only have one laptop that needs to be backed up. But this may change soon, and I want to explore alternatives to performing an online backup on every machine.

    Ideally, I want to set up a network computer (laptop/desktop) with enough storage to hold the backups for all the other machines I would have. Each machine should be responsible for performing its own backup (to the network computer). This would require something like Mozy's incremental backup strategy, but instead of an online backup, I would prefer it to be done locally to the network computer.

    Can you recommend local backup software (backup to a network PC, incremental backup, good restore options)? I'm also looking for any ideas on a local backup strategy, even if it's different from what I've stated. What works and what doesn't? Thanks in advance for your help!
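
    If the machines can run rsync (natively, or via Cygwin on Windows), the incremental part is a few lines of shell against the network computer; a minimal sketch, with the host name and paths as placeholders:

        #!/bin/sh
        # snapshot backup: unchanged files are hard-linked against the
        # previous run, so each snapshot costs only the changed data
        host=backupbox
        dest=/backups/$(hostname)
        new=$(date +%Y-%m-%d)
        rsync -a --delete --link-dest=../latest "$HOME/" "$host:$dest/$new/"
        ssh "$host" "ln -sfn $new $dest/latest"

    Every dated directory is then a full restore point you can browse with an ordinary file manager, which covers the "good restore options" requirement nicely.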

  • Very slow connection to Xserve via AFP or SMB

    - by Mhoffman13
    Help. File transfer and connection speeds to our Xserve are painfully slow from newly purchased iMacs. The Xserve is only used as a file server, and it's running 10.4.11. The problem seems to happen only on brand new iMacs running 10.6.3: when connected over either AFP or SMB, copying files is many times slower than usual. Other machines on the network running either 10.4 or 10.5 have a normal connection speed.

    To try to rule out OS incompatibility, I connected the new iMac running 10.6 to another computer running 10.4 over the network. The file transfer speed was as fast as normal. So it seems the problem lies with the Xserve (maybe). The AFP logs, either access or error, don't show anything unusual. One thing that did look different: when the iMac was connected to the Xserve, the user had its ID listed as its IP address, whereas the other connected machines had the ID broadcasthost. I also noticed that when connected from the new iMac I can only see one of the mirrors; when any other computer connects, both mirrors are shown.

    I tried a restart of the Xserve, but the problem persists. Thanks in advance for any advice.

  • VMware - ACE, Workstation - how to manage remote clients?

    - by tom smith
    Hi. I'm exploring VMware products/services and have a few questions.

    As I understand it, you can use VMware Workstation to create a VM of a target machine/box/OS. Let's call this VM "foo". If I have 100 client PCs in my dept, and I want to install the VM (foo) on each client, and also manage the remote VM instances of foo, how can I accomplish this? Let's assume that the client machines are running Windows 7 and have the VMware Player app installed.

    I'm looking to do the following kinds of actions regarding the remote client machines:

    - Update the foo VM/image with new updated copies
    - Make sure that every VM "foo" has the same user, but a unique passwd
    - Monitor the traffic/status of each client VM "foo" on each client
    - Start/stop each client VM "foo" from the master console
    - Etc...

    Can this be accomplished? How would I do it, and what services/products would I need? I've tried talking to a few of the pre-sales guys at VMware and got nowhere, other than being told to email my questions! Looking at Google shed more light, but I still have questions. So, if you have detailed VMware understanding, pointers to consultants, or resellers who can help, all pointers are greatly appreciated.

    Thanks
    -tom

  • Ubuntu: unattended-upgrades from a local package archive

    - by Novelocrat
    I have a local apt archive with a bunch of packages I built in it. The Packages and Release files are generated by apt-ftparchive. The Release file looks like:

        Date: Thu, 06 May 2010 23:04:33 UTC
        Label: PPL
        Origin: PPL
        Suite: ppl
        MD5Sum:
         ebec3527ebc8351468b2ef8796c19855 37325 Packages
         d41d8cd98f00b204e9800998ecf8427e 0 Release
        SHA1:
         a0593b663d77fde88ee35b56ae1f3c17801cfe99 37325 Packages
         da39a3ee5e6b4b0d3255bfef95601890afd80709 0 Release
        SHA256:
         dd73a02846aee111cac58a869c6bf650886632ba82c2172ffddd81aa4429981c 37325 Packages
         e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 Release

    I'm using unattended-upgrades to keep the machines in the lab up to date on security and bug fixes, but I'm finding that it doesn't pull from my local archive. The configuration file for it looks like:

        // Automaticall upgrade packages from these (origin, archive) pairs
        Unattended-Upgrade::Allowed-Origins {
            "Ubuntu hardy-security";
            "Ubuntu hardy-updates";
            "PPL ppl";
        };

        // List of packages to not update
        Unattended-Upgrade::Package-Blacklist {
        //  "vim";
        //  "libc6";
        //  "libc6-dev";
        //  "libc6-i686";
        };

        // Send email to this address for problems or packages upgrades
        // If empty or unset then no email is sent, make sure that you
        // have a working mail setup on your system. The package 'mailx'
        // must be installed or anything that provides /usr/bin/mail.
        //Unattended-Upgrade::Mail "root@localhost";

    Yet, when I run sudo unattended-upgrade on one of these machines, newer package versions don't get installed. Can anyone point out what I'm getting wrong?
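
    unattended-upgrades matches Allowed-Origins against the origin (o=) and archive (a=) strings apt itself reports, so it is worth confirming what those actually are for the local repo; a quick check, with the package name as a placeholder:

        # show the o= and a= labels apt sees for each configured source
        apt-cache policy mypackage
        # the local repo's line must read  o=PPL,a=ppl  to match "PPL ppl";
        # if either label is empty, regenerate the Release file with them set:
        apt-ftparchive -o APT::FTPArchive::Release::Origin=PPL \
                       -o APT::FTPArchive::Release::Suite=ppl \
                       release . > Release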

  • Problems using ssh from cron

    - by Travis
    I am attempting to automate a script that executes commands on remote machines via ssh. I have public key authentication set up between the machines using ssh-agent. The script runs fine when executed from the command prompt. I suspect my problem is that cron isn't starting the ssh-agent, due to its minimalist environment. Here is the output when I add the -v flag to ssh:

        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug1: Next authentication method: gssapi-with-mic
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug1: Next authentication method: publickey
        debug1: Offering public key: /home/<user>/.ssh/id_rsa
        debug1: Server accepts key: pkalg ssh-rsa blen 149
        debug1: PEM_read_PrivateKey failed
        debug1: read PEM private key done: type <unknown>
        debug1: Trying private key: /home/<user>/.ssh/id_dsa
        debug1: Next authentication method: password
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        Permission denied, please try again.
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        Permission denied, please try again.
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug1: No more authentication methods to try.
        Permission denied (publickey,gssapi-with-mic,password).

    How can I make this work? Thanks!
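
    The "PEM_read_PrivateKey failed" line supports that theory: the key is passphrase-protected, and under cron there is no agent (and no terminal) to unlock it, so ssh falls through to password auth. One workaround is to point the cron job at the agent from your login session; a sketch, assuming a single agent socket owned by the same user:

        #!/bin/sh
        # borrow the login session's ssh-agent inside cron
        sock=$(find /tmp -maxdepth 2 -type s -name 'agent.*' -user "$LOGNAME" 2>/dev/null | head -1)
        export SSH_AUTH_SOCK="$sock"
        ssh remotehost /path/to/command

    The sturdier alternative is a dedicated passphrase-less key for the job, locked down on the remote side with a command="..." restriction in authorized_keys.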

  • Chrome browser completely messing with network?

    - by kiasecto
    I have a bizarre problem with Google Chrome on an Intel Core i5 running Windows 7 32-bit. Whenever Chrome is installed, access to other computers in the homegroup becomes really slow - opening shares, for example. It becomes really slow to resolve Windows names. Something goes haywire on the local network: pinging local machines, which usually gives 0 ms responses, I get random timeouts and random successes.

    Whenever I try to load a local address inside Chrome (including localhost, 192.168.0.1, etc.), it always says something in the status bar about resolving the proxy, times out after about 5 seconds, and then seems to work fine. If I go to settings inside Chrome, it just brings up the Internet Explorer connection settings, where I have not set any proxy settings.

    Once I uninstall Chrome, all these problems go away. Network shares and name resolution work instantly, and pings to local machines never have a problem. Localhost and other network IP addresses work fine in all other browsers. Anyone heard of this problem before and know what it might be? I even tried reinstalling Windows 7, and the problem came straight back as soon as Chrome was loaded on again.
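
    The "resolving proxy" status message points at Chrome's proxy auto-detection (WPAD), which can flood the LAN with lookups while it hunts for a proxy. A quick way to test that theory is to launch Chrome with proxy handling disabled via its standard command-line switch (the install path shown is the usual default - adjust if needed):

        rem if the LAN symptoms vanish with this, WPAD auto-detect is the culprit
        "C:\Program Files\Google\Chrome\Application\chrome.exe" --no-proxy-server

    The permanent fix would then be unticking "Automatically detect settings" in the IE connection settings that Chrome inherits.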

  • Mac OS X - configuring an ntpd server on a LAN with a D-Link DIR-655

    - by Mark C
    Hey all, this question is pretty specific, but I hope someone will have seen this error elsewhere. I am configuring a machine running OS X 10.5.8 to be an NTP server for machines connected to a LAN that is not connected to the Internet. I am not too worried about knowing the "right" time on all the machines, but rather about making sure everyone has the same notion of time. I configured the NTP daemon on the Mac by turning on "Set date and time automatically" in System Preferences, using the server's own clock, 127.127.1.0, as the reference clock.

    I figured I should see if the server can NTP-query itself before proceeding to the clients. The weird part: when I run the ntpq -p command at a command prompt while connected to my D-Link DIR-655 (firmware: 1.33), it hangs for about a minute or so each time before finally giving me some output. I thought the problem might have to do with port forwarding, so I configured the router to forward port 123 for the IP of the server, but that did not improve the situation. When I run the ntpq -p command on my school's network, on a Linksys WRT54G router, or with the wireless AirPort card turned off, I have absolutely no problems - the command returns a response instantly.

    Is this normal? I can see why a query might take a minute or so, but I don't understand why one router does it faster than the other. I tried messing around with the ntp.conf file, adding the burst, minpoll, and maxpoll options:

        server 127.127.1.0 burst minpoll 4 maxpoll 5

    figuring that perhaps I am polling too often and the configuration file is slowing me down. But even with this, ntpq still hangs on the D-Link DIR-655 and does just fine on the other routers. Any thoughts on where the lag is coming from, or whether the lag is even a problem?
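
    Two notes that may help separate the variables. First, the conventional ntp.conf stanza for a free-running LAN master also pins the local clock driver at a high stratum, so clients would prefer a real source if one ever appeared; a minimal sketch:

        server 127.127.1.0              # local undisciplined clock driver
        fudge  127.127.1.0 stratum 10   # advertise as stratum 10

    Second, ntpq resolves peer addresses back to names for its display, so a router with slow or broken reverse DNS produces exactly this kind of hang; running ntpq -pn (numeric output, no lookups) is a quick way to confirm whether the DIR-655's DNS handling, rather than NTP itself, is the slow part.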

  • File in use when it's really not

    - by C-dizzle
    I am running Windows 7 Professional 32-bit on a Server 2008 network. I am getting a weird issue with an Excel document: I open it one morning, update it, save and close; the next morning I come in, open it, and it says "This file is in use and locked by csmith" - and "csmith" is me! So I click the Cancel button, open it again, and it comes up fine. I can edit, save and close with no problem, but then I have the same issue the next morning.

    Another weird thing: we have a calendar shared in "Public Folders" under Outlook that seems to be having the same issue, and it happens to be a calendar made in Excel. Exchange 2010 is installed on the server and the clients are using Exchange 2007. In the instance with the calendar, it will show that conflicting edits have been made and that you must keep one item or all items, with edit dates of 4/24/2012 and 6/1/2012 - but NO edits were made on 6/1, someone just tried opening it.

    This problem does not occur under my profile, but it does under two others. Those machines are also running Windows 7 Professional 32-bit. We have a mix of Windows 7 and Windows XP machines on our network, if that is any help. These issues did not start until we migrated from a server running Server 2003 and Exchange 2000; the new server is running Server 2008 and Exchange 2010, as stated above. Is there something on the server side that is configured wrong?
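
    When a file claims to be locked by your own account, a stale server-side session still holding the old handle is the usual suspect; the file server can list and force-close it (the ID shown is a placeholder):

        rem on the file server: list open handles, then close the stale one
        openfiles /query /fo table
        net file
        rem 1234 = the ID listed for the stuck workbook
        net file 1234 /close

    Also worth checking next to the workbook: a leftover hidden Excel owner file (~$yourfile.xlsx) from a crashed session produces the same "locked by you" message and can simply be deleted.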

  • Specific DNS sometimes resolves to wildcard, incorrectly

    - by Mojo
    I have an intermittent problem, and I'm not sure where to start trying to troubleshoot it. In our dev environment, we have two visible IP addresses on load balancers, one to the front-end, and one to a number of back-end service machines. The front-end is configured to take a wildcard DNS name to support generic "portals."

        dev.example.com    A      10.1.1.1
        *.dev.example.com  CNAME  dev.example.com

    The back-end servers are all specific names within the same space:

        core.dev.example.com    A      10.1.1.2
        cms.dev.example.com     CNAME  core.dev.example.com
        search.dev.example.com  CNAME  core.dev.example.com

    Here's the problem. Periodically a developer or a program trying to reach, say, cms.dev.example.com will get a result that points to the front-end, instead of the back-end load balancer:

        cms.dev.example.com is an alias to core.dev.example.com
        core.dev.example.com is an alias to dev.example.com   (WRONG!)
        dev.example.com 10.1.1.1

    The developers are all on Mac OS X machines, though I've seen the problem occur on an Ubuntu machine as well, using a local cloud host DNS resolver. Sometimes the developer is using a VPN, which directs the DNS to its own resolver, and sometimes he's on the local net using a DNS resolver assigned by the NAT router. Sometimes clearing the Mac OS X DNS cache, logging into the VPN, then logging out of the VPN, will make the problem go away. The origin authoritative server is on zerigo, and a dig directly to their name servers always seems to give the correct answer. The published DNS cache time for these records is 15 minutes, but the problem has been intermittent for about a week. Any troubleshooting suggestions?
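
    When it recurs, pinning down where the bad answer lives is the most useful datum; comparing the authoritative answer against each resolver in the chain separates a poisoned cache from an origin problem (substitute the real NS names from a dig NS dev.example.com):

        # straight from the authoritative server - reportedly always correct
        dig cms.dev.example.com @a.ns.zerigo.net +short
        # the answer the currently configured resolver hands out
        dig cms.dev.example.com +short
        # flush the local cache on Mac OS X 10.5/10.6, then re-test
        sudo dscacheutil -flushcache

    If the configured resolver returns the wildcard while the origin is right, the fault is in whichever cache the VPN or NAT router injects - which would also explain why toggling the VPN shakes it loose.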

  • Mac creating files w/ wrong perms on samba share

    - by geoffjentry
    In my group, which is very heterogeneous in terms of machines, we use a Samba share to collaborate on files and such. In all but one case, it works as expected (or at least close enough). The one exception is my boss's laptop, a Snow Leopard MacBook Air. On his desktop (also Snow Leopard), if he creates a file it ends up server-side with perms of 774, but when he creates it with the Air, the perms are 644. The key problem is the lack of group write permission on the laptop-created files.

    What's really confusing is that everything I've looked at on the two machines is identical: same version of OS X, same version of Samba (3.0.25b-apple), same settings for the same software, etc. I can't imagine why one machine would be different from the other, but it is. To try to be complete with the description, here is the relevant portion of my smb.conf file:

        comment = my Share
        path = /path/to/share
        public = no
        writeable = yes
        printable = no
        force group = myshare
        directory mask = 0770
        create mask = 0770
        force create mode = 0770
        force directory mode = 0770

    EDIT: I looked at three more Macs and all of them worked as expected, which leaves this one laptop the true oddball. This wasn't as good a test as the others, though, as they were all Leopard.
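
    One server-side setting worth ruling out: with Samba's CIFS UNIX extensions enabled (the default), a Mac client may set the file mode itself after creating the file, silently overriding force create mode - and a difference in client-side behavior between two otherwise identical Macs would look exactly like this. A diagnostic sketch for the [global] section of smb.conf (note this disables the extensions for all clients):

        [global]
            # stop UNIX-aware clients from overriding the share's create masks
            unix extensions = no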

  • Copying compressed files from Server 2008 R2 network share to XP client via VPN fails

    - by Dejan Janjuševic
    At first sight this question looks similar to this one. I have experienced odd behavior while trying to copy a certain file from a Windows Server 2008 R2 network share to a Windows XP Professional client via VPN. The VPN was set up using RRAS on the server machine. I will try to provide as much information as possible in order to make the issue clear.

    When trying to copy a compressed file sized ~2.5 MB (via Explorer or CMD, it doesn't matter), the process stalls after some 20%, producing an error message after a few seconds: "Cannot copy filename: The specified network name is no longer available." If I start the command ping -t 192.168.2.1 (where the IP address belongs to the server) side by side with the copy command, I can clearly see that the ping command times out for a few seconds as the copy process stalls. When this happens, all network activity is frozen. After a few seconds the network recovers and ping continues to run normally; the copy process, however, stands still for a while longer before displaying the above error message.

    Copying other files (I tried 4-5 files), some larger and some smaller, succeeds. It seems I can copy all uncompressed files; as soon as I try to copy an archive, the process freezes. Even a 707 KB archive can't be copied. I can only reproduce this behavior on 2 machines, both Windows XP Professional, one with SP2 and the other with SP3. Other XP clients don't have this problem, and neither do Windows 7 clients. If I connect to the server using Remote Desktop Connection without the VPN from either of these 2 machines (using the same user account), I can copy anything I want normally, even the "problematic" files.

    Does anyone have any clue about what could possibly be going on?
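
    The pattern - compressible files pass, archives stall - fits an MTU/fragmentation problem on the tunnel: the VPN's link compression shrinks ordinary files' packets, while already-compressed data travels at full packet size and gets dropped. Treat that as a hypothesis, but it is cheap to probe from one of the affected XP clients (the sizes are the usual starting guesses):

        rem find the largest payload that crosses the tunnel unfragmented
        rem (1472 + 28 bytes of headers = a full 1500-byte packet)
        ping -f -l 1472 192.168.2.1
        rem a typically safe size inside a VPN tunnel
        ping -f -l 1400 192.168.2.1

    If the large probe dies where the small one survives, lowering the MTU on the clients' VPN interface is the next thing to test.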

  • SSH X11 forwarding does not work. Why?

    - by Ole Tange
    This is a debugging question. When you ask for clarification, please make sure it is not already covered below.

    I have 4 machines: Z, A, N, and M. To get to A you have to log into Z first. To get to M you have to log into N first. The following works:

        ssh -X Z xclock
        ssh -X Z ssh -X Z xclock
        ssh -X Z ssh -X A xclock
        ssh -X N xclock
        ssh -X N ssh -X N xclock

    But this does not:

        ssh -X N ssh -X M xclock
        Error: Can't open display:

    The $DISPLAY is clearly not set when logging in to M. The question is why?

    Z and A share the same NFS home dir. N and M share the same NFS home dir. N's sshd runs on a non-standard port.

        $ grep X11 <(ssh Z cat /etc/ssh/ssh_config)
        ForwardX11 yes
        #   ForwardX11Trusted yes
        $ grep X11 <(ssh N cat /etc/ssh/ssh_config)
        ForwardX11 yes
        #   ForwardX11Trusted yes

    N:/etc/ssh/ssh_config == Z:/etc/ssh/ssh_config and M:/etc/ssh/ssh_config == A:/etc/ssh/ssh_config. /etc/ssh/sshd_config is the same for all 4 machines (apart from Port and login permissions for certain groups).

    If I forward M's ssh port to my local machine it still does not work:

        terminal1$ ssh -L 8888:M:22 N
        terminal2$ ssh -X -p 8888 localhost xclock
        Error: Can't open display:

    A:.Xauthority contains A, but M:.Xauthority does not contain M. xauth is installed in /usr/bin/xauth on both A and M. xauth is being run when logging in to A but not when logging in to M. ssh -vvv does not complain about X11 or xauth when logging in to A and M. Both say:

        debug2: x11_get_proto: /usr/bin/xauth list :0 2>/dev/null
        debug1: Requesting X11 forwarding with authentication spoofing.
        debug2: channel 0: request x11-req confirm 0
        debug2: client_session2_setup: id 0
        debug2: channel 0: request pty-req confirm 1
        debug1: Sending environment.

    I have a feeling the problem may be related to M missing in M:.Xauthority (caused by xauth not being run), or that $DISPLAY is somehow being disabled by a login script, but I cannot figure out what is wrong.
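
    Given that the client requests forwarding identically on both hops, the asymmetry smells server-side: sshd can silently refuse the x11-req (X11Forwarding off for that sshd, or sshd failing to run xauth at login), in which case no DISPLAY is ever set. Two checks to run against M (paths assume a stock install):

        # on M: what does the sshd actually in use permit?
        grep -iE 'x11forwarding|xauthlocation|x11uselocalhost' /etc/ssh/sshd_config
        # from N: a refused forwarding request shows up client-side as
        # "X11 forwarding request failed on channel 0"
        ssh -X -v M true 2>&1 | grep -i x11

    Since the machines run different sshd setups (N's is on a non-standard port), it is also worth confirming that M's sshd was actually started from the config file being compared, and not a second instance with its own.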

  • Cannot find network path for computer in workgroup of home Windows XP PCs

    - by John Galt
    VMware Workstation 6.5 is running as an app on a Windows Vista 64-bit PC host. Thanks to Workstation, we have 2 guest machines running: TerriVM and MattVM (both running Windows XP SP2). We are attempting to get virtual networking configured so we can access the files of both of these VM guest systems from other real PCs connected to this home network. We think we are close, but we can't quite get it right... Here is what we've done so far:

    * On VM Workstation, we set "Host Virtual Network Mapping" to use VMnet0 with the setting "Bridge to an automatically chosen adapter".
    * On each VM guest (i.e. using Windows Explorer on XP), we right-click on the C: disk, click the "Sharing" tab, set the share name to "C_Disk" and check both boxes labeled "Share this folder on the network" and "Allow network users to change my files".

    Symptoms: On the "JohnsRealXP" PC, we go to Windows Explorer, My Computer, Map Network Drive, type \\TerriVM\C_Disk into the Folder textbox, and assign drive letter T. We see all the folders on this shared drive and can open files on them. So that is good.

    On the same "JohnsRealXP" PC, we go to Windows Explorer, My Computer, Map Network Drive, type \\MattVM\C_Disk into the Folder textbox, and assign drive letter M. We get a message box: "The network path \\mattvm\C_Disk could not be found". Alternatively, we type just \\mattvm\ into the Folder box and click "Browse", and get a dialog box where we drill down from "Entire Network" to "Microsoft Windows Network" to "Workgroup", where both TerriVM and MattVM are listed as computers on the network. Clicking the + sign next to MattVM gives an hourglass, never enables the OK button, and I have to cancel.

    In summary, I think we've attempted to share both of these virtual machines using the same techniques and to connect to them in similar fashion, but one connects properly while the other machine can be seen and yet none of its shared resources can be accessed. Can anyone suggest something possibly overlooked or something to try? Thanks so much in advance.
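
    One way to split the problem is to take name resolution out of the picture and test the share by IP, then check the usual blockers on the guest itself (the IP is a placeholder for mattvm's bridged address, visible via ipconfig inside the guest):

        rem from JohnsRealXP: does the share answer by address?
        net view \\192.168.1.52
        net use M: \\192.168.1.52\C_Disk
        rem inside mattvm: is File and Printer Sharing allowed through the firewall?
        netsh firewall show config

    If the IP works where the name fails, the problem is NetBIOS name resolution/browsing for that one guest; if both fail, the XP firewall or the share permissions on mattvm are the better suspects.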

  • Can't access certain web sites - reset router, any ideas?

    - by IniTech
    EDIT: This problem was resolved by my ISP - it had to do with damaged fiber in one of their locations. Thanks to everyone who helped.

    Not sure if this is the right site (I'm a StackOverflow user), so I thought I'd give it a shot. I'm having trouble connecting to certain sites on any of the 3 machines on my LAN. The following sites are returning "Problem Loading Page - The connection has timed out":

    - Sourceforge.net
    - CNet.com
    - Microsoft.com
    - OpenDNS.com
    - even my company's website

    I was worried about possible malware/viruses, but I don't think that is the case, given the inability to access my company's site and the fact that all 3 machines are having the same issues. I've tried with IE8, FF, and Chrome. I have reset my router (WRT54G) and my machine(s) multiple times.

    EDIT: It is also worth noting that this page spins constantly and no avatars show up (I'm assuming it is trying to access gravatar.com with no success).

    EDIT: I have the same issues directly connected to the modem, so any router config is probably not the issue.

    I'm a programmer, not a network guy - any ideas?

  • Why is my connection slow?

    - by Jay R.
    I have a Dell Precision T5400 with a Broadcom 1Gb onboard NIC. For some strange reason, when I access machines on our local network, the best I can get is around 125KB/s download speed. My laptop, which has a 10/100Mb NIC onboard, usually gets around 300KB/s or better from the same network resource. Both machines are plugged into the same 1Gb switch, which connects to our local network wall jack at 100Mb half duplex. There is also a printer plugged into the same switch at 100Mb full.

    The resource I'm using for the test is a 30MB zip file copied from a Jetty web server that runs as part of a CruiseControl installation. The CruiseControl machine is running Windows XP with full real-time antivirus and Altiris patch management and inventory running; that stuff on its own is eating some of the download speed.

    I've seen the laptop reach multiple MB/s download speeds before, but the desktop never seems to get past 125KB/s to 130KB/s. In Windows XP, before I upgraded the driver on the desktop, it was that slow. Under Fedora, it is still slow, even though it appears to be using the same driver version as the upgraded Windows driver. The upgraded Windows driver is faster, but still not nearly as fast as the laptop.

    What gives? Any insight to improve the situation would be appreciated. Could it be that the Broadcom board just isn't that good, or that the driver in Linux is just not as good as the Windows one?
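
    Given the 100Mb half-duplex wall jack, a duplex mismatch between the Broadcom NIC and its switch port is the classic cause of exactly this - slow, error-laden transfers despite a good link light - and under Fedora it is quick to check (the interface name is an assumption):

        # negotiated speed/duplex for the onboard NIC
        ethtool eth0
        # error/collision counters climbing during a transfer imply a mismatch
        ifconfig eth0

    If the desktop negotiated full duplex against a half-duplex path, forcing it to match (ethtool -s eth0 speed 100 duplex half autoneg off) should make the throughput jump.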
