Search Results

Search found 5643 results on 226 pages for 'machines'.


  • Is there a way to tell SGE to run specific jobs as root on the execution node?

    - by Rick Reynolds
    The title kinda says it all... We're using SGE/OGE to submit jobs to a set of worker nodes that then do things with specific pieces of equipment. The programs and scripts that manipulate this equipment rely on running as root. I'd like SGE to handle allocation of resources in a way that is mindful of users, groups, projects, etc., but I also need the actual jobs to run with root permissions. I've read up on "How can one run a prologue script as root in gridengine?" to see if anything there was pertinent, but it seems that SGE provides the "user@" kind of spec only for prolog and epilog actions. Is there any similar functionality for the job itself? I'm aware of su/sudo approaches, but that won't really work in this environment because the sudoers file isn't globally managed (i.e. I'd have to add a whole set of users to /etc/sudoers on lots of machines). I'm currently looking into a setuid kind of solution, but that would definitely be an unnecessary work-around if SGE provides a way to declare that a specific job (or jobs in a specific queue) always runs with a specific user's rights.

    Read the article

  • Leopard Network Shares and browsing are unreliable

    - by EvilChookie
    I have two Macs running Leopard 10.5.8. One is a 13" MBP connected via WiFi, and the other is a 24" 2008 iMac connected via ethernet. There are at least another 6-10 machines (Windows and Mac) awake on the network (with shares) at any given time, yet there are plenty of times when I cannot see any devices or shares in the "Shared" section of Finder, nor any computers under "Network" in Finder. Restarting doesn't help. I've restarted all the networking gear in the house to no avail. Our network is a series of gigabit switches connected to a D-Link gaming router. I believe we use OpenDNS, and our provider is Cox. I hate having to use "Go - Connect to Server" to browse to commonly used file shares (by IP). I'd like to know why my shares do not always and consistently appear in Leopard. Edit: I ran OnyX this morning and performed the cleaning and maintenance operations (including disk permissions) on both my Macs, and at least one of them has started showing network devices again (the other is still going). No idea how long this will last. Any ideas as to what is causing this issue, and how to prevent it? Edit 2: Aaaand there the shares go again. So running OnyX is not a permanent or reliable fix for this issue. Edit 3: After a clean reinstall and update, network shares are still unreliable. The smbclient command mentioned in the comments shows the information it's supposed to show, but the shares do not appear in the Shared section. They'll also vanish at random and reappear at random throughout the day.
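
    When the shares next vanish, it may help to check from Terminal whether they are still being advertised at all; a quick sketch (the host name below is a placeholder):

        # Browse for SMB servers being announced over Bonjour/mDNS
        dns-sd -B _smb._tcp local.

        # Ask one machine directly which shares it exports
        smbutil view //guest@SomeHost.local

    If the servers still show up here while Finder's sidebar is empty, the problem is more likely in Finder/Bonjour browsing than in the shares themselves.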

    Read the article

  • Word documents very slow to open over network, but fine when opened locally - on one machine

    - by Craig H
    Windows XP, Word 2003, patched. The issue is happening with several Word documents stored on a network drive. The Word documents are clearly a bit wonky (i.e. one is 675k, but if you copy everything but the last paragraph marker into a new document, the new document is only 30k). But that's only part of the problem. On one weird machine, and one machine only, it takes ~20 seconds to open these Word documents from the network drive. Copy the file to C: on that weird machine? Opens immediately. Go to other machines (that are very similar - same patch level, etc.) and open the same document from the network? Opens immediately. Delete normal.dot? 20 seconds. Log in with a different user on the weird machine? 20 seconds. Plug the weird machine into a different network port? 20 seconds. So the problem appears to be hardware related (i.e. a wonky internal NIC) or related to a setting that is not profile specific. Any ideas? "Scrubbing" all the documents isn't ideal for several reasons. This is driving me nuts because I swear I ran into this before many years ago and eventually figured it out, but I appear to have lost my notes.

    Read the article

  • Running Mathematica-5 remotely

    - by oxinabox.ucc.asn.au
    I have Mathematica 5 - a powerful CAS. I have a cheap netbook (running Windows XP), which is not only too slow to run Mathematica, but probably doesn't have the hard drive space either. I do however have remote access to a number of very powerful computers (most of which run various Linuxes, but one of which is Windows Server 2008, though I'd rather not use that one*). Mostly over SSH, but other protocols can be arranged for some, I'm sure. So I'd like to install Mathematica onto one of these machines and then run it remotely, either from the command line via PuTTY or via some other method. I glanced through the Mathematica documentation and read something about using a MathLink program, which links the front end installed on my computer to a remote kernel. Anyone have any experience with this? I'm not sure if this belongs here or in SuperUser. *At the moment, it's being tinkered with, and when the tinkering stops it'll likely be used to run multiple thin terminals. As compared to the Linux machines: I have access to a dual 2.4 Xeon with 3GB RAM, which the rest of the world seems to have completely forgotten about (runs FreeBSD!).
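
    The simplest way to get going, before worrying about MathLink at all, is to run the text-mode kernel over SSH; a minimal sketch, assuming the Mathematica launchers on the remote machine are named math (kernel) and mathematica (front end) and are on the PATH - user@bighost is a placeholder:

        # Interactive command-line kernel session on the remote host
        ssh -t user@bighost math

        # If a local X server is available, the full notebook front end can be tunnelled instead
        ssh -X user@bighost mathematica

    The MathLink route (local front end talking to a remote kernel) is nicer to work with day to day, but the plain SSH session above is a good way to confirm the remote install works first.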

    Read the article

  • Reloading NAT configuration on a running VMWare Server 2.0.2

    - by Jonathan Clarke
    I have a server running VMware Server 2.0.2. The host is Debian Lenny. I have 15-20 virtual machines running, all attached to a single NAT network (named vmnet8). I have configured VMware's NAT (the vmnet-natd daemon) to forward some incoming ports to one of the VMs, since it hosts some publicly accessible services. I did this via the file /etc/vmware/vmnet8/nat/nat.conf by adding lines like the following: 80 = 192.168.100.100:80 This works great: I can reach the web server on the VM at 192.168.100.100 by connecting to the host's IP address. Sometimes I need to add port redirections to this NAT configuration, so I add a line to the configuration file. Now for the question: how do I make the natd process take this new configuration into account? Clearly, restarting the host machine does take it into account, and the newly added port is forwarded. However, this is not an option on this server, so how should one do this without restarting the whole host? Thanks for any ideas!
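
    One thing worth trying (a sketch, not a tested recipe - it assumes the NAT daemon re-reads nat.conf when it receives SIGHUP, which I haven't verified for this exact build):

        # Ask the running vmnet8 NAT daemon to reload /etc/vmware/vmnet8/nat/nat.conf
        sudo kill -HUP $(pgrep -f vmnet-natd)

    If the daemon doesn't honour the signal, restarting VMware's host networking services would pick the file up, but that briefly interrupts networking for all guests on vmnet8, so the signal is worth trying first.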

    Read the article

  • Mac Management and Security

    - by Bart Silverstrim
    I was going through some literature on managing OS X laptops and asked someone some questions about usage scenarios when using the MacBooks. Specifically, I asked someone more knowledgeable than I whether my Mac could be taken over while visiting another site for a conference, or while on the wifi network at a local coffee house, by policies pushed from an OS X Server running Workgroup Manager (either a legitimate server for that site, or a copy of OS X Server someone is running on hardware hidden somewhere on the network), which apparently can be set up to do things like limit my access to Finder or impose other neat whiz-bang management features. He said that it is indeed possible: the management server would be assigned via DHCP, the OS X Server would assume my Mac is a guest and could hand out restrictions, and apparently my Mac will happily accept them without notifying me or giving me an option - unlike Windows, which I believe would need to be joined to a domain before it becomes "managed" by Active Directory. So my question, as network admins and sysadmins with users traveling with MacBooks: is there a way to reasonably protect your users from having their machines hijacked, without resorting to just turning off networking all the time? Or isn't this much of a security hazard? What threat does this pose to the road warriors in your businesses?

    Read the article

  • Where should CentOS users get /usr/share/virtio-win/drivers for virt-v2v?

    - by Philip Durbin
    I need to migrate a number of virtual machines from VMware ESX to CentOS 6 KVM hypervisors. Ultimately, I wrote an RPM spec file that solved my problem at https://github.com/fasrc/virtio-win/blob/master/virtio-win.spec but I'm not sure if there's another RPM in base CentOS or EPEL (something standard) I should be using instead. Originally, I was getting this "No root device found in this operating system image" error when attempting to migrate a Windows 2008 VM:

        [root@kvm01b ~]# virt-v2v -ic 'esx://my-vmware-hypervisor.example.com/' \
          -os transferimages --network default my-vm
        virt-v2v: No root device found in this operating system image.

    I solved this with a simple yum install libguestfs-winsupport since the docs say: "If you attempt to convert a virtual machine using NTFS without the libguestfs-winsupport package installed, the conversion will fail." Next I got an error about missing drivers for Windows 2008:

        [root@kvm01b ~]# virt-v2v -ic 'esx://my-vmware-hypervisor.example.com/' \
          -os transferimages --network default my-vm
        my-vm_my-vm: 100% [====================================]D
        virt-v2v: Installation failed because the following files referenced in the configuration file are required, but missing: /usr/share/virtio-win/drivers/amd64/Win2008

    I resolved this by grabbing an iso from Fedora at http://alt.fedoraproject.org/pub/alt/virtio-win/latest/ as recommended by http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers and building an RPM from it with this spec file: https://github.com/fasrc/virtio-win/blob/master/virtio-win.spec Now virt-v2v exits without error:

        [root@kvm01b ~]# virt-v2v -ic 'esx://my-vmware-hypervisor.example.com/' \
          -os transferimages --network default my-vm
        my-vm_my-vm: 100% [====================================]D
        virt-v2v: my-vm configured with virtio drivers.
        [root@kvm01b ~]#

    Now, my question is: rather than the virtio-win RPM from the spec file I wrote, is there some other more standard RPM in base CentOS or EPEL that will resolve the error above? Here's a bit more detail about my setup:

        [root@kvm01b ~]# cat /etc/redhat-release
        CentOS release 6.2 (Final)
        [root@kvm01b ~]# rpm -q virt-v2v
        virt-v2v-0.8.3-5.el6.x86_64

    See also Bug 605334 – VirtIO driver for windows does not show specific OS: Windows 7, Windows 2003

    Read the article

  • Automatically check for Security Updates on CentOS or Scientific Linux?

    - by Stefan Lasiewski
    We have machines running RedHat-based distros such as CentOS or Scientific Linux. We want the systems to automatically notify us if there are any known vulnerabilities in the installed packages. FreeBSD does this with the ports-mgmt/portaudit port. RedHat provides yum-plugin-security, which can check for vulnerabilities by their Bugzilla ID, CVE ID or advisory ID. In addition, Fedora recently started to support yum-plugin-security; I believe this was added in Fedora 16. Scientific Linux 6 did not support yum-plugin-security as of late 2011. It does ship with /etc/cron.daily/yum-autoupdate, which updates RPMs daily, but I don't think this handles security updates only. CentOS does not support yum-plugin-security. I monitor the CentOS and Scientific Linux mailing lists for updates, but this is tedious and I want something which can be automated. For those of us who maintain CentOS and SL systems, are there any tools which can: (1) automatically (programmatically, via cron) inform us if there are known vulnerabilities in our current RPMs, and (2) optionally, automatically install the minimum upgrade required to address a security vulnerability, which would probably be yum update-minimal --security on the command line? I have considered using yum-plugin-changelog to print out the changelog for each package, and then parse the output for certain strings. Are there any tools which do this already?
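
    A rough sketch of the first item, assuming yum-plugin-security is installed so the --security flag exists (the script path and mail recipient are placeholders; an exit code of 100 for "updates pending" is the conventional yum check-update behaviour):

        #!/bin/sh
        # /etc/cron.daily/check-security-updates (hypothetical)
        OUT=$(yum --security check-update 2>&1)
        if [ $? -eq 100 ]; then
            echo "$OUT" | mail -s "Security updates pending on $(hostname)" admin@example.com
        fi

    The second item would then just be a matter of running yum update-minimal --security from the same cron job instead of (or after) mailing the report.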

    Read the article

  • VMware Player 3.0 - cannot ping 32-bit guest from 64-bit guest or host

    - by npmj
    I'm stuck with what seems to be a bug in VMware Player (build 203739). I'm using W7 Ultimate 64-bit as the host and have a CentOS 5.4 (64-bit) guest and a Windows XP Professional SP3 (32-bit) guest. From the 64-bit machines (the host and the Linux guest) I cannot ping the Windows XP guest. Of course, I already turned off the Windows firewall in the guest and also in the host. The network is pretty basic: I'm using vmnet8 (NAT), with DHCP and port forwarding (to the Windows XP guest's IP). Everything is working OK; I have internet access from the host and from both guests. Port forwarding to the XP guest is working OK too. The only problem is that I cannot access the XP guest through vmnet8. I monitored the traffic using Wireshark (on the host and on the Windows guest). If I try to ping the XP guest from the host, what I see is the ARP request leaving the host, being answered by the guest, and after that there is no echo request leaving the host. The same occurs if I try to ping the XP guest from the CentOS guest. From the Windows XP guest I can ping both the host and the CentOS guest. From the XP guest I can access the host's shares. Obviously, from the host I cannot see the XP shares (as I cannot even ping the guest). I want to maintain this setup (using NAT to share the host's internet connection). Any suggestions?

    Read the article

  • NAT, iptables and problematic ports

    - by Rajie
    I am building a small office network with virtual machines. My schema is this: Computer A: gateway, ip 1.1.1.1, iptables used for NAT [eth0 = public internet, dhcp; eth1 = gateway]. Computer B: client, ip 1.1.1.2, using the gateway from Computer A. NAT is working, and Computer B can access the internet using A's gateway. I redirected some incoming ports from A to B (for instance, if A receives a request on port 80, it goes automatically to Computer B's Apache). The thing is that I do not really understand how to open/close ports for Computer B from Computer A. I know how to close a port: iptables -A INPUT -p tcp --dport 80 -j DROP And it will refuse all incoming (not outgoing) connections to port 80. However, this works for the main interface eth0. I tried to, for instance, drop incoming and outgoing connections for Computer B, port 80: iptables -A FORWARD -i eth1 -o eth0 -p tcp --dport 80 -j DROP iptables -A FORWARD -i eth0 -o eth1 -p tcp --dport 80 -j DROP But it does not work, and I cannot figure out what I am doing wrong. Any clue?
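
    A sketch of the kind of FORWARD rules that usually achieve this, under two assumptions: the port redirection is done with a DNAT rule in PREROUTING (so by the time packets reach the FORWARD chain their destination is already Computer B's address, which the rule can match on), and the rules must sit above any broader ACCEPT rule, hence -I rather than -A:

        # Block forwarded traffic from the internet to Computer B's port 80
        iptables -I FORWARD 1 -i eth0 -o eth1 -p tcp -d 1.1.1.2 --dport 80 -j DROP

        # Block Computer B's own outgoing connections to port 80
        iptables -I FORWARD 1 -i eth1 -o eth0 -p tcp -s 1.1.1.2 --dport 80 -j DROP

    Listing the chain with iptables -L FORWARD -n -v --line-numbers makes it easy to confirm the new rules really sit before any ACCEPT rule and that their packet counters are incrementing.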

    Read the article

  • Inter-vlan routing issues

    - by DKNUCKLES
    I've been brought in to help administer a network and I've run into an issue - I'm not sure why this one is beyond me, but I figure an extra set of eyes on the problem may help resolve it. I have an HP MSM720 controller, and at the moment I'm trying to set up a basic hotspot with access points. For the time being I'm just looking to have people authenticate with a PSK and access the internet and other resources (namely printers) on other VLANs. The user authenticates and the DHCP server on the controller gives them a 192.168.1.0/24 address. They are able to successfully browse the internet and ping machines on other networks, however they are unable to print to network printers that sit on the same LANs as the very computers that wireless clients can ping. The (extremely simplified) topology is as follows: computers on the wireless 192.168.1.0/24 network are able to ping computers on the 192.168.0.0 network, however they cannot ping or print to the printers on the same network. I'm baffled and I have no idea why this is the case. Can anyone shed some light on this for me? Can someone spot the error in my configuration? EDIT: It should be noted that for whatever reason other computers on the 10.0.100.0/24 network cannot even ping the gateway of the wireless access network (192.168.1.1) - I'm not sure if this is relevant. These are the VLANs listed on the controller.

    Read the article

  • SMB access from XP to Windows 2008 R2

    - by Pablo
    Here's the thing... I have very slow file copy performance from Windows XP clients to Windows 2008 R2 servers. Here are the facts: Windows XP to Windows 2K3: fast. Windows XP to Windows 2K8: very slow. Windows 7 to Windows (any): fast. Despite the fact that the obvious solution would be to upgrade to Windows 7, we have 900 desktops, so it's not an option in the short term. I have tried everything: disabling SMB 2.0, disabling security signatures, changing the TCP window size, disabling the W2K8 auto-tuning, upgrading the drivers, etc. We eliminated the network; both the server and the client are connected to the same core switch (no hops, no routers, same VLAN). Upon monitoring the network with a packet capture utility, we see that the SMB packets being exchanged between the W2K8 and the XP machines are very small packets (256 bytes), despite the fact that the MTUs are properly set (1500) and there is no fragmentation whatsoever. In fact, those SMB packets show, on the IP datagram, that the window is 65535 or close. The same trace, made using the same application but against a Windows XP share instead of a W2K8 share (and that goes fast), shows SMB packets of 4096 bytes. I can post the traces if necessary. So, why does the XP-W2K8 negotiation arrange for 24-byte SMB payloads, whereas the XP-XP negotiation arranges for 4096-byte SMB packets? Any ideas? I am running short of those...

    Read the article

  • ESXi 4.0 Guests Locking up

    - by Brendan Sherwin
    I installed ESXi 4.0 on an HP ProLiant G5 with a 64-bit Xeon processor and took advantage of the free license, as I work for a public school. I created two instances of Server 2003 from scratch, one to be the DC and DHCP server, the other to be a file server and DNS/DHCP backup. I had both guests up and running fine, set up my user accounts, transferred the data, etc. Once I joined a client machine to the domain, I would find that both of my Windows guests would lock up. Sometimes it would be for five or so minutes; once it was overnight. The "locked up" state means that as far as I could tell, all services were stopped: DHCP no longer handed out IPs, DNS stopped working, I couldn't RDP into the server. The ESXi host, my HP server, was still running fine. vSphere was working, and I could look at the performance of the individual guests. I would try powering off the guests from inside vSphere, and they would start powering off, but get stuck at 95%, and stay that way, sometimes only for 10 minutes, others for hours. Several times I had to restart ESXi from its console in order to restart my machines. Now, can anyone tell me what is happening, and how I can fix it, or take steps to prevent it? I hired a consultant to come take a look at it, someone whose experience and knowledge I trust, and he told me he had never seen anything like this before. He spoke to a friend of his who is VM certified, and he also said he had never heard of this issue. Thanks for your replies, and I'll do my best to respond ASAP. Currently, the server is powered off, and I've reinstituted my nine-year-old Server 2000 boxes, and I'm considering installing ESXi 3.5. Does anyone know whether a guest created in 4.0 will work in 3.5? I'd really like to avoid having to rebuild those accounts! I know 4.0 works on this server, as I have another server in another school with the exact same hardware running 4.0 fine. Brendan

    Read the article

  • Can't view other computers on Network

    - by Darkart
    Systems:
    My machine: Windows 7 Ultimate, connected to the router via ethernet
    Her machine (the "Other Machine"): Windows 7 Ultimate, connected via wireless
    Router: F5D8236-4 N Wireless Router, version 1, firmware 2.01.03 (Apr 28 2009)
    ISP: Comcast
    Problem: I cannot view the "Other Machine" on the network at all. I opened a command prompt, ran net view and saw the PC name. I tried pinging the PC and it times out. I went into the router and tried viewing the computer in the DHCP list and it cannot be seen. I restored the router back to default settings and firmware, completely reset the modem and router, and created a homegroup. I went to the other machine to configure the homegroup settings and made sure that both PCs had identical settings. She was able to see my machine but I could not see hers. I restarted both machines and now we can't see each other at all. Also, her PC (the "Other Machine") has an exclamation mark on the wireless icon but is connected just fine. There are no firewalls or anti-virus enabled at the moment, and we still cannot see each other. Right now I am checking for updated drivers for the wireless card, but my question is: could it be the router or something hardware related? I have gone through all the settings in the homegroup and visited most FAQs, and still no luck. Also, as it stands, I cannot view her machine inside the router's DHCP client list :(

    Read the article

  • Google Chrome mouse wheel takes keyboard focus

    - by Steve Crane
    I recently switched the default browser on all my machines from Firefox to Google Chrome. In general I’m loving Chrome but there is one behaviour that is driving me nuts. Scrolling with the mouse wheel takes keyboard focus away from the document. Here’s what happens. I’ve just opened a page or switched to a new tab and the page has keyboard focus as I can use the keyboard up/down and PgUp/PgDn keys to scroll it; no problem there. But if I then use the wheel on my mouse to scroll the page, it loses the keyboard focus and no longer responds to up/down, PgUp/PgDn, or in fact any other keyboard keys. I have to click on the page background to restore the keyboard focus. This is a minor inconvenience for scrolling but where it really drives me nuts is in Google Reader and Gmail where I use keyboard shortcuts a lot. Here I find that I scroll the article or e-mail I’m reading with the mouse wheel then get no response when I press j/k to move to the next or previous article or e-mail. I am using Windows 7 and the Chrome dev channel (version 4.0.249.43).

    Read the article

  • Network corruption - corrupt downloads, corrupt streams, etc.

    - by rfrankel
    I've been having some problems with my home LAN. Downloaded executables won't run, my remote desktop sessions keep getting interrupted due to encryption errors, Flash video streams show visible corruption (both Hulu and YouTube), and I've had a couple of downloads for which the MD5 hashes don't match. The problem has even occurred with a couple of images embedded in webpages, though that's rare enough (presumably because images are relatively small files). I've had this problem across two Windows machines and a Mac, so it's neither machine-specific nor at the app or OS level. Comcast claims it's nothing to do with them, and my Linksys/Cisco RV016 router is out of warranty, so I have no access to official support. When I log into my router, it shows no error packets or dropped packets received. I plugged a laptop directly into the router and was able to download a 5.5 MB file and verify its MD5 hash, which is not proof that the problem is downstream of the router, but makes it seem quite likely, since I failed to download the same file several times from two desktops (one Mac, one Windows). Could this be a wiring problem? If so, is there any clever/elegant way to determine which wiring is faulty with just software? If I can avoid tracing all the wires throughout my entire house it would make my life quite a bit easier.
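
    One low-tech way to narrow this down with software alone (a sketch; the address is a placeholder, and the flags are the Linux/macOS ping syntax) is to send large, patterned pings from a machine moved between wall jacks and switch ports, since ping checks its payload and reports corrupted replies as "wrong data byte":

        # 500 large packets filled with a fixed 0xff pattern, aimed at the router
        ping -c 500 -s 1400 -p ff 192.168.1.1

        # repeat against another LAN host, from each cable run under suspicion

    A run that shows "wrong data byte" or unusual loss on one cable path but not another points at that segment (cable, jack or switch port) rather than at the machines themselves.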

    Read the article

  • NFS4 permission denied when userid does not match (even though idmap is working)

    - by SystemParadox
    I have NFSv4 set up with idmapd working correctly. ls -l from the client shows the correct user names, even though the user ids differ between the machines. However, when the user ids do not match, I get 'permission denied' errors trying to access files, even though ls -l shows the correct username. When the user ids do happen to match by coincidence, everything works fine. sudo sysctl -w sunrpc.nfsd_debug=1023 gives the following output in the server syslog for the failed file access:

        nfsd_dispatch: vers 4 proc 1
        nfsv4 compound op #1/3: 22 (OP_PUTFH)
        nfsd: fh_verify(28: 00070001 015c0001 00000000 9853d400 2a4892a5 4918a0ba)
        nfsv4 compound op ffff88003d0f5078 opcnt 3 #1: 22: status 0
        nfsv4 compound op #2/3: 3 (OP_ACCESS)
        nfsd: fh_verify(28: 00070001 015c0001 00000000 9853d400 2a4892a5 4918a0ba)
        nfsd: fh_verify - just checking
        nfsv4 compound op ffff88003d0f5078 opcnt 3 #2: 3: status 0
        nfsv4 compound op #3/3: 9 (OP_GETATTR)
        nfsd: fh_verify(28: 00070001 015c0001 00000000 9853d400 2a4892a5 4918a0ba)
        nfsd: fh_verify - just checking
        nfsv4 compound op ffff88003d0f5078 opcnt 3 #3: 9: status 0
        nfsv4 compound returned 0
        nfsd_dispatch: vers 4 proc 1
        nfsv4 compound op #1/7: 22 (OP_PUTFH)
        nfsd: fh_verify(28: 00070001 015c0001 00000000 9853d400 2a4892a5 4918a0ba)
        nfsv4 compound op ffff88003d0f5078 opcnt 7 #1: 22: status 0
        nfsv4 compound op #2/7: 32 (OP_SAVEFH)
        nfsv4 compound op ffff88003d0f5078 opcnt 7 #2: 32: status 0
        nfsv4 compound op #3/7: 18 (OP_OPEN)
        NFSD: nfsd4_open filename dom_file op_stateowner (null)
        renewing client (clientid 4f96587d/0000000e)
        nfsd: nfsd_lookup(fh 28: 00070001 015c0001 00000000 9853d400 2a4892a5 4918a0ba, dom_file)
        nfsd: fh_verify(28: 00070001 015c0001 00000000 9853d400 2a4892a5 4918a0ba)
        nfsd: fh_verify - just checking
        nfsd: fh_lock(28: 00070001 015c0001 00000000 9853d400 2a4892a5 4918a0ba) locked = 0
        nfsd: fh_compose(exp 08:01/22806529 srv/dom_file, ino=22809724)
        nfsd: fh_verify(36: 01070001 015c0001 00000000 9853d400 2a4892a5 4918a0ba)
        nfsd: fh_verify - just checking
        fh_verify: srv/dom_file permission failure, acc=804, error=13
        nfsv4 compound op ffff88003d0f5078 opcnt 7 #3: 18: status 13
        nfsv4 compound returned 13

    Is that useful to anyone? Any hints on how to debug this would be greatly appreciated. Server kernel: 2.6.32-40-server (Ubuntu 10.04). Client kernel: 3.2.0-27-generic (Ubuntu 12.04). Same problem with my new server running 3.2.0-27-generic (Ubuntu 12.04). Thanks.

    Read the article

  • Internal but no external Citrix Access?

    - by leeand00
    We recently had to reload our configuration of Citrix on our server Server1, and since then we can access Citrix internally, but not externally. Normally we access Citrix from http://remote.xyz.org/Citrix/XenApp but since the configuration was reloaded we are met with a Service Unavailable message. Internally, we can still reach the web application, both from http://localhost/Citrix/XenApp/ on Server1 itself and from machines on our local network using http://Server1/Citrix/XenApp/. I have gone into the Citrix Access Management Console and, in the tree pane on the left, clicked on Citrix Access Management Console->Citrix Resources->Configuration Tools->Web Interface->http://remote.xyz.org/Citrix/PNAgent and Citrix Access Management Console->Citrix Resources->Configuration Tools->Web Interface->http://remote.xyz.org/Citrix/XenApp, which in both cases displays a screen that reads Secure client access. Here it offers me several options: Direct, Alternate, Translated, Gateway Direct, Gateway Alternate, Gateway Translated. I know that I can change the method of use by clicking Manage secure client access->Edit secure client access settings, which opens a window that reads "Specify Access Methods" and, below that, "Specify details of the DMZ settings, including IP address, mask, and associated access method". I don't know what the original settings were, and I also don't know how our DMZ is configured, so I can't specify the correct settings to give our external users access to the http://remote.xyz.org/Citrix/XenApp site. We have a vendor who set up our DMZ and does not allow us access to the gateway to see these settings. What sorts of questions should I ask them to restore remote access?

    Read the article

  • PXE bootable image for terminal server?

    - by HeavenCore
    We have 300 Windows XP machines on cruddy old hardware across the company. With extended support for XP ending April next year we're looking into our options. A couple of options:
    1. Replace the 300 PCs with full Windows 7 PCs (£100k+?) - no use of terminal server (our current model)
    2. Replace the 300 PCs with off-the-shelf thin clients and make use of our terminal server - cheaper clients, but Terminal Server CALs required?
    3. Keep the 300 PCs, replace Windows XP with a Linux thin client capable of connecting to our terminal server - no hardware costs, just Terminal Server CALs required?
    4. Keep the 300 PCs - remove the hard drives and make use of a PXE-bootable "thin client" to connect to our terminal server (a rough sketch of what this could look like is below)
    If we were to choose option 4, what are the options out there? Are there any official PXE-bootable thin clients for terminal server? If so, what are the licence requirements? Are there options we haven't considered? There must be lots of companies out there in this situation - curious what the current trend is for this problem? Edit: Option 5 - Create a bootable Windows PE image with RDP auto-start and use that as a "thin client" for our terminal server - is Windows PE licence-free in such a model?
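
    As a rough illustration of option 4 only (a sketch, not a recommendation: Thinstation is just one example of a Linux thin-client image that can auto-launch an RDP client, and the kernel/initrd file names are placeholders), the PXELINUX menu entry served by a DHCP/TFTP server might look like this:

        # /tftpboot/pxelinux.cfg/default (hypothetical)
        DEFAULT thinclient
        PROMPT 0
        LABEL thinclient
            KERNEL vmlinuz
            APPEND initrd=initrd.img quiet
            # the thin-client image's own config points its RDP client at the terminal server

    The licensing question is separate: whichever boot image is used, the terminal server sessions themselves still need RDS/TS CALs.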

    Read the article

  • ZFS: Mirror vs. RAID-Z

    - by John Clayton
    I'm planning on building a file server using OpenSolaris and ZFS that will provide two primary services - be an iSCSI target for XenServer virtual machines and be a general home file server. The hardware I'm looking at includes 2x 4-port SATA controllers, 2x small boot drives (one on each controller), and 4x big drives for storage. This allows one free port per controller for upgrading the array down the road. Where I'm a little confused is how to set up the storage drives. For performance, mirroring appears to be king. I'm having a hard time seeing what the benefit of using RAIDZ over mirroring would be. With this setup I can see two options - two mirrored pairs striped together in one pool, or RAIDZ2. Both should protect against 2 drive failures and/or one controller failure... the only benefit of RAIDZ2 would be that any 2 drives could fail. The usable storage should be 50% of capacity in both cases, but the first should have much better performance, right? The other thing I'm trying to wrap my mind around is the benefit of mirrored arrays with more than two devices. For data integrity what, if any, would be the benefit of RAIDZ over a three-way mirror? Since ZFS maintains file integrity, what does RAIDZ bring to the table... don't ZFS's integrity checks negate the value of RAIDZ's parity?
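
    For reference, the two layouts being compared would be created roughly like this (a sketch; the c0t0d0-style device names are placeholders for the four storage drives):

        # Two mirrored pairs striped together (RAID10-style): better IOPS, survives one disk per mirror
        zpool create tank mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0

        # RAIDZ2 across all four drives: survives any two disk failures, same 50% usable capacity here
        zpool create tank raidz2 c0t0d0 c0t1d0 c1t0d0 c1t1d0

    Splitting each mirror (or the RAIDZ2 members) evenly across the two controllers is what buys the controller-failure tolerance mentioned above.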

    Read the article

  • Network profile reverts to 'Unidentified' following Windows Update reboot

    - by user140575
    I have searched high and low for a solution to this problem. I have multiple servers running Windows 2000 Server as well as Windows Server 2003, 2003 R2, and 2008 R2. All of these servers are on the same Active Directory domain. The servers run with the network profile showing as Domain Network, which is fine and correct. However, when a Windows update is installed, the server changes the profile to Unidentified Network once it has rebooted. This then doesn't allow any traffic to the server. For security reasons, we can't turn the firewalls off. The only way to fix the problem is to physically be in front of the machine and work on it to change the profile back. Once the profile has been reinstated to the Domain profile, it will be fine until the next month's update. This happens on all the Windows versions mentioned above. The machines are not all identical, so it's not a hardware problem either. If anyone can help I'd be very grateful.

    Read the article

  • Data Protection Manager System Protection Backups Failing

    - by TrueDuality
    I'm just starting to set up DPM 2010 in a test environment with a Domain Controller and a File Server. Everything seems to be working fairly well and I can get all of my backup jobs to succeed except for the "Computer\System Protection" backups. Both servers are running fully up to date 64-bit Windows Server 2008 R2 Enterprise with Service Pack 1. The error that is being provided is: DPM cannot create a backup because Windows Server Backup (WSB) on the protected computer encountered an error (WSB Event ID: 517, WSB Error Code: 0x8078001D). (ID 30229 Details: Internal error code: 0x809909FB) This Microsoft Knowledge Base article describes the issue perfectly and provides a hotfix. I downloaded the hotfix, moved it onto the affected server, attempted to run it and received the following error: "The update is not applicable to your computer." I've verified that I have indeed downloaded the 64-bit version. According to this thread the hotfix got rolled into Service Pack 1, yet I'm still experiencing the issue. Both machines do have the Windows Server Backup feature installed. Can anybody point me in the right direction? What am I missing?

    Read the article

  • Backing up VMs to a tape drive

    - by Aljoscha Vollmerhaus
    I've got myself one of these fancy tape drives, an HP LTO2 with 200/400 GB cartridges. The st driver reports it like this: scsi 1:0:0:0: Sequential-Access HP Ultrium 2-SCSI T65D I can store and retrieve files like a charm using tar; both tar cf /dev/st0 somedirectory and tar xf /dev/st0 work flawlessly. However, what I really would like to back up are LVM LVs. They contain entire virtual machines with varying partition layouts, so using mount and tar is not an option. I've tried using something like dd if=/dev/VG/LV bs=64k of=/dev/st0 to achieve this, but there seem to be various problems associated with this approach. Firstly, I would like to be able to store more than one LV on a single tape. Now I guess I could seek to concatenate the data on the tape, but I think this would not work very well in an automated scenario with many different LVs of various sizes. Secondly, I would like to store a small XML file along with the raw data that contains some information about the VM contained in the LV. I could dump everything to a directory and tar it up - not very desirable, as I would have to set aside huge amounts of scratch space. Is there an easier way to achieve this? Thirdly, from googling around it seems like it would be wise to use something like mbuffer when writing to the tape, to prevent what Wikipedia calls "shoe-shining" the tape. However, I can't get anything useful done with mbuffer. The mbuffer man page suggests this for writing to a tape device: mbuffer -t -m 10M -p 80 -f -o $TAPE So I've tried this: dd if=/dev/VG/LV | mbuffer -t -m 10M -p 80 -f -d 64k -o /dev/st0 Note the added "-d 64k" to account for the 64k block size of the tape. However, reading data back from a tape written in this way never seems to yield any useful results - dd has been running for ages now, and has managed to transfer only 361M of data from the tape. What's wrong here?
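
    On the first two points (several LVs per tape, plus a small metadata file per LV), one common pattern is to use the non-rewinding device node /dev/nst0 so that each write ends with a filemark and becomes its own tape file, and then to position with mt. A sketch, with made-up volume names, and with the mbuffer stage left out of the dd pipelines for brevity:

        mt -f /dev/nst0 rewind
        tar cf /dev/nst0 vm1.xml                  # tape file 0: metadata for vm1
        dd if=/dev/VG/vm1 bs=64k of=/dev/nst0     # tape file 1: raw image of vm1
        tar cf /dev/nst0 vm2.xml                  # tape file 2: metadata for vm2
        dd if=/dev/VG/vm2 bs=64k of=/dev/nst0     # tape file 3: raw image of vm2
        mt -f /dev/nst0 rewind

        # later, to restore vm2's image: skip forward to tape file 3 and read it back
        mt -f /dev/nst0 fsf 3
        dd if=/dev/nst0 bs=64k of=/dev/VG/vm2

    Using /dev/st0 instead rewinds the tape after every command, so successive writes overwrite each other; the mbuffer question (and the -d flag) is a separate issue.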

    Read the article

  • Windows 7 disk backup and clone for deployment to multiple systems

    - by gregmac
    I'm in the process of deploying some new PCs (there's only 8), all identical hardware. What I'd like to do is install Windows 7 (64-bit), join the domain, etc., install a bunch of other software, and then clone that drive to multiple other machines. I'd also like to be able to use it as a backup image, so the machine can be restored back to that image at some future date. I understand this involves at least sysprep, but I am confused after reading some tutorials that talk about using the Windows Automated Installation Kit, or hacks with the registry and custom-built batch files. This process seems overly complex to me: I did something similar 10+ years ago, and don't remember it being this bad. Surely things have improved in a decade? There are also some products that involve having network servers running deployment software, network boot, etc. - this is way more than I want to set up. My systems are all identical hardware. Is there a simplified way to clone PCs? Preferably (since I'm a lazy developer, and not an IT admin) I'd like to find some off-the-shelf product that I can run after I get the machine set up, that will spit out a bootable DVD I can run on all the other systems, which will boot up, ask for a computer name, join it to the domain, and that's it. Does such a product exist?
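
    Whatever imaging tool ends up being used, the sysprep step itself is a single command run on the reference machine once it's set up the way you want (a sketch; the unattend answer file is optional and its path here is a placeholder):

        rem Generalize the install so each clone gets a new SID and asks for its details on first boot
        C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown /unattend:C:\unattend.xml

    Note that /generalize drops the machine out of the domain, so each clone has to rejoin after imaging - an unattend.xml can automate the computer-name prompt and the domain join so the "boot, name, join" flow described above happens without extra tooling.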

    Read the article

  • VMware vSphere Hypervisor 5 with Intel SPL 5000 in Raid 0 no boot from DVD?

    - by Richard
    I hope this is the correct StackExchange, since I am only using StackOverflow for web development, but I need some help with my server configuration. I would like to install VMware vSphere Hypervisor 5 on my server here at home and run a few machines on it, such as Windows Server 2008 and Red Hat. I used to have either OpenSuse or Windows Server 2008 installed, but I would like to get into VMware Hypervisor. My hardware configuration:
    - Intel S5000PSL with BIOS version S5000.86B.10.60.0091, build date 10/09/2008, as read out of the BIOS
    - E5420 @ 2.5GHz Intel Xeon CPU; Intel Virtualization Technology is enabled in the BIOS
    - DVD: DH20A4P DVD writer
    - 8GB ECC RAM
    I have configured a RAID 0 on my 2 WD 2TB SATA drives. I have burned the Hypervisor 5 image to an empty DVD and it is bootable; I tested it on my client PC. The main problem here is basically that I cannot boot the DVD on my server. I have set the boot option to the DVD drive. I have booted from the BIOS straight into the DVD drive and it does not work. I do not see any error messages. The only thing I see are the PXE error messages when it tries booting from the network and other devices, obviously without any result. Does anybody know why I cannot boot the DVD? What could cause the problem? I successfully installed Windows Server 2008 via the original DVD about 1 year ago, so the DVD drive can read and does work. The DVD drive is available in the BIOS and I have checked all cables and none of them is loose in any way. I even see the light flashing but it does not want to boot from the DVD. I am looking forward to suggestions and things that I should check. Thank you very much

    Read the article
