Search Results

Search found 5723 results on 229 pages for 'turing machines'.

Page 176/229 | < Previous Page | 172 173 174 175 176 177 178 179 180 181 182 183  | Next Page >

  • Network profile reverts to 'Unidentified' following Windows Update reboot

    - by user140575
    I have searched high and low for a solution to this problem. I have multiple servers running Windows 2000 Server as well as Windows Server 2003, 2003 R2, and 2008 R2, all on the same Active Directory domain. The servers normally show the network profile as Domain Network, which is correct. However, whenever a Windows update is installed, the server switches the profile to Unidentified Network after the reboot, which then blocks all traffic to the server; for security reasons, we can't simply turn the firewalls off. The only way to fix the problem is to be physically in front of the machine and change the profile back. Once the profile has been reinstated to the Domain profile, it is fine until the next month's updates. This happens on all the Windows versions mentioned above, and the machines are not all identical hardware, so it isn't a hardware problem either. If anyone can help I'd be very grateful.
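
    One workaround often suggested for this symptom is to restart the Network Location Awareness service after the post-update reboot, which forces Windows to re-detect the domain network. Below is only a sketch of that commonly reported workaround, not a root-cause fix; the commands apply to 2008/2008 R2 (the older systems handle firewall profiles differently), and the service short name NlaSvc can be confirmed with sc query:

        rem refresh-profile.cmd -- re-run network location detection after the update reboot
        rem /y also stops dependent services; they may need restarting afterwards
        net stop nlasvc /y
        net start nlasvc
        rem check which firewall profile is now active
        netsh advfirewall show currentprofile

    If this works when run by hand, it could be wired up as a scheduled task that runs once at startup after patch night.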

    Read the article

  • VMware vSphere Hypervisor 5 with Intel S5000PSL in RAID 0 - no boot from DVD?

    - by Richard
    I hope this is the correct StackExchange site, since I only use StackOverflow for web development, but I need some help with my server configuration. I would like to install VMware vSphere Hypervisor 5 on my server here at home and run a few virtual machines on it, such as Windows Server 2008 and Red Hat. I used to have either OpenSUSE or Windows Server 2008 installed, but I would like to get into VMware Hypervisor. My hardware configuration:
    - Intel S5000PSL board with BIOS version S5000.86B.10.60.0091, build date 10/09/2008, as read out of the BIOS
    - Intel Xeon E5420 CPU @ 2.5GHz (Intel Virtualization Technology is enabled in the BIOS)
    - DVD DH20A4P DVD writer
    - 8GB ECC RAM
    I have configured a RAID 0 on my 2 WD 2TB SATA drives. I have burned Hypervisor 5 to an empty DVD and it is bootable; I tested it on my client PC. The main problem is that I cannot boot the DVD on my server. I have set the boot option to the DVD drive, and I have also booted from the BIOS straight into the DVD drive, but it does not work. I do not see any error messages; the only thing I see are the PXE error messages when it tries booting from the network and other devices, obviously without any result. Does anybody know why I cannot boot the DVD? What could cause the problem? I successfully installed Windows Server 2008 from the original DVD about a year ago, so the DVD drive can read and does work. The drive shows up in the BIOS, I have checked all cables and none of them is loose, and I even see the light flashing, but it does not want to boot from the DVD. I am looking forward to suggestions and things that I should check. Thank you very much.

    Read the article

  • VMware Workstation update stopped autofit guest working

    - by u8sand
    VMware one day offered an update (8.0.1 build-528992, my current version) from 8.0, and I accepted it, because updates usually fix problems. Not in this case... Previously it worked very well. Now it still "works", but the one glitch I'm getting makes it too hard to deal with. A screenshot shows the problem best: my virtual PC is not resizing correctly (this happens with any guest operating system). Autofit guest just doesn't work - it only results in things like this happening, and because of VMware Tools it becomes very hard to NOT autofit the guest. I've tried uninstalling 8.0.1 completely and installing 8.0 again, but with the same results. I don't really understand what the new update has done to VMware Workstation or to my virtual machines. I suspect this isn't VMware Workstation's direct fault but comes from VMware Tools, which would explain why going back to 8.0 didn't help, since VMware Tools has its own updates. How can I fix this? The host is running Windows 7 Ultimate 64-bit.

    Read the article

  • Backing up VMs to a tape drive

    - by Aljoscha Vollmerhaus
    I've got myself one of these fancy tape drives, an HP LTO2 with 200/400 GB cartridges. The st driver reports it like this: scsi 1:0:0:0: Sequential-Access HP Ultrium 2-SCSI T65D. I can store and retrieve files like a charm using tar; both tar cf /dev/st0 somedirectory and tar xf /dev/st0 work flawlessly. However, what I really want to back up are LVM LVs. They contain entire virtual machines with varying partition layouts, so using mount and tar is not an option. I've tried using something like dd if=/dev/VG/LV bs=64k of=/dev/st0 to achieve this, but there seem to be various problems with this approach.
    Firstly, I would like to be able to store more than one LV on a single tape. I guess I could seek on the tape to concatenate the data, but I don't think that would work very well in an automated scenario with many different LVs of various sizes.
    Secondly, I would like to store a small XML file along with the raw data that contains some information about the VM contained in the LV. I could dump everything to a directory and tar it up - not very desirable, as I would have to set aside huge amounts of scratch space. Is there an easier way to achieve this?
    Thirdly, from googling around it seems wise to use something like mbuffer when writing to the tape, to prevent what Wikipedia calls "shoe-shining" the tape. However, I can't get anything useful done with mbuffer. The mbuffer man page suggests this for writing to a tape device: mbuffer -t -m 10M -p 80 -f -o $TAPE. So I've tried: dd if=/dev/VG/LV | mbuffer -t -m 10M -p 80 -f -d 64k -o /dev/st0 (note the added "-d 64k" to account for the 64k block size of the tape). However, reading data back from a tape written this way never seems to yield any useful results - dd has been running for ages now and has managed to transfer only 361M of data from the tape. What's wrong here?
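
    In case it helps, here is a rough sketch of the multiple-LVs-per-tape part using the non-rewinding device node and filemarks; the VG/LV names and the XML metadata files are placeholders, and the block size is an assumption that must match between write and read:

        #!/bin/bash
        # Write several tape "files" (metadata + raw LV image per VM) separated by filemarks.
        TAPE=/dev/nst0              # non-rewinding device: the tape stays positioned after each write
        mt -f "$TAPE" rewind
        for LV in vm01 vm02; do
            dd if=/path/to/${LV}.xml of="$TAPE" bs=64k      # small metadata file, its own tape file
            dd if=/dev/VG/${LV}      of="$TAPE" bs=64k      # raw LV image, the next tape file
        done
        mt -f "$TAPE" rewind

        # Restore the second VM. Tape files are numbered from 0: skip vm01's metadata and image,
        # then vm02's metadata, i.e. forward-space 3 filemarks, and read with the SAME block size.
        mt -f "$TAPE" rewind
        mt -f "$TAPE" fsf 3
        dd if="$TAPE" of=/dev/VG/vm02 bs=64k

    The key points are that each dd to /dev/nst0 ends up as its own file on the tape (a filemark is written when the device is closed), and that reading back should use a block size at least as large as the one used for writing - a mismatch can be one reason for dd appearing to crawl or return nothing useful.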

    Read the article

  • No Outbound Internet on Windows Home Server

    - by Kyle B.
    Could someone provide some steps to check the internet connection on my Windows Home Server? It has intermittent connectivity issues and I am unsure how to diagnose the problem, because it is a headless machine (no monitor, no keyboard), so the only way to get to it is via Remote Desktop (which works fine). When connected to the machine, it doesn't pull up any microsoft.com sites; some other sites it does pull up (e.g. gmail.com) and some it doesn't (stackoverflow.com). To make matters more complicated, it has worked intermittently in the past for reasons unknown. Are there tools I can use to properly diagnose the reason for the connection failure? I can ping 127.0.0.1 just fine, and I have internet working on my other router-connected machines, so I'm not sure why this one would fail. Any suggestions would be much appreciated and up-voted :)
    Edit: thanks for the suggestions guys, I'm going to try these tonight and will update my post.
    Edit #2: I'm hoping this is a more permanent fix, but I have both changed the port on the router and restarted the router at the same time. The internet (for the moment) appears to be working. I will be sure to try everything we have discussed should the problem persist. Thanks, Kyle
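
    For the record, a quick way to separate DNS problems from routing problems on a headless box is a handful of commands run over the Remote Desktop session; this is a generic sketch (the gateway address, target hosts and the 4.2.2.2 resolver are arbitrary examples):

        rem Basic connectivity triage from a cmd prompt
        rem 1. What settings does the server actually have (gateway, DNS)?
        ipconfig /all
        rem 2. Can it reach the router? (adjust 192.168.1.1 to your gateway)
        ping -n 4 192.168.1.1
        rem 3. Does the configured DNS server resolve names?
        nslookup microsoft.com
        rem 4. Does an external resolver work instead? (4.2.2.2 is just an example)
        nslookup microsoft.com 4.2.2.2
        rem 5. Where along the path do packets stop?
        tracert -d stackoverflow.com

    If names resolve through an external server but not through the configured one, the issue is DNS on the router or server rather than the outbound connection itself.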

    Read the article

  • Windows 7 disk backup and clone for deployment to multiple systems

    - by gregmac
    I'm in the process of deploying some new PCs (there are only 8), all identical hardware. What I'd like to do is install Windows 7 (64-bit), join the domain, install a bunch of other software, and then clone that drive to the other machines. I'd also like to be able to use it as a backup image, so a machine can be restored to that image at some future date. I understand this involves at least sysprep, but I am confused after reading tutorials that talk about using the Windows Automated Installation Kit, or hacks with the registry and custom-built batch files. This process seems overly complex to me: I did something similar 10+ years ago and don't remember it being this bad. Surely things have improved in a decade? There are also products that involve network servers running deployment software, network boot, etc. - that is way more than I want to set up. My systems are all identical hardware. Is there a simplified way to clone PCs? Preferably (since I'm a lazy developer, and not an IT admin) I'd like to find some off-the-shelf product that I can run after I get the machine set up, which will spit out a bootable DVD I can run on all the other systems, which will boot up, ask for a computer name, join it to the domain, and that's it. Does such a product exist?
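
    For reference, the bare-bones "classic" route with the tools the tutorials mention is shorter than it sounds; the file names and drive letters below are only placeholders, and the unattend.xml (which can hold the computer-name prompt and the domain join) has to be built separately with Windows System Image Manager from the WAIK:

        rem 1. On the reference PC, generalize it and shut down
        c:\windows\system32\sysprep\sysprep.exe /generalize /oobe /shutdown /unattend:c:\windows\system32\sysprep\unattend.xml

        rem 2. Boot the reference PC from WinPE media (built with the WAIK) and capture the disk
        imagex /capture C: E:\images\win7base.wim "Win7 base image"

        rem 3. Boot each target PC from the same WinPE media and apply the image
        imagex /apply E:\images\win7base.wim 1 C:

    This is only a sketch of the Microsoft-tools approach; the off-the-shelf imaging products (the kind that produce a bootable DVD) wrap essentially the same capture/apply steps.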

    Read the article

  • Data Protection Manager System Protection Backups Failing

    - by TrueDuality
    I'm just starting to set up DPM 2010 in a test environment with a Domain Controller and a File Server. Everything seems to be working fairly well and I can get all of my backup jobs to succeed except for the "Computer\System Protection" backups. Both servers are running fully up-to-date 64-bit Windows Server 2008 R2 Enterprise with Service Pack 1. The error being reported is: DPM cannot create a backup because Windows Server Backup (WSB) on the protected computer encountered an error (WSB Event ID: 517, WSB Error Code: 0x8078001D). (ID 30229 Details: Internal error code: 0x809909FB). A Microsoft Knowledge Base article describes the issue perfectly and provides a hotfix. I downloaded the hotfix, moved it onto the affected server, attempted to run it, and received the following error: The update is not applicable to your computer. I've verified that I did indeed download the 64-bit version. According to a forum thread the hotfix got rolled into Service Pack 1, yet I'm still experiencing the issue. Both machines do have the Windows Server Backup feature installed. Can anybody point me in the right direction? What am I missing?
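
    Since DPM's system protection job is really just driving Windows Server Backup on the protected server, one way to narrow this down (a generic suggestion, not something from the KB article) is to run WSB directly on that server and see whether it fails with the same 0x8078001D error outside of DPM:

        rem Run a one-off system state backup locally on the protected server
        rem (E: is a placeholder for any local volume with enough free space)
        wbadmin start systemstatebackup -backupTarget:E:

    If the standalone wbadmin run fails the same way (check the Backup log under Applications and Services Logs in Event Viewer for event 517), the problem is in WSB/VSS on that server rather than in DPM itself.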

    Read the article

  • Windows Server 2008 Remote Desktop printing blank pages

    - by Colin Pickard
    I have a Windows Server 2008 (not R2) machine which has problems with redirected printing. Clients connecting via Remote Desktop have their printers redirected and appearing for them to print to, but printing from applications on the server to those local printers gives blank pages, missing pages, or pages with headers/footers but no middle section. The issues are consistent for similar prints, but sometimes other prints and/or applications will work correctly. I have installed PDFCreator locally on the server, and the same print jobs sent by the same application appear correctly in the PDFs; printing that PDF via the redirected printer prints correctly. I have tried the following:
    - Installing drivers. I've installed several different drivers, for both the client and server operating system and architecture, on the client and the server.
    - Reinstalling the printers. I've tried reinstalling on remote print servers, the clients, and the host server, and tried different client machines.
    - Granting everyone full permissions on the print spool folder on the server.
    - Editing the registry to forward non-USB ports (http://support.microsoft.com/kb/302361).
    None of these have made any difference. The clients are using Windows 7 or Windows XP and none of them have any issues printing locally. Any ideas? Thanks!

    Read the article

  • Cannot run logwatch due to Date::Manip issue

    - by Quintin Par
    I tried to run logwatch as follows:

        [root@machine cron.daily]# ./0logwatch
        ERROR: Date::Manip unable to determine TimeZone.
        Execute the following command in a shell prompt:
            perldoc Date::Manip
        The section titled TIMEZONES describes valid TimeZones and where they can be defined.

    My date is as follows:

        [root@machine cron.daily]# date
        Thu Aug 23 06:25:21 GMT 2012

    Based on details in various forums, I tried to fix this by setting /etc/timezone to "+0800", but it didn't work. My /etc/localtime points to /usr/share/zoneinfo/GMT and is managed by Puppet. How do I go about fixing this? I still want all my machines to be in the GMT timezone.
    EDIT: Sadly, both of the suggested changes are not working:

        [root@machine cron.daily]# cat /etc/TIMEZONE
        UTC

    Quanta's suggestion:

        [root@machine cron.daily]# cat ~/.bash_profile
        # .bash_profile

        # Get the aliases and functions
        if [ -f ~/.bashrc ]; then
                . ~/.bashrc
        fi

        # User specific environment and startup programs
        PATH=$PATH:$HOME/bin
        export TZ=GMT
        export PATH
        [root@machine cron.daily]# source ~/.bash_profile
        [root@machine cron.daily]# ./0logwatch
        ERROR: Date::Manip unable to determine TimeZone.
        Execute the following command in a shell prompt:
            perldoc Date::Manip
        The section titled TIMEZONES describes valid TimeZones and where they can be defined.
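
    One detail worth noting (a general observation, not specific to this setup): cron.daily jobs never source ~/.bash_profile, so an exported TZ there is invisible to the 0logwatch run, and Date::Manip wants a zone name rather than a raw offset like "+0800". A minimal way to test both points from a shell:

        # Run the script with TZ set only for that invocation (a zone name, not an offset)
        TZ=GMT /etc/cron.daily/0logwatch

        # If that works, make it stick for cron by exporting TZ near the top of the script
        # (the sed line is just an illustration; editing the file by hand is fine too)
        sed -i '2i export TZ=GMT' /etc/cron.daily/0logwatch

    Setting TZ inside the script (or in the crontab environment) keeps the machines themselves on GMT while giving Date::Manip the zone name it needs.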

    Read the article

  • Ripping CD Audio simultaneously from 2 drives on one PC via USB or PATA - rip accuracy preserved?

    - by Rob
    I'm considering ripping audio (reading audio) from CDs using two drives simultaneously to speed up the process - i.e. two CDs at a time rather than one. Are there any issues with achieving maximum rip accuracy? In general, I wondered if people have tried this and whether the simultaneous streams from both rip activities would overload the host machine and cause packet loss or read retries, resulting in a sub-standard CD-DA audio rip. If it just means each rip is slightly slower (but still faster than doing one rip after another) while remaining maximally accurate, then that is OK for me. I will be using dbPowerAmp to rip the CDs and will convert to the FLAC lossless format. Specific examples - there are two machines I intend to do it on:
    - A Toshiba NB100 1.6GHz Atom netbook, 2GB RAM, running Windows XP Home, with 1 external LG DVD/CD burner and 1 external LG Blu-ray burner attached via USB 2.0, ripping to the machine's 5400rpm internal hard drive. This rips from one CD drive very well - more than adequate; it is a nippy, fast little machine for its specification.
    - A desktop PC running Windows 7 Home Premium with an MSI P4M900M2-L / MS-7255 v2.0 motherboard, a 1.86GHz Intel Core 2 Duo E6320, a 7200rpm hard drive and 2GB RAM, with an internal LG PATA DVD/CD burner (master) and a Philips DVD/CD burner (slave) on the same PATA bus (perhaps separate buses would be another option to consider here).
    Thoughts?

    Read the article

  • A Domain Admin user doesn't have effective Administrative rights on a Domain Computer

    - by rwetzeler
    I am a developer who is setting up a virtual domain environment for testing purposes and am having trouble with the setup. I have created a new DC in a new forest; call it dev.contoso.com. I have set up a virtual internal network for all machines that are going to be a part of this test environment and have given each machine a static IP address in the 192.169.150.0 subnet. I have added machine1.dev.contoso.com to the domain dev.contoso.com. I have also provisioned a user account (adminuser) in the domain and made that user a member of the Domain Admins group. Upon logging into machine1 with the newly created Domain Admin account, I cannot access or run any files on machine1. When I go into the advanced permissions for the C:\ folder (Properties - Security tab - Advanced - Effective Permissions) and search for dev\adminuser (mentioned above), I get an error saying: Windows can't calculate the effective permissions for admin user. What do I need to do to get administrative rights on machine1? I am using Server 2008 R2 for both the AD controller and machine1.

    Read the article

  • Ensuring a repeatable directory ordering in linux

    - by Paul Biggar
    I run a hosted continuous integration company, and we run our customers' code on Linux. Each time we run the code, we run it in a separate virtual machine. A frequent problem is that a customer's tests will sometimes fail because of the directory ordering of their code as checked out on the VM.
    Let me go into more detail. On OS X, the HFS+ file system ensures that directories are always traversed in the same order, and programmers who use OS X assume that if it works on their machine, it must work everywhere. But it often doesn't work on Linux, because Linux file systems do not offer ordering guarantees when traversing directories. As an example, consider two files, a.rb and b.rb, where a.rb defines MyObject and b.rb uses MyObject. If a.rb is loaded first, everything works; if b.rb is loaded first, it tries to access MyObject before it is defined, and fails. Worse, it doesn't always just fail: because directory ordering on Linux is not deterministic, the order differs between machines, so sometimes the tests pass and sometimes they fail - the worst possible result.
    So my question is: is there a way to make file system ordering repeatable? Some flag to ext4, perhaps, that says it will always traverse directories in some order? Or maybe a different file system that has this guarantee?
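
    As a quick illustration of the difference (and of the usual application-level workaround of sorting explicitly rather than relying on readdir order), assuming a Ruby test suite along the lines of the a.rb/b.rb example above:

        # Raw directory order, i.e. whatever readdir returns -- filesystem- and machine-dependent
        ls -U

        # Byte-wise sorted order -- the same on every machine
        ls -U | sort

        # Application-level workaround: load files in an explicitly sorted order,
        # e.g. for the Ruby example (one-liner run from the shell):
        ruby -e 'Dir.glob("./**/*.rb").sort.each { |f| require File.expand_path(f) }'

    This doesn't answer the file-system question itself, but it removes the dependence on traversal order for the failing case described above.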

    Read the article

  • ReadyBoost in Windows 7

    - by Robert Koritnik
    I bought an SD card today for my photo frame, but when I inserted it into my notebook I saw I could use it for ReadyBoost.
    Some background: I'm a .NET developer, using VMs and developing web applications (and SharePoint). I use an HP notebook with a Core 2 Duo 2GHz, 4GB RAM and a 320GB 7200rpm HD. I simultaneously run:
    - Visual Studio 2010 with some plugins
    - SQL Server
    - Firefox with at least 10 tabs
    - Chrome with about 5 tabs
    - IIS
    - a VM with a Server 2008 machine and SharePoint
    and occasionally also Photoshop and some InDesign as well. So I don't give my machine a break. :D
    Question: if I buy a really fast SDHC card (like a SanDisk 16GB Extreme 30MB/s - is there anything faster?) and use it with Windows 7 ReadyBoost, will I see any performance gain? Is it going to work something like Seagate's Momentus hybrid drive with 4GB of solid-state storage? What could I actually expect if I put this card into my machine? And what would be the recommended size?
    Observations: I guess redirecting the page file to it would speed up the system, and some VMs on it would probably run faster as well, because they could run in parallel to the HD-hosted system, I guess. Am I right or wrong?

    Read the article

  • PXE bootable image for terminal server?

    - by HeavenCore
    We have 300 Windows XP machines on cruddy old hardware across the company. With extended support for XP ending April next year, we're looking into our options. A couple of options:
    1. Replace the 300 PCs with full Windows 7 PCs (£100k+?) - no use of terminal server (our current model).
    2. Replace the 300 PCs with off-the-shelf thin clients and make use of our terminal server - cheaper clients, but Terminal Server CALs required?
    3. Keep the 300 PCs, replace Windows XP with a Linux thin client capable of connecting to our terminal server - no hardware costs, just Terminal Server CALs required?
    4. Keep the 300 PCs - remove the hard drives and make use of a PXE-bootable "thin client" to connect to our terminal server.
    If we were to choose option 4, what are the options out there? Are there any official PXE-bootable thin clients for terminal server? If so, what are the licence requirements? Are there options we haven't considered? There must be lots of companies out there in this situation - curious what the current trend is for this problem?
    Edit: Option 5 - create a bootable Windows PE image with RDP auto-start and use that as a "thin client" for our terminal server - is Windows PE licence-free in such a model?
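
    As a rough illustration of what option 5 usually looks like in practice (the server name is a placeholder, the RDP client binaries are not in WinPE by default and would have to be added to the image, and the licensing question still stands): the PE image's startnet.cmd just brings the network up and launches the RDP client full-screen against the terminal server:

        rem startnet.cmd inside the WinPE image
        wpeinit
        rem ts01.example.com is a placeholder for your terminal server
        mstsc /v:ts01.example.com /f
        rem when the RDP session ends, put the client back into a known state
        wpeutil reboot

    wpeinit initialises networking inside WinPE, mstsc /v:<server> /f opens a full-screen connection, and wpeutil reboot is one way to reset the machine when the user logs off.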

    Read the article

  • Windows file locks allowing multiple users to write to open file over network

    - by JPbuntu
    I have 6 Windows computers (XP, Vista, 7) that need to access a Samba share (Ubuntu 12.04). I am trying to make it so only one client can open a file at a given time. I thought this was pretty standard file-locking behaviour, but I can't get it to work. As it is right now, a file can be open by two users, and changed and saved by either one of them; the last file saved overwrites whatever changes the other user made. At first I thought this was a Samba configuration problem, but I get the same behaviour even between two Windows machines. So far I have only tested:
    - Windows XP <-> Windows Vista
    - Windows XP <-> Samba share <-> Windows Vista
    and both give the same behaviour. When I tested the Samba configuration, I had set strict locking = yes and got errors logged like this:
        close_remove_share_mode: Could not get share mode lock for file _prod/part_number_list_COPY.xlsx
    Eventually all of the files are going to be moved onto the Samba share, so that is the configuration I am most concerned about fixing. Any ideas? Thanks in advance.
    EDIT: I tested an Excel file again, and it is now working properly in both cases mentioned above; I am also no longer getting the error above. I don't know what happened - perhaps a restart fixed it? (It also works with strict locking = no.) I still need to find a solution for the CAD/CAM files we use, though. The software is Vector and it does not seem to use file locks. Is there any software I can use to manage these files, so two people can't open/edit them at a time? Maybe a Windows application that forces file locks? Or a dirt-simple version control system? (The only ones I have seen are too complicated for our needs.)
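
    For what it's worth, the locking-related knobs on the Samba side live in the share definition in /etc/samba/smb.conf; the values below are only illustrative settings to experiment with, not a recommendation, and they cannot add locking to an application (like the CAD/CAM one) that never requests locks itself:

        [prod]
            path = /srv/prod
            read only = no
            # honour Windows share modes / byte-range locks strictly
            strict locking = yes
            locking = yes
            # oplocks let clients cache files locally; try disabling them if stale caches are suspected
            oplocks = no
            level2 oplocks = no

    After changing these, reload or restart smbd so it re-reads smb.conf.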

    Read the article

  • Dual monitor support over RDP 7 to Windows 7 on ESXi

    - by rphilli5
    I am trying to RDP from a Windows 7 Professional dual-monitor physical machine to a Windows 7 Professional VM hosted on ESXi 4.0. I can get the spanning option to work across both monitors, but I have tried three different methods of connecting and have not been able to use true multiple monitors. At different times, I tried checking the "Use all my monitors" option, the command line mstsc /multimon, and adding the line use multimon:i:1 to the .rdp file. None of these worked. Any ideas? The physical machine can connect to other Windows 7 physical machines with true multi-monitor access. I also have the same issue when going from a 32-bit RC1 machine to Windows 7 Professional x64, but not when going in the reverse direction. Here's the .rdp:

        screen mode id:i:2
        use multimon:i:1
        desktopwidth:i:1440
        desktopheight:i:900
        session bpp:i:16
        winposstr:s:0,1,341,118,1139,568
        compression:i:1
        keyboardhook:i:2
        audiocapturemode:i:0
        videoplaybackmode:i:1
        connection type:i:1
        displayconnectionbar:i:1
        disable wallpaper:i:1
        allow font smoothing:i:0
        allow desktop composition:i:0
        disable full window drag:i:1
        disable menu anims:i:1
        disable themes:i:1
        disable cursor setting:i:0
        bitmapcachepersistenable:i:1
        full address:s:192.168.1.5
        audiomode:i:0
        redirectprinters:i:1
        redirectcomports:i:0
        redirectsmartcards:i:1
        redirectclipboard:i:1
        redirectposdevices:i:0
        redirectdirectx:i:1
        autoreconnection enabled:i:1
        authentication level:i:2
        prompt for credentials:i:0
        negotiate security layer:i:1
        remoteapplicationmode:i:0
        alternate shell:s:
        shell working directory:s:
        gatewayhostname:s:
        gatewayusagemethod:i:4
        gatewaycredentialssource:i:4
        gatewayprofileusagemethod:i:0
        promptcredentialonce:i:1
        use redirection server name:i:0
        drivestoredirect:s:

    Read the article

  • Odd IIS FTP Failure

    - by Monkey Boson
    We're running a script on our production box that zips up our database and FTPs it to a backup box every night. Our production box runs Red Hat Enterprise Linux 5; our backup box runs Windows XP Pro / IIS 5.1. Both machines are on the same VLAN (not sure if this is important). The backup file usually clocks in at around 3GB. Every now and again (~5% of the time), the backup script fails. The shell script on the "client side" - which looks at return codes - never identifies any problem, since ftp always returns 0. On the "server side", IIS writes out a log that looks like this:

        #Software: Microsoft Internet Information Services 5.1
        #Version: 1.0
        #Date: 2009-08-08 07:04:25
        #Fields: time c-ip cs-method cs-uri-stem sc-status sc-win32-status
        07:04:25 192.168.111.235 [15]USER backup 331 0
        07:04:25 192.168.111.235 [15]PASS - 230 0
        07:05:54 192.168.111.235 [15]created backup_20090808.zip 426 10035
        07:06:16 192.168.111.235 [15]QUIT - 426 0

    Now, I know that 426 means "Connection closed, transfer aborted", which is sort of a catch-all for "IIS was not happy". The real puzzler is the Win32 status code: 10035 (WSAEWOULDBLOCK - resource temporarily unavailable). My understanding is that this code is normal when using non-blocking socket calls, which would almost certainly be used by any FTP server implementation. My first guess, that it might be a timeout issue, doesn't make sense, since we're only talking about a few minutes here and the timeout was left at the default 900 s. Does anybody have any ideas about what is causing this problem, and how it may be fixed? Thanks!

    Read the article

  • Setting up Squid -> VPN connection

    - by Nedlinin
    I recently purchased a VPS and want to use it as a VPN server; however, it has bandwidth limitations. Since I already have a local Squid proxy caching things for me, I figured I could have users connect to the proxy and have the proxy connect through the VPN. Then, when someone hits the web, Squid will serve the object from cache if available and, if not, it will use the VPN to download it. My issue is that I have no idea how to set this up - essentially I want Machine -> Squid -> VPN. The VPN is running on Ubuntu Server with pptpd; Squid is running on a local Arch Linux box. Squid and the VPN are both working perfectly independently. Any help on how to have Squid push its traffic through the VPN would be greatly appreciated!
    Also: I don't actually want to use the VPN for all traffic, otherwise I'd just connect my router to the VPN and be happy. I only want to use it for web traffic from specific machines on the network.
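
    One common way to do the Squid -> VPN leg (sketched here from memory, not tested against this exact setup) is policy routing on the Squid box: mark packets generated by the Squid process and send marked traffic out over the PPTP interface instead of the default route. This assumes the tunnel comes up as ppp0 and Squid runs as the user proxy - adjust both to your system:

        # Route anything marked "1" via a dedicated routing table that defaults to the VPN
        ip route add default dev ppp0 table 100
        ip rule add fwmark 1 table 100

        # Mark outbound packets created by the Squid process (matched on its uid)
        iptables -t mangle -A OUTPUT -m owner --uid-owner proxy -j MARK --set-mark 1

        # NAT the marked traffic out of the tunnel
        iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE

    With something like this in place, clients keep talking to Squid on the LAN as before, and only Squid's own cache-miss fetches traverse the VPN.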

    Read the article

  • "Run As Administrator" on program right click failing and not launching program

    - by GONeale
    This problem lies within a relatively fresh x64 Windows 7 install ~4 weeks, but is also a problem I have seen on Windows Vista machines (x86 versions). Since the other day, any programs attempted to be launched via right clicking on a shortcut (.lnk)'s context menu and pressing - "Run As Administrator" for instance, in the Quick Launch/Jump List in Windows 7 has failed, screen has not dimmed, no UAC popup. In fact the program does not even load. There is no way around this unless I use the shortcut version from "All Programs" which appears to work, very strange? I have performed no major software installs, nothing out of the ordinary. Has anybody encountered this or know what would be causing it? Here's an example of somebody else experiencing this problem in Vista with no solution: http://www.vistax64.com/vista-general/131918-strange-run-administrator-problem.html and I believe this problem is related, I also cannot right click - "Manage" on my computer): http://windows7forums.com/windows-7-support/5501-run-administrator-broken.html I am running the latest version of Avira AntiVir Virus Scanner and pretty concious of what I download, I don't think it is a virus, nor do I believe it is due to the RC Version of Windows 7, because I have seen the problem across multiple Operating Systems versions. Thanks guys.

    Read the article

  • How To Set Up A Load-Balanced High-Availability Apache Cluster On Windows

    - by bReAd
    I am setting up a two-node Apache web server cluster that provides high availability. In front of the Apache cluster we create a load balancer that splits incoming requests between the two Apache nodes. Because we do not want the load balancer to become another single point of failure, we must provide high availability for the load balancer too. Therefore our load balancer will in fact consist of two load balancer nodes that monitor each other using heartbeat; if one load balancer fails, the other takes over silently. The following setup is proposed:
    - Apache node 1: webserver1.example.com (webserver1) - IP address: 192.168.0.101; Apache document root: /var/www
    - Apache node 2: webserver2.example.com (webserver2) - IP address: 192.168.0.102; Apache document root: /var/www
    - Load balancer node 1: loadb1.example.com (loadb1) - IP address: 192.168.0.103
    - Load balancer node 2: loadb2.example.com (loadb2) - IP address: 192.168.0.104
    - Virtual IP address: 192.168.0.105 (used for incoming requests)
    Currently there are many solutions for Linux machines, but I haven't found any for Windows despite searching for a long time. How do I create the virtual IP in Windows, perform the monitoring, and make the load balancer listen on the virtual IP address?

    Read the article

  • Rack layout for future growth

    - by bleything
    We're getting ready to move to a new colo facility and I'm designing the rack layout. While we have a full rack, we only have 12U worth of hardware right now:

        1x 1U switch
        7x 1U servers
        1x 2U server
        1x 2U disk shelf

    The colo facility requires us to front-mount the switch and use a 1U brush strip, so we'll be using a total of 13U of space. Regarding growth, I'm reasonably sure we'll be adding another 4U in servers, 1-2U of network gear, and 2-4U of storage in the mid-term. Specific questions I'm hoping to get help with:
    - where should I mount the switch? the LEDs are on top...
    - should I group the servers by function with space for adding new machines?
    - as an alternative, should I group servers based on whether they are production or staging?
    - where in the rack should I start? in the middle? at the top? at the bottom? equally spaced?
    Here's a silly little ASCII diagram of what I'm thinking right now. Please feel free to tear my design apart, I've really no idea what I'm doing :) Any advice is very welcome.
    edit: to be clear, the colo is providing redundant power with UPS and generator, so that's why there's no power gear in the plan, except for the 0U PDU that I didn't diagram.

        42 | -- switch ----------------------
        41 | -- brush strip -----------------
        40 | ~~ reserved for second switch ~~
        39 | ~~ reserved for firewall ~~~~~~~
        38 |
        37 | -- admin01 ---------------------
        36 |
        35 | -- vm01 ------------------------
        34 | -- vm02 ------------------------
        33 | ~~ reserved for vm03 ~~~~~~~~~~~
        32 | ~~ reserved for vm04 ~~~~~~~~~~~
        31 | ~~ reserved for vm05 ~~~~~~~~~~~
        30 |
        29 | -- web01 -----------------------
        28 | -- web02 -----------------------
        27 | ~~ reserved for web03 ~~~~~~~~~~
        26 | ~~ reserved for web04 ~~~~~~~~~~
        25 |
        24 |
        23 |
        22 |
        21 |
        20 |
        19 |
        18 |
        17 |
        16 | -- db01 ------------------------
        15 | +- disks ----------------------+
        14 | +------------------------------+
        13 | ~~ reserved for more ~~~~~~~~~~~
        12 | ~~ db01 disks ~~~~~~~~~~~~~~~~~~
        11 |
        10 | +- db02 -----------------------+
         9 | +------------------------------+
         8 | ~~ reserved for db02 ~~~~~~~~~~~
         7 | ~~ disks ~~~~~~~~~~~~~~~~~~~~~~~
         6 | ~~ reserved for more ~~~~~~~~~~~
         5 | ~~ db02 disks ~~~~~~~~~~~~~~~~~~
         4 |
         3 |
         2 |
         1 |

    Read the article

  • Rename a Network Printer in Win 7

    - by Alex
    This seems to be a common question, but the answers regularly miss the point. I have a server. The server has some printers connected. The server has all drivers for x86 & x64 OSes, plus all defaults set, and it also manages the print queue. I have many workstations that all need to use the printers, and all need to have the drivers, print queue and defaults propagated from the server. Now... when I add the printers on the workstations, I get: "ABC Printer on SERVER123". I need something less long - just "ABC Printer". So how?
    Tip: please don't show me how to change the name of a locally installed printer. I know how to do that - I am specifically interested in shared printers that appear as "ABC Printer on SERVER123".
    Tip: installing the driver with a local port won't cut it, because then I lose the server-propagated defaults and driver updates, and I'd need to run around with driver disks / confuse trembling users with hard things like choosing drivers.
    I am happy with a hack if there is no official way to do this via group policy... I tried looking in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Print\Printers on the workstation machines, but those are only local printers :( I can see the network printer details on the workstations here: HKEY_USERS\[Some GUID]\Printers\Connections, but there is nothing obvious like a description string... Can anyone help with this?

    Read the article

  • DisableCrossAccountCopy not working on some Outlook installs, working on others, both going against Exchange

    - by MikeBaz
    As part of a mail migration project from one Exchange organization to another, we need to be able to prevent users from moving/copying messages between their accounts in each organization. (Yes, users will think this is evil; no, it's not my decision; yes, users will hate us.) Luckily, we thought, Outlook 2010 provides the DisableCrossAccountCopy registry value/policy (cf. http://technet.microsoft.com/en-us/library/ff800883.aspx). (Because you can't do multiple Exchange organizations in a single profile before Outlook 2010, this only matters on Outlook 2010. Yes, I'm ignoring for the sake of this question copy/move to/from the filesystem.) In our test lab, in a test forest with a test Exchange organization, with a second Exchange account added to the profile in either of the "real" Exchange organizations, with the value set to "*", everything works as expected. On a workstation in one of the production domains, however, the setting does not seem to work. We have tried it under HKCU, HKLM, HKCU\Software\Policies, and HKLM\Software\Policies. It simply seems to be ignored. The value was set in the OCT on a test machine, but the OCT (and the ADM/ADMX file) have the wrong type for the value. We have located the value in the registry and removed it everywhere it is found, we think, and put it back in HKCU, but it still isn't taking. At the moment, a clean Outlook install is not an option - even if it was, we at this point would need to know what to do to fix the pushed copy (I didn't push the copy out to thousands of machines, I've just been asked to help clean up the current mess). Thoughts?
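
    For comparison, one way to rule out the OCT's wrong-type problem on a single workstation is to delete every copy of the value and re-create it by hand with reg.exe; the location and REG_MULTI_SZ type below are assumptions on our part (they should match the TechNet article linked above, but verify before relying on them):

        rem remove any copies the OCT may have written with the wrong type (ignore "unable to find" errors)
        reg delete "HKCU\Software\Microsoft\Office\14.0\Outlook" /v DisableCrossAccountCopy /f
        reg delete "HKCU\Software\Policies\Microsoft\Office\14.0\Outlook" /v DisableCrossAccountCopy /f

        rem re-create it as a multi-string with the wildcard entry, then restart Outlook
        reg add "HKCU\Software\Microsoft\Office\14.0\Outlook" /v DisableCrossAccountCopy /t REG_MULTI_SZ /d "*" /f

    If the hand-created value works where the OCT-deployed one did not, the deployment (value type or hive) is the culprit rather than the setting itself.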

    Read the article

  • Mac Mini server (10.6) behind router with FQDN hostname

    - by thechriskelley
    I have a Mac Mini running Mac OS X 10.6.6 Server that will be part of a local network, and I have a static IP from my ISP. I'd like to set up DNS for the Mini properly, with an FQDN as the hostname (example.com). The Mini is behind a router (Apple AirPort Extreme) and is given a private, static IP address. I can't assign it the public static IP directly because it's behind a router doing DHCP/NAT for other machines on the local net. My end goal is for services to resolve to the server properly from outside and inside the local network via example.com (and subdomains like mail.example.com, www.example.com), which will point to the public static IP assigned to the router. Will DNS work and resolve properly (for mail services and other subdomains) if the server has a private IP address but the necessary services are forwarded properly through NAT? I'm open to any (hopefully better) suggestions, as my current setup doesn't seem like the best way. Currently, more hardware or another public static IP is not possible. With the current setup, it seems as though one static IP is not necessary anyway. Thanks in advance for any insight.

    Read the article

  • NFSv4 "Too many levels of symbolic links" error

    - by user1434058
    Both machines are running Ubuntu 12.04.

    On the remote NFSv4 client:

        $ ls /mnt/storage/aaaaaaa_aaa/bbbb/cccc_ccccc

    gives this error:

        ls: reading directory .: Too many levels of symbolic links

    How can I fix this? When the error occurs, ls still starts listing the files, but PHP breaks.

    On the NFSv4 server, in /etc/fstab:

        /mnt/storage /srv/storage none bind 0 0

    and in /etc/exports:

        /srv 192.168.1.0/24(rw,async,insecure,no_subtree_check,crossmnt,fsid=0,no_root_squash)
        /srv/storage 192.168.1.0/24(rw,async,nohide,insecure,no_subtree_check,no_root_squash)

    The error in context:

        root@ds:/mnt/storage/foreign_dbs/imdb/imdb_htmls# ls -l | head
        ls: reading directory .: Too many levels of symbolic links
        total 10302840
        -rw-r--r-- 1 root root 10484 Jul 5 13:56 0019038.gz
        -rw-r--r-- 1 root root 16264 Mar 30 00:31 0259701.gz
        -rw-r--r-- 1 root root 13784 Mar 30 14:20 1000000.gz
        -rw-r--r-- 1 root root 12741 Mar 30 13:04 1000003.gz
        -rw-r--r-- 1 root root 12794 Mar 30 12:40 1000004.gz
        -rw-r--r-- 1 root root 13123 Mar 30 12:07 1000005.gz
        -rw-r--r-- 1 root root 13183 Mar 30 12:04 1000006.gz
        -rw-r--r-- 1 root root 13443 Jul 4 01:16 1000007.gz
        -rw-r--r-- 1 root root 12968 Mar 30 11:05 1000008.gz

    I came across this in PHP as well: scandir would return 1612577.gz and 1612579.gz but skip 1612578.gz, even though the file types and properties are identical on them. This only happens on the NFS client; it works 100% on the server.

    Read the article
