Search Results

Search found 5643 results on 226 pages for 'machines'.

Page 29 of 226

  • faking NAT with a VMware distributed switch across multiple hosts

    - by romant
    I need to set up NAT for certain machines within the network, and I wish to do it with a dvSwitch, which seems the logical way of attacking the problem since this scenario has just under 30 hosts. To give the NAT'ed VMs access to the 'real' network, I am providing a 'router' VM, which has access to the WAN/outside network and also acts as the DHCP server for the NAT'ed machines. Problem space: when the machines connected to the NAT interface and the router are on the same host, they get an IP from the router VM and work perfectly (routed outside). Unfortunately, machines on other hosts that are connected to the dvSwitch do not get an IP, and tcpdump shows no network data getting through across the hosts within the dvSwitch. Has anyone achieved a NAT solution using a dvSwitch before that they could share? Thank you. EDIT: Including the diagram.

    Read the article

  • How can one machine ping another when the reverse ping doesn't work?

    - by SteveC
    I've got two VMware Workstation virtual machines running. Virtual machine A can ping the host laptop most of the time and the other real machines on my home network all of the time, but it gets a "request timed out" for virtual machine B. Virtual machine B can ping the host laptop most of the time, and the other machines, both real and virtual machine A, all of the time. The only difference I know of is that virtual machine B has been joined to my work's domain, whereas virtual machine A is still in workgroup mode. Can anyone explain how/why this is occurring?

    Read the article

  • How do I add a VMware ESXi Host to Microsoft Virtual Machine Manager?

    - by user63250
    I am trying to manage virtual machines running on a VMware ESXi host using Microsoft System Center Virtual Machine Manager. I was able to add the ESXi machine using the "Add VMware VirtualCenter server" option, but I can't access any of the VMs on the datastore associated with this ESXi server. The datastore of the ESXi box shows up with the correct name, but it won't let me see any of the VMs that have already been created; I get "There are no virtual machines on this host." Because I couldn't get any of the existing virtual machines to show up, I tried creating some new ones. When using VMM to connect to ESXi and create new VMs, I get the following error messages in the "rating explanation" section: "The virtualization software on the selected host does not support virtual hard disks on an IDE bus" and "The virtualization software on the host XXXXXX does not support the creation of dynamic virtual hard disks." Any ideas on why I can't manage existing machines and why I can't create new ones? The existing machines were created in vSphere. I should note that the ESXi server and the server running SCVMM are both on the same domain. I should also note that although the ESXi box has been added as a VirtualCenter server, when I try to add it through the "Add Host" option, I get an error message saying "Virtual Machine Manager cannot complete the VirtualCenter action on server ESXi because of the following error: The operation is not supported on the object."

    Read the article

  • Hyper-V vss-writer not making current copies

    - by Martinnj
    I'm using DiskShadow to back up live Hyper-V machines on a Windows 2008 server. The backup consists of three scripts: the first creates the shadow copies and exposes them, the second uses robocopy to copy them to a remote location, and the third unexposes the shadow copies again. The first script, the one that runs correctly but fails to do what it's supposed to:

        # DiskShadow script file to back up VMs from a Hyper-V host
        # First, delete any shadow copies of the drives. System drives need to be included.
        Delete Shadows volume C:
        Delete Shadows volume D:
        Delete Shadows volume E:
        # Ensure that shadow copies will persist after DiskShadow has run
        set context persistent
        # make sure the path already exists
        set verbose on
        begin backup
        add volume D: alias VirtualDisk
        add volume C: alias SystemDrive
        # verify the "Microsoft Hyper-V VSS Writer" will be included in the snapshot
        # NOTE: the writer GUID is exclusive to this install/machine and must be changed on other machines!
        writer verify {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
        create
        end backup
        # The backup is exposed as drive X: - make sure drive letter X is not in use
        Expose %VirtualDisk% X:
        Exit

    The next script is just a robocopy and then an unexpose. Now, when I run the above script, I get no errors from it, except that the "BITS" writer has been excluded because none of its components are included. That's okay, because I really only need the Hyper-V writer. I also double-checked the GUID for the writer; it's correct. While the Hyper-V writer is active, two things happen on the guest machines: the Debian/Linux machine goes to a saved state and restores when done, all fine, while the Windows guests report "creating VSS snapshot sets" or something similar. Then X: gets exposed and I can copy the .vhd files over. The problem is that, for some reason, the VHD files I copy over seem to be old copies; they are missing files, users and updates that exist on the actual machines. I also tried putting the machines into a saved state manually, which didn't change the outcome. I hope someone here has an idea of how to solve this.

    Read the article

  • Windows VPN for remote site connection drawbacks

    - by Damo
    I'm looking for some thoughts on a particular way of setting up an estate of machines. We have a requirement to install machines into unmanned, remote locations. These machines will auto-login and perform tasks controlled from a central server. In order to manage patching, AV, updates etc., I want these machines to be joined to a dedicated domain for this estate. Some of the locations will only have 3G connectivity (via other hardware); others will be located on customer premises in internal networks. The central server (ours) and the domain controller will be on a public WAN. I see two ways of facilitating this: install a router at each location and have a site-to-site VPN between the remote device and the data centre where the servers are located, or have the remote machine dial up and authenticate via a Windows VPN connection to the DC via RAS. Option one is more costly to set up and has a higher operational cost; it also offers better diagnostics if the remote PC goes down. Option two works well but is solely dependent on the VPN connection being made before any communication can reach the remote machine. In a simple test, I got a Windows 7 machine to dial a VPN prior to authentication to the domain, then automatically log in to the machine using domain credentials. If the VPN connection drops, it redials. I can also create a timed task to auto-connect every hour in case of other issues, as sketched below. I'd like to know: why (if at all) is operating a remote network of devices located in various out-of-band locations in this way a bad idea? Consider 300-400 remote machines, all at different sites. I'd rather have 400 VPN connections to a 2008 server than 400 routers, but I'd like to know other opinions on this.
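
    A minimal sketch of such a redial watchdog, assuming the VPN is defined as a Windows phonebook entry named "EstateVPN" (the entry name and credentials here are hypothetical) and relying on the built-in rasdial command:

        import subprocess
        import time

        VPN_ENTRY = "EstateVPN"   # hypothetical phonebook entry name
        CHECK_INTERVAL = 3600     # re-check hourly, as described above

        def vpn_is_up() -> bool:
            # "rasdial" with no arguments lists currently connected entries
            out = subprocess.run(["rasdial"], capture_output=True, text=True).stdout
            return VPN_ENTRY in out

        def redial() -> None:
            # credentials would normally come from a protected store, not literals
            subprocess.run(["rasdial", VPN_ENTRY, r"ESTATE\svc-remote", "secret"])

        if __name__ == "__main__":
            while True:
                if not vpn_is_up():
                    redial()
                time.sleep(CHECK_INTERVAL)

    The same loop could equally be a scheduled task; the point is only that the redial logic is simple enough to run unattended.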

    Read the article

  • Defragging the host OS of VMware Workstation

    - by JackLocke
    Hi all, I want to ask something that has been puzzling me for the last few days. I will try to explain my problem as clearly as I can. I have VMware Workstation installed on my machine, and I use one separate 100 GB drive which stores all of my virtual machines, nothing else. Last week I was playing with a defragmentation tool called "Smart Defrag", which showed me in its analysis report that the drive where I am currently storing all of my virtual machines has more than 80% fragmentation! Now my question is: what will be the effect on my guest/VM performance if I defrag my host machine? I mean, the host machine is essentially storing those virtual machines but doesn't have any direct access to whatever is stored inside them, so defragging the host should not cause any problem. But before proceeding, I want to hear from other people who may have encountered the same problem. I will really appreciate any help. BTW, I am using Windows 7 as the host, and the guest machines are Windows 2008, Windows 2003 and Ubuntu 10.04. Thanks, Jack

    Read the article

  • College network - can I point non-domain student computers to our SUS server?

    - by Joel Coel
    Since I started here 3 months ago, one of the things that's really bothered me about the way this network is set up is something that shows up on the daily bandwidth consumption report. I get a list of top-visited sites by hits and by size, and invariably the top site (to the point that it's bigger than all the other top sites combined) is au.download.windowsupdate.com. We're pulling in ~30 GB/day in Windows updates. This is every day, not just after a patch Tuesday; after a patch day, it jumps closer to 40 GB for a couple of days. The key here is that almost none of it comes from machines that I'm responsible for. My machines are for the most part fully patched, and when they're not, they pull from a SUS server, so new updates are downloaded only once. It used to be closer to 50 GB/day because most of the machines in our computer labs use DeepFreeze and weren't applying updates correctly, but that's fixed now. So the problem is definitely student-owned machines in the dorms, some of which are re-downloading the same updates in the background each day, over and over. I'd love to have these machines start pulling from our SUS server; then, even if they never actually install the updates, at least they're not leeching bandwidth from our public internet connection. Any ideas on how to resolve the situation?
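
    For machines you can get consent to touch, pointing the Windows Update agent at an internal server comes down to a handful of well-known policy registry values. A minimal sketch (the server URL is a placeholder for the internal SUS/WSUS host):

        import winreg

        WSUS_URL = "http://sus.campus.example.edu"  # placeholder internal SUS server

        def point_at_wsus() -> None:
            base = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"
            with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, base) as wu:
                winreg.SetValueEx(wu, "WUServer", 0, winreg.REG_SZ, WSUS_URL)
                winreg.SetValueEx(wu, "WUStatusServer", 0, winreg.REG_SZ, WSUS_URL)
            with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, base + r"\AU") as au:
                # tell the Automatic Updates agent to use the server set above
                winreg.SetValueEx(au, "UseWUServer", 0, winreg.REG_DWORD, 1)

        if __name__ == "__main__":
            point_at_wsus()

    Whether student-owned machines will accept this is a policy question more than a technical one; a captive-portal opt-in that applies these values is one way it has been handled.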

    Read the article

  • deploying AV via GPO only to workstations

    - by jeremy
    We have a small (100-machine) Windows domain running Server 2008 R2, and we use Symantec Endpoint Protection 12.1. I want GPO to deploy the AV software to client machines automatically, but only to client workstations, not to servers, which run different software. I've set it up before using a GPO linked to the domain mycompany.local and it works, but it deploys the AV software to ALL machines on the domain, including my servers. I could create an OU in Active Directory for servers, and perhaps one for client machines too, but I'd rather not have to move new domain members from the default Computers container into a different OU. How can I use GPO to deploy this AV software only to workstations on our network, and not to servers?
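
    One approach that avoids moving computer objects at all is a WMI filter on the GPO: Win32_OperatingSystem exposes a ProductType of 1 for workstations, 2 for domain controllers and 3 for servers, so a filter of SELECT * FROM Win32_OperatingSystem WHERE ProductType = "1" limits the policy to workstations regardless of OU. A small sketch of what any given machine would report to that filter:

        import subprocess

        # ProductType: 1 = workstation, 2 = domain controller, 3 = server
        WORKSTATION = "1"

        def product_type() -> str:
            # the same value the WMI filter evaluates on each machine
            out = subprocess.run(
                ["wmic", "os", "get", "ProductType", "/value"],
                capture_output=True, text=True,
            ).stdout
            for line in out.splitlines():
                if line.startswith("ProductType="):
                    return line.split("=", 1)[1].strip()
            return ""

        if __name__ == "__main__":
            kind = product_type()
            print("workstation" if kind == WORKSTATION
                  else f"server-class (ProductType={kind})")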

    Read the article

  • How can I keep SSH's known_hosts up to date (semi-securely)?

    - by Chas. Owens
    Just to get this out in front so I am not told not to do this: the machines in question are all on a local network with little to no internet access (they aren't even well connected to the corporate network); everyone who has the ability to set up a man-in-the-middle attack already has root on the machines; and the machines are reinstalled as part of QA procedures, so having new host keys is important (we need to see how the other machines react). I am only trying to make my machine nicer to use. I do a lot of reinstalls on machines, which changes their host keys. This necessitates going into ~/.ssh/known_hosts on my machine, blowing away the old key and adding the new one. This is a massive pain in the tuckus, so I have started considering ways to automate it. I don't want to just blindly accept any host key, so patching OpenSSH to ignore host keys is out. I have considered creating a wrapper around the ssh command that will detect the error coming back from ssh and present me with a prompt to delete the old key or quit. I have also considered creating a daemon that would fetch the latest host key from each machine on a whitelist (there are about twenty machines that are being constantly reinstalled) and replace the old host key in known_hosts. How would you automate this process?
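
    A minimal sketch of the whitelist idea, using only stock OpenSSH tooling (ssh-keygen -R to drop the stale entry, ssh-keyscan to fetch the key the host currently presents); the host names are of course placeholders:

        import subprocess
        from pathlib import Path

        # the ~20 constantly-reinstalled QA machines (hypothetical names)
        WHITELIST = ["qa-01.local", "qa-02.local", "qa-03.local"]
        KNOWN_HOSTS = Path.home() / ".ssh" / "known_hosts"

        def refresh(host: str) -> None:
            # drop whatever key we currently have recorded for this host
            subprocess.run(["ssh-keygen", "-R", host, "-f", str(KNOWN_HOSTS)],
                           capture_output=True)
            # fetch the key the host is presenting right now
            scan = subprocess.run(["ssh-keyscan", "-t", "rsa", host],
                                  capture_output=True, text=True)
            if scan.stdout:
                with KNOWN_HOSTS.open("a") as fh:
                    fh.write(scan.stdout)

        if __name__ == "__main__":
            for host in WHITELIST:
                refresh(host)

    Run from cron, or triggered by the reinstall process itself, this trades first-connection verification for convenience, which matches the threat model laid out above.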

    Read the article

  • SSH Connection Error : No route to host

    - by dewbot
    There are three machines in this scenario: Desktop A ([email protected]), Laptop A ([email protected]) and Machine B ([email protected]). All the machines run Ubuntu 11.04 (Desktop A is a 64-bit one) and have both openssh-server and openssh-client. Now when I try to connect Desktop A to Laptop A, or vice versa, with ssh [email protected], I get an error, port 22: No route to host, in both cases. I own both of these machines. If I try the same commands from my friend's machine, Machine B, I can access both my laptop and my desktop; but if I try to access Machine B from my laptop or my desktop, I get port 22: Connection timed out. I even tried changing the SSH port number in the ssh_config file, but with no success. Note that Laptop A uses a WiFi connection while Desktop A uses an Ethernet connection, and Machine B is on an entirely different network. Laptop A and Desktop A share the router/nano-receiver provided to me by my ISP, so the two machines are connected to one router and can be online at the same time. Here is my ifconfig output for both machines:

    Laptop:

        wlan0  Link encap:Ethernet  HWaddr X:X:X:X:00:bc
               inet addr:1.23.73.111  Bcast:1.23.95.255  Mask:255.255.224.0
               inet6 addr: fe80::219:e3ff:fe04:bc/64 Scope:Link
               UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
               RX packets:108409 errors:0 dropped:0 overruns:0 frame:0
               TX packets:82523 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:1000
               RX bytes:44974080 (44.9 MB)  TX bytes:22973031 (22.9 MB)

    Desktop:

        eth0   Link encap:Ethernet  HWaddr X:X:X:X:c5:78
               inet addr:1.23.68.209  Bcast:1.23.95.255  Mask:255.255.224.0
               inet6 addr: fe80::227:eff:fe04:c578/64 Scope:Link
               UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
               RX packets:10380 errors:0 dropped:0 overruns:0 frame:0
               TX packets:4509 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:1000
               RX bytes:1790366 (1.7 MB)  TX bytes:852877 (852.8 KB)
               Interrupt:43 Base address:0x2000

    Read the article

  • One bigger Virtual Machine distributed across many Nodes [on hold]

    - by flyer
    I just set up virtual machines on one piece of hardware with Vagrant (this is just a test environment, not production!). I want to use Puppet to configure them, and then try to set up OpenStack. I am not sure if I understand how this should look in the end. Is it possible to have the architecture below with OpenStack, where I will run one virtual machine with Linux?

        -------------------------------
        |              VM             |
        -------------------------------
        |   NOVA  |   NOVA  |  NOVA   |
        -------------------------------
        |          OpenStack          |
        -------------------------------
        |   Node  |   Node  |  Node   |
        -------------------------------

    (In my environment the nodes are just virtual machines, but my question concerns separate hardware nodes.) After some comments: is it a language barrier, or? This is only my 'virtual environment'. If we imagine these virtual machines are separate nodes (e.g. each has 4 cores), the OpenStack layer is still the same, right? Can I run one virtual machine across many nodes with OpenStack? Is it possible to aggregate the computing power of separate machines into one virtual distributed operating system?

    Read the article

  • Remote SCCM deployment of Operating Systems

    - by Decad
    I am currently using SCCM 2007 for our software deployment and PXE. This summer I have been tasked with upgrading 2000+ machines from Windows XP to Windows 7. My plan is to use SCCM to advertise the Windows 7 task sequence to the machines. My question is: what is the best way to automate the deployment? Can I make SCCM turn a machine on and have it run an advertised task sequence without my having to be in the same room as the machines?
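
    SCCM 2007 can be configured to send Wake-on-LAN packets for mandatory advertisements, provided the targets' BIOS/NIC have WOL enabled. For reference, the "magic packet" involved is trivial; a minimal sketch (the MAC address and broadcast address are placeholders):

        import socket

        def wake(mac: str, broadcast: str = "255.255.255.255") -> None:
            # magic packet: 6 bytes of 0xFF followed by the MAC repeated 16 times
            raw = mac.replace(":", "").replace("-", "")
            payload = bytes.fromhex("FF" * 6 + raw * 16)
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
                s.sendto(payload, (broadcast, 9))  # UDP port 9 by convention

        if __name__ == "__main__":
            wake("00:11:22:33:44:55")  # hypothetical target MAC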

    Read the article

  • Can Windows Home Server be used on an active directory domain?

    - by Parvenu74
    The situation: an Active Directory network with a few dozen machines. Most of the machines have the same vanilla image applied to them, so if there were a hard drive failure, getting the machine back to the standard network image would be quick and easy. However, there is a handful of machines (eight) with rather unique setups (accounting, developers, the "artist" with CS4 and such). For these machines we would like to use Windows Home Server, since its backups are automatic and recovery from a machine failure is quite painless. The question, though, is whether or not WHS can be used on an AD network. If not, what "set it and forget it" backup/imaging product is recommended for this scenario?

    Read the article

  • Passwords longer than 8 characters in Red Hat 4

    - by Oz123
    I have some machines with RHEL 4 (Nahant Update 6). Oddly, I found that password characters beyond the eighth are not significant: if I had the password 1ABCDEa! and changed it to 1ABCDEa!1ABCDEa!, I could still log in to the machine with the old password. These machines use NIS authentication, but other machines with Red Hat 5, which use the same NIS server, allow login ONLY with the NEW (16-character) password!
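
    This is the classic symptom of passwords hashed with the traditional DES-based crypt(), which only looks at the first 8 characters, while the RHEL 5 boxes use MD5 (or stronger) hashes. A quick illustration, assuming a system whose crypt() still supports DES (a two-character salt selects it):

        import crypt

        # a two-character salt selects legacy DES crypt, which truncates at 8 chars
        salt = "ab"
        print(crypt.crypt("1ABCDEa!", salt))
        print(crypt.crypt("1ABCDEa!1ABCDEa!", salt))  # identical: extras ignored

        # an MD5 salt ($1$...) hashes the whole password, so these differ
        md5_salt = "$1$somesalt"
        print(crypt.crypt("1ABCDEa!", md5_salt)
              == crypt.crypt("1ABCDEa!1ABCDEa!", md5_salt))  # False

    The usual fix is to switch the password hashing on the old systems (and in the NIS maps) from legacy crypt to MD5 or stronger.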

    Read the article

  • Is it possible to cause artificial network packet loss or latency?

    - by nbolton
    I'm trying to reproduce some issues from a deployed application where the MSSQL server and client run on two separate machines. I think there may be network issues between the two machines, so I'd like to reproduce these conditions on two Hyper-V virtual machines (on the same virtual server). Of course, the network between these virtual machines is "local", so it's actually far from the conditions in a live environment. Is there a program I can run on either virtual machine which will degrade the network performance? Or any other workarounds? For example, one way to reproduce the conditions might be to run the VMs on separate Hyper-V servers in geographically dispersed locations (so the SQL traffic goes over a VPN or something), but that is a little long-winded, I think. There must be a simpler way.
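
    One low-effort option, assuming you can route the SQL traffic through an additional Linux VM acting as a router between the two Windows guests, is the kernel's netem queueing discipline, driven by tc. A sketch of the commands, wrapped in Python only for convenience; the interface name is an assumption:

        import subprocess

        IFACE = "eth0"  # interface carrying the SQL traffic on the Linux router VM

        def degrade(delay_ms=100, jitter_ms=20, loss_pct=5.0) -> None:
            # netem adds artificial latency, jitter and packet loss on egress
            subprocess.run(["tc", "qdisc", "add", "dev", IFACE, "root", "netem",
                            "delay", f"{delay_ms}ms", f"{jitter_ms}ms",
                            "loss", f"{loss_pct}%"], check=True)

        def restore() -> None:
            subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"], check=True)

        if __name__ == "__main__":
            degrade()   # requires root; restore() removes the impairment again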

    Read the article

  • Pushing Large Files to 500+ Computers [closed]

    - by WMIF
    I work with a team to manage 500-600 rented Windows 7 computers for an annual conference. We have a large amount of data that needs to be synced to these computers, up to 1 TiB. The computers are divided into rooms and connected through unmanaged gigabit switches. We prepare these computers ahead of time with the Windows installation and configuration, plus any files that are available to us before we send the base image in for replication by the rental company. Every year, presenters approach us on site with up to gigabytes of data that need to be pushed to the room they will be presenting in. Sometimes they only have a few small files, such as a slide PDF, but sometimes it's much larger, up to 5 GiB. Our current strategy for pushing these files is batch scripts and RoboCopy. For the large pushes, we actually use a BitTorrent client to generate a torrent file, and then use the batch/RoboCopy push to drop the torrent into a folder on the remote machines that is monitored by an installed BT client. Often this data needs to be pushed immediately, within a small time window. We have several machines in a control room, identical to the machines on the floor, that we use for these pushes. We occasionally need to execute a program on the remote machines, and currently use batch and PsExec for that. We would love to respond to these last-minute pushes with "sorry, your own fault", but it won't happen. The BT method has given us a much faster response time, but the whole batch process can get messy when multiple jobs are being pushed. We use Enterprise Ghost for other processes, and it doesn't work well at this scale, plus it is quite expensive for a once-a-year task like this. EDIT: There is a hard requirement that the remote machines on the floor run Windows; the control machines have no hard OS requirement. I would really like to stay away from multicast because of complications with upstream routers. Is multicast or BitTorrent the better way to go, or is there another protocol that might work better?
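
    For the small, immediate pushes, the RoboCopy half of this can be parallelised cheaply from the control room. A sketch under obvious assumptions (admin shares reachable, and a room-to-hosts mapping you maintain yourself):

        import subprocess
        from concurrent.futures import ThreadPoolExecutor

        # hypothetical mapping; in practice this comes from your room inventory
        ROOM_HOSTS = {"room101": [f"conf-pc-{n:03d}" for n in range(1, 41)]}
        SRC = r"C:\staging\room101"          # presenter files staged centrally
        DEST_SUBPATH = r"c$\PresenterFiles"  # admin share on each floor machine

        def push(host: str) -> int:
            # /MIR mirrors the staging folder, /MT:16 uses 16 copy threads,
            # /R:1 /W:1 keeps a dead machine from stalling the job with retries
            return subprocess.run(
                ["robocopy", SRC, rf"\\{host}\{DEST_SUBPATH}",
                 "/MIR", "/MT:16", "/R:1", "/W:1"]).returncode

        if __name__ == "__main__":
            hosts = ROOM_HOSTS["room101"]
            with ThreadPoolExecutor(max_workers=20) as pool:
                results = dict(zip(hosts, pool.map(push, hosts)))
            # robocopy exit codes below 8 indicate success
            failed = [h for h, rc in results.items() if rc >= 8]
            print("failed:", failed or "none")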

    Read the article

  • Creating a Linux cluster/cloud to act as a server.

    - by Kavon Farvardin
    I have about four or five machines from the Pentium 3-4 era, and I'm interested in creating a Linux server comprised of these machines. The server's main purposes would be to host several low-to-medium-traffic websites/services (voice and game) and to share terabytes of data on a local network. I could probably throw together one modern computer as a server and call it a day, but I'm interested in using these machines to do it instead. Where would I get started with this cluster/cloud setup?

    Read the article

  • Adding users on Windows machines without AD

    - by guillem
    I have several development machines where I am the administrator. We are using AD in my organization, but it is maintained by an offshore IT group and any request takes a long time. We are currently granting developers access to the development machines manually, which is a bit annoying to maintain, although at least it's fast. We also have a lot of external consultants who need to use those machines for some time. Is there any tool or method to keep a set of users synced on those machines without having to add them to an AD group?
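
    In the absence of AD, the built-in net user/net localgroup commands can be driven from a single shared user list that each machine re-reads periodically. A minimal sketch; the list location, group and password handling are all hypothetical:

        import subprocess
        from pathlib import Path

        USER_LIST = Path(r"\\fileserver\devops\users.txt")  # hypothetical list
        GROUP = "Remote Desktop Users"   # group developers need on each box

        def ensure_user(name: str) -> None:
            # "net user <name>" exits non-zero if the account does not exist
            exists = subprocess.run(["net", "user", name],
                                    capture_output=True).returncode == 0
            if not exists:
                # real deployments would generate per-user passwords
                subprocess.run(["net", "user", name, "TempP@ss1", "/add"],
                               check=True)
            subprocess.run(["net", "localgroup", GROUP, name, "/add"],
                           capture_output=True)  # "already a member" is fine

        if __name__ == "__main__":
            for user in USER_LIST.read_text().split():
                ensure_user(user)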

    Read the article

  • VPN connection over apache mod_proxy

    - by This is it
    Hi. We have several virtual machines which are connected in a private virtual network. Internet access for these machines is provided via a dedicated virtual machine which runs an Apache proxy server (they all use this machine as their proxy). The problem is that from several machines we need to connect to an external VPN server, but it seems that VPN connections don't work over the Apache proxy. Any suggestions on how to enable VPN connections over an Apache proxy (or some other proxy)? Some other solution? Thanks
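
    For this to work at all, the VPN has to be TCP-based and the proxy has to permit the CONNECT method to the VPN port: Apache's mod_proxy_connect only allows CONNECT to the ports listed in AllowCONNECT (443 and 563 by default), and OpenVPN in TCP mode can then be pointed at the proxy with its http-proxy option. A quick way to check whether the proxy will tunnel a given port, with placeholder hostnames:

        import http.client

        PROXY_HOST, PROXY_PORT = "proxy.internal", 3128   # the Apache proxy VM
        VPN_HOST, VPN_PORT = "vpn.example.com", 1194      # external VPN (TCP)

        def can_tunnel() -> bool:
            conn = http.client.HTTPConnection(PROXY_HOST, PROXY_PORT, timeout=10)
            # set_tunnel() makes connect() issue "CONNECT vpn.example.com:1194"
            conn.set_tunnel(VPN_HOST, VPN_PORT)
            try:
                conn.connect()   # raises OSError if the proxy refuses the tunnel
                return True
            except OSError as exc:
                print("proxy refused tunnel:", exc)
                return False
            finally:
                conn.close()

        if __name__ == "__main__":
            print("CONNECT allowed" if can_tunnel() else "CONNECT blocked")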

    Read the article

  • Per Sender Traffic Limit on Cisco 6500

    - by user71557
    Hi all, I have a 6509 with ~1000 user machines in different VLANs. I want to allow 10 server machines to send as much as they can/want, but to limit all client machines in all subnets to a sending rate of 1 Mbps, with no receiving limitation. It is worth noting that all my IP addresses are assigned by a DHCP server, and there are 1000 of them, so I cannot write ACLs for every address separately. Can anyone provide some kind of help, please?

    Read the article

  • Should Production Windows Web Servers (IIS & SQL) be in a domain?

    - by tlianza
    We have a few web servers and a few database servers. To date, they've been standalone machines that are not part of a domain. The web servers don't talk to each other, and the web servers talk to the database servers via SQL authentication. My concerns with putting the machines in a domain together were added complexity (it's one more "thing" running, and doing "things" that could go wrong) and risk (if a domain controller fails, am I now putting other machines at risk?). However, in certain scenarios it does seem convenient for them to be on a domain, sharing credentials. For example, if I want to give the "services" on one machine access to another machine (because Remote Desktop craps out), I need to go in and assign privileges on multiple machines, something that Active Directory and domain accounts are meant to simplify. My question: I'm sure there are things I'm not considering here. Is there a best practice?

    Read the article

  • How many different servers are needed to keep a website running with no downtime? [closed]

    - by Mason Wheeler
    Machines go down. It's a fact of life. They may need to be rebooted for some reason, they may have a hardware failure, or there may be a power outage. So if I wanted to deploy a website with a server backed by a SQL database, putting the whole thing on one server wouldn't be good enough. It obviously needs at least two servers, so that if one goes down, the other can pick up the slack until the first comes back up. Of course, if I have the server software on two machines, either of which could go down, I can't place the database on either of those two machines, because it could go down. So the database needs its own server. But that server can go down too, so I need a backup database server and some sort of replication system to keep it in sync, so the main can fail over to it. So far, that's a bare minimum of 4 machines to keep one website running with a reasonable chance of no downtime (assuming no catastrophic events take place that take down both front-end servers at once or both DB servers at once, and no hacks, DDoS attacks, etc.). Am I missing any other factors, or should I consider 4 servers to be the minimum for running a website with a goal of continuing operation without downtime even when a server goes down?
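
    The intuition behind the 2+2 layout can be made concrete with a back-of-the-envelope availability calculation, assuming (purely for illustration) that each box is independently up 99% of the time:

        # availability of the 2-web + 2-DB topology, assuming independent failures
        single = 0.99                    # assumed uptime of any one server

        pair = 1 - (1 - single) ** 2     # a redundant pair fails only if both fail
        site = pair * pair               # web tier AND database tier must be up

        print(f"redundant pair: {pair:.4%}")   # 99.9900%
        print(f"whole site:     {site:.4%}")   # ~99.9800%

    In practice the tiers share failure modes (power, network, the failover mechanism itself), so real numbers are lower; the point is only that every un-redundant component caps the whole site at its own availability.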

    Read the article

  • White Paper on Analysis Services Tabular Large-scale Solution #ssas #tabular

    - by Marco Russo (SQLBI)
    Since the first beta of Analysis Services 2012, I have worked with many companies designing and implementing solutions based on Analysis Services Tabular. I am glad that Microsoft published a white paper about a case study covering one of these scenarios: An Analysis Services Case Study: Using Tabular Models in a Large-scale Commercial Solution. Alberto Ferrari is the author of the white paper, and many people contributed to it. The final result is a very technical document based on a case study, which provides a level of detail that I don't often see in other case studies (which are usually more marketing-oriented). This white paper has the following structure:

      • Requirements (data model, capacity planning, client tool)
      • Options considered (SQL Server Columnstore Indexes, SSAS Multidimensional, SSAS Tabular)
      • Data model optimizations (memory compression, query performance, scalability)
      • Partitioning and processing strategy for near-real-time latency
      • Hardware selection (NUMA analysis, Azure VM tests)
      • Scalability tests (estimation of maximum users per node)

    If you are in charge of evaluating Tabular as an analytical engine, or if you have to design your solution based on Tabular, this white paper is a must-read. But if you just want to increase your knowledge of Analysis Services, you will also find a lot of useful technical information. That said, my favorite quote from the document is the following one, funny but true:

        […] After several trials, the clear winner was a video gaming machine that one guy on the team used at home. That computer outperformed any available server, running twice as fast as the server-class machines we had in house. At that point, it was clear that the criteria for choosing the server would have to be expanded a bit, simply because it would have been impossible to convince the boss to build a cluster of gaming machines and trust it to serve our customers. But, honestly, if a business has the flexibility to buy gaming machines (assuming the machines can handle capacity), do this. Owen Graupman, inContact

    I want to write a longer discussion about how companies are adopting Tabular in scenarios where it is the hidden engine of a more complex solution (and not the classical "BI system"), because this is more frequent than you might expect (and has several advantages over many alternative approaches).

    Read the article
